Image-to-Image · Diffusers · StableDiffusionImageVariationPipeline · stable-diffusion · stable-diffusion-diffusers
Instructions to use lambda/sd-image-variations-diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use lambda/sd-image-variations-diffusers with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import StableDiffusionImageVariationPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
device = "cuda"
pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambda/sd-image-variations-diffusers",
    torch_dtype=torch.bfloat16,
).to(device)

input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
# This pipeline is conditioned on a CLIP image embedding rather than a text
# prompt, so no prompt argument is passed.
image = pipe(image=input_image).images[0]
```

- Notebooks
- Google Colab
- Kaggle
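Before the pipeline embeds the input, the image goes through CLIP-style preprocessing (square crop, resize, per-channel normalization). As a rough sketch of that step — assuming the usual CLIP normalization constants and a 224×224 input resolution, which this checkpoint's feature extractor may or may not match exactly — in plain NumPy:

```python
import numpy as np

# Assumption: the standard CLIP mean/std and 224x224 resolution; check the
# checkpoint's feature_extractor config for the actual values.
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
CLIP_STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Center-crop an HxWx3 uint8 image to a square, resize to `size`
    (nearest-neighbor, illustrative only), and normalize per channel."""
    h, w, _ = image.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    cropped = image[top:top + side, left:left + side]
    # Nearest-neighbor resize by indexing into the source grid.
    ys = np.arange(size) * side // size
    xs = np.arange(size) * side // size
    resized = cropped[ys][:, xs]
    scaled = resized.astype(np.float32) / 255.0
    return (scaled - CLIP_MEAN) / CLIP_STD  # shape (size, size, 3)

rng = np.random.default_rng(0)
fake = rng.integers(0, 256, size=(300, 400, 3), dtype=np.uint8)
out = preprocess(fake)
print(out.shape)  # (224, 224, 3)
```

In practice the pipeline's bundled feature extractor handles this automatically; the sketch only shows what happens to pixel values before they reach the CLIP image encoder.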
Commit e5b0426 · Parent(s): a2a1398 · Update README.md
README.md CHANGED
```diff
@@ -12,15 +12,19 @@ tags:
 # Stable Diffusion Image Variations Model Card
 
 📣 V2 model released, and blurriness issues fixed! 📣
+
 🧨🎉 Image Variations is now natively supported in 🤗 Diffusers! 🎉🧨
 
+![](alias-montage.jpg)
+
 ## Version 2
 
 This version of Stable Diffusion has been fine-tuned from [CompVis/stable-diffusion-v1-4-original](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) to accept CLIP image embeddings rather than text embeddings. This allows the creation of "image variations" similar to DALL·E 2 using Stable Diffusion. This version of the weights has been ported to Hugging Face Diffusers; using it with the Diffusers library requires the [Lambda Diffusers repo](https://github.com/LambdaLabsML/lambda-diffusers).
 
 This model was trained in two stages and for longer than the original variations model, and it gives better image quality and better CLIP-rated similarity compared to the original version.
 
-
+See training details and a v1 vs. v2 comparison below.
+
 
 ## Example
 
```
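The swap the card describes — conditioning on a CLIP image embedding rather than text embeddings — works because the U-Net's cross-attention layers only see a sequence of context vectors; a single projected image embedding can stand in for the 77-token text sequence. A toy NumPy sketch of that interchangeability (the 77-token and 768-wide sizes are the usual SD v1/CLIP dimensions, used here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 768  # context width of SD v1 cross-attention (illustrative assumption)

def cross_attention(x, context, wq, wk, wv):
    # Latent tokens x attend over whatever conditioning sequence is supplied;
    # the layer never cares whether `context` came from text or an image.
    q, k, v = x @ wq, context @ wk, context @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

wq = rng.standard_normal((d, d)) * 0.02
wk = rng.standard_normal((d, d)) * 0.02
wv = rng.standard_normal((d, d)) * 0.02

latents = rng.standard_normal((64, d))    # 64 latent tokens
text_ctx = rng.standard_normal((77, d))   # CLIP text: 77 token embeddings
image_ctx = rng.standard_normal((1, d))   # CLIP image: a single embedding

out_text = cross_attention(latents, text_ctx, wq, wk, wv)
out_image = cross_attention(latents, image_ctx, wq, wk, wv)
print(out_text.shape, out_image.shape)  # (64, 768) (64, 768)
```

Both conditioning shapes produce identically shaped outputs, which is why fine-tuning (rather than an architecture change) is enough to retarget the model from text to image conditioning.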