
Stable diffusion pipeline #168

Merged 7 commits into main on Aug 14, 2022

Conversation

patil-suraj
Contributor

This PR adds StableDiffusionPipeline

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Aug 12, 2022

The documentation is not available anymore as the PR was closed or merged.

Contributor

@patrickvonplaten patrickvonplaten left a comment


Looks good to me!

@pcuenca
Member

pcuenca commented Aug 12, 2022

Looks good to me! One minor question: since this is only for StableDiffusion, would it make sense to assume the scheduler has no eta and remove that argument and the inspect?

@patrickvonplaten
Contributor

> Looks good to me! One minor question: since this is only for StableDiffusion, would it make sense to assume the scheduler has no eta and remove that argument and the inspect?

+1

Good remark, I think this makes sense! It would make the code cleaner and we could always add it back later.

@patil-suraj
Contributor Author

Good point @pcuenca! I agree, but since the original codebase allows playing with both DDIM and PNDM, I think it is important to keep eta there. I left a more detailed comment in the code about when eta will be used.
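The inspect-based eta handling discussed above can be sketched as follows. This is a minimal, hypothetical illustration, not the PR's actual code: the two scheduler classes and the `build_step_kwargs` helper are stand-ins; only the signature check mirrors the idea that eta is forwarded solely to schedulers (like DDIM) whose step accepts it.

```python
import inspect

# Hypothetical stand-ins for the two schedulers discussed above:
# a DDIM-like step() accepts an eta argument, a PNDM-like one does not.
class DDIMLikeScheduler:
    def step(self, model_output, timestep, sample, eta=0.0):
        return sample

class PNDMLikeScheduler:
    def step(self, model_output, timestep, sample):
        return sample

def build_step_kwargs(scheduler, eta):
    """Pass eta only if the scheduler's step() signature accepts it."""
    accepts_eta = "eta" in inspect.signature(scheduler.step).parameters
    return {"eta": eta} if accepts_eta else {}

print(build_step_kwargs(DDIMLikeScheduler(), 0.3))  # {'eta': 0.3}
print(build_step_kwargs(PNDMLikeScheduler(), 0.3))  # {}
```

This keeps a single pipeline loop working with either scheduler: the argument simply drops out when the scheduler cannot use it.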

@patil-suraj
Contributor Author

@patrickvonplaten @anton-l
Could you take one more look? I updated the pipeline a bit to make it cleaner:

  1. Simplify classifier-free guidance.
  2. Add more comments.
  3. Add type hints and change the ordering of parameters.
  4. Remove unused parameters.

Comment on lines 80 to 82
    # expand the latents if we are doing classifier free guidance
    if do_classifier_free_guidance:
        latents = torch.cat((latents, latents), dim=0)
Contributor Author


since text_embeddings are already expanded, we only need to expand latents here.
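For context, a sketch of the classifier-free guidance step that this doubled batch feeds into. The tensors and values here are toy stand-ins (the real predictions come from the UNet), and `guidance_scale` is an assumed variable name, but the split-and-combine formula is the standard technique being simplified in this PR.

```python
import torch

guidance_scale = 7.5  # illustrative value

# One forward pass on the doubled batch yields both predictions at once;
# chunk(2) splits them back into unconditional and text-conditioned halves.
noise_pred = torch.tensor([[0.1, 0.2], [0.5, 0.8]])  # stacked [uncond, text]
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)

# Classifier-free guidance: push the prediction toward the text-conditioned one.
guided = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
```

Doubling the latents (and the already-expanded text embeddings) is what makes the single-forward-pass trick possible.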

Contributor


That caused a naming problem, but it should be fixed now.


    for t in tqdm(self.scheduler.timesteps):
        # expand the latents if we are doing classifier free guidance
        latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
Contributor


@patil-suraj there was a bug with the naming here -> corrected it
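The naming bug being fixed here is easy to see in a toy sketch (plain Python lists stand in for tensors; all names are illustrative): reassigning `latents` inside the denoising loop doubles the batch on every step, whereas writing the doubled batch to a fresh name leaves `latents` untouched.

```python
latents = ["z1", "z2"]  # toy "batch" of two latents

# Buggy version: latents = torch.cat((latents, latents)) inside the loop.
buggy = list(latents)
for t in range(3):
    buggy = buggy + buggy  # batch silently doubles every iteration

# Fixed version: the doubled batch gets its own name, as in the hunk above.
fixed = list(latents)
for t in range(3):
    latent_model_input = fixed + fixed  # `fixed` itself is never reassigned

print(len(buggy), len(fixed))  # 16 2
```

After three steps the buggy loop has grown the batch from 2 to 16, while the fixed loop keeps 2 latents and only doubles the per-step model input.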

    if torch_device is None:
        torch_device = "cuda" if torch.cuda.is_available() else "cpu"

    if isinstance(prompt, str):
Contributor


@patil-suraj -> made sure a string can be passed as input
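The string-handling fix can be sketched as below. The `normalize_prompt` helper is a hypothetical name for illustration (the real pipeline does this inline): a single string is wrapped into a one-element list so the rest of the code can treat every call as batched.

```python
def normalize_prompt(prompt):
    """Accept a single prompt string or a list of prompts."""
    if isinstance(prompt, str):
        prompt = [prompt]
    elif not isinstance(prompt, list):
        raise ValueError(f"prompt must be a str or a list, got {type(prompt)}")
    return prompt, len(prompt)

prompts, batch_size = normalize_prompt("a photo of an astronaut")
# prompts == ["a photo of an astronaut"], batch_size == 1
```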

    self.text_encoder.to(torch_device)

    # get prompt text embeddings
    text_input = self.tokenizer(prompt, padding=True, truncation=True, return_tensors="pt")
Contributor


@patil-suraj removed the hard-coded max length of 77 here

Contributor

@patrickvonplaten patrickvonplaten left a comment


Merging now! Tested it and it works fine.

@patrickvonplaten patrickvonplaten changed the title add stable diffusion pipeline Stable diffusion pipeline Aug 14, 2022
@patrickvonplaten patrickvonplaten merged commit 5782e03 into main Aug 14, 2022
@kashif kashif deleted the sd-pipeline branch September 16, 2022 15:49
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
* add stable diffusion pipeline

* get rid of multiple if/else

* batch_size is unused

* add type hints

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* fix some bugs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
4 participants