Clarification needed: Token indices sequence length is longer ... during inference #20
Comments
Interesting, I haven't seen that before. Is it happening on one of the calls to …
I'm sorry, I just forgot to paste the Dependencies:
Code (can be run in Colab with a GPU):

```python
from diffusers import StableDiffusionPipeline
from compel import Compel
import torch

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,
    requires_safety_checker=False,
    feature_extractor=None,
    torch_dtype=torch.float16,
).to("cuda")

compel = Compel(
    tokenizer=pipeline.tokenizer,
    text_encoder=pipeline.text_encoder,
    truncate_long_prompts=False,
)

# Deliberately long prompts, well past the 77-token CLIP limit
# (trailing space so the repeats don't run together).
prompt = 25 * "a cat playing with a ball in the forest "
negative_prompt = 25 * "ugly, blurry, out of focus "

prompt_embeds = compel.build_conditioning_tensor(prompt)
negative_prompt_embeds = compel.build_conditioning_tensor(negative_prompt)

# The two conditioning tensors can end up with different sequence lengths,
# so pad them to the same length before passing them to the pipeline.
prompt_embeds, negative_prompt_embeds = compel.pad_conditioning_tensors_to_same_length(
    conditionings=[prompt_embeds, negative_prompt_embeds]
)

images = pipeline(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds).images
```

Similarly, I have …
Is the error happening on one of the calls to …
This happens on …
I spent some time looking into this; it is in fact an expected warning that I don't have any control over. Token sequences longer than 77 tokens are not a supported usage of the Stable Diffusion model (I assume you're using SD) - that it works at all is a quirk.
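For reference, the warning comes from the CLIP tokenizer in transformers (its `model_max_length` for SD 1.x is 77), not from compel or diffusers. A minimal sketch that reproduces just the tokenizer side, assuming the runwayml/stable-diffusion-v1-5 checkpoint is reachable:

```python
# Minimal sketch: the warning originates in the transformers CLIP tokenizer,
# whose model_max_length for Stable Diffusion 1.x is 77 tokens.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="tokenizer"
)

long_prompt = 25 * "a cat playing with a ball in the forest "
token_ids = tokenizer(long_prompt).input_ids  # no truncation -> triggers the warning

print(len(token_ids), "tokens, model_max_length =", tokenizer.model_max_length)
```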
Is the sequence truncated over 77 tokens in Compel? Would a method such as this make sense? @damian0815
@BEpresent you can pass an argument, I think it's …
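Presumably the argument meant here is the `truncate_long_prompts` flag that already appears in the reproduction above; with its default value of `True`, Compel clips the prompt to the 77-token window instead of building a longer conditioning tensor. A sketch under that assumption:

```python
# Assumption: the argument referred to above is Compel's truncate_long_prompts flag
# (it already appears in the reproduction code). With truncation enabled, anything
# past the 77-token CLIP window is dropped instead of producing a longer tensor.
compel_truncating = Compel(
    tokenizer=pipeline.tokenizer,
    text_encoder=pipeline.text_encoder,
    truncate_long_prompts=True,
)
truncated_embeds = compel_truncating.build_conditioning_tensor(prompt)
```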
Could it be this?
Even when initializing it like this, I still get …
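If the goal is only to silence the message, note that it is emitted through transformers' logging rather than by compel, so lowering the verbosity hides it. This is a workaround sketch and does not change how the long prompt is actually handled:

```python
# Workaround sketch: the warning is logged by transformers, so lowering its log
# level hides the message; it does not change how compel processes long prompts.
from transformers import logging as transformers_logging

transformers_logging.set_verbosity_error()
```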
I wonder if this means I'm supposed to slice the prompt into segments before sending it to the text encoder? Can you try slicing your prompt in half somewhere and using the new …
Thanks, just for clarification, would the … So instead of this:
Something like this?
Could I use tiktoken to calculate tokens, or some other method to count tokens?
Close, you need to put the '.and()' inside the prompt string. Compel has methods to count tokens; you can call …
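A sketch of that correction, splitting the thread's own prompt roughly in half and placing the `.and()` conjunction inside the prompt string; since the exact compel token-counting helper is cut off above, the pipeline's CLIP tokenizer (not tiktoken) is used here as a stand-in for counting:

```python
# Sketch: the .and() conjunction lives inside the prompt string itself, so compel
# parses it, rather than being a method called on a Python object.
conjunction_prompt = '("a cat playing with a ball", "in the forest").and()'
prompt_embeds = compel.build_conditioning_tensor(conjunction_prompt)

# Counting tokens: Stable Diffusion uses CLIP's tokenizer, not tiktoken. The count
# below includes the begin/end special tokens. (The compel helper mentioned above
# is truncated out of this thread, so this is a stand-in.)
token_count = len(pipeline.tokenizer("a cat playing with a ball in the forest").input_ids)
print(token_count)
```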
Versions
Reproduction code sample:
During this, I encounter the following message:
So I have the following questions:
Thanks in advance