region based not working for multiple prompts #28

Open · Mao718 opened this issue Oct 4, 2023 · 4 comments · May be fixed by #30

Mao718 commented Oct 4, 2023

Hello. I ran into a problem; can anyone help me with this?
Here's the code I ran:

import torch
from region_based import MultiDiffusion  # region_based.py from this repo

device = torch.device('cuda')
sd = MultiDiffusion(device)

# Two foreground masks over a 512x512 image: top half and bottom half.
mask = torch.zeros(2, 1, 512, 512).cuda()
mask[0, :, :256] = 1
mask[1, :, 256:] = 1

fg_masks = mask
bg_mask = 1 - torch.sum(fg_masks, dim=0, keepdim=True)
bg_mask[bg_mask < 0] = 0
masks = torch.cat([bg_mask, fg_masks])  # [3, 1, 512, 512]

prompts = ['dog', 'cat']  # + ['artifacts']
# neg_prompts = [opt.bg_negative] + opt.fg_negative
print(masks.shape, len(prompts))
img = sd.generate(masks, prompts, '', width=512)

It gave the following error.

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[12], line 17
     15 #neg_prompts = [opt.bg_negative] + opt.fg_negative
     16 print(masks.shape , len(prompts))
---> 17 img = sd.generate(masks, prompts , '' , width = 512 )

File ~/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File ~/Desktop/project/MultiDiffusion/region_based.py:142, in MultiDiffusion.generate(self, masks, prompts, negative_prompts, height, width, num_inference_steps, guidance_scale, bootstrapping)
    139     bg = self.scheduler.add_noise(bg, noise[:, :, h_start:h_end, w_start:w_end], t)
    140     #print(latent.shape , 'latent')
    141     #print(latent_view.shape ,bg.shape,masks_view.shape)
--> 142     latent_view[1:] = latent_view[1:] * masks_view[1:] + bg * (1 - masks_view[1:])
    144 # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes.
    145 latent_model_input = torch.cat([latent_view] * 2)

RuntimeError: The expanded size of the tensor (1) must match the existing size (2) at non-singleton dimension 0.  Target sizes: [1, 4, 64, 64].  Tensor sizes: [2, 4, 64, 64]

Thank you.
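
A note on the shapes in this traceback: the assignment target latent_view[1:] has batch size 1 while the broadcast right-hand side has batch size 2, which matches passing three masks (background + two foregrounds) but only two prompts. The repo's region_based.py demo appears to pass one prompt per mask, background first (compare the commented-out neg_prompts line above). Below is a minimal sketch of a setup where the counts line up; the background prompt text, the empty negative-prompt list, and the latent-resolution (64x64) masks are illustrative assumptions, not a confirmed fix for this issue.

import torch
from region_based import MultiDiffusion  # class from region_based.py; import path assumed

device = torch.device('cuda')
sd = MultiDiffusion(device)

# Foreground masks at latent resolution (512 // 8 = 64); use 512 instead if your
# copy of generate() expects pixel-space masks.
fg_masks = torch.zeros(2, 1, 64, 64, device=device)
fg_masks[0, :, :32] = 1   # top half    -> 'dog'
fg_masks[1, :, 32:] = 1   # bottom half -> 'cat'

# Background mask covers whatever the foreground masks do not.
bg_mask = 1 - torch.sum(fg_masks, dim=0, keepdim=True)
bg_mask[bg_mask < 0] = 0
masks = torch.cat([bg_mask, fg_masks])  # [3, 1, 64, 64]

# One prompt and one negative prompt per mask, background first,
# so that masks.shape[0] == len(prompts).
prompts = ['an empty background'] + ['dog', 'cat']  # background prompt is a placeholder
neg_prompts = [''] * len(prompts)

assert masks.shape[0] == len(prompts)
img = sd.generate(masks, prompts, neg_prompts, height=512, width=512)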

@MortezaMardani

I faced the same issue. The region-based code is buggy; it doesn't run. Please advise.

@MortezaMardani

in get_attention_scores

     455             baddbmm_input = attention_mask
     456             beta = 1
     457
 --> 458         attention_scores = torch.baddbmm(
     459             baddbmm_input,
     460             query,
     461             key.transpose(-1, -2),
RuntimeError: Expected size for first two dimensions of batch2 tensor to be: [30, 64] but got: [15, 64].
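
For reference, the query batch here (30) is exactly twice the key batch (15), a pattern that can show up when the latents are duplicated for classifier-free guidance but the text embeddings are not, for example because the prompt, negative-prompt, and mask counts disagree. This is only a guess from the truncated traceback; a purely illustrative pre-flight check before calling generate() (using the variable names from the first comment) would be:

# Illustrative sanity check, not part of the repo's code.
assert masks.shape[0] == len(prompts), "expected one prompt per mask (background first)"
assert len(neg_prompts) == len(prompts), "expected one negative prompt per prompt"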

@billpsomas

Hello everyone,

Did you manage to find a solution to this?

Thanks a lot

daeunni commented Aug 22, 2024

@billpsomas @MortezaMardani @Mao718 Same here. Did you guys figure out the issue?

dribnet linked a pull request (#30) on Aug 25, 2024 that will close this issue.