
ip-adapters don't work. torch.cuda.FloatTensor #2558

Closed
mikheys opened this issue Jan 23, 2024 · 4 comments · Fixed by #2560
Labels: bug (Something isn't working)

Comments


mikheys commented Jan 23, 2024

Hello. Today the IP-Adapters stopped working. I tried uninstalling CN completely and installing it again, and I also tried recreating the A1111 venv. What could it be?
[screenshots of the error attached]

huchenlei added the bug (Something isn't working) label on Jan 23, 2024
huchenlei (Collaborator) commented:

Successfully reproduced.

huchenlei (Collaborator) commented:

Might be caused by #2556.


Mozoloa commented Jan 23, 2024

I was literally searching for people having the same problem. This also happens to me with IP-Adapter FaceID v2; even just generating the preview fails:

2024-01-23 17:27:46,038 - ControlNet - INFO - Preview Resolution = 1780
G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
Traceback (most recent call last):
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_ui\controlnet_ui_group.py", line 999, in run_annotator
    result, is_image = preprocessor(
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\utils.py", line 80, in decorated_func
    return cached_func(*args, **kwargs)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\utils.py", line 64, in cached_func
    return func(*args, **kwargs)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\global_state.py", line 37, in unified_preprocessor
    return preprocessor_modules[preprocessor_name](*args, **kwargs)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\processor.py", line 732, in face_id_plus
    clip_embed, _ = clip(img, config='clip_h', low_vram=low_vram)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\processor.py", line 390, in clip
    result = clip_encoder[config](img)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\clipvision\__init__.py", line 129, in __call__
    result = self.model(**feat, output_hidden_states=True)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 1310, in forward
    vision_outputs = self.vision_model(
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 865, in forward
    hidden_states = self.embeddings(pixel_values)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 195, in forward
    patch_embeds = self.patch_embedding(pixel_values)  # shape = [*, width, grid, grid]
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 501, in network_Conv2d_forward
    return originals.Conv2d_forward(self, input)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "G:\AI\Image\Stable Diffusion\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
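
For context, that RuntimeError means the pixel tensor handed to F.conv2d is on the GPU (torch.cuda.FloatTensor) while the CLIP vision model's patch-embedding weights are still on the CPU (torch.FloatTensor), i.e. the model was never moved onto the same device as its input. Below is a minimal sketch of the mismatch and the usual fix, using a plain transformers CLIPVisionModelWithProjection rather than the extension's actual clipvision wrapper; the checkpoint name and shapes are illustrative assumptions, not the extension's code.

# Minimal repro/fix sketch, NOT the sd-webui-controlnet implementation.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

device = "cuda" if torch.cuda.is_available() else "cpu"

# Dummy image standing in for the ControlNet input image.
image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))

processor = CLIPImageProcessor()
model = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")

# The preprocessor moves the pixel tensor to the GPU ...
pixel_values = processor(images=image, return_tensors="pt").pixel_values.to(device)

# ... so the weights must live on the same device (and dtype). If this .to()
# is skipped while pixel_values is on CUDA, F.conv2d raises exactly the
# "Input type ... and weight type ... should be the same" error above.
model = model.to(device)

with torch.no_grad():
    outputs = model(pixel_values=pixel_values, output_hidden_states=True)

print(outputs.image_embeds.shape)  # torch.Size([1, 512]) for this checkpoint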


FLi79Za commented Jan 23, 2024

Same issue here, with all versions of IP-Adapter.
