
Diffusers IPAdapter bug #839

Closed
joel-simon opened this issue Apr 24, 2024 · 28 comments

@joel-simon

Describe the bug

I have the baseline text_to_image_sdxl_light.py working (using the LoRA checkpoint) and tried adding an IP adapter to it. Any IP-Adapter examples would be great; ideally I'm looking to use multiple IP adapters as well, as described in https://huggingface.co/docs/diffusers/main/en/using-diffusers/ip_adapter#multi-ip-adapter.

Thank you very much.
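For reference, the multi-adapter API from the linked diffusers docs looks roughly like this (a minimal sketch based on the documentation; the weight names and scales are illustrative, and the separate ViT-H image-encoder setup that the plus variants need is omitted):

    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # Pass a list of weight files to load several IP adapters at once.
    pipe.load_ip_adapter(
        "h94/IP-Adapter",
        subfolder="sdxl_models",
        weight_name=[
            "ip-adapter-plus_sdxl_vit-h.safetensors",
            "ip-adapter-plus-face_sdxl_vit-h.safetensors",
        ],
    )
    # One scale per adapter, in load order.
    pipe.set_ip_adapter_scale([0.7, 0.5])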

Your environment

OS

ubuntu

version: 0.9.1.dev20240420+cu118
git_commit: 665bcf8
cmake_build_type: Release
rdma: True
mlir: True
enterprise: False

How To Reproduce

Steps to reproduce the behavior (code or script):
Modify text_to_image_sdxl_light.py to add:

    pipe.load_ip_adapter(
        "h94/IP-Adapter",
        subfolder="sdxl_models",
        weight_name="ip-adapter-plus_sdxl_vit-h.safetensors",
    )
    pipe.set_ip_adapter_scale(0.6)

then run with:

    from diffusers.utils import load_image
    image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png")

    print("Warmup with running graphs...")
    torch.manual_seed(args.seed)
    image = pipe(
        prompt=args.prompt,
        height=args.height,
        width=args.width,
        num_inference_steps=n_steps,
        guidance_scale=0,
        ip_adapter_image=image,
        output_type=OUTPUT_TYPE,
    ).images

The complete error message

ERROR building graph got error.
ERROR [2024-04-24 22:40:40] /home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py:23 - Exception in forward: e=RuntimeError("\x1b[1m\x1b[38;2;255;000;000mError\x1b[0m: weight's second dim should be equal to input's second dim. \n")
WARNING [2024-04-24 22:40:40] /home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py:24 - Recompile oneflow module ...
ERROR building graph got error.
0%| | 0/4 [00:10<?, ?it/s]
Traceback (most recent call last):
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py", line 21, in wrapper
    return func(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/utils/graph_management_utils.py", line 91, in wrapper
    ret = func(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/deployable_module.py", line 99, in forward
    output = dpl_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 295, in __call__
    self._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 861, in _compile
    return self._dynamic_input_graph_cache._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/cache.py", line 121, in _compile
    return graph._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 865, in _compile
    return self._compile_new(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 884, in _compile_new
    _, eager_outputs = self.build_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 1429, in build_graph
    outputs = self.__build_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 1577, in __build_graph
    outputs = self.build(*lazy_args, **lazy_kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/graph.py", line 19, in build
    return self.model(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/infer_compiler_registry/register_diffusers/unet_2d_condition_oflow.py", line 289, in forward
    image_embeds = self.encoder_hid_proj(image_embeds).to(
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/diffusers/models/embeddings.py", line 910, in forward
    image_embed = image_projection_layer(image_embed)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/diffusers/models/embeddings.py", line 868, in forward
    x = self.proj_in(x)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/modules/linear.py", line 131, in forward
    return flow._C.fused_matmul_bias(x, self.weight, self.bias)
RuntimeError: Error: weight's second dim should be equal to input's second dim.

@ccssu
Contributor

ccssu commented Apr 29, 2024

Here is a supported case that you can try first: #837 @joel-simon

@xiecon

xiecon commented Apr 29, 2024

I'm running into a similar problem.

......
pipe = pipe.to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-plus-face_sd15.bin"
)
pipe = compile_pipe(pipe)
......

ERROR message

Exception in forward: e=AttributeError("'list' object has no attribute 'to'")
......
    _, eager_outputs = self.build_graph(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/oneflow/nn/graph/graph.py", line 1428, in build_graph
    outputs = self.__build_graph(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/oneflow/nn/graph/graph.py", line 1576, in __build_graph
    outputs = self.build(*lazy_args, **lazy_kwargs)
  File "/opt/conda/lib/python3.10/site-packages/onediff/infer_compiler/oneflow/graph.py", line 19, in build
    return self.model(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/infer_compiler_registry/register_diffusers/unet_2d_condition_oflow.py", line 289, in forward
    image_embeds = self.encoder_hid_proj(image_embeds).to(
AttributeError: 'list' object has no attribute 'to'

@ccssu
Contributor

ccssu commented Apr 29, 2024

I'm running into a similar problem.

Please try `cd onediff && git checkout dev_support_diffusers_ipa` @xiecon

@xiecon

xiecon commented Apr 29, 2024

Thank you, the problem is solved. @ccssu

@joel-simon
Author

Hi @ccssu, I tried that before but ran into this error.

Code

import os
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image
from onediffx import compile_pipe, load_pipe, save_pipe
from onediff.infer_compiler import oneflow_compile

def main():
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        variant="fp16",
        torch_dtype=torch.float16,
    ).to("cuda")

    pipe.load_ip_adapter(
        "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
    )
    pipe.set_ip_adapter_scale(0.6)

    # Load images
    image = load_image(
        "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png"
    )

    generator = torch.Generator(device="cuda").manual_seed(0)

    pipe = compile_pipe(pipe)

    cache_path = "ip_adapter_cache"
    if os.path.exists(cache_path):
        load_pipe(pipe, cache_path)

    images = pipe(
        prompt="a bear at a restaurant",
        ip_adapter_image=image,
        negative_prompt="",
        num_inference_steps=20,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]

    images.save("test.jpg")

    if not os.path.exists(cache_path):
        os.makedirs(cache_path)
        save_pipe(pipe, cache_path)


if __name__ == "__main__":
    main()

Error

ERROR building graph got error.
ERROR [2024-04-29 17:55:20] /home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py:23 - Exception in forward: e=ValueError('too many values to unpack (expected 3)')
WARNING [2024-04-29 17:55:20] /home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py:24 - Recompile oneflow module ...
ERROR building graph got error.
0%| | 0/20 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py", line 21, in wrapper
    return func(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/utils/graph_management_utils.py", line 91, in wrapper
    ret = func(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/deployable_module.py", line 99, in forward
    output = dpl_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 295, in __call__
    self._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 861, in _compile
    return self._dynamic_input_graph_cache._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/cache.py", line 121, in _compile
    return graph._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 865, in _compile
    return self._compile_new(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 884, in _compile_new
    _, eager_outputs = self.build_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 1429, in build_graph
    outputs = self.__build_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 1577, in __build_graph
    outputs = self.build(*lazy_args, **lazy_kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/graph.py", line 19, in build
    return self.model(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/diffusers/models/unets/unet_2d_condition.py", line 1219, in forward
    sample, res_samples = downsample_block(
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 1279, in forward
    hidden_states = attn(
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/diffusers/models/transformers/transformer_2d.py", line 397, in forward
    hidden_states = block(
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/diffusers/models/attention.py", line 366, in forward
    attn_output = self.attn2(
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/infer_compiler_registry/register_diffusers/attention_processor_oflow.py", line 388, in forward
    return self.processor(
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/infer_compiler_registry/register_diffusers/attention_processor_oflow.py", line 2325, in forward
    ip_key = attn.head_to_batch_dim(ip_key)
  File "/home/ubuntu/onediff/src/infer_compiler_registry/register_diffusers/attention_processor_oflow.py", line 407, in head_to_batch_dim
    batch_size, seq_len, dim = tensor.shape
ValueError: too many values to unpack (expected 3)

Thank you,
Joel

@ccssu
Contributor

ccssu commented Apr 30, 2024

What is your Diffusers version? Also, please try the following @joel-simon:

# pipe = compile_pipe(pipe)
 pipe.unet = oneflow_compile(pipe.unet)

@joel-simon
Author

@ccssu

Thanks, unfortunately that returns the same error.
Diffusers was just built from source (0.28.0.dev0); I also tried 0.27.0 earlier. torch==1.13.1.

@joel-simon
Author

In case it's helpful, I added a print there...


    def head_to_batch_dim(self, tensor, out_dim=3):
        head_size = self.heads
        print("tensor.shape", tensor.shape)
        batch_size, seq_len, dim = tensor.shape
        tensor = tensor.reshape(batch_size, -1, head_size, dim // head_size)
        tensor = tensor.permute(0, 2, 1, 3)

        if out_dim == 3:
            tensor = tensor.reshape(batch_size * head_size, -1, dim // head_size)

        return tensor

tensor.shape oneflow.Size([2, 4096, 640])
tensor.shape oneflow.Size([2, 4096, 640])
tensor.shape oneflow.Size([2, 4096, 640])
tensor.shape oneflow.Size([2, 4096, 640])
tensor.shape oneflow.Size([2, 77, 640])
tensor.shape oneflow.Size([2, 77, 640])
tensor.shape oneflow.Size([2, 1, 4, 640])
[ERROR](GRAPH:OneflowGraph_1:OneflowGraph) building graph got error.
  0%|                                                                                                                           | 0/20 [00:04<?, ?it/s]
Traceback (most recent call last):
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py", line 21, in wrapper
    return func(self, *args, **kwargs)
...

@joel-simon
Author

Adding these two lines got the UNet to compile, and the full pipeline compiles as well. Almost a fix, but in my app with multiple IP adapters it then ignores all but the last one.

    def head_to_batch_dim(self, tensor, out_dim=3):
        head_size = self.heads
        # print("tensor.shape", tensor.shape)
        # Workaround: under the compiler the IP-adapter embeds can arrive as
        # a 4-D tensor (extra leading image/adapter dim, e.g. [2, 1, 4, 640]);
        # keep only the first slice so the 3-way unpack below succeeds.
        if len(tensor.shape) == 4:
            tensor = tensor[:, 0]
        batch_size, seq_len, dim = tensor.shape

@ccssu
Contributor

ccssu commented Apr 30, 2024

Got it. Please try `pip3 install -U torch torchvision torchaudio`. Different attention processors are used on torch > 2.0 versus torch==1.13.1; using a torch > 2.0 version will solve this problem (see the sketch below).

@ccssu

Thanks, unfortunately that returns the same error. Diffusers just build from source - 0.28.0.dev0 - I tried earlier on 0.27.0 , torch==1.13.1
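For context, diffusers keys its attention-processor choice on whether torch provides scaled_dot_product_attention, which was added in torch 2.0. A minimal sketch of that check (simplified; the real selection logic lives inside diffusers' Attention module):

    import torch.nn.functional as F
    from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0

    # torch >= 2.0 exposes F.scaled_dot_product_attention, so diffusers
    # installs the fused-SDPA processor; torch 1.13 falls back to the
    # legacy processor, which takes a different code path under oneflow.
    if hasattr(F, "scaled_dot_product_attention"):
        processor = AttnProcessor2_0()
    else:
        processor = AttnProcessor()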

@joel-simon
Author

@ccssu ah gotcha, torch >= 2.0 did fix it, thanks!

It's not working for multiple IP adapters, but I don't think that's supported yet, right?
Looking forward to that, thanks.

@cthulhu-tww

@ccssu I tried switching to the branch you provided with IP-Adapter support. After pulling the code and re-running the demo code from your branch, I still hit this problem: AttributeError: 'list' object has no attribute 'to'. My diffusers is 0.27, torch is the latest, and CUDA is 12.1. Thanks.

@ccssu
Contributor

ccssu commented Apr 30, 2024

@ccssu I tried switching to the branch you provided with IP-Adapter support. After pulling the code and re-running the demo code from your branch, I still hit this problem: AttributeError: 'list' object has no attribute 'to'. My diffusers is 0.27, torch is the latest, and CUDA is 12.1. Thanks.

Could you set `export ONEDIFF_DEBUG=1` and share the logs from your side in debug mode?

@ccssu
Contributor

ccssu commented Apr 30, 2024

@ccssu ah gotcha, torch >= 2.0 did fix it, thanks!

It's not working for multiple IP adapters, but I don't think that's supported yet, right? Looking forward to that, thanks.

Please try #837 (comment), with `pip install git+https://github.com/huggingface/diffusers.git`
(diffusers 0.28.0.dev0).

@joel-simon
Author

@ccssu Thanks, that example does now work! My code with some other modifications does not, so I will investigate further and follow up.

@cthulhu-tww

@ccssu Hello, after turning on the debug log I get this error:
ERROR run got error: <class 'oneflow._oneflow_internal.exception.Exception'> Check failed: (3 == 1)
  File "oneflow/core/job/job_interpreter.cpp", line 325, in InterpretJob
    RunNormalOp(launch_context, launch_op, inputs)
  File "oneflow/core/job/job_interpreter.cpp", line 237, in RunNormalOp
    it.Apply(*op, inputs, &outputs, OpExprInterpContext(empty_attr_map, JUST(launch_op.device)))
  File "oneflow/core/framework/op_interpreter/eager_local_op_interpreter.cpp", line 84, in NaiveInterpret
    & -> Maybe { LocalTensorMetaInferArgs ... mut_local_tensor_infer_cache()->GetOrInfer(infer_args)); }()
  File "oneflow/core/framework/op_interpreter/eager_local_op_interpreter.cpp", line 87, in operator()
    user_op_expr.mut_local_tensor_infer_cache()->GetOrInfer(infer_args)
  File "oneflow/core/framework/local_tensor_infer_cache.cpp", line 210, in GetOrInfer
    Infer(user_op_expr, infer_args)
  File "oneflow/core/framework/local_tensor_infer_cache.cpp", line 178, in Infer
    user_op_expr.InferPhysicalTensorDesc( infer_args.attrs ... ) -> TensorMeta { return &output_mut_metas.at(i); })
  File "oneflow/core/framework/op_expr.cpp", line 602, in InferPhysicalTensorDesc
    physical_tensor_desc_infer_fn(&infer_ctx)
  File "oneflow/user/ops/concat_op.cpp", line 55, in InferLogicalTensorDesc
    CHECK_EQ_OR_RETURN(in_desc.shape().At(i), out_dim_vec.at(i))
Error Type: oneflow.ErrorProto.check_failed_error
Traceback (most recent call last):
  File "/data/application/qmb-aigc-sdxl/demo.py", line 87, in <module>
    image = pipeline(
    ^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 1124, in __call__
    image_embeds = self.prepare_ip_adapter_image_embeds(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 538, in prepare_ip_adapter_image_embeds
    single_image_embeds, single_negative_image_embeds = self.encode_image(
    ^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 514, in encode_image
    image_embeds = self.image_encoder(image).image_embeds
    ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    result = forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/onediff/infer_compiler/utils/online_quantization_utils.py", line 48, in wrapper
    output = func(self, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/onediff/infer_compiler/utils/args_tree_util.py", line 50, in wrapper
    output = func(self, *mapped_args, **mapped_kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/onediff/infer_compiler/oneflow/utils.py", line 18, in wrapper
    return func(self, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/onediff/infer_compiler/utils/graph_management_utils.py", line 91, in wrapper
    ret = func(self, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/onediff/infer_compiler/oneflow/deployable_module.py", line 99, in forward
    output = dpl_graph(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/oneflow/nn/graph/graph.py", line 297, in __call__
    return self.__run(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/application/qmb-aigc-sdxl/venv/lib/python3.11/site-packages/oneflow/nn/graph/graph.py", line 1862, in __run
    _eager_outputs = oneflow._oneflow_internal.nn.graph.RunLazyNNGraphByVM(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
oneflow._oneflow_internal.exception.Exception: Check failed: (3 == 1)
  File "oneflow/core/job/job_interpreter.cpp", line 325, in InterpretJob
    RunNormalOp(launch_context, launch_op, inputs)
  File "oneflow/core/job/job_interpreter.cpp", line 237, in RunNormalOp
    it.Apply(*op, inputs, &outputs, OpExprInterpContext(empty_attr_map, JUST(launch_op.device)))
  File "oneflow/core/framework/op_interpreter/eager_local_op_interpreter.cpp", line 84, in NaiveInterpret
    & -> Maybe { LocalTensorMetaInferArgs ... mut_local_tensor_infer_cache()->GetOrInfer(infer_args)); }()
  File "oneflow/core/framework/op_interpreter/eager_local_op_interpreter.cpp", line 87, in operator()
    user_op_expr.mut_local_tensor_infer_cache()->GetOrInfer(infer_args)
  File "oneflow/core/framework/local_tensor_infer_cache.cpp", line 210, in GetOrInfer
    Infer(user_op_expr, infer_args)
  File "oneflow/core/framework/local_tensor_infer_cache.cpp", line 178, in Infer
    user_op_expr.InferPhysicalTensorDesc( infer_args.attrs ... ) -> TensorMeta { return &output_mut_metas.at(i); })
  File "oneflow/core/framework/op_expr.cpp", line 602, in InferPhysicalTensorDesc
    physical_tensor_desc_infer_fn(&infer_ctx)
  File "oneflow/user/ops/concat_op.cpp", line 55, in InferLogicalTensorDesc
    CHECK_EQ_OR_RETURN(in_desc.shape().At(i), out_dim_vec.at(i))
Error Type: oneflow.ErrorProto.check_failed_error

Thanks for your reply.

@ccssu
Contributor

ccssu commented May 6, 2024

Hello, onediff does not support Python 3.11 yet; please use Python 3.10. @cthulhu-tww

@ccssu Hello, after turning on the debug log I get this error: ERROR run got error: <class 'oneflow._oneflow_internal.exception.Exception'> Check failed: (3 == 1) File "oneflow/core/job/job_interpreter.cpp", line

@cthulhu-tww

@ccssu OK, I'll try again. Thank you!

@cthulhu-tww

@ccssu Hello, as you suggested I downgraded Python to 3.10, but I still get this error:
ERROR run got error: <class 'oneflow._oneflow_internal.exception.Exception'> Check failed: (3 == 1)
  File "oneflow/core/job/job_interpreter.cpp", line 325, in InterpretJob
    RunNormalOp(launch_context, launch_op, inputs)
  File "oneflow/core/job/job_interpreter.cpp", line 237, in RunNormalOp
    it.Apply(*op, inputs, &outputs, OpExprInterpContext(empty_attr_map, JUST(launch_op.device)))
  File "oneflow/core/framework/op_interpreter/eager_local_op_interpreter.cpp", line 84, in NaiveInterpret
    & -> Maybe { LocalTensorMetaInferArgs ... mut_local_tensor_infer_cache()->GetOrInfer(infer_args)); }()
  File "oneflow/core/framework/op_interpreter/eager_local_op_interpreter.cpp", line 87, in operator()
    user_op_expr.mut_local_tensor_infer_cache()->GetOrInfer(infer_args)
  File "oneflow/core/framework/local_tensor_infer_cache.cpp", line 210, in GetOrInfer
    Infer(user_op_expr, infer_args)
  File "oneflow/core/framework/local_tensor_infer_cache.cpp", line 178, in Infer
    user_op_expr.InferPhysicalTensorDesc( infer_args.attrs ... ) -> TensorMeta { return &output_mut_metas.at(i); })
  File "oneflow/core/framework/op_expr.cpp", line 602, in InferPhysicalTensorDesc
    physical_tensor_desc_infer_fn(&infer_ctx)
  File "oneflow/user/ops/concat_op.cpp", line 55, in InferLogicalTensorDesc
    CHECK_EQ_OR_RETURN(in_desc.shape().At(i), out_dim_vec.at(i))
Error Type: oneflow.ErrorProto.check_failed_error
Traceback (most recent call last):
  File "/data/application/qmb-aigc-sdxl/demo.py", line 87, in <module>
    image = pipeline(
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 1124, in __call__
    image_embeds = self.prepare_ip_adapter_image_embeds(
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 538, in prepare_ip_adapter_image_embeds
    single_image_embeds, single_negative_image_embeds = self.encode_image(
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 514, in encode_image
    image_embeds = self.image_encoder(image).image_embeds
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/onediff/infer_compiler/utils/online_quantization_utils.py", line 48, in wrapper
    output = func(self, *args, **kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/onediff/infer_compiler/utils/args_tree_util.py", line 50, in wrapper
    output = func(self, *mapped_args, **mapped_kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/onediff/infer_compiler/oneflow/utils.py", line 18, in wrapper
    return func(self, *args, **kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/onediff/infer_compiler/utils/graph_management_utils.py", line 91, in wrapper
    ret = func(self, *args, **kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/onediff/infer_compiler/oneflow/deployable_module.py", line 99, in forward
    output = dpl_graph(*args, **kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/oneflow/nn/graph/graph.py", line 297, in __call__
    return self.__run(*args, **kwargs)
  File "/root/anaconda3/envs/venv/lib/python3.10/site-packages/oneflow/nn/graph/graph.py", line 1862, in __run
    _eager_outputs = oneflow._oneflow_internal.nn.graph.RunLazyNNGraphByVM(
oneflow._oneflow_internal.exception.Exception: Check failed: (3 == 1)
  File "oneflow/core/job/job_interpreter.cpp", line 325, in InterpretJob
    RunNormalOp(launch_context, launch_op, inputs)
  File "oneflow/core/job/job_interpreter.cpp", line 237, in RunNormalOp
    it.Apply(*op, inputs, &outputs, OpExprInterpContext(empty_attr_map, JUST(launch_op.device)))
  File "oneflow/core/framework/op_interpreter/eager_local_op_interpreter.cpp", line 84, in NaiveInterpret
    & -> Maybe { LocalTensorMetaInferArgs ... mut_local_tensor_infer_cache()->GetOrInfer(infer_args)); }()
  File "oneflow/core/framework/op_interpreter/eager_local_op_interpreter.cpp", line 87, in operator()
    user_op_expr.mut_local_tensor_infer_cache()->GetOrInfer(infer_args)
  File "oneflow/core/framework/local_tensor_infer_cache.cpp", line 210, in GetOrInfer
    Infer(user_op_expr, infer_args)
  File "oneflow/core/framework/local_tensor_infer_cache.cpp", line 178, in Infer
    user_op_expr.InferPhysicalTensorDesc( infer_args.attrs ... ) -> TensorMeta { return &output_mut_metas.at(i); })
  File "oneflow/core/framework/op_expr.cpp", line 602, in InferPhysicalTensorDesc
    physical_tensor_desc_infer_fn(&infer_ctx)
  File "oneflow/user/ops/concat_op.cpp", line 55, in InferLogicalTensorDesc
    CHECK_EQ_OR_RETURN(in_desc.shape().At(i), out_dim_vec.at(i))
Error Type: oneflow.ErrorProto.check_failed_error

@ccssu
Contributor

ccssu commented May 6, 2024

Judging from the error, it fails at image_embeds = self.image_encoder(image).image_embeds, so you are probably using the compile_pipe interface.
@cthulhu-tww
Fix 1: use

from onediff.infer_compiler import oneflow_compile
# pipe = compile_pipe(pipe)
 pipe.unet = oneflow_compile(pipe.unet)

Fix 2: try setting export VM_REBUILD_DYNAMIC_SHAPE=1 (see the sketch below). Fix 1 is recommended.

diffusers >= 0.27
torch > 2.0
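
If you go with fix 2, the variable needs to be in the environment before the oneflow runtime starts. A minimal sketch (that the flag is read at import time is my assumption; exporting it in the shell before launching avoids the question):

    import os

    # Set the flag before oneflow/onediff are imported so the VM sees it
    # (assumption about when the flag is read; `export` in the shell before
    # launching the script is the safe route).
    os.environ["VM_REBUILD_DYNAMIC_SHAPE"] = "1"

    import oneflow  # imported after setting the flag on purpose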

@joel-simon
Author

@ccssu Hello, I have a minimal script that reproduces the error.

It does work for the IP adapter; however, it does not support pipe.set_ip_adapter_scale.
In the script I loop over a few different scales. With --compile 0 it outputs correctly; when compiled, it returns the same image for every scale value.

Using torch 2.3.0, Python 3.9, diffusers built from source (0.28.0), on the dev_support_diffusers_ipa branch.

import argparse
import os
import time

import torch
import diffusers
from safetensors.torch import load_file
from diffusers import StableDiffusionXLPipeline
from transformers import CLIPVisionModelWithProjection
from onediffx import compile_pipe, save_pipe, load_pipe
from huggingface_hub import hf_hub_download
from onediff.infer_compiler import oneflow_compile


try:
    print("diffusers.utils.USE_PEFT_BACKEND=", diffusers.utils.USE_PEFT_BACKEND)
    USE_PEFT_BACKEND = diffusers.utils.USE_PEFT_BACKEND
except Exception as e:
    USE_PEFT_BACKEND = False

USE_PEFT_BACKEND = True

parser = argparse.ArgumentParser()
parser.add_argument(
    "--base", type=str, default="stabilityai/stable-diffusion-xl-base-1.0"
)
parser.add_argument("--repo", type=str, default="ByteDance/SDXL-Lightning")
parser.add_argument("--cpkt", type=str, default="sdxl_lightning_4step_unet.safetensors")
parser.add_argument("--variant", type=str, default="fp16")
parser.add_argument(
    "--prompt",
    type=str,
    # default="street style, detailed, raw photo, woman, face, shot on CineStill 800T",
    default="A girl smiling",
)
parser.add_argument("--save_graph", action="store_true")
parser.add_argument("--load_graph", action="store_true")
parser.add_argument("--save_graph_dir", type=str, default="cached_pipe")
parser.add_argument("--load_graph_dir", type=str, default="cached_pipe")
parser.add_argument("--height", type=int, default=1024)
parser.add_argument("--width", type=int, default=1024)
parser.add_argument(
    "--saved_image", type=str, required=False, default="sdxl-light-out.png"
)
parser.add_argument("--seed", type=int, default=1)
parser.add_argument(
    "--compile",
    type=(lambda x: str(x).lower() in ["true", "1", "yes"]),
    default=True,
)


args = parser.parse_args()

OUTPUT_TYPE = "pil"

n_steps = int(args.cpkt[len("sdxl_lightning_") : len("sdxl_lightning_") + 1])

is_lora_cpkt = "lora" in args.cpkt

if args.compile:
    from onediff.schedulers import EulerDiscreteScheduler
else:
    from diffusers import EulerDiscreteScheduler

from diffusers.utils import load_image

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder", torch_dtype=torch.float16
).to("cuda")

if is_lora_cpkt:
    print("is_lora_cpkt")
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained(
        "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
    )
    if not USE_PEFT_BACKEND:
        print("PEFT backend is required for load_lora_weights")
        exit(0)
    pipe = StableDiffusionXLPipeline.from_single_file(
        args.base,
        vae=vae,
        torch_dtype=torch.float16,
        image_encoder=image_encoder,
        variant="fp16",
    ).to("cuda")
    if os.path.isfile(os.path.join(args.repo, args.cpkt)):
        pipe.load_lora_weights(os.path.join(args.repo, args.cpkt))
    else:
        pipe.load_lora_weights(hf_hub_download(args.repo, args.cpkt))
    pipe.fuse_lora()
else:
    from diffusers import UNet2DConditionModel, AutoencoderKL

    vae = AutoencoderKL.from_pretrained(
        "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
    )
    unet = UNet2DConditionModel.from_config(args.base, subfolder="unet").to(
        "cuda", torch.float16
    )
    if os.path.isfile(os.path.join(args.repo, args.cpkt)):
        unet.load_state_dict(
            load_file(os.path.join(args.repo, args.cpkt), device="cuda")
        )
    else:
        unet.load_state_dict(
            load_file(hf_hub_download(args.repo, args.cpkt), device="cuda")
        )
    pipe = StableDiffusionXLPipeline.from_pretrained(
        args.base,
        unet=unet,
        vae=vae,
        image_encoder=image_encoder,
        torch_dtype=torch.float16,
        variant="fp16",
    ).to("cuda")


pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name=[
        "ip-adapter-plus_sdxl_vit-h.safetensors",
        "ip-adapter-plus-face_sdxl_vit-h.safetensors",
        "ip-adapter-plus_sdxl_vit-h.safetensors",
    ],
    image_encoder_folder=None,
)
pipe.set_ip_adapter_scale(
    [
        0.6,
        0.8,
        {"up": {"block_0": [0.0, 0.8, 0.0]}},  # Style layers.
    ]
)


pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

if args.compile:
    pipe = compile_pipe(pipe)
    # pipe = compile_pipe(pipe, ignores=("vae",))
    # pipe.text_encoder = oneflow_compile(pipe.text_encoder)
    # pipe.text_encoder_2 = oneflow_compile(pipe.text_encoder_2)
    # pipe.unet = oneflow_compile(pipe.unet)

if args.load_graph:
    print("Loading graphs...")
    load_pipe(pipe, args.load_graph_dir)

from diffusers.utils import load_image

ip_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png"
)

ip_image_face = load_image(
    "https://artbreeder.b-cdn.net/imgs/7d79b3dc18516775d224.jpeg"
)

ip_style_face = load_image(
    "https://artbreeder.b-cdn.net/imgs/a9898db5a9cd90a03db31e57e076.jpeg"
)

image_embeds = pipe.prepare_ip_adapter_image_embeds(
    ip_adapter_image=[ip_image, ip_image_face, ip_style_face],
    ip_adapter_image_embeds=None,
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=False,
)

print("Warmup with running graphs...")
torch.manual_seed(args.seed)
image = pipe(
    prompt=args.prompt,
    width=args.width,
    height=args.height,
    num_inference_steps=n_steps,
    guidance_scale=1,
    ip_adapter_image_embeds=image_embeds,
    adapter_conditioning_scale=1.0,
    output_type=OUTPUT_TYPE,
).images


# Normal run
print("Normal run...")
for scale in [0.2, 0.4, 0.6, 0.8]:
    pipe.set_ip_adapter_scale(
        [
            scale,
            0.8,
            {"up": {"block_0": [0.0, scale, 0.0]}},
        ]
    )
    torch.manual_seed(args.seed)
    start_t = time.time()
    image = pipe(
        prompt=args.prompt,
        width=args.width,
        height=args.height,
        num_inference_steps=n_steps,
        guidance_scale=1,
        ip_adapter_image_embeds=image_embeds,
        output_type=OUTPUT_TYPE,
    ).images
    end_t = time.time()
    print(f"e2e ({n_steps} steps) elapsed: {end_t - start_t} s")
    image[0].save(f"sdxl_light_{int(scale*20)}.jpg")

for width, height in [(960, 960), (1280, 720)]:
    torch.manual_seed(args.seed)
    start_t = time.time()
    image = pipe(
        prompt=args.prompt,
        width=width,
        height=height,
        num_inference_steps=n_steps,
        guidance_scale=1,
        ip_adapter_image_embeds=image_embeds,
        output_type=OUTPUT_TYPE,
    ).images
    end_t = time.time()
    print(f"e2e ({n_steps} steps) elapsed: {end_t - start_t} s")
    image[0].save(f"sdxl_light_{width}_{height}.jpg")


if args.save_graph:
    print("Saving graphs...")
    save_pipe(pipe, args.save_graph_dir)

print("done")

It's a different issue, but the script also errors with different image sizes on sd_lightning.

ERROR run got error: <class 'oneflow._oneflow_internal.exception.RuntimeError'> Error: Reshape infered output element count is different with input in op_name: model.down_blocks.1.attentions.0-reshape-484 input shape is : (1,60,60,640) , output shape is : (1,4096,640) , output logical shape is (1,4096,640) , and reshape shape conf is : (1,4096,640) op_loc:

ERROR [2024-05-13 21:57:56] /home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py:23 - Exception in forward: e=RuntimeError('\x1b[1m\x1b[38;2;255;000;000mError\x1b[0m: Reshape infered output element count is different with input in op_name: model.down_blocks.1.attentions.0-reshape-484 input shape is : (1,60,60,640) , output shape is : (1,4096,640) , output logical shape is (1,4096,640) , and reshape shape conf is : (1,4096,640) op_loc: \n')
WARNING [2024-05-13 21:57:56] /home/ubuntu/onediff/src/onediff/infer_compiler/oneflow/utils.py:24 - Recompile oneflow module ...

Thanks.

@zhangvia

This branch does not seem to support dynamic resolution; a different resolution triggers a recompile (see the warmup sketch below).
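
One mitigation, assuming the graph cache is keyed on input shape so each resolution compiles once rather than at request time, is to warm the pipeline up at every resolution you plan to serve. A sketch reusing `pipe`, `args`, and `n_steps` from the repro script above:

    # Build (and cache) one shape-specialized graph per target resolution
    # up front, so later requests at these sizes skip recompilation.
    for width, height in [(1024, 1024), (960, 960), (1280, 720)]:
        pipe(
            prompt=args.prompt,
            width=width,
            height=height,
            num_inference_steps=n_steps,
            guidance_scale=1,
        ).images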

@lqfool

lqfool commented May 21, 2024

The IP-Adapter scale cannot be changed dynamically; no matter how it is adjusted, inference always runs with the scale from the first compilation. @ccssu

@joel-simon
Author

@ccssu Hi, any update on the IP-Adapter weight issue I mentioned above?

Thanks,
joel

@lqfool

lqfool commented May 28, 2024

@ccssu Hi, any update on the IP-Adapter weight issue I mentioned above?

Thanks, joel

#837 (comment)

@joel-simon
Author

@lqfool Thanks, unfortunately that example did not work.
@ccssu

On commit f3f7e4e
Diffusers 0.29.0 (and tried 0.28.0)
Python 3.9.0
torch 2.3.0

The script works OK without compilation.

Thanks

python examples/script_01.py

---------------------------------------- start ----------------------------------------
/opt/conda/envs/pytorch2/lib/python3.9/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Loading pipeline components...: 100%|██████████| 7/7 [00:02<00:00, 2.92it/s]
WARNING [2024-05-30 23:22:11] /home/ubuntu/onediff/src/onediff/infer_compiler/backends/oneflow/transform/custom_transform.py:49 - Failed to import register_diffusers from /home/ubuntu/onediff/src/infer_compiler_registry/register_diffusers. e=ImportError("cannot import name 'transform_mgr' from 'onediff.infer_compiler.transform' (unknown location)")
  0%|          | 0/25 [00:00<?, ?it/s]
cross_attention_kwargs ['ip_adapter_masks'] are not expected by <class 'diffusers.models.attention_processor.AttnProcessor2_0'> and will be ignored.
cross_attention_kwargs ['ip_adapter_masks'] are not expected by <class 'diffusers.models.attention_processor.AttnProcessor2_0'> and will be ignored.
cross_attention_kwargs ['ip_adapter_masks'] are not expected by <class 'diffusers.models.attention_processor.IPAdapterAttnProcessor2_0'> and will be ignored.
cross_attention_kwargs ['ip_adapter_masks'] are not expected by <class 'diffusers.models.attention_processor.IPAdapterAttnProcessor2_0'> and will be ignored.
[ERROR](GRAPH:OneflowGraph_0:OneflowGraph) building graph got error.
ERROR [2024-05-30 23:22:20] /home/ubuntu/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py:37 - Exception in forward: e=NotImplementedError()
WARNING [2024-05-30 23:22:20] /home/ubuntu/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py:38 - Recompile oneflow module ...
cross_attention_kwargs ['ip_adapter_masks'] are not expected by <class 'diffusers.models.attention_processor.AttnProcessor2_0'> and will be ignored.
cross_attention_kwargs ['ip_adapter_masks'] are not expected by <class 'diffusers.models.attention_processor.AttnProcessor2_0'> and will be ignored.
cross_attention_kwargs ['ip_adapter_masks'] are not expected by <class 'diffusers.models.attention_processor.IPAdapterAttnProcessor2_0'> and will be ignored.
cross_attention_kwargs ['ip_adapter_masks'] are not expected by <class 'diffusers.models.attention_processor.IPAdapterAttnProcessor2_0'> and will be ignored.
[ERROR](GRAPH:OneflowGraph_1:OneflowGraph) building graph got error.
  0%|          | 0/25 [00:07<?, ?it/s]
Traceback (most recent call last):
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py", line 35, in wrapper
    return func(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/backends/oneflow/graph_management_utils.py", line 122, in wrapper
    ret = func(self, *args, **kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/backends/oneflow/deployable_module.py", line 136, in forward
    output = dpl_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 295, in __call__
    self._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 861, in _compile
    return self._dynamic_input_graph_cache._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/cache.py", line 121, in _compile
    return graph._compile(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 865, in _compile
    return self._compile_new(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 884, in _compile_new
    _, eager_outputs = self.build_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 1429, in build_graph
    outputs = self.__build_graph(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/graph.py", line 1577, in __build_graph
    outputs = self.build(*lazy_args, **lazy_kwargs)
  File "/home/ubuntu/onediff/src/onediff/infer_compiler/backends/oneflow/graph.py", line 19, in build
    return self.model(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/diffusers/models/unets/unet_2d_condition.py", line 1220, in forward
    sample, res_samples = downsample_block(
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 1288, in forward
    hidden_states = attn(
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/diffusers/models/transformers/transformer_2d.py", line 448, in forward
    hidden_states = block(
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/diffusers/models/attention.py", line 366, in forward
    attn_output = self.attn2(
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/diffusers/models/attention_processor.py", line 539, in forward
    return self.processor(
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 188, in __call__
    result = self.__block_forward(*args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/graph/proxy.py", line 238, in __block_forward
    result = unbound_forward_of_module_instance(self, *args, **kwargs)
  File "/opt/conda/envs/pytorch2/lib/python3.9/site-packages/oneflow/nn/modules/module.py", line 200, in forward
    raise NotImplementedError()
NotImplementedError

@joel-simon
Author

@ccssu any updates here? I need ipadapter support to use this library. Thank you very much.

@strint
Collaborator

strint commented Jul 5, 2024

@joel-simon please add a new issue for this.

This one is too old and too long to follow.
