t2iadapter_style_sd14v1 gives something strange #547
Comments
Make a copy of the YAML config file; it MUST have the same NAME and be in the same FOLDER as the adapter model it belongs to. This isn't stated in the instructions for those adapters, and it could be improved to support more generic names. Note that https://huggingface.co/webui/ControlNet-modules-safetensors/tree/main hosts pruned t2iadapter files, so some people might download those instead; they give the same results but are smaller.
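For reference, a minimal sketch of the layout this describes; the extensions\sd-webui-controlnet\models path is an assumption based on a default webui install, and the point is only that each adapter file sits next to a .yaml with the exact same base name:

```
extensions\sd-webui-controlnet\models\t2iadapter_style_sd14v1.pth
extensions\sd-webui-controlnet\models\t2iadapter_style_sd14v1.yaml
extensions\sd-webui-controlnet\models\t2iadapter_style-fp16.safetensors
extensions\sd-webui-controlnet\models\t2iadapter_style-fp16.yaml
```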
Where do you get this t2iadapter_style_sd14v1.yaml file? I didn't find it. @brunogcar
Thanks, so putting this file in the models folder with the same name should solve the RuntimeError issue?
For me the style adapter doesn't work; I have the issue described in #539. But as far as I've understood, it is mandatory to have the same name.
Does anyone have any idea why t2iadapter_style_sd14v1/fp16 gives this weird image that doesn't look at all like the style of the image I load?
Have you tried adding
No, I haven't. I'll try it now and report if anything has changed.
Some strange images have been replaced by other strange images...
Your arguments should not include --medvram or --lowvram when using the style adapter; enable the Low VRAM checkbox instead. You can only run this on 6 GB of VRAM or above, since the preprocessor has to load into GPU memory.
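For illustration, a minimal webui-user.bat sketch with --medvram dropped; it assumes the same flags the issue author posted in upd3 below, with the Low VRAM option then enabled per-unit in the ControlNet panel instead:

```bat
:: webui-user.bat (sketch) - same flags as in the issue body below, minus --medvram
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
set COMMANDLINE_ARGS=--xformers --no-half-vae --api --opt-channelslast
```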
I disabled --medvram; no errors in the terminal, but in terms of color the pictures are still weird. upd1: By "color" I meant the colors that t2iadapter_style gives.
Hi guys! upd: It looks like everything is working correctly... it's just that my expectations crashed hard against the results )))
Maybe related: lllyasviel/ControlNet#255. Could you replicate it in the official T2I-Adapter demo (https://huggingface.co/spaces/Adapter/T2I-Adapter)?
What's your VAE setting? auto, none, ema, or mse?
I am still having weird issues, just like AndreyRGW was saying, and I'm curious about something. Mikubill, hopefully you can figure this out. I followed this video (https://www.youtube.com/watch?v=wDM8iDK-yng); a friend did the same thing and it worked for him, but I did it on a fresh install of webui and it didn't work. I have xformers 0.0.16 installed, tried upgrading xformers, tried torch v17 and v16, and both still give ugly results that look like a broken model, just like this.

I have the same ControlNet settings as Sebastian Kamph used in the video I linked, and I tried different art styles and image sizes, kept tokens under 75, took everything out of my negative prompt, you name it. I still get the same horrible results. So if I copied everything he and my friend did, but it only worked for them, does this mean there are specific requirements for the models we train in order for ControlNet in general to work cleanly? I use Shivam's repo and train with diffusers 0.7.0 and accelerate==0.14.0 in my requirements.txt, plus the xformers 0.0.14dev version.

I have noticed that when I switch between different models, the quality looks slightly better or worse with the same settings he used. I had to change settings to get even a decent result, but given how much I changed, it makes no sense that my friend and Sebastian got those results. So what is your idea about all of this? Without ControlNet, my model is fine overall.
I'm using vae-ft-mse-840000-ema-pruned.
I noticed that I still had a poor understanding of how best to configure the parameters of the basic preprocessors, like the Midas resolution and threshold in the depth preprocessor... When I stopped changing them and left these parameters at their defaults, my results improved dramatically! Right now I only set the resolution via the main width/height.
After the latest ControlNet and webui updates, t2iadapter_style_sd14v1 started working fine; I didn't do anything else besides the updates.
The prompt can't be longer than 75 tokens in order for it to work.
It gives this not only with these two images, but also with others.
upd1: now it gives me an error:
upd2:
t2iadapter_style-fp16.safetensors - gives the error above
t2iadapter_style_sd14v1.pth - gives a faded image as above
upd3:
my args:
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
set COMMANDLINE_ARGS=--xformers --medvram --no-half-vae --api --opt-channelslast
upd4:
Changing the weight for clip_vision does nothing; either 0 or 2 gives the same result.
I'm about to lose my mind :)