[Bug]: SDXL 1.0 base model extremely long loading time #12086
Comments
Are you using a pagefile or an HDD?
I'm loading it from an SSD and no pagefile is used. My configuration:
I go from 6 GB to 20 GB on a 24 GB card. I load it with this VAE: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl_vae.safetensors. Try that one.
This is interesting: I previously had my SDXL models (base + refiner) stored inside a subdirectory named "SDXL" under /models/Stable-Diffusion. Now I moved them back to the parent directory and also put the VAE there, named sd_xl_base_1.0.vae.safetensors. The loading time is now perfectly normal at around 15 seconds. No idea what kind of funny bug this is, but this workaround worked for me.
I am having the exact same issue, though my models were never stored in a subdirectory. They have been stored in the proper models folder since I downloaded them. The VAE has not helped the issue in my case. My load time is just about two minutes, both for the base and for the refiner. They also almost completely freeze my computer during load, at various stretches of up to 30 seconds. Not completely frozen, but almost. All tips, tricks and questions welcome. After the model loads, it runs perfectly fine, and image generation time is on par with generation times I see on YouTube. But the loading is absolute torture. No other model acts this way for me. I also have a 3060 12 GB and am loading from SSD.
The …
Try setting checkpoint caching to 0; somehow that seems to fix the loading issue:
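For reference, the setting being discussed here is presumably A1111's "Checkpoints to cache in RAM" option (an assumption; the original screenshot is not preserved in this thread). If that guess is right, it can also be changed in the `config.json` file in the webui root, where it is stored under the `sd_checkpoint_cache` key:

```json
{
  "sd_checkpoint_cache": 0
}
```

As I understand it, a value of 0 means previously loaded checkpoints are not kept in system RAM when you switch models.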
Thanks for the suggestion; in my case, however, this has apparently always been set to 0, and the issue has presented itself as described.
I can confirm that changing this setting to 0 does not resolve the issue.
I'm also getting exactly the same issue: not able to load the SDXL 1.0 weights, it's taking too much time.
Well, boys, seeing as almost every person in this thread is a 3060 12 GB user... we may have found the element upon which this problem hinges. I'll be interested to see if the community-trained XL models present us with the same problem. Maybe someone can try one out and report back. Some are already available on Civitai.
FYI: #11958 may have resolved this, or at least partially. Only on the …
I tried this branch and didn't notice any big difference in loading speed.
There's no logic to it. I've been pulling my hair out. It was working absolutely fine, until it wasn't. I've literally emptied the pip cache, deleted the entire installation, and started from scratch TWICE. The only way I was able to get it to launch was by putting a 1.5 checkpoint in the models folder, but as soon as I then tried to load the SDXL base model, I got the "Creating model from config:" message for what felt like a lifetime, and then the PC restarted itself. 24 hours ago it was cranking out perfect images with dreamshaperXL10_alpha2Xl10.safetensors from Civitai, and now it's just a really expensive space heater. I don't get it.
FWIW, for me dreamshaper takes just as long and causes the same load issues as the base model. Catastrophic.
If we can get it working in Comfy, that would rule out it being a 3060 problem, right? Has anyone tried that?
Not me, but I bet it works in Comfy even on a 3060 12 GB. I just don't have the time to learn a whole new UI at the moment. Maybe someone can chime back in on this.
This issue seems exclusive to A1111. I had no issue at all using SDXL in Comfy.
I have a GeForce RTX 3090 24 GB, still the same issue.
OK, cool. So I'll install Comfy and report back if it works OK with SDXL. At that point I think we're all happy to say it needs to be looked at by the Automatic1111 developer. I've no idea what the process is for that; can anyone here advise?
Olivio has a video on his YouTube channel that takes you through installing it with an Automatic1111-style prompting interface. https://www.youtube.com/watch?v=330z7P_m7-c
This is also happening on an AWS g5.4xlarge (which has an Nvidia A10G, a 24 GB GPU), so...
I don't want to derail this issue, but keep in mind this is a hobby project, whereas the ComfyUI developer is a paid employee of StabilityAI.
It's not a criticism at all. The developer can't test his code on every possible platform with every possible graphics card. We're here to help make it better. I don't like ComfyUI. I can see why people who are more used to a node-based interface like Blender would be into it, but I'm a drag-and-drop, point-and-click kind of guy. But every "how to do x" tutorial on YouTube starts with installing Automatic1111. If someone wanting to get into it for the first time runs into this problem, it could be enough to put them off, or cause the less experienced to try something that could actually damage their equipment. So it's important we flag things like this as and when they pop up. But it's totally understandable if it takes a while to fix. I personally wouldn't know where to even start, so I have huge respect for coders and developers who do.
So the feedback in a few different Discord groups is to downgrade back to the previous version of A1111 which was working with SDXL. I have no idea how to do that.
There is no A1111 version prior to 1.5.0 that supports SDXL, at least not directly.
That's what I was led to believe too, but the weird thing about this particular issue is that SDXL was working perfectly fine when the A1111 update which supported it first dropped. Whereas now it doesn't start up at all, even on a completely fresh install, and on the odd occasion when you do wait 20 to 40 minutes for it to load, it reloads everything all over again when you Apply even the most minor of UI changes, like saving defaults or putting the Clip Skip slider at the top of the UI, for example. Someone else on Discord was saying it could be the Nvidia driver, but that wouldn't explain SDXL working perfectly in ComfyUI. The mystery deepens.
I found that (for me) the issue will only occur with the .safetensors version of the SDXL models. If I convert the models to .ckpt format, the models will load at normal speed. Here is the console output of me switching back and forth between the base and refiner models in A1111 1.5.1 (VAE selection set to "Auto"):
The times above seem to be measured from the moment the model file was fully loaded from disk. So depending on your HDD / SSD speed, the actual time will be longer (around 30-40 seconds for HDD, 5-10 seconds for SSD). Still a HUGE improvement over the 2-3 minutes it took for the .safetensors version. RAM usage peaked at about 22 GB during each switch. This is the tool I used to convert the models: Can anyone reproduce this?
It could be that there's more than one issue, because I can only dream of it "only" taking 2 to 3 minutes. When you say "RAM usage peaked at about 22 GB during each switch", do you mean VRAM or system RAM?
... for anyone who wants a quick fix, by the way: installing A1111 under Pinokio does allow you to switch between the SDXL base model and community-trained models about as quickly as under SD 1.5, so there's definitely something not right with the latest release of A1111 you would get as normal just by running git pull
I'm obviously talking about system RAM. Try the method I suggested and report back if that fixes the issue for you.
Converting .safetensors to .ckpt did NOT alter the issue in any way for me; identical results (in my case, load time between 2 and 3 minutes while the computer intermittently freezes). For me, the Pinokio install changed my SDXL model loading times from 2-3 minutes to about 40 seconds. An improvement, but not a solution.
Installing under Pinokio isn't the fix I thought it was either. It's now doing the same thing as reported above after briefly working fine. Bottom line is, for some users and for an unknown reason, A1111 is NOT working with SDXL.
I thought it was because of my card, which is an Nvidia 3070 with 8 GB, since it takes forever to load XL models, but in Comfy they load fast. I hope you can find a solution, because I do not like to use Comfy.
using …
Thanks for the suggestion. I'm going to try that today, along with Fooocus, featured on Sebastian Kamph's channel today. https://www.youtube.com/watch?v=8krykSwOz3E
Sadly, this changed nothing for me. Still frozen, buggy, long loading.
Seems like it got fixed with …
This issue is actually a combination of several issues
Happy generating everybody!
Thanks for the tips. Will try the suggested "fixes" and report back. Just as a matter of principle, however, I find the suggestion that the long loading times are the result of only 16 gigabytes of system RAM quite interesting, as I would expect more users to face this issue if that truly were one of the culprits.
It is a combination of slow disk access and low system RAM. How you run into the low system RAM issue is again a combination of many factors that entirely depend on your setup and configuration. You may even run into this issue with 32 GB of RAM; I sure do when enabling checkpoint caching and having other apps open while running SD. Using .ckpt models also helped me reduce RAM usage. The upcoming version of A1111 uses a more optimized method of loading models, thus reducing the chance of running into the low-RAM situation. However, if you're running SD on a potato PC with other apps open at the same time, you'll most likely see the same issue again. Just for the record: none of that is related to the VRAM of your GPU. That is only relevant once you hit the red "Generate" button.
Again, thank you for your input, and thank you for at least some suggestions of where to begin. As I have written in this thread, I have changed safetensors to checkpoints and still experienced long loading times. I am running on an SSD with 16 gigabytes of system memory. The PC in question is neither a potato nor a high-end machine but something in between. I'm also, like most in this thread, running on a 3060 with 12 gigabytes of VRAM, and my issue only extends to the loading, not to image generation, as reported.
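A quick way to check whether you are hitting the low-RAM condition described above is to watch available system memory while the model loads. A minimal sketch using the `psutil` package (an assumption on my part, not a tool anyone in the thread mentions):

```python
import psutil

def report_memory() -> float:
    """Print a summary of system RAM usage and return available GiB."""
    vm = psutil.virtual_memory()
    avail_gib = vm.available / 2**30
    print(f"available: {avail_gib:.1f} GiB of {vm.total / 2**30:.1f} GiB "
          f"({vm.percent}% used)")
    return avail_gib

# Call this before and during a model switch; if available RAM drops
# near zero, the intermittent freezes are likely swap thrashing.
report_memory()
```

If available RAM stays well above zero during the slow load, the bottleneck is more likely disk access than memory pressure.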
I've noticed that this problem is specific to A1111 too, and I thought it was my GPU. I encountered no issues when using SDXL in Comfy. When I first learned about Stable Diffusion, I wasn't aware of the many UI options available beyond Automatic1111. Perhaps it's time for us to explore other alternatives. EasyDiffusion, for instance, seems to handle SDXL more efficiently. The only reason I stick with A1111 is because of some plugins.
So interesting: I'm experiencing the same thing, but only after the first generation on a fresh load of the webui. Looks like some kind of VAE issue, which might be why it worked for some? Same thing here, other tools don't seem to cause this issue. Specifying the VAE had the same results. Copying the VAE just really made it mad, as I expected, since it's not a valid safetensors file. The --disable-nan-check argument makes the image black. --no-half-vae skips the first fast image production.
Have the same problem. A1111 1.7.0, 16 GB RAM, 16 GB VRAM. SDXL model loading time is 80-90 seconds from an SSD, versus 15-20 seconds for an SD 1.5 model. But if you have already used the SDXL model, then close and reopen the WebUI, the loading time is fast: 5-10 seconds. It's not a problem with the SSD, since Comfy loads it very fast.
Is there an existing issue for this?
What happened?
Loading the SDXL 1.0 base model takes an extremely long time. From my log:
So a total of almost 2 minutes, with most of the time spent on "apply weights to model".
What exactly happens at this step and is there a way to optimize it?
Steps to reproduce the problem
Select the SDXL model from checkpoints
What should have happened?
The model should load in around 10-20 seconds. The SD 1.5 model loads in about 8 seconds for me.
Version or Commit where the problem happens
1.5.0
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Nvidia GPUs (RTX 20 above)
Cross attention optimization
xformers
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
None
Console logs
Additional information
No response