[Resolved] NVIDIA driver performance issues #11063
Comments
Funny, I am randomly getting the issue where an output is stuck at 50% for an hour, and I am on 531.41 for an NVIDIA 3060 12GB model |
Strangely, mine seems to go at normal speed for the first gen on a checkpoint, or if I change the CLIP setting on a checkpoint, but subsequent gens go much slower. Annoyingly, Diablo won't run on 531. |
I can confirm this bug. I was getting results (as expected) before I installed the latest Titan RTX drivers. I will try installing a previous build. |
Yeah, that's exactly how it is for me. When I tried inpainting, the first gen runs through just fine, but any subsequent ones have massive hang-ups, necessitating a restart of the commandline window and rerunning webui-user.bat. |
I wasn't sure if there was a problem with the drivers, so I reinstalled WebUI, but the problem didn't go away. Everything generates fine like before, but once the High Res Fix starts and finishes, there is what looks like a minute-long pause. |
If you are stuck with a newer Nvidia driver version, downgrading to Torch 1.13.1 seems to work too.
|
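As a quick aside, here is a minimal check (not part of the comment above) to confirm which torch/CUDA build the webui's venv actually ended up with after a downgrade to 1.13.1+cu117; run it with the venv's own Python interpreter:

```python
# Minimal sanity check after downgrading torch in the webui venv.
import torch

print("torch:", torch.__version__)        # e.g. "1.13.1+cu117"
print("CUDA build:", torch.version.cuda)  # e.g. "11.7"
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```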
Anyone, is this problem still relevant? |
I haven't tried with the latest drivers, so I don't know if this issue is still ongoing. |
Extremely slow for me. Downgraded PyTorch and had a whole lot of new problems. What usually took 4h is taking 10+. |
Please tell me there is a fix in the pipeline? |
For pro graphics (at least for my A4000), 531 is not going to help with eliminating the issue. You need to downgrade to at least 529 to get rid of the shared memory usage. And 529 / 531 / 535 / 536 in the production branch all work way worse than 531 in the New Feature branch (which uses shared VRAM, but with a much smaller footprint for some reason). |
Can confirm this is still an issue. I have an RTX 3080 Ti and downgrading to 531.68 solved it for me. |
I'm using a 3070, torch: 2.0.1+cu118, and can confirm that this is still an issue with the 536.40 driver. Using highres.fix in particular makes everything break once you reach 98% progress on an image. |
It got a tiny bit better here: torch 1.13.1+cu117, 531.79, CUDA compilation tools release 12.0, V12.0.76. Still having issues with the duration of the generations. Usually, 200 frames took 4h, and now it is taking 10 (720x1280, 30 steps, 2~3 ControlNets). Don't know how to fix it properly; every other fix I tried severely damaged the quality of the images. I now know that I was using version 1.2.1 of the webUI and the torch was not 2.0. Every other setting I do not remember. Now I have everything written down somewhere hahahah
|
536.67 fixed this? or not? |
I did not try it. A lot of wasted time already hahaha
|
536.67 fixed it for me. |
536.67 also worked for me somewhat, meaning it still seems to drop to shared memory, but not as aggressively (the latest versions seem to start using shared memory at 10GB rather than fully maxing out all available 12GB, which matters). The 536.67 driver release notes still reference shared memory, and I recently started getting the "hanging at 50%" bug again today after updating some plugins, which prompted me to dig a bit deeper for solutions. I often use 2 or 3 ControlNet 1.1 models + Hi-res Fix upscaling on a 12GB card, which is what triggers it; I can watch my Performance tab and see the GPU begin to use shared CPU memory. The ideal fix would be finding some way to create a […]. However, for the good news: I was able to massively reduce this >12GB memory usage without resorting to […].
Initial environment baseline: […]
Biggest improvement: assuming your environment already looks similar to the above, by far the biggest VRAM drop I found was switching from the 1.4GB unpruned […]. Hope this helps anyone in a similar frustrating position 😁 |
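A small helper, purely my own sketch rather than anything from the comment above, for watching dedicated VRAM from inside the webui's Python environment instead of eyeballing the Task Manager Performance tab; once free VRAM approaches zero while generation keeps allocating, recent drivers start spilling into shared system memory and speed collapses.

```python
# torch.cuda.mem_get_info() wraps cudaMemGetInfo and reports free/total
# dedicated VRAM in bytes for the current device.
import torch

def report_vram(tag: str = "") -> None:
    free, total = torch.cuda.mem_get_info()
    used = total - free
    reserved = torch.cuda.memory_reserved()  # what PyTorch's allocator holds
    print(f"{tag} VRAM used: {used / 2**30:.2f} / {total / 2**30:.2f} GiB "
          f"(allocator reserved: {reserved / 2**30:.2f} GiB)")

report_vram("baseline")
```

Calling it before and after enabling Hi-res Fix or extra ControlNet units should show roughly how much headroom is left before the spillover point.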
From my understanding, ComfyUI might've done something with CUDA's malloc to fix this: comfyanonymous/ComfyUI@1679abd. Looks like a lot of cards also don't support this though: https://github.com/search?q=repo%3Acomfyanonymous%2FComfyUI+malloc&type=commits&s=author-date&o=desc |
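For anyone curious, a hedged sketch of what that commit appears to do: it opts PyTorch's CUDA caching allocator into the async malloc backend via an environment variable, which has to be set before torch initializes CUDA; older GPUs/drivers don't support it, so treat this purely as an experiment.

```python
# Set the allocator backend before torch touches CUDA; the webui launcher may
# behave differently, so this is only an illustrative standalone script.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "backend:cudaMallocAsync")

import torch  # imported after the env var so the setting can take effect

if torch.cuda.is_available():
    # torch >= 2.0 can report which allocator backend ended up active
    print("allocator backend:", torch.cuda.get_allocator_backend())
```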
536.67 also did not fix this, according to the release notes. https://us.download.nvidia.com/Windows/536.67/536.67-win11-win10-release-notes.pdf
|
I updated the drivers without thinking this might happen, and now I can't go back. I have tried removing the drivers with "Display Driver Uninstaller" and then installing v531.68 and v528.49, but it still doesn't go as fast as before. RTX 4080 (Laptop) 12GB. I seem to be missing something. Edit: in the end, my problem seems to be with the laptop itself. Yesterday I was testing 536.67 and 536.99 on my desktop with an RTX 3080 with no problems. |
That's actually the difference between Game Ready Drivers and Studio Drivers. The former only pass much simpler tests and get pushed at almost every commit, while the Studio Drivers are actually tested (though if only a few tests fail, a release can still go out even if it doesn't fix an issue, like from 531 to 537.58). Or at least this is what I've experienced since the first Studio Driver release. |
My 4090 can use the 531 driver to directly output 4K (3840x2160) through i2i. The video memory is almost full, but without OOM. If the resolution is any higher, it OOMs immediately. |
@catboxanon I'm going to reopen this, as it seems there are more reports saying that the issue is not yet fixed. |
545.84 tanks the performance on the 3090, down from multiple it/s to 6s per iteration. |
Not yet fixed. Although this isn't an issue with A1111, I respect that it's open. Waiting for a new driver, as the latest as of Oct 20 on the New Feature branch didn't fix it. A6000, Linux, 30 steps, DPM++ 2M Karras, 512x768: 300% SLOWER image generation. |
545.84 on a 4060 Ti, no issue during generation. |
I have an NVIDIA GTX 1650 Ti with 4GB VRAM (I know it's low spec) on driver 532.03. My current generation time is the lowest I have got, i.e. 2-3 minutes for a 1024x1024 image. I'm skeptical about upgrading the driver to the latest version. Would appreciate answers to my queries below.
PS: I'm a noob! |
I am on a 2060 with 6GB VRAM. Previously I updated from the 531.68 Studio version to 537.42 and got the problem.
I also noticed an increase in speed with the latest driver. Maybe it is as NVIDIA advertised, or something I cleared out because I reset my whole PC, but my medvram speed has increased from about 1-3 s/it to 1-2 it/s. Splendid. |
NVIDIA recommends uninstalling any new driver and then re-installing the old driver, as opposed to using Windows to roll back the driver. Several people mentioned using a program named DDU to completely uninstall the new driver: https://www.guru3d.com/download/display-driver-uninstaller-download/ Create a restore point first. |
NVCleanstall sounds like a good automated option to install a specific version. |
Anyone know if this got fixed in the latest version, 545.92? EDIT: Started testing myself and so far so good! Before, I could only generate a few SDXL images and then it would choke completely and generation time increased to 20 min or so. I needed to restart SD fully to get normal speed again. With these new drivers I've already generated a dozen images and the speed stays the same! |
I tried reverting to 528 myself from whatever the latest is now, and I'm still barely getting 1 it/s on my 3080 Ti (was getting 22 it/s before). I'm not sure it's the driver, to be honest. |
It's getting close to impossible to re-download 531 or below now; the oldest I could find is 532.
Oof, what were you trying to do to get such a drop? |
Guys
|
You can still download them: https://www.nvidia.com/download/driverResults.aspx/204245/en-us/
Now with the newest drivers (545.92) I don't have that issue anymore! Each image generation takes the same amount of time. |
Will give it a shot sometime. Thanks for the info. Edit: 545.92 seems better for generations, but for some heavy cases like those described above (SDXL + hires 2x), it's now the whole system that gets really sluggish once the current batch is complete, despite close to 0% CPU use, no disk activity, and close to 0% GPU use. It persists like that even after closing the browser and the SD console; actually, until a restart. Guess I'll refrain from using that, but I hope it won't affect something else, or I'm good for another rollback. |
Confirming that the new CUDA memory fallback option in v546.01 (2023-10-31) works as described; previously v531.x was required for the same behavior 🎉
|
So, with the new driver, we don't need the medvram switch for 8GB GPUs? |
That's good to hear ! |
As mentioned in #11063 (comment), NVIDIA has published a help article on how to disable the system memory fallback behavior. Please upgrade to the latest driver and follow the guide on their website: https://nvidia.custhelp.com/app/answers/detail/a_id/5490 |
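A rough, hypothetical way to verify the setting took effect (this check is my own sketch, not from NVIDIA's article): with the no-sysmem-fallback policy applied to the webui's Python process, an allocation larger than the card's dedicated VRAM should fail fast with an OOM instead of silently spilling into shared system memory.

```python
# Hypothetical check, not from the NVIDIA article: try to allocate ~110% of
# dedicated VRAM; a fast OutOfMemoryError suggests the fallback is disabled,
# while a "successful" allocation suggests the driver is still spilling to RAM.
import torch

free, total = torch.cuda.mem_get_info()
try:
    n = int(total * 1.1) // 2  # number of fp16 elements (~110% of VRAM)
    x = torch.empty(n, dtype=torch.float16, device="cuda")
    print("Allocation succeeded -> sysmem fallback is probably still enabled")
    del x
except torch.cuda.OutOfMemoryError:
    print("Got OOM as expected -> sysmem fallback appears to be disabled")
```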
Are you planning to add this option inside the UI? |
Just updated to 546.17, they still haven't fixed the black screen issue... |
Update (2023-10-31)
This issue should now be entirely resolved. NVIDIA has published a help article on how to disable the system memory fallback behavior. Please upgrade to the latest driver (546.01 or newer) and follow the guide on their website: https://nvidia.custhelp.com/app/answers/detail/a_id/5490
Update (2023-10-19)
The issue has been reopened, as more and more reports say that it is not yet fixed.
Update (2023-10-17)
There seem to be some reports saying that the issue is still not fixed.
Comments:
#11063 (comment)
#11063 (comment)
Update (2023-10-14)
This issue has reportedly been fixed by NVIDIA as of 537.58 (537.42 if using Studio release). Please update your drivers to this version or later.
The original issue description follows.
Discussed in #11062
Originally posted by w-e-w June 7, 2023
Some users have reported issues related to the latest NVIDIA drivers:
nVidia drivers change in memory management vladmandic#1285
#11050 (comment)
If you have been experiencing generation slowdowns or getting stuck, consider downgrading to driver version 531 or below:
NVIDIA Driver Downloads
This issue will be closed when NVIDIA resolves it. It currently has the tracking number [4172676].
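As an optional aside, here is a small sketch (assuming the nvidia-ml-py package, imported as pynvml, is installed) to print which driver version is actually loaded before and after a downgrade; nvidia-smi on the command line reports the same information.

```python
# Requires: pip install nvidia-ml-py  (import name: pynvml)
import pynvml

pynvml.nvmlInit()
try:
    print("NVIDIA driver version:", pynvml.nvmlSystemGetDriverVersion())
finally:
    pynvml.nvmlShutdown()
```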