Help identify actions that cause the VRAM overflow bug #457
Replies: 2 comments
-
Found a pattern. Really annoying bug with VRAM overflow when just adding a LoRA to the prompt:
-
If it helps: I run my box as hard as it will let me (at least I think I do), and since switching to Forge I've only OOMed when there's a problem with, effectively, ControlNet. I've mostly avoided the integrated ControlNet for now, unless a job needs it for ADetailer or something another extension can't do, which sadly is a moderate occurrence, so I do get the very occasional OOM, but I'm pretty sure only after touching ControlNet. And as you described, it doesn't let go after you turn it off; you have to reboot the whole thing to get your CUDA memory back :(. Hope my observation helps :)
Relevant launch commands:
set PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync
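For anyone wanting to try the same allocator setting: on Windows it usually goes in `webui-user.bat` before the launch call. A sketch of what that file might look like (only the `set` line comes from the post above; the rest is a typical webui-user.bat layout and may differ from yours):

```bat
@echo off
rem Use PyTorch's async CUDA allocator instead of the default caching allocator
set PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync

rem Launch the WebUI as usual
call webui.bat
```

On Linux the equivalent would be `export PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync` before running `webui.sh`.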
-
Problem Description:
After generating a picture, for some reason the VRAM is not freed. When I start generating the next picture, I see that the speed is very low, and in the resource monitor the VRAM is completely full and shared memory is being used.
I have not yet been able to figure out the exact sequence of actions after which the bug appears. That's why I'm writing here and not in 'Issues'. Maybe you have the same problem and can help me find a pattern for a bug report.
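One way to narrow down a pattern is to log what PyTorch itself thinks it holds between generations, rather than relying only on the resource monitor (which also counts other processes). A minimal sketch, assuming a CUDA build of torch; the helper names (`fmt_bytes`, `log_and_free_vram`) are mine, not part of Forge:

```python
import gc

def fmt_bytes(n: int) -> str:
    """Format a byte count as GiB with two decimals."""
    return f"{n / 2**30:.2f} GiB"

def log_and_free_vram():
    """Best-effort cleanup: drop dangling Python refs, then release
    cached CUDA blocks, and print what torch still holds."""
    import torch  # assumed: torch with CUDA support is installed
    gc.collect()
    torch.cuda.empty_cache()
    print("allocated:", fmt_bytes(torch.cuda.memory_allocated()),
          "| reserved:", fmt_bytes(torch.cuda.memory_reserved()))
```

If `memory_allocated` stays high after a generation finishes, something (an extension, a LoRA, a ControlNet model) is still holding tensor references and `empty_cache()` alone won't help; if only `memory_reserved` is high, it's just the allocator's cache.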
Here's what information I have so far: