System Info
Who can help?
No response
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
My own task or dataset (give details below)

Reproduction
I'm running train_xl.sh from this repo, and I changed the 8-bit Adam optimizer to Adafactor (transformers.optimization.Adafactor). My setup is two 40GB A100 GPUs, DeepSpeed ZeRO stage 2, batch size 1, on the VTON-HD dataset.

Adafactor should use less GPU memory than 8-bit Adam, since its optimizer states are smaller, yet it hits OOM on this line.
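For reference, here is a minimal sketch of the swap I made. The model and learning rate are stand-ins, not the actual values from train_xl.sh:

```python
import torch
from transformers.optimization import Adafactor

# stand-in for the actual model trained by train_xl.sh
model = torch.nn.Linear(8, 8)

# before: 8-bit Adam from bitsandbytes
# import bitsandbytes as bnb
# optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-4)

# after: Adafactor with a fixed learning rate, i.e. with its
# internal schedule (relative_step / scale_parameter) disabled
optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,  # placeholder; the real value comes from train_xl.sh
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```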
The OOM happens after 10 steps, and I don't know what happens at the 10th step; I call accelerator.backward() and optimizer.step() every step. At the 10th step, memory usage jumps from 29GB to 39GB with the 8-bit Adam optimizer, and OOMs with Adafactor.
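To pin down what happens at the 10th step, a minimal sketch of logging allocated/reserved memory every step (the model, optimizer, and data here are stand-ins, not the actual train_xl.sh loop):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# stand-ins for the real model, optimizer, and data
model = torch.nn.Linear(8, 8).to(accelerator.device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(20):
    x = torch.randn(1, 8, device=accelerator.device)
    loss = model(x).pow(2).mean()
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()

    # log per-step GPU memory to see which step allocates the extra ~10GB
    alloc = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    print(f"step {step}: allocated {alloc:.2f} GiB, reserved {reserved:.2f} GiB")
```

One hypothesis this could confirm or rule out: PyTorch optimizers allocate their state tensors lazily on the first real optimizer.step(), so if gradient accumulation were effectively set to 10, the allocation would land exactly on step 10.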
Expected behavior
Could anybody explain this phenomenon?