Hello,
This is my small contribution based on my personal experience, from a few versions ago up to the current one. I don't see many people do this, so I decided to take the time and share.
I believe my feedback can help the developers and future users, as it relates to the minimum hardware requirements.
This feedback is based on my old PC and GPU (almost 10 years old):
CPU: Intel(R) Core(TM) i7-4770K @ 3.50 GHz
GPU: Nvidia GTX 980 4GB GDDR5
SSD: Kingston SUV400S37 480GB
When I first started trying this fork:
`batch_size = 1` was the MAXIMUM I could use without getting OOM (Out Of Memory).
It took roughly 168-174 minutes to reach 100 epochs.
Using the latest version (as of 22-04-2023) with the EXACT same PC / hardware:
`batch_size = 3` is the MAXIMUM I can use without getting OOM (Out Of Memory).
It takes roughly 126-132 minutes to reach 100 epochs.
Obviously it is VERY slow compared to cloud training or any modern local PC with a latest-generation GPU, but that is exactly why I'm sharing this: my PC isn't even listed in the minimum recommended hardware, yet it still works and shows improvements.
Since I'm not a programmer, I don't know HOW the developers pulled off this magic, but I can say for SURE that it works on old hardware such as the GTX 980 (not even the Ti, just the classic model).
I hope this personal feedback helps a bit; I can now train much faster, without OOM, on the same 10-year-old PC.
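(A small addition for other testers: if you want to find the maximum `batch_size` on your own GPU the same way, a rough PyTorch sketch like the one below can probe it automatically. `model` and `make_batch` are hypothetical placeholders here, not this project's actual API, so adapt them to the real training code.)

```python
import torch

def probe_max_batch_size(model, make_batch, device="cuda", limit=16):
    # Try one training step per batch size, growing until we hit OOM.
    # `model` and `make_batch(n)` are hypothetical placeholders for
    # the actual network and data pipeline.
    best = 0
    for n in range(1, limit + 1):
        try:
            torch.cuda.empty_cache()
            x = make_batch(n).to(device)
            loss = model(x).mean()  # dummy loss, just to exercise memory
            loss.backward()
            model.zero_grad(set_to_none=True)
            best = n
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise  # a real error, not an OOM
            break
    return best
```

Catching `RuntimeError` and checking its message keeps the sketch working across PyTorch versions; the dedicated `torch.cuda.OutOfMemoryError` class only exists in newer releases.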
Please keep up the good work, much love 💙