- Copy the model file from the LRZ container `/dss/dssfs04/lwp-dss-0002/pn25ke/pn25ke-dss-0002/work/TUM_model` to `TUM_FM/dinov2/downstream/TUM_small`.
- Run `inference_example.py` on either a single GPU or multiple GPUs.
- If you are using multiple GPUs, set `os.environ["NCCL_SOCKET_IFNAME"]` to the appropriate network interface reported by `ifconfig` (e.g. `"ibp170s0f0"`). You may also need to check the host file.
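The multi-GPU step above can be sketched as follows. This is a minimal illustration, assuming `inference_example.py` initializes a NCCL process group via `torch.distributed` (not confirmed from the source); the interface name `ibp170s0f0` comes from the original instructions and should be replaced with whatever `ifconfig` shows on your node.

```python
import os

# Tell NCCL which network interface to use for inter-process communication.
# "ibp170s0f0" is the example interface from these instructions; substitute
# the interface name that `ifconfig` reports on your machine.
os.environ["NCCL_SOCKET_IFNAME"] = "ibp170s0f0"

# The variable must be set BEFORE NCCL is initialized, i.e. before a call
# such as: torch.distributed.init_process_group(backend="nccl")
print(os.environ["NCCL_SOCKET_IFNAME"])
```

Setting the variable inside the script works only if it runs before the process group is created; alternatively, export it in the shell or launcher environment so every rank inherits it.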
-
TumVink/TUM_FM