This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
I am trying to build MXNet with GPU support using the following options:
Currently, the build takes quite a long time (~3 hours). I've noticed that the majority of the time is spent building the CUDA modules.
For my use case, I'll be rebuilding the library often (as part of my CI pipeline) but would like to reduce the build time. Is there some way to run an incremental build that caches these CUDA modules (as they are relatively unchanging) and reuses them between builds to speed things up?
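One approach worth trying (not specific to MXNet, just a general CMake technique): wrap the compilers with ccache so unchanged CUDA and C++ translation units are served from cache on rebuilds. This is a sketch assuming the build is driven by CMake and ccache is installed; the exact effectiveness depends on how stable the nvcc command lines are between CI runs.

```shell
# Hedged sketch: point CMake's compiler launchers at ccache so repeated
# builds of unchanged sources (including .cu files) hit the cache.
# Requires CMake >= 3.10 for CMAKE_CUDA_COMPILER_LAUNCHER.
cmake .. \
  -DCMAKE_C_COMPILER_LAUNCHER=ccache \
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
  -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache

# In CI, persist the cache directory between runs, e.g.:
#   export CCACHE_DIR=/path/to/persistent/cache
# and check hit rates after a build with:
#   ccache --show-stats
```

Limiting the CUDA architectures you compile for (e.g. via `CUDA_ARCH` / `CMAKE_CUDA_ARCHITECTURES` to only the GPUs your CI targets) also tends to cut CUDA build time substantially, independently of caching.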