Mackenzie/change synthsr to script #37
Conversation
I scaled down the repo and removed some unused models, but it only went down to around 2.4 GB. When the Singularity image is downloaded, it adds approximately 1 GB. However, with this 2.4 GB implementation we can use the Docker image with SynthSR. What do you think @ataha24 @Dhananjhay?
Just tested this out and it looks great. However, TensorFlow flooded my terminal with warnings. I wonder how/why, because we had suppressed these (in apply.py). Does this issue also occur on the main branch?
I noticed this too. I think it was introduced after the linting was applied in apply.py, because it also occurs on the main branch currently.
Amazing work! I'll test the changes and update later. @ataha24 Yeah, I noticed the flooded terminal too; if you know the solution off the top of your head, you might as well push the patch. Otherwise I'll open an issue and a relevant PR to fix it!
Yeah, I think I have an idea of why this happens, but I wonder if linting will just override what I end up doing. It's probably because we need to assign the suppression variables before we import TensorFlow, and the linting likes imports to go first. I'll test this out to see if that's the case.
It was indeed the case. Just pushed a fix to this PR.
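For context, a minimal sketch of that kind of fix, assuming the standard TensorFlow suppression variable (the exact names used in apply.py may differ): the environment variable has to be set before the tensorflow import, and the late import is annotated so the linter accepts it.

```python
# Sketch only: quiet TensorFlow's C++ and Python logging.
# TF reads TF_CPP_MIN_LOG_LEVEL at import time, so it must be set first;
# the noqa comment tells the linter the late import is intentional.
import os

os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"  # 3 = show errors only

import tensorflow as tf  # noqa: E402

tf.get_logger().setLevel("ERROR")  # also silence the Python-side logger
```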
The Singularity image is approximately 1 GiB, while the downloaded repository is around 2 GiB. While storage isn't a major concern, the key achievement here is resolving the Docker-related issue, which is a significant milestone!
After pulling the image, I ran into a problem. Was the unzip dependency included in the Dockerfile used to build this image?
I generated a report using Snakemake. While the timing results are nearly identical, a closer look shows that Singularity is slightly faster than downloading and running the repository manually.
I'll look into the Docker image issue; I only ran it up to just past SynthSR, so I'll check if I left that out! I was using CBS basic when I did this testing. I've applied for heavy, but nothing has changed on my account yet. Did you run this test on heavy? I can try it on basic too and see the results.
This was on heavy. And your basic account wouldn't change; you'll just see another CBS heavy account when you log in to VMHorizon.
Yes, I contacted IT support and they had missed my original request; I have access to it now! I compared the times on CBS basic and I was wrong before: I was running it after I had already downloaded the repo to my cache dir, so that's why I thought it was faster, lol. My results matched yours while using CBS basic. I rebuilt the Docker container and was able to run the program using SynthSR end-to-end!
I've extensively tested this PR using both Docker and a local build, and I'm happy with its performance: smooth as butter! In the end, it makes more sense to treat SynthSR as a model, downloading it instead of packaging it in a Singularity container, just like we do with the afids models. This PR improves our Docker support and compatibility across different operating systems. What do you guys think @ataha24 @mackenziesnyder?
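To make that idea concrete, here is a rough, hypothetical sketch of downloading SynthSR at runtime and caching it, similar to how the afids models are handled; the URL, cache path, and function name are placeholders, not the PR's actual implementation.

```python
# Hypothetical sketch: fetch SynthSR on first use and cache it locally,
# instead of baking it into the Singularity/Docker image.
import zipfile
from pathlib import Path
from urllib.request import urlretrieve

SYNTHSR_URL = "https://example.com/synthsr_weights.zip"  # placeholder URL
CACHE_DIR = Path.home() / ".cache" / "synthsr"           # placeholder cache dir


def get_synthsr_weights() -> Path:
    """Return the local SynthSR weights directory, downloading it if missing."""
    weights_dir = CACHE_DIR / "models"
    if not weights_dir.exists():
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        archive = CACHE_DIR / "synthsr_weights.zip"
        urlretrieve(SYNTHSR_URL, archive)       # download once
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(weights_dir)          # unpack into the cache
        archive.unlink()                        # drop the archive afterwards
    return weights_dir
```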
Yes! I agree with @Dhananjhay's last comment. I am happy with this overall. Tested via local build and worked well. This is huge!
I agree!!! It's working well on my end!
Superb work @mackenziesnyder!
Still to do:
for issue: #36