Can worker/cluster resource usage be limited? #453
Comments
You can use cpu_affinity to set which worker will use which cores on your machine. For example, 2 workers with an affinity of 1 would use core 1 for worker 1 and core 2 for worker 2. That would leave the other cores available for your main app. If you're on Linux you can of course use the OS's own tools as well. I do have it on my todo list to at some point implement a resources module and have a way to limit memory and cpu usage.
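For anyone landing here later, a minimal sketch of what that could look like in settings.py; the project name and broker details are placeholders, not from this thread:

```python
# Hedged example: pin each of 2 workers to a single core via cpu_affinity,
# leaving the remaining cores free for the main Django app.
Q_CLUSTER = {
    "name": "myproject",      # placeholder project name
    "workers": 2,             # two worker processes
    "cpu_affinity": 1,        # each worker is pinned to 1 core
    "redis": {                # broker details are illustrative only
        "host": "127.0.0.1",
        "port": 6379,
        "db": 0,
    },
}
```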
Thanks for the tips!
I'm not sure if this is exactly the right issue to bring this up, but the resource usage of Django-Q is a bit of an issue for me too. The main problem is that my Django project has some large dependencies, so each Python process is ~140+ MB. Even with only a single worker, running qcluster results in 5 processes, which means Django-Q is taking up ~750 MB for me. With the following config:
Running
I wonder if it would be possible to run just the worker in the app's virtualenv and then qcluster in its own much lighter-weight virtualenv? (I don't know enough about the architecture yet to know if this makes sense.) However, I want to echo @pedrovgp's comments that this is a minor gripe and django-q is a fantastic project!
That is the correct number of processes, but they should all share the same memory space initially and only claim more memory when they diverge. I have a production server that runs a 250 MB image with 4 workers at around 320 MB of memory. After a while the memory usage will go up to around 800 MB and then the recycle will usually drop it back to around the 350 MB mark. This is the cost of carrying around the entire Django code base in the cluster. How did you arrive at 750 MB? Most Linux utilities will show full memory usage per child process, even though most of the memory is shared. You should look at the actual memory usage. Compare it with the cluster off and on. That said, I do think that I could tweak the memory usage a bit. I haven't found a good way of doing it yet.
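As a side note on the recycle mentioned above, a hedged sketch of tuning it in settings.py; the value shown is illustrative, not the poster's configuration:

```python
# Recycling a worker after a fixed number of tasks restarts the process,
# which releases memory the worker accumulated after diverging from the
# parent. A lower value trades some fork overhead for a flatter memory curve.
Q_CLUSTER = {
    "name": "myproject",   # placeholder
    "workers": 4,
    "recycle": 250,        # illustrative; the shipped default is 500
}
```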
Thanks for the response! To get that number I just added up the "RES" column from the process listing. I just tried the following:
So I think I was definitely mistaken here, since this looks like total usage is ~300 MB, which makes a lot more sense. Sorry for wasting your time, and thanks again for your help.
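For reference, summing the RES (RSS) column double-counts pages shared between the forked processes; PSS or USS gives a fairer total on Linux. A small sketch using psutil; matching processes by command line is an assumption, adjust for your setup:

```python
import psutil

rss = pss = uss = 0
for proc in psutil.process_iter(["cmdline"]):
    try:
        if "qcluster" not in " ".join(proc.info["cmdline"] or []):
            continue
        mem = proc.memory_full_info()  # reads /proc/<pid>/smaps, may need privileges
        rss += mem.rss  # what top/htop report per process
        pss += mem.pss  # shared pages split proportionally between processes
        uss += mem.uss  # memory unique to this process
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue

mb = 1024 * 1024
print(f"RSS total (double-counts shared pages): {rss / mb:.0f} MB")
print(f"PSS total (fair share of shared pages): {pss / mb:.0f} MB")
print(f"USS total (private memory only):        {uss / mb:.0f} MB")
```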
Not a waste of time at all. I enjoyed double-checking the code and reminding myself how I wrote this back then.
FYI I have a PR (#457) ready that will add a memory limit for workers. Once workers hit this limit they recycle to release any extra memory. I still need to test it a bit more and get some coverage on it, but it's on the way.
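Once that lands, the cluster config could gain a per-worker memory ceiling. A hedged sketch of how it might look; the option name max_rss and the KB unit are my assumption about what the PR exposes, so check the released docs for the final spelling:

```python
# Hedged sketch of a per-worker memory limit: when a worker's resident set
# size exceeds the limit, it is recycled just like hitting the recycle count.
Q_CLUSTER = {
    "name": "myproject",   # placeholder
    "workers": 4,
    "recycle": 500,        # task-count based recycling
    "max_rss": 100_000,    # assumed to be in KB, i.e. ~100 MB per worker
}
```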
Is there any way to limit worker resource usage, or cluster resource usage for that matter?
I am running Django Q on the same machine as my Django project, and it would be very useful to make sure it won't degrade app performance more than a certain amount.
By the way, incredible project. It alone justified migrating to Django 2 and Python 3.8.