Memory leak in V2.0.2 #10
Comments
Is there any update on this issue? I'm experiencing the same thing. I have a daemon that fetches jobs from a database and submits them to gearman. After a week of running, the daemon is consuming 1 GB of memory and I have to restart it.
I am also having the same issue, and since I am adding jobs to the queue fairly frequently, the leak reaches a significant size fairly quickly.
My application is a daemon that runs forever and the memory really adds up fast, so I had to work around it.
Hah, unfortunately I also came up with a very similar approach. I keep track of the last time I instantiated the gearman client, and I re-instantiate it if more than X minutes have passed. Since I get jobs at a pretty steady rate, these two approaches pretty much amount to the same thing :) (Thanks for the info, by the way.) Also, as further info, I am pretty sure that the memory leak is related to the `wait_until_jobs_accepted` method in client.py. This method ensures that the gearman server acknowledges its receipt of the tasks, but it somehow does not dispose of the requests properly afterwards. A sketch of the re-instantiation workaround is below.
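A minimal sketch of that re-instantiation workaround. The host list, the 10-minute interval, and the `job_source()` generator are illustrative assumptions, not details from this thread:

```python
import time
import gearman  # python-gearman 2.x

GEARMAN_HOSTS = ['localhost:4730']   # assumed server address
REFRESH_SECONDS = 10 * 60            # the "X minutes" from the comment above

gm_client = gearman.GearmanClient(GEARMAN_HOSTS)
created_at = time.time()

for job_data in job_source():        # hypothetical generator yielding job payloads
    # Re-instantiate the client periodically so the request objects
    # it has accumulated can be garbage collected.
    if time.time() - created_at > REFRESH_SECONDS:
        gm_client.shutdown()         # close the old client's connections
        gm_client = gearman.GearmanClient(GEARMAN_HOSTS)
        created_at = time.time()
    gm_client.submit_job("task1", job_data, background=True,
                         wait_until_complete=False, poll_timeout=0.020)
```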
I'm taking a look at this now - I'll update if I find anything.
daniyalzade: you're pretty much correct. There are two leaks of job request objects. When a job is first sent, the client adds its request to `request_to_rotating_connection_queue` and, in the non-blocking case, never removes it. In addition, the command handler instance keeps its own reference to the request, so it is retained there as well.

The fix for the first problem is relatively easy:

```python
time_remaining = stopwatch.get_time_remaining()
if wait_until_complete and bool(time_remaining != 0.0):
    processed_requests = self.wait_until_jobs_completed(processed_requests, poll_timeout=time_remaining)
else:
    # Remove jobs from the rotating connection queue to avoid a leak
    for current_request in processed_requests:
        self.request_to_rotating_connection_queue.pop(current_request, None)
```

However, the command handler object currently isn't accessible at all from the client. Perhaps calls to send_job_request should take an extra parameter indicating whether the request should be unregistered once the server has acknowledged it. In any case, it seems that the "don't wait until completed" behavior of python-gearman needs to be completely reevaluated, since currently no care is taken to ensure request objects are cleaned up in the often-used non-blocking scenario.
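For anyone stuck on an affected release, a caller-side workaround along the lines of the first fix might look like this. It is a sketch only: it reaches into the `request_to_rotating_connection_queue` attribute discussed above and does nothing about the second, command-handler leak:

```python
def submit_fire_and_forget(gm_client, task, data):
    """Submit a background job and drop the client-side bookkeeping
    for it so the request object can be garbage collected."""
    request = gm_client.submit_job(task, data, background=True,
                                   wait_until_complete=False,
                                   poll_timeout=0.020)
    # Mirror the fix described above: remove the request from the
    # rotating connection queue that GearmanClient maintains.
    gm_client.request_to_rotating_connection_queue.pop(request, None)
    return request
```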
Merged fix. A loop of backgrounded jobs is now steady on RSS, where before it was growing rapidly.
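One way to watch RSS while hammering the client, for anyone who wants to verify this themselves. The host and task name are assumptions; `ru_maxrss` reports peak resident set size (kilobytes on Linux), so it keeps climbing on a leaking client and plateaus on a fixed one:

```python
import resource
import gearman  # python-gearman

gm_client = gearman.GearmanClient(['localhost:4730'])  # assumed server address

for i in range(1000000):
    gm_client.submit_job("task1", "some data ", background=True,
                         wait_until_complete=False, poll_timeout=0.020)
    if i % 10000 == 0:
        peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print("submitted %d jobs, peak RSS: %d" % (i, peak))
```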
Infinitely submitting jobs results in a big memory leak. The client's memory consumption keeps growing and growing when the queue is full:

```python
while True:
    gm_client.submit_job("task1", "some data ", background=True, wait_until_complete=False, poll_timeout=0.020)
```