Hi, thanks for the great project; Celery was giving me nightmares.
I have two questions:
If I run a cluster with Supervisor and pipe stdout to the Supervisor process's log file, will all workers log to the same file, and will that cause any conflicts if workers try to write to the same log file at the same time? Should I disable logging altogether, or is there no problem?
If I run clusters on multiple servers, will I still be OK with scheduled tasks, as in they won't repeat? I know that's why Celery runs beat in its own process. How are you handling this?
Thanks again for the great work.
You should be ok there. All workers use the same logging bucket: it either sets up its own or uses the one you set up for Django.
There is a small chance that multiple clusters pick up the same schedule. Currently the only way to prevent this is to disable the scheduler on all but one cluster server.
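For a multi-server setup that could look something like the settings fragment below. This is a hedged sketch only: the `scheduler` key is an assumption about the cluster configuration dict, so verify it against the documentation for the version you have installed.

```python
# settings.py on the servers that should NOT run the scheduler.
# Assumption: the Q_CLUSTER dict supports a 'scheduler' toggle --
# check your installed version's configuration docs before relying on it.
Q_CLUSTER = {
    "name": "worker-only",  # hypothetical cluster name
    "workers": 4,
    "scheduler": False,     # only one server leaves this at True (the default)
}
```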
If I use a central cache server to lock, do you think I can get away with running the scheduler on all instances? E.g.:
from django.core.cache import cache

def scheduled_task():
    lock_id = "something unique"
    lock_expire = 60 * 5  # five minutes
    acquire_lock = lambda: cache.add(lock_id, "true", lock_expire)
    release_lock = lambda: cache.delete(lock_id)
    if acquire_lock():
        # do some things here ..
        release_lock()
        return True
I use Elastic Beanstalk, so all my servers will have the same settings. If you think this could work, that would be sweet, at least if the tasks are received within 10 ms of each other.
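The lock idea above can be sketched end to end. The `CentralCache` class here is a hypothetical stand-in for `django.core.cache.cache` so the example runs anywhere; in production it would be Django's cache backed by a central server (Redis or Memcached), since a per-process cache cannot coordinate across machines.

```python
import threading

class CentralCache:
    """Stand-in for Django's cache API (cache.add / cache.delete)."""

    def __init__(self):
        self._data = {}
        self._guard = threading.Lock()

    def add(self, key, value, timeout=None):
        # Like Django's cache.add(): store only if the key is absent and
        # return True on success. The timeout (which frees the lock if a
        # worker dies before releasing it) is ignored in this sketch.
        with self._guard:
            if key in self._data:
                return False
            self._data[key] = value
            return True

    def delete(self, key):
        with self._guard:
            self._data.pop(key, None)

cache = CentralCache()
LOCK_ID = "scheduled-task-lock"  # hypothetical unique key
LOCK_EXPIRE = 60 * 5             # five minutes

# Two clusters receive the same schedule ~10 ms apart: the first add()
# wins and runs the task body, the second sees the key and backs off.
first = cache.add(LOCK_ID, "true", LOCK_EXPIRE)   # winner acquires
second = cache.add(LOCK_ID, "true", LOCK_EXPIRE)  # loser skips the task
cache.delete(LOCK_ID)                             # winner releases
third = cache.add(LOCK_ID, "true", LOCK_EXPIRE)   # next run can acquire
print(first, second, third)
```

`cache.add` is the right primitive because it is an atomic check-and-set on backends like Memcached and Redis, and the expiry matters: if a worker crashes before calling `delete`, the lock frees itself after five minutes instead of blocking the schedule forever.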