-
Hey hey, Thank you.
-
Are you saying that an individual job is running on more than one pod at a time? That should not be possible, and we would need a lot more info about what you’re seeing to be able to look into it. Or are you saying that each of your pods is running different jobs? Because that’s how it’s supposed to work: each client works jobs from whichever queues are specified in its config, and you can run as many workers on the same queue as you want in order to distribute many jobs across them.

I don’t think there’s anything k8s-specific to pay attention to in the config. Each client is essentially stateless except for actively running jobs. As long as you give the Client time to shut down cleanly before terminating your process, and respect context cancellation in your workers, you should be fine. You can use the more aggressive shutdown mode if you really want to interrupt running jobs immediately.
-
Hey @bgentry, I work together with @mesquita.
@gabrielclimb We will have that out in a release shortly. However, this issue should only have resulted in duplicated work for the leader's responsibilities, such as pruning old jobs, inserting scheduled jobs, etc. If you have any reason to believe the same job was being worked twice simultaneously, please do share more details, as that absolutely should not happen 🙏 Thanks for all the info to help us fix this fast.