Not all sidekiq:sidekiq_unique keys are removed from Redis #31

Closed
mlangenberg opened this issue Feb 20, 2014 · 2 comments

@mlangenberg

I am seeing weird behavior in production where sidekiq:sidekiq_unique keys are not always removed after a job completes.

I am running an hourly import job that queues over 1000 jobs to fetch and process data from an API. To prevent multiple workers from processing the same job, I am using sidekiq-unique-jobs with a unique_job_expiration of 1.day.
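For context, the setup looks roughly like this. A minimal sketch, assuming sidekiq-unique-jobs is loaded; ImportWorker and the perform body are placeholder names, only the sidekiq_options values come from this issue:

class ImportWorker
  include Sidekiq::Worker
  # 1.day needs ActiveSupport; 86_400 (seconds) works without it.
  sidekiq_options unique: true, unique_job_expiration: 1.day

  def perform(resource_id)
    # fetch and process one record from the API
  end
end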

When I run this on my development machine (OS X), everything is fine. When running in production (Linux), the uniqueness keys are not always removed. This causes import jobs not to run for a whole day.

Normally (and this is what I see on my development machine), the number of sidekiq:sidekiq_unique keys is equal to the number of currently running jobs plus the queue size. When I run the same import in production, I see over 120 sidekiq:sidekiq_unique keys that are never unlocked.
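For anyone who wants to verify this themselves, a quick debugging sketch to count the lock keys, assuming the default redis-namespace setup where the sidekiq: prefix is added transparently (so the pattern inside the block is sidekiq_unique:*):

# Debugging helper: count the unique-lock keys.
# KEYS is O(n) over the keyspace; fine for debugging, not for hot paths.
Sidekiq.redis do |conn|
  puts conn.keys('sidekiq_unique:*').size
end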

My first thought was that this is caused by some worker jobs queueing other worker jobs. But I could also reproduce this in production by performing the same worker multiple times.

At this moment I don't have any clue what is causing this, but maybe someone has the same issue or can provide debugging instructions.

@mlangenberg (Author)

Interesting, does anyone know what is going on?

On Wed, Mar 5, 2014 at 11:38 PM, Zhaohan Weng notifications@github.com wrote:

I see the same thing, but on my dev machine as well. All I have is a simple test worker:

class CountWorker
  include Sidekiq::Worker
  sidekiq_options retry: 3, queue: 'counter'
  sidekiq_options unique: true, unique_job_expiration: 60

  sidekiq_retries_exhausted do |msg|
    # something wrong
  end

  def perform(id)
    sleep(10)
  end
end

and I have a loop that tries to schedule this worker with CountWorker.perform_at(10.seconds, 1) every second; only the first one is scheduled. In theory, the second one should be scheduled after 10 seconds, since the first one will be finished (sleep 10). But instead, the second one is only queued after 60 seconds.


@mlangenberg (Author)

An old Sidekiq process was still taking jobs from the queue and failing them. Since that process did not remove the unique keys from Redis, this resulted in the unexpected behavior.
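If anyone else ends up in this state, a one-off cleanup sketch for clearing the leftover lock keys after killing the orphaned process, under the same sidekiq_unique:* key-pattern and redis-namespace assumptions as above:

# One-off cleanup: delete stale unique-lock keys so jobs can be enqueued again.
# KEYS is acceptable for a one-off like this, not for regular use.
Sidekiq.redis do |conn|
  conn.keys('sidekiq_unique:*').each { |key| conn.del(key) }
end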
