
Protect against double receive #2

Closed · ryanwitt opened this issue Mar 23, 2017 · 2 comments

ryanwitt (Member) commented Mar 23, 2017
SQS guarantees at least once delivery, but sometimes jobs are not idempotent and should not be executed more than once.

It would be helpful to provide an optional way for users to prevent duplicate messages using a fact store like Redis. We already have the MessageId to use as a key.

We could acquire a Redis lock keyed on the MessageId, extend the lock TTL in step with each message-visibility-timeout extension we send to AWS, and finally, once the job finishes and the SQS delete call succeeds, set a long TTL (equal to the message retention period) so that late redeliveries are ignored.

Of course, a failure of any of these writes, or of the Redis instance itself, could defeat the safety of this feature.
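The lock lifecycle described above can be sketched roughly as follows. This is a minimal, self-contained illustration: the `InMemoryFactStore` is a stand-in for Redis (real code would use something like redis-py's `set(key, value, nx=True, ex=ttl)` and `expire`), and `process_once`, the key prefix, and the default timeouts are all hypothetical names chosen for the example, not part of any existing API.

```python
import time

class InMemoryFactStore:
    """Stand-in for Redis. A real implementation would use
    SET key value NX EX ttl to acquire and EXPIRE to extend."""
    def __init__(self):
        self._expiry = {}  # key -> expiry timestamp

    def acquire(self, key, ttl):
        now = time.monotonic()
        expiry = self._expiry.get(key)
        if expiry is not None and expiry > now:
            return False  # key is live: another receive already holds it
        self._expiry[key] = now + ttl
        return True

    def extend(self, key, ttl):
        # mirrors an SQS ChangeMessageVisibility call with the same value
        self._expiry[key] = time.monotonic() + ttl

def process_once(store, message_id, handler,
                 visibility_timeout=30,
                 retention_period=4 * 24 * 3600):
    """Run handler at most once per MessageId (hypothetical helper)."""
    key = "sqs:seen:" + message_id
    if not store.acquire(key, visibility_timeout):
        return False  # duplicate receive; skip the job
    handler()
    # After the SQS DeleteMessage succeeds, hold the key for the full
    # message retention period so any late redelivery is ignored.
    store.extend(key, retention_period)
    return True
```

Each visibility-timeout extension sent to AWS would also call `store.extend(key, new_timeout)`, keeping the lock alive exactly as long as SQS considers the message in flight.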

ryanwitt commented Nov 5, 2017

The only network dependency in this project should be SQS. To solve this, we should couple this issue with #18 and support the exactly-once processing feature of FIFO queues.
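With FIFO queues, SQS itself deduplicates sends within a five-minute window, either from an explicit MessageDeduplicationId or, with ContentBasedDeduplication enabled, from a SHA-256 hash of the message body. A minimal sketch of that hash, with a hypothetical boto3 send shown in comments (the queue URL and group id are placeholders, not values from this project):

```python
import hashlib

def dedup_id(body: str) -> str:
    """Content-based deduplication id: SQS FIFO queues with
    ContentBasedDeduplication compute a SHA-256 of the message body."""
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

# Hypothetical send via boto3 (requires a real queue ending in .fifo):
# sqs.send_message(
#     QueueUrl=queue_url,                     # must end in .fifo
#     MessageBody=body,
#     MessageGroupId="jobs",                  # ordering/parallelism scope
#     MessageDeduplicationId=dedup_id(body),  # 5-minute dedup window
# )
```

The advantage over the Redis approach is that deduplication happens inside SQS, so the queue remains the project's only network dependency.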

ryanwitt (Member Author) commented:

This was fixed with the release of --fifo in v1.3.0.
