SQS guarantees at-least-once delivery, but sometimes jobs are not idempotent and must not be executed more than once.
It would be helpful to provide an optional way for users to prevent duplicate messages using a fact store like Redis. We already have the MessageId to use as a key.
We could obtain a Redis lock keyed on the MessageId, mirror each visibility timeout extension call by extending the lock TTL with the same value we send to AWS, and finally, once the job finishes and the SQS delete call succeeds, set a long TTL (equal to the message retention period) so redeliveries are skipped.
Of course, failure in any of these writes or the Redis instance could defeat the safety of this feature.
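The proposed scheme could be sketched roughly as follows. This is a minimal illustration, not a real implementation: `FactStore` is an in-memory stand-in for Redis's `SET key value NX EX ttl` / `EXPIRE` semantics, and the function names and TTL values are hypothetical.

```python
import time


class FactStore:
    """In-memory stand-in for Redis SET NX EX / EXPIRE semantics."""

    def __init__(self):
        self._expiries = {}  # key -> expiry timestamp

    def set_nx_ex(self, key, ttl):
        # Succeeds only if the key is absent or expired (like SET NX EX).
        now = time.monotonic()
        expiry = self._expiries.get(key)
        if expiry is not None and expiry > now:
            return False  # key still held: this is a duplicate delivery
        self._expiries[key] = now + ttl
        return True

    def expire(self, key, ttl):
        # Reset the TTL; mirrors what we'd send alongside
        # ChangeMessageVisibility or after a successful delete.
        self._expiries[key] = time.monotonic() + ttl


VISIBILITY_TIMEOUT = 30          # seconds; hypothetical queue setting
MESSAGE_RETENTION = 14 * 86400   # seconds; SQS maximum retention


def process_once(store, message_id, job):
    key = f"sqs:{message_id}"
    # 1. Acquire the lock with a TTL matching the visibility timeout.
    if not store.set_nx_ex(key, VISIBILITY_TIMEOUT):
        return "duplicate"
    # (While the job runs, each visibility extension would be mirrored
    # with store.expire(key, same_value_sent_to_aws).)
    job()
    # 2. After the job finishes and the SQS delete call succeeds, pin the
    #    fact for the full retention period so redeliveries are skipped.
    store.expire(key, MESSAGE_RETENTION)
    return "processed"
```

A second delivery of the same MessageId would then hit the held key and return "duplicate" instead of re-running the job. As the next sentence notes, a lost Redis write or a Redis outage re-opens the duplicate-execution window, so this is best-effort deduplication, not a hard guarantee.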
The only network dependency in this project should be SQS. To solve, we should couple this issue with #18 and support the exactly once processing feature of FIFO queues.
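On a FIFO queue, deduplication happens at send time: SQS drops messages that share a MessageDeduplicationId within a five-minute window, so no external store is needed. A rough sketch of the producer-side parameters for boto3's `send_message` (the queue URL and group id are placeholders, and deriving the deduplication id from the body is just one possible choice):

```python
import hashlib


def fifo_send_params(queue_url, body, group_id):
    """Build the send_message kwargs for a FIFO queue.

    SQS deduplicates messages carrying the same MessageDeduplicationId
    within a 5-minute window; here we derive the id from the body so
    identical payloads are collapsed.
    """
    dedup_id = hashlib.sha256(body.encode()).hexdigest()
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,            # required on FIFO queues
        "MessageDeduplicationId": dedup_id,
    }
```

The caller would pass this dict to `sqs_client.send_message(**params)`; alternatively, enabling content-based deduplication on the queue lets SQS compute the hash itself and the MessageDeduplicationId can be omitted.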