AWS.SimpleQueueService.BatchEntryIdsNotDistinct #63
Comments
Hi @benkeil! I have moved this to the SQS repo. I think we can support a new ack type option. Would you like to send a PR for that?
I will do.
Not sure how to handle the configuration part. If we add the new type …
@benkeil why do you say it makes no sense? Maybe it has no practical use, but in theory there is no problem in allowing successes to not ack duplicates while allowing failures to ack them.
To give an example, consider this:
In this case, you want to have exactly …
I don't get your point. Either you want to ack or you don't. And if you want to ack, you can't ack the same message id twice in the same batch, so you must skip it to avoid errors. It also makes no sense from a business-logic perspective, because it's the same message.
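For context, this is the constraint SQS enforces server-side: every `Id` in a `DeleteMessageBatch` request must be distinct, or the whole call is rejected. A minimal sketch of that validation (the function name is illustrative; only the error code string comes from AWS):

```python
def validate_batch_ids(entries):
    """Mimic SQS's server-side check on a DeleteMessageBatch request.

    `entries` is a list of dicts shaped like DeleteMessageBatch entries,
    e.g. {"Id": "...", "ReceiptHandle": "..."}. If two entries share an
    Id, SQS rejects the whole batch with this error code.
    """
    ids = [entry["Id"] for entry in entries]
    if len(ids) != len(set(ids)):
        raise ValueError("AWS.SimpleQueueService.BatchEntryIdsNotDistinct")
```

So a batch that contains the same redelivered message twice fails as a whole, which is why the duplicates end up being retried and eventually moved to the DLQ.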
I see your point. It is also important to note that we can't see that it is a duplicate until we ack, so it is more like …
You can see at the very beginning, from the message id, whether it is a duplicate. Theoretically you could skip the processing entirely.
Oh, I see. If they are duplicates in the same batch, could the producer have filtered them out too? In any case, I think having a separate option called “filter_duplicate_ids_in_batch” will suffice. Should it be false or true by default?
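A minimal sketch of what such filtering could look like at ack time (the function name is illustrative, and the entry shape loosely follows the SQS `DeleteMessageBatch` entry format; duplicates are returned separately so the caller can log a warning for them):

```python
def filter_duplicate_ids(entries):
    """Drop entries whose message Id was already seen in this batch.

    `entries` is a list of dicts shaped like SQS DeleteMessageBatch
    entries, e.g. {"Id": "...", "ReceiptHandle": "..."}.
    Returns (unique_entries, duplicates): the first occurrence of each
    Id is kept for deletion, later occurrences are set aside so the
    caller can log them instead of failing the whole batch.
    """
    seen = set()
    unique, duplicates = [], []
    for entry in entries:
        if entry["Id"] in seen:
            duplicates.append(entry)
        else:
            seen.add(entry["Id"])
            unique.append(entry)
    return unique, duplicates
```

Keeping the first receipt handle per Id means the batch delete succeeds, while the set-aside duplicates are available for the warning log discussed above.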
I think this could also happen if you misconfigured the queue with too low a visibility_timeout.
Let's go with this approach (logging); we don't even need an option. Thanks for all the discussion so far!
For what it's worth, we too are running into this issue.
Hello guys, I was trying to understand this problem (this issue and #49). Just to check that I've understood the discussion: the idea here is to filter out the duplicate messages during acknowledgment and, by default, log a warning for the filtered messages. Is that it?
Correct! Perhaps I would make it opt-in though, so people can decide whether changing their settings would be better.
One thing I'm thinking about: as mentioned in issue #49, there is this advice in the SQS docs about deleting duplicates:
Should we be concerned about that? If yes, I didn't find a way to get the receive timestamp of the messages. Any tips?
@HeavyBR perhaps it needs to be explicitly requested in the …
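For reference, SQS does expose a receive timestamp as a message system attribute, `ApproximateFirstReceiveTimestamp` (milliseconds since epoch), but it is only returned when explicitly requested via the attribute names on the `ReceiveMessage` call. A hedged sketch of extracting it from a received-message dict (the function name is illustrative; the attribute name and millisecond encoding are from the SQS docs):

```python
from datetime import datetime, timezone


def first_receive_time(message):
    """Extract ApproximateFirstReceiveTimestamp from a ReceiveMessage
    result dict.

    The attribute is an epoch timestamp in milliseconds and is only
    present if it was requested on the ReceiveMessage call.
    """
    ms = int(message["Attributes"]["ApproximateFirstReceiveTimestamp"])
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
```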
I was experiencing the AWS.SimpleQueueService.BatchEntryIdsNotDistinct error multiple times and was wondering why it was happening so frequently. After investigating, I realized the problem could be that the visibility timeout was insufficient for my use case. In my scenario, my consumer takes around an hour to process messages, and I was hitting this error often. Checking the AWS documentation, I found that the recommended solution is to increase the visibility timeout to a value greater than the time it takes the consumer to process a message. So I increased the visibility timeout to 90 minutes, and the error disappeared.
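The arithmetic behind that fix can be sketched as a small helper: pick a timeout comfortably above the worst-case processing time, capped at the SQS maximum of 12 hours (43200 seconds). The function name and the 1.5× safety factor are illustrative choices, not anything from SQS itself:

```python
def safe_visibility_timeout(processing_seconds, safety_factor=1.5):
    """Pick a visibility timeout comfortably above processing time.

    With the default 1.5x factor, a one-hour (3600 s) consumer gets the
    90-minute (5400 s) timeout mentioned above. SQS caps the
    VisibilityTimeout queue attribute at 12 hours (43200 s).
    """
    timeout = int(processing_seconds * safety_factor)
    return min(timeout, 43200)
```

If the message is still being processed when the visibility timeout expires, SQS redelivers it, which is exactly how the same message can end up twice in one ack batch.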
When using non-FIFO queues, it happens quite often that I receive the same message more than once. That's fine during processing, but when Broadway wants to acknowledge the messages (which are in the same batch), it throws an error and moves the messages to the DLQ.
Could we just filter out duplicates during acknowledging?