Redis pubsub improvement #339
Conversation
RemindD commented on Jul 9, 2024
- The subscriber launches two async threads (a minimal reclaim-loop sketch follows this list):
  - one re-claims ownership of messages previously claimed by its own client (scans once every 5 seconds; re-claims a message that has gone unclaimed for 30s)
  - one claims messages previously claimed by other replica clients whose client has been dead for a while (scans once every minute; re-claims a message that has gone unclaimed for 60s)
- Added an in-memory set to track which messages are currently being processed by the handler (see the second sketch after this list):
  - the message ID is added to the set before the handler starts processing the message, and removed when the handler either succeeds or fails
  - every time a message is claimed by a subscriber async thread, the provider checks whether the message ID is in the set to decide whether the handler should be invoked for that message
- Bump the Redis client from v7 to v9
- Support the Symphony multiple-replica scenario
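
For illustration, here is a minimal sketch of what one such periodic reclaim loop could look like with go-redis v9 and Redis Stream consumer groups. The stream, group, and consumer names and the `reclaimLoop` helper are assumptions for the example, not the provider's actual code. Note that `XAUTOCLAIM` takes over any pending entry idle longer than `MinIdle` regardless of its previous owner, so the distinction this PR makes between re-claiming your own messages and claiming a dead replica's is not reproduced here.

```go
// Illustrative sketch only: names and structure are assumptions,
// not the Symphony provider's actual code.
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

// reclaimLoop periodically runs XAUTOCLAIM to take over pending messages
// that have been idle (claimed but not acknowledged) longer than minIdle.
func reclaimLoop(ctx context.Context, rdb *redis.Client, stream, group, consumer string,
	interval, minIdle time.Duration, handle func(redis.XMessage)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			start := "0-0"
			for {
				msgs, next, err := rdb.XAutoClaim(ctx, &redis.XAutoClaimArgs{
					Stream:   stream,
					Group:    group,
					Consumer: consumer,
					MinIdle:  minIdle,
					Start:    start,
					Count:    100,
				}).Result()
				if err != nil {
					log.Printf("xautoclaim: %v", err)
					break
				}
				for _, m := range msgs {
					handle(m) // the provider decides whether to invoke the handler
				}
				// A returned cursor of "0-0" means the whole pending list was scanned.
				if next == "0-0" {
					break
				}
				start = next
			}
		}
	}
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	handle := func(m redis.XMessage) { log.Printf("re-claimed %s", m.ID) }

	// Loop 1: re-claim this client's own stuck messages (every 5s, idle > 30s).
	go reclaimLoop(ctx, rdb, "events", "subscribers", "replica-1", 5*time.Second, 30*time.Second, handle)
	// Loop 2: take over messages left behind by dead replicas (every 1m, idle > 60s).
	go reclaimLoop(ctx, rdb, "events", "subscribers", "replica-1", time.Minute, 60*time.Second, handle)
	select {}
}
```

Running the same loop with two different (interval, minIdle) pairs matches the 5s/30s and 1m/60s schedules described above.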
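
And a minimal sketch of the in-flight tracking set described above, assuming a mutex-guarded map keyed by stream message ID; the type and method names are illustrative, not the ones used in the provider.

```go
// Illustrative sketch of the in-flight message set (names are assumptions).
package main

import "sync"

type inFlight struct {
	mu  sync.Mutex
	ids map[string]struct{}
}

func newInFlight() *inFlight {
	return &inFlight{ids: make(map[string]struct{})}
}

// TryAdd records the message ID and reports whether it was newly added.
// A false return means a handler is already processing this message,
// so a reclaim loop should not invoke the handler again.
func (f *inFlight) TryAdd(id string) bool {
	f.mu.Lock()
	defer f.mu.Unlock()
	if _, ok := f.ids[id]; ok {
		return false
	}
	f.ids[id] = struct{}{}
	return true
}

// Done removes the ID once the handler has succeeded or failed.
func (f *inFlight) Done(id string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	delete(f.ids, id)
}
```

A reclaim loop would call `TryAdd` before dispatching a re-claimed message, skip it on a `false` return, and call `Done` when the handler finishes, whether it succeeds or fails.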
Great PR.
Do we have the case of multiple subscribers that subscribe to the same topic but take different actions? This is not the same as one subscriber running multiple instances. In the former case, every subscriber should consume every message. In the latter case, once one subscriber execution routine claims a message, the other routines don't need to take that message.
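
Not an authoritative answer, but for reference: with Redis Stream consumer groups the two cases map onto group membership. Distinct subscribers that each need every message use distinct groups, while replicas of the same subscriber share one group and compete for messages. A hedged sketch with go-redis v9; the stream, group, and consumer names are made up for the example:

```go
// Illustrative sketch only: stream/group/consumer names are assumptions.
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Case 1: different subscribers taking different actions on the same topic.
	// Each gets its own consumer group, so every group receives every message.
	// (Errors ignored for brevity; BUSYGROUP is returned if a group already exists.)
	rdb.XGroupCreateMkStream(ctx, "jobs", "email-subscriber", "$")
	rdb.XGroupCreateMkStream(ctx, "jobs", "audit-subscriber", "$")

	// Case 2: multiple replicas of the same subscriber share one group and read
	// as different consumers, so each message is claimed by exactly one replica
	// (the scenario this PR targets).
	streams, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
		Group:    "email-subscriber",
		Consumer: "replica-1",
		Streams:  []string{"jobs", ">"},
		Count:    10,
		Block:    time.Second, // wait up to 1s for new messages
	}).Result()
	if err != nil && err != redis.Nil { // redis.Nil means no messages within Block
		log.Fatal(err)
	}
	log.Printf("got %d stream batches", len(streams))
}
```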
RemindD force-pushed the branch from 339d51e to 0ee4781
I have some ideas about this and will follow up with another PR for the single-publisher/multiple-subscriber mode.