Tracking Token Updated if Creating a Publisher Fails #454
Comments
Hi Laurynas, thanks for reporting. I might have a look today. I do have one question: which token store did you use?
Hello Gerard, I appreciate you looking into this. I used the MongoDB token store.
Here's how the tracking store is configured:
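The configuration itself did not survive the page scrape. As a stand-in, here is a minimal sketch of what a MongoDB token store bean typically looks like with the Axon Mongo extension; the bean names and database name are assumptions, not the reporter's actual settings.

```java
import com.mongodb.client.MongoClient;
import org.axonframework.eventhandling.tokenstore.TokenStore;
import org.axonframework.extensions.mongo.DefaultMongoTemplate;
import org.axonframework.extensions.mongo.MongoTemplate;
import org.axonframework.extensions.mongo.eventsourcing.tokenstore.MongoTokenStore;
import org.axonframework.serialization.Serializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical sketch, NOT the reporter's elided configuration.
// Assumes a Spring-managed MongoClient and Axon's default Serializer bean.
@Configuration
public class TokenStoreConfig {

    @Bean
    public TokenStore tokenStore(MongoClient mongoClient, Serializer serializer) {
        MongoTemplate mongoTemplate = DefaultMongoTemplate.builder()
                .mongoDatabase(mongoClient, "axon")   // database name is an assumption
                .build();
        // Tracking tokens are persisted in MongoDB instead of a relational store.
        return MongoTokenStore.builder()
                .mongoTemplate(mongoTemplate)
                .serializer(serializer)
                .build();
    }
}
```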
I was able to reproduce the issue; I'll dive into it further.
@gklijs unfortunately I am still experiencing an issue where publishing fails but the token is updated. This time, I had an authentication issue:
As you can see, on each retry it kept fetching an incremented token. Once I fixed the authentication issue and restarted the app, it didn't republish the failed events.
Are you sure you are using 4.9.0 of the extension? I want to be certain before diving into this, as I don't understand how this could happen with the new code. Although, judging from the line numbers, you do. Maybe the exception triggers after the call. Also, please create a new issue when there are new problems. Although related, this issue did solve some of the problems, so instead of reopening this one, I'd rather have a new issue.
I am on 4.9.0. I've actually discovered a few additional issues:
We identified and fixed the issue. However, fixing it didn't prevent the token sequence from advancing, so the failing event wasn't reprocessed; we had to nuke the topic and reprocess events from the beginning. Whether or not it makes sense to disable Kafka transactions I can't really say. Conceptually, transactions do make sense: if anything else goes wrong, you should be able to roll back and get exactly-once publishing. In our case, we wanted to guarantee exactly-once publishing because our consumers are not yet idempotent. Sorry for not opening a separate issue; leaving a comment was just quicker. For the time being we've worked around the issues and are monitoring for errors, in case we need to replay events manually.
The problem is that it will never truly become exactly-once, because there are two different systems involved. If it's working now, it's fine, I guess.
Hi @n3ziniuka5, just chipping in on this remark you made:
This is indeed why the handling didn't do anything... an unforeseen side effect of making that adjustment in Axon Framework. Although a small enhancement, it may be practical to provide a property to switch that behavior. @gklijs, what are your thoughts on this pointer? It would of course merit a new issue, but it seemed fair to me to hold the discussion here, as @n3ziniuka5 already dropped it here :-)
Basic information
I am setting up Axon to publish events to Kafka. I've tried various confirmation and event processor modes, and in all combinations I've managed to reproduce an issue where the tracking token is updated while Kafka is down, so some events are lost when Kafka comes back up.
I am not fully up to speed with Axon internals, but one thing that jumps out in KafkaPublisher.send is that only the Kafka commit happens under uow.onPrepareCommit. Other important steps, such as creating the producer and sending the message, both of which can throw exceptions, happen outside of it.
Steps to reproduce
I used the following spring boot configuration to publish events to Kafka:
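The actual properties were lost in the scrape. As a rough stand-in, a Spring Boot configuration for the Kafka extension typically looks like the following; the property names are recalled from the axon-kafka starter and may differ by version, and all values here are placeholders, not the reporter's settings.

```properties
# Hypothetical stand-in, NOT the reporter's elided configuration.
axon.kafka.bootstrap-servers=localhost:9092
axon.kafka.default-topic=axon-events
# The report mentions trying various confirmation and processor modes:
axon.kafka.publisher.confirmation-mode=TRANSACTIONAL
axon.kafka.producer.transaction-id-prefix=my-app-tx
axon.kafka.producer.event-processor-mode=TRACKING
```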
Steps to reproduce:
Expected behaviour
Kafka event processor's tracking token shouldn't be updated, and all events should eventually be published to Kafka when it recovers.
Actual behaviour
Kafka event processor's tracking token is updated even while Kafka is down.
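The ordering described above can be illustrated with a toy model. This is not Axon's actual code: ToyUnitOfWork and simulate are hypothetical stand-ins that only show why an exception thrown outside the prepare-commit phase does not stop the commit (and, alongside it, the token update) from going ahead.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (NOT Axon's UnitOfWork): runs registered handlers at commit time.
class ToyUnitOfWork {
    private final List<Runnable> prepareCommitHandlers = new ArrayList<>();
    void onPrepareCommit(Runnable handler) { prepareCommitHandlers.add(handler); }
    void commit() { prepareCommitHandlers.forEach(Runnable::run); }
}

public class PublishOrderingSketch {
    // Mirrors the ordering the report describes: producer creation and the
    // send run eagerly (and can fail), while only the Kafka commit is
    // deferred to the unit of work's prepare-commit phase.
    static List<String> simulate(boolean sendFails) {
        List<String> log = new ArrayList<>();
        ToyUnitOfWork uow = new ToyUnitOfWork();
        try {
            log.add("create producer");            // outside the unit of work
            if (sendFails) throw new RuntimeException("broker unavailable");
            log.add("send event");                 // also outside the unit of work
        } catch (RuntimeException e) {
            log.add("send failed: " + e.getMessage());
        }
        // Only this step is tied to the unit of work's commit phase.
        uow.onPrepareCommit(() -> log.add("kafka commit"));
        uow.commit();                              // token update rides along with this
        return log;
    }

    public static void main(String[] args) {
        System.out.println(simulate(true));
    }
}
```

Even in the failure case, the commit handler still runs, which matches the observed behaviour of the token advancing while Kafka is down.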