Cut off buffering from ConsumerStage #35
One interesting thing is that the Scala sources also use buffering, but for a completely different reason. The structure of their package is a bit more complex, with more abstractions, because they have many stages implemented (and I guess we will need to re-implement this in #36). The interesting part is contained in this file:

What they do is aggregate requests in a consumer actor. In our implementation, instead of a consumer actor aggregating requests, we currently have a timer that requests messages from Kafka and populates a buffer, even when there is no demand for messages.

So what I want to say is that we indeed do not need buffering right now, and we do not need the timer either: we will consume messages from Kafka directly on demand. But we may consider keeping a queue of requests populated in the onPull method and continuing to consume messages from Kafka (for example, if the partition is empty at the moment but we have a request from the sink, we keep trying to consume the next message).

Also, should we implement this with a consumer actor, as Scala does? @Aaronontheweb, what do you think? I know we need to stay as close to the Scala implementation as possible, but this is more complicated than just removing timers and buffers, so I need your approval. Either way, I am going to remove all buffering for now.
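To make the "requests queue populated in onPull" idea concrete, here is a minimal, language-neutral Python sketch (not the actual ConsumerStage code; the class and method names are hypothetical). Demand recorded in `on_pull` stays queued while the partition is empty and is satisfied as soon as a message becomes available:

```python
from collections import deque

class FakePartition:
    """Stands in for a Kafka partition; poll() returns None when empty."""
    def __init__(self):
        self._messages = deque()

    def publish(self, msg):
        self._messages.append(msg)

    def poll(self):
        return self._messages.popleft() if self._messages else None

class OnDemandSource:
    """Pull-based source: fetches from the partition only when there is
    downstream demand, keeping unsatisfied demand in a request queue."""
    def __init__(self, partition):
        self._partition = partition
        self._pending_demand = 0
        self.emitted = []

    def on_pull(self):
        # Record the downstream request, then try to satisfy it immediately.
        self._pending_demand += 1
        self._drain()

    def on_message_available(self):
        # Called when a later poll attempt may succeed (hypothetical hook).
        self._drain()

    def _drain(self):
        while self._pending_demand:
            msg = self._partition.poll()
            if msg is None:
                return  # partition empty: demand stays queued, no buffering
            self._pending_demand -= 1
            self.emitted.append(msg)

partition = FakePartition()
source = OnDemandSource(partition)

source.on_pull()                 # demand arrives while the partition is empty
assert source.emitted == []      # nothing emitted, demand is queued

partition.publish("m1")
source.on_message_available()    # retry now that a message exists
assert source.emitted == ["m1"]
```

The key property: the stage never fetches more than was requested, so no internal buffer of unrequested messages can accumulate.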
Currently, the consumer constantly polls Kafka and stores all messages in a buffer:
Akka.Streams.Kafka/src/Akka.Streams.Kafka/Stages/ConsumerStage.cs
Lines 205 to 215 in 659e33f
Now we just need to consume messages from Kafka topics on demand. This will also improve back-pressure support in our provider.
I am going to submit a PR for this this week.
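For comparison, here is a small Python sketch (purely illustrative, not the real stage) contrasting the two strategies. The timer-driven variant pulls everything into a buffer regardless of demand, while the demand-driven variant fetches only what downstream asked for:

```python
from collections import deque

def timer_driven(partition_msgs, demand):
    """Timer fires regardless of demand: every message is pulled into an
    internal buffer even if downstream asked for fewer."""
    buffer = deque(partition_msgs)          # all messages fetched eagerly
    delivered = [buffer.popleft() for _ in range(min(demand, len(buffer)))]
    return delivered, len(buffer)           # leftover sits in memory

def demand_driven(partition_msgs, demand):
    """Messages are fetched from Kafka only when downstream pulls."""
    it = iter(partition_msgs)
    delivered = [msg for _, msg in zip(range(demand), it)]
    return delivered, 0                     # nothing buffered in the stage

msgs = [f"m{i}" for i in range(100)]
d1, buffered1 = timer_driven(msgs, demand=3)
d2, buffered2 = demand_driven(msgs, demand=3)
assert d1 == d2 == ["m0", "m1", "m2"]
assert buffered1 == 97 and buffered2 == 0   # 97 unrequested messages buffered
```

Both deliver the same three messages, but only the timer-driven version holds 97 unrequested messages in memory, which is exactly the back-pressure problem this issue removes.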