Knative service can not scale up with kafka cluster channel provisioner #693
I believe the intent (not yet implemented) was to perform partition
assignment and management in the kafka delivery adapter, at which point one
event per partition may be in flight at a time. Removing multiple items
from a partition removes the ordering guarantees of kafka and also
complicates the at-least-once delivery guarantees (as the delivery adapter
could crash or fail on an earlier message after delivering a later one).
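The intent described above (one event in flight per partition, with order preserved inside each partition) can be sketched with one worker goroutine per partition. This is a minimal illustration, not the actual delivery adapter; the `event` type and `dispatchByPartition` name are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// event models a message on a Kafka partition (hypothetical type;
// the real delivery adapter types are not shown in this thread).
type event struct {
	partition int
	offset    int
	payload   string
}

// dispatchByPartition fans events out to one worker goroutine per
// partition, so at most one event per partition is in flight at a
// time. Order within a partition is preserved; distinct partitions
// proceed concurrently.
func dispatchByPartition(events []event, numPartitions int, deliver func(event)) {
	chans := make([]chan event, numPartitions)
	var wg sync.WaitGroup
	for p := 0; p < numPartitions; p++ {
		chans[p] = make(chan event)
		wg.Add(1)
		go func(ch <-chan event) {
			defer wg.Done()
			for e := range ch {
				deliver(e) // blocks: the next event on this partition waits
			}
		}(chans[p])
	}
	for _, e := range events {
		chans[e.partition] <- e
	}
	for _, ch := range chans {
		close(ch)
	}
	wg.Wait()
}

func main() {
	var mu sync.Mutex
	seen := map[int][]int{}
	events := []event{
		{0, 1, "a"}, {1, 1, "b"}, {0, 2, "c"}, {1, 2, "d"}, {0, 3, "e"},
	}
	dispatchByPartition(events, 2, func(e event) {
		mu.Lock()
		seen[e.partition] = append(seen[e.partition], e.offset)
		mu.Unlock()
	})
	fmt.Println(seen[0], seen[1]) // per-partition order is preserved
}
```

With this shape, total concurrency scales with the partition count while Kafka's per-partition ordering guarantee is kept.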
On Mon, Dec 24, 2018 at 12:34 AM fatkun wrote:
Expected Behavior
The Knative service scales up after many requests.
Actual Behavior
The Knative service does not scale up.
Steps to Reproduce the Problem
1. Start a Knative service that just sleeps for one second.
2. Use wrk to send requests to the channel.
Additional Info
The Knative service scales according to request concurrency.
Current Kafka channel implementation:
1. The channel receives an event and pushes it to a Kafka topic.
2. Kafka delivers one event to the consumer.
3. The dispatcher dispatches the message and waits for it to finish.
4. It consumes the next event.
https://github.com/knative/eventing/blob/e8e6dd27db8791c44864170ff5432e0e788ab495/pkg/provisioners/kafka/dispatcher/dispatcher.go#L183-L200
There is only one concurrent dispatch when running a single dispatcher pod, which is not enough to scale up the Knative service.
Can we use multiple goroutines to dispatch messages?
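The difference the question is getting at can be sketched as follows: the current loop dispatches strictly one at a time, while a bounded pool of goroutines raises concurrency. This is an illustration only; `handle`, `dispatchSync`, `dispatchPool`, and `maxInflight` are hypothetical names, not the dispatcher's actual API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// handle stands in for a message dispatch; it sleeps like the
// one-second Knative service in the report (scaled down here).
func handle(msg int) { time.Sleep(10 * time.Millisecond) }

// dispatchSync mirrors the current behavior: dispatch, wait, then
// consume the next event -- effective concurrency is exactly 1.
func dispatchSync(msgs []int) time.Duration {
	start := time.Now()
	for _, m := range msgs {
		handle(m)
	}
	return time.Since(start)
}

// dispatchPool is the proposed multi-goroutine variant: a semaphore
// channel bounds the number of in-flight dispatches at maxInflight.
func dispatchPool(msgs []int, maxInflight int) time.Duration {
	start := time.Now()
	sem := make(chan struct{}, maxInflight)
	var wg sync.WaitGroup
	for _, m := range msgs {
		sem <- struct{}{} // acquire a slot
		wg.Add(1)
		go func(m int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			handle(m)
		}(m)
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	msgs := make([]int, 20)
	syncTime := dispatchSync(msgs)
	poolTime := dispatchPool(msgs, 10)
	fmt.Println(poolTime < syncTime) // the pooled variant finishes sooner
}
```

The higher concurrency also generates the parallel requests the Knative autoscaler needs in order to scale the service up, which is why the single-goroutine loop keeps it pinned at one replica.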
Evan Anderson <argent@google.com>
@evankanderson I think this is not that important; if we create multiple partitions, we can't guarantee ordering between partitions anyway.
Yes. Each markOffset should commit the offset just before the minimum offset still in flight.
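The commit rule mentioned above (never commit past the minimum in-flight offset) can be sketched with a small tracker. This is a hypothetical helper, not the dispatcher's actual code; the point is that committing `minInflight - 1` preserves at-least-once delivery even when later messages finish before earlier ones.

```go
package main

import "fmt"

// offsetTracker records in-flight offsets so that, when any message
// finishes, we commit only up to just below the minimum offset still
// in flight -- otherwise a crash could lose an earlier, unacked message.
type offsetTracker struct {
	inflight map[int64]bool
	next     int64 // next offset expected from the partition
}

func newOffsetTracker(start int64) *offsetTracker {
	return &offsetTracker{inflight: map[int64]bool{}, next: start}
}

// start marks an offset as consumed but not yet delivered.
func (t *offsetTracker) start(off int64) {
	t.inflight[off] = true
	t.next = off + 1
}

// done marks an offset as delivered and returns the offset now safe
// to commit: one before the minimum offset still in flight, or the
// last consumed offset when nothing is in flight.
func (t *offsetTracker) done(off int64) int64 {
	delete(t.inflight, off)
	minOff := t.next
	for o := range t.inflight {
		if o < minOff {
			minOff = o
		}
	}
	return minOff - 1
}

func main() {
	tr := newOffsetTracker(100)
	tr.start(100)
	tr.start(101)
	tr.start(102)
	fmt.Println(tr.done(101)) // 99: offset 100 is still in flight
	fmt.Println(tr.done(100)) // 101: only 102 remains in flight
	fmt.Println(tr.done(102)) // 102: nothing left in flight
}
```

On restart after a crash, replay begins just past the committed offset, so any message that was in flight but unacked is redelivered (possibly alongside already-delivered later ones, which is the at-least-once tradeoff).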
I notice the same issue here: the messages are all dispatched one by one, synchronously, and there is only one goroutine, which really limits scalability. GCP Pub/Sub allows multiple goroutines (https://github.com/knative/eventing/blob/master/vendor/cloud.google.com/go/pubsub/subscription.go#L474).
GCP Pub/Sub allows out-of-order acknowledgements (and does not preserve order). These were design tradeoffs. For some customers, ordering on a particular event stream (e.g. from the same customer or device) is very important, and Kafka (possibly with a high partition count) is attractive. For others, ordering is unimportant, and something like RabbitMQ or Pub/Sub might be a better fit.
One item which (IIRC) is missing from the Kafka channel is the ability to
supply a function to assign events to a particular partition, to be able to
enforce ordering at the event stream level.
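The missing hook described above (a function assigning events to a partition to enforce per-stream ordering) usually amounts to hashing a stream key. A minimal sketch, assuming a `partitionFor` function and an FNV hash (both choices are mine, not the channel's actual design):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor hashes a per-stream key (e.g. a customer or device ID)
// so that all events for that stream land on the same partition and
// therefore keep their relative order. Hypothetical helper name.
func partitionFor(key string, numPartitions int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % uint32(numPartitions))
}

func main() {
	// The same key always maps to the same partition...
	fmt.Println(partitionFor("device-42", 8) == partitionFor("device-42", 8))
	// ...and the result is always a valid partition index.
	p := partitionFor("device-42", 8)
	fmt.Println(p >= 0 && p < 8)
}
```

With such a hook, a channel can offer high throughput (many partitions) while still guaranteeing order within each event stream, which is the tradeoff discussed in this thread.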
(In reply to Dan Sun's comment above, Wed, Jan 30, 2019.)
@evankanderson Thanks for the explanation! I get that we need to preserve ordering, but this is needed at the partition level; sarama supports …
/assign @yuzisun
@evankanderson: GitHub didn't allow me to assign the following users: yuzisun. Note that only knative members and repo collaborators can be assigned.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.