feat(kuma-cp) KDS multiplexer #917
Conversation
Looks great!
@@ -10,7 +10,7 @@ spec:
   names:
     kind: Zone
     plural: zones
-    scope: ""
+    scope: Cluster
Good!
kdsStream := client.NewKDSStream(session.ClientStream(), session.PeerID())
zone := &system.ZoneResource{}
if err := rt.ReadOnlyResourceManager().Get(context.Background(), zone, store.GetByKey(session.PeerID(), "default")); err != nil {
	// send the error back to the Remote CP; it will retry later once the ZoneResource appears
That looks very robust!
And what are the conditions for retries? Is it retried immediately? After 1s? With exponential backoff?
It depends on the ResilientComponent that wraps the KDSClient. By default, the backoff time is 5 seconds.
Exponential backoff is actually a good idea; I should probably implement it.
pkg/kds/mux/session_test.go
	err = errors.New("timeout waiting for ServerStream.Recv()")
}
Expect(err).ToNot(HaveOccurred())
})
How about this:
It("should block while proper Send called", func(done Done) {
	go func() {
		request, err := serverSession.ServerStream().Recv()
		Expect(err).ToNot(HaveOccurred())
		Expect(request.VersionInfo).To(Equal("1"))
		close(done)
	}()
	err := clientSession.ServerStream().Send(&envoy_api_v2.DiscoveryResponse{VersionInfo: "2"})
	Expect(err).ToNot(HaveOccurred())
	err = clientSession.ClientStream().Send(&envoy_api_v2.DiscoveryRequest{VersionInfo: "1"})
	Expect(err).ToNot(HaveOccurred())
	resp, err := serverSession.ClientStream().Recv()
	Expect(err).ToNot(HaveOccurred())
	Expect(resp.VersionInfo).To(Equal("2"))
}, "1s")
Thanks, much better! I didn't know about done Done.
What is the UX if a user tries to connect to the Global CP, but the Zone is not present? If I read the PR correctly, we will terminate such a connection, but what is the error message in the Remote? We should state that in order to connect a Remote to the Global, you need to add a Zone resource.
@jakubdyszkiewicz you are right, currently the Remote logs an error every 5 seconds.
Signed-off-by: Ilya Lobkov <ilya.lobkov@konghq.com>
Force-pushed from cbdb2b8 to dcbc89e
Summary
This PR is an enhancement of the existing KDS protocol: it allows the Remote CP and the Global CP to utilize the same gRPC stream under the hood. Thanks to that enhancement we can get rid of GlobalLbAddress and RemoteLbAddresses. This PR doesn't update kuma-cp.config.
Full changelog
- Added mux.proto for multiplexing
- ZoneResource is cluster scope (not namespace scope)
Issues resolved
Fix #XXX
Documentation