
Support for configuring more gRPC client settings #1041

Closed
RashmiRam opened this issue May 28, 2020 · 8 comments · Fixed by #1353
Labels: enhancement (New feature or request)

Comments

@RashmiRam
Contributor

Is your feature request related to a problem? Please describe.
There is no way to configure the load balancer name in the gRPC client settings, and the default is pick_first, which won't work when the gRPC endpoint is a plain DNS name that resolves to multiple backends.

Describe the solution you'd like
Allow gRPC client settings like balancerName to be configured via config file.

Describe alternatives you've considered
Nothing that I can think of.

Additional context
I have a setup where the OpenTelemetry Collector runs as an agent and is configured with the Jaeger exporter. The Jaeger collectors sit behind a DNS name. In this case, I need the otel collector to do the load balancing itself, as there is no external load balancer. By default, the gRPC client on the otel collector side uses the pick_first LB, and there is no way to configure the LB name in the gRPC client settings, so all requests go to a single Jaeger collector.
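
For illustration, a minimal sketch of what the requested setting could look like in the collector config. The balancer_name key is the setting proposed in this issue (not an existing option at the time of filing), and the endpoint is a placeholder:

```yaml
exporters:
  jaeger:
    # The dns:/// scheme selects the gRPC DNS resolver, so every A record
    # behind the name is discovered instead of only the first address.
    endpoint: dns:///jaeger-collector.example.internal:14250
    # Proposed setting: choose the gRPC load balancing policy; round_robin
    # spreads RPCs across all resolved addresses instead of pick_first.
    balancer_name: round_robin
```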

@RashmiRam
Contributor Author

If this is a valid ask, I would be happy to send a PR for the same.

@tigrannajaryan
Member

I have no experience with this part of gRPC.
@bogdandrutu can you please comment on this (if you have the experience)?

@RashmiRam
Contributor Author

@bogdandrutu Could you please share what you think about this?

@bogdandrutu
Member

Please point me to the config that needs to be changed in gRPC. We should support this if it is a property in the gRPC client DialOptions.

@RashmiRam
Contributor Author

@bogdandrutu
Member

Sorry that I missed it; yes, please make a PR to add the necessary config to GRPCClientSettings.

@andrewhsu added the enhancement (New feature or request) label on Jan 6, 2021
MovieStoreGuy pushed a commit to atlassian-forks/opentelemetry-collector that referenced this issue Nov 11, 2021
* Clean stale indirect dependency requirements

In the recent changes to isolate the main `otel` package there were many
indirect dependencies of the package that were removed, however, the
go.mod was not automatically cleaned of these. This removes those (and
similar ones in the otel-collector example and otel exporter) and prunes
the go.sum files accordingly.

* Run in a clean system to reproduce build
@atibdialpad

Hi @RashmiRam @bogdandrutu
I have a setup which looks like this:
Group of Otel-Agent pods (OTLP exporters) --> K8s Service backed by a group of OTel Collector pods (OTLP receivers)

The Service is the default ClusterIP type, and I am using "svc_name.default.svc.local" to connect from the exporters to the collectors. I am seeing one (or a few) otel collector pods doing most of the work, and I suspect that's because load balancing only happens per connection, which doesn't help gRPC's long-lived HTTP/2 connections.

My question is:

  1. Will setting balancer_name: round_robin work here?
     Since I am using a ClusterIP Service, I feel DNS will return just a single IP (the ClusterIP), so whether I use pick_first or round_robin I still end up on that one ClusterIP and then on the default connection-level k8s load balancing.

  2. If that's true, would a k8s headless Service help me here? DNS lookups on the service name would directly return all the backend pod IPs, and then round_robin on the OTLP exporter side might help. (See the Service sketch below.)

I'd appreciate your take on this. Thanks.
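
For context, here is a minimal sketch of the two Service shapes being compared above. The names, labels, and port are illustrative only, not taken from the actual deployment:

```yaml
# Default ClusterIP Service: DNS resolves to a single virtual IP, and
# kube-proxy balances traffic only per TCP connection, so one long-lived
# gRPC connection sticks to one collector pod.
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
spec:
  selector:
    app: otel-collector
  ports:
    - port: 4317
      targetPort: 4317
---
# Headless Service: clusterIP: None makes the Service DNS name resolve to
# every backing pod IP, which is what a client-side gRPC balancer needs.
apiVersion: v1
kind: Service
metadata:
  name: otel-collector-headless
spec:
  clusterIP: None
  selector:
    app: otel-collector
  ports:
    - port: 4317
      targetPort: 4317
```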

@RashmiRam
Contributor Author

Hello @atibdialpad

> Will setting balancer_name: round_robin work here?
> Since I am using a ClusterIP Service, I feel DNS will return just a single IP (the ClusterIP), so whether I use pick_first or round_robin I still end up on that one ClusterIP and then on the default connection-level k8s load balancing.

Yes. As you rightly said, DNS will always return the Service IP, so it doesn't matter which load balancer you choose. The LB is handled for you on the receiving k8s Service side, and only at the connection level. Since gRPC uses HTTP/2 with long-lived connections, you may still end up seeing all requests from a client pod go to a single server.

> If that's true, would a k8s headless Service help me here? DNS lookups on the service name would directly return all the backend pod IPs, and then round_robin on the OTLP exporter side might help.

A headless Service will work here, since DNS returns all the pod IPs and the client-side gRPC LB will then distribute load according to the balancer you have configured.
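
To make that concrete, here is a minimal sketch of the agent-side exporter config that would pair with a headless Service, assuming the balancer_name setting requested in this issue is available; the service name and port are placeholders:

```yaml
exporters:
  otlp:
    # With a headless Service, the gRPC DNS resolver (dns:///) returns one
    # address per collector pod rather than a single ClusterIP.
    endpoint: dns:///otel-collector-headless.default.svc.cluster.local:4317
    # round_robin then spreads RPCs across all resolved pod addresses.
    balancer_name: round_robin
```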
