OTEL Collector - "error reading server preface: http2: frame too large" #7680
Comments
I have the same problem and don't know how to solve it.
I am having a similar problem and don't know how to solve it either.
This error says that you are sending an HTTP request to a gRPC endpoint. Make sure to enable the http protocol.
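For reference, a minimal sketch of a Collector config that enables both protocols on the OTLP receiver (ports shown are the defaults; adjust to your deployment):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # gRPC (HTTP/2)
      http:
        endpoint: 0.0.0.0:4318   # OTLP over plain HTTP
```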
+1
This error is usually a symptom of trying to connect without TLS to a server that's expecting TLS. Are you dialing these connections using TLS or not?
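To illustrate the difference, here is a hedged sketch of the two exporter TLS modes in a Collector config (the endpoint and file path are hypothetical):

```yaml
exporters:
  otlp:
    endpoint: collector.example.com:4317   # hypothetical endpoint
    tls:
      insecure: true                       # plaintext; fails against a TLS-only server

  otlp/secure:
    endpoint: collector.example.com:4317   # hypothetical endpoint
    tls:
      ca_file: /etc/otel/ca.pem            # hypothetical path: CA that signed the server cert
```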
I had a similar issue and just solved it. How I solved it: on the Ingress side, don't receive the data through 443; instead, designate a dedicated port (4307) that uses an SSL certificate and forwards to port 14317 (clients should use 4307). Note that I use EKS and my load balancer is in AWS too, so you'll need to make some adjustments.
This is happening because you're using the OTLP exporter, which is a gRPC exporter. However, you need an HTTP endpoint, so you'll likely need to update the OTLP exporter to otlphttp.
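A minimal sketch of that change, assuming the default OTLP/HTTP port 4318 and a hypothetical endpoint:

```yaml
exporters:
  otlphttp:
    endpoint: https://collector.example.com:4318   # hypothetical endpoint; note the HTTP port
```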
Grafana was no longer able to communicate with the Tempo backend after some config changes recently. Checking the Grafana logs, I found:

```
2024-12-27 04:52:35 logger=grafana-apiserver t=2024-12-27T09:52:35.291900754Z level=info msg="[core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: \"tempo:3200\", ServerName: \"tempo:3200\", }. Err: connection error: desc = \"error reading server preface: http2: frame too large\""
```

After some googling, this seemed to be an indicator that the client (Grafana) was trying to connect to the backend (Tempo) with HTTP/2, but the backend only supports HTTP/1.1. In particular, Tempo uses gRPC for internal communication, but by default exposes its gRPC capabilities on the gRPC port, 4317; however, we had Grafana connecting to the HTTP port, 3200. One option would be to expose the gRPC port and use that, but it seems this is common enough that there's a built-in [flag](https://github.com/grafana/tempo/blob/main/example/docker-compose/shared/tempo.yaml#L1) for it. This causes Tempo to expose its gRPC API over HTTP; in particular, this means that the HTTP/2 request to initiate the gRPC connection now succeeds. I chose to go with this approach, since that's the way the [examples](https://github.com/grafana/tempo/blob/main/example/docker-compose/shared/tempo.yaml#L1) I saw were structured, and it's the first thing that worked.

See also this GitHub discussion for context: open-telemetry/opentelemetry-collector#7680
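For reference, the flag on the first line of that linked example appears to be `stream_over_http_enabled`; a minimal sketch of the Tempo config change:

```yaml
# tempo.yaml - expose Tempo's gRPC API over the HTTP port (3200),
# so Grafana can keep pointing at tempo:3200
stream_over_http_enabled: true
```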
Hello,

I need help with the issue I am facing in the use case below.

Use case: test whether we are able to export traces from a local OTel Collector, over ingress, to another OTel Collector service set up on a Kubernetes cluster.

I want to test this use case without using self-signed certificates. I also cannot use CA-signed certificates as part of the TLS configuration for the OTel Collector, as this would require the CA key and the private key of the server (NGINX in this case) to generate CA-signed client certificates, which I think is not possible in production environments.

Questions:

1. Is it a must to use CA-signed certificates for the TLS configuration of the OTel Collector? If yes, then as I said above, this would require the CA key and the private key of the server (NGINX in this case) to generate CA-signed client certificates, which I think is not possible in production environments.
2. Is there a way to use just the public key of the server (NGINX in this case) in the TLS configuration to trust the server certificate, similar to how it works in browsers?
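On the second question: a hedged sketch of a client-side exporter config that only trusts the server's certificate, with no client certificate at all, assuming the server does not require mutual TLS (the endpoint and path are hypothetical):

```yaml
exporters:
  otlp:
    endpoint: otel.example.com:443        # hypothetical ingress endpoint
    tls:
      ca_file: /etc/otel/nginx-ca.pem     # hypothetical path: CA (or self-signed server cert) to trust
      # no cert_file/key_file here: client certificates are only needed for mutual TLS
```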
Details:

As part of this deployment, the OTel Collector was set up as below.

Issue:

The OTel Collector on a local host throws the below error when trying to export traces to another OTel Collector deployed on K8s over ingress.
Debugging Steps Tried:
proxy_buffers: 4 16k
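Buffer sizes aside, when the hop in front of the Collector is NGINX-based ingress, a common fix for this error is making the ingress speak gRPC to the backend rather than HTTP/1.1. A hedged sketch, assuming ingress-nginx and hypothetical names and hosts:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otel-collector                    # hypothetical name
  annotations:
    # Tell NGINX to proxy to the backend with gRPC (HTTP/2) instead of HTTP/1.1
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  tls:
    - hosts:
        - otel.example.com                # hypothetical host
      secretName: otel-tls                # hypothetical TLS secret
  rules:
    - host: otel.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: otel-collector      # hypothetical collector Service
                port:
                  number: 4317            # Collector's gRPC port
```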