
Add the ability to expose a port on a collector container #1011

Closed
kristinapathak opened this issue Jul 28, 2022 · 9 comments · Fixed by #1070
Labels
area:collector, help wanted

Comments

@kristinapathak
Contributor

collector.Container() returns a Container that doesn't include Port information.

return corev1.Container{
	Name:            naming.Container(),
	Image:           image,
	ImagePullPolicy: otelcol.Spec.ImagePullPolicy,
	VolumeMounts:    volumeMounts,
	Args:            args,
	Env:             envVars,
	EnvFrom:         otelcol.Spec.EnvFrom,
	Resources:       otelcol.Spec.Resources,
	SecurityContext: otelcol.Spec.SecurityContext,
	LivenessProbe:   livenessProbe,
	// note: no Ports field is set on the returned container
}
Will this prevent the Prometheus PodMonitor from finding the collector? How can we add the port information? Are the container ports the same as otelcol.Spec.Ports?

Ports []v1.ServicePort `json:"ports,omitempty"`
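
One possible shape for a fix, sketched as a hypothetical helper (the name toContainerPorts and the package placement are assumptions for illustration, not the operator's actual code): derive container ports from otelcol.Spec.Ports so that monitors can match the collector by port name.

package collector

import (
	corev1 "k8s.io/api/core/v1"
)

// toContainerPorts is a hypothetical helper that maps the ServicePorts
// declared in otelcol.Spec.Ports onto ContainerPorts for the collector
// container, so Prometheus pod/service discovery can see named ports.
// It uses ServicePort.Port directly and ignores TargetPort, which a real
// implementation may need to honor.
func toContainerPorts(svcPorts []corev1.ServicePort) []corev1.ContainerPort {
	ports := make([]corev1.ContainerPort, 0, len(svcPorts))
	for _, p := range svcPorts {
		ports = append(ports, corev1.ContainerPort{
			Name:          p.Name,
			ContainerPort: p.Port,
			Protocol:      p.Protocol,
		})
	}
	return ports
}

collector.Container() could then set Ports: toContainerPorts(otelcol.Spec.Ports) alongside the fields shown above.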

pavolloffay added the area:collector label on Jul 29, 2022
@pavolloffay
Member

Will this prevent the Prometheus PodMonitor from finding the collector?

I don't know; perhaps it does if the pod monitor matches by port name. Could you please report back once you've played with this? I think it's worth fixing.

pavolloffay added the help wanted label on Jul 29, 2022
@kristinapathak
Contributor Author

I investigated this more: I set up a pod monitor for the collector while running the collector as a statefulset. The operator's ports could be found in the service discovery labels, but there were no ports among the collector pods' labels. I tried with both port and targetPort (deprecated), but neither worked.
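
For context, here is a minimal sketch of the kind of pod monitor being described, built as plain Go data and printed as YAML. The metadata name, selector labels, and the "metrics" port name are illustrative assumptions, not taken from the setup described above.

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// Build a PodMonitor roughly equivalent to the one described above as plain
// data and print it as YAML. The name, selector labels, and the "metrics"
// port name are assumptions for illustration only.
func main() {
	podMonitor := map[string]interface{}{
		"apiVersion": "monitoring.coreos.com/v1",
		"kind":       "PodMonitor",
		"metadata":   map[string]interface{}{"name": "otel-collector"},
		"spec": map[string]interface{}{
			"selector": map[string]interface{}{
				"matchLabels": map[string]interface{}{
					"app.kubernetes.io/component": "opentelemetry-collector",
				},
			},
			// Both "port" (a container port name) and the deprecated
			// "targetPort" resolve against ports declared on the pod's
			// containers, which collector.Container() never sets.
			"podMetricsEndpoints": []map[string]interface{}{
				{"port": "metrics"},
			},
		},
	}

	out, err := yaml.Marshal(podMonitor)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

With no container ports declared on the collector pods, neither field has anything to match, which is consistent with the behaviour reported here.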

@jpkrohling
Member

Stupid question, but can't you use a ServiceMonitor instead? From what I remember, we create a service explicitly meant to be used with the ServiceMonitor.

@kristinapathak
Contributor Author

I believe this is a problem for both ServiceMonitor and PodMonitor but will try a ServiceMonitor scenario. 🙂

@pavolloffay
Member

@kevinearls will be working on this shortly.

@kevinearls
Member

Hi @kristinapathak, do you have an example CR I can use while working on this? I don't have much experience with Prometheus, and none with a collector instance that uses it. Thanks.

@kristinapathak
Contributor Author

Hi @kevinearls,

My setup is not public, but below is information on how I verified that the lack of ports is a problem. I've attached my ServiceMonitor and PodMonitor configurations, as well as the Helm chart and configuration files for Prometheus.
files.zip

I am running the collector as a statefulset, using the following images.

  • collector: otel/opentelemetry-collector-contrib:0.58.0
  • operator: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.56.0
  • targetallocator: ghcr.io/open-telemetry/opentelemetry-operator/target-allocator:latest

As long as prometheus and a collector are running, the pod monitor and service monitor can be applied.

Once everything is up, I port forward the prometheus UI (hosted at 9090) and view the monitors at http://localhost:9091/service-discovery. There, I can see that targets were found and dropped for my pod monitor and service monitor. The targets for the collectors in the statefulset are missing the labels __meta_kubernetes_pod_container_port_name and __meta_kubernetes_pod_container_port_number. These labels should be there and contain the metric port information (8888).

This can be easily seen using prometheus, but the targetallocator has the same issue when prometheusCR is enabled. I find it easier to troubleshoot with the prometheus UI.

I can work on making a kubernetes setup that I can share, but at least this will hopefully give you a better idea of what I'm doing.

@kristinapathak
Contributor Author

Hi @kevinearls, were you able to get prometheus and a pod/service monitor set up? If not, would you rather I work on this issue?

@kevinearls
Member

Hi @kristinapathak, sorry, I have been tied up with other things this week. Yes, please go ahead and work on it. Thanks.
