
[bug] Flagd doesn't start when using k8s manifests #1983

Open
julianocosta89 opened this issue Jan 30, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@julianocosta89
Member

Bug Report

Which version of the demo are you using?
84304d3

Symptom

When using the k8s manifests: https://github.com/open-telemetry/opentelemetry-demo/blob/main/kubernetes/opentelemetry-demo.yaml the flagd service doesn't start, and the pod hangs on Init:0/1.

Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    34m                  default-scheduler  Successfully assigned default/opentelemetry-demo-flagd-5d67ccf97-rmfrn to otel-demo-worker2
  Warning  FailedMount  118s (x24 over 34m)  kubelet            MountVolume.SetUp failed for volume "config-ro" : configmap "opentelemetry-demo-flagd-config" not found

What is the expected behavior?

All services should start, and the demo should be usable.

What is the actual behavior?

Flagd doesn't start.

Reproduce

kubectl apply -f kubernetes/opentelemetry-demo.yaml

Additional details

I've installed the demo via Helm and all services started fine.
When checking whether any changes from Helm were missing in the k8s manifest files, I only found this update: #1982

@puckpuck
Contributor

When writing the Helm Chart PR to remove the release name prefix, I noticed the same thing with the install. The solution I used was to add command-line arguments to flagd so it would force-bind to port 8013 (it was binding to a random port otherwise). I think if we do a make generate-kubernetes-manifest it will resolve this.
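For context, forcing the port could look roughly like the sketch below in the flagd container spec. This is a hypothetical excerpt, not the actual demo manifest: the container name, image tag, flag spellings, and file path are assumptions based on flagd's CLI conventions.

```yaml
# Hypothetical flagd container spec; names, flags, and paths are
# illustrative assumptions, not copied from the demo repository.
containers:
  - name: flagd
    image: ghcr.io/open-feature/flagd:latest
    command:
      - /flagd-build
      - start
      - --port
      - "8013"                          # force the evaluation service onto 8013
      - --uri
      - file:/etc/flagd/demo.flagd.json # flag definitions mounted from the ConfigMap
    ports:
      - containerPort: 8013
    volumeMounts:
      - name: config-ro
        mountPath: /etc/flagd
        readOnly: true
```

Note that the FailedMount event in the report is a separate symptom: the pod references a ConfigMap named opentelemetry-demo-flagd-config that the manifest never creates, so regenerating the manifests would need to emit that ConfigMap as well.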
