Pod isn't restarted when configmap is updated #197

Closed
jpkrohling opened this issue Feb 12, 2021 · 6 comments · Fixed by #215
Labels: bug, good first issue, help wanted

Comments

@jpkrohling (Member)

When the collector's configuration changes, the pods that use the respective configmap should be restarted, but currently they aren't.

@jpkrohling added the bug, good first issue, and help wanted labels on Feb 12, 2021
@bhiravabhatla (Contributor)

Hey @jpkrohling,

I would like to work on this. I have gone through the code where we handle updates to the configmap, and I would like to understand a couple of things.

  • Any specific reason we are using Client.Patch instead of Client.Update?

if err := params.Client.Patch(ctx, updated, patch); err != nil {

  • Also, I see that we generate the patch for the configmap from &params.Instance (which is of type v1alpha1.OpenTelemetryCollector).

patch := client.MergeFrom(&params.Instance)

I didn't get the use of this.

For restarting the pod on a configmap change, I guess we should do the following:

  • Have a function that compares the existing and desired configmaps (preferably their data) and returns true if they are different (see the sketch after this list).
  • If true, use the existing logic to update the configmap. Additionally, get the pod currently running for the deployment managed by the operator and delete it; a new pod should come up with the new config.
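A rough sketch of that comparison, assuming we only care about the data field (illustrative names, not the operator's actual code):

    package reconcile // hypothetical package, for illustration only

    import (
        "reflect"

        corev1 "k8s.io/api/core/v1"
    )

    // configMapChanged reports whether the desired configmap's data differs
    // from what is currently deployed in the cluster.
    func configMapChanged(existing, desired *corev1.ConfigMap) bool {
        return !reflect.DeepEqual(existing.Data, desired.Data)
    }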

Thoughts @jpkrohling ?

@jpkrohling (Member, Author)

Thank you for helping out!

Any specific reason we are using Client.Patch instead of Client.Update?

Patches are less likely to conflict with full object updates. See #54 for more info.
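For illustration, the typical controller-runtime patch pattern looks roughly like this; a minimal sketch with made-up names, not the operator's actual code:

    package reconcile // hypothetical package, for illustration only

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // patchConfigMap shows how client.MergeFrom is typically used: it captures
    // the object's state before our changes, so the resulting patch carries
    // only the diff and is less likely to conflict with concurrent writers
    // than a full Update would be.
    func patchConfigMap(ctx context.Context, c client.Client, existing *corev1.ConfigMap, desiredData map[string]string) error {
        patch := client.MergeFrom(existing.DeepCopy())
        existing.Data = desiredData
        return c.Patch(ctx, existing, patch)
    }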

Additionally, get the pod currently running for the deployment managed by the operator and delete it; a new pod should come up with the new config.

Do you have any references on what the recommendation in Kubernetes' documentation is? A rolling update would be my preference, so that pods using the new configuration first have to get into a running state before the old ones are killed, preventing a bad config from causing an outage of the collector.
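For reference, that rolling-update behavior can be made explicit on the Deployment; a hedged sketch of such a strategy (not necessarily what the operator sets today):

    package reconcile // hypothetical package, for illustration only

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // With MaxUnavailable set to 0, new collector pods must become ready
    // before old ones are terminated, so a broken config stalls the rollout
    // instead of taking the whole collector down.
    func rollingUpdateStrategy() appsv1.DeploymentStrategy {
        zero := intstr.FromInt(0)
        one := intstr.FromInt(1)
        return appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &zero,
                MaxSurge:       &one,
            },
        }
    }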

@bhiravabhatla (Contributor)

I was coming from the thought process behind the configmap reloaders out there, for example https://github.com/stakater/Reloader. It actually does the same: it kills the pod when the config changes.

But as you said, since the deployment itself is created/updated by the controller, we could actually do a rolling update, which is better than simply killing the pod. I agree.

To achieve this, I see a couple of options, the first being to generate a checksum of the configmap and annotate the deployment with it. The second option I am not sure how to implement using controller-runtime.

@bhiravabhatla (Contributor)

@jpkrohling, I went through the code links above; as you said, it's doing the same thing: annotating with a timestamp.
Shall I go ahead with the first option then, i.e. generate the checksum of the configmap and annotate the deployment with it?
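Something along these lines; a minimal sketch of the checksum-annotation idea, where the annotation key and helper names are made up for illustration:

    package reconcile // hypothetical package, for illustration only

    import (
        "crypto/sha256"
        "fmt"
        "sort"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
    )

    // configMapChecksum hashes the configmap data in a deterministic key order.
    func configMapChecksum(cm *corev1.ConfigMap) string {
        keys := make([]string, 0, len(cm.Data))
        for k := range cm.Data {
            keys = append(keys, k)
        }
        sort.Strings(keys)

        h := sha256.New()
        for _, k := range keys {
            h.Write([]byte(k))
            h.Write([]byte(cm.Data[k]))
        }
        return fmt.Sprintf("%x", h.Sum(nil))
    }

    // annotateWithChecksum stores the checksum on the pod template, so any
    // change to the configmap changes the template and the Deployment
    // controller performs a rolling update of the collector pods.
    func annotateWithChecksum(dep *appsv1.Deployment, cm *corev1.ConfigMap) {
        if dep.Spec.Template.Annotations == nil {
            dep.Spec.Template.Annotations = map[string]string{}
        }
        dep.Spec.Template.Annotations["opentelemetry-operator/config-sha256"] = configMapChecksum(cm)
    }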

@jpkrohling (Member, Author)

Sounds worth a try!
