
Support triggering rollout when referenced ConfigMaps and/or Secrets change #958

Closed
vito-laurenza-zocdoc opened this issue Jan 21, 2021 · 14 comments
Labels
enhancement New feature or request

Comments

@vito-laurenza-zocdoc

Summary

Support for optionally triggering a rollout when one or more referenced ConfigMaps and/or Secrets change.

Use Cases

Deployments are often config-only changes. It would be useful if a rollout could optionally be triggered when underlying ConfigMaps and/or Secrets used by the Deployment (as volume mounts or environment variables) are changed.


Message from the maintainers:

Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

@vito-laurenza-zocdoc added the enhancement New feature or request label Jan 21, 2021
@jessesuen
Member

A common technique for solving this is (minimal sketches of both below):

  • helm - hash the contents of the configmap/secret and include it as an annotation in the pod.
  • kustomize - use the kustomize configmap/secret generators so that the name of the configmap/secret incorporates a hash of its contents
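With Helm, a checksum annotation on the pod template rolls the pods whenever the rendered ConfigMap changes; the chart path and resource names here are illustrative:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: guestbook
spec:
  template:
    metadata:
      annotations:
        # re-rendered on every helm upgrade; a content change rolls the pods
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

With kustomize, the generator appends a content hash to the ConfigMap name and rewrites references to it (names illustrative):

# kustomization.yaml
configMapGenerator:
- name: my-config
  literals:
  - foo=bar
# generates a ConfigMap named my-config-<hash>; any data change produces a new
# name, which updates the pod template reference and triggers a rollout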

@risinger

I would be interested in seeing first-class support for config changes in Argo Rollouts, similar to how Flagger handles it. While I'm not fond of requiring the same config on basically every application, my main complaint with the status quo is the diff it generates. When using a hash to create a new configmap/secret for every change to its contents, we're stuck with a useless Argo CD diff of the entire configmap/secret being added/removed.

@jessesuen
Member

jessesuen commented Jun 3, 2021

I've been thinking about this some more. I think the way I would solve this is with a simple controller that monitors ConfigMaps and Secrets. These objects would be annotated with back-references to a rollout (or even deployment) which would need to be redeployed upon change. For example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  annotations:
    redeploy-on-update: rollout.argoproj.io/guestbook
data:
  foo: bar

The controller would continuously watch configmaps and secrets. When these objects are updated, it would inject a hash of the configmap into an annotation of the referenced rollout or deployment pod template. e.g.:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: guestbook
spec:
  template:
    metadata:
      annotations:
        configmap.my-config.hash: abcd1234

Because of the change in the spec.template.metadata.annotations, the rollout (or deployment) would then go through the normal update process.

The beauty of this approach is that this controller could operate standalone, and would even work with Deployments. In other words, non-rollout users of Argo CD and Deployments would benefit from this.

@jessesuen
Member

I just came across a project which took a slightly different but similar approach and already has built-in support for Argo Rollouts!
https://github.com/stakater/Reloader

Has anyone tried using Reloader with Argo Rollouts?
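For anyone trying it: Reloader is configured with annotations on the workload. A minimal sketch, assuming Reloader is installed and its Argo Rollouts support covers your Rollouts version (see the compatibility note in the next comment):

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: guestbook
  annotations:
    # reload when any referenced ConfigMap/Secret changes; a specific one can
    # be targeted with configmap.reloader.stakater.com/reload: "my-config"
    reloader.stakater.com/auto: "true"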

@jessesuen
Member

Looks like their implementation is tied to rollout releases:
stakater/Reloader#232

I think we should improve Rollout support in Reloader so that it is no longer tightly coupled to Rollout versions

@risinger

#958 (comment)
I like this proposal because it would show a clean diff between old and new config and trigger a rollout for config changes.

I just tried stakater Reloader and it worked as billed: it ran a rolling update. I would much prefer a solution that triggers a rollout and goes through the configured update process. As an added bonus, your proposal should result in a pleasant diff, which would be a major improvement over the configmap name with hash solution mentioned here #958 (comment).

@pa4h1u3-BRONGA

@jessesuen we have this same need. We're planning on file-mounted configmaps, but need seamless roll forward/back with Argo Rollouts, and the hash option is super attractive to us.

@tobernguyen

tobernguyen commented Mar 11, 2022

A common technique for solving this is:

  • helm - hash the contents of the configmap/secret and include it as an annotation in the pod.
  • kustomize - use the kustomize configmap/secret generators so that the name of the configmap/secret incorporates a hash of its contents

There is an issue with this approach when combined with ArgoCD with auto-sync and prune turned on:

  • ArgoCD will prune the old ConfigMap on sync
  • The stable deployment is still referencing the old ConfigMap, which no longer exists

Now several things can happen:

  1. If we use HPA with the Canary strategy and HPA increases the number of replicas, Argo Rollouts will scale up the stable ReplicaSet => new pods will fail to create because of the invalid ConfigMap reference.
  2. If we abort the rollout, the same thing could happen, as the old ReplicaSet is still referencing the old ConfigMap

What do you think, @jessesuen? What is your suggestion for overcoming this limitation/issue?

Edit 1: After more investigation, I found argoproj/argo-cd#1629, which is exactly what I described above, but the fix is not complete until argoproj/argo-cd#1636 is done.

Edit 2: For anyone having the same question and needing a workaround, this works for us now: add these 2 annotations to the configmap generated by Kustomize or Helm

  annotations:
    argocd.argoproj.io/compare-options: IgnoreExtraneous
    argocd.argoproj.io/sync-options: Prune=false

If you are using configMapGenerator or secretGenerator, you can add these lines to kustomization.yaml:

generatorOptions:
  annotations:
    argocd.argoproj.io/compare-options: IgnoreExtraneous
    argocd.argoproj.io/sync-options: Prune=false

Keep in mind that ArgoCD won't prune old ConfigMap(s) anymore, so they will start to pile up if you change the ConfigMap a lot. But once in a while you can prune them manually, or write an ArgoCD hook job to clean them up when the sync is complete (= rollout is promoted), as sketched below.
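Something like this (everything here is hypothetical: the rollout name, the app=guestbook label, the service account, and the assumption that the ConfigMap is the Rollout's first volume):

apiVersion: batch/v1
kind: Job
metadata:
  name: prune-stale-configmaps
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      serviceAccountName: configmap-pruner  # hypothetical SA with list/delete on configmaps
      restartPolicy: Never
      containers:
      - name: prune
        image: bitnami/kubectl:latest
        command: ["/bin/sh", "-c"]
        args:
        - |
          # keep the ConfigMap currently referenced by the Rollout, delete older generated ones
          current=$(kubectl get rollout guestbook -o jsonpath='{.spec.template.spec.volumes[0].configMap.name}')
          kubectl get configmap -l app=guestbook -o name \
            | grep -v "configmap/${current}$" \
            | xargs -r kubectl delete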

@risinger

@tobernguyen have you tried PruneLast (on its own, without IgnoreExtraneous)? We currently append a SHA to our configmap names. The reference change in the Rollout spec triggers a rollout. After the rollout is complete, ArgoCD prunes the old configmap.
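For context, PruneLast can be applied per resource as an annotation on the generated ConfigMap (or app-wide in the Application's sync policy); a minimal sketch:

metadata:
  annotations:
    # prune this ConfigMap only after the rest of the sync has been
    # applied and become healthy
    argocd.argoproj.io/sync-options: PruneLast=true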

@tobernguyen

@tobernguyen have you tried PruneLast (on its own, without IgnoreExtraneous)? We currently append a SHA to our configmap names. The reference change in the Rollout spec triggers a rollout. After the rollout is complete, ArgoCD prunes the old configmap.

How will this work in the case that we abort the rollout? When we abort the rollout, ArgoCD will prune the old configmap because the sync is now complete, right?

@PhilippPlotnikov
Contributor

I would like to implement it

@Jolley71717

@tobernguyen have you tried PruneLast (on its own, without IgnoreExtraneous)? We currently append a SHA to our configmap names. The reference change in the Rollout spec triggers a rollout. After the rollout is complete, ArgoCD prunes the old configmap.

How will this work in the case that we abort the rollout? When we abort the rollout, ArgoCD will prune the old configmap because the sync is now complete, right?

Based on the description of how PruneLast works, I'd venture a guess that the old configmap is still present.

after the other resources have been deployed and become healthy, and after all other waves completed successfully

What I'd like to see is the addition of a revisionHistoryLimit that only lets x number of configmaps stay around before being pruned.

Similar to

spec:
  revisionHistoryLimit: 3

@abdennour

abdennour commented Nov 13, 2022

Since 2020, my method for this (I always use Helm with Argo CD) has been:

  1. In your Job, calculate the sha256sum of the ConfigMap X and compare it with its stored SHA (see step 2):

    • If they are the same, don't do anything.
    • If they are different, trigger whatever you want to trigger based on your need.
  2. Calculate the sha256sum of the ConfigMap X and store it in a centralized ConfigMap in a PostSync Job with a very high sync wave (sketched below).
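Here is roughly what step 2 could look like; the image, the ConfigMap names, and the central config-hashes ConfigMap are all assumptions on my part, and the comparison in step 1 would read the same stored value:

apiVersion: batch/v1
kind: Job
metadata:
  name: store-config-hash
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/sync-wave: "100"  # run after everything else
spec:
  template:
    spec:
      serviceAccountName: config-hash-writer  # hypothetical SA with get/patch on configmaps
      restartPolicy: Never
      containers:
      - name: hash
        image: bitnami/kubectl:latest
        command: ["/bin/sh", "-c"]
        args:
        - |
          # compute the hash of ConfigMap X's data and store it in the central ConfigMap
          new=$(kubectl get configmap my-config -o jsonpath='{.data}' | sha256sum | cut -d' ' -f1)
          kubectl patch configmap config-hashes --type merge \
            -p "{\"data\":{\"my-config\":\"$new\"}}"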

@zachaller
Collaborator

Going to close this in favor of external tooling managing this, such as https://github.com/stakater/Reloader, as well as other tools like Helm that can do the same thing.
