confusion around namespace replacement #880

Closed
donbowman opened this issue Mar 15, 2019 · 48 comments · Fixed by #4708
Labels
area/plugin · kind/feature · triage/under-consideration

Comments

@donbowman
Contributor

kustomize has the ability to replace all namespaces in one config by setting namespace: XXX.

Naively I thought this meant all unset namespaces, but it actually overwrites all of them.

This creates a challenge when you have something that uses more than one namespace (e.g. cert-manager, which installs one thing into kube-system and the rest into cert-manager).

I think maybe we want either:

  • namespace sets unset namespaces only
  • namespace works somewhat like image, e.g. oldNS: X -> newNS: Y

Although it is technically possible to json/strategic patch all objects, this is exceptionally tedious when there are many of them. A hypothetical sketch of both options follows below.
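
For illustration only, here is roughly how those two options might be spelled in kustomization.yaml. Neither syntax existed in kustomize when this was filed; both field names below are invented for illustration:

# Option 1 (hypothetical field): only fill in namespaces that are currently unset
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespaceUnsetOnly: my-app

# Option 2 (hypothetical field): rename namespaces the way images are renamed
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespaces:
- oldNamespace: kube-system
  newNamespace: my-system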

@Liujingfang1 added the kind/feature label Mar 18, 2019
@Liujingfang1
Contributor

namespace overwrites .metadata.namespace in all resources; it provides easy separation of resources between different environments. For some types of resources a namespace is not needed, or users don't want to overwrite it. Thus we need to be able to skip certain types of resources.

Currently, Kustomize skips adding a namespace for some hard-coded types: https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/gvk/gvk.go#L154.
This needs to be extended to allow users to specify a skip type.
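
For reference, a minimal sketch of the current overwrite behavior described above (file and namespace names hypothetical):

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod
resources:
- deployment.yaml  # whether its metadata.namespace is unset or already set to "dev",
                   # the built output comes out with metadata.namespace: prod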

@donbowman
Contributor Author

here it would not be a skip type, it would be a named entity.
e.g. I might have a setup that installs 1 pod into kube-system and the rest into its own namespace.

@iamsaso

iamsaso commented May 14, 2019

I have the same problem and I would like to use bases to link to dependencies but not change the dependency namespace. @donbowman were you able to get around this issue?

@rcrowe

rcrowe commented Jun 6, 2019

I've just hit the same problem when using a role binding whose subject is in a different namespace than the binding itself: the subject's namespace is being overwritten.
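
For illustration, a RoleBinding along these lines (all names hypothetical) exhibits the problem; the namespace transformer rewrites the subject's namespace along with metadata.namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-app-secrets
  namespace: app-ns
subjects:
- kind: ServiceAccount
  name: ci-runner
  namespace: ci-ns  # intended to stay ci-ns, but gets overwritten too
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader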

@donbowman
Contributor Author

another example: when using istio, I need to be able to have a Certificate and an IngressGateway in the istio-system namespace, but the rest of the material (including the config generator) in its own namespace.

so e.g. I have:

  • deployment
  • secretGenerator
  • certificate
  • ingressgateway

If I do not set a namespace in kustomization.yaml, I end up with the Secret from the secretGenerator in the default namespace while the Deployment that references it is not, so it doesn't work.
If I do set a namespace in kustomization.yaml, then my ingressgateway and certificate are rewritten into the wrong namespace.

It seems there is no way to make this work.
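
A minimal sketch of that conflict, with hypothetical file names:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-app       # needed so the generated Secret and the Deployment agree
resources:
- deployment.yaml
- certificate.yaml      # should land in istio-system...
- gateway.yaml          # ...but both get rewritten to my-app
secretGenerator:
- name: ingress-tls
  files:
  - tls.crt
  - tls.key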

@quentinleclerc

Hello,
I've been facing issues with the namespace override also.

Wouldn't it be possible to add an option to the namespace key in kustomization.yaml to specify whether we want it to override or not?

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: 
  overwrite: <true | false>
  name: <namespace>

@Russell-IO

Wouldn't it be possible to add an option to the namespace key in kustomization.yaml to specify whether we want it to override or not?

As long as it defaults to false to preserve current behaviour

@donbowman
Contributor Author

For interest, I created a transformer to do this for me:

#!/usr/bin/env python3

import sys
import yaml

# Load the transformer config kustomize passes as argv[1]; it carries the target namespace.
with open(sys.argv[1], "r") as stream:
    try:
        data = yaml.safe_load(stream)
    except yaml.YAMLError:
        print("Error parsing NamespaceTransformer input", file=sys.stderr)
        sys.exit(1)

# See kubectl api-resources --namespaced=false
blacklist = [
    "ComponentStatus",
    "Namespace",
    "Node",
    "PersistentVolume",
    "MutatingWebhookConfiguration",
    "ValidatingWebhookConfiguration",
    "CustomResourceDefinition",
    "APIService",
    "MeshPolicy",
    "TokenReview",
    "SelfSubjectAccessReview",
    "SelfSubjectRulesReview",
    "SubjectAccessReview",
    "CertificateSigningRequest",
    "ClusterIssuer",
    "BGPConfiguration",
    "ClusterInformation",
    "FelixConfiguration",
    "GlobalBGPConfig",
    "GlobalFelixConfig",
    "GlobalNetworkPolicy",
    "GlobalNetworkSet",
    "HostEndpoint",
    "IPPool",
    "PodSecurityPolicy",
    "NodeMetrics",
    "PodSecurityPolicy",
    "ClusterRoleBinding",
    "ClusterRole",
    "ClusterRbacConfig",
    "PriorityClass",
    "StorageClass",
    "VolumeAttachment",
]

try:
    # Add the namespace to every namespaced object that doesn't already have one.
    for yaml_input in yaml.safe_load_all(sys.stdin):
        if yaml_input["kind"] not in blacklist:
            if "namespace" not in yaml_input["metadata"]:
                yaml_input["metadata"]["namespace"] = data["namespace"]
        print("---")
        print(yaml.dump(yaml_input, default_flow_style=False))
except yaml.YAMLError as exc:
    print("Error parsing YAML input\n\n%s\n\n" % exc, file=sys.stderr)
    sys.exit(1)

@flunderveiko

Can you please describe how you include the transformer in a kustomize build run?

@donbowman
Contributor Author

donbowman commented Jul 3, 2019

  1. put the transformer in ~/.config/kustomize/plugin/agilicus/v1/namespacetransformer/
  2. add a yaml file like the one below, e.g. as 'set-ns.yaml'
  3. add 'transformers:\n - set-ns.yaml' to kustomization.yaml (see the sketch after the config below)
---
apiVersion: agilicus/v1
kind: NamespaceTransformer
metadata:
  name: not-used-ns
namespace: foobar
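
For step 3, the resulting kustomization.yaml looks something like this (the deployment.yaml resource is a hypothetical stand-in):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
transformers:
- set-ns.yaml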

I need to get around to creating a github repo with my generator + transformers.

edit: they are here

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Oct 3, 2019
krishnadurai added a commit to krishnadurai/manifests that referenced this issue Oct 4, 2019
@invidian
Member

invidian commented Oct 4, 2019

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Oct 4, 2019
krishnadurai added a commit to krishnadurai/manifests that referenced this issue Oct 12, 2019
krishnadurai added a commit to krishnadurai/manifests that referenced this issue Oct 17, 2019
k8s-ci-robot pushed a commit to kubeflow/manifests that referenced this issue Oct 17, 2019
…369)

* Upgrades cert-manager to v0.10.0
Structures CertificateIssuers into 2 overlays:
- letsencrypt
- self-signed

* Moves namespace resource to cert-manager manifests instead of crds
Corrects entries for namespace param substitution

* Namespaces now governed through params and not Kustomization namespace
Webhook Role Binding Namespace added
caBundle field set in Webhook configuration for validation to pass in kubectl apply

* Replaces apiGroup extensions with networking.k8s.io for 1.16 K8s

* Remove namespace parameter from kustomization metadata.namespace

* Restores namespace attribute to cert-manager manifest

* Separates cert-manager's webhook role binding due to kustomization namespace issue
Refer: kubernetes-sigs/kustomize#880

* Removes trailing 's' from resource file names

* Strips off --- from beginning of the files

* Splits cert-manager kustomization into cert-manager, webhook and ca-injector

* Applications for Cert-Manager, Webhook and CA-Injector
Removes labels and applies them through Applications

* Modifies tests for cert-manager/cert-manager application

* Updated tests for cert-manager application overlays
Corrected params

* Addressing review comments

* Consolidates under application cert-manager

* Removing unnecessary yaml formatting

* Adds missing selector in cert-manager service

* Minor correction in unit test

* Upgrades certmanager to v0.11.0

* Fixes validatingwebhookconfiguration for certmanager webhook

* Makes corrections to certmanager-kube-system resources
Tests updated

* Fix removing cert-manager-leaderelection role
Adds extensions back to ingresses in controller-challenges clusterrole
Images in Kustomization set to v0.11.0

* Corrects cert-manager-controller-challenges clusterrole

* Updates overlays with new APIVersion

* New convention for application overlay
Addresses review comments

* Removes cert-manager-kube-system-resources overlay as it is not an application

* Corrects cert-manager application name
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 2, 2020
@invidian
Member

invidian commented Jan 2, 2020

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jan 2, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Apr 1, 2020
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Sep 7, 2021
@invidian
Member

invidian commented Sep 7, 2021

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Sep 7, 2021
@harindaka

I just hit this issue with my ingress as well. Is there an ETA for a fix?

@benjamin-wright

FWIW, I've had this problem and the best workaround I could find was to patch over the overridden namespace:

patchesJson6902:
  - target:
      group: ""
      version: v1
      kind: ConfigMap
      name: my-config-name
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: other-namespace

Not great if you've got a ton of resources, but it works in a pinch for a couple.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Feb 10, 2022
@paullryan

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Feb 10, 2022
@vvatlin

vvatlin commented Mar 27, 2022

I have the same issue: my Istio Gateway and VirtualService are in one namespace, but the Certificate has to be in the istio-ingress namespace.
Can't do it right now with Kustomize.

@stianlagstad

I found this issue while trying to figure out why Kustomize wasn't overriding the namespace for a MutatingWebhookConfiguration resource (I'm using https://github.com/influxdata/telegraf-operator/blob/v1.3.6/deploy/dev.yml). I ended up solving it like this, thanks to benjamin-wright above:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namespace: observability

# For some reason the above `namespace: observability` doesn't update the namespace for the MutatingWebhookConfiguration
# resource, so we do that with a patch:
patchesJson6902:
  - target:
      group: ""
      version: v1
      kind: MutatingWebhookConfiguration
      name: telegraf-operator
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: observability

@Moulick

Moulick commented Jul 12, 2023

@stianlagstad unless I am mistaken, MutatingWebhookConfiguration is a cluster-scoped object, so adding a namespace to it is incorrect, which is why kustomize ignores it.

@RyanSquared
Contributor

@Moulick you're right, I've found an alternative solution: https://github.com/james-callahan/cert-manager-kustomize/tree/main/webhook#usage

@mathe-matician

As @benjamin-wright mentioned, the following works:

patchesJson6902:
  - target:
      group: ""
      version: v1
      kind: ConfigMap
      name: my-config-name
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: other-namespace

When adding this, I get the output:

# Warning: 'patchesJson6902' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.

Using patches, though, doesn't work:

patches:
  - target:
      group: ""
      version: v1
      kind: ConfigMap
      name: my-config-name
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: other-namespace

Has anyone found a workaround for this using patches?

@quixoten

@mathe-matician

Using the transformers field to do the patch seems to work:

transformers:
- |-
  apiVersion: builtin
  kind: PatchTransformer
  metadata:
    name: fix-cert-namespace
  patch: '[{"op": "replace", "path": "/metadata/namespace", "value": "istio-system"}]'
  target:
    group: cert-manager.io
    kind: Certificate
