From 409e77d3e5a68a71b716412aa0fea56e6b14af73 Mon Sep 17 00:00:00 2001
From: Jennifer Rondeau
Date: Mon, 26 Mar 2018 21:33:11 -0400
Subject: [PATCH] Merge 1.10 to master for release (#7861)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* 1.10 update (#7151)
* Fix partition value expected behaviour explanation (#7123) Fixes issue #7057
* Correct "On-Premise" to "On-Premises"
* Updates the Calico installation page (#7094)
* All files for Haufe Groups case study (#7051)
* Fix typo (#7127)
* fix typo of device-plugins.md (#7106)
* fix broken links (#7136)
* Updated configure-service-account (#7147) "Error from server" resolved by escaping the kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}' JSON string with '\'
* Remove docs related to 'require-kubeconfig' (#7138) With kubernetes/kubernetes#58367 merged, v1.10 will not use the "require-kubeconfig" flag. The flag has become a no-op solely to ensure existing deployments won't break.
* Added Verification Scenario for a Pod that Uses a PVC in Terminating State (#7164) The PR https://github.com/kubernetes/kubernetes/pull/55873 modified the scheduler in such a way that scheduling of a pod that uses a PVC in Terminating state fails. That's why verification of such a scenario was added to the documentation.
* fix LimitPodHardAntiAffinityTopology name (#7221)
* Document the removal of the KubeletConfigFile feature gate (#7140) With kubernetes/kubernetes#58978 merged, the said feature gate is removed. This PR removes texts related to the gate and revises the Feature Gates reference to reflect this change.
* deprecate three admission controllers (#7363)
* Document the removal of Accelerators feature gate (#7389) The `Accelerators` feature gate will be removed in 1.11. 1.10 will be its last mile. References: kubernetes/kubernetes#57384
* Update local storage docs for beta (#7473)
* Document that HugePages feature gate is Beta (#7387) The `HugePages` feature gate has graduated to Beta in v1.10. This PR documents this fact.
* Add HyperVContainer feature gates (#7502)
* Remove the beta reference from Taints and Tolerations doc (#7493)
* Kms provider doc (#7479) * Kms provider doc * issue# 7399, Create KMS-provider.md and update encrypt-data.md * address review comments
* Document that Device Plugin feature is Beta (1.10) (#7512)
* Add docs for CRD features for 1.10 (#7439) * Add docs for CRD features for 1.10 * Add CustomResourcesSubresources to list of feature gates * Add latest changes to custom resources doc
* Add crds as abbreviated alias (#7437)
* Bring PVC Protection Feature to Beta (#7165) * Bring PVC Protection Feature to Beta The PR https://github.com/kubernetes/kubernetes/pull/59052 brought the PVC Protection feature to beta. That's why the documentation is updated accordingly. * The PVC Protection feature was renamed to Storage Protection. That's why the documentation is updated.
* promote PodNodeSelector to stable; document detailed behavior (#7134) * promote PodNodeSelector to stable; document detailed behavior * respond to feedback
* Update CPU manager feature enabling (#7390) With the `CPUManager` feature graduating to beta, no explicit enabling is required starting with v1.10. References: kubernetes/kubernetes#55977
* Adding block volumeMode documentation for local volumes. (#7531) Code review comments. Changed property to field. Address tech review comment.
* remove description of kubectl --show-all (#7574) --show-all has been deprecated and set to true by default. https://github.com/kubernetes/kubernetes/pull/60210
* fix description about contribute style guide (#7592)
* fix description about KUBECONFIG (#7589) s/envrionment/environment
* fix description about cni (#7588) s/simultanously/simultaneously/
* fix description about MutatingAdmissionWebhook and ValidatingAdmissionWebhook (#7587)
* fix description about persistent volume binding (#7590) s/slighty/slightly/
* Doc change for configurable pod resolv.conf Beta (#7611)
* fix description about out of resource handling (#7597) s/threshhold/threshold
* fix description about zookeeper (#7598) s/achive/achieve
* fix description about kubeadm (#7594) s/compatability/compatibility/
* fix description about kubeadm (#7593)
* fix description about kubeadm implementation details (#7595)
* fix description about api concepts (#7596)
* Storage Protection was renamed to Storage Object in Use Protection (#7576) * Storage Protection was renamed to Storage Object in Use Protection The K8s PR https://github.com/kubernetes/kubernetes/pull/59901 renamed Storage Protection to Storage Object in Use Protection. That's why the same is also renamed in the documentation. * Moved the Storage Object in Use Protection admission plugin description down according to alphabetical order.
* Use PSP from policy API group. (#7562)
* update kubeletconfig docs for v1.10, beta (#7561)
* Update port-forwarding docs (#7575)
* add pv protection description (#7620)
* fix description about client library (#7634)
* Add docs on configuring NodePort IP (#7631)
* Document that LocalStorageCapacityIsolation is beta (#7635) A follow-up to the kubernetes/kubernetes#60159 change, which promoted the `LocalStorageCapacityIsolation` feature gate to Beta.
* Update CoreDNS docs for beta (#7638) * Update CoreDNS docs for beta * Review comments
* Fix typo (#7640)
* Update feature gates move to beta (#7662)
* Documented the inability to use the colon ':' character in environment variable names and described a workaround (#7657)
* merge master to 1.10, with fixes (#7682)
* Flag names changed (s/admission-control/enable-admission-plugins); disable-admission-plugins entry added; removed reference to admission controllers/plugins requiring set order (for v1.10), redundant example enabling a specific plugin, and redundant version-specific info (#7449)
* Documentation for MountPropagation beta (#7655)
* Remove job's scale-related operations (#7684)
* authentication: document client-go exec plugins (#7648) * authentication: document client-go exec plugins * Update authentication.md
* Update local ephemeral storage feature to beta (#7685) Update local ephemeral storage feature to beta
* Update docs for windows container resources (#7653)
* add server-side print docs (#7671)
* Create a task describing Pod process namespace sharing (#7489)
* Add external metrics to HPA docs (#7664) * Add external metrics to HPA docs * Update horizontal-pod-autoscale-walkthrough.md * Apply review comments to HPA walkthrough
* remove description about "scale jobs" (#7712)
* CSI Docs for K8s v1.10 (#7698)
* Add a warning about increased memory consumption for audit logging feature. (#7725) Signed-off-by: Mik Vyatskov
* Update Audit Logging documentation for 1.10 (#7679) Signed-off-by: Mik Vyatskov
* Fix stage names in audit logging documentation (#7746) Signed-off-by: Mik Vyatskov
* Feature gate update for release 1.10 (#7742)
* State in the docs that the values of default Node labels are not reliable. (#7794)
* Kill the reference to --admission-control option (#7755) The `--admission-control` option has been replaced by two new options in v1.10. This PR kills the last appearance of the old option in the doc.
* Pvcprotection toc (#7807)
* Refreshing installation instructions (#7495) * Refreshing installation instructions Added conjure-up. Updated displays and juju versions to current versions. * Updated anchors
* Fixed image value version typo (#7768) Was inconsistent with other values
* Update flocker reference to the github repo (#7784)
* Fix typo in federation document (#7779)
* an user -> a user (#7778)
* Events are namespaced (#7767)
* fix broken 'monitoring' link (#7764)
* docs/concepts/policy/pod-security-policy.md: minor fix. (#7659)
* Update downward-api-volume-expose-pod-information.md (#7771) * Update downward-api-volume-expose-pod-information.md The pod spec puts the downward api files into /etc/podinfo, not directly in /etc. Updated docs to reflect this fact. * Update downward-api-volume-expose-pod-information.md One more spot needed fixing. * Update downward-api-volume-expose-pod-information.md Yet another fix, in the container example.
* Add Amadeus Case Study (#7783) * Add Amadeus Case Study * add Amadeus logo
* Fixed Cyrillic с in 'kube-proxy-cm' (#7787) There was a typo (wrong character) in kube-proxy-cm.yaml - Cyrillic с (UTF-8 0x0441) was used instead of Latin c.
* install-kubectl: choose one installation method (#7705) The previous text layout suggested that all installations had to be done, one after another.
* Update install-kubeadm.md (#7781) Add a note to the kubeadm install instructions to help install on other architectures, e.g. aarch64, ppc64le, etc.
* repair broken link (#7788) * repair broken link * repair broken link * do change as required
* Update k8s201.md (#7777) * Update k8s201.md Change instructions to download yaml files directly from the website (as used in other pages.) Added instructions to delete the labeled pod to avoid warnings in the subsequent deployment step. * Update k8s201.md Added an example of using the exposed host from a node running Kubernetes. (This works on AWS with Weave; not able to test it on other variations...)
* Grammatical fix to kompose introduction (#7792) The original wording didn't read well. As much of the original sentence has been preserved as possible, primarily to ensure the kompose web address is seen both in text and as an href link.
* update amadeus.html (#7800)
* Fix a missing word in endpoint reconciler section (#7804)
* add toc entry for pvcprotection downgrade issue doc
* Pvcprotection toc (#7809)
* Refreshing installation instructions (#7495) * Refreshing installation instructions Added conjure-up. Updated displays and juju versions to current versions. * Updated anchors
* Fixed image value version typo (#7768) Was inconsistent with other values
* Update flocker reference to the github repo (#7784)
* Fix typo in federation document (#7779)
* an user -> a user (#7778)
* Events are namespaced (#7767)
* fix broken 'monitoring' link (#7764)
* docs/concepts/policy/pod-security-policy.md: minor fix. (#7659)
* Update downward-api-volume-expose-pod-information.md (#7771) * Update downward-api-volume-expose-pod-information.md The pod spec puts the downward api files into /etc/podinfo, not directly in /etc. Updated docs to reflect this fact. * Update downward-api-volume-expose-pod-information.md One more spot needed fixing. * Update downward-api-volume-expose-pod-information.md Yet another fix, in the container example.
* Add Amadeus Case Study (#7783) * Add Amadeus Case Study * add Amadeus logo
* Fixed Cyrillic с in 'kube-proxy-cm' (#7787) There was a typo (wrong character) in kube-proxy-cm.yaml - Cyrillic с (UTF-8 0x0441) was used instead of Latin c.
* install-kubectl: choose one installation method (#7705) The previous text layout suggested that all installations had to be done, one after another.
* Update install-kubeadm.md (#7781) Add a note to the kubeadm install instructions to help install on other architectures, e.g. aarch64, ppc64le, etc.
* repair broken link (#7788) * repair broken link * repair broken link * do change as required
* Update k8s201.md (#7777) * Update k8s201.md Change instructions to download yaml files directly from the website (as used in other pages.) Added instructions to delete the labeled pod to avoid warnings in the subsequent deployment step. * Update k8s201.md Added an example of using the exposed host from a node running Kubernetes. (This works on AWS with Weave; not able to test it on other variations...)
* Grammatical fix to kompose introduction (#7792) The original wording didn't read well. As much of the original sentence has been preserved as possible, primarily to ensure the kompose web address is seen both in text and as an href link.
* update amadeus.html (#7800)
* Fix a missing word in endpoint reconciler section (#7804)
* add toc entry for pvcprotection downgrade issue doc
* revert TOC change
* Release 1.10 (#7818)
* Refreshing installation instructions (#7495) * Refreshing installation instructions Added conjure-up. Updated displays and juju versions to current versions. * Updated anchors
* Fixed image value version typo (#7768) Was inconsistent with other values
* Update flocker reference to the github repo (#7784)
* Fix typo in federation document (#7779)
* an user -> a user (#7778)
* Events are namespaced (#7767)
* fix broken 'monitoring' link (#7764)
* docs/concepts/policy/pod-security-policy.md: minor fix. (#7659)
* Update downward-api-volume-expose-pod-information.md (#7771) * Update downward-api-volume-expose-pod-information.md The pod spec puts the downward api files into /etc/podinfo, not directly in /etc. Updated docs to reflect this fact. * Update downward-api-volume-expose-pod-information.md One more spot needed fixing. * Update downward-api-volume-expose-pod-information.md Yet another fix, in the container example.
* Add Amadeus Case Study (#7783) * Add Amadeus Case Study * add Amadeus logo
* Fixed Cyrillic с in 'kube-proxy-cm' (#7787) There was a typo (wrong character) in kube-proxy-cm.yaml - Cyrillic с (UTF-8 0x0441) was used instead of Latin c.
* install-kubectl: choose one installation method (#7705) The previous text layout suggested that all installations had to be done, one after another.
* Update install-kubeadm.md (#7781) Add a note to the kubeadm install instructions to help install on other architectures, e.g. aarch64, ppc64le, etc.
* repair broken link (#7788) * repair broken link * repair broken link * do change as required
* Update k8s201.md (#7777) * Update k8s201.md Change instructions to download yaml files directly from the website (as used in other pages.) Added instructions to delete the labeled pod to avoid warnings in the subsequent deployment step. * Update k8s201.md Added an example of using the exposed host from a node running Kubernetes. (This works on AWS with Weave; not able to test it on other variations...)
* Grammatical fix to kompose introduction (#7792) The original wording didn't read well. As much of the original sentence has been preserved as possible, primarily to ensure the kompose web address is seen both in text and as an href link.
* update amadeus.html (#7800)
* Fix a missing word in endpoint reconciler section (#7804)
* Partners page updates (#7802) * Partners page updates * Update to ZTE link
* Make using sysctls a task instead of a concept (#6808) Closes: #4505
* add a note when mounting a configmap to a pod (#7745)
* adjust a note format (#7812)
* Update docker-cli-to-kubectl.md (#7748) * Update docker-cli-to-kubectl.md Edited the document for adherence to the style guide and word usage. * Update docker-cli-to-kubectl.md * Incorporated the changes suggested.
* Mount propagation update to include docker config (#7854)
* update overridden config for 1.10 (#7847) * update overridden config for 1.10 * fix config file per comments
* Update Extended Resource doc wrt cluster-level resources (#7759)
--- OWNERS | 1 + _config.yml | 18 +- _data/reference.yml | 6 +- _data/setup.yml | 5 + _data/tasks.yml | 3 + .../kubelet-authentication-authorization.md | 8 +- cn/docs/admin/kubelet-tls-bootstrapping.md | 1 - .../administer-cluster/kubelet-config-file.md | 2 +- cn/docs/user-guide/kubectl-overview.md | 2 +- docs/admin/admission-controllers.md | 76 +++-- docs/admin/authentication.md | 156 +++++++++ docs/admin/authorization/index.md | 2 +- docs/admin/authorization/node.md | 2 +- docs/admin/authorization/rbac.md | 1 + .../admin/extensible-admission-controllers.md | 165 ++++++++- .../high-availability/kube-apiserver.yaml | 2 +- .../kubelet-authentication-authorization.md | 8 +- docs/admin/kubelet-tls-bootstrapping.md | 1 - .../api-extension/custom-resources.md | 4 +- .../cluster-administration/device-plugins.md | 7 +- .../concepts/configuration/assign-pod-node.md | 5 + .../manage-compute-resources-container.md | 106 ++++-- .../configuration/pod-priority-preemption.md | 2 +- .../configuration/taint-and-toleration.md | 2 +- docs/concepts/policy/example-psp.yaml | 2 +- docs/concepts/policy/pod-security-policy.md | 4 +- docs/concepts/policy/privileged-psp.yaml | 2 +- docs/concepts/policy/resource-quotas.md | 4 +- docs/concepts/policy/restricted-psp.yaml | 2 +- .../services-networking/custom-dns.yaml | 2 +- .../services-networking/dns-pod-service.md | 10 +- docs/concepts/services-networking/service.md | 4 +- docs/concepts/storage/persistent-volumes.md | 35 +- docs/concepts/storage/storage-classes.md | 10 +- docs/concepts/storage/volumes.md | 229 ++++++++----- .../controllers/jobs-run-to-completion.md | 17 +- docs/concepts/workloads/pods/podpreset.md | 2 +- .../coreos/cloud-configs/master.yaml | 2 +- docs/getting-started-guides/scratch.md | 2 +- docs/getting-started-guides/windows/index.md | 87 ++++- docs/reference/api-concepts.md | 57 ++++ docs/reference/feature-gates.md | 66 +++- docs/reference/index.md | 3 +- docs/reference/kubectl/cheatsheet.md | 2 +- docs/reference/kubectl/overview.md | 23 ++ .../kubeadm/implementation-details.md | 2 +- docs/setup/pick-right-solution.md | 1 + ...port-forward-access-application-cluster.md | 78 +++-- .../extend-api-custom-resource-definitions.md | 216 +++++++++++- docs/tasks/administer-cluster/coredns.md | 22 +- .../cpu-management-policies.md | 5 +- docs/tasks/administer-cluster/encrypt-data.md | 3 +- docs/tasks/administer-cluster/kms-provider.md | 181 ++++++++++ .../administer-cluster/kubelet-config-file.md | 65 ++-- .../administer-cluster/reconfigure-kubelet.md |
113 +++---- .../running-cloud-controller.md | 2 +- .../storage-object-in-use-protection.md | 315 ++++++++++++++++++ .../configure-service-account.md | 2 +- .../share-process-namespace.md | 111 ++++++ .../share-process-namespace.yaml | 17 + docs/tasks/debug-application-cluster/audit.md | 83 ++++- .../set-up-placement-policies-federation.md | 4 +- .../job/parallel-processing-expansion.md | 9 +- docs/tasks/manage-gpus/scheduling-gpus.md | 7 +- .../manage-hugepages/scheduling-hugepages.md | 6 +- .../horizontal-pod-autoscale-walkthrough.md | 35 +- .../horizontal-pod-autoscale.md | 6 +- docs/tutorials/clusters/apparmor.md | 7 +- test/examples_test.go | 3 + 69 files changed, 2043 insertions(+), 400 deletions(-) create mode 100644 docs/tasks/administer-cluster/kms-provider.md create mode 100644 docs/tasks/administer-cluster/storage-object-in-use-protection.md create mode 100644 docs/tasks/configure-pod-container/share-process-namespace.md create mode 100644 docs/tasks/configure-pod-container/share-process-namespace.yaml diff --git a/OWNERS b/OWNERS index cd20432f48b1c..2dabed0d93910 100644 --- a/OWNERS +++ b/OWNERS @@ -2,6 +2,7 @@ reviewers: - tengqm - zhangxiaoyu-zidif - xiangpengzhao +- bradtopol approvers: - heckj - bradamant3 diff --git a/_config.yml b/_config.yml index cd8413a4dcb4a..d0decddd54065 100644 --- a/_config.yml +++ b/_config.yml @@ -13,22 +13,27 @@ incremental: true safe: false lsi: false -latest: "v1.9" +latest: "v1.10" defaults: - scope: path: "" values: - fullversion: "v1.9.0" - version: "v1.9" + fullversion: "v1.10.0" + version: "v1.10" githubbranch: "master" docsbranch: "master" versions: + - fullversion: "v1.10.0" + version: "v1.10" + githubbranch: "v1.10.0" + docsbranch: "release-1.10" + url: https://kubernetes.io - fullversion: "v1.9.0" version: "v1.9" githubbranch: "v1.9.0" docsbranch: "release-1.9" - url: https://kubernetes.io + url: https://v1-9.docs.kubernetes.io - fullversion: "v1.8.4" version: "v1.8" githubbranch: "v1.8.4" @@ -44,11 +49,6 @@ defaults: githubbranch: "v1.6.8" docsbranch: "release-1.6" url: https://v1-6.docs.kubernetes.io - - fullversion: "v1.5.7" - version: "v1.5" - githubbranch: "v1.5.7" - docsbranch: "release-1.5" - url: https://v1-5.docs.kubernetes.io deprecated: false currentUrl: https://kubernetes.io/docs/home/ nextUrl: http://kubernetes-io-vnext-staging.netlify.com/ diff --git a/_data/reference.yml b/_data/reference.yml index 879164011354a..2da8e55fe3b6e 100644 --- a/_data/reference.yml +++ b/_data/reference.yml @@ -32,10 +32,10 @@ toc: - docs/reference/workloads-18-19.md - title: API Reference - landing_page: /docs/api-reference/v1.8/ + landing_page: /docs/api-reference/v1.10/ section: - - title: v1.9 - path: /docs/reference/generated/kubernetes-api/v1.9/ + - title: v1.10 + path: /docs/reference/generated/kubernetes-api/v1.10/ - docs/reference/labels-annotations-taints.md - title: OpenAPI and Swagger section: diff --git a/_data/setup.yml b/_data/setup.yml index ec85fcd791aff..f09b31417d87f 100644 --- a/_data/setup.yml +++ b/_data/setup.yml @@ -11,6 +11,11 @@ toc: - docs/imported/release/notes.md - docs/setup/building-from-source.md +- title: Version 1.10 Troubleshooting + landing_page: /docs/reference/pvc-finalizer-downgrade-issue/ + section: + - docs/reference/pvc-finalizer-downgrade-issue.md + - title: Independent Solutions landing_page: /docs/getting-started-guides/minikube/ section: diff --git a/_data/tasks.yml b/_data/tasks.yml index d7c03e36ee370..4c1175e47fe24 100644 --- a/_data/tasks.yml +++ b/_data/tasks.yml @@ -32,6 +32,7 @@ toc: - 
docs/tasks/configure-pod-container/configure-pod-initialization.md - docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md - docs/tasks/configure-pod-container/configure-pod-configmap.md + - docs/tasks/configure-pod-container/share-process-namespace.md - docs/tools/kompose/user-guide.md - title: Inject Data Into Applications @@ -163,6 +164,7 @@ toc: - docs/tasks/administer-cluster/reserve-compute-resources.md - docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md - docs/tasks/administer-cluster/declare-network-policy.md + - docs/tasks/administer-cluster/kms-provider.md - title: Install Network Policy Provider section: - docs/tasks/administer-cluster/calico-network-policy.md @@ -184,6 +186,7 @@ toc: - docs/tasks/administer-cluster/dns-custom-nameservers.md - docs/tasks/administer-cluster/dns-debugging-resolution.md - docs/tasks/administer-cluster/pvc-protection.md + - docs/tasks/administer-cluster/storage-object-in-use-protection.md - title: Federation - Run an App on Multiple Clusters landing_page: /docs/tasks/federation/set-up-cluster-federation-kubefed/ diff --git a/cn/docs/admin/kubelet-authentication-authorization.md b/cn/docs/admin/kubelet-authentication-authorization.md index 03ab5fd64b70b..4df5a3654911c 100644 --- a/cn/docs/admin/kubelet-authentication-authorization.md +++ b/cn/docs/admin/kubelet-authentication-authorization.md @@ -33,11 +33,9 @@ To enable X509 client certificate authentication to the kubelet's HTTPS endpoint To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint: * ensure the `authentication.k8s.io/v1beta1` API group is enabled in the API server -* start the kubelet with the `--authentication-token-webhook`, `--kubeconfig`, and `--require-kubeconfig` flags +* start the kubelet with the `--authentication-token-webhook` and the `--kubeconfig` flags * the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens -**Note:** The flag `--require-kubeconfig` is deprecated as of Kubernetes 1.8, this will be removed in a future version. You no longer need to use `--require-kubeconfig` in Kubernetes 1.8. - ## Kubelet authorization Any request that is successfully authenticated (including an anonymous request) is then authorized. The default authorization mode is `AlwaysAllow`, which allows all requests. @@ -51,11 +49,9 @@ There are many possible reasons to subdivide access to the kubelet API: To subdivide access to the kubelet API, delegate authorization to the API server: * ensure the `authorization.k8s.io/v1beta1` API group is enabled in the API server -* start the kubelet with the `--authorization-mode=Webhook`, `--kubeconfig`, and `--require-kubeconfig` flags +* start the kubelet with the `--authorization-mode=Webhook` and the `--kubeconfig` flags * the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized -**Note:** The flag `--require-kubeconfig` is deprecated as of Kubernetes 1.8, this will be removed in a future version. You no longer need to use `--require-kubeconfig` in Kubernetes 1.8. - The kubelet authorizes API requests using the same [request attributes](/docs/admin/authorization/#request-attributes) approach as the apiserver. 
The verb is determined from the incoming request's HTTP verb: diff --git a/cn/docs/admin/kubelet-tls-bootstrapping.md b/cn/docs/admin/kubelet-tls-bootstrapping.md index 3297da10cdb5d..92abe801d6d6a 100644 --- a/cn/docs/admin/kubelet-tls-bootstrapping.md +++ b/cn/docs/admin/kubelet-tls-bootstrapping.md @@ -190,7 +190,6 @@ When starting the kubelet, if the file specified by `--kubeconfig` does not exis **Note:** The following flags are required to enable this bootstrapping when starting the kubelet: ``` ---require-kubeconfig --bootstrap-kubeconfig="/path/to/bootstrap/kubeconfig" ``` diff --git a/cn/docs/tasks/administer-cluster/kubelet-config-file.md b/cn/docs/tasks/administer-cluster/kubelet-config-file.md index afbe3dba20581..08aaf24057944 100644 --- a/cn/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/cn/docs/tasks/administer-cluster/kubelet-config-file.md @@ -45,7 +45,7 @@ title: 通过配置文件设置 Kubelet 参数 ## 启动通过配置文件配置的 Kubelet 进程 -启动 Kubelet,需要打开 `KubeletConfigFile` 特性开关(feature gate)并将其 `--init-config-dir` 标志设置为包含 `kubelet` 文件的文件夹路径。Kubelet 将从 `kubelet` 文件中读取由 `KubeletConfiguration` 定义的参数,而不是从参数相关的命令行标志中读取。 +启动 Kubelet 需要将其 `--init-config-dir` 标志设置为包含 `kubelet` 文件的文件夹路径。Kubelet 将从 `kubelet` 文件中读取由 `KubeletConfiguration` 定义的参数,而不是从参数相关的命令行标志中读取。 {% endcapture %} diff --git a/cn/docs/user-guide/kubectl-overview.md b/cn/docs/user-guide/kubectl-overview.md index 704999df096c7..c5c1c25379ea9 100644 --- a/cn/docs/user-guide/kubectl-overview.md +++ b/cn/docs/user-guide/kubectl-overview.md @@ -93,7 +93,7 @@ Operation | Syntax | Description `configmaps` |`cm` `controllerrevisions` | `cronjobs` | -`customresourcedefinition` |`crd` +`customresourcedefinition` |`crd`, `crds` `daemonsets` |`ds` `deployments` |`deploy` `endpoints` |`ep` diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md index 4aeacc2c84589..6a88e6b17a21d 100644 --- a/docs/admin/admission-controllers.md +++ b/docs/admin/admission-controllers.md @@ -31,8 +31,7 @@ controllers may modify the objects they admit; validating controllers may not. The admission control process proceeds in two phases. In the first phase, mutating admission controllers are run. In the second phase, validating admission controllers are run. Note again that some of the controllers are -both. In both phases, the controllers are run in the order specified by the -`--admission-control` flag of `kube-apiserver`. +both. If any of the controllers in either phase reject the request, the entire request is rejected immediately and an error is returned to the end-user. @@ -54,13 +53,12 @@ support all the features you expect. ## How do I turn on an admission controller? -The Kubernetes API server supports a flag, `admission-control` that takes a comma-delimited, -ordered list of admission control choices to invoke prior to modifying objects in the cluster. -For example, the following command line turns on the `NamespaceLifecycle` and the `LimitRanger` -admission controller: +The Kubernetes API server flag `enable-admission-plugins` takes a comma-delimited list of admission control plugins to invoke prior to modifying objects in the cluster. +For example, the following command line enables the `NamespaceLifecycle` and the `LimitRanger` +admission control plugins: ```shell -kube-apiserver --admission-control=NamespaceLifecyle,LimitRanger ... +kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...
``` **Note**: Depending on the way your Kubernetes cluster is deployed and how the @@ -70,11 +68,19 @@ deployed as a systemd service, you may modify the manifest file for the API server if Kubernetes is deployed in a self-hosted way. {: .note} +## How do I turn off an admission controller? + +The Kubernetes API server flag `disable-admission-plugins` takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default. + +```shell +kube-apiserver --disable-admission-plugins=PodNodeSelector,AlwaysDeny ... +``` + ## What does each admission controller do? -### AlwaysAdmit +### AlwaysAdmit (DEPRECATED) -Use this admission controller by itself to pass-through all requests. +Use this admission controller by itself to pass-through all requests. AlwaysAdmit is DEPRECATED because it has no real meaning. ### AlwaysPullImages @@ -86,9 +92,9 @@ scheduled onto the right node), without any authorization check against the imag is enabled, images are always pulled prior to starting containers, which means valid credentials are required. -### AlwaysDeny +### AlwaysDeny (DEPRECATED) -Rejects all requests. Used for testing. +Rejects all requests. AlwaysDeny is DEPRECATED because it has no real meaning. ### DefaultStorageClass @@ -134,7 +140,7 @@ enabling this admission controller. ### EventRateLimit (alpha) -This admission controller is introduced in v1.9 to mitigate the problem where the API server gets flooded by +This admission controller mitigates the problem where the API server gets flooded by event requests. The cluster admin can specify event rate limits by: * Ensuring that `eventratelimit.admission.k8s.io/v1alpha1=true` is included in the @@ -180,7 +186,7 @@ for more details. ### ExtendedResourceToleration -This plug-in is introduced in v1.9 to facilitate creation of dedicated nodes with extended resources. +This plug-in facilitates creation of dedicated nodes with extended resources. If operators want to create dedicated nodes with extended resources (like GPUs, FPGAs etc.), they are expected to taint the node with the extended resource name as the key. This admission controller, if enabled, automatically adds tolerations for such taints to pods requesting extended resources, so users don't have to manually add these tolerations. ### ImagePolicyWebhook -The ImagePolicyWebhook admission controller allows a backend webhook to make admission decisions. You enable this admission controller by setting the admission-control option as follows: - -```shell ---admission-control=ImagePolicyWebhook -``` +The ImagePolicyWebhook admission controller allows a backend webhook to make admission decisions. #### Configuration File Format @@ -314,7 +316,6 @@ In any case, the annotations are provided by the user and are not validated by K ### Initializers (alpha) -This admission controller is introduced in v1.7. The admission controller determines the initializers of a resource based on the existing `InitializerConfiguration`s. It sets the pending initializers by modifying the metadata of the resource to be created. @@ -331,7 +332,7 @@ The annotations added contain the information on what compute resources were aut See the [InitialResources proposal](https://git.k8s.io/community/contributors/design-proposals/autoscaling/initial-resources.md) for more details.
-### LimitPodHardAntiAffinity +### LimitPodHardAntiAffinityTopology This admission controller denies any pod that defines `AntiAffinity` topology key other than `kubernetes.io/hostname` in `requiredDuringSchedulingRequiredDuringExecution`. @@ -414,11 +415,7 @@ This admission controller also protects the access to `metadata.ownerReferences[ of an object, so that only users with "update" permission to the `finalizers` subresource of the referenced *owner* can change it. -### Persistent Volume Claim Protection (alpha) -{% assign for_k8s_version="v1.9" %}{% include feature-state-alpha.md %} -The `PVCProtection` plugin adds the `kubernetes.io/pvc-protection` finalizer to newly created Persistent Volume Claims (PVCs). In case a user deletes a PVC the PVC is not removed until the finalizer is removed from the PVC by PVC Protection Controller. Refer to the [PVC Protection](/docs/concepts/storage/persistent-volumes/#persistent-volume-claim-protection) for more detailed information. - -### PersistentVolumeLabel +### PersistentVolumeLabel (DEPRECATED) This admission controller automatically attaches region or zone labels to PersistentVolumes as defined by the cloud provider (for example, GCE or AWS). It helps ensure the Pods and the PersistentVolumes mounted are in the same region and/or zone. If the admission controller doesn't support automatic labelling your PersistentVolumes, you may need to add the labels manually to prevent pods from mounting volumes from -a different zone. +a different zone. PersistentVolumeLabel is DEPRECATED; labeling of persistent volumes has been taken over by the [cloud controller manager](/docs/tasks/administer-cluster/running-cloud-controller/). ### PodNodeSelector @@ -434,7 +431,7 @@ This admission controller defaults and limits what node selectors may be used wi #### Configuration File Format -PodNodeSelector uses a configuration file to set options for the behavior of the backend. +`PodNodeSelector` uses a configuration file to set options for the behavior of the backend. Note that the configuration file format will move to a versioned file in a future release. This file may be json or yaml and has the following format: ```yaml podNodeSelectorPluginConfig: namespace2: ``` -Reference the PodNodeSelector configuration file from the file provided to the API server's command line flag `--admission-control-config-file`: +Reference the `PodNodeSelector` configuration file from the file provided to the API server's command line flag `--admission-control-config-file`: ```yaml kind: AdmissionConfiguration plugins: ``` #### Configuration Annotation Format -PodNodeSelector uses the annotation key `scheduler.alpha.kubernetes.io/node-selector` to assign node selectors to namespaces. +`PodNodeSelector` uses the annotation key `scheduler.kubernetes.io/node-selector` to assign node selectors to namespaces. ```yaml apiVersion: v1 @@ -468,6 +465,19 @@ metadata: name: namespace3 ``` +#### Internal Behavior +This admission controller has the following behavior: + 1. If the `Namespace` has an annotation with the key `scheduler.kubernetes.io/node-selector`, use its value as the + node selector. + 1. If the namespace lacks such an annotation, use the `clusterDefaultNodeSelector` defined in the `PodNodeSelector` + plugin configuration file as the node selector. + 1. Evaluate the pod's node selector against the namespace node selector for conflicts. Conflicts result in rejection. + 1.
Evaluate the pod's node selector against the namespace-specific whitelist defined in the plugin configuration file. + Conflicts result in rejection. + +**Note:** `PodTolerationRestriction` is more versatile and powerful than `PodNodeSelector` and can encompass the scenarios supported by `PodNodeSelector`. +{: .note} + ### PersistentVolumeClaimResize This admission controller implements additional validations for checking incoming `PersistentVolumeClaim` resize requests. @@ -545,8 +555,6 @@ objects in your Kubernetes deployment, you MUST use this admission controller to See the [resourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for more details. -It is strongly encouraged that this admission controller is configured last in the sequence of admission controllers. This is -so that quota is not prematurely incremented only for the request to be rejected later in admission control. ### SecurityContextDeny This admission controller will deny any pod that attempts to set certain escalat ### ServiceAccount This admission controller implements automation for [serviceAccounts](/docs/user-guide/service-accounts). We strongly recommend using this admission controller if you intend to make use of Kubernetes `ServiceAccount` objects. +### Storage Object in Use Protection (beta) +{% assign for_k8s_version="v1.10" %}{% include feature-state-beta.md %} +The `StorageObjectInUseProtection` plugin adds the `kubernetes.io/pvc-protection` or `kubernetes.io/pv-protection` finalizers to newly created Persistent Volume Claims (PVCs) or Persistent Volumes (PVs). In case a user deletes a PVC or PV, the object is not removed until the finalizer is removed from it by the PVC or PV protection controller. Refer to [Storage Object in Use Protection](/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection) for more detailed information. + ### ValidatingAdmissionWebhook (alpha in 1.8; beta in 1.9) This admission controller calls any validating webhooks which match the request. Matching @@ -577,7 +589,7 @@ versions >= 1.9). ## Is there a recommended set of admission controllers to use? Yes. -For Kubernetes >= 1.9.0, we strongly recommend running the following set of admission controllers (order matters): +For Kubernetes >= 1.9.0, we strongly recommend running the following set of admission controllers (order matters for 1.9, but not for 1.10 and later): ```shell --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index 6d1db7e15c8c0..44d1ced588a04 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -684,3 +684,159 @@ rules: verbs: ["impersonate"] resourceNames: ["view", "development"] ``` + +## client-go credential plugins + +{% assign for_k8s_version="v1.10" %}{% include feature-state-alpha.md %} + +`k8s.io/client-go` and tools using it such as `kubectl` and `kubelet` are able to execute an +external command to receive user credentials. + +This feature is intended for client side integrations with authentication protocols not natively +supported by `k8s.io/client-go` (LDAP, Kerberos, OAuth2, SAML, etc.). The plugin implements the +protocol specific logic, then returns opaque credentials to use.
Almost all credential plugin +use cases require a server side component with support for the [webhook token authenticator](#webhook-token-authentication) +to interpret the credential format produced by the client plugin. + +As of 1.10 only bearer tokens are supported. Support for client certs may be added in a future release. + +### Example use case + +In a hypothetical use case, an organization would run an external service that exchanges LDAP credentials +for user specific, signed tokens. The service would also be capable of responding to [webhook token +authenticator](#webhook-token-authentication) requests to validate the tokens. Users would be required +to install a credential plugin on their workstation. + +To authenticate against the API: + +* The user issues a `kubectl` command. +* Credential plugin prompts the user for LDAP credentials, exchanges credentials with external service for a token. +* Credential plugin returns token to client-go, which uses it as a bearer token against the API server. +* API server uses the [webhook token authenticator](#webhook-token-authentication) to submit a `TokenReview` to the external service. +* External service verifies the signature on the token and returns the user's username and groups. + +### Configuration + +Credential plugins are configured through [`kubectl` config files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +as part of the user fields. + +```yaml +apiVersion: v1 +kind: Config +users: +- name: my-user + user: + exec: + # Command to execute. Required. + command: "example-client-go-exec-plugin" + + # API version to use when encoding and decoding the ExecCredentials + # resource. Required. + # + # The API version returned by the plugin MUST match the version encoded. + apiVersion: "client.authentication.k8s.io/v1alpha1" + + # Environment variables to set when executing the plugin. Optional. + env: + - name: "FOO" + value: "bar" + + # Arguments to pass when executing the plugin. Optional. + args: + - "arg1" + - "arg2" +clusters: +- name: my-cluster + cluster: + server: "https://172.17.4.100:6443" + certificate-authority: "/etc/kubernetes/ca.pem" +contexts: +- name: my-cluster + context: + cluster: my-cluster + user: my-user +current-context: my-cluster +``` + +Relative command paths are interpreted as relative to the directory of the config file. If +KUBECONFIG is set to `/home/jane/kubeconfig` and the exec command is `./bin/example-client-go-exec-plugin`, +the binary `/home/jane/bin/example-client-go-exec-plugin` is executed. + +```yaml +- name: my-user + user: + exec: + # Path relative to the directory of the kubeconfig + command: "./bin/example-client-go-exec-plugin" + apiVersion: "client.authentication.k8s.io/v1alpha1" +``` + +### Input and output formats + +When executing the command, `k8s.io/client-go` sets the `KUBERNETES_EXEC_INFO` environment +variable to a JSON serialized [`ExecCredential`]( +https://github.com/kubernetes/client-go/blob/master/pkg/apis/clientauthentication/v1alpha1/types.go) +resource. + +``` +KUBERNETES_EXEC_INFO='{ + "apiVersion": "client.authentication.k8s.io/v1alpha1", + "kind": "ExecCredential", + "spec": { + "interactive": true + } +}' +``` + +When plugins are executed from an interactive session, `stdin` and `stderr` are directly +exposed to the plugin so it can prompt the user for input for interactive logins. + +When responding to a 401 HTTP status code (indicating invalid credentials), this object will +include metadata about the response. 
+ +```json +{ + "apiVersion": "client.authentication.k8s.io/v1alpha1", + "kind": "ExecCredential", + "spec": { + "response": { + "code": 401, + "header": { + "WWW-Authenticate": [ + "Bearer realm=ldap.example.com" + ] + } + }, + "interactive": true + } +} +``` + +The executed command is expected to print an `ExecCredential` to `stdout`. `k8s.io/client-go` +will then use the returned bearer token in the `status` when authenticating against the +Kubernetes API. + +```json +{ + "apiVersion": "client.authentication.k8s.io/v1alpha1", + "kind": "ExecCredential", + "status": { + "token": "my-bearer-token" + } +} +``` + +Optionally, this output can include the expiry of the token formatted as an RFC3339 timestamp. +If an expiry is omitted, the bearer token is cached until the server responds with a 401 HTTP +status code. + +```json +{ + "apiVersion": "client.authentication.k8s.io/v1alpha1", + "kind": "ExecCredential", + "status": { + "token": "my-bearer-token", + "expirationTimestamp": "2018-03-05T17:30:20-08:00" + } +} +``` diff --git a/docs/admin/authorization/index.md b/docs/admin/authorization/index.md index 6cf0e650efd27..d488d2f96105e 100644 --- a/docs/admin/authorization/index.md +++ b/docs/admin/authorization/index.md @@ -67,7 +67,7 @@ DELETE | delete (for individual resources), deletecollection (for collections Kubernetes sometimes checks authorization for additional permissions using specialized verbs. For example: -* [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) checks for authorization of the `use` verb on `podsecuritypolicies` resources in the `extensions` API group. +* [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) checks for authorization of the `use` verb on `podsecuritypolicies` resources in the `policy` API group. * [RBAC](/docs/admin/authorization/rbac/#privilege-escalation-prevention-and-bootstrapping) checks for authorization of the `bind` verb on `roles` and `clusterroles` resources in the `rbac.authorization.k8s.io` API group. * [Authentication](/docs/admin/authentication/) layer checks for authorization of the `impersonate` verb on `users`, `groups`, and `serviceaccounts` in the core API group, and the `userextras` in the `authentication.k8s.io` API group. diff --git a/docs/admin/authorization/node.md b/docs/admin/authorization/node.md index 1bc07d600e0a3..00c848928abba 100644 --- a/docs/admin/authorization/node.md +++ b/docs/admin/authorization/node.md @@ -45,7 +45,7 @@ This group and user name format match the identity created for each kubelet as p To enable the Node authorizer, start the apiserver with `--authorization-mode=Node`.
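For example, a minimal invocation might look like the following (a sketch only; other flags a real deployment needs are omitted, and combining the Node authorizer with RBAC is a common but not required choice):

```shell
kube-apiserver --authorization-mode=Node,RBAC ...
```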
-To limit the API objects kubelets are able to write, enable the [NodeRestriction](/docs/admin/admission-controllers#NodeRestriction) admission plugin by starting the apiserver with `--admission-control=...,NodeRestriction,...` +To limit the API objects kubelets are able to write, enable the [NodeRestriction](/docs/admin/admission-controllers#NodeRestriction) admission plugin by starting the apiserver with `--enable-admission-plugins=...,NodeRestriction,...` ## Migration considerations diff --git a/docs/admin/authorization/rbac.md b/docs/admin/authorization/rbac.md index b8cc83e75f43b..274e23522b149 100644 --- a/docs/admin/authorization/rbac.md +++ b/docs/admin/authorization/rbac.md @@ -628,6 +628,7 @@ These roles include: * system:controller:node-controller * system:controller:persistent-volume-binder * system:controller:pod-garbage-collector +* system:controller:pv-protection-controller * system:controller:pvc-protection-controller * system:controller:replicaset-controller * system:controller:replication-controller diff --git a/docs/admin/extensible-admission-controllers.md b/docs/admin/extensible-admission-controllers.md index afe5d58308a6e..f8bb4b5370e31 100644 --- a/docs/admin/extensible-admission-controllers.md +++ b/docs/admin/extensible-admission-controllers.md @@ -21,9 +21,15 @@ the following: * They need to be compiled into kube-apiserver. * They are only configurable when the apiserver starts up. Two features, *Admission Webhooks* (beta in 1.9) and *Initializers* (alpha), address these limitations. They allow admission controllers to be developed out-of-tree and configured at runtime. This page describes how to use Admission Webhooks and Initializers. @@ -240,7 +246,7 @@ perform its assigned task and remove its name from the list. *Initializers* is an alpha feature, so it is disabled by default. To turn it on, you need to: -* Include "Initializers" in the `--admission-control` flag when starting +* Include "Initializers" in the `--enable-admission-plugins` flag when starting `kube-apiserver`. If you have multiple `kube-apiserver` replicas, all should have the same flag setting. @@ -294,3 +300,160 @@ the pods will be stuck in an uninitialized state. Make sure that all expansions of the `<apiGroups, apiVersions, resources>` tuple in a `rule` are valid. If they are not, separate them into different `rules`. + +## External Admission Webhooks + +### What are external admission webhooks? + +External admission webhooks are HTTP callbacks that are intended to receive admission requests and do something with them. What an external admission webhook does is up to you, but there is an +[interface](https://github.com/kubernetes/kubernetes/blob/v1.7.0-rc.1/pkg/apis/admission/v1alpha1/types.go) +that it must adhere to so that it responds with whether or not the +admission request should be allowed. + +Unlike initializers or the plugin-style admission controllers, external +admission webhooks are not allowed to mutate the admission request in any way. + +Because admission is a high security operation, the external admission webhooks +must support TLS. + +### When to use admission webhooks? + +A simple example use case for an external admission webhook is to do semantic validation +of Kubernetes resources.
Suppose that your infrastructure requires that all `Pod` +resources have a common set of labels, and you do not want any `Pod` to be +persisted to Kubernetes if those needs are not met. You could write your +external admission webhook to do this validation and respond accordingly. + +### How are external admission webhooks triggered? + +Whenever a request comes in, the `GenericAdmissionWebhook` admission plugin will +get the list of interested external admission webhooks from +`externalAdmissionHookConfiguration` objects (explained below) and call them in +parallel. If **all** of the external admission webhooks approve the admission +request, the admission chain continues. If **any** of the external admission +webhooks deny the admission request, the admission request will be denied, and +the reason for doing so will be based on the _first_ external admission webhook +denial reason. _This means if there is more than one external admission webhook +that denied the admission request, only the first will be returned to the +user._ If there is an error encountered when calling an external admission +webhook, that request is ignored and will not be used to approve/deny the +admission request. + +**Note:** In Kubernetes versions earlier than v1.10, the admission chain depends +only on the order of the `--admission-control` option passed to `kube-apiserver`. +In versions v1.10 and later, the `--admission-control` option is replaced by the +`--enable-admission-plugins` and the `--disable-admission-plugins` options. +The order of plugins for these two options no longer matters. +{: .note} + +### Enable external admission webhooks + +*External Admission Webhooks* is an alpha feature, so it is disabled by default. +To turn it on, you need to: + +* Include "GenericAdmissionWebhook" in the `--enable-admission-plugins` flag when + starting the apiserver. If you have multiple `kube-apiserver` replicas, all + should have the same flag setting. + +* Enable the dynamic admission controller registration API by adding + `admissionregistration.k8s.io/v1alpha1` to the `--runtime-config` flag passed + to `kube-apiserver`, e.g. + `--runtime-config=admissionregistration.k8s.io/v1alpha1`. Again, all replicas + should have the same flag setting. + +### Write a webhook admission controller + +See [caesarxuchao/example-webhook-admission-controller](https://github.com/caesarxuchao/example-webhook-admission-controller) +for an example webhook admission controller. + +The communication between the webhook admission controller and the apiserver, or +more precisely, the GenericAdmissionWebhook admission controller, needs to be +TLS secured. You need to generate a CA cert and use it to sign the server cert +used by your webhook admission controller. The pem formatted CA cert is supplied +to the apiserver via the dynamic registration API +`externaladmissionhookconfigurations.clientConfig.caBundle`. + +For each request received by the apiserver, the GenericAdmissionWebhook +admission controller sends an +[admissionReview](https://github.com/kubernetes/kubernetes/blob/v1.7.0-rc.1/pkg/apis/admission/v1alpha1/types.go#L27) +to the relevant webhook admission controller. The webhook admission controller +gathers information like `object`, `oldObject`, and `userInfo` from +`admissionReview.spec`, and sends back a response with the body also being the +`admissionReview`, whose `status` field is filled with the admission decision.
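A denying response could look like the following (a sketch based on the v1alpha1 admission API types linked above; the `message` text is illustrative):

```json
{
  "apiVersion": "admission.k8s.io/v1alpha1",
  "kind": "AdmissionReview",
  "status": {
    "allowed": false,
    "result": {
      "message": "every Pod must carry the organization's required labels"
    }
  }
}
```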
+ +### Deploy the webhook admission controller + +See [caesarxuchao/example-webhook-admission-controller deployment](https://github.com/caesarxuchao/example-webhook-admission-controller/tree/master/deployment) +for an example deployment. + +The webhook admission controller should be deployed via the +[deployment API](/docs/api-reference/{{page.version}}/#deployment-v1beta1-apps). +You also need to create a +[service](/docs/api-reference/{{page.version}}/#service-v1-core) as the +front-end of the deployment. + +### Configure webhook admission controller on the fly + +You can configure what webhook admission controllers are enabled and what +resources are subject to the admission controller by creating +`externaladmissionhookconfigurations`. + +We suggest that you first deploy the webhook admission controller and make sure +it is working properly before creating the externaladmissionhookconfigurations. +Otherwise, depending on whether the webhook is configured as fail open or fail +closed, operations will be unconditionally accepted or rejected. + +The following is an example `externaladmissionhookconfiguration`: + +```yaml +apiVersion: admissionregistration.k8s.io/v1alpha1 +kind: ExternalAdmissionHookConfiguration +metadata: + name: example-config +externalAdmissionHooks: +- name: pod-image.k8s.io + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + resources: + - pods + failurePolicy: Ignore + clientConfig: + caBundle: <pem-encoded CA cert that signs the server cert used by the webhook> + service: + name: <name of the front-end service> + namespace: <namespace of the front-end service> +``` + +For a request received by the apiserver, if the request matches any of the +`rules` of an `externalAdmissionHook`, the `GenericAdmissionWebhook` admission +controller will send an `admissionReview` request to the `externalAdmissionHook` +to ask for an admission decision. + +The `rule` is similar to the `rule` in `initializerConfiguration`, with two +differences: + +* The addition of the `operations` field, specifying what operations the webhook + is interested in; + +* The `resources` field accepts subresources in the form resource/subresource. + +Make sure that all expansions of the `<apiGroups, apiVersions, resources>` tuple +in a `rule` are valid. If they are not, separate them into different `rules`. + +You can also specify the `failurePolicy`. As of 1.7, the system supports `Ignore` +and `Fail` policies, meaning that upon a communication error with the webhook +admission controller, the `GenericAdmissionWebhook` can admit or reject the +operation based on the configured policy. + +After you create the `externalAdmissionHookConfiguration`, the system will take a few +seconds to honor the new configuration.
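To confirm that the configuration object has been registered, one can list the resources (a sketch, using the resource name introduced above):

```shell
kubectl get externaladmissionhookconfigurations
```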
diff --git a/docs/admin/high-availability/kube-apiserver.yaml b/docs/admin/high-availability/kube-apiserver.yaml index 057764fc529af..af5b1812e8b45 100644 --- a/docs/admin/high-availability/kube-apiserver.yaml +++ b/docs/admin/high-availability/kube-apiserver.yaml @@ -11,7 +11,7 @@ spec: - /bin/sh - -c - /usr/local/bin/kube-apiserver --address=127.0.0.1 --etcd-servers=http://127.0.0.1:4001 - --cloud-provider=gce --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota + --cloud-provider=gce --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service-cluster-ip-range=10.0.0.0/16 --client-ca-file=/srv/kubernetes/ca.crt --basic-auth-file=/srv/kubernetes/basic_auth.csv --cluster-name=e2e-test-bburns --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key diff --git a/docs/admin/kubelet-authentication-authorization.md b/docs/admin/kubelet-authentication-authorization.md index a3bd887317f69..9af6514c30aa9 100644 --- a/docs/admin/kubelet-authentication-authorization.md +++ b/docs/admin/kubelet-authentication-authorization.md @@ -33,11 +33,9 @@ To enable X509 client certificate authentication to the kubelet's HTTPS endpoint To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint: * ensure the `authentication.k8s.io/v1beta1` API group is enabled in the API server -* start the kubelet with the `--authentication-token-webhook`, `--kubeconfig`, and `--require-kubeconfig` flags +* start the kubelet with the `--authentication-token-webhook` and `--kubeconfig` flags * the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens -**Note:** The flag `--require-kubeconfig` is deprecated as of Kubernetes 1.8, this will be removed in a future version. You no longer need to use `--require-kubeconfig` in Kubernetes 1.8. - ## Kubelet authorization Any request that is successfully authenticated (including an anonymous request) is then authorized. The default authorization mode is `AlwaysAllow`, which allows all requests. @@ -51,11 +49,9 @@ There are many possible reasons to subdivide access to the kubelet API: To subdivide access to the kubelet API, delegate authorization to the API server: * ensure the `authorization.k8s.io/v1beta1` API group is enabled in the API server -* start the kubelet with the `--authorization-mode=Webhook`, `--kubeconfig`, and `--require-kubeconfig` flags +* start the kubelet with the `--authorization-mode=Webhook` and the `--kubeconfig` flags * the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized -**Note:** The flag `--require-kubeconfig` is deprecated as of Kubernetes 1.8, this will be removed in a future version. You no longer need to use `--require-kubeconfig` in Kubernetes 1.8. - The kubelet authorizes API requests using the same [request attributes](/docs/admin/authorization/#request-attributes) approach as the apiserver.
The verb is determined from the incoming request's HTTP verb: diff --git a/docs/admin/kubelet-tls-bootstrapping.md b/docs/admin/kubelet-tls-bootstrapping.md index f05540ff40e4f..880aba692ba35 100644 --- a/docs/admin/kubelet-tls-bootstrapping.md +++ b/docs/admin/kubelet-tls-bootstrapping.md @@ -198,7 +198,6 @@ When starting the kubelet, if the file specified by `--kubeconfig` does not exis **Note:** The following flags are required to enable this bootstrapping when starting the kubelet: ``` ---require-kubeconfig --bootstrap-kubeconfig="/path/to/bootstrap/kubeconfig" ``` diff --git a/docs/concepts/api-extension/custom-resources.md b/docs/concepts/api-extension/custom-resources.md index cf1bd67d1fdba..4e22ccc635b1f 100644 --- a/docs/concepts/api-extension/custom-resources.md +++ b/docs/concepts/api-extension/custom-resources.md @@ -154,7 +154,7 @@ Aggregated APIs offer more advanced API features and customization of other feat | Feature | Description | CRDs | Aggregated API | |-|-|-|-| -| Validation | Help users prevent errors and allow you to evolve your API independently of your clients. These features are most useful when there are many clients who can't all update at the same time. | Alpha feature of CRDs in v1.8. Checks limited to what is supported by OpenAPI v3.0. | Yes, arbitrary validation checks | +| Validation | Help users prevent errors and allow you to evolve your API independently of your clients. These features are most useful when there are many clients who can't all update at the same time. | Beta feature of CRDs in v1.9. Checks limited to what is supported by OpenAPI v3.0. | Yes, arbitrary validation checks | | Defaulting | See above | No, but can achieve the same effect with an Initializer (requires programming) | Yes | | Multi-versioning | Allows serving the same object through two API versions. Can help ease API changes like renaming fields. Less important if you control your client versions. | No | Yes | | Custom Storage | If you need storage with a different performance mode (for example, time-series database instead of key-value store) or isolation for security (for example, encryption secrets or different | No | Yes | @@ -217,7 +217,7 @@ When you add a custom resource, you can access it using: - kubectl - The kubernetes dynamic client. - A REST client that you write. - - A client generated using Kubernetes client generation tools (generating one is an advanced undertaking, but some projects may provide a client along with the CRD or AA). + - A client generated using [Kubernetes client generation tools](https://github.com/kubernetes/code-generator) (generating one is an advanced undertaking, but some projects may provide a client along with the CRD or AA). {% endcapture %} diff --git a/docs/concepts/cluster-administration/device-plugins.md b/docs/concepts/cluster-administration/device-plugins.md index 00786c49fcff7..eff1716dea302 100644 --- a/docs/concepts/cluster-administration/device-plugins.md +++ b/docs/concepts/cluster-administration/device-plugins.md @@ -4,7 +4,7 @@ title: Device Plugins description: Use the Kubernetes device plugin framework to implement plugins for GPUs, NICs, FPGAs, InfiniBand, and similar resources that require vendor-specific setup. --- -{% include feature-state-alpha.md %} +{% include feature-state-beta.md %} {% capture overview %} Starting in version 1.8, Kubernetes provides a @@ -20,8 +20,9 @@ that may require vendor specific initialization and setup. 
## Device plugin registration -The device plugins feature is gated by the `DevicePlugins` feature gate and is disabled by default. -When the device plugins feature is enabled, the kubelet exports a `Registration` gRPC service: +The device plugins feature is gated by the `DevicePlugins` feature gate which +is disabled by default before 1.10. When the device plugins feature is enabled, +the kubelet exports a `Registration` gRPC service: ```gRPC service Registration { diff --git a/docs/concepts/configuration/assign-pod-node.md b/docs/concepts/configuration/assign-pod-node.md index 7ef1b40ba054f..0b76d077928f0 100644 --- a/docs/concepts/configuration/assign-pod-node.md +++ b/docs/concepts/configuration/assign-pod-node.md @@ -77,6 +77,11 @@ with a standard set of labels. As of Kubernetes v1.4 these labels are * `beta.kubernetes.io/os` * `beta.kubernetes.io/arch` +**Note:** The value of these labels is cloud provider specific and is not guaranteed to be reliable. +For example, the value of `kubernetes.io/hostname` may be the same as the Node name in some environments +and a different value in other environments. +{: .note} + ## Affinity and anti-affinity `nodeSelector` provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity diff --git a/docs/concepts/configuration/manage-compute-resources-container.md b/docs/concepts/configuration/manage-compute-resources-container.md index f23d55e2bcaca..a688bf5d33fc2 100644 --- a/docs/concepts/configuration/manage-compute-resources-container.md +++ b/docs/concepts/configuration/manage-compute-resources-container.md @@ -306,7 +306,8 @@ LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-0 You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory. -## Local ephemeral storage (alpha feature) +## Local ephemeral storage +{% include feature-state-beta.md %} Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_ for managing local ephemeral storage. In each Kubernetes node, kubelet's root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by pods via EmptyDir volumes, container logs, image layers and container writable layers. @@ -369,37 +370,35 @@ For container-level isolation, if a Container's writable layer and logs usage ex ## Extended Resources -Kubernetes version 1.8 introduces Extended Resources. Extended Resources are -fully-qualified resource names outside the `kubernetes.io` domain. Extended -Resources allow cluster operators to advertise new node-level resources that -would be otherwise unknown to the system. Extended Resource quantities must be -integers and cannot be overcommitted. +Extended Resources are fully-qualified resource names outside the +`kubernetes.io` domain. They allow cluster operators to advertise and users to +consume the non-Kubernetes-built-in resources. -Users can consume Extended Resources in Pod specs just like CPU and memory. -The scheduler takes care of the resource accounting so that no more than the -available amount is simultaneously allocated to Pods. +There are two steps required to use Extended Resources. First, the cluster +operator must advertise an Extended Resource. Second, users must request the +Extended Resource in Pods. -The API server restricts quantities of Extended Resources to whole numbers. -Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. 
Examples of
-_invalid_ quantities are `0.5` and `1500m`.
+### Managing extended resources
-**Note:** Extended Resources replace Opaque Integer Resources.
-Users can use any domain name prefix other than "`kubernetes.io`" which is reserved.
-{: .note}
+#### Node-level extended resources
+
+Node-level extended resources are tied to nodes.

-There are two steps required to use Extended Resources. First, the
-cluster operator must advertise a per-node Extended Resource on one or more
-nodes. Second, users must request the Extended Resource in Pods.
+##### Device plugin managed resources
+See [Device
+Plugin](https://kubernetes.io/docs/concepts/cluster-administration/device-plugins/)
+for how to advertise device plugin managed resources on each node.

-To advertise a new Extended Resource, the cluster operator should
+##### Other resources
+To advertise a new node-level extended resource, the cluster operator can
 submit a `PATCH` HTTP request to the API server to specify the available
 quantity in the `status.capacity` for a node in the cluster. After this
 operation, the node's `status.capacity` will include a new resource. The
 `status.allocatable` field is updated automatically with the new resource
-asynchronously by the kubelet. Note that because the scheduler uses the
-node `status.allocatable` value when evaluating Pod fitness, there may
-be a short delay between patching the node capacity with a new resource and the
-first pod that requests the resource to be scheduled on that node.
+asynchronously by the kubelet. Note that because the scheduler uses the node
+`status.allocatable` value when evaluating Pod fitness, there may be a short
+delay between patching the node capacity with a new resource and the first pod
+that requests the resource to be scheduled on that node.

 **Example:**

@@ -420,6 +419,58 @@ JSON-Pointer. For more details, see
 [IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
 {: .note}

+#### Cluster-level extended resources
+
+Cluster-level extended resources are not tied to nodes. They are usually managed
+by scheduler extenders, which handle resource consumption, quota, and so on.
+
+You can specify the extended resources that are handled by scheduler extenders
+in [scheduler policy
+configuration](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31).
+
+**Example:**
+
+The following configuration for a scheduler policy indicates that the
+cluster-level extended resource "example.com/foo" is handled by the scheduler
+extender.
+ - The scheduler sends a pod to the scheduler extender only if the pod requests
+   "example.com/foo".
+ - The `ignoredByScheduler` field specifies that the scheduler does not check
+   the "example.com/foo" resource in its `PodFitsResources` predicate.
+
+```json
+{
+  "kind": "Policy",
+  "apiVersion": "v1",
+  "extenders": [
+    {
+      "urlPrefix":"<extender-endpoint>",
+      "bindVerb": "bind",
+      "ManagedResources": [
+        {
+          "name": "example.com/foo",
+          "ignoredByScheduler": true
+        }
+      ]
+    }
+  ]
+}
+```
+
+### Consuming extended resources
+
+Users can consume Extended Resources in Pod specs just like CPU and memory.
+The scheduler takes care of the resource accounting so that no more than the
+available amount is simultaneously allocated to Pods.
+
+The API server restricts quantities of Extended Resources to whole numbers.
+Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of
+_invalid_ quantities are `0.5` and `1500m`.
+
+**Note:** Extended Resources replace Opaque Integer Resources.
+Users can use any domain name prefix other than "`kubernetes.io`" which is reserved. +{: .note} + To consume an Extended Resource in a Pod, include the resource name as a key in the `spec.containers[].resources.limits` map in the container spec. @@ -427,14 +478,13 @@ in the `spec.containers[].resources.limits` map in the container spec. must be equal if both are present in a container spec. {: .note} -The Pod is scheduled only if all of the resource requests are -satisfied, including cpu, memory and any Extended Resources. The Pod will -remain in the `PENDING` state as long as the resource request cannot be met by -any node. +A Pod is scheduled only if all of the resource requests are satisfied, including +CPU, memory and any Extended Resources. The Pod remains in the `PENDING` state +as long as the resource request cannot be satisfied. **Example:** -The Pod below requests 2 cpus and 1 "example.com/foo" (an extended resource.) +The Pod below requests 2 CPUs and 1 "example.com/foo" (an extended resource). ```yaml apiVersion: v1 diff --git a/docs/concepts/configuration/pod-priority-preemption.md b/docs/concepts/configuration/pod-priority-preemption.md index 8598d7c2a221b..dc482b11e1681 100644 --- a/docs/concepts/configuration/pod-priority-preemption.md +++ b/docs/concepts/configuration/pod-priority-preemption.md @@ -45,7 +45,7 @@ Also enable scheduling.k8s.io/v1alpha1 API and Priority [admission controller](/ ``` ---runtime-config=scheduling.k8s.io/v1alpha1=true --admission-control=Controller-Foo,Controller-Bar,...,Priority +--runtime-config=scheduling.k8s.io/v1alpha1=true --enable-admission-plugins=Controller-Foo,Controller-Bar,...,Priority ``` After the feature is enabled, you can create [PriorityClasses](#priorityclass) diff --git a/docs/concepts/configuration/taint-and-toleration.md b/docs/concepts/configuration/taint-and-toleration.md index d6da462792207..a653a6f8cea8c 100644 --- a/docs/concepts/configuration/taint-and-toleration.md +++ b/docs/concepts/configuration/taint-and-toleration.md @@ -195,7 +195,7 @@ running on the node as follows * pods that tolerate the taint with a specified `tolerationSeconds` remain bound for the specified amount of time -The above behavior is a beta feature. In addition, Kubernetes 1.6 has alpha +In addition, Kubernetes 1.6 has alpha support for representing node problems. In other words, the node controller automatically taints a node when certain condition is true. The built-in taints currently include: diff --git a/docs/concepts/policy/example-psp.yaml b/docs/concepts/policy/example-psp.yaml index d8359220e42b5..7531949b650ec 100644 --- a/docs/concepts/policy/example-psp.yaml +++ b/docs/concepts/policy/example-psp.yaml @@ -1,4 +1,4 @@ -apiVersion: extensions/v1beta1 +apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: example diff --git a/docs/concepts/policy/pod-security-policy.md b/docs/concepts/policy/pod-security-policy.md index 9cba24f2e857d..d17a5e700fe73 100644 --- a/docs/concepts/policy/pod-security-policy.md +++ b/docs/concepts/policy/pod-security-policy.md @@ -50,7 +50,7 @@ controller](/docs/admin/admission-controllers/#how-do-i-turn-on-an-admission-con but doing so without authorizing any policies **will prevent any pods from being created** in the cluster. 
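As a hedged sketch of how this admission controller is turned on with the renamed v1.10 flag, the API server invocation might look like the following; the surrounding plugin list is illustrative, not prescriptive:

```shell
# Illustrative flag usage only; choose the plugin list that fits your cluster,
# and authorize at least one policy before enabling PodSecurityPolicy.
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PodSecurityPolicy ...
```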
-Since the pod security policy API (`extensions/v1beta1/podsecuritypolicy`) is +Since the pod security policy API (`policy/v1beta1/podsecuritypolicy`) is enabled independently of the admission controller, for existing clusters it is recommended that policies are added and authorized before enabling the admission controller. @@ -85,7 +85,7 @@ apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rules: -- apiGroups: ['extensions'] +- apiGroups: ['policy'] resources: ['podsecuritypolicies'] verbs: ['use'] resourceNames: diff --git a/docs/concepts/policy/privileged-psp.yaml b/docs/concepts/policy/privileged-psp.yaml index 6b6ec6687831d..915c8d37b5460 100644 --- a/docs/concepts/policy/privileged-psp.yaml +++ b/docs/concepts/policy/privileged-psp.yaml @@ -1,4 +1,4 @@ -apiVersion: extensions/v1beta1 +apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: privileged diff --git a/docs/concepts/policy/resource-quotas.md b/docs/concepts/policy/resource-quotas.md index 1f4c770ce9c35..cd7518330b1ee 100644 --- a/docs/concepts/policy/resource-quotas.md +++ b/docs/concepts/policy/resource-quotas.md @@ -42,8 +42,8 @@ Neither contention nor changes to quota will affect already created resources. ## Enabling Resource Quota -Resource quota support is enabled by default for many Kubernetes distributions. It is -enabled when the apiserver `--admission-control=` flag has `ResourceQuota` as +Resource Quota support is enabled by default for many Kubernetes distributions. It is +enabled when the apiserver `--enable-admission-plugins=` flag has `ResourceQuota` as one of its arguments. A resource quota is enforced in a particular namespace when there is a diff --git a/docs/concepts/policy/restricted-psp.yaml b/docs/concepts/policy/restricted-psp.yaml index fe1c1d90fe33d..e677ba8e22946 100644 --- a/docs/concepts/policy/restricted-psp.yaml +++ b/docs/concepts/policy/restricted-psp.yaml @@ -1,4 +1,4 @@ -apiVersion: extensions/v1beta1 +apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: restricted diff --git a/docs/concepts/services-networking/custom-dns.yaml b/docs/concepts/services-networking/custom-dns.yaml index c20bf359acf52..3e5acd841a218 100644 --- a/docs/concepts/services-networking/custom-dns.yaml +++ b/docs/concepts/services-networking/custom-dns.yaml @@ -1,7 +1,7 @@ apiVersion: v1 kind: Pod metadata: - namespace: ns1 + namespace: default name: dns-example spec: containers: diff --git a/docs/concepts/services-networking/dns-pod-service.md b/docs/concepts/services-networking/dns-pod-service.md index 321b43ca0c0ca..9a988ebce7404 100644 --- a/docs/concepts/services-networking/dns-pod-service.md +++ b/docs/concepts/services-networking/dns-pod-service.md @@ -164,7 +164,7 @@ following pod-specific DNS policies. These policies are specified in the for details on how DNS queries are handled in those cases. - "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should explicitly set its DNS policy "`ClusterFirstWithHostNet`". -- "`None`": A new option value introduced in Kubernetes v1.9. This Alpha feature +- "`None`": A new option value introduced in Kubernetes v1.9 (Beta in v1.10). It allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided using the `dnsConfig` field in the Pod Spec. See [DNS config](#dns-config) subsection below. @@ -198,8 +198,9 @@ spec: ### Pod's DNS Config -Kubernetes v1.9 introduces an Alpha feature that allows users more control on -the DNS settings for a Pod. 
To enable this feature, the cluster administrator
+Kubernetes v1.9 introduces an Alpha feature (Beta in v1.10) that allows users more
+control over the DNS settings for a Pod. This feature is enabled by default in v1.10.
+To enable this feature in v1.9, the cluster administrator
 needs to enable the `CustomPodDNS` feature gate on the apiserver and the kubelet,
 for example, "`--feature-gates=CustomPodDNS=true,...`".
 When the feature gate is enabled, users can set the `dnsPolicy` field of a Pod
@@ -237,8 +238,7 @@ in its `/etc/resolv.conf` file:
 ```
 nameserver 1.2.3.4
 search ns1.svc.cluster.local my.dns.search.suffix
-options ndots:2
-options edns0
+options ndots:2 edns0
 ```

 {% endcapture %}
diff --git a/docs/concepts/services-networking/service.md b/docs/concepts/services-networking/service.md
index 06143b2d56fab..40d0c82f12480 100644
--- a/docs/concepts/services-networking/service.md
+++ b/docs/concepts/services-networking/service.md
@@ -403,6 +403,8 @@ allocate a port from a flag-configured range (default: 30000-32767), and each
 Node will proxy that port (the same port number on every Node) into your `Service`.
 That port will be reported in your `Service`'s `spec.ports[*].nodePort` field.

+If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s) (supported since Kubernetes v1.10). A comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 1.2.3.4/32) is used to filter addresses local to this node. For example, if you start kube-proxy with the flag `--nodeport-addresses=127.0.0.0/8`, kube-proxy selects only the loopback interface for NodePort Services. The `--nodeport-addresses` flag defaults to an empty list (`[]`), which means kube-proxy selects all available interfaces, matching the existing NodePort behavior.
+
 If you want a specific port number, you can specify a value in the `nodePort`
 field, and the system will allocate you that port or else the API transaction
 will fail (i.e. you need to take care about possible port collisions yourself).
@@ -413,7 +415,7 @@ configure environments that are not fully supported by Kubernetes, or even to
 just expose one or more nodes' IPs directly.

 Note that this Service will be visible as both `<NodeIP>:spec.ports[*].nodePort`
-and `spec.clusterIP:spec.ports[*].port`.
+and `spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, `<NodeIP>` would be the filtered node IP(s).)

 ### Type LoadBalancer

diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md
index f7d5f2cac11a4..3632604e2d364 100644
--- a/docs/concepts/storage/persistent-volumes.md
+++ b/docs/concepts/storage/persistent-volumes.md
@@ -54,7 +54,7 @@ dynamic provisioning for themselves. To enable dynamic storage provisioning based
 on storage class, the cluster administrator needs to enable the `DefaultStorageClass`
 [admission controller](/docs/admin/admission-controllers/#defaultstorageclass)
 on the API server. This can be done, for example, by ensuring that `DefaultStorageClass` is
-among the comma-delimited, ordered list of values for the `--admission-control` flag of
+among the comma-delimited, ordered list of values for the `--enable-admission-plugins` flag of
 the API server component. For more information on API server command line flags,
 please check [kube-apiserver](/docs/admin/kube-apiserver/) documentation.
@@ -70,16 +70,17 @@ Pods use claims as volumes.
The cluster inspects the claim to find the bound vol
 Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a `persistentVolumeClaim` in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).

-### Persistent Volume Claim Protection
-{% assign for_k8s_version="v1.9" %}{% include feature-state-alpha.md %}
-The purpose of the PVC protection is to ensure that PVCs in active use by a pod are not removed from the system as this may result in data loss.
+### Storage Object in Use Protection
+{% assign for_k8s_version="v1.10" %}{% include feature-state-beta.md %}
+The purpose of the Storage Object in Use Protection feature is to ensure that Persistent Volume Claims (PVCs) in active use by a pod and Persistent Volumes (PVs) that are bound to PVCs are not removed from the system, as this may result in data loss.

 **Note:** PVC is in active use by a pod when the pod status is `Pending` and the pod is assigned to a node or the pod status is `Running`.
 {: .note}

-When the [PVC protection alpha feature](/docs/tasks/administer-cluster/pvc-protection/) is enabled, if a user deletes a PVC in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods.
+When the [Storage Object in Use Protection beta feature](/docs/tasks/administer-cluster/storage-object-in-use-protection/) is enabled, if a user deletes a PVC in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods. Likewise, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.

 You can see that a PVC is protected when the PVC's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pvc-protection`:
+
 ```shell
 kubectl describe pvc hostpath
 Name:          hostpath
@@ -94,6 +95,28 @@ Finalizers:    [kubernetes.io/pvc-protection]
 ...
 ```

+You can see that a PV is protected when the PV's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pv-protection` too:
+
+```shell
+kubectl describe pv task-pv-volume
+Name:            task-pv-volume
+Labels:          type=local
+Annotations:     <none>
+Finalizers:      [kubernetes.io/pv-protection]
+StorageClass:    standard
+Status:          Available
+Claim:
+Reclaim Policy:  Delete
+Access Modes:    RWO
+Capacity:        1Gi
+Message:
+Source:
+    Type:          HostPath (bare host directory volume)
+    Path:          /tmp/data
+    HostPathType:
+Events:          <none>
+```
+
 ### Reclaiming

 When a user is done with their volume, they can delete the PVC objects from the API which allows reclamation of the resource. The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled or Deleted.
@@ -475,7 +498,7 @@ spec:

 ## Raw Block Volume Support

-Static provisioning support for Raw Block Volumes is included as an alpha feature for v1.9. With this change are some new API fields that need to be used to facilitate this functionality. Currently, Fibre Channel is the only supported plugin for this feature.
+Static provisioning support for Raw Block Volumes is included as an alpha feature for v1.9. With this change come some new API fields that need to be used to facilitate this functionality. Kubernetes v1.10 supports only the Fibre Channel and Local Volume plugins for this feature.
### Persistent Volumes using a Raw Block Volume ```yaml diff --git a/docs/concepts/storage/storage-classes.md b/docs/concepts/storage/storage-classes.md index d323c3ac1724c..ccb76aabc0c80 100644 --- a/docs/concepts/storage/storage-classes.md +++ b/docs/concepts/storage/storage-classes.md @@ -640,15 +640,13 @@ references it. ### Local -{% assign for_k8s_version="v1.9" %}{% include feature-state-alpha.md %} - -This feature requires the `VolumeScheduling` feature gate to be enabled. +{% assign for_k8s_version="v1.10" %}{% include feature-state-beta.md %} ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: - name: local-fast + name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer ``` @@ -656,3 +654,7 @@ volumeBindingMode: WaitForFirstConsumer Local volumes do not support dynamic provisioning yet, however a StorageClass should still be created to delay volume binding until pod scheduling. This is specified by the `WaitForFirstConsumer` volume binding mode. + +Delaying volume binding allows the scheduler to consider all of a pod's +scheduling constraints when choosing an appropriate PersistentVolume for a +PersistentVolumeClaim. diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index 1d5e7a47be0c7..f266a050272ab 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -483,68 +483,84 @@ See the [iSCSI example](https://github.com/kubernetes/examples/tree/{{page.githu ### local -{% assign for_k8s_version="v1.7" %}{% include feature-state-alpha.md %} +{% assign for_k8s_version="v1.10" %}{% include feature-state-beta.md %} -This alpha feature requires the `PersistentLocalVolumes` feature gate to be -enabled. - -**Note:** Starting in 1.9, the `VolumeScheduling` feature gate must also be enabled. +**Note:** The alpha PersistentVolume NodeAffinity annotation has been deprecated +and will be removed in a future release. Existing PersistentVolumes using this +annotation must be updated by the user to use the new PersistentVolume +`NodeAffinity` field. {: .note} A `local` volume represents a mounted local storage device such as a disk, partition or directory. -Local volumes can only be used as a statically created PersistentVolume. +Local volumes can only be used as a statically created PersistentVolume. Dynamic +provisioning is not supported yet. -Compared to `hostPath` volumes, local volumes can be used in a durable manner -without manually scheduling pods to nodes, as the system is aware of the volume's -node constraints by looking at the node affinity on the PersistentVolume. +Compared to `hostPath` volumes, local volumes can be used in a durable and +portable manner without manually scheduling pods to nodes, as the system is aware +of the volume's node constraints by looking at the node affinity on the PersistentVolume. However, local volumes are still subject to the availability of the underlying -node and are not suitable for all applications. +node and are not suitable for all applications. If a node becomes unhealthy, +then the local volume will also become inaccessible, and a pod using it will not +be able to run. Applications using local volumes must be able to tolerate this +reduced availability, as well as potential data loss, depending on the +durability characteristics of the underlying disk. 
-The following is an example PersistentVolume spec using a `local` volume: +The following is an example PersistentVolume spec using a `local` volume and +`nodeAffinity`: ``` yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv - annotations: - "volume.alpha.kubernetes.io/node-affinity": '{ - "requiredDuringSchedulingIgnoredDuringExecution": { - "nodeSelectorTerms": [ - { "matchExpressions": [ - { "key": "kubernetes.io/hostname", - "operator": "In", - "values": ["example-node"] - } - ]} - ]} - }' spec: - capacity: - storage: 100Gi - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Delete - storageClassName: local-storage - local: - path: /mnt/disks/ssd1 + capacity: + storage: 100Gi + # volumeMode field requires BlockVolume Alpha feature gate to be enabled. + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Delete + storageClassName: local-storage + local: + path: /mnt/disks/ssd1 + nodeAffinity: + required: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/hostname + operator: In + values: + - example-node ``` -**Note:** The local PersistentVolume cleanup and deletion requires manual intervention without the external provisioner. -{: .note} +PersistentVolume `nodeAffinity` is required when using local volumes. It enables +the Kubernetes scheduler to correctly schedule pods using local volumes to the +correct node. + +PersistentVolume `volumeMode` can now be set to "Block" (instead of the default +value "Filesystem") to expose the local volume as a raw block device. The +`volumeMode` field requires `BlockVolume` Alpha feature gate to be enabled. -Starting in 1.9, local volume binding can be delayed until pod scheduling by -creating a StorageClass with `volumeBindingMode` set to `WaitForFirstConsumer`. -See the [example](storage-classes.md#local). Delaying volume binding ensures -that the volume binding decision will also be evaluated with any other node -constraints the pod may have, such as node resource requirements, node +When using local volumes, it is recommended to create a StorageClass with +`volumeBindingMode` set to `WaitForFirstConsumer`. See the +[example](storage-classes.md#local). Delaying volume binding ensures +that the PersistentVolumeClaim binding decision will also be evaluated with any +other node constraints the pod may have, such as node resource requirements, node selectors, pod affinity, and pod anti-affinity. -For details on the `local` volume type, see the [Local Persistent Storage -user guide](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume). +An external static provisioner can be run separately for improved management of +the local volume lifecycle. Note that this provisioner does not support dynamic +provisioning yet. For an example on how to run an external local provisioner, +see the [local volume provisioner user guide](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume). + +**Note:** The local PersistentVolume requires manual cleanup and deletion by the +user if the external static provisioner is not used to manage the volume +lifecycle. +{: .note} ### nfs @@ -956,57 +972,103 @@ specification, and to select the type of media to use, for clusters that have several media types. ## Out-of-Tree Volume Plugins -In addition to the previously listed volume types, storage vendors may create -custom plugins without adding it to the Kubernetes repository. 
This can be -achieved by using either the `CSI` plugin or the `FlexVolume` plugin. +The Out-of-tree volume plugins include the Container Storage Interface (`CSI`) +and `FlexVolume`. They enable storage vendors to create custom storage plugins +without adding them to the Kubernetes repository. -For storage vendors looking to create an out-of-tree volume plugin, [please refer to this FAQ](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md) for choosing between the plugin options. +Before the introduction of `CSI` and `FlexVolume`, all volume plugins (like +volume types listed above) were "in-tree" meaning they were built, linked, +compiled, and shipped with the core Kubernetes binaries and extend the core +Kubernetes API. This meant that adding a new storage system to Kubernetes (a +volume plugin) required checking code into the core Kubernetes code repository. + +Both `CSI` and `FlexVolume` allow volume plugins to be developed independent of +the Kubernetes code base, and deployed (installed) on Kubernetes clusters as +extensions. + +For storage vendors looking to create an out-of-tree volume plugin, please refer +to [this FAQ](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md). ### CSI -CSI stands for [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md), -a specification attempting to establish an industry standard interface that -container orchestration systems can use to expose arbitrary storage systems -to their container workloads. -Please read -[CSI design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) for further information. +{% assign for_k8s_version="v1.10" %}{% include feature-state-beta.md %} + +[Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md) (CSI) +defines a standard interface for container orchestration systems (like +Kubernetes) to expose arbitrary storage systems to their container workloads. + +Please read the [CSI design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) for more information. - -The `csi` volume type is an in-tree CSI volume plugin for Pods to interact -with external CSI volume drivers running on the same node. -After having deployed a CSI compatible volume driver, users can use `csi` as the -volume type to mount the storage provided by the driver. +CSI support was introduced as alpha in Kubernetes v1.9 and moved to beta in +Kubernets v1.10. -CSI persistent volume support is an alpha feature in Kubernetes v1.9 and requires a -cluster administrator to enable it. To enable CSI persistent volume support, the -cluster administrator adds `CSIPersistentVolume=true` to the `--feature-gates` flag -for apiserver, controller-manager, and kubelet. +Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users +may use the `csi` volume type to attach, mount, etc. the volumes exposed by the +CSI driver. + +The `csi` volume type does not support direct reference from pod and may only be +referenced in a pod via a `PersistentVolumeClaim` object. The following fields are available to storage administrators to configure a CSI persistent volume: - `driver`: A string value that specifies the name of the volume driver to use. - It has to be less than 63 characters and starts with a character. 
The driver
-  name can have '`.`', '`-`', '`_`' or digits in it.
-- `volumeHandle`: A string value that uniquely identify the volume name returned
-  from the CSI volume plugin's `CreateVolume` call. The volume handle is then
-  used in all subsequent calls to the volume driver for referencing the volume.
+  This value must correspond to the value returned in the `GetPluginInfoResponse`
+  by the CSI driver as defined in the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#getplugininfo).
+  It is used by Kubernetes to identify which CSI driver to call out to, and by
+  CSI driver components to identify which PV objects belong to the CSI driver.
+- `volumeHandle`: A string value that uniquely identifies the volume. This value
+  must correspond to the value returned in the `volume.id` field of the
+  `CreateVolumeResponse` by the CSI driver as defined in the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume).
+  The value is passed as `volume_id` on all calls to the CSI volume driver when
+  referencing the volume.
 - `readOnly`: An optional boolean value indicating whether the volume is to be
-  published as read only. Default is false.
+  "ControllerPublished" (attached) as read only. Default is false. This value is
+  passed to the CSI driver via the `readonly` field in the
+  `ControllerPublishVolumeRequest`.
+- `fsType`: If the PV's `VolumeMode` is `Filesystem` then this field may be used
+  to specify the filesystem that should be used to mount the volume. If the
+  volume has not been formatted and formatting is supported, this value will be
+  used to format the volume. If a value is not specified, `ext4` is assumed.
+  This value is passed to the CSI driver via the `VolumeCapability` field of
+  `ControllerPublishVolumeRequest`, `NodeStageVolumeRequest`, and
+  `NodePublishVolumeRequest`.
+- `volumeAttributes`: A map of string to string that specifies static properties
+  of a volume. This map must correspond to the map returned in the
+  `volume.attributes` field of the `CreateVolumeResponse` by the CSI driver as
+  defined in the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume).
+  The map is passed to the CSI driver via the `volume_attributes` field in the
+  `ControllerPublishVolumeRequest`, `NodeStageVolumeRequest`, and
+  `NodePublishVolumeRequest`.
+- `controllerPublishSecretRef`: A reference to the secret object containing
+  sensitive information to pass to the CSI driver to complete the CSI
+  `ControllerPublishVolume` and `ControllerUnpublishVolume` calls. This field is
+  optional, and may be empty if no secret is required. If the secret object
+  contains more than one secret, all secrets are passed.
+- `nodeStageSecretRef`: A reference to the secret object containing
+  sensitive information to pass to the CSI driver to complete the CSI
+  `NodeStageVolume` call. This field is optional, and may be empty if no secret
+  is required. If the secret object contains more than one secret, all secrets
+  are passed.
+- `nodePublishSecretRef`: A reference to the secret object containing
+  sensitive information to pass to the CSI driver to complete the CSI
+  `NodePublishVolume` call. This field is optional, and may be empty if no
+  secret is required. If the secret object contains more than one secret, all
+  secrets are passed.

 ### FlexVolume

-`FlexVolume` enables users to mount vendor volumes into a pod.
The vendor plugin -is implemented using a driver, an executable supporting a list of volume commands -defined by the `FlexVolume` API. Drivers must be installed in a pre-defined -volume plugin path on each node. Pods interact with FlexVolume drivers through the `flexVolume` in-tree plugin. +`FlexVolume` is an out-of-tree plugin interface that has existed in Kubernetes +since version 1.2 (before CSI). It uses an exec-based model to interface with +drivers. FlexVolume driver binaries must be installed in a pre-defined volume +plugin path on each node (and in some cases master). + +Pods interact with FlexVolume drivers through the `flexVolume` in-tree plugin. More details can be found [here](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md). ## Mount propagation -**Note:** Mount propagation is an alpha feature in Kubernetes 1.8 and may be -redesigned or even removed in future releases. -{: .note} +{% assign for_k8s_version="v1.10" %}{% include feature-state-beta.md %} Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node. @@ -1015,14 +1077,12 @@ If the "`MountPropagation`" feature is disabled, volume mounts in pods are not p That is, Containers run with `private` mount propagation as described in the [Linux kernel documentation](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). -To enable this feature, specify `MountPropagation=true` in the -`--feature-gates` command line option for the API server and kubelets. -When enabled, the `volumeMounts` field of a Container has a new -`mountPropagation` subfield. Its values are: +Mount propagation of a volume is controlled by `mountPropagation` field in Container.volumeMounts. +Its values are: * `HostToContainer` - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories. This is - the default mode when the MountPropagation feature is enabled. + the default mode. In other words, if the host mounts anything inside the volume mount, the Container will see it mounted there. @@ -1051,6 +1111,21 @@ In addition, any volume mounts created by Containers in Pods must be destroyed (unmounted) by the Containers on termination. {: .caution} +### Configuration +Before mount propagation can work properly on some deployments (CoreOS, +RedHat/Centos, Ubuntu) mount share must be configured correctly in +Docker as shown below. + +Edit your Docker's `systemd` service file. Set `MountFlags` as follows: +```shell +MountFlags=shared +``` +Or, remove `MountFlags=slave` if present. Then restart the Docker daemon: +```shell +$ sudo systemctl daemon-reload +$ sudo systemctl restart docker +``` + {% endcapture %} {% capture whatsnext %} diff --git a/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/docs/concepts/workloads/controllers/jobs-run-to-completion.md index fb68ece05c2b9..b6086c91c745b 100644 --- a/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -70,12 +70,12 @@ Events: 1m 1m 1 {job-controller } Normal SuccessfulCreate Created pod: pi-dtn4q ``` -To view completed pods of a job, use `kubectl get pods --show-all`. The `--show-all` will show completed pods too. +To view completed pods of a job, use `kubectl get pods`. 
To list all the pods that belong to a job in a machine readable form, you can use a command like this:

```shell
-$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})
+$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name})
 $ echo $pods
 pi-aiw0a
```
@@ -151,16 +151,6 @@ The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
 If it is unspecified, it defaults to 1.
 If it is specified as 0, then the Job is effectively paused until it is increased.

-A job can be scaled up using the `kubectl scale` command. For example, the following
-command sets `.spec.parallelism` of a job called `myjob` to 10:
-
-```shell
-$ kubectl scale --replicas=10 jobs/myjob
-job "myjob" scaled
-```
-
-You can also use the `scale` subresource of the Job resource.
-
 Actual parallelism (number of pods running at any instant) may be more or less than requested
 parallelism, for a variety of reasons:
@@ -267,8 +257,7 @@ The tradeoffs are:
 - One Job object for each work item, vs. a single Job object for all work items.  The latter is
   better for large numbers of work items.  The former creates some overhead for the user and for the
-  system to manage large numbers of Job objects.  Also, with the latter, the resource usage of the job
-  (number of concurrently running pods) can be easily adjusted using the `kubectl scale` command.
+  system to manage large numbers of Job objects.
 - Number of pods created equals number of work items, vs. each pod can process multiple work items.
   The former typically requires less modification to existing code and containers.  The latter
   is better for large numbers of work items, for similar reasons to the previous bullet.
diff --git a/docs/concepts/workloads/pods/podpreset.md b/docs/concepts/workloads/pods/podpreset.md
index 0152e8a566810..7863d61be5ff5 100644
--- a/docs/concepts/workloads/pods/podpreset.md
+++ b/docs/concepts/workloads/pods/podpreset.md
@@ -68,7 +68,7 @@ In order to use Pod Presets in your cluster you must ensure the following:
    example, this can be done by including `settings.k8s.io/v1alpha1=true` in
    the `--runtime-config` option for the API server.
 1. You have enabled the admission controller `PodPreset`. One way of doing this
-   is to include `PodPreset` in the `--admission-control` option value specified
+   is to include `PodPreset` in the `--enable-admission-plugins` option value specified
    for the API server.
 1. You have defined your Pod Presets by creating `PodPreset` objects in the
    namespace you will use; a minimal example follows below.
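To make the last step concrete, here is a minimal sketch of a `PodPreset` object. The name, the selector label, and the injected environment variable are hypothetical and only illustrate the shape of the resource:

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example-preset        # hypothetical name
spec:
  selector:
    matchLabels:
      role: frontend          # assumed label on the pods to inject into
  env:
  - name: DB_PORT             # illustrative injected environment variable
    value: "6379"
```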
diff --git a/docs/getting-started-guides/coreos/cloud-configs/master.yaml b/docs/getting-started-guides/coreos/cloud-configs/master.yaml index 768e91ab40cc8..5b7df1bd77d70 100644 --- a/docs/getting-started-guides/coreos/cloud-configs/master.yaml +++ b/docs/getting-started-guides/coreos/cloud-configs/master.yaml @@ -91,7 +91,7 @@ coreos: ExecStart=/opt/bin/kube-apiserver \ --service-account-key-file=/opt/bin/kube-serviceaccount.key \ --service-account-lookup=false \ - --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ + --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ --runtime-config=api/v1 \ --allow-privileged=true \ --insecure-bind-address=0.0.0.0 \ diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index b9b733c4b8f71..b0d6bfaef53d6 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -607,7 +607,7 @@ Here are some apiserver flags you may need to set: - `--etcd-servers=http://127.0.0.1:4001` - `--tls-cert-file=/srv/kubernetes/server.cert` - `--tls-private-key-file=/srv/kubernetes/server.key` -- `--admission-control=$RECOMMENDED_LIST` +- `--enable-admission-plugins=$RECOMMENDED_LIST` - See [admission controllers](/docs/admin/admission-controllers/) for recommended arguments. - `--allow-privileged=true`, only if you trust your cluster user to run pods as root. diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index 23836bbc7560d..648a80e48c9d0 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -17,7 +17,7 @@ The Kubernetes control plane (API Server, Scheduler, Controller Manager, etc) co {: .note} ## Get Windows Binaries -We recommend using the release binaries that can be found at [https://github.com/kubernetes/kubernetes/releases/latest](https://github.com/kubernetes/kubernetes/releases/latest). Under the CHANGELOG you can find the Node Binaries link for Windows-amd64, which will include kubeadm, kubectl, kubelet and kube-proxy. +We recommend using the release binaries that can be found at [https://github.com/kubernetes/kubernetes/releases/latest](https://github.com/kubernetes/kubernetes/releases/latest). Under the CHANGELOG you can find the Node Binaries link for Windows-amd64, which will include kubeadm, kubectl, kubelet and kube-proxy. If you wish to build the code yourself, please refer to detailed build instructions [here](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/compiling-kubernetes-binaries). @@ -31,7 +31,7 @@ In Kubernetes version 1.9 or later, Windows Server Containers for Kubernetes are ## Networking There are several supported network configurations with Kubernetes v1.9 on Windows, including both Layer-3 routed and overlay topologies using third-party network plugins. - + 1. [Upstream L3 Routing](#upstream-l3-routing-topology) - IP routes configured in upstream ToR 2. [Host-Gateway](#host-gateway-topology) - IP routes configured on each host 3. 
[Open vSwitch (OVS) & Open Virtual Network (OVN) with Overlay](#using-ovn-with-ovs) - overlay networks (supports STT and Geneve tunneling types) @@ -47,7 +47,7 @@ An additional two CNI plugins [win-l2bridge (host-gateway) and win-overlay (vxla The above networking approaches are already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC. ### Windows -Windows supports the CNI network model and uses plugins to interface with the Windows Host Networking Service (HNS) to configure host networking and policy. At the time of this writing, the only publicly available CNI plugin from Microsoft is built from a private repo and available here [wincni.exe](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/cni/wincni.exe). It uses an l2bridge network created through the Windows Host Networking Service (HNS) by an administrator using HNS PowerShell commands on each node as documented in the [Windows Host Setup](#windows-host-setup) section below. Source code for the future CNI plugins will be made available publicly. +Windows supports the CNI network model and uses plugins to interface with the Windows Host Networking Service (HNS) to configure host networking and policy. At the time of this writing, the only publicly available CNI plugin from Microsoft is built from a private repo and available here [wincni.exe](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/cni/wincni.exe). It uses an l2bridge network created through the Windows Host Networking Service (HNS) by an administrator using HNS PowerShell commands on each node as documented in the [Windows Host Setup](#windows-host-setup) section below. Source code for the future CNI plugins will be made available publicly. #### Upstream L3 Routing Topology In this topology, networking is achieved using L3 routing with static IP routes configured in an upstream Top of Rack (ToR) switch/router. Each cluster node is connected to the management network with a host IP. Additionally, each node uses a local 'l2bridge' network with a pod CIDR assigned. All pods on a given worker node will be connected to the pod CIDR subnet ('l2bridge' network). In order to enable network communication between pods running on different nodes, the upstream router has static routes configured with pod CIDR prefix => Host IP. @@ -65,7 +65,7 @@ The following diagram gives a general overview of the architecture and interacti (The above image is from [https://github.com/openvswitch/ovn-kubernetes#overlay-mode-architecture-diagram](https://github.com/openvswitch/ovn-kubernetes#overlay-mode-architecture-diagram)) -Due to its architecture, OVN has a central component which stores your networking intent in a database. Other components i.e. kube-apiserver, kube-controller-manager, kube-scheduler etc. can be deployed on that central node as well. +Due to its architecture, OVN has a central component which stores your networking intent in a database. Other components i.e. kube-apiserver, kube-controller-manager, kube-scheduler etc. can be deployed on that central node as well. ## Setting up Windows Server Containers on Kubernetes To run Windows Server Containers on Kubernetes, you'll need to set up both your host machines and the Kubernetes node components for Windows. Depending on your network topology, routes may need to be set up for pod communication on different nodes. 
@@ -76,7 +76,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your ##### Linux Host Setup -1. Linux hosts should be setup according to their respective distro documentation and the requirements of the Kubernetes version you will be using. +1. Linux hosts should be setup according to their respective distro documentation and the requirements of the Kubernetes version you will be using. 2. Configure Linux Master node using steps [here](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/virtualization/windowscontainers/kubernetes/creating-a-linux-master.md) 3. [Optional] CNI network plugin installed. @@ -92,7 +92,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your More detailed instructions can be found [here](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/virtualization/windowscontainers/kubernetes/getting-started-kubernetes-windows.md). -**Windows CNI Config Example** +**Windows CNI Config Example** Today, Windows CNI plugin is based on wincni.exe code with the following example, configuration file. This is based on the ToR example diagram shown above, specifying the configuration to apply to Windows node-1. Of special interest is Windows node-1 pod CIDR (10.10.187.64/26) and the associated gateway of cbr0 (10.10.187.66). The exception list is specifying the Service CIDR (11.0.0.0/8), Cluster CIDR (10.10.0.0/16), and Management (or Host) CIDR (10.127.132.128/25). Note: this file assumes that a user previous created 'l2bridge' host networks on each Windows node using `-HNSNetwork` cmdlets as shown in the `start-kubelet.ps1` and `start-kubeproxy.ps1` scripts linked above @@ -229,7 +229,7 @@ Use your preferred method to start Kubernetes cluster on Linux. Please note that ## Support for kubeadm join -If your cluster has been created by [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/), +If your cluster has been created by [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/), and your networking is setup correctly using one of the methods listed above (networking is setup outside of kubeadm), you can use kubeadm to add a Windows node to your cluster. At a high level, you first have to initialize the master with kubeadm (Linux), then set up the CNI based networking (outside of kubeadm), and finally start joining Windows or Linux worker nodes to the cluster. For additional documentation and reference material, visit the kubeadm link above. The kubeadm binary can be found at [Kubernetes Releases](https://github.com/kubernetes/kubernetes/releases), inside the node binaries archive. Adding a Windows node is not any different than adding a Linux node: @@ -290,9 +290,9 @@ Secrets and ConfigMaps can be utilized in Windows Server Containers, but must be data: username: YWRtaW4= password: MWYyZDFlMmU2N2Rm - + --- - + apiVersion: v1 kind: Pod metadata: @@ -315,7 +315,7 @@ Secrets and ConfigMaps can be utilized in Windows Server Containers, but must be nodeSelector: beta.kubernetes.io/os: windows ``` - + Windows pod with configMap values mapped to environment variables ```yaml @@ -351,14 +351,14 @@ spec: nodeSelector: beta.kubernetes.io/os: windows ``` - + ### Volumes Some supported Volume Mounts are local, emptyDir, hostPath. One thing to remember is that paths must either be escaped, or use forward slashes, for example `mountPath: "C:\\etc\\foo"` or `mountPath: "C:/etc/foo"`. Persistent Volume Claims are supported for supported volume types. 
**Examples:**
-
+
 Windows pod with a hostPath volume
 ```yaml
 apiVersion: v1
@@ -380,9 +380,9 @@ Persistent Volume Claims are supported for supported volume types.
     hostPath:
       path: "C:\\etc\\foo"
 ```
-
+
 Windows pod with multiple emptyDir volumes
-
+
 ```yaml
 apiVersion: v1
 kind: Pod
@@ -434,10 +434,62 @@ spec:
 Windows Stats use a hybrid model: pod and container level stats come from CRI (via dockershim), while node level stats come from the "winstats" package that exports cadvisor like data structures using windows specific perf counters from the node.

+### Container Resources
+
+Container resources (CPU and memory) can now be set for Windows containers in v1.10.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: iis
+spec:
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        app: iis
+    spec:
+      containers:
+      - name: iis
+        image: microsoft/iis
+        resources:
+          limits:
+            memory: "128Mi"
+            cpu: 2
+        ports:
+        - containerPort: 80
+```
+
+### Hyper-V Containers
+
+Hyper-V containers are supported as experimental in v1.10. To create a Hyper-V container, the kubelet should be started with the feature gate `HyperVContainer=true`, and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type=hyperv`.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: iis
+spec:
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        app: iis
+      annotations:
+        experimental.windows.kubernetes.io/isolation-type: hyperv
+    spec:
+      containers:
+      - name: iis
+        image: microsoft/iis
+        ports:
+        - containerPort: 80
+```
+
 ## Known Limitations for Windows Server Containers with v1.9
 Some of these limitations will be addressed by the community in future releases of Kubernetes
 - Shared network namespace (compartment) with multiple Windows Server containers (shared kernel) per pod is only supported on Windows Server 1709 or later
-- Using Secrets and ConfigMaps as volume mounts is not supported
+- Using Secrets and ConfigMaps as volume mounts is not supported
 - Mount propagation is not supported on Windows
 - The StatefulSet functionality for stateful applications is not supported
 - Horizontal Pod Autoscaling for Windows Server Container pods has not been verified to work end-to-end
@@ -446,6 +498,8 @@ Some of these limitations will be addressed by the community in future releases
 - Under the networking models of L3 or Host GW, Kubernetes Services are inaccessible to Windows nodes due to a Windows issue. This is not an issue if using OVN/OVS for networking.
 - Windows kubelet.exe may fail to start when running on Windows Server under VMware Fusion [issue 57110](https://github.com/kubernetes/kubernetes/pull/57124)
 - Flannel and Weavenet are not yet supported
+- Windows container OS must match the Host OS. If it does not, the pod will get stuck in a crash loop.
+- Some .Net Core applications expect environment variables with a colon (`:`) in the name. Kubernetes currently does not allow this. Replace the colon (`:`) with a double underscore (`__`) as documented [here](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#configuration-by-environment); see the sketch below.
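To illustrate the double-underscore workaround from the last bullet, here is a hedged sketch of a pod spec; the pod name, image, and configuration key are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dotnet-sample                  # hypothetical name
spec:
  containers:
  - name: app
    image: microsoft/dotnet-samples    # assumed image
    env:
    # The app would normally read the key "Logging:LogLevel:Default";
    # ASP.NET Core maps the double underscore back to the colon-separated key.
    - name: Logging__LogLevel__Default
      value: "Warning"
  nodeSelector:
    beta.kubernetes.io/os: windows
```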
## Next steps and resources

diff --git a/docs/reference/api-concepts.md b/docs/reference/api-concepts.md
index c2a283f410e0e..fc9092faf990a 100644
--- a/docs/reference/api-concepts.md
+++ b/docs/reference/api-concepts.md
@@ -150,6 +150,63 @@ For example, if there are 1,253 pods on the cluster and the client wants to rece

Note that the `resourceVersion` of the list remains constant across each request, indicating the server is showing us a consistent snapshot of the pods. Pods that are created, updated, or deleted after version `10245` would not be shown unless the user makes a list request without the `continue` token. This allows clients to break large requests into smaller chunks and then perform a watch operation on the full set without missing any updates.

+## Receiving resources as Tables
+
+`kubectl get` prints a simple tabular representation of one or more instances of a particular resource type. In the past, clients were required to reproduce the tabular and describe output implemented in `kubectl` to perform simple lists of objects.
+A few limitations of that approach include non-trivial logic when dealing with certain objects. Additionally, types provided by API aggregation or third-party resources are not known at compile time. This means that generic implementations had to be in place for types unrecognized by a client.
+
+To avoid the potential limitations described above, clients may request the Table representation of objects, delegating specific details of printing to the server. The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header containing a value of `application/json;as=Table;g=meta.k8s.io;v=v1beta1` with a `GET` call will request that the server return objects in the Table content type.
+
+For example:
+
+1. List all of the pods on a cluster in the Table format.
+
+        GET /api/v1/pods
+        Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1
+        ---
+        200 OK
+        Content-Type: application/json
+        {
+            "kind": "Table",
+            "apiVersion": "meta.k8s.io/v1beta1",
+            ...
+            "columnDefinitions": [
+                ...
+            ]
+        }
+
+For API resource types that do not have a custom Table definition on the server, a default Table response is returned by the server, consisting of the resource's `name` and `creationTimestamp` fields.
+
+        GET /apis/crd.example.com/v1alpha1/namespaces/default/resources
+        ---
+        200 OK
+        Content-Type: application/json
+        ...
+        {
+            "kind": "Table",
+            "apiVersion": "meta.k8s.io/v1beta1",
+            ...
+            "columnDefinitions": [
+                {
+                    "name": "Name",
+                    "type": "string",
+                    ...
+                },
+                {
+                    "name": "Created At",
+                    "type": "date",
+                    ...
+                }
+            ]
+        }
+
+Table responses are available beginning in version 1.10 of the kube-apiserver. As such, not all API resource types will support a Table response, specifically when using a client against older clusters. Clients that must work against all resource types, or can potentially deal with older clusters, should specify multiple content types in their `Accept` header to support fallback to non-Tabular JSON:
+
+```
+Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1, application/json
+```
+
+
## Alternate representations of resources

By default Kubernetes returns objects serialized to JSON with content type `application/json`. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale.
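+
+For example, a minimal sketch of such a request (illustrative only; `application/vnd.kubernetes.protobuf` is the Protobuf content type understood by the API server):
+
+        GET /api/v1/pods
+        Accept: application/vnd.kubernetes.protobuf
+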
The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header with a `GET` call will request that the server return objects in the provided content type, while sending an object in Protobuf to the server for a `PUT` or `POST` call takes the `Content-Type` header. The server will return a `Content-Type` header if the requested format is supported, or the `406 Not Acceptable` error if an invalid content type is provided.

diff --git a/docs/reference/feature-gates.md b/docs/reference/feature-gates.md
index 094e1fa545385..672e49d721178 100644
--- a/docs/reference/feature-gates.md
+++ b/docs/reference/feature-gates.md
@@ -25,8 +25,8 @@ different Kubernetes components.

| Feature | Default | Stage | Since | Until |
|---------|---------|-------|-------|-------|
-| `Accelerators` | `false` | Alpha | 1.6 | |
-| `AdvancedAuditing` | `false` | Alpha | 1.7 | |
+| `Accelerators` | `false` | Alpha | 1.6 | 1.10 |
+| `AdvancedAuditing` | `false` | Alpha | 1.7 | 1.7 |
| `AdvancedAuditing` | `true` | Beta | 1.8 | |
| `AffinityInAnnotations` | `false` | Alpha | 1.6 | 1.7 |
| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 |
@@ -38,11 +38,17 @@ different Kubernetes components.
| `BlockVolume` | `false` | Alpha | 1.9 | |
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | |
-| `CSIPersistentVolume` | `false` | Alpha | 1.9 | |
-| `CustomPodDNS` | `false` | Alpha | 1.9 | |
+| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | |
+| `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 |
+| `CSIPersistentVolume` | `true` | Beta | 1.10 | |
+| `CustomPodDNS` | `false` | Alpha | 1.9 | 1.9 |
+| `CustomPodDNS` | `true` | Beta | 1.10 | |
+| `CustomResourceSubresources` | `false` | Alpha | 1.10 | |
| `CustomResourceValidation` | `false` | Alpha | 1.8 | 1.8 |
| `CustomResourceValidation` | `true` | Beta | 1.9 | |
-| `DevicePlugins` | `false` | Alpha | 1.8 | |
+| `DebugContainers` | `false` | Alpha | 1.10 | |
+| `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 |
+| `DevicePlugins` | `true` | Beta | 1.10 | |
| `DynamicKubeletConfig` | `false` | Alpha | 1.4 | |
| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 |
| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | |
@@ -50,24 +56,40 @@
| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.8 |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | |
| `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | |
-| `HugePages` | `false` | Alpha | 1.8 | |
+| `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | |
+| `HugePages` | `false` | Alpha | 1.8 | 1.9 |
+| `HugePages` | `true` | Beta | 1.10 | |
+| `HyperVContainer` | `false` | Alpha | 1.10 | |
| `Initializers` | `false` | Alpha | 1.7 | |
-| `KubeletConfigFile` | `false` | Alpha | 1.8 | |
-| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | |
+| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 |
+| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 |
+| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | |
| `MountContainers` | `false` | Alpha | 1.9 | |
-| `MountPropagation` | `false` | Alpha | 1.8 | |
-| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | |
+| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 |
+| `MountPropagation` | `true` | Beta | 1.10 | |
+| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 |
+| `PersistentLocalVolumes` | `true` | Beta | 1.10 | |
| `PodPriority` | `false` | Alpha | 1.8 | |
-| `PVCProtection` | `false` | Alpha | 1.9 | |
+| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | |
+| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 |
+| `ReadOnlyAPIDataVolumes` | `true` | Deprecated | 1.10 | |
| `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | |
| `RotateKubeletClientCertificate` | `true` | Beta | 1.7 | |
| `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | |
+| `RunAsGroup` | `false` | Alpha | 1.10 | |
+| `ScheduleDaemonSetPods` | `false` | Alpha | 1.10 | |
| `ServiceNodeExclusion` | `false` | Alpha | 1.8 | |
+| `StorageObjectInUseProtection` | `true` | Beta | 1.10 | |
| `StreamingProxyRedirects` | `true` | Beta | 1.5 | |
-| `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | |
+| `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | 1.8 |
+| `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 |
+| `SupportIPVSProxyMode` | `true` | Beta | 1.10 | |
+| `SupportPodPidsLimit` | `false` | Alpha | 1.10 | |
| `TaintBasedEvictions` | `false` | Alpha | 1.6 | |
| `TaintNodesByCondition` | `false` | Alpha | 1.8 | |
-| `VolumeScheduling` | `false` | Alpha | 1.9 | |
+| `TokenRequest` | `false` | Alpha | 1.10 | |
+| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 |
+| `VolumeScheduling` | `true` | Beta | 1.10 | |

## Using a Feature

@@ -122,6 +144,7 @@ Each feature gate is designed for enabling/disabling a specific feature:

  See [Raw Block Volume Support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support) for more details.
- `CPUManager`: Enable container level CPU affinity support, see [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
+- `CRIContainerLogRotation`: Enable container log rotation for the CRI container runtime.
- `CSIPersistentVolume`: Enable discovering and mounting volumes provisioned through a
  [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) compatible volume plugin.
@@ -129,7 +152,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CustomPodDNS`: Enable customizing the DNS settings for a Pod using its `dnsConfig` property. Check [Pod's DNS Config](/docs/concepts/services-networking/dns-pod-service/#pods-dns-config) for more details.
-- `CustomeResourceValidation`: Enable schema based validation on resources created from [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/).
+- `CustomResourceSubresources`: Enable `/status` and `/scale` subresources
+  on resources created from [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/).
+- `CustomResourceValidation`: Enable schema-based validation on resources created from
+  [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/).
+- `DebugContainers`: Enable running a "debugging" container in a Pod's namespace to
+  troubleshoot a running Pod.
- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/)
  based resource provisioning on nodes.
- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/).
@@ -142,7 +170,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
  host mounts, or containers that are privileged or using specific non-namespaced capabilities (e.g. `MKNODE`, `SYS_MODULE` etc.). This should only be enabled if user namespace remapping is enabled in the Docker daemon.
+- `GCERegionalPersistentDisk`: Enable the regional PD feature on GCE.
- `HugePages`: Enable the allocation and consumption of pre-allocated [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
+- `HyperVContainer`: Enable [Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container) for Windows containers.
- `Initializers`: Enable the [dynamic admission control](/docs/admin/extensible-admission-controllers/)
  as an extension to the built-in [admission controllers](/docs/admin/admission-controllers/).
  When the `Initializers` admission controller is enabled, this feature is automatically enabled.
@@ -158,6 +188,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from
  being deleted when it is still used by any Pod. More details can be found [here](/docs/tasks/administer-cluster/pvc-protection/).
+- `ReadOnlyAPIDataVolumes`: Set Secret, ConfigMap, DownwardAPI and projected volumes to be mounted in read-only mode.
+  This gate exists only for backward compatibility. It will be removed in the 1.11 release.
- `ResourceLimitsPriorityFunction`: Enable a scheduler priority function that
  assigns a lowest possible score of 1 to a node that satisfies at least one of
  the input Pod's cpu and memory limits. The intent is to break ties between
@@ -166,16 +198,22 @@ Each feature gate is designed for enabling/disabling a specific feature:
  See [kubelet configuration](/docs/admin/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
- `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet. See [kubelet configuration](/docs/admin/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
+- `RunAsGroup`: Enable control over the primary group ID set on the init processes of containers.
+- `ScheduleDaemonSetPods`: Enable DaemonSet Pods to be scheduled by the default scheduler instead of the DaemonSet controller.
- `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers created by a cloud provider.
  A node is eligible for exclusion if annotated with "`alpha.service-controller.kubernetes.io/exclude-balancer`" key.
+- `StorageObjectInUseProtection`: Postpone the deletion of PersistentVolume or
+  PersistentVolumeClaim objects if they are still being used.
- `StreamingProxyRedirects`: Instructs the API server to intercept (and follow)
  redirects from the backend (kubelet) for streaming requests.
  Examples of streaming requests include the `exec`, `attach` and `port-forward` requests.
- `SupportIPVSProxyMode`: Enable providing in-cluster service load balancing using IPVS.
  See [service proxies](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies) for more details.
+- `SupportPodPidsLimit`: Enable support for limiting PIDs in Pods.
- `TaintBasedEvictions`: Enable evicting pods from nodes based on taints on nodes and tolerations on Pods.
  See [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/) for more details.
- `TaintNodesByCondition`: Enable automatic tainting nodes based on
  [node conditions](/docs/concepts/architecture/nodes/#condition).
+- `TokenRequest`: Enable the `TokenRequest` endpoint on service account resources.
- `VolumeScheduling`: Enable volume topology aware scheduling and make the
  PersistentVolumeClaim (PVC) binding aware of scheduling decisions. It also
  enables the usage of [`local`](/docs/concepts/storage/volumes/#local) volume

diff --git a/docs/reference/index.md b/docs/reference/index.md
index 428a468722147..5b88d5e3d0bc9 100644
--- a/docs/reference/index.md
+++ b/docs/reference/index.md
@@ -8,7 +8,8 @@ approvers:

* [Kubernetes API Overview](/docs/reference/api-overview/) - Overview of the API for Kubernetes.
* Kubernetes API Versions
-  * [1.9](/docs/reference/generated/kubernetes-api/v1.9/)
+  * [1.10](/docs/reference/generated/kubernetes-api/v1.10/)
+  * [1.9](https://v1-9.docs.kubernetes.io/docs/reference/)
  * [1.8](https://v1-8.docs.kubernetes.io/docs/reference/)
  * [1.7](https://v1-7.docs.kubernetes.io/docs/reference/)
  * [1.6](https://v1-6.docs.kubernetes.io/docs/reference/)

diff --git a/docs/reference/kubectl/cheatsheet.md b/docs/reference/kubectl/cheatsheet.md
index 7502b9420b9cf..58003cb0d9f38 100644
--- a/docs/reference/kubectl/cheatsheet.md
+++ b/docs/reference/kubectl/cheatsheet.md
@@ -253,7 +253,7 @@ Resource type | Abbreviated alias

`configmaps` |`cm`
`controllerrevisions` |
`cronjobs` |
-`customresourcedefinition` |`crd`
+`customresourcedefinition` |`crd`, `crds`
`daemonsets` |`ds`
`deployments` |`deploy`
`endpoints` |`ep`

diff --git a/docs/reference/kubectl/overview.md b/docs/reference/kubectl/overview.md
index 5ae7485c5bc8a..70221e2c24354 100644
--- a/docs/reference/kubectl/overview.md
+++ b/docs/reference/kubectl/overview.md
@@ -192,6 +192,27 @@ NAME      RSRC
submit-queue   610995
```

+#### Server-side columns
+
+`kubectl` supports receiving specific column information about objects from the server.
+This means that for any given resource, the server will return columns and rows relevant to that resource, for the client to print.
+This allows for consistent human-readable output across clients used against the same cluster, by having the server encapsulate the details of printing.
+
+To output object information using this feature, you can add the `--experimental-server-print` flag to a supported `kubectl` command.
+
+##### Examples
+
+```shell
+$ kubectl get pods --experimental-server-print
+```
+
+The result of running this command is:
+
+```shell
+NAME                       READY     STATUS    RESTARTS   AGE
+pod-name                   1/1       Running   0          1m
+```
+

### Sorting list objects

To output objects to a sorted list in your terminal window, you can add the `--sort-by` flag to a supported `kubectl` command. Sort your objects by specifying any numeric or string field with the `--sort-by` flag. To specify a field, use a [jsonpath](/docs/user-guide/jsonpath) expression.

@@ -245,6 +266,9 @@ $ kubectl get ds --include-uninitialized

// List all pods running on node server01
$ kubectl get pods --field-selector=spec.nodeName=server01
+
+// List all pods in plain-text output format, delegating the details of printing to the server
+$ kubectl get pods --experimental-server-print
```

`kubectl describe` - Display detailed state of one or more resources, including the uninitialized ones by default.

diff --git a/docs/reference/setup-tools/kubeadm/implementation-details.md b/docs/reference/setup-tools/kubeadm/implementation-details.md
index 834a5e7266462..b7219b98e22b9 100644
--- a/docs/reference/setup-tools/kubeadm/implementation-details.md
+++ b/docs/reference/setup-tools/kubeadm/implementation-details.md
@@ -233,7 +233,7 @@ Other API server flags that are set unconditionally are:

 - `--enable-bootstrap-token-auth=true` to enable the `BootstrapTokenAuthenticator` authentication module. See [TLS Bootstrapping](/docs/admin/kubelet-tls-bootstrapping.md) for more details
 - `--allow-privileged` to `true` (required e.g. by kube proxy)
 - `--requestheader-client-ca-file` to `front-proxy-ca.crt`
-  - `--admission-control` to:
+  - `--enable-admission-plugins` to:
   - [`Initializers`](/docs/admin/admission-controllers/#initializers-alpha) to enable [Dynamic Admission Control](/docs/admin/extensible-admission-controllers/).
   - [`NamespaceLifecycle`](/docs/admin/admission-controllers/#namespacelifecycle) e.g.
to avoid deletion of system reserved namespaces

diff --git a/docs/setup/pick-right-solution.md b/docs/setup/pick-right-solution.md
index f0fd33155c3cb..122df965e0f3c 100644
--- a/docs/setup/pick-right-solution.md
+++ b/docs/setup/pick-right-solution.md
@@ -165,6 +165,7 @@ GCE | CoreOS | CoreOS | flannel | [docs](/docs/gettin

Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa))
VMware vSphere | any | multi-support | multi-support | [docs](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html)
+Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap))
lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)
Azure | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes)

diff --git a/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
index 909dc31e10ae7..398b296d5514d 100644
--- a/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
+++ b/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
@@ -22,53 +22,93 @@ for database debugging.

{% capture steps %}

-## Creating a pod to run a Redis server
+## Creating a Redis deployment and service

-1. Create a pod:
+1. Create a Redis deployment:

-       kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/redis-master.yaml
+       kubectl create -f https://k8s.io/docs/tutorials/stateless-application/guestbook/redis-master-deployment.yaml

-   The output of a successful command verifies that the pod was created:
+   The output of a successful command verifies that the deployment was created:

-       pod "redis-master" created
+       deployment "redis-master" created
+
+   When the pod is ready, you can list the pod, deployment, and replica set:
+
+       kubectl get pods
-1. Check to see whether the pod is running and ready:
+       NAME                            READY     STATUS    RESTARTS   AGE
+       redis-master-765d459796-258hz   1/1       Running   0          50s
-       kubectl get pods
+       kubectl get deployment
+
+       NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
+       redis-master   1         1         1            1           55s
+
+       kubectl get rs
+       NAME                      DESIRED   CURRENT   READY     AGE
+       redis-master-765d459796   1         1         1         1m
+
+
+2.
Create a Redis service:
+
+       kubectl create -f https://k8s.io/docs/tutorials/stateless-application/guestbook/redis-master-service.yaml
+
+   The output of a successful command verifies that the service was created:
+
+       service "redis-master" created
+
+   Check that the service was created:

-   When the pod is ready, the output displays a STATUS of Running:
+       kubectl get svc | grep redis

-       NAME           READY     STATUS    RESTARTS   AGE
-       redis-master   2/2       Running   0          41s
+       NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
+       redis-master   ClusterIP   10.0.0.213   <none>        6379/TCP   27s

-1. Verify that the Redis server is running in the pod and listening on port 6379:
+3. Verify that the Redis server is running in the pod and listening on port 6379:

   {% raw %}
-       kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
+       kubectl get pods redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
   {% endraw %}

   The output displays the port:

       6379
+

## Forward a local port to a port on the pod

-1. Forward port 6379 on the local workstation to port 6379 of redis-master pod:
+1. Since Kubernetes v1.10, `kubectl port-forward` allows using a resource name, such as a service name, to select a matching pod to forward ports to.
+
+       kubectl port-forward redis-master-765d459796-258hz 6379:6379
+
+   which is the same as
+
+       kubectl port-forward pods/redis-master-765d459796-258hz 6379:6379
+
+   or
+
+       kubectl port-forward deployment/redis-master 6379:6379
+
+   or
+
+       kubectl port-forward rs/redis-master 6379:6379
+
+   or

-       kubectl port-forward redis-master 6379:6379
+       kubectl port-forward svc/redis-master 6379:6379

-   The output is similar to this:
+   Any of the above commands works. The output is similar to this:

       I0710 14:43:38.274550    3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
       I0710 14:43:38.274797    3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379

-1. Start the Redis command line interface:
+2. Start the Redis command line interface:

-       redis-cli
+       redis-cli

-1. At the Redis command line prompt, enter the `ping` command:
+3. At the Redis command line prompt, enter the `ping` command:

-       127.0.0.1:6379>ping
+       127.0.0.1:6379>ping

   A successful ping request returns PONG.

diff --git a/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions.md b/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions.md
index c84bb2849e85e..cd638ebc4dc30 100644
--- a/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions.md
+++ b/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions.md
@@ -216,8 +216,8 @@ Validation of custom objects is possible via [OpenAPI v3 schema](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schemaObject). Additionally, the following restrictions are applied to the schema:

-- The fields `default`, `nullable`, `discriminator`, `readOnly`, `writeOnly`, `xml` and
-`deprecated` cannot be set.
+- The fields `default`, `nullable`, `discriminator`, `readOnly`, `writeOnly`, `xml`,
+`deprecated` and `$ref` cannot be set.
- The field `uniqueItems` cannot be set to true.
- The field `additionalProperties` cannot be set to false.

@@ -328,6 +328,218 @@ And create it:

```
kubectl create -f my-crontab.yaml
crontab "my-new-cron-object" created
```
+
+### Subresources
+
+Custom resources support `/status` and `/scale` subresources.
+This feature is __alpha__ in v1.10 and may change in backward incompatible ways.
+
+Enable this feature using the `CustomResourceSubresources` feature gate on
+the [kube-apiserver](/docs/admin/kube-apiserver):
+
+```
+--feature-gates=CustomResourceSubresources=true
+```
+
+When the `CustomResourceSubresources` feature gate is enabled, only the `properties` construct
+is allowed in the root schema for custom resource validation.
+
+The status and scale subresources can optionally be enabled by
+defining them in the CustomResourceDefinition.
+
+#### Status subresource
+
+When the status subresource is enabled, the `/status` subresource for the custom resource is exposed.
+
+- The status and the spec stanzas are represented by the `.status` and `.spec` JSONPaths respectively inside of a custom resource.
+- `PUT` requests to the `/status` subresource take a custom resource object and ignore changes to anything except the status stanza.
+- `PUT` requests to the `/status` subresource only validate the status stanza of the custom resource.
+- `PUT`/`POST`/`PATCH` requests to the custom resource ignore changes to the status stanza.
+- Any changes to the spec stanza increment the value at `.metadata.generation`.
+
+#### Scale subresource
+
+When the scale subresource is enabled, the `/scale` subresource for the custom resource is exposed.
+The `autoscaling/v1.Scale` object is sent as the payload for `/scale`.
+
+To enable the scale subresource, the following values are defined in the CustomResourceDefinition.
+
+- `SpecReplicasPath` defines the JSONPath inside of a custom resource that corresponds to `Scale.Spec.Replicas`.
+
+  - It is a required value.
+  - Only JSONPaths under `.spec` and with the dot notation are allowed.
+  - If there is no value under the `SpecReplicasPath` in the custom resource,
+the `/scale` subresource will return an error on GET.
+
+- `StatusReplicasPath` defines the JSONPath inside of a custom resource that corresponds to `Scale.Status.Replicas`.
+
+  - It is a required value.
+  - Only JSONPaths under `.status` and with the dot notation are allowed.
+  - If there is no value under the `StatusReplicasPath` in the custom resource,
+the status replica value in the `/scale` subresource will default to 0.
+
+- `LabelSelectorPath` defines the JSONPath inside of a custom resource that corresponds to `Scale.Status.Selector`.
+
+  - It is an optional value.
+  - It must be set to work with HPA.
+  - Only JSONPaths under `.status` and with the dot notation are allowed.
+  - If there is no value under the `LabelSelectorPath` in the custom resource,
+the status selector value in the `/scale` subresource will default to the empty string.
+
+In the following example, both status and scale subresources are enabled.
+
+Save the CustomResourceDefinition to `resourcedefinition.yaml`:
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: crontabs.stable.example.com
+spec:
+  group: stable.example.com
+  version: v1
+  scope: Namespaced
+  names:
+    plural: crontabs
+    singular: crontab
+    kind: CronTab
+    shortNames:
+    - ct
+  # subresources describes the subresources for custom resources.
+  subresources:
+    # status enables the status subresource.
+    status: {}
+    # scale enables the scale subresource.
+    scale:
+      # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.
+      specReplicasPath: .spec.replicas
+      # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.
+      statusReplicasPath: .status.replicas
+      # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.
+      labelSelectorPath: .status.labelSelector
+```
+
+And create it:
+
+```shell
+kubectl create -f resourcedefinition.yaml
+```
+
+After the CustomResourceDefinition object has been created, you can create custom objects.
+
+If you save the following YAML to `my-crontab.yaml`:
+
+```yaml
+apiVersion: "stable.example.com/v1"
+kind: CronTab
+metadata:
+  name: my-new-cron-object
+spec:
+  cronSpec: "* * * * */5"
+  image: my-awesome-cron-image
+  replicas: 3
+```
+
+and create it:
+
+```shell
+kubectl create -f my-crontab.yaml
+```
+
+Then new namespaced RESTful API endpoints are created at:
+
+```
+/apis/stable.example.com/v1/namespaces/*/crontabs/status
+```
+
+and
+
+```
+/apis/stable.example.com/v1/namespaces/*/crontabs/scale
+```
+
+A custom resource can be scaled using the `kubectl scale` command.
+For example, the following command sets `.spec.replicas` of the
+custom resource created above to 5:
+
+```shell
+kubectl scale --replicas=5 crontabs/my-new-cron-object
+crontabs "my-new-cron-object" scaled
+
+kubectl get crontabs my-new-cron-object -o jsonpath='{.spec.replicas}'
+5
+```
+
+### Categories
+
+Categories is a list of grouped resources the custom resource belongs to (e.g. `all`).
+You can use `kubectl get <category-name>` to list the resources belonging to the category.
+This feature is __beta__ and available for custom resources from v1.10.
+
+The following example adds `all` in the list of categories in the CustomResourceDefinition
+and illustrates how to output the custom resource using `kubectl get all`.
+
+Save the following CustomResourceDefinition to `resourcedefinition.yaml`:
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: crontabs.stable.example.com
+spec:
+  group: stable.example.com
+  version: v1
+  scope: Namespaced
+  names:
+    plural: crontabs
+    singular: crontab
+    kind: CronTab
+    shortNames:
+    - ct
+    # categories is a list of grouped resources the custom resource belongs to.
+    categories:
+    - all
+```
+
+And create it:
+
+```shell
+kubectl create -f resourcedefinition.yaml
+```
+
+After the CustomResourceDefinition object has been created, you can create custom objects.
+
+Save the following YAML to `my-crontab.yaml`:
+
+```yaml
+apiVersion: "stable.example.com/v1"
+kind: CronTab
+metadata:
+  name: my-new-cron-object
+spec:
+  cronSpec: "* * * * */5"
+  image: my-awesome-cron-image
+```
+
+and create it:
+
+```shell
+kubectl create -f my-crontab.yaml
+```
+
+You can specify the category using `kubectl get`:
+
+```
+kubectl get all
+```
+
+and it will include the custom resources of kind `CronTab`:
+
+```console
+NAME                          AGE
+crontabs/my-new-cron-object   3s
+```
+
{% endcapture %}

{% capture whatsnext %}

diff --git a/docs/tasks/administer-cluster/coredns.md b/docs/tasks/administer-cluster/coredns.md
index 844531067d986..9ec5b3bebc152 100644
--- a/docs/tasks/administer-cluster/coredns.md
+++ b/docs/tasks/administer-cluster/coredns.md
@@ -5,7 +5,7 @@ title: Using CoreDNS for Service Discovery

min-kubernetes-server-version: v1.9
---

-{% include feature-state-alpha.md %}
+{% include feature-state-beta.md %}

{% capture overview %}
This page describes how to enable CoreDNS instead of kube-dns for service
discovery.

@@ -20,8 +20,9 @@
## Installing CoreDNS with kubeadm

-In Kubernetes 1.9, [CoreDNS](https://coredns.io) is available as an alpha feature and
-may be installed by setting the `CoreDNS` feature gate to `true` during `kubeadm init`:
+In Kubernetes 1.9, [CoreDNS](https://coredns.io) is available as an alpha feature, and
+in Kubernetes 1.10 it is available as a beta feature. In either case, you may install
+it during cluster creation by setting the `CoreDNS` feature gate to `true` during `kubeadm init`:

```
kubeadm init --feature-gates=CoreDNS=true
@@ -29,6 +30,21 @@ kubeadm init --feature-gates=CoreDNS=true

This installs CoreDNS instead of kube-dns.

+## Upgrading an Existing Cluster with kubeadm
+
+In Kubernetes 1.10, you can also move to CoreDNS when you use `kubeadm` to upgrade
+a cluster that is using `kube-dns`. In this case, `kubeadm` will generate the CoreDNS configuration
+("Corefile") based upon the `kube-dns` ConfigMap, preserving configurations for federation,
+stub domains, and upstream name servers.
+
+Note that if you are already running CoreDNS in your cluster prior to the upgrade, your existing Corefile will be
+**overwritten** by the one created during the upgrade. **You should save your existing ConfigMap
+if you have customized it.** You may re-apply your customizations after the new ConfigMap is
+up and running.
+
+This process will be modified for the GA release of this feature, such that an existing
+Corefile will not be overwritten.
+
{% endcapture %}

{% capture whatsnext %}

diff --git a/docs/tasks/administer-cluster/cpu-management-policies.md b/docs/tasks/administer-cluster/cpu-management-policies.md
index 35bd14b2ee596..60fa988ec053e 100644
--- a/docs/tasks/administer-cluster/cpu-management-policies.md
+++ b/docs/tasks/administer-cluster/cpu-management-policies.md
@@ -33,9 +33,8 @@ management policies to determine some placement preferences on the node.

### Configuration

-The CPU Manager is introduced as an alpha feature in Kubernetes v1.8. It
-must be explicitly enabled in the kubelet feature gates:
-`--feature-gates=CPUManager=true`.
+The CPU Manager was introduced as an alpha feature in Kubernetes v1.8. As of
+v1.10 it is a beta feature and is enabled by default.

The CPU Manager policy is set with the `--cpu-manager-policy` kubelet option. There are two supported policies:

diff --git a/docs/tasks/administer-cluster/encrypt-data.md b/docs/tasks/administer-cluster/encrypt-data.md
index 76766e8544be1..a238ed42390f0 100644
--- a/docs/tasks/administer-cluster/encrypt-data.md
+++ b/docs/tasks/administer-cluster/encrypt-data.md
@@ -78,7 +78,8 @@ Name | Encryption | Strength | Speed | Key Length | Other Considerations

`identity` | None | N/A | N/A | N/A | Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written.
`aescbc` | AES-CBC with PKCS#7 padding | Strongest | Fast | 32-byte | The recommended choice for encryption at rest but may be slightly slower than `secretbox`.
`secretbox` | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review.
-`aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented. 
+`aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented.
+`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with PKCS#7 padding, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-byte | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/)

Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider is the first provider, the first key is used for encryption.

diff --git a/docs/tasks/administer-cluster/kms-provider.md b/docs/tasks/administer-cluster/kms-provider.md
new file mode 100644
index 0000000000000..ae6dcd4bfef98
--- /dev/null
+++ b/docs/tasks/administer-cluster/kms-provider.md
@@ -0,0 +1,181 @@
+---
+approvers:
+- smarterclayton
+title: Using a KMS provider for data encryption
+---
+{% capture overview %}
+This page shows how to configure a Key Management Service (KMS) provider and plugin to enable secret data encryption.
+{% endcapture %}
+
+{% capture prerequisites %}
+
+* {% include task-tutorial-prereqs.md %}
+
+* Kubernetes version 1.10.0 or later is required
+
+* etcd v3 or later is required
+
+{% assign for_k8s_version="v1.10" %}{% include feature-state-alpha.md %}
+
+{% endcapture %}
+
+{% capture steps %}
+
+The KMS encryption provider uses an envelope encryption scheme to encrypt data in etcd. The data is encrypted using a data encryption key (DEK); a new DEK is generated for each encryption. The DEKs are encrypted with a key encryption key (KEK) that is stored and managed in a remote KMS. The KMS provider uses gRPC to communicate with a specific KMS
+plugin. The KMS plugin, which is implemented as a gRPC server and deployed on the same host(s) as the Kubernetes master(s), is responsible for all communication with the remote KMS.
+
+## Configuring the KMS provider
+
+To configure a KMS provider on the API server, include a provider of type `kms` in the providers array in the encryption configuration file and set the following properties:
+
+ * `name`: Display name of the KMS plugin.
+ * `endpoint`: Listen address of the gRPC server (KMS plugin). The endpoint is a UNIX domain socket.
+ * `cachesize`: Number of data encryption keys (DEKs) to be cached in the clear. When cached, DEKs can be used without another call to the KMS, whereas DEKs that are not cached require a call to the KMS to unwrap.
+
+See [Understanding the encryption at rest configuration](/docs/tasks/administer-cluster/encrypt-data).
+
+## Implementing a KMS plugin
+
+To implement a KMS plugin, you can develop a new plugin gRPC server or enable a KMS plugin already provided by your cloud provider. You then integrate the plugin with the remote KMS and deploy it on the Kubernetes master.
+
+### Enabling the KMS supported by your cloud provider
+Refer to your cloud provider for instructions on enabling the cloud provider-specific KMS plugin.
+
+### Developing a KMS plugin gRPC server
+You can develop a KMS plugin gRPC server using a stub file available for Go. For other languages, you use a proto file to create a stub file that you can use to develop the gRPC server code.
+
+* Using Go: Use the functions and data structures in the stub file: [service.pb.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/v1beta1/service.pb.go) to develop the gRPC server code.
+
+* Using languages other than Go: Use the protoc compiler with the proto file: [service.proto](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/v1beta1/service.proto) to generate a stub file for the specific language.
+
+Then use the functions and data structures in the stub file to develop the server code.
+
+**Notes:**
+
+* KMS plugin version: `v1beta1`
+
+In response to the `Version` procedure call, a compatible KMS plugin should return `v1beta1` as `VersionResponse.version`.
+
+* message version: `v1beta1`
+
+All messages from the KMS provider have the version field set to the current version, `v1beta1`.
+
+* protocol: UNIX domain socket (`unix`)
+
+The gRPC server should listen on a UNIX domain socket.
+
+### Integrating a KMS plugin with the remote KMS
+The KMS plugin can communicate with the remote KMS using any protocol supported by the KMS.
+All configuration data, including authentication credentials the KMS plugin uses to communicate with the remote KMS,
+is stored and managed by the KMS plugin independently. The KMS plugin can encode the ciphertext with additional metadata that may be required before sending it to the KMS for decryption.
+
+### Deploying the KMS plugin
+Ensure that the KMS plugin runs on the same host(s) as the Kubernetes master(s).
+
+## Encrypting your data with the KMS provider
+To encrypt the data:
+
+1. Create a new encryption configuration file using the appropriate properties for the `kms` provider:
+
+```yaml
+kind: EncryptionConfig
+apiVersion: v1
+resources:
+  - resources:
+    - secrets
+    providers:
+    - kms:
+        name: myKmsPlugin
+        endpoint: unix:///tmp/socketfile.sock
+        cachesize: 100
+    - identity: {}
+```
+
+2. Set the `--experimental-encryption-provider-config` flag on the kube-apiserver to point to the location of the configuration file.
+3. Restart your API server.
+
+## Verifying that the data is encrypted
+Data is encrypted when written to etcd. After restarting your kube-apiserver, any newly created or updated secret should be encrypted when stored. To verify, you can use the etcdctl command line program to retrieve the contents of your secret.
+
+1. Create a new secret called `secret1` in the `default` namespace:
+```
+kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
+```
+2. Using the etcdctl command line, read that secret out of etcd:
+```
+ETCDCTL_API=3 etcdctl get /kubernetes.io/secrets/default/secret1 [...] | hexdump -C
+```
+ where `[...]` must be the additional arguments for connecting to the etcd server.
+
+3. Verify the stored secret is prefixed with `k8s:enc:kms:v1:`, which indicates that the `kms` provider has encrypted the resulting data.
+
+4. Verify that the secret is correctly decrypted when retrieved via the API:
+```
+kubectl describe secret secret1 -n default
+```
+should match `mykey: mydata`
+
+## Ensuring all secrets are encrypted
+Because secrets are encrypted on write, performing an update on a secret encrypts that content.
+
+The following command reads all secrets and then updates them to apply server-side encryption. If an error occurs due to a conflicting write, retry the command. For larger clusters, you may wish to subdivide the secrets by namespace or script an update.
+```
+kubectl get secrets --all-namespaces -o json | kubectl replace -f -
+```
+
+## Switching from a local encryption provider to the KMS provider
+To switch from a local encryption provider to the `kms` provider and re-encrypt all of the secrets:
+
+1. Add the `kms` provider as the first entry in the configuration file as shown in the following example.
+
+```yaml
+kind: EncryptionConfig
+apiVersion: v1
+resources:
+  - resources:
+    - secrets
+    providers:
+    - kms:
+        name: myKmsPlugin
+        endpoint: unix:///tmp/socketfile.sock
+        cachesize: 100
+    - aescbc:
+        keys:
+        - name: key1
+          secret: <BASE 64 ENCODED SECRET>
+```
+
+2. Restart all kube-apiserver processes.
+
+3. Run the following command to force all secrets to be re-encrypted using the `kms` provider:
+
+```
+kubectl get secrets --all-namespaces -o json | kubectl replace -f -
+```
+
+## Disabling encryption at rest
+To disable encryption at rest:
+
+1. Place the `identity` provider as the first entry in the configuration file:
+
+```yaml
+kind: EncryptionConfig
+apiVersion: v1
+resources:
+  - resources:
+    - secrets
+    providers:
+    - identity: {}
+    - kms:
+        name: myKmsPlugin
+        endpoint: unix:///tmp/socketfile.sock
+        cachesize: 100
+```
+2. Restart all kube-apiserver processes.
+3. Run the following command to force all secrets to be decrypted.
+```
+kubectl get secrets --all-namespaces -o json | kubectl replace -f -
+```
+{% endcapture %}
+
+{% include templates/task.md %}

diff --git a/docs/tasks/administer-cluster/kubelet-config-file.md b/docs/tasks/administer-cluster/kubelet-config-file.md
index 424ad3a9718d5..c415e312a98c0 100644
--- a/docs/tasks/administer-cluster/kubelet-config-file.md
+++ b/docs/tasks/administer-cluster/kubelet-config-file.md
@@ -6,18 +6,20 @@ title: Set Kubelet parameters via a config file

---

{% capture overview %}
-{% include feature-state-alpha.md %}
+{% include feature-state-beta.md %}

-As of Kubernetes 1.8, a subset of the Kubelet's configuration parameters may be
-set via an on-disk config file, as a substitute for command-line flags. In the
-future, most of the existing command-line flags will be deprecated in favor of
-providing parameters via a config file, which simplifies node deployment.
+A subset of the Kubelet's configuration parameters may be
+set via an on-disk config file, as a substitute for command-line flags.
+This functionality is considered beta in v1.10.
+
+Providing parameters via a config file is the recommended approach because
+it simplifies node deployment and configuration management.

{% endcapture %}

{% capture prerequisites %}

-- A v1.8 or higher Kubelet binary must be installed.
+- A v1.10 or higher Kubelet binary must be installed for beta functionality.

{% endcapture %}

@@ -27,25 +29,42 @@

The subset of the Kubelet's configuration that can be configured via a file
is defined by the `KubeletConfiguration` struct
-[here (v1alpha1)](https://github.com/kubernetes/kubernetes/blob/release-1.9/pkg/kubelet/apis/kubeletconfig/v1alpha1/types.go).
+[here (v1beta1)](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go).
+
The configuration file must be a JSON or YAML representation of the parameters
-in this struct. Note that this structure, and thus the config file API,
-is still considered alpha and is not subject to stability guarantees.
+in this struct. Make sure the Kubelet has read permissions on the file.
+ +Here is an example of what this file might look like: +``` +kind: KubeletConfiguration +apiVersion: kubelet.config.k8s.io/v1beta1 +evictionHard: + memory.available: "200Mi" +``` -Create a file named `kubelet` in its own directory and make sure the directory -and file are both readable by the Kubelet. You should write your intended -Kubelet configuration in this `kubelet` file. +In the example, the Kubelet is configured to evict Pods when available memory drops below 200Mi. +All other Kubelet configuration values are left at their built-in defaults, unless overridden +by flags. Command line flags which target the same value as a config file will override that value. For a trick to generate a configuration file from a live node, see [Reconfigure a Node's Kubelet in a Live Cluster](/docs/tasks/administer-cluster/reconfigure-kubelet). ## Start a Kubelet process configured via the config file -Start the Kubelet with the `KubeletConfigFile` feature gate enabled and the -Kubelet's `--init-config-dir` flag set to the location of the directory -containing the `kubelet` file. The Kubelet will then load the parameters defined -by `KubeletConfiguration` from the `kubelet` file, rather than from their -associated command-line flags. +Start the Kubelet with the `--config` flag set to the path of the Kubelet's config file. +The Kubelet will then load its config from this file. + +Note that command line flags which target the same value as a config file will override that value. +This helps ensure backwards compatibility with the command-line API. + +Note that relative file paths in the Kubelet config file are resolved relative to the +location of the Kubelet config file, whereas relative paths in command line flags are resolved +relative to the Kubelet's current working directory. + +Note that some default values differ between command-line flags and the Kubelet config file. +If `--config` is provided and the values are not specified via the command line, the +defaults for the `KubeletConfiguration` version apply. +In the above example, this version is `kubelet.config.k8s.io/v1beta1`. {% endcapture %} @@ -54,16 +73,8 @@ associated command-line flags. ## Relationship to Dynamic Kubelet Config If you are using the [Dynamic Kubelet Configuration](/docs/tasks/administer-cluster/reconfigure-kubelet) -feature, the configuration provided via `--init-config-dir` will be considered -the "last known good" configuration by the automatic rollback mechanism. - -Note that the layout of the files in the `--init-config-dir` mirrors the layout -of data in the ConfigMaps used for Dynamic Kubelet Config; the file names are -the same as the keys of the ConfigMap, and the file contents are JSON or YAML -representations of the same structures. Today, the only pair is -`kubelet:KubeletConfiguration`, though more may emerge in the future. -See [Reconfigure a Node's Kubelet in a Live Cluster](/docs/tasks/administer-cluster/reconfigure-kubelet) -for more information. +feature, the combination of configuration provided via `--config` and any flags which override these values +is considered the default "last known good" configuration by the automatic rollback mechanism. 
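+
+For example, with a kubelet started as follows (the paths and flag value are illustrative), the local "last known good" configuration would be the contents of the config file combined with the `--max-pods` override:
+
+```
+# maxPods from /var/lib/kubelet/config.yaml is overridden by --max-pods=50
+kubelet --config=/var/lib/kubelet/config.yaml --max-pods=50
+```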
{% endcapture %} diff --git a/docs/tasks/administer-cluster/reconfigure-kubelet.md b/docs/tasks/administer-cluster/reconfigure-kubelet.md index aead7ff58bc3f..63a79bdf89eae 100644 --- a/docs/tasks/administer-cluster/reconfigure-kubelet.md +++ b/docs/tasks/administer-cluster/reconfigure-kubelet.md @@ -42,21 +42,21 @@ The basic workflow for configuring a Kubelet in a live cluster is as follows: 1. Write a YAML or JSON configuration file containing the Kubelet's configuration. 2. Wrap this file in a ConfigMap and save it to the Kubernetes control plane. -3. Update the Kubelet's correspoinding Node object to use this ConfigMap. +3. Update the Kubelet's corresponding Node object to use this ConfigMap. Each Kubelet watches a configuration reference on its respective Node object. -When this reference changes, the Kubelet downloads the new configuration and -exits. For the feature to work correctly, you must be running a process manager +When this reference changes, the Kubelet downloads the new configuration, +updates a local reference to refer to the file, and exits. +For the feature to work correctly, you must be running a process manager (like systemd) which will restart the Kubelet when it exits. When the Kubelet is restarted, it will begin using the new configuration. -The new configuration completely overrides the old configuration; unspecified -fields in the new configuration will receive their canonical default values. -Some CLI flags do not have an associated configuration field, and will not be -affected by the new configuration. These fields are defined by the KubeletFlags -structure, [here](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubelet/app/options/options.go). +The new configuration completely overrides configuration provided by `--config`, +and is overridden by command-line flags. Unspecified values in the new configuration +will receive default values appropriate to the configuration version +(e.g. `kubelet.config.k8s.io/v1beta1`), unless overridden by flags. -The status of the Node's Kubelet configuration is reported via the `ConfigOK` +The status of the Node's Kubelet configuration is reported via the `KubeletConfigOK` condition in the Node status. Once you have updated a Node to use the new ConfigMap, you can observe this condition to confirm that the Node is using the intended configuration. A table describing the possible conditions can be found @@ -95,13 +95,13 @@ and you will simply edit a copy of this file (which, as a best practice, should live in version control) while creating the first Kubelet ConfigMap. Today, however, the Kubelet is still bootstrapped with command-line flags. Fortunately, there is a dirty trick you can use to generate a config file containing a Node's -current configuration. The trick involves hitting the Kubelet server's `configz` +current configuration. The trick involves accessing the Kubelet server's `configz` endpoint via the kubectl proxy. This endpoint, in its current implementation, is intended to be used only as a debugging aid, which is part of why this is a -dirty trick. There is ongoing work to improve the endpoint, and in the future -this will be a less "dirty" operation. This trick also requires the `jq` command -to be installed on your machine, for unpacking and editing the JSON response -from the endpoint. +dirty trick. The endpoint may be improved in the future, but until then +it should not be relied on for production scenarios. 
+This trick also requires the `jq` command to be installed on your machine,
+for unpacking and editing the JSON response from the endpoint.

Do the following to generate the file:

@@ -112,12 +112,12 @@ configz endpoint:

```
$ export NODE_NAME=the-name-of-the-node-you-are-reconfiguring
-$ curl -sSL http://localhost:8001/api/v1/proxy/nodes/${NODE_NAME}/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubeletconfig/v1alpha1"' > kubelet_configz_${NODE_NAME}
+$ curl -sSL http://localhost:8001/api/v1/proxy/nodes/${NODE_NAME}/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME}
```

Note that we have to manually add the `kind` and `apiVersion` to the downloaded
object, as these are not reported by the configz endpoint. This is one of the
-limitations of the endpoint that is planned to be fixed in the future.
+limitations of the endpoint.

### Edit the configuration file

@@ -209,29 +209,29 @@ Be sure to specify all three of `name`, `namespace`, and `uid`.

### Observe that the Node begins using the new configuration

Retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and look for the
-`ConfigOK` condition in `status.conditions`. You should see the message
+`KubeletConfigOK` condition in `status.conditions`. You should see the message
`Using current (UID: CONFIG_MAP_UID)` when the Kubelet starts using the
new configuration.

For convenience, you can use the following command (using `jq`) to filter down
-to the `ConfigOK` condition:
+to the `KubeletConfigOK` condition:

```
-$ kubectl get no ${NODE_NAME} -o json | jq '.status.conditions|map(select(.type=="ConfigOK"))'
+$ kubectl get no ${NODE_NAME} -o json | jq '.status.conditions|map(select(.type=="KubeletConfigOK"))'
[
  {
    "lastHeartbeatTime": "2017-09-20T18:08:29Z",
    "lastTransitionTime": "2017-09-20T18:08:17Z",
-    "message": "using current (UID: \"2ebc8d1a-9e2a-11e7-a8dd-42010a800006\")",
+    "message": "using current: /api/v1/namespaces/kube-system/configmaps/my-node-config-gkt4c2m4b2",
    "reason": "passing all checks",
    "status": "True",
-    "type": "ConfigOK"
+    "type": "KubeletConfigOK"
  }
]
```

If something goes wrong, you may see one of several different error conditions,
-detailed in the Table of ConfigOK Conditions, below. When this happens, you
+detailed in the table of KubeletConfigOK conditions, below. When this happens, you
should check the Kubelet's log for more details.

### Edit the configuration file again

@@ -282,16 +282,16 @@ the following, with `name` and `uid` substituted as necessary:

```
configSource:
    configMapRef:
-        name: NEW_CONFIG_MAP_NAME
+        name: ${NEW_CONFIG_MAP_NAME}
        namespace: kube-system
-        uid: NEW_CONFIG_MAP_UID
+        uid: ${NEW_CONFIG_MAP_UID}
```

### Observe that the Kubelet is using the new configuration

Once more, retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and
-look for the `ConfigOK` condition in `status.conditions`. You should see the message
-`Using current (UID: NEW_CONFIG_MAP_UID)` when the Kubelet starts using the
+look for the `KubeletConfigOK` condition in `status.conditions`. You should see the message
+`using current: /api/v1/namespaces/kube-system/configmaps/${NEW_CONFIG_MAP_NAME}` when the Kubelet starts using the
new configuration.

### Deauthorize your Node from reading the old ConfigMap

@@ -327,9 +327,8 @@ remove the `spec.configSource` subfield.
### Observe that the Node is using its local default configuration -After removing this subfield, you should eventually observe that the ConfigOK -condition's message reverts to either `using current (default)` or -`using current (init)`, depending on how the Node was provisioned. +After removing this subfield, you should eventually observe that the KubeletConfigOK +condition's message reverts to `using current: local`. ### Deauthorize your Node fom reading the old ConfigMap @@ -366,9 +365,9 @@ Here is an example command that uses `kubectl patch`: kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMapRef\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"uid\":\"${CONFIG_MAP_UID}\"}}}}" ``` -## Understanding ConfigOK Conditions +## Understanding KubeletConfigOK Conditions -The following table describes several of the `ConfigOK` Node conditions you +The following table describes several of the `KubeletConfigOK` Node conditions you might encounter in a cluster that has Dynamic Kubelet Config enabled. If you observe a condition with `status=False`, you should check the Kubelet log for more error details by searching for the message or reason text. @@ -383,49 +382,33 @@ more error details by searching for the message or reason text. Status -

<tr>
-<td>using current (default)</td>
-<td>current is set to the local default, and no init config was provided</td>
+<td>using current: local</td>
+<td>when the config source is nil, the Kubelet uses its local config</td>
<td>True</td>
</tr>

-<tr>
-<td>using current (init)</td>
-<td>current is set to the local default, and an init config was provided</td>
-<td>True</td>
-</tr>

<tr>
-<td>using current (UID: CURRENT_CONFIG_MAP_UID)</td>
+<td>using current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</td>
<td>passing all checks</td>
<td>True</td>
</tr>

-<tr>
-<td>using last-known-good (default)</td>
-<td>
-<ul>
-<li>failed to load current (UID: CURRENT_CONFIG_MAP_UID)</li>
-<li>failed to parse current (UID: CURRENT_CONFIG_MAP_UID)</li>
-<li>failed to validate current (UID: CURRENT_CONFIG_MAP_UID)</li>
-</ul>
-</td>
-<td>False</td>
-</tr>

<tr>
-<td>using last-known-good (init)</td>
+<td>using last-known-good: local</td>
<td>
<ul>
-<li>failed to load current (UID: CURRENT_CONFIG_MAP_UID)</li>
-<li>failed to parse current (UID: CURRENT_CONFIG_MAP_UID)</li>
-<li>failed to validate current (UID: CURRENT_CONFIG_MAP_UID)</li>
+<li>failed to load current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
+<li>failed to parse current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
+<li>failed to validate current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
</ul>
</td>
<td>False</td>
</tr>

<tr>
-<td>using last-known-good (UID: LAST_KNOWN_GOOD_CONFIG_MAP_UID)</td>
+<td>using last-known-good: /api/v1/namespaces/${LAST_KNOWN_GOOD_CONFIG_MAP_NAMESPACE}/configmaps/${LAST_KNOWN_GOOD_CONFIG_MAP_NAME}</td>
<td>
<ul>
-<li>failed to load current (UID: CURRENT_CONFIG_MAP_UID)</li>
-<li>failed to parse current (UID: CURRENT_CONFIG_MAP_UID)</li>
-<li>failed to validate current (UID: CURRENT_CONFIG_MAP_UID)</li>
+<li>failed to load current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
+<li>failed to parse current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
+<li>failed to validate current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
</ul>
</td>
<td>False</td>
</tr>

@@ -451,15 +434,15 @@ more error details by searching for the message or reason text.

<tr>
<td>failed to sync, reason:
<ul>
<li>failed to read Node from informer object cache</li>
-<li>failed to reset to local (default or init) config</li>
+<li>failed to reset to local config</li>
<li>invalid NodeConfigSource, exactly one subfield must be non-nil, but all were nil</li>
<li>invalid ObjectReference, all of UID, Name, and Namespace must be specified</li>
-<li>invalid ObjectReference, UID SOME_UID does not match UID of downloaded ConfigMap SOME_OTHER_UID</li>
-<li>failed to determine whether object with UID SOME_UID was already checkpointed</li>
-<li>failed to download ConfigMap with name SOME_NAME from namespace SOME_NAMESPACE</li>
-<li>failed to save config checkpoint for object with UID SOME_UID</li>
-<li>failed to set current config checkpoint to default</li>
-<li>failed to set current config checkpoint to object with UID SOME_UID</li>
+<li>invalid ConfigSource.ConfigMapRef.UID: ${UID} does not match ${API_PATH}.UID: ${UID_OF_CONFIG_MAP_AT_API_PATH}</li>
+<li>failed to determine whether object ${API_PATH} with UID ${UID} was already checkpointed</li>
+<li>failed to download ConfigMap with name ${NAME} from namespace ${NAMESPACE}</li>
+<li>failed to save config checkpoint for object ${API_PATH} with UID ${UID}</li>
+<li>failed to set current config checkpoint to local config</li>
+<li>failed to set current config checkpoint to object ${API_PATH} with UID ${UID}</li>
</ul>
</td>
<td>False</td>
</tr>

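As an aside, if `jq` is not installed, a plain JSONPath query can extract the same condition message; this is an editor's sketch, not part of the original task:

```
$ kubectl get node ${NODE_NAME} -o jsonpath='{.status.conditions[?(@.type=="KubeletConfigOK")].message}'
```

Searching the Kubelet log for the printed message or reason text is then the quickest way to find the underlying error.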
diff --git a/docs/tasks/administer-cluster/running-cloud-controller.md b/docs/tasks/administer-cluster/running-cloud-controller.md
index 7fdb2a9433384..fe5ba70ac9cf2 100644
--- a/docs/tasks/administer-cluster/running-cloud-controller.md
+++ b/docs/tasks/administer-cluster/running-cloud-controller.md
@@ -33,7 +33,7 @@ Successfully running cloud-controller-manager requires some changes to your clus

* `kube-apiserver` and `kube-controller-manager` MUST NOT specify the `--cloud-provider` flag. This ensures that they do not run any cloud-specific loops that would be run by cloud controller manager. In the future, this flag will be deprecated and removed.
* `kubelet` must run with `--cloud-provider=external`. This is to ensure that the kubelet is aware that it must be initialized by the cloud controller manager before it is scheduled any work.
-* `kube-apiserver` SHOULD NOT run the `PersistentVolumeLabel` admission controller since the cloud controller manager takes over labeling persistent volumes. To prevent the PersistentVolumeLabel admission plugin from running, make sure the `kube-apiserver` has a `--admission-control` flag with a value that does not include `PersistentVolumeLabel`.
+* `kube-apiserver` SHOULD NOT run the `PersistentVolumeLabel` admission controller since the cloud controller manager takes over labeling persistent volumes. To prevent the PersistentVolumeLabel admission plugin from running in `kube-apiserver`, include `PersistentVolumeLabel` in the list of values passed to the `--disable-admission-plugins` flag.
* For the `cloud-controller-manager` to label persistent volumes, initializers will need to be enabled and an InitializerConfiguration needs to be added to the system. Follow [these instructions](/docs/admin/extensible-admission-controllers.md#enable-initializers-alpha-feature) to enable initializers. Use the following YAML to create the InitializerConfiguration:

{% include code.html language="yaml" file="persistent-volume-label-initializer-config.yaml" ghlink="/docs/tasks/administer-cluster/persistent-volume-label-initializer-config.yaml" %}

diff --git a/docs/tasks/administer-cluster/storage-object-in-use-protection.md b/docs/tasks/administer-cluster/storage-object-in-use-protection.md
new file mode 100644
index 0000000000000..76b552edfdf4c
--- /dev/null
+++ b/docs/tasks/administer-cluster/storage-object-in-use-protection.md
@@ -0,0 +1,315 @@
---
approvers:
- msau42
- jsafrane
title: Storage Object in Use Protection
---

{% capture overview %}
{% assign for_k8s_version="v1.10" %}{% include feature-state-beta.md %}

Persistent volume claims (PVCs) that are in active use by a pod and persistent volumes (PVs) that are bound to PVCs can be protected from premature removal.

{% endcapture %}

{% capture prerequisites %}

- Your cluster runs a version of Kubernetes that supports the Storage Object in Use Protection feature, and the feature is enabled.

{% endcapture %}

{% capture steps %}

## Storage Object in Use Protection feature used for PVC Protection

The example below uses a GCE PD `StorageClass`; however, similar steps can be performed for any volume type.

Create a `StorageClass` for convenient storage provisioning:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```

Verification scenarios follow.
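As an illustrative sketch (the file name `gce-slow-sc.yaml` is an assumption, not part of the original doc), the class can be created and verified like this:

```shell
# Create the StorageClass from the manifest above and confirm it exists.
kubectl create -f gce-slow-sc.yaml
kubectl get storageclass slow
```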
### Scenario 1: The PVC is not in active use by a pod

- Create a PVC:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: slzc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 3.7Gi
```

- Check that the PVC has the finalizer `kubernetes.io/pvc-protection` set:

```shell
kubectl describe pvc slzc
Name:          slzc
Namespace:     default
StorageClass:  slow
Status:        Bound
Volume:        pvc-bee8c30a-d6a3-11e7-9af0-42010a800002
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      4Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  2m    persistentvolume-controller  Successfully provisioned volume pvc-bee8c30a-d6a3-11e7-9af0-42010a800002 using kubernetes.io/gce-pd
```

- Delete the PVC and check that the PVC (not in active use by a pod) was removed successfully.

### Scenario 2: The PVC is in active use by a pod

- Again, create the same PVC.
- Create a pod that uses the PVC:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app1
spec:
  containers:
    - name: test-pod
      image: k8s.gcr.io/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "date > /mnt/app1.txt; sleep 60 && exit 0 || exit 1"
      volumeMounts:
        - name: path-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: path-pvc
      persistentVolumeClaim:
        claimName: slzc
```

- Wait until the pod status is `Running`, that is, until the PVC is in active use.
- Delete the PVC that is now in active use by a pod and verify that the PVC is not removed but its status is `Terminating`:

```shell
kubectl describe pvc slzc
Name:          slzc
Namespace:     default
StorageClass:  slow
Status:        Terminating (since Fri, 01 Dec 2017 14:47:55 +0000)
Volume:        pvc-803a1f4d-d6a6-11e7-9af0-42010a800002
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      4Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  52s   persistentvolume-controller  Successfully provisioned volume pvc-803a1f4d-d6a6-11e7-9af0-42010a800002 using kubernetes.io/gce-pd
```

- Wait until the pod status is `Terminated` (either delete the pod or wait until it finishes). Afterwards, check that the PVC is removed.

### Scenario 3: A pod starts using a PVC that is in Terminating state

- Again, create the same PVC.
- Create a first pod that uses the PVC:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app1
spec:
  containers:
    - name: test-pod
      image: k8s.gcr.io/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "date > /mnt/app1.txt; sleep 600 && exit 0 || exit 1"
      volumeMounts:
        - name: path-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: path-pvc
      persistentVolumeClaim:
        claimName: slzc
```

- Wait until the pod status is `Running`, that is, until the PVC is in active use.
- Delete the PVC that is now in active use by a pod and verify that the PVC is not removed but its status is `Terminating`:

```shell
kubectl describe pvc slzc
Name:          slzc
Namespace:     default
StorageClass:  slow
Status:        Terminating (since Fri, 01 Dec 2017 14:47:55 +0000)
Volume:        pvc-803a1f4d-d6a6-11e7-9af0-42010a800002
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      4Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  52s   persistentvolume-controller  Successfully provisioned volume pvc-803a1f4d-d6a6-11e7-9af0-42010a800002 using kubernetes.io/gce-pd
```

- Create a second pod that uses the same PVC:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app2
spec:
  containers:
    - name: test-pod
      image: k8s.gcr.io/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "date > /mnt/app1.txt; sleep 600 && exit 0 || exit 1"
      volumeMounts:
        - name: path-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: path-pvc
      persistentVolumeClaim:
        claimName: slzc
```

- Verify that the scheduling of the second pod fails with the following warning:

```
Warning  FailedScheduling  18s (x4 over 21s)  default-scheduler  persistentvolumeclaim "slzc" is being deleted
```

- Wait until the status of both pods is `Terminated` or `Completed` (either delete the pods or wait until they finish). Afterwards, check that the PVC is removed.

## Storage Object in Use Protection feature used for PV Protection

The example below uses a `HostPath` PV.

Verification scenarios follow.

### Scenario 1: The PV is not bound to a PVC

- Create a PV:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  hostPath:
    path: "/tmp/data"
```

- Check that the PV has the finalizer `kubernetes.io/pv-protection` set:

```shell
Name:            task-pv-volume
Labels:          type=local
Annotations:     pv.kubernetes.io/bound-by-controller=yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Terminating (lasts 1m)
Claim:           default/task-pv-claim
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        1Gi
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/data
    HostPathType:
Events:            <none>
```

- Delete the PV and check that the PV (not bound to a PVC) is removed successfully.

### Scenario 2: The PV is bound to a PVC

- Again, create the same PV.

- Create a PVC:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

- Wait until the PV and PVC are bound to each other.
- Delete the PV and verify that the PV is not removed but its status is `Terminating`:

```shell
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                   STORAGECLASS   REASON    AGE
task-pv-volume   1Gi        RWO            Delete           Terminating   default/task-pv-claim   standard                 59s
```

- Delete the PVC and verify that the PV is removed too.

```shell
$ kubectl delete pvc task-pv-claim
persistentvolumeclaim "task-pv-claim" deleted
$ kubectl get pvc
No resources found.
$ kubectl get pv
No resources found.
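# Editorial note (sketch): the kubernetes.io/pv-protection finalizer is what
# held the PV in the Terminating state while the claim still existed; once the
# claim was deleted, the finalizer was removed and the PV was deleted with it.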
```

{% endcapture %}

{% capture discussion %}

{% endcapture %}

{% include templates/task.md %}

diff --git a/docs/tasks/configure-pod-container/configure-service-account.md b/docs/tasks/configure-pod-container/configure-service-account.md
index c480964c8211b..da2950f1003d0 100644
--- a/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/docs/tasks/configure-pod-container/configure-service-account.md
@@ -178,7 +178,7 @@ myregistrykey   kubernetes.io/.dockerconfigjson   1       1d

Next, modify the default service account for the namespace to use this secret as an imagePullSecret.

```shell
-kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
+kubectl patch serviceaccount default -p '{\"imagePullSecrets\": [{\"name\": \"myregistrykey\"}]}'
```

Interactive version requiring manual edit:

diff --git a/docs/tasks/configure-pod-container/share-process-namespace.md b/docs/tasks/configure-pod-container/share-process-namespace.md
new file mode 100644
index 0000000000000..d332835d83abc
--- /dev/null
+++ b/docs/tasks/configure-pod-container/share-process-namespace.md
@@ -0,0 +1,111 @@
---
title: Share Process Namespace between Containers in a Pod
min-kubernetes-server-version: v1.10
approvers:
- verb
- yujuhong
- dchen1107
---

{% capture overview %}

{% include feature-state-alpha.md %}

This page shows how to configure process namespace sharing for a pod. When
process namespace sharing is enabled, processes in a container are visible
to all other containers in that pod.

You can use this feature to configure cooperating containers, such as a log
handler sidecar container, or to troubleshoot container images that don't
include debugging utilities like a shell.

{% endcapture %}

{% capture prerequisites %}

{% include task-tutorial-prereqs.md %}

A special **alpha** feature gate `PodShareProcessNamespace` must be set to true
across the system: `--feature-gates=PodShareProcessNamespace=true`.

{% endcapture %}

{% capture steps %}

## Configure a Pod

Process Namespace Sharing is enabled using the `ShareProcessNamespace` field of
`v1.PodSpec`. For example:

{% include code.html language="yaml" file="share-process-namespace.yaml" ghlink="/docs/tasks/configure-pod-container/share-process-namespace.yaml" %}

1. Create the pod `nginx` on your cluster:

       $ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/share-process-namespace.yaml

1. Attach to the `shell` container and run `ps`:

       $ kubectl attach -it nginx -c shell
       If you don't see a command prompt, try pressing enter.
       / # ps ax
       PID   USER     TIME   COMMAND
           1 root      0:00 /pause
           8 root      0:00 nginx: master process nginx -g daemon off;
          14 101       0:00 nginx: worker process
          15 root      0:00 sh
          21 root      0:00 ps ax

You can signal processes in other containers. For example, send `SIGHUP` to
nginx to restart the worker process. This requires the `SYS_PTRACE` capability.

    / # kill -HUP 8
    / # ps ax
    PID   USER     TIME   COMMAND
        1 root      0:00 /pause
        8 root      0:00 nginx: master process nginx -g daemon off;
       15 root      0:00 sh
       22 101       0:00 nginx: worker process
       23 root      0:00 ps ax

It's even possible to access the file system of another container in the pod
using the `/proc/$pid/root` link.
    / # head /proc/8/root/etc/nginx/nginx.conf

    user  nginx;
    worker_processes  1;

    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;


    events {
        worker_connections  1024;

{% endcapture %}

{% capture discussion %}

## Understanding Process Namespace Sharing

Pods share many resources, so it makes sense that they would also share a process
namespace. Some container images may expect to be isolated from other
containers, though, so it's important to understand these differences:

1. **The container process no longer has PID 1.** Some container images refuse
   to start without PID 1 (for example, containers using `systemd`) or run
   commands like `kill -HUP 1` to signal the container process. In pods with a
   shared process namespace, `kill -HUP 1` will signal the pod sandbox
   (`/pause` in the above example).

1. **Processes are visible to other containers in the pod.** This includes all
   information visible in `/proc`, such as passwords that were passed as arguments
   or environment variables. These are protected only by regular Unix permissions.

1. **Container filesystems are visible to other containers in the pod through the
   `/proc/$pid/root` link.** This makes debugging easier, but it also means
   that filesystem secrets are protected only by filesystem permissions.

{% endcapture %}

{% include templates/task.md %}

diff --git a/docs/tasks/configure-pod-container/share-process-namespace.yaml b/docs/tasks/configure-pod-container/share-process-namespace.yaml
new file mode 100644
index 0000000000000..af812732a247a
--- /dev/null
+++ b/docs/tasks/configure-pod-container/share-process-namespace.yaml
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  shareProcessNamespace: true
  containers:
  - name: nginx
    image: nginx
  - name: shell
    image: busybox
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
    stdin: true
    tty: true

diff --git a/docs/tasks/debug-application-cluster/audit.md b/docs/tasks/debug-application-cluster/audit.md
index 356840acf5e0f..749d8a89ae0ed 100644
--- a/docs/tasks/debug-application-cluster/audit.md
+++ b/docs/tasks/debug-application-cluster/audit.md
@@ -29,6 +29,10 @@ of its execution generates an event, which is then pre-processed according to a
certain policy and written to a backend. You can find more details about the
pipeline in the [design proposal][auditing-proposal].

+**Note:** The audit logging feature increases the memory consumption of the API
+server, because some context required for auditing is stored for each request.
+Additionally, memory consumption depends on the audit logging configuration.
+
## Audit Policy

Audit policy defines rules about what events should be recorded and what data

@@ -72,6 +76,24 @@ In both cases, audit events structure is defined by the API in the
`audit.k8s.io` API group. The current version of the API is
[`v1beta1`][auditing-api].

+**Note:** In the case of patches, the request body is a JSON array of patch operations, not a JSON
+object containing the appropriate Kubernetes API object. For example, the following request body is
+a valid patch request to `/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name`.
+
+```json
+[
+  {
+    "op": "replace",
+    "path": "/spec/parallelism",
+    "value": 0
+  },
+  {
+    "op": "remove",
+    "path": "/spec/template/spec/containers/0/terminationMessagePolicy"
+  }
+]
+```
+
### Log backend

Log backend writes audit events to a file in JSON format.
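Purely as an illustrative sketch (the policy path, log path, and retention values below are assumptions, not values from this document), a minimal log-backend invocation might combine the flags discussed here:

```shell
# Hypothetical log-backend setup: keep rotated files for 7 days,
# keep at most 4 rotated files, and rotate at 100 MB per file.
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=7 \
  --audit-log-maxbackup=4 \
  --audit-log-maxsize=100
```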
You can configure

@@ -91,14 +113,62 @@ audit backend using the following kube-apiserver flags:

- `--audit-webhook-config-file` specifies the path to a file with a webhook
  configuration. Webhook configuration is effectively a [kubeconfig][kubeconfig].
-- `--audit-webhook-mode` define the buffering strategy, one of the following:
-  - `batch` - buffer events and asynchronously send the set of events to the external service
-    This is the default
-  - `blocking` - block API server responses on sending each event to the external service
+- `--audit-webhook-initial-backoff` specifies the amount of time to wait after the first failed
+  request before retrying. Subsequent requests are retried with exponential backoff.

The webhook config file uses the kubeconfig format to specify the remote address of
the service and credentials used to connect to it.

+### Batching
+
+Both log and webhook backends support batching. Using webhook as an example, here's the list of
+available flags. To get the same flag for log backend, replace `webhook` with `log` in the flag
+name. By default, batching is enabled in `webhook` and disabled in `log`. Similarly, by default
+throttling is enabled in `webhook` and disabled in `log`.
+
+- `--audit-webhook-mode` defines the buffering strategy. One of the following:
+  - `batch` - buffer events and asynchronously process them in batches. This is the default.
+  - `blocking` - block API server responses on processing each individual event.
+
+The following flags are used only in the `batch` mode.
+
+- `--audit-webhook-batch-buffer-size` defines the number of events to buffer before batching.
+  If the rate of incoming events overflows the buffer, events are dropped.
+- `--audit-webhook-batch-max-size` defines the maximum number of events in one batch.
+- `--audit-webhook-batch-max-wait` defines the maximum amount of time to wait before unconditionally
+  batching events in the queue.
+- `--audit-webhook-batch-throttle-qps` defines the maximum average number of batches generated
+  per second.
+- `--audit-webhook-batch-throttle-burst` defines the maximum number of batches generated at the same
+  moment if the allowed QPS was underutilized previously.
+
+#### Parameter tuning
+
+Parameters should be set to accommodate the load on the apiserver.
+
+For example, if kube-apiserver receives 100 requests each second, and each request is audited only
+on `ResponseStarted` and `ResponseComplete` stages, you should account for ~200 audit
+events being generated each second. Assuming that there are up to 100 events in a batch,
+you should set the throttling level to at least 2 QPS. Assuming that the backend can take up to
+5 seconds to write events, you should set the buffer size to hold up to 5 seconds of events, that is,
+10 batches, or 1000 events.
+
+In most cases, however, the default parameters should be sufficient and you don't have to worry about
+setting them manually. You can monitor the state of the auditing subsystem using the following
+Prometheus metrics exposed by kube-apiserver, and by checking its logs.
+
+- `apiserver_audit_event_total` metric contains the total number of audit events exported.
+- `apiserver_audit_error_total` metric contains the total number of events dropped due to an error
+  during exporting.
+
+## Multi-cluster setup
+
+If you're extending the Kubernetes API with the [aggregation layer][kube-aggregator], you can also
+set up audit logging for the aggregated apiserver.
To do this, pass the configuration options in the
same format as described above to the aggregated apiserver and set up the log-ingesting pipeline
to pick up audit logs. Different apiservers can have different audit configurations and different
audit policies.

## Log Collector Examples

### Use fluentd to collect and distribute audit events from log file

@@ -250,8 +320,8 @@ plugin which supports full-text search and analytics.

## Legacy Audit

-__Note:__ Legacy Audit is deprecated and is disabled by default since Kubernetes 1.8.
-To fallback to this legacy audit, disable the advanced auditing feature
+__Note:__ Legacy Audit is deprecated and is disabled by default since Kubernetes 1.8. Legacy Audit
+will be removed in 1.12. To fall back to this legacy audit, disable the advanced auditing feature
using the `AdvancedAuditing` feature gate in [kube-apiserver][kube-apiserver]:

```
@@ -299,3 +369,4 @@ and `audit-log-maxage` options.
[fluentd_install_doc]: http://docs.fluentd.org/v0.12/articles/quickstart#step1-installing-fluentd
[logstash]: https://www.elastic.co/products/logstash
[logstash_install_doc]: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
+[kube-aggregator]: /docs/concepts/api-extension/apiserver-aggregation

diff --git a/docs/tasks/federation/set-up-placement-policies-federation.md b/docs/tasks/federation/set-up-placement-policies-federation.md
index 99a18b67573b1..a86f6bffaa7f5 100644
--- a/docs/tasks/federation/set-up-placement-policies-federation.md
+++ b/docs/tasks/federation/set-up-placement-policies-federation.md
@@ -51,10 +51,10 @@ Admission Controller.

Update the Federation API server command line arguments to enable the Admission
Controller and mount the ConfigMap into the container. If there's an existing
-`--admission-control` flag, append `,SchedulingPolicy` instead of adding
+`--enable-admission-plugins` flag, append `,SchedulingPolicy` instead of adding
another line.

-    --admission-control=SchedulingPolicy
+    --enable-admission-plugins=SchedulingPolicy
    --admission-control-config-file=/etc/kubernetes/admission/config.yml

Add the following volume to the Federation API server pod:

diff --git a/docs/tasks/job/parallel-processing-expansion.md b/docs/tasks/job/parallel-processing-expansion.md
index 1dbb6d48875e3..639867a7ba3e0 100644
--- a/docs/tasks/job/parallel-processing-expansion.md
+++ b/docs/tasks/job/parallel-processing-expansion.md
@@ -85,7 +85,7 @@ do not care to see.)

We can check on the pods as well using the same label selector:

```shell
-$ kubectl get pods -l jobgroup=jobexample --show-all
+$ kubectl get pods -l jobgroup=jobexample
NAME                        READY     STATUS      RESTARTS   AGE
process-item-apple-kixwv    0/1       Completed   0          4m
process-item-banana-wrsf7   0/1       Completed   0          4m

@@ -96,7 +96,7 @@ There is not a single command to check on the output of all jobs at once,
but looping over all the pods is pretty easy:

```shell
-$ for p in $(kubectl get pods -l jobgroup=jobexample --show-all -o name)
+$ for p in $(kubectl get pods -l jobgroup=jobexample -o name)
do
  kubectl logs $p
done

@@ -184,11 +184,6 @@ If you have a large number of job objects, you may find that:

- Even using labels, managing so many Job objects is cumbersome.
- You exceed resource quota when creating all the Jobs at once,
  and do not want to wait to create them incrementally.
-- You need a way to easily scale the number of pods running
-  concurrently. One reason would be to avoid using too many
-  compute resources.
-  Another would be to limit the number of
-  concurrent requests to a shared resource, such as a database,
-  used by all the pods in the job.
- Very large numbers of jobs created at once overload the
  Kubernetes apiserver, controller, or scheduler.

diff --git a/docs/tasks/manage-gpus/scheduling-gpus.md b/docs/tasks/manage-gpus/scheduling-gpus.md
index 6e12c298699e3..0fb2cab71badf 100644
--- a/docs/tasks/manage-gpus/scheduling-gpus.md
+++ b/docs/tasks/manage-gpus/scheduling-gpus.md
@@ -14,9 +14,10 @@ consume GPUs across different Kubernetes versions and the current limitations.

**From 1.8 onwards, the recommended way to consume GPUs is to use [device
plugins](/docs/concepts/cluster-administration/device-plugins).**

-To enable GPU support through device plugins, a special **alpha** feature gate
-`DevicePlugins` has to be set to true across the system:
-`--feature-gates="DevicePlugins=true"`.
+To enable GPU support through device plugins before 1.10, the `DevicePlugins`
+feature gate has to be explicitly set to true across the system:
+`--feature-gates="DevicePlugins=true"`. This is no longer required starting
+from 1.10.

Then you have to install NVIDIA drivers on the nodes and run an NVIDIA GPU device
plugin ([see below](#deploying-nvidia-gpu-device-plugin)).

diff --git a/docs/tasks/manage-hugepages/scheduling-hugepages.md b/docs/tasks/manage-hugepages/scheduling-hugepages.md
index fbce7ec3a7ab7..71d46b9202204 100644
--- a/docs/tasks/manage-hugepages/scheduling-hugepages.md
+++ b/docs/tasks/manage-hugepages/scheduling-hugepages.md
@@ -5,10 +5,10 @@ title: Manage HugePages
---

{% capture overview %}
-{% include feature-state-alpha.md %}
+{% include feature-state-beta.md %}

Kubernetes supports the allocation and consumption of pre-allocated huge pages
-by applications in a Pod as an **alpha** feature. This page describes how users
+by applications in a Pod as a **beta** feature. This page describes how users
can consume huge pages and the current limitations.

{% endcapture %}

@@ -18,8 +18,6 @@ can consume huge pages and the current limitations.

1. Kubernetes nodes must pre-allocate huge pages in order for the node to
   report its huge page capacity. A node may only pre-allocate huge pages for a
   single size.
-1. A special **alpha** feature gate `HugePages` has to be set to true across the
-   system: `--feature-gates=HugePages=true`.

The nodes will automatically discover and report all huge page resources as a
schedulable resource.

diff --git a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 8dc9a4cc814d8..ce03790e163ac 100644
--- a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -26,7 +26,9 @@ heapster monitoring will be turned-on by default).

To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster
and kubectl at version 1.6 or later. Furthermore, in order to make use of custom metrics, your cluster
-must be able to communicate with the API server providing the custom metrics API.
+must be able to communicate with the API server providing the custom metrics API. Finally, to use metrics
+not related to any Kubernetes object, you must have a Kubernetes cluster at version 1.10 or later, and
+you must be able to communicate with the API server that provides the external metrics API.
See the [Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics) for more details.

## Step One: Run & expose php-apache server

@@ -287,6 +289,37 @@ Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was con
50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route
Ingress were serving a total of 10000 requests per second.

+### Autoscaling on metrics not related to Kubernetes objects
+
+Applications running on Kubernetes may need to autoscale based on metrics that don't have an obvious
+relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with
+no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case
+with *external metrics*.
+
+Using external metrics requires a certain level of knowledge of your monitoring system, and it requires a cluster
+monitoring setup similar to the one required for using custom metrics. With external metrics, you can autoscale
+based on any metric available in your monitoring system by providing a `metricName` field in your
+HorizontalPodAutoscaler manifest. Additionally, you can use a `metricSelector` field to limit which
+metrics' time series you want to use for autoscaling. If multiple time series are matched by `metricSelector`,
+the sum of their values is used by the HorizontalPodAutoscaler.
+
+For example, if your application processes tasks from a hosted queue service, you could add the following
+section to your HorizontalPodAutoscaler manifest to specify that you need one worker per 30 outstanding tasks.
+
+```yaml
+- type: External
+  external:
+    metricName: queue_messages_ready
+    metricSelector:
+      matchLabels:
+        queue: worker_tasks
+    targetAverageValue: 30
+```
+
+If your metric describes work or resources that can be divided between autoscaled pods, the `targetAverageValue`
+field describes how much of that work each pod can handle. Instead of using the `targetAverageValue` field, you
+could use the `targetValue` field to define a desired value for your external metric.
+
## Appendix: Horizontal Pod Autoscaler Status Conditions

When using the `autoscaling/v2beta1` form of the HorizontalPodAutoscaler, you will be able to see

diff --git a/docs/tasks/run-application/horizontal-pod-autoscale.md b/docs/tasks/run-application/horizontal-pod-autoscale.md
index 2477e48fe1df7..b148ca4665d9d 100644
--- a/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -160,13 +160,15 @@ To use custom metrics with your Horizontal Pod Autoscaler, you must set the nece

* [Enable the API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) if you
  have not already done so.

-* Register your resource metrics API and your
-custom metrics API with the API aggregation layer. Both of these API servers must be running *on* your cluster.
+* Register your resource metrics API, your
+custom metrics API, and, optionally, your external metrics API with the API aggregation layer. All of these API servers must be running *on* your cluster.

  * *Resource Metrics API*: You can use Heapster's implementation of the resource metrics API, by running
    Heapster with its `--api-server` flag set to true.

  * *Custom Metrics API*: This must be provided by a separate component.
To get started with boilerplate code, see the
    [kubernetes-incubator/custom-metrics-apiserver](https://github.com/kubernetes-incubator/custom-metrics-apiserver)
    and the [k8s.io/metrics](https://github.com/kubernetes/metrics) repositories.

+  * *External Metrics API*: Starting from Kubernetes 1.10, you can use this API if you need to autoscale on
+    metrics not related to any Kubernetes object. As with the *Custom Metrics API*, this must be provided by
+    a separate component.
+
* Set the appropriate flags for kube-controller-manager:

  * `--horizontal-pod-autoscaler-use-rest-clients` should be true.

diff --git a/docs/tutorials/clusters/apparmor.md b/docs/tutorials/clusters/apparmor.md
index 03ce680799a99..4f8ecca51989d 100644
--- a/docs/tutorials/clusters/apparmor.md
+++ b/docs/tutorials/clusters/apparmor.md
@@ -317,14 +317,13 @@ node with the required profile.

### Restricting profiles with the PodSecurityPolicy

If the PodSecurityPolicy extension is enabled, cluster-wide AppArmor restrictions can be applied. To
-enable the PodSecurityPolicy, two flags must be set on the `apiserver`:
+enable the PodSecurityPolicy, the following flag must be set on the `apiserver`:

```
---admission-control=PodSecurityPolicy[,others...]
---runtime-config=extensions/v1beta1/podsecuritypolicy[,others...]
+--enable-admission-plugins=PodSecurityPolicy[,others...]
```

-With the extension enabled, the AppArmor options can be specified as annotations on the PodSecurityPolicy:
+The AppArmor options can be specified as annotations on the PodSecurityPolicy:

```yaml
apparmor.security.beta.kubernetes.io/defaultProfileName: <profile_ref>
```

diff --git a/test/examples_test.go b/test/examples_test.go
index 9075d9b7137bb..5faf1d95c9d77 100644
--- a/test/examples_test.go
+++ b/test/examples_test.go
@@ -414,6 +414,7 @@ func TestExampleObjectSchemas(t *testing.T) {
			"security-context-2":       {&api.Pod{}},
			"security-context-3":       {&api.Pod{}},
			"security-context-4":       {&api.Pod{}},
+			"share-process-namespace":  {&api.Pod{}},
			"task-pv-claim":            {&api.PersistentVolumeClaim{}},
			"task-pv-pod":              {&api.Pod{}},
			"task-pv-volume":           {&api.PersistentVolume{}},
@@ -589,6 +590,8 @@ func TestExampleObjectSchemas(t *testing.T) {
	capabilities.SetForTests(capabilities.Capabilities{
		AllowPrivileged: true,
	})
+	// PodShareProcessNamespace needed for example share-process-namespace.yaml
+	utilfeature.DefaultFeatureGate.Set("PodShareProcessNamespace=true")

	for path, expected := range cases {
		tested := 0