Patch PR #4140 (#4215)
* Patch PR #4140

* fix link and typos
chenopis authored and Jessica Yao committed Sep 22, 2017
1 parent 9f44fe1 commit d56692d
Showing 3 changed files with 16 additions and 18 deletions.
19 changes: 9 additions & 10 deletions docs/concepts/workloads/pods/disruptions.md
@@ -8,13 +8,12 @@ redirect_from:
- "/docs/admin/disruptions/"
- "/docs/admin/disruptions.html"
- "/docs/tasks/configure-pod-container/configure-pod-disruption-budget/"
- "/docs/tasks/configure-pod-container/configure-pod-disruption-budget/"
- "/docs/tasks/administer-cluster/configure-pod-disruption-budget/"
---

{% capture overview %}
This guide is for application owners who want to build
highly availabile applications, and thus need to understand
highly available applications, and thus need to understand
what types of Disruptions can happen to Pods.

It is also for Cluster Administrators who want to perform automated
@@ -32,11 +31,11 @@ Pods do not disappear until someone (a person or a controller) destroys them, or
there is an unavoidable hardware or system software error.

We call these unavoidable cases *involuntary disruptions* to
an applicaton. Examples are:
an application. Examples are:

- a hardware failure of the physical machine backing the node
- cluster administrator deletes VM (instance) by mistake
- cloud provider or hypervisor failure makes VM dissappear
- cloud provider or hypervisor failure makes VM disappear
- a kernel panic
- if the node to disappears from the cluster due to cluster network partition
- eviction of a pod due to the node being [out-of-resources](/docs/tasks/administer-cluster/out-of-resource.md).
@@ -83,11 +82,11 @@ or across zones (if using a
[multi-zone cluster](/docs/admin/multiple-zones).)

The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are
no voluntary disruptions at all. However, your cluster admnistrator or hosting provider
no voluntary disruptions at all. However, your cluster administrator or hosting provider
may run some additional services which cause voluntary disruptions. For example,
rolling out node software updates can cause voluntary updates. Also, some implementations
of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes.
You cluster adminstrator or hosting provider should have documented what level of voluntary
You cluster administrator or hosting provider should have documented what level of voluntary
disruptions, if any, to expect.

Kubernetes offers features to help run highly available applications at the same
@@ -114,7 +113,7 @@ When a cluster administrator wants to drain a node
they use the `kubectl drain` command. That tool tries to evict all
the pods on the machine. The eviction request may be temporarily rejected,
and the tool periodically retries all failed requests until all pods
are terminated, or until a configureable timeout is reached.
are terminated, or until a configurable timeout is reached.

A PDB specifies the number of replicas that an application can tolerate having, relative to how
many it is intended to have. For example, a Deployment which has a `spec.replicas: 5` is
@@ -144,7 +143,7 @@ When a pod is evicted using the eviction API, it is gracefully terminated (see
Consider a cluster with 3 nodes, `node-1` through `node-3`.
The cluster is running several applications. One of them has 3 replicas initially called
`pod-a`, `pod-b`, and `pod-c`. Another, unrelated pod without a PDB, called `pod-x`, is also shown.
Initially, the pods are layed out as follows:
Initially, the pods are laid out as follows:

| node-1 | node-2 | node-3 |
|:--------------------:|:-------------------:|:------------------:|
@@ -231,15 +230,15 @@ can happen, according to:
## Separating Cluster Owner and Application Owner Roles

Often, it is useful to think of the Cluster Manager
and Application Owner as separate roles with limited knowlege
and Application Owner as separate roles with limited knowledge
of each other. This separation of responsibilities
may make sense in these scenarios:

- when there are many application teams sharing a Kubernetes cluster, and
there is natural specialization of roles
- when third-party tools or services are used to automate cluster management

Pod Disrutption Budgets support this separation of roles by providing an
Pod Disruption Budgets support this separation of roles by providing an
interface between the roles.

If you do not have such a separation of responsibilities in your organization,
2 changes: 1 addition & 1 deletion docs/tasks/administer-cluster/safely-drain-node.md
@@ -118,7 +118,7 @@ You can attempt an eviction using `curl`:
```
$ curl -v -H 'Content-type: application/json' http://127.0.0.1:8080/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json
```
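
For reference, the `eviction.json` file posted above is an `Eviction` object for the target pod; the full page defines it just above this command, outside the lines shown in this hunk. A minimal sketch of the equivalent manifest, written in YAML here for consistency with the other examples (the file itself is the JSON form):

```yaml
apiVersion: policy/v1beta1   # the Eviction subresource was served from policy/v1beta1 at the time of this commit
kind: Eviction
metadata:
  name: quux                 # matches the pod name in the URL above
  namespace: default         # matches the namespace in the URL above
```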

The API can respond in one of three ways.
The API can respond in one of three ways:

- If the eviction is granted, then the pod is deleted just as if you had sent
a `DELETE` request to the pod's URL and you get back `200 OK`.
13 changes: 6 additions & 7 deletions docs/tasks/run-application/configure-pdb.md
@@ -16,7 +16,7 @@ nodes.
high availability.
* You should know how to deploy [Replicated Stateless Applications](/docs/tasks/run-application/run-stateless-application-deployment.md)
and/or [Replicated Stateful Applications](/docs/tasks/run-application/run-replicated-stateful-application.md).
* You should have read about the [Pod Disruption Budget concept](/docs/tasks/run-application/configure-pdb.md).
* You should have read about [Pod Disruptions](/docs/concepts/workloads/pods/disruptions/).
* You should confirm with your cluster owner or service provider that they respect
Pod Disruption Budgets.
{% endcapture %}
@@ -109,7 +109,7 @@ of the desired replicas are unhealthy.
In typical usage, a single budget would be used for a collection of pods managed by
a controller—for example, the pods in a single ReplicaSet or StatefulSet.
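
As a concrete illustration of such a budget, here is a minimal sketch of a PDB covering the pods of a single Deployment or ReplicaSet; the name and the `app: nginx` label are hypothetical and stand in for whatever labels that controller's pod template uses:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb        # hypothetical name, for illustration only
spec:
  maxUnavailable: 1      # allow at most one matching pod to be down due to voluntary disruptions
  selector:
    matchLabels:
      app: nginx         # hypothetical label shared by the controller's pods
```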

Note that a disruption budget does not truly guarantee that the specified
**Note:** A disruption budget does not truly guarantee that the specified
number/percentage of pods will always be up. For example, a node that hosts a
pod from the collection may fail when the collection is at the minimum size
specified in the budget, thus bringing the number of available pods from the
@@ -123,7 +123,7 @@ semantics of `PodDisruptionBudget`.
You can find examples of pod disruption budgets defined below. They match pods with the label
`app: zookeeper`.

Example PDB Using maxUnavailable:
Example PDB Using minAvailable:

```yaml
apiVersion: policy/v1beta1
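# The diff view truncates this manifest after the line above. A sketch of the
# remaining fields, inferred from the `zk-pdb`, `minAvailable: 2`, and
# `app: zookeeper` values referenced elsewhere on this page (these lines are
# not part of the change being shown):
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper
```
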
@@ -182,9 +182,8 @@ NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE

```
zk-pdb 2 1 7s
```

The non-zero value for `ALLOWED-DISRUPTIONS` means that the disruption controller
has seen the PDB and counted the matching PDB, and updated the status
of the PDB.
The non-zero value for `ALLOWED-DISRUPTIONS` means that the disruption controller has seen the pods,
counted the matching pods, and update the status of the PDB.

You can get more information about the status of a PDB with this command:

@@ -216,7 +215,7 @@ You can use a PDB with pods controlled by another type of controller, by an
- only `.spec.minAvailable` can be used, not `.spec.maxUnavailable`.
- only an integer value can be used with `.spec.minAvailable`, not a percentage.

You can use a selector which selects a subset or superset of the pods beloning to a built-in
You can use a selector which selects a subset or superset of the pods belonging to a built-in
controller. However, when there are multiple PDBs in a namespace, you must be careful not
to create PDBs whose selectors overlap.
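
To make the restrictions above concrete, here is a minimal sketch of a PDB for pods managed by an arbitrary controller or operator; the name and the `app: my-custom-app` label are hypothetical, not taken from this page:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: custom-app-pdb   # hypothetical name, for illustration only
spec:
  minAvailable: 3        # must be an integer for arbitrary controllers; percentages and maxUnavailable are not supported
  selector:
    matchLabels:
      app: my-custom-app # hypothetical label; keep it from overlapping other PDBs' selectors in the namespace
```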

