
[Do Not Merge] Release 1.12 #10292

Merged
merged 69 commits into from Sep 27, 2018

Commits
1ea8f96
Update docs for fields allowed at root of CRD schema (#9973)
nikhita Aug 21, 2018
9f823c1
add plugin docs and examples (#10053)
juanvallejo Aug 28, 2018
21a1beb
merge master
zparnold Sep 4, 2018
7baa17d
Merge branch 'release-1.12' of github.com:/kubernetes/website into re…
zparnold Sep 4, 2018
183d214
docs update to promote TaintNodesByCondition to beta (#9626)
Huang-Wei Sep 4, 2018
9d84a49
HPA Specificity Improvements (#8757)
DirectXMan12 Sep 4, 2018
28eb1cf
adjust docs for pod ready++ (#10049)
freehan Sep 6, 2018
55f8e53
Merge remote-tracking branch 'origin' into release-1.12
zparnold Sep 9, 2018
d5b92b4
Remove --cadvisor-port - has been deprecated since v1.10 (#10023)
dims Sep 10, 2018
68de2e3
Add Documentation for Snapshot Feature (#9948)
xing-yang Sep 10, 2018
d494a44
Add dry-run to api-concepts (#10033)
Sep 10, 2018
d6e30e1
kubeadm-init: Update the offline support section (#10062)
rosti Sep 10, 2018
dc9d71a
Say bye to `DynamicProvisioningScheduling` (#10157)
tengqm Sep 10, 2018
fe96f3b
Update ResourceQuota per PriorityClass state for 1.12 (#10229)
bsalamat Sep 10, 2018
b3bc49e
TokenRequest and TokenRequestProjection now beta (#10161)
tengqm Sep 10, 2018
66eaff0
Change feature state for kms provider to beta. (#10230)
immutableT Sep 10, 2018
698e93b
coredns default (#10200)
chrisohaver Sep 10, 2018
c98cd68
Promote ShareProcessNamespace to beta in docs (#9996)
verb Sep 10, 2018
462817a
Add CoreDNS details to DNS Debug docs (#10201)
chrisohaver Sep 10, 2018
e1e6555
Update docs with topology aware dynamic provisioning (#9939)
msau42 Sep 11, 2018
93c0bb9
HPA Algorithm Information Improvements (#9780)
DirectXMan12 Sep 12, 2018
2673795
Audit 1.12 doc (#9953)
CaoShuFeng Sep 12, 2018
9a7a295
MountPropagation is now GA (#10090)
jsafrane Sep 12, 2018
57f4b2d
RuntimeClass documentation (#10102)
tallclair Sep 12, 2018
9d3671d
Add documentation for Scheduler performance tuning (#10048)
bsalamat Sep 12, 2018
dd24ece
TTL controller for cleaning up finished resources (#10064)
janetkuo Sep 12, 2018
19d8375
Bump quota configuration api version (#10217)
vikaschoudhary16 Sep 12, 2018
90f7df6
Incremental update from master (#10278)
zparnold Sep 12, 2018
0e2ba47
docs update to promote ScheduleDaemonSetPods to beta (#9923)
Huang-Wei Sep 13, 2018
8ba585e
Dynamic volume limit updates for 1.12 (#10211)
gnufied Sep 13, 2018
ae83bb4
Add "MayRunAs" value among other GroupStrategies (#9888)
stlaz Sep 13, 2018
e7319ee
Add CoreDNS details to the customize DNS doc (#10228)
rajansandeep Sep 13, 2018
3286b00
Fix secrets docs in 1.12 branch (#10056)
wojtek-t Sep 13, 2018
4d7691e
merge origin master
zparnold Sep 17, 2018
555bbbb
Revert CoreDNS Docs (#10319)
zparnold Sep 17, 2018
eea1385
Add CRI installation instructions page
bart0sh Sep 14, 2018
b3529ee
Merge pull request #10299 from bart0sh/PR0028-kubeadm-CRI-documentation
ryanmcginnis Sep 19, 2018
5ab6ae0
kubeadm: update API types documentation for 1.12 (#10283)
neolit123 Sep 19, 2018
25fe403
TokenRequest feature documentation (#10295)
mikedanese Sep 19, 2018
0b786ff
AdvancedAuditing is now GA (#10156)
tengqm Sep 19, 2018
2965ea0
update runtime-class.md (#10332)
tianshapjq Sep 20, 2018
46328a4
Document cross-authorizer permissions for creating RBAC roles (#10015)
liggitt Sep 20, 2018
70b991f
kubeadm: update authored content for 1.12 (reference docs and cluster…
neolit123 Sep 20, 2018
e7d47c4
add AllowedProcMountTypes and ProcMountType to docs (#9911)
jessfraz Sep 20, 2018
f84f77c
kubeadm: add new command line reference (#10306)
neolit123 Sep 20, 2018
1cc5130
Documenting SCTP support in Kubernetes (#10279)
janosi Sep 20, 2018
7dbf37f
TLS Bootstrap and Server Cert Rotation feature documentation (#10232)
mikedanese Sep 20, 2018
035fc88
Add clarifications for volume snapshots (#10296)
xing-yang Sep 20, 2018
ce69248
Update kubadm ha installation for 1.12 (#10264)
chuckha Sep 20, 2018
dd6a61f
Document how to run in-tree cloud providers with kubeadm (#10357)
dims Sep 20, 2018
865b49a
kubeadm reference doc for release 1.12 (#10359)
tengqm Sep 21, 2018
5d6da7e
Revert "Revert "Add CoreDNS details to DNS Debug docs (#10201)""
zparnold Sep 21, 2018
045cbb1
Revert "Revert "Add CoreDNS details to the customize DNS doc (#10228)""
zparnold Sep 21, 2018
9520119
Revert "Revert "coredns default (#10200)""
zparnold Sep 21, 2018
a3c7224
add missing instruction for ha guide (#10374)
chuckha Sep 24, 2018
0d8929d
kubeadm - Ha upgrade updates (#10340)
detiber Sep 24, 2018
0b80570
add runasgroup in psp (#10076)
krmayankk Sep 25, 2018
e69adeb
update KubeletPluginsWatcher feature gate (#10205)
Sep 25, 2018
05e9e09
merge upstream master
zparnold Sep 25, 2018
9f19d20
Merge branch 'release-1.12' of github.com:/kubernetes/website into re…
zparnold Sep 25, 2018
490ddd8
generated 1.12 docs
zparnold Sep 25, 2018
bb34f24
Building Multi-arch images with Manifests (#10379)
dims Sep 25, 2018
c1f4b25
Upgrade docs for v1.12 (#10344)
liztio Sep 26, 2018
70bca00
generated assets and docs
zparnold Sep 27, 2018
68e189f
Merge branch 'release-1.12' of github.com:/kubernetes/website into re…
zparnold Sep 27, 2018
27d9161
remove 1.7
zparnold Sep 27, 2018
140dec0
update 1.12
zparnold Sep 27, 2018
36fbc62
Merge branch 'master' into release-1.12
zparnold Sep 27, 2018
29dec80
update plugin documentation under docs>tasks>extend-kubectl (#10259)
juanvallejo Sep 27, 2018
27 changes: 14 additions & 13 deletions config.toml
@@ -63,10 +63,10 @@ time_format_blog = "Monday, January 02, 2006"
description = "Production-Grade Container Orchestration"
showedit = true

-latest = "v1.11"
+latest = "v1.12"

-fullversion = "v1.11.0"
-version = "v1.11"
+fullversion = "v1.12.0"
+version = "v1.12"
githubbranch = "master"
docsbranch = "master"
deprecated = false
@@ -76,10 +76,10 @@ githubWebsiteRepo = "github.com/kubernetes/website"
githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website"

[[params.versions]]
-fullversion = "v1.11.0"
-version = "v1.11"
-githubbranch = "v1.11.0"
-docsbranch = "release-1.11"
+fullversion = "v1.12.0"
+version = "v1.12"
+githubbranch = "v1.12.0"
+docsbranch = "release-1.12"
url = "https://kubernetes.io"

[params.pushAssets]
@@ -93,6 +93,13 @@ js = [
"script"
]

+[[params.versions]]
+fullversion = "v1.11.3"
+version = "v1.11"
+githubbranch = "v1.11.3"
+docsbranch = "release-1.11"
+url = "https://v1-11.docs.kubernetes.io"

[[params.versions]]
fullversion = "v1.10.3"
version = "v1.10"
@@ -114,12 +121,6 @@ githubbranch = "v1.8.4"
docsbranch = "release-1.8"
url = "https://v1-8.docs.kubernetes.io"

-[[params.versions]]
-fullversion = "v1.7.6"
-version = "v1.7"
-githubbranch = "v1.7.6"
-docsbranch = "release-1.7"
-url = "https://v1-7.docs.kubernetes.io"

# Language definitions.

6 changes: 2 additions & 4 deletions content/en/docs/concepts/architecture/nodes.md
@@ -76,11 +76,9 @@ the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce fr
permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from
Kubernetes causes all the Pod objects running on the node to be deleted from the apiserver, and frees up their names.

-Version 1.8 introduced an alpha feature that automatically creates
+In version 1.12, the `TaintNodesByCondition` feature is promoted to beta, so the node lifecycle controller automatically creates
[taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions.
-To enable this behavior, pass an additional feature gate flag `--feature-gates=...,TaintNodesByCondition=true`
-to the API server, controller manager, and scheduler.
-When `TaintNodesByCondition` is enabled, the scheduler ignores conditions when considering a Node; instead
+Similarly, the scheduler ignores conditions when considering a Node; instead
it looks at the Node's taints and a Pod's tolerations.

Now users can choose between the old scheduling model and a new, more flexible scheduling model.
40 changes: 40 additions & 0 deletions content/en/docs/concepts/cluster-administration/cloud-providers.md
@@ -9,7 +9,47 @@ This page explains how to manage Kubernetes running on a specific
cloud provider.
{{% /capture %}}

{{< toc >}}

{{% capture body %}}
## kubeadm
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating Kubernetes clusters.
kubeadm has configuration options to specify configuration information for cloud providers. For example, a typical
in-tree cloud provider can be configured using kubeadm as shown below:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha3
kubernetesVersion: v1.12.0
apiServerExtraArgs:
  cloud-provider: "openstack"
  cloud-config: "/etc/kubernetes/cloud.conf"
apiServerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
controllerManagerExtraArgs:
  cloud-provider: "openstack"
  cloud-config: "/etc/kubernetes/cloud.conf"
controllerManagerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
```
The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines
for the [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and the
[kubelet](/docs/admin/kubelet/). The contents of the file specified in `--cloud-config` for each provider are documented below as well.
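For instance, a minimal `cloud.conf` for the OpenStack provider might look like the sketch below. The values are illustrative placeholders only; consult the provider sections below for the authoritative set of keys.

```
[Global]
auth-url=https://keystone.example.com:5000/v3
username=admin
password=mypassword
tenant-id=c869168a828847f39f7f06edd7305637
domain-name=default
```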

For all external cloud providers, please follow the instructions on the individual repositories.

## AWS
This section describes all the possible configurations which can
be used when running Kubernetes on Amazon Web Services.
5 changes: 3 additions & 2 deletions content/en/docs/concepts/cluster-administration/proxies.md
@@ -36,7 +36,7 @@ There are several different proxies you may encounter when using Kubernetes:
1. The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips):

- runs on each node
-- proxies UDP and TCP
+- proxies UDP, TCP and SCTP
- does not understand HTTP
- provides load balancing
- is just used to reach services
@@ -51,7 +51,8 @@

- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
- are created automatically when the Kubernetes service has type `LoadBalancer`
-- use UDP/TCP only
+- usually support UDP/TCP only
+- SCTP support is up to the load balancer implementation of the cloud provider
- implementation varies by cloud provider.

Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
@@ -42,7 +42,7 @@ other pods to be evicted/not get scheduled. To resolve this issue,
[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) is
augmented to support Pod priority. An admin can create ResourceQuota for users
at specific priority levels, preventing them from creating pods at high
-priorities. However, this feature is in alpha as of Kubernetes 1.11.
+priorities. This feature is in beta since Kubernetes 1.12.
{{< /warning >}}

{{% /capture %}}
112 changes: 112 additions & 0 deletions content/en/docs/concepts/configuration/scheduler-perf-tuning.md
@@ -0,0 +1,112 @@
---
reviewers:
- bsalamat
title: Scheduler Performance Tuning
content_template: templates/concept
weight: 70
---

{{% capture overview %}}

{{< feature-state for_k8s_version="1.12" >}}

Kube-scheduler is the Kubernetes default scheduler. It is responsible for
placement of Pods on Nodes in a cluster. Nodes in a cluster that meet the
scheduling requirements of a Pod are called "feasible" Nodes for the Pod. The
scheduler finds feasible Nodes for a Pod and then runs a set of functions to
score the feasible Nodes and picks a Node with the highest score among the
feasible ones to run the Pod. The scheduler then notifies the API server about this
decision in a process called "Binding".

{{% /capture %}}

{{% capture body %}}

## Percentage of Nodes to Score

Before Kubernetes 1.12, Kube-scheduler used to check the feasibility of all the
nodes in a cluster and then scored the feasible ones. Kubernetes 1.12 has a new
feature that allows the scheduler to stop looking for more feasible nodes once
it finds a certain number of them. This improves the scheduler's performance in
large clusters. The number is specified as a percentage of the cluster size and
is controlled by a configuration option called `percentageOfNodesToScore`. The
value must be between 1 and 100; other values are treated as 100%. The default
value of this option is 50%. A cluster administrator can change it in the
scheduler configuration, though it is usually not necessary to do so.

```yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider

...

percentageOfNodesToScore: 50
```
{{< note >}}
**Note:** In clusters with fewer than 50 feasible nodes, the scheduler still
checks all the nodes, simply because there are not enough feasible nodes to
stop the scheduler's search early.
{{< /note >}}

**To disable this feature**, set `percentageOfNodesToScore` to 100.

### Tuning percentageOfNodesToScore

`percentageOfNodesToScore` must be a value between 1 and 100
with the default value of 50. There is also a hardcoded minimum value of 50
nodes which is applied internally. The scheduler tries to find at
least 50 nodes regardless of the value of `percentageOfNodesToScore`. This means
that changing this option to lower values in clusters with several hundred nodes
will not have much impact on the number of feasible nodes that the scheduler
tries to find. This is intentional, as this option is unlikely to improve
performance noticeably in smaller clusters. In large clusters with over 1000
nodes, setting this value to lower numbers may show a noticeable performance
improvement.

Keep in mind that when a smaller number of nodes in a cluster are checked for
feasibility, some nodes are not sent to be scored for a given Pod. As a result,
a Node which could possibly score a higher value for running the given Pod
might not even be passed to the scoring phase. This would result in a less than
ideal placement of the Pod. For
this reason, the value should not be set to very low percentages. A general rule
of thumb is to never set the value to anything lower than 30. Lower values
should be used only when the scheduler's throughput is critical for your
application and the score of nodes is not important. In other words, you prefer
to run the Pod on any Node as long as it is feasible.

It is not recommended to lower this value from its default if your cluster has
only several hundred Nodes. It is unlikely to improve the scheduler's
performance significantly.
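The stopping rule described above can be sketched as a small function. This is a simplified illustration of the behavior, not the actual kube-scheduler code:

```python
def num_feasible_nodes_to_find(num_all_nodes, percentage_of_nodes_to_score=50):
    """Sketch: how many feasible nodes the scheduler searches for
    before stopping early (illustrative, not kube-scheduler source)."""
    MIN_FEASIBLE_NODES = 50  # the hardcoded minimum described above
    if percentage_of_nodes_to_score < 1 or percentage_of_nodes_to_score > 100:
        percentage_of_nodes_to_score = 100  # out-of-range values mean "check all"
    num = num_all_nodes * percentage_of_nodes_to_score // 100
    if num_all_nodes <= MIN_FEASIBLE_NODES:
        # Small clusters: every node is checked anyway.
        return num_all_nodes
    return max(num, MIN_FEASIBLE_NODES)

print(num_feasible_nodes_to_find(5000, 30))  # 30% of 5000 nodes
print(num_feasible_nodes_to_find(200, 10))   # clamped up to the 50-node minimum
print(num_feasible_nodes_to_find(40))        # small cluster: all 40 checked
```

This shows why lowering the percentage barely matters below a few hundred nodes: the 50-node floor dominates.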

### How the scheduler iterates over Nodes

This section is intended for those who want to understand the internal details
of this feature.

In order to give all the Nodes in a cluster a fair chance of being considered
for running Pods, the scheduler iterates over the nodes in a round robin
fashion. You can imagine that Nodes are in an array. The scheduler starts from
the start of the array and checks feasibility of the nodes until it finds enough
Nodes as specified by `percentageOfNodesToScore`. For the next Pod, the
scheduler continues from the point in the Node array that it stopped at when checking
feasibility of Nodes for the previous Pod.

If Nodes are in multiple zones, the scheduler iterates over Nodes in various
zones to ensure that Nodes from different zones are considered in the
feasibility checks. As an example, consider six nodes in two zones:

```
Zone 1: Node 1, Node 2, Node 3, Node 4
Zone 2: Node 5, Node 6
```

The Scheduler evaluates feasibility of the nodes in this order:

```
Node 1, Node 5, Node 2, Node 6, Node 3, Node 4
```

After going over all the Nodes, it goes back to Node 1.
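The zone-interleaved ordering above can be reproduced with a short sketch (illustrative only; the helper function and zone layout are not part of Kubernetes):

```python
def zone_interleaved_order(zones):
    """Yield nodes round-robin across zones, as in the example above."""
    iterators = [iter(zone) for zone in zones]
    order = []
    while iterators:
        for it in list(iterators):  # iterate over a copy so we can remove
            try:
                order.append(next(it))
            except StopIteration:
                iterators.remove(it)
    return order

zones = [
    ["Node 1", "Node 2", "Node 3", "Node 4"],  # Zone 1
    ["Node 5", "Node 6"],                      # Zone 2
]
print(zone_interleaved_order(zones))
# → ['Node 1', 'Node 5', 'Node 2', 'Node 6', 'Node 3', 'Node 4']
```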

{{% /capture %}}
12 changes: 9 additions & 3 deletions content/en/docs/concepts/configuration/secret.md
@@ -343,9 +343,15 @@ files.

When a secret already being consumed in a volume is updated, projected keys are eventually updated as well.
The kubelet checks whether the mounted secret is fresh on every periodic sync.
-However, it is using its local ttl-based cache for getting the current value of the secret.
-As a result, the total delay from the moment when the secret is updated to the moment when new keys are
-projected to the pod can be as long as kubelet sync period + ttl of secrets cache in kubelet.
+However, it is using its local cache for getting the current value of the Secret.
+The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in the
+[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go).
+It can be either propagated via watch (default), ttl-based, or simply redirecting
+all requests directly to the kube-apiserver.
+As a result, the total delay from the moment when the Secret is updated to the moment
+when new keys are projected to the Pod can be as long as the kubelet sync period + cache
+propagation delay, where the cache propagation delay depends on the chosen cache type
+(it equals the watch propagation delay, the TTL of the cache, or zero, correspondingly).
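The worst-case delay can be illustrated with a toy calculation. The strategy names mirror the change-detection strategies described above, but the sync period, TTL, and watch-delay numbers here are made-up assumptions, not kubelet defaults:

```python
def worst_case_secret_delay(sync_period_s, strategy, cache_ttl_s=60, watch_delay_s=0.5):
    """Worst-case seconds from a Secret update until new keys reach the Pod.
    Illustrative only; the default numbers are assumptions."""
    cache_delay = {
        "Watch": watch_delay_s,  # watch propagation delay (default strategy)
        "Cache": cache_ttl_s,    # ttl-based cache
        "Get": 0,                # every request goes straight to the kube-apiserver
    }[strategy]
    return sync_period_s + cache_delay

print(worst_case_secret_delay(60, "Cache"))  # sync period + cache TTL
print(worst_case_secret_delay(60, "Get"))    # sync period only
```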

{{< note >}}
**Note:** A container using a Secret as a
@@ -279,9 +279,10 @@ which matches the behavior when this feature is disabled.

## Taint Nodes by Condition

-Version 1.8 introduces an alpha feature that causes the node controller to create taints corresponding to
-Node conditions. When this feature is enabled (you can do this by including `TaintNodesByCondition=true` in the `--feature-gates` command line flag to the scheduler, such as
-`--feature-gates=FooBar=true,TaintNodesByCondition=true`), the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
+In version 1.12, the `TaintNodesByCondition` feature is promoted to beta, so the node lifecycle controller automatically creates taints corresponding to
+Node conditions.
+Similarly, the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
+Note that `TaintNodesByCondition` only taints nodes with `NoSchedule` effect. The `NoExecute` effect is controlled by `TaintBasedEviction`, which is an alpha feature and disabled by default.
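For example, a Pod that should remain schedulable despite a condition taint could add a toleration like the sketch below. The key shown is one of the `node.kubernetes.io/*` condition taint keys; adjust it to whichever conditions your Pods need to tolerate:

```yaml
tolerations:
- key: "node.kubernetes.io/memory-pressure"
  operator: "Exists"
  effect: "NoSchedule"
```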

Starting in Kubernetes 1.8, the DaemonSet controller automatically adds the
following `NoSchedule` tolerations to all daemons, to prevent DaemonSets from
20 changes: 20 additions & 0 deletions content/en/docs/concepts/containers/images.md
@@ -32,6 +32,26 @@ you can do one of the following:

Note that you should avoid using `:latest` tag, see [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images) for more information.

## Building Multi-architecture Images with Manifests

The Docker CLI now supports the `docker manifest` command with subcommands like `create`, `annotate` and `push`. These commands can be used to build and push manifests. You can use `docker manifest inspect` to view a manifest.

See the Docker documentation for details:
https://docs.docker.com/edge/engine/reference/commandline/manifest/

See examples of how we use this in our build harness:
https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos=

These commands are implemented purely in the Docker CLI. You will need to either edit `$HOME/.docker/config.json` and set the `experimental` key to `enabled`, or simply set the `DOCKER_CLI_EXPERIMENTAL` environment variable to `enabled` when you call the CLI commands.

{{< note >}}
**Note:** Please use Docker *18.06 or above*; versions below that either have bugs or do not support the experimental command line option. For example, https://github.com/docker/cli/issues/1135 causes problems under containerd.
{{< /note >}}

If you run into trouble with uploading stale manifests, just clean up the older manifests in `$HOME/.docker/manifests` to start fresh.

For Kubernetes, we have typically used images with the suffix `-$(ARCH)`. For backward compatibility, please also generate the older, suffixed images. The idea is to generate, say, a `pause` image whose manifest covers all architectures, and, say, `pause-amd64`, which stays backwards compatible with older configurations or YAML files that may have hard-coded the suffixed image names.
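Putting the pieces together, a build might look like the following transcript. The image names and tags are placeholders, not real Kubernetes images, and the per-architecture images are assumed to already exist locally:

```shell
# Enable the experimental CLI commands for this shell session.
export DOCKER_CLI_EXPERIMENTAL=enabled

# Push the per-architecture images first (placeholder names).
docker push example.com/myapp-amd64:1.0
docker push example.com/myapp-arm64:1.0

# Create a manifest list referencing them, annotate the non-default
# architecture, and push the result.
docker manifest create example.com/myapp:1.0 \
  example.com/myapp-amd64:1.0 \
  example.com/myapp-arm64:1.0
docker manifest annotate example.com/myapp:1.0 \
  example.com/myapp-arm64:1.0 --os linux --arch arm64
docker manifest push example.com/myapp:1.0

# Inspect the result.
docker manifest inspect example.com/myapp:1.0
```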

## Using a Private Registry

Private registries may require keys to read images from them.