
WIP: Add v1alpha1 to v1alpha2 config migration doc #9124

Closed
luxas wants to merge 1 commit

Conversation

luxas
Member

@luxas luxas commented Jun 18, 2018

Part of kubernetes/kubeadm#849
cc @kubernetes/sig-cluster-lifecycle-pr-reviews @Bradamant3 @neolit123

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 18, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: chenopis

Assign the PR to them by writing /assign @chenopis in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@neolit123
Member

/assign @Bradamant3

@k8sio-netlify-preview-bot
Collaborator

Deploy preview for kubernetes-io-master-staging ready!

Built with commit dd82d5d

https://deploy-preview-9124--kubernetes-io-master-staging.netlify.com

1 similar comment

- The `v1alpha1` and `v1alpha2` APIs **aren't marked stable** yet, so their semantics will (possibly significantly) change before reaching `v1beta1`. Between `v1beta1` and `v1` the delta is expected to be small/non-existent.
- **All your `v1alpha1` configuration files can be read by kubeadm v1.11 and are automatically upgraded to `v1alpha2` internally**. This means you don't _technically_ need to read this document and upgrade each field separately, kubeadm takes care of this automatically under the hood.
- The policy is the follwing: If a given API version `v1alphaY` was released in kubeadm v1.Z to replace the existing `v1alphaX` API version, reading `v1alphaX` files in kubeadm v1.Z will work, but kubeadm v1.Z will only _write_ the newer API version `v1alphaY`

Member
follwing -> following
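
To make the notes above concrete: when migrating a file by hand, the one change that is always needed is bumping the `apiVersion`. A minimal sketch follows; the field values are placeholders, and kubeadm v1.11 also offers `kubeadm config` subcommands that can print and convert configuration for you (the exact subcommands vary by version, so check `kubeadm config --help`).

```yaml
# Minimal sketch only; real files carry many more fields.
# A v1alpha1 file (still readable by kubeadm v1.11, but no longer written by it):
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.11.0
---
# The same file after migration to v1alpha2 (the version kubeadm v1.11 writes):
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
```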

#### The `.CloudProvider` and `.PrivilegedPods`fields are gone

In-tree cloud providers that `.CloudProvider` once enabled on a best-effort, experimental
basis are deprecated, and use of [out-of-tree cloud providers](#TODO) is encouraged instead.

Member
Link is missing here

Member
The experimental usage of in-tree cloud providers with the .CloudProvider field is deprecated. Usage of out-of-tree cloud providers is encouraged instead.

Contributor

not explicitly mentioned in any public docs... in-tree has a deprecation warning but does not support the assertion that out-of-tree is the thing to use, and the cloud controller manager doesn't really say that external is encouraged over in-tree; it just tells you how to use an external provider running in-tree code.

I wonder if this decision was explicitly stated and documented somewhere. The opening paragraphs in this document almost get at it...
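
As a purely illustrative sketch of the `.CloudProvider` removal discussed above: in `v1alpha1` a dedicated field selected the in-tree provider, while `v1alpha2` has no such field. One possible workaround for clusters that still need the in-tree code path is to pass the flag through the component extra-args maps; whether that is the recommended route is exactly the open question in this thread.

```yaml
# v1alpha1: a dedicated field selected the in-tree cloud provider.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
cloudProvider: openstack
---
# v1alpha2: no .CloudProvider field. If the in-tree integration is still needed,
# one possible (not officially endorsed) workaround is setting the flag via the
# extra-args maps:
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
apiServerExtraArgs:
  cloud-provider: openstack
controllerManagerExtraArgs:
  cloud-provider: openstack
```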

Member
@timothysc timothysc left a comment

Meta question - b/c this is all alpha I don't think this should go into the main docs; instead, add it to the kubeadm repo and link to it from our configuration section.


#### Important notes

- The `v1alpha1` and `v1alpha2` APIs **aren't marked stable** yet, so their semantics will (possibly significantly) change before reaching `v1beta1`. Between `v1beta1` and `v1` the delta is expected to be small/non-existent.

Member

s/delta is expected to be small/non-existent./

api-policies are adhered to (link to api-policy reference)


### API Changes between `v1alpha1` and `v1alpha2`

#### The `.CloudProvider` and `.PrivilegedPods`fields are gone

Member
space after .PrivilegedPods
possibly:
s/gone/deprecated/


The `.PrivilegedPods` boolean field made the API Server and controller manager Static
Pods be privileged Pods, as there is one cloud provider that needs that. However, this can be
equally well by editing the Static Pod manifests after `kubeadm init` has been run.

Member
However, this can be -> missing controlled ?


The `.PrivilegedPods` boolean field made the API Server and controller manager Static
Pods be privileged Pods, as there is one cloud provider that needs that. However, this can be

Member

possibly either specify the CP by name or omit the second part.
not sure.
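
For the `.PrivilegedPods` change, the manual equivalent after `kubeadm init` is editing the generated static Pod manifests. A rough sketch, assuming the default manifest directory `/etc/kubernetes/manifests/` and trimming most of the real manifest:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (heavily abbreviated sketch)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver-amd64:v1.11.0   # illustrative image tag
    securityContext:
      privileged: true   # the setting the .PrivilegedPods boolean used to toggle
```

The kubelet watches that directory, so saving the edited manifest recreates the static Pod with the new security context.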


There are two modes of hosting etcd for kubeadm clusters: locally or externally. Still, this
wasn't reflected in the `v1alpha1` config, hosting fields from both modes under the same
`.Etcd` struct like this:

Member
In the v1alpha1 configuration, hosting fields for both modes are sharing the .Etcd structure, like so:
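
To make the local/external split concrete, here is a hedged sketch of the two shapes; the field names are taken from memory of kubeadm v1.11 and should be checked against the generated API reference.

```yaml
# v1alpha1: local and external settings shared one flat .Etcd struct.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:                 # only meaningful for an external etcd
  - https://10.0.0.10:2379
  caFile: /etc/etcd/ca.crt
  dataDir: /var/lib/etcd     # only meaningful for a local, kubeadm-managed etcd
---
# v1alpha2: the two modes live in mutually exclusive sub-structs.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
etcd:
  external:
    endpoints:
    - https://10.0.0.10:2379
    caFile: /etc/etcd/ca.crt
    certFile: /etc/etcd/apiserver-etcd-client.crt
    keyFile: /etc/etcd/apiserver-etcd-client.key
```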

that it's not possible to specify several bootstrap tokens at once, and that similar fields should
not be hosted in the top-level `MasterConfiguration` struct.

In `v1alpha2`, there is a slice of called `.BootstrapTokens` that contains `BootstrapToken` objects that look like this:

Member

s/slice of/field/
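
For the `.BootstrapTokens` change, a hedged sketch of the v1alpha2 shape (field names assumed from kubeadm v1.11); the list form is what allows several tokens to be declared at once, replacing the v1alpha1 top-level `token`/`tokenTTL` fields:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
  ttl: "24h"
  usages:
  - signing
  - authentication
  groups:
  - system:bootstrappers:kubeadm:default-node-token
- token: "fedcba.9876543210fedcba"
  description: "a second token, e.g. for joining a separate set of workers"
```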

@neolit123
Member

/assign @neolit123

@Bradamant3
Contributor

Bradamant3 commented Jun 18, 2018

@luxas if this is for 1.11, it should be rebased against the release-1.11 branch. But +1 to suggestion from @timothysc to put it in the kubeadm repo and link. (which would moot the rebase requirement)

@luxas
Member Author

luxas commented Jul 6, 2018

I'm not gonna be able to finish this work as I'm going into the military. Can someone help out and carry this over to the kubeadm repo, please?

@neolit123
Member

@luxas no problem. we are going to handle it!

@timothysc
Member

I'm going to close out this issue for now.

@timothysc timothysc closed this Jul 11, 2018