Add vSphere external cloud provider (kubernetes-sigs#5959)
pierreyves-lebrun authored and LuckySB committed Apr 21, 2020
1 parent d528aae commit 1c6b385
Showing 22 changed files with 881 additions and 12 deletions.
90 changes: 90 additions & 0 deletions docs/vsphere-csi.md
@@ -0,0 +1,90 @@
# vSphere CSI Driver

The vSphere CSI driver allows you to provision volumes on a vSphere deployment. The historic Kubernetes in-tree cloud provider is deprecated and will be removed in a future version.

To enable the vSphere CSI driver, uncomment the `vsphere_csi_enabled` option in `group_vars/all/vsphere.yml` and set it to `true`.

To set the number of replicas for the vSphere CSI controller, change the `vsphere_csi_controller_replicas` option in the same file, as shown in the sketch below.
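
A minimal sketch of those two options in `group_vars/all/vsphere.yml` (the replica count here is arbitrary):

```yml
vsphere_csi_enabled: true
vsphere_csi_controller_replicas: 2
```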

You need to supply the vSphere credentials used to deploy the machines that will host Kubernetes. The variables are described in the table below, and an illustrative example follows it.

| Variable | Required | Type | Choices | Default | Comment |
|---------------------------------------------|----------|---------|----------------------------|---------------------------|----------------------------------------------------------------|
| external_vsphere_vcenter_ip | TRUE | string | | | IP/URL of the vCenter |
| external_vsphere_vcenter_port | TRUE | string | | "443" | Port of the vCenter API |
| external_vsphere_insecure | TRUE | string | "true", "false" | "true" | set to "true" if the host above uses a self-signed cert |
| external_vsphere_user | TRUE | string | | | User name for vCenter with required privileges |
| external_vsphere_password | TRUE | string | | | Password for vCenter |
| external_vsphere_datacenter | TRUE | string | | | Datacenter name to use |
| external_vsphere_kubernetes_cluster_id | TRUE | string | | "kubernetes-cluster-id" | Kubernetes cluster ID to use |
| vsphere_cloud_controller_image_tag          | TRUE     | string  |                             | "latest"                  | Cloud controller image tag to use                                |
| vsphere_syncer_image_tag | TRUE | string | | "v1.0.2" | Syncer image tag to use |
| vsphere_csi_attacher_image_tag | TRUE | string | | "v1.1.1" | CSI attacher image tag to use |
| vsphere_csi_controller | TRUE | string | | "v1.0.2" | CSI controller image tag to use |
| vsphere_csi_controller_replicas | TRUE | integer | | 1 | Number of pods Kubernetes should deploy for the CSI controller |
| vsphere_csi_liveness_probe_image_tag | TRUE | string | | "v1.1.0" | CSI liveness probe image tag to use |
| vsphere_csi_provisioner_image_tag | TRUE | string | | "v1.2.2" | CSI provisioner image tag to use |
| vsphere_csi_node_driver_registrar_image_tag | TRUE     | string  |                             | "v1.1.0"                  | CSI node driver registrar image tag to use                       |
| vsphere_csi_driver_image_tag | TRUE | string | | "v1.0.2" | CSI driver image tag to use |
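
As an illustration, a `group_vars/all/vsphere.yml` override combining credentials and a few pinned image tags could look like this (all values are examples taken from this documentation, not recommendations):

```yml
external_vsphere_vcenter_ip: "myvcenter.domain.com"
external_vsphere_user: "administrator@vsphere.local"
external_vsphere_password: "K8s_admin"
external_vsphere_datacenter: "DATACENTER_name"
external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
vsphere_csi_enabled: true
# Pin image tags explicitly instead of relying on the defaults listed above
vsphere_syncer_image_tag: "v1.0.2"
vsphere_csi_driver_image_tag: "v1.0.2"
```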

## Usage example

To test dynamic provisioning using the vSphere CSI driver, make sure to create a [storage policy](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#create-a-storage-policy) and a [storage class](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#create-a-storageclass) (a sketch of such a StorageClass is shown just below), then apply the PersistentVolumeClaim and Pod manifest that follows it:
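
For reference, a StorageClass tied to the `Space-Efficient` storage policy used in the manifest might look like the following (a minimal sketch; the policy name and parameters are assumptions based on the linked tutorial):

```yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: Space-Efficient
provisioner: csi.vsphere.vmware.com
parameters:
  # Name of the SPBM storage policy created in vCenter (assumed)
  storagepolicyname: "Space-Efficient"
```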

```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-vsphere
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: Space-Efficient

---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: csi-data-vsphere
  volumes:
    - name: csi-data-vsphere
      persistentVolumeClaim:
        claimName: csi-pvc-vsphere
        readOnly: false
```

Apply this manifest to your cluster: `kubectl apply -f nginx.yml`

You should see the PVC provisioned and bound:

```ShellSession
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-vsphere Bound pvc-dc7b1d21-ee41-45e1-98d9-e877cc1533ac 1Gi RWO Space-Efficient 10s
```

And the volume mounted to the Nginx Pod (wait until the Pod is Running):

```ShellSession
$ kubectl exec -it nginx -- df -h | grep /var/lib/www/html
/dev/sdb 976M 2.6M 907M 1% /var/lib/www/html
```

## More info

For further information about the vSphere CSI Driver, you can refer to the official [vSphere Cloud Provider documentation](https://cloud-provider-vsphere.sigs.k8s.io/container_storage_interface.html).
82 changes: 72 additions & 10 deletions docs/vsphere.md
@@ -1,13 +1,75 @@
# vSphere cloud provider
# vSphere

Kubespray can be deployed with vSphere as Cloud provider. This feature supports
Kubespray can be deployed with vSphere as Cloud provider. This feature supports:

- Volumes
- Persistent Volumes
- Storage Classes and provisioning of volumes.
- vSphere Storage Policy Based Management for Containers orchestrated by Kubernetes.
- Storage Classes and provisioning of volumes
- vSphere Storage Policy Based Management for Containers orchestrated by Kubernetes

## Prerequisites
## Out-of-tree vSphere cloud provider

### Prerequisites

You first need to configure your vSphere environment by following the [official documentation](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#prerequisites).

After this step you should have:

- vSphere upgraded to 6.7 U3 or later
- VM hardware upgraded to version 15 or higher
- UUID activated for each VM where Kubernetes will be deployed (see the sketch below for one way to set this)
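
One common way to enable the `disk.enableUUID` advanced setting on each VM is with `govc` (a sketch; the connection values reuse this page's examples and the VM path is hypothetical):

```ShellSession
$ export GOVC_URL='myvcenter.domain.com' GOVC_USERNAME='administrator@vsphere.local' GOVC_PASSWORD='K8s_admin' GOVC_INSECURE=1
$ govc vm.change -vm '/DATACENTER_name/vm/k8s-node-1' -e="disk.enableUUID=1"
```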

### Kubespray configuration

First, in `inventory/sample/group_vars/all.yml`, you must set `cloud_provider` to `external` and `external_cloud_provider` to `vsphere`.

```yml
cloud_provider: "external"
external_cloud_provider: "vsphere"
```

Then, in `inventory/sample/group_vars/all/vsphere.yml`, you need to declare your vCenter credentials and enable the vSphere CSI driver following the description below.

| Variable | Required | Type | Choices | Default | Comment |
|----------------------------------------|----------|---------|----------------------------|---------|---------------------------------------------------------------------------|
| external_vsphere_vcenter_ip | TRUE | string | | | IP/URL of the vCenter |
| external_vsphere_vcenter_port | TRUE | string | | "443" | Port of the vCenter API |
| external_vsphere_insecure | TRUE | string | "true", "false" | "true" | set to "true" if the host above uses a self-signed cert |
| external_vsphere_user | TRUE | string | | | User name for vCenter with required privileges |
| external_vsphere_password | TRUE | string | | | Password for vCenter |
| external_vsphere_datacenter | TRUE | string | | | Datacenter name to use |
| external_vsphere_kubernetes_cluster_id | TRUE | string | | "kubernetes-cluster-id" | Kubernetes cluster ID to use |
| vsphere_csi_enabled | TRUE | boolean | | false | Enable vSphere CSI |

Example configuration:

```yml
external_vsphere_vcenter_ip: "myvcenter.domain.com"
external_vsphere_vcenter_port: "443"
external_vsphere_insecure: "true"
external_vsphere_user: "administrator@vsphere.local"
external_vsphere_password: "K8s_admin"
external_vsphere_datacenter: "DATACENTER_name"
external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
vsphere_csi_enabled: true
```

For a more fine-grained CSI setup, refer to the [vsphere-csi](vsphere-csi.md) documentation.

### Deployment

Once the configuration is set, you can execute the playbook again to apply the new configuration:

```ShellSession
cd kubespray
ansible-playbook -i inventory/sample/hosts.ini -b -v cluster.yml
```
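
After the playbook completes, one quick way to check that the external cloud controller and CSI components came up is to list the related pods (the output below is hypothetical; pod names and counts will differ):

```ShellSession
$ kubectl -n kube-system get pods | grep vsphere
vsphere-cloud-controller-manager-abcde   1/1     Running   0          3m
vsphere-csi-controller-0                 5/5     Running   0          2m
vsphere-csi-node-fghij                   3/3     Running   0          2m
```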

You'll find some useful examples [here](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md#sample-manifests-to-test-csi-driver-functionality) to test your configuration.

## In-tree vSphere cloud provider ([deprecated](https://cloud-provider-vsphere.sigs.k8s.io/concepts/in_tree_vs_out_of_tree.html))

### Prerequisites (deprecated)

You first need to configure your vSphere environment by following the [official documentation](https://kubernetes.io/docs/getting-started-guides/vsphere/#vsphere-cloud-provider).

@@ -18,15 +80,15 @@ After this step you should have:

If you intend to leverage the [zone and region node labeling](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domain-beta-kubernetes-io-region), create a tag category for both the zone and region in vCenter. The tags can then be applied at the host, cluster, datacenter, or folder level, and the cloud provider will walk the hierarchy to extract and apply the labels to the Kubernetes nodes.
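
For instance, if the vCenter tag categories were named `k8s-zone` and `k8s-region` (illustrative names), the corresponding variables described in the table below would be:

```yml
vsphere_zone_category: "k8s-zone"
vsphere_region_category: "k8s-region"
```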

## Kubespray configuration
### Kubespray configuration (deprecated)

First you must define the cloud provider in `inventory/sample/group_vars/all.yml` and set it to `vsphere`.

```yml
cloud_provider: vsphere
```

Then, in the same file, you need to declare your vCenter credential following the description below.
Then, in the same file, you need to declare your vCenter credentials following the description below.

| Variable | Required | Type | Choices | Default | Comment |
|------------------------------|----------|---------|----------------------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -45,7 +107,7 @@ Then, in the same file, you need to declare your vCenter credential following th
| vsphere_zone_category | FALSE | string | | | Name of the tag category used to set the `failure-domain.beta.kubernetes.io/zone` label on nodes (Optional, only used for Kubernetes >= 1.12.0) |
| vsphere_region_category | FALSE | string | | | Name of the tag category used to set the `failure-domain.beta.kubernetes.io/region` label on nodes (Optional, only used for Kubernetes >= 1.12.0) |

Example configuration
Example configuration:

```yml
vsphere_vcenter_ip: "myvcenter.domain.com"
@@ -60,9 +122,9 @@ vsphere_scsi_controller_type: "pvscsi"
vsphere_resource_pool: "K8s-Pool"
```

## Deployment
### Deployment (deprecated)

Once the configuration is set, you can execute the playbook again to apply the new configuration
Once the configuration is set, you can execute the playbook again to apply the new configuration:

```ShellSession
cd kubespray
4 changes: 2 additions & 2 deletions inventory/sample/group_vars/all/all.yml
@@ -54,8 +54,8 @@ loadbalancer_apiserver_healthcheck_port: 8081
# cloud_provider:

## When cloud_provider is set to 'external', you can set the cloud controller to deploy
## Supported cloud controllers are: 'openstack'
## When openstack is used make sure to source in the openstack credentials
## Supported cloud controllers are: 'openstack' and 'vsphere'
## When openstack or vsphere are used make sure to source in the required fields
# external_cloud_provider:

## kubeadm deployment mode
20 changes: 20 additions & 0 deletions inventory/sample/group_vars/all/vsphere.yml
@@ -0,0 +1,20 @@
## Values for the external vSphere Cloud Provider
# external_vsphere_vcenter_ip: "myvcenter.domain.com"
# external_vsphere_vcenter_port: "443"
# external_vsphere_insecure: "true"
# external_vsphere_user: "administrator@vsphere.local"
# external_vsphere_password: "K8s_admin"
# external_vsphere_datacenter: "DATACENTER_name"
# external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"

## Tags for the external vSphere Cloud Provider images
# external_vsphere_cloud_controller_image_tag: "latest"
# vsphere_syncer_image_tag: "v1.0.2"
# vsphere_csi_attacher_image_tag: "v1.1.1"
# vsphere_csi_controller: "v1.0.2"
# vsphere_csi_liveness_probe_image_tag: "v1.1.0"
# vsphere_csi_provisioner_image_tag: "v1.2.2"

## To use vSphere CSI plugin to provision volumes set this value to true
# vsphere_csi_enabled: true
# vsphere_csi_controller_replicas: 1
14 changes: 14 additions & 0 deletions roles/kubernetes-apps/csi_driver/vsphere/defaults/main.yml
@@ -0,0 +1,14 @@
---
external_vsphere_vcenter_port: "443"
external_vsphere_insecure: "true"
external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"

vsphere_syncer_image_tag: "v1.0.2"
vsphere_csi_attacher_image_tag: "v1.1.1"
vsphere_csi_controller: "v1.0.2"
vsphere_csi_liveness_probe_image_tag: "v1.1.0"
vsphere_csi_provisioner_image_tag: "v1.2.2"
vsphere_csi_node_driver_registrar_image_tag: "v1.1.0"
vsphere_csi_driver_image_tag: "v1.0.2"

vsphere_csi_controller_replicas: 1
44 changes: 44 additions & 0 deletions roles/kubernetes-apps/csi_driver/vsphere/tasks/main.yml
@@ -0,0 +1,44 @@
---
- include_tasks: vsphere-credentials-check.yml
  tags: vsphere-csi-driver

- name: vSphere CSI Driver | Generate CSI cloud-config
  template:
    src: "{{ item }}.j2"
    dest: "{{ kube_config_dir }}/{{ item }}"
    mode: 0640
  with_items:
    - vsphere-csi-cloud-config
  when: inventory_hostname == groups['kube-master'][0]
  tags: vsphere-csi-driver

- name: vSphere CSI Driver | Generate Manifests
  template:
    src: "{{ item }}.j2"
    dest: "{{ kube_config_dir }}/{{ item }}"
  with_items:
    - vsphere-csi-controller-rbac.yml
    - vsphere-csi-controller-ss.yml
    - vsphere-csi-node.yml
  register: vsphere_csi_manifests
  when: inventory_hostname == groups['kube-master'][0]
  tags: vsphere-csi-driver

- name: vSphere CSI Driver | Create a CSI secret
  command: "{{ bin_dir }}/kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf={{ kube_config_dir }}/vsphere-csi-cloud-config -n kube-system"
  when: inventory_hostname == groups['kube-master'][0]
  tags: vsphere-csi-driver

- name: vSphere CSI Driver | Apply Manifests
  kube:
    kubectl: "{{ bin_dir }}/kubectl"
    filename: "{{ kube_config_dir }}/{{ item.item }}"
    state: "latest"
  with_items:
    - "{{ vsphere_csi_manifests.results }}"
  when:
    - inventory_hostname == groups['kube-master'][0]
    - not item is skipped
  loop_control:
    label: "{{ item.item }}"
  tags: vsphere-csi-driver
@@ -0,0 +1,38 @@
---
- name: External vSphere Cloud Provider | check external_vsphere_vcenter_ip value
  fail:
    msg: "external_vsphere_vcenter_ip is missing"
  when: external_vsphere_vcenter_ip is not defined or not external_vsphere_vcenter_ip

- name: External vSphere Cloud Provider | check external_vsphere_vcenter_port value
  fail:
    msg: "external_vsphere_vcenter_port is missing"
  when: external_vsphere_vcenter_port is not defined or not external_vsphere_vcenter_port

- name: External vSphere Cloud Provider | check external_vsphere_insecure value
  fail:
    msg: "external_vsphere_insecure is missing"
  when: external_vsphere_insecure is not defined or not external_vsphere_insecure

- name: External vSphere Cloud Provider | check external_vsphere_user value
  fail:
    msg: "external_vsphere_user is missing"
  when: external_vsphere_user is not defined or not external_vsphere_user

- name: External vSphere Cloud Provider | check external_vsphere_password value
  fail:
    msg: "external_vsphere_password is missing"
  when:
    - external_vsphere_password is not defined or not external_vsphere_password

- name: External vSphere Cloud Provider | check external_vsphere_datacenter value
  fail:
    msg: "external_vsphere_datacenter is missing"
  when:
    - external_vsphere_datacenter is not defined or not external_vsphere_datacenter

- name: External vSphere Cloud Provider | check external_vsphere_kubernetes_cluster_id value
  fail:
    msg: "external_vsphere_kubernetes_cluster_id is missing"
  when:
    - external_vsphere_kubernetes_cluster_id is not defined or not external_vsphere_kubernetes_cluster_id
@@ -0,0 +1,9 @@
[Global]
cluster-id = "{{ external_vsphere_kubernetes_cluster_id }}"

[VirtualCenter "{{ external_vsphere_vcenter_ip }}"]
insecure-flag = "{{ external_vsphere_insecure }}"
user = "{{ external_vsphere_user }}"
password = "{{ external_vsphere_password }}"
port = "{{ external_vsphere_vcenter_port }}"
datacenters = "{{ external_vsphere_datacenter }}"
@@ -0,0 +1,42 @@
kind: ServiceAccount
apiVersion: v1
metadata:
  name: vsphere-csi-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vsphere-csi-controller-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vsphere-csi-controller-binding
subjects:
  - kind: ServiceAccount
    name: vsphere-csi-controller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: vsphere-csi-controller-role
  apiGroup: rbac.authorization.k8s.io