Commit

Add docs version v0.10
github-actions[bot] committed Jul 29, 2024
1 parent 3f9b432 commit 1eb1b5b
Showing 44 changed files with 2,499 additions and 0 deletions.
58 changes: 58 additions & 0 deletions versioned_docs/version-0.10/developer-guide/development.md
@@ -0,0 +1,58 @@
---
sidebar_position: 3
---

# Development setup

## Prerequisites

- [kind](https://kind.sigs.k8s.io/)
- [helm](https://helm.sh/)
- [tilt](https://tilt.dev/)

## Create a local development environment

1. Clone the [Rancher Turtles](https://github.com/rancher/turtles) repository locally

2. Create **tilt-settings.yaml**:

```yaml
{
  "k8s_context": "k3d-rancher-test",
  "default_registry": "ghcr.io/turtles-dev",
  "debug": {
    "turtles": {
      "continue": true,
      "port": 40000
    }
  }
}
```

3. Open a terminal in the root of the Rancher Turtles repository
4. Run the following:

```bash
make dev-env

# Or if you want to use a custom hostname for Rancher
RANCHER_HOSTNAME=my.customhost.dev make dev-env
```

5. When Tilt has started, open a new terminal and start ngrok or inlets:

```bash
kubectl port-forward --namespace cattle-system svc/rancher 10000:443
ngrok http https://localhost:10000
```

## What happens when you run `make dev-env`?

1. A [kind](https://kind.sigs.k8s.io/) cluster is created with the following [configuration](https://github.com/rancher/turtles/blob/main/scripts/kind-cluster-with-extramounts.yaml) (an illustrative snippet is shown after this list).
1. [Cluster API Operator](../developer-guide/install_capi_operator.md) is installed using helm, which includes:
- Core Cluster API controller
- Kubeadm Bootstrap and Control Plane Providers
- Docker Infrastructure Provider
- Cert manager
1. `Rancher manager` is installed using helm.
1. `tilt up` is run to start the development environment.
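
A minimal, illustrative sketch of what a kind configuration with extra mounts typically looks like — not the exact contents of the linked file; the Docker socket mount is what allows the Docker infrastructure provider (CAPD) to create workload cluster containers on the host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      # Expose the host Docker socket so CAPD can manage sibling containers
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
```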
112 changes: 112 additions & 0 deletions versioned_docs/version-0.10/developer-guide/install_capi_operator.md
@@ -0,0 +1,112 @@
---
sidebar_position: 2
---

# Installing Cluster API Operator

:::caution
Installing Cluster API Operator by following this page (without it being a Helm dependency of Rancher Turtles) is not the recommended installation method and is intended only for local development purposes.
:::

This section describes how to install `Cluster API Operator` in the Kubernetes cluster.

## Installing Cluster API (CAPI) and providers

`CAPI` and the desired `CAPI` providers can be installed using the Helm-based installation of the [`Cluster API Operator`](https://github.com/kubernetes-sigs/cluster-api-operator) or as a Helm dependency of `Rancher Turtles`.

### Install manually with Helm (alternative)

To install `Cluster API Operator` together with version `v1.4.6` of core `CAPI` and the `Docker` provider using Helm, follow these steps:

1. Add the Helm repository for the `Cluster API Operator`:
```bash
helm repo add capi-operator https://kubernetes-sigs.github.io/cluster-api-operator
helm repo add jetstack https://charts.jetstack.io

```
2. Update the Helm repository:
```bash
helm repo update
```
3. Install the Cert-Manager:
```bash
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
```
4. Install the `Cluster API Operator`:
```bash
helm install capi-operator capi-operator/cluster-api-operator \
  --create-namespace -n capi-operator-system \
  --set infrastructure=docker:v1.4.6 \
  --set core=cluster-api:v1.4.6 \
  --timeout 90s --wait # Core Cluster API with kubeadm bootstrap and control plane providers will also be installed
```

:::note
`cert-manager` is a hard requirement for `CAPI` and `Cluster API Operator`.
:::
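
To confirm the installation, a quick optional check is sketched below. The resource kinds are the provider objects defined by the Cluster API Operator CRDs; the namespaces and pod names may differ depending on your setup:

```bash
# List the provider objects the operator reconciles
kubectl get coreproviders,infrastructureproviders -A

# Check that the provider controllers are running
kubectl get pods -A | grep -E 'capi|capd'
```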

To provide additional environment variables, enable feature gates, or supply cloud credentials (similar to the `clusterctl` [common provider](https://cluster-api.sigs.k8s.io/user/quick-start#initialization-for-common-providers) flow), a variables secret can be referenced by its `name` and `namespace` when installing the `Cluster API Operator`, as shown below.

```bash
helm install capi-operator capi-operator/cluster-api-operator \
  --create-namespace -n capi-operator-system \
  --set infrastructure=docker:v1.4.6 \
  --set core=cluster-api:v1.4.6 \
  --timeout 90s \
  --secret-name <secret_name> \
  --wait
```

Example secret data:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: variables
  namespace: default
type: Opaque
stringData:
  CLUSTER_TOPOLOGY: "true"
  EXP_CLUSTER_RESOURCE_SET: "true"
```
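
To create the secret in the cluster, you can apply the manifest above or create it directly from literals; the local file name `variables-secret.yaml` is illustrative:

```bash
# Apply the manifest shown above, saved locally as variables-secret.yaml
kubectl apply -f variables-secret.yaml

# Or create the secret directly from literals
kubectl create secret generic variables --namespace default \
  --from-literal=CLUSTER_TOPOLOGY=true \
  --from-literal=EXP_CLUSTER_RESOURCE_SET=true
```
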
To install more than one provider together with the `Cluster API Operator`, the `infrastructure` value can be set to multiple provider names separated by a semicolon. For example:

```bash
helm install ... --set infrastructure="docker:v1.4.6;aws:v2.3.5"
```

Here the `infrastructure` value is set to `docker:v1.4.6;aws:v2.3.5`, listing the desired providers. This means that the `Cluster API Operator` will install and manage both the `Docker` and `AWS` providers, at versions `v1.4.6` and `v2.3.5` respectively.

The cluster is now ready for installing Rancher Turtles. The default behavior when installing the chart is to install Cluster API Operator as a Helm dependency. Since we installed it manually before installing Rancher Turtles, the `cluster-api-operator.enabled` feature must be explicitly disabled; otherwise it would conflict with the existing installation. Refer to [Install Rancher Turtles without Cluster API Operator](../developer-guide/install_capi_operator.md#install-rancher-turtles-without-cluster-api-operator-as-a-helm-dependency) for the next steps.

:::tip
For more fine-grained control of the providers and other components installed with CAPI, see the [Add the infrastructure provider](../tasks/capi-operator/add_infrastructure_provider.md) section.
:::

### Install Rancher Turtles without `Cluster API Operator` as a Helm dependency

:::note
This option is only suitable for development purposes and not recommended for production environments.
:::

The `rancher-turtles` chart is available at https://rancher.github.io/turtles, and this Helm repository must be added before proceeding with the installation:

```bash
helm repo add turtles https://rancher.github.io/turtles
helm repo update
```

and then it can be installed into the `rancher-turtles-system` namespace with:

```bash
helm install rancher-turtles turtles/rancher-turtles --version v0.10.0 \
  -n rancher-turtles-system \
  --set cluster-api-operator.enabled=false \
  --set cluster-api-operator.cluster-api.enabled=false \
  --create-namespace --wait \
  --dependency-update
```

As you can see, we are telling Helm to skip installing `cluster-api-operator` as a dependency.
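
As a quick, optional check, you can verify the release and its pods using the namespace from the command above:

```bash
helm list -n rancher-turtles-system
kubectl get pods -n rancher-turtles-system
```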

7 changes: 7 additions & 0 deletions versioned_docs/version-0.10/developer-guide/intro.md
@@ -0,0 +1,7 @@
---
sidebar_position: 0
---

# Introduction

Everything you need to know about developing Rancher Turtles.
125 changes: 125 additions & 0 deletions versioned_docs/version-0.10/getting-started/air-gapped-environment.md
@@ -0,0 +1,125 @@
---
sidebar_position: 3
---

# Air-gapped environment

Rancher Turtles provides support for an air-gapped environment out-of-the-box by leveraging features of the Cluster API Operator, the required dependency for installing Rancher Turtles.

To provision and configure Cluster API providers, Turtles uses the **CAPIProvider** resource to allow managing Cluster API Operator manifests in a declarative way. Every field provided by the upstream CAPI Operator resource for the desired `spec.type` is also available in the `spec` of the **CAPIProvider** resource.

To install Cluster API providers in an air-gapped environment, the following steps need to be completed:

1. Configure the Cluster API Operator for an air-gapped environment:
   - The operator chart will be fetched and stored as a part of the Turtles chart.
   - Provide image overrides for the operator from an accessible image repository.
2. Configure Cluster API providers for an air-gapped environment:
   - Provide fetch configuration for each provider from an accessible location (e.g., an internal github/gitlab server) or from pre-created ConfigMaps within the cluster.
   - Provide image overrides for each provider to pull images from an accessible image repository.
3. Configure Rancher Turtles for an air-gapped environment:
   - Collect the Rancher Turtles images and publish them to the private registry. See the [cert-manager image collection example](https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/publish-images#2-collect-the-cert-manager-image) for reference.
   - Provide fetch configuration and image values for the `core` and `caprke2` providers in [values.yaml](../reference-guides/rancher-turtles-chart/values.md#cluster-api-operator-values).
   - Provide the image value for the Cluster API Operator Helm chart dependency in [values.yaml](https://github.com/kubernetes-sigs/cluster-api-operator/blob/main/hack/charts/cluster-api-operator/values.yaml#L26); image values specified under the `cluster-api-operator` key are passed along to the Cluster API Operator (see the sketch after this list).
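
As a rough illustration only, an override for the operator image supplied through the Turtles chart values might look like the sketch below. The key layout is assumed to follow the upstream cluster-api-operator chart's `image` values and should be verified against the linked values files for your chart versions; the registry host and tag are placeholders:

```yaml
cluster-api-operator:
  # Assumed layout of the upstream cluster-api-operator chart's image values;
  # point the repository at your private registry mirror.
  image:
    manager:
      repository: registry.internal.example.com/capi-operator/cluster-api-operator
      tag: ""  # pin to the tag you mirrored
```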

## Example Usage

As an admin, I need to fetch the vSphere provider (CAPV) components from within the cluster because I am working in an air-gapped environment.

In this example, there is a ConfigMap in the `capv-system` namespace that defines the components and metadata of the provider. It can be created manually or by running the following commands:

```bash
# Get the file contents from the GitHub release
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/v1.8.5/infrastructure-components.yaml -o components.yaml
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/v1.8.5/metadata.yaml -o metadata.yaml

# Create the configmap from the files
kubectl create configmap v1.8.5 --namespace=capv-system --from-file=components=components.yaml --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml

```

This command example would need to be adapted to the provider and version you want to use. The resulting config map will look similar to the example below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    provider-components: vsphere
  name: v1.8.5
  namespace: capv-system
data:
  components: |
    # Components for v1.8.5 YAML go here
  metadata: |
    # Metadata information goes here
```
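
Once the generated manifest matches what you expect, apply it to the cluster (using the file name produced by the commands above):

```bash
kubectl apply -f configmap.yaml
```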

A **CAPIProvider** resource then needs to be created to represent the vSphere infrastructure provider, and it must be configured with a `fetchConfig`. Its label selector allows the operator to determine the available versions of the vSphere provider and the Kubernetes resources to deploy (i.e. those contained within ConfigMaps matching the label selector).

Since the provider's version is marked as `v1.8.5`, the operator uses the components information from the ConfigMap with matching label to install the vSphere provider.

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: vsphere
  namespace: capv-system
spec:
  name: vsphere
  type: infrastructure
  version: v1.8.5
  configSecret:
    name: vsphere-variables
  fetchConfig:
    selector:
      matchLabels:
        provider-components: vsphere
  deployment:
    containers:
      - name: manager
        imageUrl: "gcr.io/myregistry/capv-controller:v1.8.5-foo"
  variables:
    CLUSTER_TOPOLOGY: "true"
    EXP_CLUSTER_RESOURCE_SET: "true"
    EXP_MACHINE_POOL: "true"
```

Additionally, the **CAPIProvider** overrides the container image used for the provider via the `deployment.containers[].imageUrl` field. This allows the operator to pull the image from a registry within the air-gapped environment.
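
The `configSecret` in the example above references a `vsphere-variables` secret that holds provider variables. A minimal sketch of such a secret is shown below; the variable names and values are illustrative and must match what your provider version expects:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-variables
  namespace: capv-system
type: Opaque
stringData:
  # Illustrative credentials/variables for the vSphere provider
  VSPHERE_SERVER: "vcenter.internal.example.com"
  VSPHERE_USERNAME: "administrator@vsphere.local"
  VSPHERE_PASSWORD: "changeme"
```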

### When manifests do not fit into a ConfigMap

There is a limit on the [maximum size](https://kubernetes.io/docs/concepts/configuration/configmap/#motivation) of a ConfigMap: 1 MiB. If the manifests do not fit into this size, Kubernetes will generate an error and the provider installation will fail. To avoid this, you can compress the manifests before storing them in the ConfigMap.

For example, suppose you have two files: `components.yaml` and `metadata.yaml`. To create a working ConfigMap:

1. Compress `components.yaml` using the `gzip` CLI tool

```sh
gzip -c components.yaml > components.gz
```

2. Create a ConfigMap manifest from the archived data

```sh
kubectl create configmap v1.8.5 --namespace=capv-system --from-file=components=components.gz --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml
```

3. Edit the file to add the `provider.cluster.x-k8s.io/compressed: "true"` annotation

```sh
yq eval -i '.metadata.annotations += {"provider.cluster.x-k8s.io/compressed": "true"}' configmap.yaml
```

**Note**: without this annotation, the operator won't be able to determine whether the data is compressed.

4. Add the labels that will be used to match the ConfigMap in the `fetchConfig` section of the provider (see the sketch after these steps)

```sh
yq eval -i '.metadata.labels += {"my-label": "label-value"}' configmap.yaml
```

5. Create the ConfigMap in your Kubernetes cluster using kubectl

```sh
kubectl create -f configmap.yaml
```
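
For completeness, a minimal sketch of how the provider's `fetchConfig` would then match the label added in step 4, following the **CAPIProvider** example shown earlier (the label name and value are the illustrative ones used above):

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: vsphere
  namespace: capv-system
spec:
  type: infrastructure
  version: v1.8.5
  fetchConfig:
    selector:
      matchLabels:
        # Must match the label set on the ConfigMap in step 4
        my-label: label-value
```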