diff --git a/docs/helm-charts.md b/docs/helm-charts.md
index 76cfd0f4..c8d1978a 100644
--- a/docs/helm-charts.md
+++ b/docs/helm-charts.md
@@ -1,82 +1,42 @@
-# Installation and options
+# Installing s3gw with helm charts
-The canonical way to install the helm chart is via a helm repository:
+Before you begin, ensure Helm is installed. To install Helm, see the
+[documentation](https://helm.sh/docs/intro/install/) or run the following:
-```bash
-helm repo add s3gw https://aquarist-labs.github.io/s3gw-charts/
-helm install $RELEASE_NAME s3gw/s3gw --namespace $S3GW_NAMESPACE \
- --create-namespace -f /path/to/your/custom/values.yaml
-```
-
-The chart can also be installed directly from the git repository. To do so, clone
-the repository:
-
-```bash
-git clone https://github.com/aquarist-labs/s3gw-charts.git
+```shell
+curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
+chmod 700 get_helm.sh
+./get_helm.sh
```
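+
+You can verify the installation with:
+
+```shell
+helm version
+```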
-The chart can then be installed from within the repository directory:
+Clone the s3gw-charts repository and change into its directory:
-```bash
+```shell
+git clone https://github.com/aquarist-labs/s3gw-charts.git
cd s3gw-charts
-helm install $RELEASE_NAME charts/s3gw --namespace $S3GW_NAMESPACE \
- --create-namespace -f /path/to/your/custom/values.yaml
```
-Before installing, familiarize yourself with the options. If necessary, provide
-your own `values.yaml` file.
-
-## Rancher
-
-You can install the s3gw via the Rancher App Catalog. The steps are as follows:
+## Configuring values.yaml
-- Cluster -> Projects/Namespaces - create the `s3gw` namespace.
-- Apps -> Repositories -> Create `s3gw` using the s3gw-charts Web URL
- and the main branch.
-- Apps -> Charts -> Install Traefik.
-- Apps -> Charts -> Install `s3gw`.
- Select the `s3gw` namespace previously created.
+Helm charts can be customized for your Kubernetes environment. For a default
+installation, the only option you are required to update is the domain. Provide
+your settings in a custom `values.yaml` file, or set individual options on the
+command line directly using `helm --set key=value`.
-## Dependencies
+**Note:** We recommend at least updating the default access credentials, but it
+is not necessary for a test installation. See below for more information.
-### Traefik
+Once the domain has been configured, the chart can be installed from within the
+repository directory:
-If you intend to install s3gw with an ingress resource, you must ensure your
-environment is equipped with a [Traefik](https://helm.traefik.io/traefik)
-ingress controller.
-
-You can use a different ingress controller, but note you will have to
-create your own ingress resource.
-
-### Certificate manager
-
-If you want, you can automate the TLS certificate management.
-s3gw can use [cert-manager](https://cert-manager.io/) in order to create TLS
-certificates for the various ingresses and internal ClusterIP resources.
-
-If cert-manager is not already installed on the cluster,
-it can be installed as follows:
-
-```shell
-$ kubectl create namespace cert-manager
-$ helm repo add jetstack https://charts.jetstack.io
-$ helm repo update
-$ helm install cert-manager --namespace cert-manager jetstack/cert-manager \
- --set installCRDs=true \
- --set extraArgs[0]=--enable-certificate-owner-ref=true
+```shell
+helm install $RELEASE_NAME charts/s3gw --namespace $S3GW_NAMESPACE \
+ --create-namespace -f /path/to/your/custom/values.yaml
```
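+
+Alternatively, individual options can be set on the command line with `--set`
+instead of a values file. A minimal sketch using the `logLevel` option described
+later in this document (substitute whichever keys you need to override):
+
+```shell
+# --set-string keeps the value as a string, matching `logLevel: "1"` in values.yaml
+helm install $RELEASE_NAME charts/s3gw --namespace $S3GW_NAMESPACE \
+    --create-namespace --set-string logLevel=1
+```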
-> **WARNING**: If the cert-manager is not installed in the namespace `cert-manager`,
-> you have to set `.Values.certManagerNamespace` accordingly,
-otherwise the s3gw installation fails.
-
-## Options
-
-Helm charts can be customized for your Kubernetes environment. To do so,
-either provide a `values.yaml` file with your settings, or set the options on
-the command line directly using `helm --set key=value`.
+### Options
-### Access credentials
+#### Access credentials
It is strongly advisable to customize the initial access credentials.
These can be used to access the admin UI, as well as the S3 endpoint. Additional
@@ -144,7 +104,7 @@ You can set the name of the existing secret with:
defaultUserCredentialsSecret: "my-secret"
```
-### Service name
+#### Service name
There are two possible ways to access the s3gw: from inside the Kubernetes
cluster and from the outside. For both, the s3gw must be configured with the
@@ -206,7 +166,7 @@ before accessing the UI via `https://ui.hostname`
cat certificate.pem | base64 -w 0
```
-### Storage
+#### Storage
The s3gw is best deployed on top of a [Longhorn](https://longhorn.io) volume. If
you have Longhorn installed in your cluster, all appropriate resources are
@@ -228,13 +188,13 @@ storageClass:
create: false
```
-#### Local storage
+##### Local storage
You can use the `storageClass.local` and `storageClass.localPath` variables to
set up a node-local volume for testing if you don not have Longhorn. This is an
experimental feature for development use only.
-### Image settings
+#### Image settings
In some cases, custom image settings are needed, for example in an air-gapped
environment or for developers. In that case, you can modify the registry and
@@ -266,14 +226,14 @@ being more verbose:
logLevel: "1"
```
-### Container Object Storage Interface (COSI)
+#### Container Object Storage Interface (COSI)
> **WARNING**: Be advised that COSI standard is currently in **alpha** state.
> The COSI implementation provided by s3gw is considered an experimental feature
> and changes to the COSI standard are expected in this phase.
> The s3gw team does not control the upstream development of COSI.
-#### Prerequisites
+##### Prerequisites
If you are going to use COSI, ensure some resources are pre-deployed on the cluster.
@@ -302,7 +262,7 @@ NAME READY STATUS RESTARTS AGE
objectstorage-controller-6fc5f89444-4ws72 1/1 Running 0 2d6h
```
-#### Installation
+##### Installation
COSI support is disabled by default in s3gw. To enable it, set:
diff --git a/docs/quickstart.md b/docs/quickstart.md
index e04b6e93..3fc8723d 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -1,5 +1,16 @@
# Quickstart
+## Rancher
+
+You can install the s3gw via the Rancher App Catalog. The steps are as follows:
+
+- Cluster -> Projects/Namespaces - create the `s3gw` namespace.
+- Apps -> Repositories -> Create `s3gw` using the s3gw-charts Web URL
+ and the main branch.
+- Apps -> Charts -> Install Traefik.
+- Apps -> Charts -> Install `s3gw`.
+ Select the `s3gw` namespace previously created.
+
## Helm chart
Add the helm chart to your helm repos and install from there. There are [several
diff --git a/docs/s3gw-with-k8s-k3s.md b/docs/s3gw-with-k8s-k3s.md
index 7c9120e1..243e40b1 100644
--- a/docs/s3gw-with-k8s-k3s.md
+++ b/docs/s3gw-with-k8s-k3s.md
@@ -1,447 +1,84 @@
-# Setting up the s3gw with K8s or K3s
+# Installing k8s for s3gw
-## K8s
+The following document describes the prerequisite installations required to run
+the S3 Gateway (s3gw), an S3 object storage service.
-This guide details running an `s3gw` image on the latest stable
-Kubernetes release. You will be able to quickly build a cluster installed on a
-set of virtual machines and have a certain degree of choice in terms of
-customization options. If you are looking for a more lightweight environment
-running directly on bare metal, refer to our [K3s section](#k3s-with-longhorn).
+To install the s3gw, a Kubernetes (k8s) distribution is required. This project
+recommends k3s as a minimal k8s distribution, but any k8s distribution can be
+used.
-### Description
+For the purposes of this guide, k3s is the chosen k8s distribution and is
+referenced throughout.
-The entire environment build process is automated by a set of Ansible playbooks.
-The cluster is created with exactly one `admin` node and an arbitrary number of
-`worker` nodes. A single virtual machine acting as an `admin` node is also
-possible; in this case, it will be able to schedule pods as a `worker` node.
-Name topology for nodes is the following:
+## Prerequisites
-```text
-admin
-worker-1
-worker-2
-...
-```
-
-### Requirements
-
-Make sure you have installed the following applications on your system:
-
-- Vagrant
-- libvirt
-- Ansible
-
-### Building the environment
-
-You can build the environment with the `setup-k8s.sh` script. The simplest form
-you can use is:
-
-```bash
-$ ./setup-k8s.sh build
-Building environment ...
-```
-
-This will trigger the build of a Kubernetes cluster formed by one node `admin`
-and one node `worker`.
-
-You can customize the build with the following environment variables:
-
-```text
-IMAGE_NAME : The Vagrant box image used in the cluster
-VM_NET : The virtual machine subnet used in the cluster
-VM_NET_LAST_OCTET_START : Vagrant will increment this value when creating
- vm(s) and assigning an ip
-CIDR_NET : The CIDR subnet used by the Calico network plugin
-WORKER_COUNT : The number of Kubernetes workers in the cluster
-ADMIN_MEM : The RAM amount used by the admin node (Vagrant
- format)
-ADMIN_CPU : The CPU amount used by the admin node (Vagrant
- format)
-ADMIN_DISK : yes/no, when yes a disk will be allocated for the
- admin node - this will be effective only for mono
- clusters
-ADMIN_DISK_SIZE : The disk size allocated for the admin node
- (Vagrant format) - this will be effective only for
- mono clusters
-WORKER_MEM : The RAM amount used by a worker node (Vagrant
- format)
-WORKER_CPU : The CPU amount used by a worker node (Vagrant
- format)
-WORKER_DISK : yes/no, when yes a disk will be allocated for the
- worker node
-WORKER_DISK_SIZE : The disk size allocated for a worker node (Vagrant
- format)
-CONTAINER_ENGINE : The host's local container engine used to build
- the s3gw container (podman/docker)
-STOP_AFTER_BOOTSTRAP : yes/no, when yes stop the provisioning just after
- the bootstrapping phase
-START_LOCAL_REGISTRY : yes/no, when yes start a local insecure image
- registry at admin.local:5000
-S3GW_IMAGE : The s3gw's container image used when deploying the
- application on k8s
-K8S_DISTRO : The Kubernetes distribution to install; specify
- k3s or k8s (k8s default)
-INGRESS : The ingress implementation to be used; NGINX or
- Traefik (NGINX default)
-PROV_USER : The provisioning user used by Ansible (vagrant
- default)
-S3GW_UI_REPO : A GitHub repository to be used when building the
- s3gw-ui's image
-S3GW_UI_VERSION : A S3GW_UI_REPO's branch to be used
-SCENARIO : An optional scenario to be loaded in the cluster
-```
-
-For example, you could start a more specialized build with:
-
-```bash
-$ IMAGE_NAME=generic/ubuntu1804 WORKER_COUNT=4 ./setup-k8s.sh build
-Building environment ...
-```
-
-Or create a mono virtual machine cluster with the lone `admin` node with:
-
-```bash
-$ WORKER_COUNT=0 ./setup-k8s.sh build
-Building environment ...
-```
-
-In this case, the node will be able to schedule pods as a `worker` node.
-
-### Destroying the environment
-
-You can destroy a previously built environment with the following command:
-
-```bash
-$ ./setup-k8s.sh destroy
-Destroying environment ...
-```
-
-Be sure to match the `WORKER_COUNT` value with the one you used in the build
-phase.
-
-Providing a lower value instead of the actual one will cause some allocated VM
-not to be released by Vagrant.
-
-### Starting the environment
-
-You can start a previously built environment with:
-
-```bash
-$ ./setup-k8s.sh start
-Starting environment ...
-```
-
-Be sure to match the `WORKER_COUNT` value with the one you used in the build
-phase.
-
-Providing a lower value instead of the actual one will cause some allocated VM
-not to start.
-
-### Accessing the environment
-
-#### ssh
-
-You can connect through `ssh` to all nodes in the cluster.
-
-To connect to the `admin` node run:
-
-```bash
-$ ./setup-k8s.sh ssh admin
-Connecting to admin ...
-```
-
-To connect to a `worker` node run:
-
-```bash
-$ ./setup-k8s.sh ssh worker-2
-Connecting to worker-2 ...
-```
-
-When connecting to a worker node be sure to match the `WORKER_COUNT` value with
-the one you used in the build phase.
-
-## K3s with Longhorn
-
-This is the entrypoint to set up a Kubernetes cluster running s3gw with
-Longhorn. You can choose to install a **K3s** cluster directly on your machine
-or on top of virtual machines.
-
-Refer to the appropriate section to proceed with the setup:
-
-- [K3s on bare metal](#k3s-on-bare-metal)
-- [K3s on virtual machines](#k3s-on-virtual-machines)
-
-### Ingresses
-
-Services are exposed with a Kubernetes ingress; each service category is
-allocated on a separate virtual host:
-
-- **Longhorn dashboard**, on: `longhorn.local`
-- **s3gw**, on: `s3gw.local` and `s3gw-no-tls.local`
-- **s3gw s3 explorer**, on: `s3gw-ui.local` and `s3gw-ui-no-tls.local`
-
-When you are running the cluster on a virtual machine, you can patch host's
-`/etc/hosts` file as follows:
-
-```text
-10.46.201.101 longhorn.local s3gw.local s3gw-no-tls.local s3gw-ui.local s3gw-ui-no-tls.local
-```
-
-This will make host names resolving with the admin node. Otherwise, when you are
-running the cluster on bare metal, you can patch host's `/etc/hosts` file as
-follows:
-
-```text
-127.0.0.1 longhorn.local s3gw.local s3gw-no-tls.local s3gw-ui.local s3gw-ui-no-tls.local
-```
-
-Services can now be accessed at:
-
-```text
-https://longhorn.local
-https://s3gw.local
-http://s3gw-no-tls.local
-https://s3gw-ui.local
-http://s3gw-ui-no-tls.local
-```
-
-### K3s on bare metal
-
-This document guides you through the setup of a K3s cluster on bare metal. If
-you are looking for K3s cluster running on virtual machines, refer to
-[K3s on virtual machines](#k3s-on-virtual-machines).
-
-#### Minimum free disk space
-
-Longhorn requires a `minimal available storage percentage` on the root disk,
-which is `25%` by default. Check [Longhorn Docs] for details.
-
-#### Disabling firewalld
-
-In some host systems, including OpenSUSE Tumbleweed, you need to disable
-firewalld to ensure proper functioning of k3s and its pods:
+If you are using openSUSE Tumbleweed, we recommend stopping firewalld to ensure
+k3s and its pods run with full functionality. To do so, run:
```shell
-sudo systemctl stop firewalld.service
+sudo systemctl stop firewalld.service
```
-This is something we intend figuring out in the near future.
+You can install the s3gw for test purposes locally, on bare-metal hardware, or
+on virtual machines. In each case, ensure you provide adequate disk space:
+Longhorn requires a minimal available storage percentage on the root disk,
+which is 25% by default. Check the
+[Longhorn documentation](https://longhorn.io/docs/) for details.
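+
+You can quickly check how much space is free on the root disk with:
+
+```shell
+df -h /
+```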
-#### From the internet
+### Traefik
-You can easily set up k3s with s3gw from the internet, by running the following
-command:
+If you intend to install s3gw with an ingress resource, you must ensure your
+environment is equipped with a [Traefik](https://helm.traefik.io/traefik)
+ingress controller.
-```shell
-curl -sfL \
- https://raw.githubusercontent.com/aquarist-labs/s3gw-tools/main/env/setup.sh \
- | sh -
-```
+You can use a different ingress controller, but note you will have to
+create your own ingress resource.
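+
+Note that k3s typically ships with Traefik as its default ingress controller, so
+no extra steps are usually needed there. On other distributions, a sketch of a
+helm-based install using the chart repository linked above might look like this
+(chart and release names are assumptions; check the Traefik documentation):
+
+```shell
+helm repo add traefik https://helm.traefik.io/traefik
+helm repo update
+helm install traefik traefik/traefik --namespace kube-system
+```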
-#### From source repository
+## Installing k3s
-To install a lightweight Kubernetes cluster for development purposes, run the
-following commands. It will install open-iscsi and K3s on your local system.
-Additionally, it will deploy Longhorn and the s3gw in the cluster.
+To install k3s, follow the installation instructions at
+[k3s.io](https://k3s.io/) or run:
+`curl -sfL https://get.k3s.io | sh -`.
-```shell
-cd ~/git/s3gw-tools/env
-./setup.sh
-```
+After installation, move the k3s.yaml file, usually located at
+`/etc/rancher/k3s/k3s.yaml`, to `~/.kube/config` so that tools such as `helm`
+and `kubectl` can use its certificates to access the cluster.
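+
+One way to do this (assuming a default k3s installation and that `~/.kube` does
+not already contain a config you want to keep):
+
+```shell
+mkdir -p ~/.kube
+sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
+sudo chown "$(id -u):$(id -g)" ~/.kube/config
+chmod 600 ~/.kube/config
+```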
-#### Access the Longhorn UI
+## Installing Longhorn
-The Longhorn UI can be access via the URL `http://longhorn.local`.
+**Important:** As part of the Longhorn installation, `open-iscsi` must be
+installed *before* running the Longhorn installation script. Ensure this is
+done before continuing.
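+
+On openSUSE, for example, the package can be installed and the iSCSI daemon
+started with the following (package and service names may differ on other
+distributions):
+
+```shell
+sudo zypper install open-iscsi
+sudo systemctl enable --now iscsid
+```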
-#### Access the S3 API
+You can install Longhorn via the Rancher Apps and Marketplace, with `kubectl`,
+or via a Helm chart. The instructions can be found
+[here](https://longhorn.io/docs/1.4.2/deploy/install/).
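+
+As a sketch, a helm-based installation might look like the following (the chart
+repository URL and namespace follow the Longhorn defaults; consult the linked
+instructions for the authoritative steps):
+
+```shell
+helm repo add longhorn https://charts.longhorn.io
+helm repo update
+helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
+```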
-The S3 API can be accessed via `http://s3gw.local`.
-
-We provide a [s3cmd](https://github.com/s3tools/s3cmd) configuration file to
-easily communicate with the S3 gateway in the k3s cluster.
+To check the progress of the Longhorn installation, run:
```shell
-cd ~/git/s3gw-tools/k3s
-s3cmd -c ./s3cmd.cfg mb s3://foo
-s3cmd -c ./s3cmd.cfg ls s3://
+kubectl get pods -w -n longhorn-system
```
-Adapt the `host_base` and `host_bucket` properties in the `s3cmd.cfg`
-configuration file if your K3s cluster is not accessible via localhost.
-
-#### Configure s3gw as Longhorn backup target
+### Access the Longhorn UI
-Use the following values in the Longhorn settings page to use the s3gw as backup
-target.
+Now that Longhorn is installed, you can access its UI at
+`http://longhorn.local`.
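+
+If the `longhorn.local` hostname does not resolve on your machine, you may need
+to map it in `/etc/hosts`, for example on a bare-metal setup:
+
+```text
+127.0.0.1 longhorn.local
+```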
-Backup Target: `s3://@us/` Backup Target Credential Secret:
-`s3gw-secret`
+You should see Longhorn running, with no volumes configured yet.
-### K3s on virtual machines
+## Installing cert-manager
-Follow this guide if you wish to run a K3s cluster installed on virtual
-machines. You will have a certain degree of choice in terms of customization
-options. If you are looking for a more lightweight environment running directly
-on bare metal, refer to [K3s on bare metal](#k3s-on-bare-metal).
+s3gw uses [cert-manager](https://cert-manager.io/) to create TLS certificates
+for the various ingresses and internal ClusterIP resources.
-#### Description
-
-The entire environment build process is automated by a set of Ansible playbooks.
-The cluster is created with one `admin` node and an arbitrary number of `worker`
-nodes. A single virtual machine acting as an `admin` node is also possible. In
-this case, it will be able to schedule pods as a `worker` node. Name topology of
-nodes is the following:
-
-```text
-admin-1
-worker-1
-worker-2
-...
-```
-
-#### Requirements
-
-Make sure you have installed the following applications on your system:
-
-- Vagrant
-- libvirt
-- Ansible
-
-Make sure you have installed the following Ansible modules:
-
-- kubernetes.core
-- community.docker.docker_image
-
-You can install them with:
-
-```bash
-$ ansible-galaxy collection install kubernetes.core
-...
-$ ansible-galaxy collection install community.docker
-...
-```
-
-#### Supported Vagrant boxes
-
-- opensuse/Leap-15.3.x86_64
-- generic/ubuntu[1604-2004]
-
-#### Building the environment
-
-You can build the environment with the `setup-vm.sh` script. The simplest form
-you can use is:
-
-```bash
-$ ./setup-vm.sh build
-Building environment ...
-```
+Install `cert-manager` as follows:
-This will trigger the build of a Kubernetes cluster formed by one node `admin`
-and one node `worker`. You can customize the build with the following
-environment variables:
-
-```text
-BOX_NAME : The Vagrant box image used in the cluster
- (default: opensuse/Leap-15.3.x86_64)
-VM_NET : The virtual machine subnet used in the cluster
-VM_NET_LAST_OCTET_START : Vagrant will increment this value when creating
- vm(s) and assigning an ip
-WORKER_COUNT : The number of Kubernetes node in the cluster
-ADMIN_MEM : The RAM amount used by the admin node (Vagrant
- format)
-ADMIN_CPU : The CPU amount used by the admin node (Vagrant
- format)
-ADMIN_DISK : yes/no, when yes a disk will be allocated for the
- admin node - this will be effective only for mono
- clusters
-ADMIN_DISK_SIZE : The disk size allocated for the admin node
- (Vagrant format) - this will be effective only for
- mono clusters
-WORKER_MEM : The RAM amount used by a worker node (Vagrant
- format)
-WORKER_CPU : The CPU amount used by a worker node (Vagrant
- format)
-WORKER_DISK : yes/no, when yes a disk will be allocated for the
- worker node
-WORKER_DISK_SIZE : The disk size allocated for a worker node (Vagrant
- format)
-CONTAINER_ENGINE : The host's local container engine used to build
- the s3gw container (podman/docker)
-STOP_AFTER_BOOTSTRAP : yes/no, when yes stop the provisioning just after
- the bootstrapping phase
-S3GW_IMAGE : The s3gw's container image used when deploying the
- application on k3s
-PROV_USER : The provisioning user used by Ansible (vagrant
- default)
-S3GW_UI_REPO : A GitHub repository to be used when building the
- s3gw-ui's image
-S3GW_UI_VERSION : A S3GW_UI_REPO's branch to be used
-SCENARIO : An optional scenario to be loaded in the cluster
-K3S_VERSION : The K3s version to be used (default: v1.23.6+k3s1)
-```
-
-For example, you could start a more specialized build with:
-
-```bash
-$ BOX_NAME=generic/ubuntu1804 WORKER_COUNT=4 ./setup-vm.sh build
-Building environment ...
-```
-
-Or create a mono virtual machine cluster with the lone `admin` node with:
-
-```bash
-$ WORKER_COUNT=0 ./setup-vm.sh build
-Building environment ...
-```
-
-In this case, the node will be able to schedule pods as a `worker` node.
-
-#### Destroying the environment
-
-You can destroy a previously built environment with:
-
-```bash
-$ ./setup-vm.sh destroy
-Destroying environment ...
-```
-
-Be sure to match the `WORKER_COUNT` value with the one you used in the build
-phase. Providing a lower value instead of the actual one will cause some
-allocated VM not to be released by Vagrant.
-
-#### Starting the environment
-
-You can start a previously built environment with:
-
-```bash
-$ ./setup-vm.sh start
-Starting environment ...
-```
-
-Be sure to match the `WORKER_COUNT` value with the one you used in the build
-phase. Providing a lower value instead of the actual one will cause some
-allocated VM not to start.
-
-#### Accessing the environment
-
-You can connect through `ssh` to all nodes in the cluster. To connect to the
-`admin` node run:
-
-```bash
-$ ./setup-vm.sh ssh admin
-Connecting to admin ...
-```
-
-To connect to a `worker` node run:
-
-```bash
-$ ./setup-vm.sh ssh worker-2
-Connecting to worker-2 ...
+```shell
+kubectl create namespace cert-manager
+helm repo add jetstack https://charts.jetstack.io
+helm repo update
+helm install cert-manager --namespace cert-manager jetstack/cert-manager \
+ --set installCRDs=true \
+ --set extraArgs[0]=--enable-certificate-owner-ref=true
```
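+
+You can check that the cert-manager pods are up before continuing:
+
+```shell
+kubectl get pods -n cert-manager
+```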
-
-When connecting to a worker node be sure to match the `WORKER_COUNT` value with
-the one you used in the build phase.
-
-[longhorn docs]:
- https://longhorn.io/docs/1.3.1/best-practices/#minimal-available-storage-and-over-provisioning
diff --git a/mkdocs.yml b/mkdocs.yml
index cace24cb..1f17f54e 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -17,9 +17,11 @@ nav:
- Introduction: "index.md"
- Quickstart: "quickstart.md"
- Roadmap: "roadmap.md"
- - "Installation & setup":
- - Using a Helm chart: "helm-charts.md"
- - K8s & K3s setup: "s3gw-with-k8s-k3s.md"
+ - "Installing the s3gw":
+ - Installing k8s for s3gw: "s3gw-with-k8s-k3s.md"
+ - Installing s3gw with helm charts: "helm-charts.md"
+ - "Configuring the s3gw":
+ - Configuration options: "config-s3gw.md"
- "Contributing":
- Contributing: "contributing.md"
- "Development manual":