
Commit

Use Pelorus Operator as first-class citizen for our docs (#767)
Update docs to ensure pelorus operator is covered.

Signed-off-by: Michal Pryc <mpryc@redhat.com>

mpryc authored Jan 10, 2023
1 parent 7ed0903 commit cf2d52c
Showing 10 changed files with 228 additions and 46 deletions.
2 changes: 2 additions & 0 deletions docs/Demo.md
@@ -1,5 +1,7 @@
# Pelorus Demo

To use Pelorus to monitor your application's workflow, skip this demo and jump straight to the [Installation](Install.md) and [Configuration](configuration2.md) parts of the documentation.

In this demo, you will

* Get a taste of how Pelorus captures a change going through the application's delivery cycle.
195 changes: 187 additions & 8 deletions docs/Install.md
@@ -1,23 +1,202 @@
# Installation

The following walks through the deployment of Pelorus. There are three methods to deploy it:

The first two use the Pelorus Operator, which is available by default in the [Red Hat-provided community-operators catalog sources](https://docs.openshift.com/container-platform/4.11/operators/understanding/olm-rh-catalogs.html#olm-rh-catalogs_olm-rh-catalogs). This catalog is distributed as part of OpenShift:

* OpenShift [Web console](#openshift-web-console)
* OpenShift [Command Line Tool](#openshift-command-line-tool)

(Deprecated) The Helm 3 charts method is used only for Pelorus development:

* [Helm charts](#helm-charts)

## Prerequisites

Before deploying Pelorus, the following are necessary:

* Access to an OpenShift 4.7 or higher cluster via the [Web console](#openshift-web-console) or [CLI](#openshift-command-line-tool).

* ***CLI*** access requires a machine from which to run the install with the [oc](https://docs.openshift.com/container-platform/4.8/cli_reference/openshift_cli/getting-started-cli.html#installing-openshift-cli) OpenShift CLI**\***

* ***Helm charts*** method requires additional tools on your system:
    * [git](https://git-scm.com/) CLI
    * [helm](https://helm.sh/) CLI 3 or higher**\***

>**Note:** It is possible to install `oc` and `helm` inside a Python virtual environment. To do so, change to the Pelorus directory (after cloning its repository), and run
>```
>make dev-env
>source .venv/bin/activate
>```
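After activation, a quick sanity check (a convenience, not a required step) confirms both CLIs are on the `PATH`:
```shell
# Versions will vary; any successful output means the tools are available.
oc version --client
helm version --short
```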
## OpenShift Web console
### Installing Operator
After logging in to the OpenShift Web console, select OperatorHub from the Operators menu and search for the Pelorus Operator by typing DORA, Dora metrics, or Pelorus in the search field:
![1_operator_install_step](img/1_operator_install_step.png)
Next, click Install in the top left corner:
![2_operator_install_step](img/2_operator_install_step.png)
Select the OpenShift namespace to which Pelorus will be installed, then click Install. We recommend using the default one, which is **pelorus**. This creates a Pelorus Subscription in the given namespace.
![3_operator_install_step](img/3_operator_install_step.png)
Verify that the Pelorus, Grafana, and Prometheus Operators installed successfully by checking them under the Installed Operators submenu:
![4_operator_install_step](img/4_operator_install_step.png)
### Creating Pelorus instance
>**Note:** Currently it is possible to create only one instance of Pelorus per cluster.
Click on the Pelorus Operator from the last step of [Installing Operator](#installing-operator), then click the "Create instance" link:
![5_operator_install_step](img/5_operator_install_step.png)
>**Note:** See the [Configuration Guide](Configuration.md) for more information on exporters and the [Configuration2 Guide](configuration2.md) to understand Pelorus core stack options before continuing.
Click on the `YAML view`, which opens a sample Pelorus object YAML. Adapt the exporter configuration under the `instances:` section to your application workflow, optionally add configuration for the Pelorus core stack under `spec:`, and click the Create button:
![6_operator_install_step](img/6_operator_install_step.png)
Verify the Pelorus application deployment by selecting the Pelorus tab in the Pelorus Operator view and ensuring the application has `Deployed` status. You can find more information about the Pelorus application deployment by clicking on the application name, shown as `pelorus-sample` in the following picture:
![7_operator_install_step](img/7_operator_install_step.png)
## OpenShift Command Line Tool
### Installing Operator
Installing Pelorus using the CLI is equivalent to the [OpenShift Web console](#openshift-web-console) method.
Create the pelorus namespace. You can use a different namespace for Pelorus, but remember to adapt the other installation and configuration steps to use that namespace. We recommend using the default one, which is **pelorus**.
```shell
$ oc create namespace pelorus
```
Create an [OperatorGroup](https://docs.openshift.com/container-platform/4.11/operators/understanding/olm/olm-understanding-operatorgroups.html) object YAML file that uses the pelorus namespace:
```shell
$ cat > pelorus-operatorgroup.yaml <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: pelorus-operatorgroup
  namespace: pelorus
spec:
  targetNamespaces:
  - pelorus
EOF
```
And apply it:
```shell
$ oc apply -f pelorus-operatorgroup.yaml
```
Verify the OperatorGroup (og) has been successfully created:
```shell
$ oc get og -n pelorus
NAME                    AGE
pelorus-operatorgroup   0h1m
```
Create a Pelorus Operator Subscription object YAML file that uses the pelorus namespace:
```shell
$ cat > pelorus-operator-subscription.yaml <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: pelorus-operator
  namespace: pelorus
spec:
  channel: alpha
  name: pelorus-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
```
And apply it:
```shell
$ oc apply -f pelorus-operator-subscription.yaml
```
Verify the pelorus-operator Subscription (sub) has been successfully created:
```
$ oc get sub pelorus-operator -n pelorus
NAME               PACKAGE            SOURCE                CHANNEL
pelorus-operator   pelorus-operator   community-operators   alpha
```
Verify the ClusterServiceVersions (csv) for the Pelorus, Grafana, and Prometheus Operators were successfully created:
```shell
$ oc get csv -n pelorus
NAME                        DISPLAY               VERSION   REPLACES                    PHASE
grafana-operator.v4.6.0     Grafana Operator      4.6.0     grafana-operator.v4.5.1     Succeeded
pelorus-operator.v0.0.1     Pelorus Operator      0.0.1                                 Succeeded
prometheusoperator.0.47.0   Prometheus Operator   0.47.0    prometheusoperator.0.37.0   Succeeded
```
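If any PHASE still shows `Installing`, you can watch until all three reach `Succeeded` (an optional convenience):
```shell
# -w streams updates; press Ctrl+C once every CSV reports Succeeded.
oc get csv -n pelorus -w
```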
### Creating Pelorus instance
>**Note:** Currently it is possible to create only one instance of Pelorus per cluster.
The Pelorus object YAML file requires exporter configuration specific to your application workflow, placed under the `instances:` section, and optionally additional configuration for the Pelorus core stack under `spec:`.
See the [Configuration Guide](Configuration.md) for more information on exporters and the [Configuration2 Guide](configuration2.md) to understand Pelorus core stack options.
*(Example)* Create a Pelorus configuration object YAML file. This example enables two exporters, `committime-exporter` and `deploytime-exporter`, without Prometheus persistent storage:
```shell
$ cat > pelorus-sample-instance.yaml <<EOF
kind: Pelorus
apiVersion: charts.pelorus.konveyor.io/v1alpha1
metadata:
  name: pelorus-sample
  namespace: pelorus
spec:
  exporters:
    global: {}
    instances:
    - app_name: deploytime-exporter
      exporter_type: deploytime
    - app_name: failuretime-exporter
      enabled: false
      env_from_configmaps:
      - pelorus-config
      - failuretime-config
      env_from_secrets:
      - jira-secret
      exporter_type: failure
    - app_name: committime-exporter
      exporter_type: committime
  extra_prometheus_hosts: null
  openshift_prometheus_basic_auth_pass: changeme
  openshift_prometheus_htpasswd_auth: 'internal:{SHA}+pvrmeQCmtWmYVOZ57uuITVghrM='
  prometheus_retention: 1y
  prometheus_retention_size: 1GB
  prometheus_storage: false
  prometheus_storage_pvc_capacity: 2Gi
  prometheus_storage_pvc_storageclass: gp2
EOF
```
Once the Pelorus configuration file is created, e.g. `pelorus-sample-instance.yaml`, apply it using:
```shell
$ oc apply -f pelorus-sample-instance.yaml
```
Verify the pelorus-sample has been successfully created:
```
$ oc get pelorus -n pelorus
NAME             AGE
pelorus-sample   31s
```
A more verbose command lists Pelorus objects with their statuses; a successful deployment reports the `InstallSuccessful` reason on the `Deployed` condition:
```
$ oc get pelorus -n pelorus -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions}{"\n"}{end}'
pelorus-sample [{"lastTransitionTime":"2023-01-10T14:12:56Z","status":"True","type":"Initialized"},{"lastTransitionTime":"2023-01-10T14:13:00Z","reason":"InstallSuccessful","status":"True","type":"Deployed"}]
```
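Alternatively, you can block until the `Deployed` condition shown above is reported (a sketch; adjust the timeout to your environment):
```shell
# Waits for the Pelorus CR status condition of type Deployed to become True.
oc wait pelorus/pelorus-sample -n pelorus --for=condition=Deployed --timeout=300s
```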
## Helm Charts
It is possible to install Pelorus from the source code using Helm charts. To do so, you will need the tools listed in the [Prerequisites](#prerequisites) section.
Pelorus is installed via three Helm charts: the first deploys the operators on which Pelorus depends, the second deploys the core Pelorus stack, and the third deploys the exporters that gather the data. By default, the instructions below install into a namespace called `pelorus`, but you can choose any name you wish.
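A sketch of that flow, assuming the chart paths used by the repository (the steps below are authoritative):
```shell
# Hypothetical condensed sequence; let the operators chart settle before installing the core chart.
helm install operators charts/operators --namespace pelorus --create-namespace
helm install pelorus charts/pelorus --namespace pelorus
```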
### Initial Deployment
To begin, clone the Pelorus repository. To do so, you can run
```shell
# Assuming the upstream repository location; substitute your fork if you have one.
git clone https://github.com/konveyor/pelorus.git
cd pelorus
```

@@ -101,7 +280,7 @@

You may also want to enable other features for the core stack. See the [Configuration Guide](Configuration.md).
To understand how to set up an application for Pelorus to watch, see the [QuickStart tutorial](Demo.md).
### Uninstalling
Cleaning up Pelorus is very simple. Just run

@@ -113,4 +292,4 @@

```shell
helm uninstall pelorus --namespace pelorus
helm uninstall operators --namespace pelorus
```
If Pelorus was deployed with PVCs, you may want to delete them, because helm uninstall will not remove PVCs. To delete them, run
```
oc delete pvc --namespace pelorus $(oc get pvc --namespace pelorus -o name)
```
77 changes: 39 additions & 38 deletions docs/configuration2.md
@@ -1,12 +1,12 @@
# Customizing Pelorus

See [Configuring the Pelorus Stack](Configuration.md) for a full readout of all possible configuration items. The following sections describe the most common supported customizations that can be made to the Pelorus configuration object YAML file.

## Configure Prometheus Retention

For detailed information about planning Prometheus storage capacity and configuration options please refer to the [operational aspects](https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects) of the Prometheus documentation.

Prometheus removes data older than 1 year, so if the metric you are interested in is older than 1 year, it won't be visible. This is configurable in the Pelorus configuration object YAML file with the following option:
```yaml
prometheus_retention: 1y
```
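Retention can also be capped by size; both limits may be set together, and Prometheus drops data when either is reached first (values here mirror the sample instance above):
```yaml
prometheus_retention: 1y
prometheus_retention_size: 1GB
```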
@@ -22,26 +22,25 @@

Unlike ephemeral volumes, which have the lifetime of a pod, persistent volumes allow data to survive pod restarts.

It is recommended to use Prometheus Persistent Volume together with the [Long Term Storage](#configure-long-term-storage-recommended).

To install Pelorus with a PVC that uses the default `gp2` StorageClass and the default `2Gi` capacity, ensure `prometheus_storage` is set to `true` in the Pelorus configuration object YAML file:
```yaml
prometheus_storage: true
# prometheus_storage_pvc_capacity: "<PVC requested volume capacity>" # Optional, default 2Gi
# prometheus_storage_pvc_storageclass: "<your storage class name>" # Optional, default "gp2"
```

To ensure the PVCs were properly created and Bound to PVs, run after the Pelorus instance creation:
```shell
$ oc get pvc --namespace pelorus
NAME                                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-prometheus-pelorus-db-prometheus-prometheus-pelorus-0   Bound    pvc-fe8ac17c-bd23-47da-9c72-057349a59209   2Gi        RWO            gp2            10s
prometheus-prometheus-pelorus-db-prometheus-prometheus-pelorus-1   Bound    pvc-3f815e71-1121-4f39-8174-0243071f281c   2Gi        RWO            gp2            10s
```

## Configure Long Term Storage (Recommended)

The Pelorus chart supports deploying a [Thanos](https://thanos.io/) instance for long term storage. It can use any S3 bucket provider. The following is an example of configuring a `pelorus-sample-instance.yaml` file for [NooBaa](https://www.noobaa.io/) with the local s3 service name.
```yaml
bucket_access_point: s3.pelorus.svc:443
bucket_access_key: <your access key>
bucket_secret_access_key: <your secret access key>
```

@@ -53,10 +52,7 @@

The default bucket name is thanos. It can be overridden by specifying the following value:
```yaml
thanos_bucket_name: <bucket name here>
```

Then deploy Pelorus as described in the [Installation](Install.md) doc.

If you don't have an object storage provider, we recommend NooBaa as a free, open source option. You can follow our [NooBaa quickstart](Noobaa.md) to host an instance on OpenShift and configure Pelorus to use it.

@@ -66,28 +62,33 @@

By default, Pelorus will pull in data from the cluster in which it is running, but it also supports collecting data across multiple clusters.

## Configure Production Cluster

A production configuration example of the Pelorus configuration object YAML file `pelorus-sample-instance.yaml`, which uses an AWS S3 bucket for long term storage and an AWS EBS-backed volume for Prometheus:
```yaml
kind: Pelorus
apiVersion: charts.pelorus.konveyor.io/v1alpha1
metadata:
  name: pelorus-production
  namespace: pelorus
spec:
  exporters:
    global: {}
    instances:
    - app_name: deploytime-exporter
      exporter_type: deploytime
    - app_name: failuretime-exporter
      exporter_type: failure
    - app_name: committime-exporter
      exporter_type: committime
  extra_prometheus_hosts: null
  openshift_prometheus_basic_auth_pass: changeme
  openshift_prometheus_htpasswd_auth: 'internal:{SHA}+pvrmeQCmtWmYVOZ57uuITVghrM='
  prometheus_retention: 1y
  prometheus_retention_size: 1GB
  prometheus_storage: true
  prometheus_storage_pvc_capacity: 20Gi
  prometheus_storage_pvc_storageclass: gp2
  thanos_bucket_name: <bucket name here>
  bucket_access_point: s3.us-east-2.amazonaws.com
  bucket_access_key: <your access key>
  bucket_secret_access_key: <your secret access key>
```
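Apply and verify it the same way as the sample instance (file name taken from the example above):
```shell
oc apply -f pelorus-sample-instance.yaml
oc get pelorus -n pelorus
```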
Binary file added docs/img/1_operator_install_step.png
Binary file added docs/img/2_operator_install_step.png
Binary file added docs/img/3_operator_install_step.png
Binary file added docs/img/4_operator_install_step.png
Binary file added docs/img/5_operator_install_step.png
Binary file added docs/img/6_operator_install_step.png
Binary file added docs/img/7_operator_install_step.png
