Commit ee98ed7

update: docs, fix lint

codekow committed Nov 12, 2023
1 parent 4f9ef58 commit ee98ed7
Showing 4 changed files with 22 additions and 8 deletions.
16 changes: 14 additions & 2 deletions README.md
@@ -43,13 +43,25 @@ NOTE: `bash`, `git`, and `oc` are available in the [OpenShift Web Terminal](https://docs.openshift.com/container-platform/4.12/web_console/web_terminal/installing-web-terminal.html)
## Bootstrapping a Cluster

1. Verify you are logged into your cluster using `oc`.
1. Clone this repository to your local environment or the [OpenShift Web Terminal](https://docs.openshift.com/container-platform/4.12/web_console/web_terminal/installing-web-terminal.html).
1. Clone this repository

To a local environment

```sh
oc whoami
git clone < repo url >
```
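
For example, assuming this catalog's GitHub URL (inferred from the raw script URL used below):

```sh
oc whoami
git clone https://github.com/codekow/demo-ai-gitops-catalog.git
cd demo-ai-gitops-catalog
```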

Use an [OpenShift Web Terminal](https://docs.openshift.com/container-platform/4.12/web_console/web_terminal/installing-web-terminal.html)

```sh
YOLO_URL=https://raw.githubusercontent.com/codekow/demo-ai-gitops-catalog/main/scripts/library/term.sh
. <(curl -s "${YOLO_URL}")
term_init
```

NOTE: open a new terminal to activate the new configuration

### Cluster Quick Start for OpenShift GitOps

Basic cluster config
@@ -63,7 +75,7 @@ Basic cluster config

```sh
apply_firmly bootstrap/web-terminal

# setup a default cluster w/o argocd managing it
apply_firmly cluster/default
apply_firmly clusters/default
```
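
`apply_firmly` comes from this repository's scripts library. As a rough mental model only — not the repository's actual implementation — it is assumed to behave like a retry wrapper around `oc apply -k`:

```sh
# Hypothetical sketch of apply_firmly: re-apply a kustomize overlay
# until it succeeds, since first attempts can fail while CRDs from
# freshly installed operators are still registering.
apply_firmly(){
  if [ ! -f "${1}/kustomization.yaml" ]; then
    echo "Error: no kustomization.yaml found in ${1}"
    return 1
  fi
  until oc apply -k "${1}"; do
    echo "Retrying: oc apply -k ${1}"
    sleep 10
  done
}
```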

Setup a demo
6 changes: 3 additions & 3 deletions components/configs/kustomized/oauth-proxy/base/deployment.yaml
@@ -7,15 +7,15 @@ metadata:
spec:
replicas: 1
selector:
matchLabels:
matchLabels:
name: oauth-proxy
template:
metadata:
labels:
name: oauth-proxy
spec:
containers:
- name: oauth-proxy
- name: oauth-proxy
env:
- name: UPSTREAM
value: http://httpd:8080
@@ -40,7 +40,7 @@ spec:
imagePullPolicy: IfNotPresent
ports:
- name: oauth-proxy
containerPort: 8888
containerPort: 8888
protocol: TCP
volumeMounts:
# - mountPath: /etc/tls/private
4 changes: 2 additions & 2 deletions components/configs/kustomized/oauth-proxy/base/sa.yaml
@@ -11,8 +11,8 @@ metadata:
# kind: ClusterRoleBinding
# metadata:
# # Without this role your oauth-proxy will output
# # Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden:
# # User "system:serviceaccount:reverse-words:reversewords" cannot create resource "tokenreviews" in API
# # Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden:
# # User "system:serviceaccount:reverse-words:reversewords" cannot create resource "tokenreviews" in API
# # group "authentication.k8s.io" at the cluster scope
# name: oauth-proxy-tokenreviews
# roleRef:
4 changes: 3 additions & 1 deletion docs/ARGOCD.md
@@ -20,11 +20,12 @@ Next, clone this repository to your local environment.
This repository deploys sealed-secrets and requires a sealed-secrets master key to bootstrap. If you plan to reuse sealed secrets created with another key, you must obtain that key from the person who created them.

The sealed secret(s) for bootstrap should be located at:

```sh
bootstrap/sealed-secrets-secret.yaml
```

If you do not plan to utilize existing sealed secrets you can instead bootstrap a new sealed-secrets controller and obtain a new secret.
If you do not plan to utilize existing sealed secrets you can instead bootstrap a new sealed-secrets controller and obtain a new secret.

`bootstrap.sh` can also be used to create the file if it doesn't already exist.
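
If you need to capture an existing master key yourself, the following is a common pattern for the Bitnami sealed-secrets controller; the `kube-system` namespace is an assumption, so adjust `-n` to wherever the controller runs in your cluster:

```sh
# Export the active sealed-secrets master key for reuse at bootstrap
# (namespace is an assumption; adjust to your controller's namespace)
oc -n kube-system get secret \
  -l sealedsecrets.bitnami.com/sealed-secrets-key=active \
  -o yaml > bootstrap/sealed-secrets-secret.yaml
```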

@@ -126,6 +127,7 @@ Explanation:
Argo uses a `Health Check` to determine whether an object applied to the cluster has succeeded, failed, or is still progressing. The health check for the `Subscription` object looks at the `Condition` field of the `Subscription`, which is updated by `OLM`. Once the `Subscription` is applied to the cluster, `OLM` creates several other objects in order to install the Operator, and once the Operator has been installed it reports the status back to the `Subscription` object. This reconciliation may take several minutes even after the Operator has installed successfully; a quick CLI check is sketched after the troubleshooting list below.

Resolution/Troubleshooting:

- Validate that the Operator has successfully installed via the `Installed Operators` section of the OpenShift Web Console.
- If the Operator has not installed, additional troubleshooting is required.
- If the Operator has successfully installed, feel free to ignore the `Progressing` state and proceed. `OLM` should reconcile the status after several minutes and Argo will update the state to `Healthy`.
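
As a quick check of what `OLM` has reported, the `Subscription` status can be read directly; the namespace and subscription name below are placeholders:

```sh
# Print the OLM-reported state of a Subscription.
# "AtLatestKnown" means the install has fully reconciled.
oc get subscription.operators.coreos.com <subscription-name> -n <namespace> \
  -o jsonpath='{.status.state}{"\n"}'
```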
