diff --git a/Makefile b/Makefile
index 02ed08b5..24811a83 100644
--- a/Makefile
+++ b/Makefile
@@ -145,7 +145,7 @@ helm-lint: ## Lint Helm chart.
.PHONY: helm-template
helm-template: ## Run Helm template.
- helm -n druid-operator-system template --create-namespace ${NAMESPACE_DRUID_OPERATOR} ./chart --debug
+ helm -n druid-operator template --create-namespace ${NAMESPACE_DRUID_OPERATOR} ./chart --debug
##@ Build Dependencies
diff --git a/README.md b/README.md
index 0c4c6546..477ec6bd 100644
--- a/README.md
+++ b/README.md
@@ -6,17 +6,22 @@
Kubernetes Operator For Apache Druid
-**This is the official [druid-operator](https://github.com/druid-io/druid-operator) project, now maintained by [Maintainers.md](./MAINTAINERS.md).
+**This is the official [druid-operator](https://github.com/druid-io/druid-operator) project, now maintained by the maintainers listed in [MAINTAINERS.md](./MAINTAINERS.md).
[druid-operator](https://github.com/druid-io/druid-operator) is deprecated. Refer to [issue](https://github.com/druid-io/druid-operator/issues/329) and [PR](https://github.com/druid-io/druid-operator/pull/336). Feel free to open issues and PRs! Collaborators are welcome!**
![Build Status](https://github.com/datainfrahq/druid-operator/actions/workflows/docker-image.yml/badge.svg) ![Docker pull](https://img.shields.io/docker/pulls/datainfrahq/druid-operator.svg) [![Latest Version](https://img.shields.io/github/tag/datainfrahq/druid-operator)](https://github.com/datainfrahq/druid-operator/releases) [![Slack](https://img.shields.io/badge/slack-brightgreen.svg?logo=slack&label=Community&style=flat&color=%2373DC8C&)](https://kubernetes.slack.com/archives/C04F4M6HT2L)
+
-
-
- Druid Operator provisions and manages [Apache Druid](https://druid.apache.org/) cluster on kubernetes. Druid Operator is designed to provision and manage [Apache Druid](https://druid.apache.org/) in distributed mode only. It is built using the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). Language used is GoLang. Druid Operator is available on [operatorhub.io](https://operatorhub.io/operator/druid-operator) Refer to [Documentation](./docs/README.md) for getting started. Join Kubernetes slack and join [druid-operator](https://kubernetes.slack.com/archives/C04F4M6HT2L)
+Druid Operator provisions and manages [Apache Druid](https://druid.apache.org/) clusters on Kubernetes.
+Druid Operator is designed to provision and manage [Apache Druid](https://druid.apache.org/) in distributed mode only.
+It is built in Golang using [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder).
+Druid Operator is available on [operatorhub.io](https://operatorhub.io/operator/druid-operator).
+Refer to the [Documentation](./docs/README.md) for getting started.
+
+Feel free to join the Kubernetes Slack and the [druid-operator](https://kubernetes.slack.com/archives/C04F4M6HT2L) channel.
### Talks and Blogs on Druid Operator
@@ -36,13 +41,10 @@
### Notifications
-- The project moved to Kubebuilder v3 which requires a [manual change](docs/kubebuilder_v3_migration.md) in the operator.
-- Users may experience HPA issues with druid-operator with release 0.0.5, as described in the [issue](https://github.com/druid-io/druid-operator/issues/160).
-- The latest release 0.0.6 has fixes for the above issue.
+- The project moved to Kubebuilder v3, which requires a [manual change](docs/kubebuilder_v3_migration.md) in the operator.
+- Users are encouraged to use operator version 0.0.9+.
- The operator has moved the HPA apiVersion from autoscaling/v2beta1 to autoscaling/v2beta2. Users will need to update their HPA specs according to the v2beta2 API in order to work with the latest druid-operator release.
-- Users may experience pvc deletion [issue](https://github.com/druid-io/druid-operator/issues/186) in release 0.0.6, this issue has been fixed in patch release 0.0.6.1.
- druid-operator has moved the Ingress apiVersion from networking/v1beta1 to networking/v1. Users will need to update their Ingress spec in the druid CR according to networking/v1 syntax (see the sketch below). In case users are using a schema-validated CRD, the CRD will also need to be updated.
-- druid-operator has moved PodDisruptionBudget apiVersion policy/v1beta1 to policy/v1. Users will need to update there Kubernetes versions to 1.21+ to use druid-operator tag 0.0.9+.
- The latest release for druid-operator is v1.0.0; this release is compatible with k8s version 1.25. The HPA API is kept at version v2beta2.
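+
+A hedged, illustrative sketch of an ingress rule in networking/v1 syntax (the host, service name, and port are made-up examples, not values from this repo):
+
+```yaml
+ingress:
+  rules:
+    - host: druid.example.com          # illustrative host
+      http:
+        paths:
+          - path: /
+            pathType: Prefix           # pathType is required in networking/v1
+            backend:
+              service:                 # networking/v1 nests the service name/port
+                name: druid-router
+                port:
+                  number: 8088
+```
+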
### Kubernetes version compatibility
@@ -56,7 +58,7 @@
### Contributors
-
+
### Note
Apache®, [Apache Druid, Druid®](https://druid.apache.org/) are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. This project, druid-operator, is not an Apache Software Foundation project.
diff --git a/chart/templates/deployment.yaml b/chart/templates/deployment.yaml
index a0adb0b1..96b6f54e 100644
--- a/chart/templates/deployment.yaml
+++ b/chart/templates/deployment.yaml
@@ -68,8 +68,7 @@ spec:
- ALL
- args:
- --health-probe-bind-address=:8081
- - --metrics-bind-address=127.0.0.1:8080
- - --leader-elect
+ - -enable-leader-election
env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
diff --git a/chart/values.yaml b/chart/values.yaml
index 5a32a04d..e3e589e1 100644
--- a/chart/values.yaml
+++ b/chart/values.yaml
@@ -4,8 +4,8 @@
env:
DENY_LIST: "default,kube-system" # Comma-separated list of namespaces to ignore
- RECONCILE_WAIT: "10s" # Reconciliation delay
- WATCH_NAMESPACE: "" # Namespace to watch or empty string to watch all namespaces, To watch multiple namespaces add , into string. Ex: WATCH_NAMESPACE: "ns1,ns2,ns3"
+ RECONCILE_WAIT: "10s" # Reconciliation delay
+ WATCH_NAMESPACE: ""  # Namespace to watch, or empty string to watch all namespaces. To watch multiple namespaces, separate them with commas, e.g. WATCH_NAMESPACE: "ns1,ns2,ns3"
#MAX_CONCURRENT_RECONCILES: "" # MaxConcurrentReconciles is the maximum number of concurrent Reconciles which can be run.
replicaCount: 1
@@ -46,6 +46,9 @@ podAnnotations: {}
podSecurityContext:
runAsNonRoot: true
+ fsGroup: 1000
+ runAsUser: 1000
+ runAsGroup: 1000
securityContext:
allowPrivilegeEscalation: false
diff --git a/config/default/kustomization.yaml b/config/default/kustomization.yaml
index dac66ea2..03804210 100644
--- a/config/default/kustomization.yaml
+++ b/config/default/kustomization.yaml
@@ -1,5 +1,5 @@
# Adds namespace to all resources.
-namespace: druid-operator-system
+namespace: druid-operator
# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
@@ -13,9 +13,9 @@ namePrefix: druid-operator-
# someName: someValue
bases:
-- ../crd
-- ../rbac
-- ../manager
+ - ../crd
+ - ../rbac
+ - ../manager
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- ../webhook
@@ -25,12 +25,10 @@ bases:
#- ../prometheus
patchesStrategicMerge:
-# Protect the /metrics endpoint by putting it behind auth.
-# If you want your controller-manager to expose the /metrics
-# endpoint w/o any authn/z, please comment the following line.
-- manager_auth_proxy_patch.yaml
-
-
+ # Protect the /metrics endpoint by putting it behind auth.
+ # If you want your controller-manager to expose the /metrics
+ # endpoint w/o any authn/z, please comment the following line.
+ - manager_auth_proxy_patch.yaml
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
diff --git a/config/default/manager_auth_proxy_patch.yaml b/config/default/manager_auth_proxy_patch.yaml
index b7512661..87c967eb 100644
--- a/config/default/manager_auth_proxy_patch.yaml
+++ b/config/default/manager_auth_proxy_patch.yaml
@@ -13,43 +13,42 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- - key: kubernetes.io/arch
- operator: In
- values:
- - amd64
- - arm64
- - ppc64le
- - s390x
- - key: kubernetes.io/os
- operator: In
- values:
- - linux
+ - key: kubernetes.io/arch
+ operator: In
+ values:
+ - amd64
+ - arm64
+ - ppc64le
+ - s390x
+ - key: kubernetes.io/os
+ operator: In
+ values:
+ - linux
containers:
- - name: kube-rbac-proxy
- securityContext:
- allowPrivilegeEscalation: false
- capabilities:
- drop:
- - "ALL"
- image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
- args:
- - "--secure-listen-address=0.0.0.0:8443"
- - "--upstream=http://127.0.0.1:8080/"
- - "--logtostderr=true"
- - "--v=0"
- ports:
- - containerPort: 8443
- protocol: TCP
- name: https
- resources:
- limits:
- cpu: 500m
- memory: 128Mi
- requests:
- cpu: 5m
- memory: 64Mi
- - name: manager
- args:
- - "--health-probe-bind-address=:8081"
- - "--metrics-bind-address=127.0.0.1:8080"
- - "--leader-elect"
+ - name: kube-rbac-proxy
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - "ALL"
+ image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
+ args:
+ - "--secure-listen-address=0.0.0.0:8443"
+ - "--upstream=http://127.0.0.1:8080/"
+ - "--logtostderr=true"
+ - "--v=0"
+ ports:
+ - containerPort: 8443
+ protocol: TCP
+ name: https
+ resources:
+ limits:
+ cpu: 500m
+ memory: 128Mi
+ requests:
+ cpu: 5m
+ memory: 64Mi
+ - name: manager
+ args:
+ - "--health-probe-bind-address=:8081"
+ - "-enable-leader-election"
diff --git a/config/manager/manager.yaml b/config/manager/manager.yaml
index cb198dac..08015600 100644
--- a/config/manager/manager.yaml
+++ b/config/manager/manager.yaml
@@ -58,6 +58,9 @@ spec:
# - linux
securityContext:
runAsNonRoot: true
+ fsGroup: 1000
+ runAsUser: 1000
+ runAsGroup: 1000
# TODO(user): For common cases that do not require escalating privileges
# it is recommended to ensure that all your Pods/Containers are restrictive.
# More info: https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
@@ -66,37 +69,37 @@ spec:
# seccompProfile:
# type: RuntimeDefault
containers:
- - command:
- - /manager
- args:
- - --leader-elect
- image: controller:latest
- name: manager
- securityContext:
- allowPrivilegeEscalation: false
- capabilities:
- drop:
- - "ALL"
- livenessProbe:
- httpGet:
- path: /healthz
- port: 8081
- initialDelaySeconds: 15
- periodSeconds: 20
- readinessProbe:
- httpGet:
- path: /readyz
- port: 8081
- initialDelaySeconds: 5
- periodSeconds: 10
- # TODO(user): Configure the resources accordingly based on the project requirements.
- # More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- resources:
- limits:
- cpu: 500m
- memory: 128Mi
- requests:
- cpu: 10m
- memory: 64Mi
+ - command:
+ - /manager
+ args:
+ - -enable-leader-election
+ image: controller:latest
+ name: manager
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - "ALL"
+ livenessProbe:
+ httpGet:
+ path: /healthz
+ port: 8081
+ initialDelaySeconds: 15
+ periodSeconds: 20
+ readinessProbe:
+ httpGet:
+ path: /readyz
+ port: 8081
+ initialDelaySeconds: 5
+ periodSeconds: 10
+ # TODO(user): Configure the resources accordingly based on the project requirements.
+ # More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+ resources:
+ limits:
+ cpu: 500m
+ memory: 128Mi
+ requests:
+ cpu: 10m
+ memory: 64Mi
serviceAccountName: controller-manager
terminationGracePeriodSeconds: 10
diff --git a/docs/dev_doc.md b/docs/dev_doc.md
index 9076daed..f5c0e5e0 100644
--- a/docs/dev_doc.md
+++ b/docs/dev_doc.md
@@ -1,7 +1,7 @@
## Dev Dependencies
-- Golang 1.19+
-- Kubebuilder 2.3.1+
+- Golang 1.20+
+- Kubebuilder v3
## Running Operator Locally
diff --git a/docs/druid_cr.md b/docs/druid_cr.md
index 91014549..176e81a1 100644
--- a/docs/druid_cr.md
+++ b/docs/druid_cr.md
@@ -5,7 +5,6 @@
- For full details on spec refer to ```pkg/apis/druid/v1alpha1/druid_types.go```
- The operator supports both deployments and statefulsets for druid nodes. ```kind``` can be set in each druid NodeSpec to ```Deployment``` / ```StatefulSet```.
- ```NOTE: The default behavior shall provision all the nodes as statefulsets.```
-
- The following are cluster scoped and common to all the druid nodes.
```yaml
@@ -46,13 +45,13 @@ spec:
common.runtime.properties: |
```
- - The following are specific to a node.
+- The following are specific to a node.
```yaml
nodes:
# String value, can be anything to define a node name.
brokers:
- # nodeType can be broker,historical, middleManager, indexer, router, coordinator and overlord.
+ # nodeType can be broker, historical, middleManager, indexer, router, coordinator and overlord.
# Required Key
nodeType: "broker"
# Optionally specify for broker nodes
@@ -67,4 +66,5 @@ spec:
# Runtime Properties for the node
# Required Key
runtime.properties: |
+ ...
```
diff --git a/docs/features.md b/docs/features.md
index d1002367..6b4ca193 100644
--- a/docs/features.md
+++ b/docs/features.md
@@ -1,69 +1,80 @@
# Features
-* [Deny List in Operator](#Deny-List-in-Operator)
-* [Reconcile Time in Operator](#Reconcile-Time-in-Operator)
-* [Finalizer in Druid CR](#Finalizer-in-Druid-CR)
-* [Deletetion of Orphan PVC's](#Deletetion-of-Orphan-PVC's)
-* [Rolling Deploy](#Rolling-Deploy)
-* [Force Delete of Sts Pods](#Force-Delete-of-Sts-Pods)
-* [Scaling of Druid Nodes](#Scaling-of-Druid-Nodes)
-* [Volume Expansion of Druid Nodes Running As StatefulSets](#Scaling-of-Druid-Nodes)
-* [Add Additional Containers in Druid Nodes](#Add-Additional-Containers-in-Druid-Nodes)
-
+- [Features](#features)
+ - [Deny List in Operator](#deny-list-in-operator)
+ - [Reconcile Time in Operator](#reconcile-time-in-operator)
+ - [Finalizer in Druid CR](#finalizer-in-druid-cr)
+  - [Deletion of Orphan PVCs](#deletion-of-orphan-pvcs)
+ - [Rolling Deploy](#rolling-deploy)
+ - [Force Delete of Sts Pods](#force-delete-of-sts-pods)
+ - [Scaling of Druid Nodes](#scaling-of-druid-nodes)
+ - [Volume Expansion of Druid Nodes Running As StatefulSets](#volume-expansion-of-druid-nodes-running-as-statefulsets)
+ - [Add Additional Containers in Druid Nodes](#add-additional-containers-in-druid-nodes)
## Deny List in Operator
+
- There may be use cases where we want the operator to watch all namespaces but exclude a few, for reasons such as security or testing flexibility.
- The druid operator supports such cases. In ```deploy/operator.yaml```, the user can enable the ```DENY_LIST``` env and pass the namespaces to be excluded.
- Namespaces should be separated by commas, as in the sketch below.
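+
+A minimal sketch of the env entry, in the same env-map format as ```chart/values.yaml```:
+
+```yaml
+env:
+  DENY_LIST: "default,kube-system"  # comma-separated namespaces the operator will ignore
+```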
## Reconcile Time in Operator
+
- As per the operator pattern, the druid operator reconciles every 10s (the default reconcile time) to make sure the desired state (the druid CR) is in sync with the current state.
- In case the user wants to adjust the reconcile time, it can be done via an env variable in ```deploy/operator.yaml```: enable the ```RECONCILE_WAIT``` env and pass a value suffixed with ```s``` (example: 30s), as sketched below. The default time is 10s.
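+
+A minimal sketch, again assuming the env-map format from ```chart/values.yaml```:
+
+```yaml
+env:
+  RECONCILE_WAIT: "30s"  # reconcile every 30s instead of the 10s default
+```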
## Finalizer in Druid CR
+
- Druid Operator supports provisioning of sts as well as deployments. When an sts is created, a pvc is created along with it. When the druid CR is deleted, the sts controller does not delete the pvc's associated with the sts.
-- In case user does care about pvc data and wishes to reclaim it, user can enable ```DisablePVCDeletionFinalizer: true``` in druid CR.
+- In case the user cares about the pvc data and wishes to reclaim it, they can set ```DisablePVCDeletionFinalizer: true``` in the druid CR, as sketched below.
- The default behavior triggers finalizers and pre-delete hooks, which first clean up the sts and then the pvc's referenced by it.
- The default behavior is set to true, i.e. after deletion of the CR, any pvc's provisioned by the sts shall be deleted.
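+
+A hedged sketch of the CR (assuming the flag takes the usual lowerCamelCase json form at the top level of the cluster spec; check ```druid_types.go``` for the exact casing):
+
+```yaml
+apiVersion: druid.apache.org/v1alpha1
+kind: Druid
+metadata:
+  name: my-cluster                    # illustrative name
+spec:
+  disablePVCDeletionFinalizer: true   # keep pvc's after the CR is deleted
+```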
## Deletion of Orphan PVCs
-- Assume ingestion is kicked off on druid, the sts MiddleManagers nodes are scaled to a certain number of replicas, and when the ingestion is completed. The middlemanagers are scaled down to avoid costs etc.
+
+- Assume ingestion is kicked off on druid and the sts MiddleManager nodes are scaled up to a certain number of replicas; when the ingestion is completed, the MiddleManagers are scaled down to avoid costs.
- On scale down, the sts just terminates the pods it owns, not the PVCs. The PVCs are left orphaned and are of little or no use.
-- In such cases druid-operator supports deletion of pvc orphaned by the sts.
+- In such cases druid-operator supports deletion of the pvc's orphaned by the sts.
- To enable this feature, users need to add the flag ```deleteOrphanPvc: true``` to the druid cluster spec, as shown below.
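+
+A minimal sketch (flag placement in the cluster spec assumed to match the other boolean feature flags):
+
+```yaml
+spec:
+  deleteOrphanPvc: true  # delete pvc's left behind by scaled-down sts replicas
+```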
## Rolling Deploy
+
- The operator supports ```rollingDeploy```: when set to ```true``` in the clusterSpec (sketched below), the operator does incremental updates in the order mentioned [here](http://druid.io/docs/latest/operations/rolling-updates.html).
- In a rolling deploy each node is updated one by one, and in case any node goes into a pending/crashing state during the update, the operator halts the update and does not update the other nodes. This requires manual intervention.
-- Default updates and cluster creation is in parallel.
+- By default, updates and cluster creation happen in parallel.
- Regardless of whether rolling deploy is enabled, cluster creation always happens in parallel.
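+
+A minimal sketch of the clusterSpec flag:
+
+```yaml
+spec:
+  rollingDeploy: true  # update druid nodes one at a time in the recommended order
+```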
## Force Delete of Sts Pods
+
- During an upgrade, if the sts is set to ordered ready, the sts controller will not recover from a crashloopbackoff state. The issue is referenced [here](https://github.com/kubernetes/kubernetes/issues/67250), and here's a reference [doc](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#forced-rollback).
-- How operator solves this is using the ```forceDeleteStsPodOnError``` key, the operator will delete the sts pod if its in crashloopback state. Example Scenario: During upgrade, user rolls out a faulty configuration causing the historical pod going in crashing state, user rolls out a valid configuration, the new configuration will not be applied unless user manual delete pods, so solve this scenario operator shall delete the pod automatically without user intervention.
+- The operator solves this using the ```forceDeleteStsPodOnError``` key (sketched below): the operator will delete the sts pod if it is in a crashloopbackoff state. Example scenario: during an upgrade, a user rolls out a faulty configuration, causing the historical pod to go into a crashing state. The user then rolls out a valid configuration, but the new configuration will not be applied unless the pods are deleted manually. To solve this, the operator shall delete the pod automatically without user intervention.
- ```NOTE: Users must be aware of this feature; there might be cases where the crashloopbackoff is caused by a probe failure, a faulty image etc, and the operator shall keep deleting the pod on each reconcile loop. Default behavior is true.```
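+
+A minimal sketch of the cluster-spec key (placement assumed to match the other feature flags):
+
+```yaml
+spec:
+  forceDeleteStsPodOnError: true  # default; operator deletes crashlooping sts pods
+```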
## Scaling of Druid Nodes
-- Operator supports ```HPA autosaling/v2beta2``` Spec in the nodeSpec for druid nodes. In case HPA deployed, HPA controller maintains the replica count/state for the particular statefulset referenced. Refer to ```examples.md``` for HPA configuration.
+
+- The operator supports the ```HPA autoscaling/v2beta2``` spec in the nodeSpec for druid nodes. When an HPA is deployed, the HPA controller maintains the replica count/state for the particular statefulset referenced. Refer to ```examples.md``` for HPA configuration; an illustrative sketch follows the links below.
- ```NOTE: It is preferred to scale only brokers using HPA.```
- For scaling MiddleManagers, it is recommended not to use HPA. Refer to these discussions, which have addressed the issues in detail:
-1. https://github.com/apache/druid/issues/8801#issuecomment-664020630
-2. https://github.com/apache/druid/issues/8801#issuecomment-664648399
+
+1. <https://github.com/apache/druid/issues/8801#issuecomment-664020630>
+2. <https://github.com/apache/druid/issues/8801#issuecomment-664648399>
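+
+A hedged, illustrative autoscaling/v2beta2 sketch for scaling brokers (the target sts name and thresholds are assumptions, not repo defaults; in the druid CR this spec is embedded in the broker nodeSpec, as described in ```examples.md```):
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: broker-hpa                     # illustrative name
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: StatefulSet
+    name: druid-my-cluster-brokers     # illustrative sts name
+  minReplicas: 2
+  maxReplicas: 5
+  metrics:
+    - type: Resource
+      resource:
+        name: cpu
+        target:
+          type: Utilization
+          averageUtilization: 60       # scale out above 60% average cpu
+```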
## Volume Expansion of Druid Nodes Running As StatefulSets
+
```NOTE: This feature has been tested only on cloud environments and storage classes which support volume expansion. This feature uses the cascade=orphan strategy to make sure only the StatefulSet is deleted and recreated and pods are not deleted.```
+
- Druid nodes, specifically historicals, run as statefulsets. Each statefulset replica has a pvc attached.
- The NodeSpec in the druid CR has a key ```volumeClaimTemplates``` where users can define the pvc's storage class as well as size.
- In case a user wants to increase the size on a node, the statefulset cannot be directly updated.
- Behind the scenes, the Druid Operator performs a seamless update of the statefulset and patches the pvc's with the desired size defined in the druid CR.
-- Druid operator shall perform a cascade deletion of the sts, and shall patch the pvc. Cascade deletion has no affect to the pods running, queries are served and no downtime is experienced.
+- The druid operator shall perform a cascade deletion of the sts and shall patch the pvc. Cascade deletion has no effect on the running pods; queries are served and no downtime is experienced.
- When this feature is enabled, the druid operator will check whether volume expansion is supported by the storage class mentioned in the druid CR, and only then will it perform the expansion.
-- Shrinkage of pvc's isnt supported, desiredSize cannot be less than currentSize as well as counts.
+- Shrinkage of pvc's isn't supported: **desiredSize cannot be less than currentSize, and the same applies to counts**.
- To enable this feature, ```scalePvcSts``` needs to be set to ```true```, as in the sketch below.
- By default, this feature is disabled.
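+
+A hedged, abridged sketch of a historical nodeSpec with this feature enabled (the storage class and sizes are illustrative, and other required nodeSpec keys are omitted):
+
+```yaml
+spec:
+  scalePvcSts: true                    # assumed to sit alongside the other feature flags
+  nodes:
+    historicals:
+      nodeType: "historical"
+      volumeClaimTemplates:
+        - metadata:
+            name: historical-volume
+          spec:
+            accessModes:
+              - ReadWriteOnce
+            storageClassName: gp2      # must support volume expansion
+            resources:
+              requests:
+                storage: 100Gi         # increase to expand; shrinking is not supported
+```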
## Add Additional Containers in Druid Nodes
+
- The Druid operator supports additional containers to run along with the druid services. This helps support co-located, co-managed helper processes for the primary druid application.
-- This can be used for init containers or sidecars or proxies etc.
-- To enable this features users just need to add a new container to the container list
-- This is scoped at cluster scope only, which means that additional container will be common to all the nodes
+- This can be used for init containers, sidecars, proxies etc.
+- To enable this feature, users just need to add a new container to the container list, as sketched below.
+- This is scoped at the cluster level only, which means the additional container will be common to all the nodes.
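+
+A hedged sketch (the ```additionalContainer``` field name and its sub-fields are assumptions based on the cluster-scoped spec; check ```druid_types.go``` for the exact schema):
+
+```yaml
+spec:
+  additionalContainer:
+    - containerName: side-helper       # illustrative sidecar
+      image: busybox:1.36
+      command:
+        - sh
+        - -c
+        - "while true; do sleep 3600; done"
+```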
diff --git a/docs/getting_started.md b/docs/getting_started.md
index a6d9bfc3..a6d09f0a 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -1,21 +1,27 @@
## Install the operator
```bash
-# This will deploy the operator into the druid-operator-system namespace
+# This will create a kind cluster to test the stack locally
+make kind
+# This will deploy the operator into the druid-operator namespace
make deploy
# Check the deployed druid-operator
-kubectl describe deployment -n druid-operator-system druid-operator-controller-manager
+kubectl describe deployment -n druid-operator druid-operator-controller-manager
```
The operator can be deployed with namespaced scope or cluster scope. By default, the operator is namespace scoped.
For the operator to be cluster scope, do the following changes:
+
- Edit the `config/default/manager_config_patch.yaml` so the `patchesStrategicMerge:` will look like this:
+
```yaml
patchesStrategicMerge:
- manager_auth_proxy_patch.yaml
- manager_config_patch.yaml
```
+
- Edit the `config/default/manager_config_patch.yaml` to look like this:
+
```yaml
apiVersion: apps/v1
kind: Deployment
@@ -33,44 +39,50 @@ spec:
```
## Install the operator using Helm chart
-- Install cluster scope operator into the `druid-operator-system` namespace:
+
+- Install cluster scope operator into the `druid-operator` namespace:
+
```bash
# Install Druid operator using Helm
-helm -n druid-operator-system install --create-namespace cluster-druid-operator ./chart
+helm -n druid-operator upgrade -i --create-namespace cluster-druid-operator ./chart
# ... or generate manifest.yaml to install using other means:
-helm -n druid-operator-system template --create-namespace cluster-druid-operator ./chart > manifest.yaml
+helm -n druid-operator template --create-namespace cluster-druid-operator ./chart > manifest.yaml
```
-- Install namespaced operator into the `druid-operator-system` namespace:
+- Install namespaced operator into the `druid-operator` namespace:
+
```bash
# Install Druid operator using Helm
-helm -n druid-operator-system install --create-namespace --set env.WATCH_NAMESPACE="mynamespace" namespaced-druid-operator ./chart
+kubectl create ns mynamespace
+helm -n druid-operator upgrade -i --create-namespace --set env.WATCH_NAMESPACE="mynamespace" namespaced-druid-operator ./chart
# you can use myvalues.yaml instead of --set
-helm -n druid-operator-system install --create-namespace -f myvalues.yaml namespaced-druid-operator ./chart
+helm -n druid-operator upgrade -i --create-namespace -f myvalues.yaml namespaced-druid-operator ./chart
# ... or generate manifest.yaml to install using other means:
-helm -n druid-operator-system template --set env.WATCH_NAMESPACE="" namespaced-druid-operator ./chart --create-namespace > manifest.yaml
+helm -n druid-operator template --set env.WATCH_NAMESPACE="" namespaced-druid-operator ./chart --create-namespace > manifest.yaml
```
- Update settings, upgrade or rollback:
+
```bash
# To upgrade chart or apply changes in myvalues.yaml
-helm -n druid-operator-system upgrade -f myvalues.yaml namespaced-druid-operator ./chart
+helm -n druid-operator upgrade -f myvalues.yaml namespaced-druid-operator ./chart
# Rollback to previous revision
-helm -n druid-operator-system rollback cluster-druid-operator
+helm -n druid-operator rollback cluster-druid-operator
```
- Uninstall operator
+
```bash
# To avoid destroying existing clusters, helm will not uninstall its CRD. For
# complete cleanup the annotation needs to be removed first:
kubectl annotate crd druids.druid.apache.org helm.sh/resource-policy-
# This will uninstall operator
-helm -n druid-operator-system uninstall cluster-druid-operator
+helm -n druid-operator uninstall cluster-druid-operator
```
## Deploy a sample Druid cluster
@@ -89,8 +101,6 @@ Note that above tiny-cluster only works on a single node kubernetes cluster(e.g.
## Debugging Problems
- - For kubernetes version 1.11 make sure to disable ```type: object``` in the CRD root spec.
-
```bash
# get druid-operator pod name
kubectl get po | grep druid-operator