panic: runtime error: index out of range [3] with length 3 #153

Closed
prune998 opened this issue Sep 30, 2019 · 7 comments · Fixed by #155
Labels: bug (Something isn't working), community

Comments

@prune998 (Contributor)

Describe the bug
During reconciliation of a cluster, the operator panics with:

kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	INFO	controllers.KafkaCluster	Reconciling KafkaCluster	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	DEBUG	controllers.KafkaCluster	Skipping PKI reconciling due to no SSL config	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "pki"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "envoy"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "envoy"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.744Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring", "kind": "*v1.ConfigMap", "name": "kf-kafka-kafka-jmx-exporter"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring", "kind": "*v1.ConfigMap", "name": "kf-kafka-cc-jmx-exporter"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.745Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.746Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-all-broker"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.747Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.748Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.759Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-1"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.760Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-1"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.760Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:12.763Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.026Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.026Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.028Z	DEBUG	controllers.KafkaCluster	resource diffs	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-2", "patch": "{\"data\":{\"broker-config\":\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nbroker.id=2\\nbroker.rack=francecentral,3\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\nlisteners=PLAINTEXT://:9092\\nlog.dirs=/kafka-logs/kafka\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\nsecurity.inter.broker.protocol=PLAINTEXT\\nsuper.users=\\nzookeeper.connect=zk-zookeeper.alerting:2181\"}}", "current": "{\"kind\":\"ConfigMap\",\"apiVersion\":\"v1\",\"metadata\":{\"name\":\"kf-kafka-config-2\",\"namespace\":\"alerting\",\"selfLink\":\"/api/v1/namespaces/alerting/configmaps/kf-kafka-config-2\",\"uid\":\"0f9e6683-e399-11e9-94f3-4a7bbeb43129\",\"resourceVersion\":\"16769225\",\"creationTimestamp\":\"2019-09-30T15:43:34Z\",\"labels\":{\"app\":\"kafka\",\"brokerId\":\"2\",\"kafka_cr\":\"kf-kafka\"},\"annotations\":{\"banzaicloud.com/last-applied\":\"{\\\"data\\\":{\\\"broker-config\\\":\\\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\\\nbroker.id=2\\\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\\\nlisteners=PLAINTEXT://:9092\\\\nlog.dirs=/kafka-logs/kafka\\\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\\\nsecurity.inter.broker.protocol=PLAINTEXT\\\\nsuper.users=\\\\nzookeeper.connect=zk-zookeeper.alerting:2181\\\"},\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kafka\\\",\\\"brokerId\\\":\\\"2\\\",\\\"kafka_cr\\\":\\\"kf-kafka\\\"},\\\"name\\\":\\\"kf-kafka-config-2\\\",\\\"namespace\\\":\\\"alerting\\\",\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"kafka.banzaicloud.io/v1beta1\\\",\\\"blockOwnerDeletion\\\":true,\\\"controller\\\":true,\\\"kind\\\":\\\"KafkaCluster\\\",\\\"name\\\":\\\"kf-kafka\\\",\\\"uid\\\":\\\"d6ff7d7f-e398-11e9-94f3-4a7bbeb43129\\\"}]}}\"},\"ownerReferences\":[{\"apiVersion\":\"kafka.banzaicloud.io/v1beta1\",\"kind\":\"KafkaCluster\",\"name\":\"kf-kafka\",\"uid\":\"d6ff7d7f-e398-11e9-94f3-4a7bbeb43129\",\"controller\":true,\"blockOwnerDeletion\":true}]},\"data\":{\"broker-config\":\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nbroker.id=2\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\nlisteners=PLAINTEXT://:9092\\nlog.dirs=/kafka-logs/kafka\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\nsecurity.inter.broker.protocol=PLAINTEXT\\nsuper.users=\\nzookeeper.connect=zk-zookeeper.alerting:2181\"}}", "modified": 
"{\"data\":{\"broker-config\":\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nbroker.id=2\\nbroker.rack=francecentral,3\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\nlisteners=PLAINTEXT://:9092\\nlog.dirs=/kafka-logs/kafka\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\nsecurity.inter.broker.protocol=PLAINTEXT\\nsuper.users=\\nzookeeper.connect=zk-zookeeper.alerting:2181\"},\"metadata\":{\"labels\":{\"app\":\"kafka\",\"brokerId\":\"2\",\"kafka_cr\":\"kf-kafka\"},\"name\":\"kf-kafka-config-2\",\"namespace\":\"alerting\",\"ownerReferences\":[{\"apiVersion\":\"kafka.banzaicloud.io/v1beta1\",\"blockOwnerDeletion\":true,\"controller\":true,\"kind\":\"KafkaCluster\",\"name\":\"kf-kafka\",\"uid\":\"d6ff7d7f-e398-11e9-94f3-4a7bbeb43129\"}]}}", "original": "{\"data\":{\"broker-config\":\"advertised.listeners=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nbroker.id=2\\ncruise.control.metrics.reporter.bootstrap.servers=PLAINTEXT://kf-kafka-2.alerting.svc.cluster.local:9092\\nlistener.security.protocol.map=PLAINTEXT:PLAINTEXT\\nlisteners=PLAINTEXT://:9092\\nlog.dirs=/kafka-logs/kafka\\nmetric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter\\nsecurity.inter.broker.protocol=PLAINTEXT\\nsuper.users=\\nzookeeper.connect=zk-zookeeper.alerting:2181\"},\"metadata\":{\"labels\":{\"app\":\"kafka\",\"brokerId\":\"2\",\"kafka_cr\":\"kf-kafka\"},\"name\":\"kf-kafka-config-2\",\"namespace\":\"alerting\",\"ownerReferences\":[{\"apiVersion\":\"kafka.banzaicloud.io/v1beta1\",\"blockOwnerDeletion\":true,\"controller\":true,\"kind\":\"KafkaCluster\",\"name\":\"kf-kafka\",\"uid\":\"d6ff7d7f-e398-11e9-94f3-4a7bbeb43129\"}]}}"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.160Z	INFO	controllers.KafkaCluster	Kafka cluster state updated	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-2"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.160Z	INFO	controllers.KafkaCluster	resource updated	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-2"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.161Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-2"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.161Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.316Z	INFO	kafka_util	offline Replica Count is 0
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:13.316Z	INFO	kafka_util	all replicas are in sync
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:14.324Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager 2019-09-30T15:46:14.324Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-nlwfz manager E0930 15:46:14.324936       1 runtime.go:69] Observed a panic: runtime.boundsError{x:3, y:3, signed:true, code:0x0} (runtime error: index out of range [3] with length 3)
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/runtime/runtime.go:76
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/runtime/runtime.go:65
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/runtime/runtime.go:51
kafka-operator-7bc9c8cd8f-nlwfz manager /usr/local/go/src/runtime/panic.go:679
kafka-operator-7bc9c8cd8f-nlwfz manager /usr/local/go/src/runtime/panic.go:75
kafka-operator-7bc9c8cd8f-nlwfz manager /workspace/pkg/resources/kafka/configmap.go:179
kafka-operator-7bc9c8cd8f-nlwfz manager /workspace/pkg/resources/kafka/configmap.go:97
kafka-operator-7bc9c8cd8f-nlwfz manager /workspace/pkg/resources/kafka/kafka.go:252
kafka-operator-7bc9c8cd8f-nlwfz manager /workspace/controllers/kafkacluster_controller.go:111
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:216
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:192
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:171
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:152
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:153
kafka-operator-7bc9c8cd8f-nlwfz manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:88
kafka-operator-7bc9c8cd8f-nlwfz manager /usr/local/go/src/runtime/asm_amd64.s:1357
kafka-operator-7bc9c8cd8f-nlwfz manager panic: runtime error: index out of range [3] with length 3 [recovered]
kafka-operator-7bc9c8cd8f-nlwfz manager 	panic: runtime error: index out of range [3] with length 3
kafka-operator-7bc9c8cd8f-nlwfz manager
kafka-operator-7bc9c8cd8f-nlwfz manager goroutine 436 [running]:
kafka-operator-7bc9c8cd8f-nlwfz manager k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/runtime/runtime.go:58 +0x105
kafka-operator-7bc9c8cd8f-nlwfz manager panic(0x185fd00, 0xc0016630c0)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/usr/local/go/src/runtime/panic.go:679 +0x1b2
kafka-operator-7bc9c8cd8f-nlwfz manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.Reconciler.generateBrokerConfig(0x1c02900, 0xc0005b1d70, 0xc001632000, 0x3, 0xc00188b6b0, 0xc0008c81c0, 0x0, 0x0, 0x0, 0x0, ...)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/workspace/pkg/resources/kafka/configmap.go:179 +0x6c9
kafka-operator-7bc9c8cd8f-nlwfz manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).configMap(0xc0017ceec0, 0xc000000003, 0xc00188b6b0, 0x0, 0x0, 0xc0008c81c0, 0x0, 0x0, 0x1bf7080, 0xc0017cfdc0, ...)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/workspace/pkg/resources/kafka/configmap.go:97 +0x5bd
kafka-operator-7bc9c8cd8f-nlwfz manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).Reconcile(0xc0017ceec0, 0x1bf7080, 0xc0017cfdc0, 0x0, 0x0)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/workspace/pkg/resources/kafka/kafka.go:252 +0x1693
kafka-operator-7bc9c8cd8f-nlwfz manager github.com/banzaicloud/kafka-operator/controllers.(*KafkaClusterReconciler).Reconcile(0xc000381140, 0xc000784998, 0x8, 0xc000784b08, 0x8, 0xc0008c9cd8, 0xc0008bf9e0, 0xc0008ce248, 0x1bb8ba0)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/workspace/controllers/kafkacluster_controller.go:111 +0x6bc
kafka-operator-7bc9c8cd8f-nlwfz manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00031a320, 0x17b3940, 0xc001483f40, 0xc000581500)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:216 +0x162
kafka-operator-7bc9c8cd8f-nlwfz manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00031a320, 0xc0000a4000)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:192 +0xcb
kafka-operator-7bc9c8cd8f-nlwfz manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00031a320)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:171 +0x2b
kafka-operator-7bc9c8cd8f-nlwfz manager k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000016640)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:152 +0x5e
kafka-operator-7bc9c8cd8f-nlwfz manager k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000016640, 0x3b9aca00, 0x0, 0x1, 0xc0000ba180)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:153 +0xf8
kafka-operator-7bc9c8cd8f-nlwfz manager k8s.io/apimachinery/pkg/util/wait.Until(0xc000016640, 0x3b9aca00, 0xc0000ba180)
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:88 +0x4d
kafka-operator-7bc9c8cd8f-nlwfz manager created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
kafka-operator-7bc9c8cd8f-nlwfz manager 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:157 +0x32e

Steps to reproduce the issue:
Haven't found a way to reproduce it yet.

Expected behavior
The operator should not panic :)

Additional context
Operator version 0.6.0, deployed from the Helm chart.

kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

I cleaned Zookeeper and re-created the Kafka cluster from scratch.
I hit this error twice: first with broker IDs 121, 122, 123, then with a new cluster using IDs 1, 2, 3.

@prune998 (Contributor, Author)

I tried deleting the pod and it panics in a loop. Here are the logs from startup to the panic:

kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.509Z	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.513Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.513Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.513Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.517Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.517Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.518Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.518Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.518Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.518Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.518Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.518Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkacluster", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.520Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkatopic", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.520Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkauser", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.521Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "kafkauser", "source": "kind source: /, Kind="}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.521Z	INFO	setup	starting manager
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:10.523Z	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:12.525Z	INFO	controller-runtime.certwatcher	Updated current TLS certificate
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:12.525Z	INFO	controller-runtime.certwatcher	Starting certificate watcher
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:26.905Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "kafkauser"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:26.905Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "kafkacluster"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:26.905Z	DEBUG	controller-runtime.manager.events	Normal	{"object": {"kind":"ConfigMap","namespace":"tools","name":"controller-leader-election-helper","uid":"757215a1-e37f-11e9-94f3-4a7bbeb43129","apiVersion":"v1","resourceVersion":"16773185"}, "reason": "LeaderElection", "message": "kafka-operator-7bc9c8cd8f-zjmrq_14373140-e39c-11e9-9eec-d22c587c7263 became leader"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:26.905Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "kafkatopic"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.005Z	INFO	controller-runtime.controller	Starting workers	{"controller": "kafkauser", "worker count": 1}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.005Z	INFO	controller-runtime.controller	Starting workers	{"controller": "kafkatopic", "worker count": 1}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.005Z	INFO	controller-runtime.controller	Starting workers	{"controller": "kafkacluster", "worker count": 1}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.005Z	INFO	controllers.KafkaCluster	Reconciling KafkaCluster	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.005Z	DEBUG	controllers.KafkaCluster	Skipping PKI reconciling due to no SSL config	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "pki"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.005Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "envoy"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.005Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "envoy"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.005Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.006Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring", "kind": "*v1.ConfigMap", "name": "kf-kafka-kafka-jmx-exporter"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.006Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka_monitoring"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.006Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.006Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring", "kind": "*v1.ConfigMap", "name": "kf-kafka-cc-jmx-exporter"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.006Z	DEBUG	controllers.KafkaCluster	Reconciled	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "cruisecontrol_monitoring"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.006Z	DEBUG	controllers.KafkaCluster	Reconciling	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.007Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-all-broker"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.008Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.008Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.009Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-1"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.009Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-1"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.009Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.014Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.124Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.125Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.193Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.ConfigMap", "name": "kf-kafka-config-2"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.194Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Service", "name": "kf-kafka-2"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:27.194Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:28.368Z	INFO	controllers.KafkaCluster	Kafka cluster state updated	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.Pod"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:28.400Z	DEBUG	controllers.KafkaCluster	searching with label because name is empty	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-zjmrq manager 2019-09-30T16:05:28.401Z	DEBUG	controllers.KafkaCluster	resource is in sync	{"kafkacluster": "alerting/kf-kafka", "Request.Name": "kf-kafka", "component": "kafka", "kind": "*v1.PersistentVolumeClaim"}
kafka-operator-7bc9c8cd8f-zjmrq manager E0930 16:05:28.401350       1 runtime.go:69] Observed a panic: runtime.boundsError{x:3, y:3, signed:true, code:0x0} (runtime error: index out of range [3] with length 3)
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/runtime/runtime.go:76
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/runtime/runtime.go:65
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/runtime/runtime.go:51
kafka-operator-7bc9c8cd8f-zjmrq manager /usr/local/go/src/runtime/panic.go:679
kafka-operator-7bc9c8cd8f-zjmrq manager /usr/local/go/src/runtime/panic.go:75
kafka-operator-7bc9c8cd8f-zjmrq manager /workspace/pkg/resources/kafka/configmap.go:179
kafka-operator-7bc9c8cd8f-zjmrq manager /workspace/pkg/resources/kafka/configmap.go:97
kafka-operator-7bc9c8cd8f-zjmrq manager /workspace/pkg/resources/kafka/kafka.go:252
kafka-operator-7bc9c8cd8f-zjmrq manager /workspace/controllers/kafkacluster_controller.go:111
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:216
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:192
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:171
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:152
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:153
kafka-operator-7bc9c8cd8f-zjmrq manager /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:88
kafka-operator-7bc9c8cd8f-zjmrq manager /usr/local/go/src/runtime/asm_amd64.s:1357
kafka-operator-7bc9c8cd8f-zjmrq manager panic: runtime error: index out of range [3] with length 3 [recovered]
kafka-operator-7bc9c8cd8f-zjmrq manager 	panic: runtime error: index out of range [3] with length 3
kafka-operator-7bc9c8cd8f-zjmrq manager
kafka-operator-7bc9c8cd8f-zjmrq manager goroutine 406 [running]:
kafka-operator-7bc9c8cd8f-zjmrq manager k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/runtime/runtime.go:58 +0x105
kafka-operator-7bc9c8cd8f-zjmrq manager panic(0x185fd00, 0xc000996aa0)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/usr/local/go/src/runtime/panic.go:679 +0x1b2
kafka-operator-7bc9c8cd8f-zjmrq manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.Reconciler.generateBrokerConfig(0x1c02900, 0xc0002c11d0, 0xc0007e0c00, 0x3, 0xc0021400b0, 0xc0012681c0, 0x0, 0x0, 0x0, 0x0, ...)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/workspace/pkg/resources/kafka/configmap.go:179 +0x6c9
kafka-operator-7bc9c8cd8f-zjmrq manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).configMap(0xc000ba9460, 0xc000000003, 0xc0021400b0, 0x0, 0x0, 0xc0012681c0, 0x0, 0x0, 0x1bf7080, 0xc00021e840, ...)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/workspace/pkg/resources/kafka/configmap.go:97 +0x5bd
kafka-operator-7bc9c8cd8f-zjmrq manager github.com/banzaicloud/kafka-operator/pkg/resources/kafka.(*Reconciler).Reconcile(0xc000ba9460, 0x1bf7080, 0xc00021e840, 0x0, 0x0)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/workspace/pkg/resources/kafka/kafka.go:252 +0x1693
kafka-operator-7bc9c8cd8f-zjmrq manager github.com/banzaicloud/kafka-operator/controllers.(*KafkaClusterReconciler).Reconcile(0xc0005b4c30, 0xc000ec8c28, 0x8, 0xc000ec8c20, 0x8, 0xc0003a0cd8, 0xc0008bea20, 0xc0002ec2d8, 0x1bb8ba0)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/workspace/controllers/kafkacluster_controller.go:111 +0x6bc
kafka-operator-7bc9c8cd8f-zjmrq manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000148a00, 0x17b3940, 0xc00062a060, 0x285c400)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:216 +0x162
kafka-operator-7bc9c8cd8f-zjmrq manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000148a00, 0xc0008d6800)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:192 +0xcb
kafka-operator-7bc9c8cd8f-zjmrq manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc000148a00)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:171 +0x2b
kafka-operator-7bc9c8cd8f-zjmrq manager k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0008b3d10)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:152 +0x5e
kafka-operator-7bc9c8cd8f-zjmrq manager k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008b3d10, 0x3b9aca00, 0x0, 0x1, 0xc0002d68a0)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:153 +0xf8
kafka-operator-7bc9c8cd8f-zjmrq manager k8s.io/apimachinery/pkg/util/wait.Until(0xc0008b3d10, 0x3b9aca00, 0xc0002d68a0)
kafka-operator-7bc9c8cd8f-zjmrq manager 	/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:88 +0x4d
kafka-operator-7bc9c8cd8f-zjmrq manager created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
kafka-operator-7bc9c8cd8f-zjmrq manager 	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.0/pkg/internal/controller/controller.go:157 +0x32e

@prune998 (Contributor, Author)

Line 179 of pkg/resources/kafka/configmap.go:

parsedReadOnlyBrokerConfig := util.ParsePropertiesFormat(r.KafkaCluster.Spec.Brokers[id].ReadOnlyConfig)

This expects Spec.Brokers[id] (and its ReadOnlyConfig) to exist for the given broker ID, which is not the case for me (see the sketch below).
I'm looking into my KafkaCluster definition right now...
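
For context, a minimal, self-contained sketch of why that line blows up with broker IDs 1, 2, 3: the declared broker ID is used directly as an index into the Spec.Brokers slice, whose valid indices are only 0..2. The Broker and KafkaClusterSpec types and the readOnlyConfigForID helper below are simplified stand-ins for illustration, not the operator's actual code or its eventual fix.

package main

import "fmt"

// Broker and KafkaClusterSpec are simplified stand-ins for the CRD types;
// the field names here are assumptions for illustration only.
type Broker struct {
	Id             int32
	ReadOnlyConfig string
}

type KafkaClusterSpec struct {
	Brokers []Broker
}

// readOnlyConfigForID is a hypothetical bounds-safe lookup that matches on
// the broker's Id field instead of using the ID as a slice index.
func readOnlyConfigForID(spec KafkaClusterSpec, id int32) (string, bool) {
	for _, b := range spec.Brokers {
		if b.Id == id {
			return b.ReadOnlyConfig, true
		}
	}
	return "", false
}

func main() {
	// Brokers declared with IDs 1, 2, 3 occupy slice indices 0, 1, 2.
	spec := KafkaClusterSpec{Brokers: []Broker{{Id: 1}, {Id: 2}, {Id: 3}}}

	id := int32(3)
	// Using the broker ID directly as an index, as the failing line does,
	// would evaluate spec.Brokers[3] on a slice of length 3 and panic with
	// "index out of range [3] with length 3":
	//   _ = spec.Brokers[id].ReadOnlyConfig
	if cfg, ok := readOnlyConfigForID(spec, id); ok {
		fmt.Printf("broker %d read-only config: %q\n", id, cfg)
	}
}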

@prune998 (Contributor, Author)

After starting from a clean state I still see the same error for broker 3; the ConfigMap for broker 3 is never created.
Here's the manifest for the KafkaCluster:

apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
    kafka_cr: kf-kafka
  name: kf-kafka
  namespace: alerting
spec:
  headlessServiceEnabled: false
  zkAddresses:
    - "zk-zookeeper.alerting:2181"
  rackAwareness:
    labels:
      - "failure-domain.beta.kubernetes.io/region"
      - "failure-domain.beta.kubernetes.io/zone"
  oneBrokerPerNode: false
  clusterImage: "privaterepo:4567/infra/docker-images/kafka:2.3.0.7"
  #clusterWideConfig: |
  #  background.threads=10
  rollingUpgradeConfig:
    failureThreshold: 1
  brokerConfigGroups:
    # Specify desired group name (eg., 'default_group')
    default_group:
      # all the brokerConfig settings are available here
      serviceAccountName: "kf-kafka"
      kafkaJvmPerfOpts: "-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Dsun.net.inetaddr.ttl=60 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${HOSTNAME} -Dcom.sun.management.jmxremote.rmi.port=9099"
      storageConfigs:
        - mountPath: "/kafka-logs"
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: ssd
            resources:
              requests:
                storage: 30Gi
  brokers:
    - id:  1
      brokerConfigGroup: "default_group"
      brokerConfig:
        resourceRequirements:
          limits:
            memory: "3Gi"
          requests:
            cpu: "0.3"
            memory: "512Mi"
        config: |
          session.timeout.ms=20000
          offsets.topic.replication.factor=2
          ProducerConfig.RETRIES_CONFIG=10
          transaction.state.log.replication.factor=2
          transaction.state.log.min.isr=1
          log.dirs=/kafka-logs/data
          ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG=1000
          delete.topic.enable=true
          num.partitions=32
          auto.create.topics.enable=false
          default.replication.factor=2
          num.recovery.threads.per.data.dir=8
    - id:  2
      brokerConfigGroup: "default_group"
      brokerConfig:
        resourceRequirements:
          limits:
            memory: "3Gi"
          requests:
            cpu: "0.3"
            memory: "512Mi"
        config: |
          session.timeout.ms=20000
          offsets.topic.replication.factor=2
          ProducerConfig.RETRIES_CONFIG=10
          transaction.state.log.replication.factor=2
          transaction.state.log.min.isr=1
          log.dirs=/kafka-logs/data
          ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG=1000
          delete.topic.enable=true
          num.partitions=32
          auto.create.topics.enable=false
          default.replication.factor=2
          num.recovery.threads.per.data.dir=8
    - id:  3
      brokerConfigGroup: "default_group"
      brokerConfig:
        resourceRequirements:
          limits:
            memory: "3Gi"
          requests:
            cpu: "0.3"
            memory: "512Mi"
        config: |
          session.timeout.ms=20000
          offsets.topic.replication.factor=2
          ProducerConfig.RETRIES_CONFIG=10
          transaction.state.log.replication.factor=2
          transaction.state.log.min.isr=1
          log.dirs=/kafka-logs/data
          ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG=1000
          delete.topic.enable=true
          num.partitions=32
          auto.create.topics.enable=false
          default.replication.factor=2
          num.recovery.threads.per.data.dir=8
  listenersConfig:
    internalListeners:
      - type: "plaintext"
        name: "plaintext"
        containerPort: 9092
        usedForInnerBrokerCommunication: true
  cruiseControlConfig:
    image: "solsson/kafka-cruise-control:latest"
    serviceAccountName: "kf-kafka"

@prune998 (Contributor, Author)

I tried using broker IDs 0, 1, 2 instead of 1, 2, 3, and it seems to run fine.

Can you please clarify how the broker ID is used? Does it have to start from 0 (i.e. it is a counter of brokers starting at 0), or is it the Kafka broker ID, which can be any number as long as it is unique? (The sketch below restates the two interpretations in code.)
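
To restate the two interpretations in code (purely an illustration of the question, not a claim about how #155 resolves it; the Broker type and brokersByID helper are hypothetical): a positional counter must run 0..n-1 so it can double as a slice index, while a Kafka broker.id only needs to be unique and can be looked up by value, for example via a map.

package main

import "fmt"

// Broker is a simplified stand-in carrying just the field relevant here.
type Broker struct {
	Id int32
}

// brokersByID keys the declared brokers by their ID, so IDs only need to be
// unique (the "Kafka broker.id" interpretation) rather than a 0-based counter
// that happens to match slice positions.
func brokersByID(brokers []Broker) map[int32]Broker {
	m := make(map[int32]Broker, len(brokers))
	for _, b := range brokers {
		m[b.Id] = b
	}
	return m
}

func main() {
	// IDs 1, 2, 3 as in the manifest above: unique, but not starting at 0.
	brokers := []Broker{{Id: 1}, {Id: 2}, {Id: 3}}

	// Positional interpretation: brokers[3] would be out of range here.
	// Identifier interpretation: look the broker up by its declared ID.
	if b, ok := brokersByID(brokers)[3]; ok {
		fmt.Printf("found broker with id %d\n", b.Id)
	}
}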

@pbalogh-sa added the bug (Something isn't working) label on Oct 1, 2019
@pbalogh-sa (Member)

@prune998 Thanks for reporting this. The first brokerID has to be 0 at the moment. We will fix this soon.

@baluchicken (Member)

@prune998 Thanks for reporting this; we are going to release a new version soon which contains the fix.

@prune998 (Contributor, Author)

prune998 commented Oct 1, 2019

OK thanks.
