
Errors when scaling down mongodb #1635

Open
Olliewer opened this issue Oct 31, 2024 · 0 comments · May be fixed by #1636

@Olliewer

What did you do to encounter the bug?
Steps to reproduce the behavior:

Set both the members and arbiters counts of the MongoDBCommunity resource to 0 (see the sketch below).
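
For reference, the relevant part of the resource after the change looks roughly like this (a minimal sketch; the name and surrounding fields are illustrative, not our exact manifest):

apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: <appname>-mongodb
spec:
  type: ReplicaSet
  version: "6.0.14"
  members: 0    # scaled down, previously 1
  arbiters: 0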

What did you expect?

The MongoDB operator accepts this configuration. Running zero pods should be a valid state for a MongoDB resource, for example to scale the cluster down at night or over the weekend.

What happened instead?

The operator rejects the new spec as invalid, and we get the following error:

2024-10-31T10:22:14.764Z	ERROR	controllers/mongodb_status_options.go:104	error validating new Spec: number of arbiters specified (0) is greater or equal than the number of members in the replicaset (0). At least one member must not be an arbiter
github.com/mongodb/mongodb-kubernetes-operator/controllers.messageOption.ApplyOption
	/workspace/controllers/mongodb_status_options.go:104
github.com/mongodb/mongodb-kubernetes-operator/pkg/util/status.Update
	/workspace/pkg/util/status/status.go:25
github.com/mongodb/mongodb-kubernetes-operator/controllers.ReplicaSetReconciler.Reconcile
	/workspace/controllers/replica_set_controller.go:135
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.7/pkg/internal/controller/controller.go:122
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.7/pkg/internal/controller/controller.go:323
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.7/pkg/internal/controller/controller.go:274
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.7/pkg/internal/controller/controller.go:235
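
Judging from the error message, the spec validation compares the arbiter count against the member count with a greater-or-equal check. A simplified reconstruction of that check (an assumption based on the message, not the operator's actual code):

package validation

import "fmt"

// validateArbiterSpec is a simplified reconstruction of the check implied by
// the error message above, not the operator's real implementation. Because the
// comparison is >= rather than >, a spec with members: 0 and arbiters: 0 is
// rejected even though no arbiter is requested at all.
func validateArbiterSpec(members, arbiters int) error {
	if arbiters >= members {
		return fmt.Errorf("number of arbiters specified (%d) is greater or equal than the number of members in the replicaset (%d). At least one member must not be an arbiter", arbiters, members)
	}
	return nil
}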

Operator Information

  • Operator Version: 0.9.0
  • MongoDB Image used: mongodb-linux-x86_64-rhel80-6.0.14

Kubernetes Cluster Information

  • Distribution: EKS
  • Version: 1.30
  • Image Registry location: quay

Additional context

Operator logs:

2024-10-31T10:20:31.715Z	INFO	controllers/replica_set_controller.go:360	Creating/Updating AutomationConfig	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:20:31.731Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-central-scram-credentials
2024-10-31T10:20:31.746Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service1>-scram-credentials
2024-10-31T10:20:31.761Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service2>-scram-credentials
2024-10-31T10:20:31.779Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service3>-scram-credentials
2024-10-31T10:20:31.796Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service4>-scram-credentials
2024-10-31T10:20:31.810Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service5>-scram-credentials
2024-10-31T10:20:31.825Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service6>-scram-credentials
2024-10-31T10:20:31.848Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service7>-scram-credentials
2024-10-31T10:20:31.867Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service8>-scram-credentials
2024-10-31T10:20:31.881Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<service9>-scram-credentials
2024-10-31T10:20:31.894Z	DEBUG	scram/scram.go:101	Credentials have not changed, using credentials stored in: secret/mongodb-<serviceadmin>-scram-credentials
2024-10-31T10:20:31.895Z	DEBUG	agent/replica_set_port_manager.go:122	No port change required	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:20:31.895Z	DEBUG	agent/replica_set_port_manager.go:40	Calculated process port map: map[<appname>-mongodb-0:27017]	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:20:31.895Z	DEBUG	controllers/replica_set_controller.go:535	AutomationConfigMembersThisReconciliation	{"mdb.AutomationConfigMembersThisReconciliation()": 1}
2024-10-31T10:20:31.896Z	DEBUG	controllers/replica_set_controller.go:383	Waiting for agents to reach version 1	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:20:31.896Z	INFO	agent/agent_readiness.go:59	All 1 Agents have reached Goal state	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:20:31.896Z	DEBUG	controllers/replica_set_controller.go:209	Resetting StatefulSet UpdateStrategy to RollingUpdate	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:20:32.074Z	INFO	controllers/replica_set_controller.go:264	Successfully finished reconciliation, MongoDB.Spec: {Members:1 Type:ReplicaSet Version:6.0.14 Arbiters:0 FeatureCompatibilityVersion:6.0 ReplicaSetHorizons:[] Security:{Authentication:{Modes:[SCRAM] AgentMode: AgentCertificateSecret:nil IgnoreUnknownUsers:0xc0007fb2a0} TLS:{Enabled:true Optional:false CertificateKeySecret:{Name:<appname>-mongodb-certificate} CaCertificateSecret:&LocalObjectReference{Name:<appname>-mongodb-selfsigned-ca-<namespace>,} CaConfigMap:nil} Roles:[]} Users:[{Name:alm-central DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-central-password} Roles:[{DB:alm-central Name:readWrite}] ScramCredentialsSecretName:mongodb-central ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-<service1>-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service1>-password} Roles:[{DB:alm-<service1>-service Name:readWrite} {DB:alm-<service1>-service Name:dbAdmin}] ScramCredentialsSecretName:mongodb-<service1> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-<service2>-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service2>-password} Roles:[{DB:alm-<service2>-service Name:readWrite}] ScramCredentialsSecretName:mongodb-<service2> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-<service3>-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service3>-password} Roles:[{DB:alm-<service3>-service Name:readWrite} {DB:alm-<service3>-service Name:dbAdmin}] ScramCredentialsSecretName:mongodb-<service3> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-<service4>-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service4>-password} Roles:[{DB:alm-<service4>-service Name:readWrite} {DB:alm-<service4>-service Name:dbAdmin}] ScramCredentialsSecretName:mongodb-<service4> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-<service5>-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service5>-password} Roles:[{DB:alm-<service5>-service Name:readWrite} {DB:alm-<service5>-service Name:dbAdmin}] ScramCredentialsSecretName:mongodb-<service5> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-<service6>-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service6>-password} Roles:[{DB:alm-<service6>-service Name:readWrite} {DB:alm-<service6>-service Name:dbAdmin}] ScramCredentialsSecretName:mongodb-<service6> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-<service7>-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service7>-password} Roles:[{DB:alm-<service7>-service Name:readWrite}] ScramCredentialsSecretName:mongodb-<service7> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-<service8>-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service8>-password} Roles:[{DB:alm-<service8>-service Name:readWrite} {DB:alm-<service8>-service Name:dbAdmin}] ScramCredentialsSecretName:mongodb-<service8> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:alm-periodic-pf-query-service DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-<service9>-password} Roles:[{DB:alm-periodic-pf-query-service Name:readWrite}] 
ScramCredentialsSecretName:mongodb-<service9> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}} {Name:admin DB:admin PasswordSecretRef:{Name:<appname>-mongodb Key:mongodb-admin-password} Roles:[{DB:admin Name:root} {DB:alm-central Name:root} {DB:alm-<service8>-service Name:root} {DB:alm-<service7>-service Name:root} {DB:alm-<service2>-service Name:root} {DB:alm-<service1>-service Name:root} {DB:alm-periodic-pf-query-service Name:root} {DB:alm-<service3>-service Name:root} {DB:alm-<service4>-service Name:root} {DB:alm-<service5>-service Name:root} {DB:alm-<service6>-service Name:root}] ScramCredentialsSecretName:mongodb-<serviceadmin> ConnectionStringSecretName: AdditionalConnectionStringConfig:{Object:map[]}}] StatefulSetConfiguration:{SpecWrapper:{Spec:{Replicas:<nil> Selector:nil Template:{ObjectMeta:{Name: GenerateName: Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[<appname>-namespace:<namespace> app.kubernetes.io/component:mongodb app.kubernetes.io/instance:<appname> app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:<appname> helm.sh/chart:<appname>-6.10.0] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[]} Spec:{Volumes:[] InitContainers:[{Name:mongod-posthook Image: Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[] Claims:[]} VolumeMounts:[] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath: TerminationMessagePolicy: ImagePullPolicy: SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} Stdin:false StdinOnce:false TTY:false} {Name:mongodb-agent-readinessprobe Image: Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[] Claims:[]} VolumeMounts:[] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath: TerminationMessagePolicy: ImagePullPolicy: SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} Stdin:false StdinOnce:false TTY:false}] Containers:[{Name:mongodb-agent Image: Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:AGENT_LOG_LEVEL Value:DEBUG ValueFrom:nil}] Resources:{Limits:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} ephemeral-storage:{i:{value:2147483648 scale:0} d:{Dec:<nil>} s:2Gi Format:BinarySI} memory:{i:{value:209715200 scale:0} d:{Dec:<nil>} s: Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} ephemeral-storage:{i:{value:1073741824 scale:0} d:{Dec:<nil>} s:1Gi Format:BinarySI} memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Claims:[]} VolumeMounts:[] VolumeDevices:[] LivenessProbe:nil 
ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/opt/scripts/readinessprobe],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:60,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:40,TerminationGracePeriodSeconds:nil,} StartupProbe:nil Lifecycle:nil TerminationMessagePath: TerminationMessagePolicy: ImagePullPolicy: SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} Stdin:false StdinOnce:false TTY:false} {Name:mongod Image: Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[cpu:{i:{value:1000 scale:-3} d:{Dec:<nil>} s: Format:DecimalSI} ephemeral-storage:{i:{value:4294967296 scale:0} d:{Dec:<nil>} s:4Gi Format:BinarySI} memory:{i:{value:2147483648 scale:0} d:{Dec:<nil>} s:2Gi Format:BinarySI}] Requests:map[cpu:{i:{value:500 scale:-3} d:{Dec:<nil>} s:500m Format:DecimalSI} ephemeral-storage:{i:{value:2147483648 scale:0} d:{Dec:<nil>} s:2Gi Format:BinarySI} memory:{i:{value:805306368 scale:0} d:{Dec:<nil>} s: Format:BinarySI}] Claims:[]} VolumeMounts:[] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath: TerminationMessagePolicy: ImagePullPolicy: SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy: TerminationGracePeriodSeconds:<nil> ActiveDeadlineSeconds:<nil> DNSPolicy: NodeSelector:map[beta.kubernetes.io/arch:amd64 kubernetes.io/os:linux] ServiceAccountName:<appname>-mongodb-sa DeprecatedServiceAccount: AutomountServiceAccountToken:<nil> NodeName: HostNetwork:false HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c102,c202,},RunAsUser:*1000,RunAsNonRoot:nil,SupplementalGroups:[1000],FSGroup:*1000,RunAsGroup:*1000,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},} ImagePullSecrets:[{Name:965567389202.dkr.ecr.eu-central-1.amazonaws.com}] Hostname: Subdomain: Affinity:nil SchedulerName: Tolerations:[] HostAliases:[] PriorityClassName: Priority:<nil> DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:<nil> PreemptionPolicy:<nil> Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil HostUsers:<nil> SchedulingGates:[] ResourceClaims:[]}} VolumeClaimTemplates:[{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:data-volume GenerateName: Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:10737418240 scale:0} d:{Dec:<nil>} s:10Gi Format:BinarySI}] Claims:[]} VolumeName: StorageClassName:0xc000b1dc80 VolumeMode:<nil> DataSource:nil DataSourceRef:nil} Status:{Phase: AccessModes:[] Capacity:map[] Conditions:[] 
AllocatedResources:map[] ResizeStatus:<nil>}} {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:logs-volume GenerateName: Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:3221225472 scale:0} d:{Dec:<nil>} s:3Gi Format:BinarySI}] Claims:[]} VolumeName: StorageClassName:0xc000b1dcb0 VolumeMode:<nil> DataSource:nil DataSourceRef:nil} Status:{Phase: AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] ResizeStatus:<nil>}}] ServiceName: PodManagementPolicy: UpdateStrategy:{Type: RollingUpdate:nil} RevisionHistoryLimit:<nil> MinReadySeconds:0 PersistentVolumeClaimRetentionPolicy:nil Ordinals:nil}} MetadataWrapper:{Labels:map[] Annotations:map[]}} AgentConfiguration:{LogLevel: LogFile: MaxLogFileDurationHours:0 LogRotate:<nil> SystemLog:<nil>} AdditionalMongodConfig:{MapWrapper:{Object:map[storage:map[wiredTiger:map[engineConfig:map[journalCompressor:zlib]]] systemLog:map[quiet:true]]}} AutomationConfigOverride:<nil> Prometheus:0xc000282380 AdditionalConnectionStringConfig:{Object:map[]}}, MongoDB.Status: {MongoURI:mongodb://<appname>-mongodb-0.<appname>-mongodb-svc.<namespace>.svc.cluster.local:27017/?replicaSet=<appname>-mongodb Phase:Running Version:6.0.14-<namespace> CurrentStatefulSetReplicas:1 CurrentMongoDBMembers:1 CurrentStatefulSetArbitersReplicas:0 CurrentMongoDBArbiters:0 Message:}	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:22:14.764Z	INFO	controllers/replica_set_controller.go:130	Reconciling MongoDB	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:22:14.764Z	DEBUG	controllers/replica_set_controller.go:132	Validating MongoDB.Spec	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
2024-10-31T10:22:14.764Z	ERROR	controllers/mongodb_status_options.go:104	error validating new Spec: number of arbiters specified (0) is greater or equal than the number of members in the replicaset (0). At least one member must not be an arbiter
github.com/mongodb/mongodb-kubernetes-operator/controllers.messageOption.ApplyOption
	/workspace/controllers/mongodb_status_options.go:104
github.com/mongodb/mongodb-kubernetes-operator/pkg/util/status.Update
	/workspace/pkg/util/status/status.go:25
github.com/mongodb/mongodb-kubernetes-operator/controllers.ReplicaSetReconciler.Reconcile
	/workspace/controllers/replica_set_controller.go:135
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.7/pkg/internal/controller/controller.go:122
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.7/pkg/internal/controller/controller.go:323
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.7/pkg/internal/controller/controller.go:274
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.7/pkg/internal/controller/controller.go:235
2024-10-31T10:22:14.802Z	INFO	controllers/replica_set_controller.go:130	Reconciling MongoDB	{"ReplicaSet": "<namespace>/<appname>-mongodb"}
❯ k get statefulsets
NAME                  READY   AGE
<appname>-mongodb       0/0     38m
<appname>-mongodb-arb   0/0     38m
                                                                                     
❯ k get mdbc
NAME              PHASE    VERSION
<appname>-mongodb   Failed   6.0.14

❯ k get statefulsets -o yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: <appname>-mongodb
  spec:
    persistentVolumeClaimRetentionPolicy:
      whenDeleted: Retain
      whenScaled: Retain
    podManagementPolicy: OrderedReady
    replicas: 0
    revisionHistoryLimit: 10
  status:
    availableReplicas: 0
    collisionCount: 0
    currentRevision: <appname>-mongodb-arb-85977ff9b8
    observedGeneration: 1
    replicas: 0
    updateRevision: <appname>-mongodb-arb-85977ff9b8
Olliewer linked pull request #1636 on Oct 31, 2024 that will close this issue.