Issue:

When running MQ with the following properties:

```yaml
mq:
  version: 9.1.3.0
  # Set to True if running MQ in HA mode
  useConnectionNameList: true
  tlsSecretName: 'spm-dev01-mq-secret'
  queueManager:
    name: 'QM1'
    secret:
      # name is the secret that contains the 'admin' user password and the 'app' user password to use for messaging
      name: ''
      # adminPasswordKey is the secret key that contains the 'admin' user password
      adminPasswordKey: 'adminPasswordKey'
      # appPasswordKey is the secret key that contains the 'app' user password
      appPasswordKey: 'appPasswordKey'
  metrics:
    enabled: false
    resources: {}
  multiInstance:
    cephEnabled: false
    storageClassName: 'nfs'
    nfsEnabled: true
    nfsIP: 'fs-xxxxxxxx.efs.eu-west-2.amazonaws.com'
    nfsFolder: 'spm-dev01'
```
When the `curam-mq` and `rest-mq` pods start, they attempt to mount the AWS EFS file system, and Kubernetes (EKS) returns the following error:
```
Warning  FailedMount  2m31s  kubelet, ip-100-64-18-180.eu-west-2.compute.internal  MountVolume.SetUp failed for volume "spm-dev01-curam-pv-qm" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/11e52fd5-fb8c-40b6-9cf7-b252f1f4e1ac/volumes/kubernetes.io~nfs/spm-dev01-curam-pv-qm --scope -- mount -t nfs -o hard,nfsvers=4.1,noresvport,retrans=2,rsize=1048576,timeo=600,wsize=1048576 fs-xxxxxxx.efs.eu-west-2.amazonaws.com:/spm-dev01/curam /var/lib/kubelet/pods/11e52fd5-fb8c-40b6-9cf7-b252f1f4e1ac/volumes/kubernetes.io~nfs/spm-dev01-curam-pv-qm
Output: Running scope as unit run-12552.scope.
mount.nfs: Connection timed out
```
Solution:

Adding `mountOptions` with the properties recommended here seems to resolve the issue.
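For reference, a sketch of the `mountOptions` list on the PersistentVolume spec, taken from the mount arguments the kubelet was already passing in the error output above:

```yaml
# Recommended NFS options for EFS, as seen in the kubelet's mount arguments
mountOptions:
  - hard
  - nfsvers=4.1
  - noresvport
  - retrans=2
  - rsize=1048576
  - timeo=600
  - wsize=1048576
```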
To make this portable and easy to change for different service providers, I've added the following code to `mqserver/templates/pv-data.yaml`, `mqserver/templates/pv-logs.yaml`, and `mqserver/templates/pv-qm.yaml`:
```yaml
{{- if $.Values.global.mq.multiInstance.nfsMountOptions }}
mountOptions:
{{- range $.Values.global.mq.multiInstance.nfsMountOptions }}
  - {{ . | quote }}
{{- end }}
{{- end }}
```
Adding the following element to the set values then sets the `mountOptions`:
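A sketch of the corresponding values (the `global.mq.multiInstance.nfsMountOptions` path matches what the template above reads; the option list mirrors the mount arguments from the error output):

```yaml
# Assumed values-file layout; equivalent to passing
# --set global.mq.multiInstance.nfsMountOptions={hard,nfsvers=4.1,...}
global:
  mq:
    multiInstance:
      nfsMountOptions:
        - hard
        - nfsvers=4.1
        - noresvport
        - retrans=2
        - rsize=1048576
        - timeo=600
        - wsize=1048576
```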