Output: mount: unknown filesystem type 'glusterfs' #1709

Closed
daveoconnor opened this issue Jul 17, 2017 · 10 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

daveoconnor commented Jul 17, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Minikube version (use minikube version):
minikube version: v0.20.0

Environment:

  • OS (e.g. from /etc/os-release): Ubuntu 16.04.2 LTS
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v1.0.6.iso
  • Install tools:
  • Others:

What happened:

Received the following error on StatefulSet start-up:

SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "certificates-storage-backend-development-0", which is unexpected.
MountVolume.SetUp failed for volume "kubernetes.io/glusterfs/0fb66a0a-6aae-11e7-999d-080027a863a3-certificates-storage" (spec.Name: "certificates-storage") pod "0fb66a0a-6aae-11e7-999d-080027a863a3" (UID: "0fb66a0a-6aae-11e7-999d-080027a863a3") with: glusterfs: mount failed: mount failed: exit status 32
Mounting command: mount
Mounting arguments: 10.0.0.111:/certificates-volume /var/lib/kubelet/pods/0fb66a0a-6aae-11e7-999d-080027a863a3/volumes/kubernetes.io~glusterfs/certificates-storage glusterfs [log-level=ERROR log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/certificates-storage/backend-development-0-glusterfs.log]
Output: mount: unknown filesystem type 'glusterfs'
the following error information was pulled from the glusterfs log to help diagnose this issue: glusterfs: could not open log file for pod: backend-development-0

What you expected to happen:
Mount the volume within the pod
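
For reference, the failing mount is executed by the kubelet on the minikube VM itself, not inside the container, so the missing mount helper can be confirmed directly on the node. A quick (assumed, untested) check, using the same minikube ssh <command> form as the ISO check above:

minikube ssh 'which mount.glusterfs'   # prints nothing on the stock ISO, hence "unknown filesystem type 'glusterfs'"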

How to reproduce it (as minimally and precisely as possible):
01-gluster-storage-class.yml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-standard
provisioner: kubernetes.io/glusterfs
parameters:
  endpoint: "gluster-cluster"
  resturl: "http://10.0.0.111:8081"

02-gluster-endpoint.yml

apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-cluster
subsets:
  - addresses:
      - ip: 10.0.0.111
    ports:
      - port: 1 # port number is ignored, but must be legal
        protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  name: gluster-cluster
spec:
  ports:
    - port: 1 # port number is ignored but must be legal

03-persistent-volume.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: certificates-storage-claim
spec:
  capacity:
    storage: 20Mi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: /certificates-storage
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gluster-standard

04-persistent-volume-claim.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certificates-storage-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-standard
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Mi

05-statefulset.yml

apiVersion: v1
kind: Service
metadata:
  name: backend-development
  labels:
    app: backend-development
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: backend-development

---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: backend-development
spec:
  serviceName: "backend-development"
  replicas: 3
  template:
    metadata:
      labels:
        app: backend-development
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: backend-development
          image: mount-test-gluster
          imagePullPolicy: Never
          ports:
            - containerPort: 80
              name: web
          securityContext:
            privileged: true
          volumeMounts:
            - name: certificates-storage
              mountPath: /etc/secrets
      volumes:
        - name: certificates-storage
          persistentVolumeClaim:
            claimName: certificates-storage-claim

glusterfs-client-install.sh

#!/bin/bash
# Add the upstream GlusterFS apt repository and install the client.
GLUSTER_VERSION='3.10'
wget -O - http://download.gluster.org/pub/gluster/glusterfs/${GLUSTER_VERSION}/rsa.pub | apt-key add -
echo deb http://download.gluster.org/pub/gluster/glusterfs/${GLUSTER_VERSION}/LATEST/Debian/stretch/apt stretch main > /etc/apt/sources.list.d/gluster.list
apt update && apt install -y glusterfs-client

Dockerfile:

FROM debian:stretch
MAINTAINER me@example.org

RUN apt update && apt upgrade -y && apt install -y nginx wget gnupg2 apt-transport-https

COPY glusterfs-client-install.sh /opt/
RUN /opt/glusterfs-client-install.sh

CMD ["nginx", "-g", "daemon off;"]

mkdir gluster-test-case
Add all of the above files to the gluster-test-case directory, changing IP addresses to point at your gluster service as appropriate.
cd gluster-test-case
chmod +x glusterfs-client-install.sh
docker build -t mount-test-gluster .
kubectl create -f 01-gluster-storage-class.yml
kubectl create -f 02-gluster-endpoint.yml
kubectl create -f 03-persistent-volume.yml
kubectl create -f 04-persistent-volume-claim.yml
kubectl create -f 05-statefulset.yml
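
Once the last step has run, something like the following (verification commands assumed, not part of the original steps) should surface both the unbound-claim warning and the mount failure:

kubectl get pv,pvc                          # the claim should report Bound before the pod can mount it
kubectl describe pod backend-development-0  # the glusterfs mount error shows up under Events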

Anything else we need to know:

If I comment out the volume section of the statefulset and then get the pods up and running, I can docker exec into the container and mount manually as expected.
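
For reference, the manual mount that works from inside the (privileged) container is roughly the following, with the server and volume names taken from the error output above:

mount -t glusterfs 10.0.0.111:/certificates-volume /etc/secrets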

r2d4 (Contributor) commented Jul 17, 2017

We don't have glusterfs installed in the image that we use. However, we can add it.

r2d4 added the iso/minikube-iso and kind/feature labels Jul 17, 2017
@daveoconnor (Author)

Thanks, that'd be great. Is there a way to install it manually as a workaround in the meantime?

Is it a bug that these are said to be supported by kubernetes but not minikube?

Thanks again.

r2d4 (Contributor) commented Jul 17, 2017

You may be able to install it manually inside the minikube VM by working in the minikube ssh shell. I'm not too familiar with GlusterFS, but it looks like the nodes need to be part of a GlusterFS server cluster, so this would entail running a GlusterFS server inside the minikube VM.

As for whether this is a feature gap or a bug: as far as I know, we can't offer any guarantees on features that require special support on the nodes themselves (GPUs, certain cloud features, etc.).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jan 1, 2018
@huguesalary (Contributor)

I also need glusterfs to be available in minikube.

I've been trying to find a way to install it, but I couldn't find any package manager in the VM. Do I need to mess with Buildroot to install glusterfs?

@daveoconnor (Author)

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label Jan 3, 2018
Jerral3 commented Mar 2, 2018

Is there any known workaround for this issue? It seems like a real pain to try to install glusterfs-client in the minikube TinyLinux ISO.

The only solution I see would be to install glusterfs in the pod and then mount the volume manually with a postStartHook, but it feels like it's not the right time to do it in the Pod lifecycle...

Any suggestion?
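
For completeness, a rough sketch of that workaround against the StatefulSet above (untested; it assumes the image already contains glusterfs-client as in the Dockerfile, the container stays privileged, and the server/volume from the original error output):

containers:
  - name: backend-development
    image: mount-test-gluster
    securityContext:
      privileged: true
    lifecycle:
      postStart:
        exec:
          command:
            - /bin/sh
            - -c
            - mount -t glusterfs 10.0.0.111:/certificates-volume /etc/secrets

As noted, postStart gives no ordering guarantee relative to the container's entrypoint, so the application may start before the mount exists.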

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label May 31, 2018
@huguesalary (Contributor)

/remove-lifecycle stale

I would really like to see this happen.

k8s-ci-robot removed the lifecycle/stale label Jun 1, 2018
dlorenc (Contributor) commented Jul 17, 2018

I think this was fixed in #2925

dlorenc closed this as completed Jul 17, 2018