Add docs for analytics persistence #112

Closed
ukclivecox opened this issue Mar 7, 2018 · 5 comments

@ukclivecox
Contributor

Options for creating various volumes compatible with Prometheus
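For context, the end state is something like the sketch below: the Prometheus data directory backed by a PersistentVolumeClaim. This is an illustrative sketch only; the claim name (seldon-claim), labels and mount path are assumptions, not the actual chart values.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--storage.tsdb.path=/prometheus/"
          volumeMounts:
            # Prometheus writes its TSDB here, so this is the mount that needs to persist
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        # Back the storage volume with a claim instead of an emptyDir
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: seldon-claim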

@Raab70

Raab70 commented Mar 7, 2018

I know I would like to see setups using GlusterFS as well as a static ZFS server. https://cloud.google.com/solutions/filers-on-compute-engine#summary_of_file_server_options
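For the GlusterFS option, the PV would look roughly like this sketch (the Endpoints name glusterfs-cluster and the volume name myvol are assumptions; a matching Endpoints object listing the Gluster node IPs would also need to be created separately):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    # Endpoints object listing the GlusterFS node IPs (created separately)
    endpoints: glusterfs-cluster
    # Name of the GlusterFS volume to mount
    path: myvol
    readOnly: false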

@Raab70

Raab70 commented Mar 20, 2018

Any update on this? Specifically the NFS version. I followed this example and was able to get it working fine, but when I then created a PV and PVC for Seldon to use, the mount fails.

My Seldon PV/PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: seldon-volume
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.35.240.4
    path: "/seldon/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: seldon-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: ""

This is just a combination of the built-in PV and PVC from the example.

But this is the output upon deployment:

Type     Reason                 Age   From                                                 Message
  ----     ------                 ----  ----                                                 -------
  Normal   Scheduled              3m    default-scheduler                                    Successfully assigned prometheus-deployment-7b44cd4c5f-r2f9r to gke-dev-cluster-default-pool-99eadb69-pnkk
  Normal   SuccessfulMountVolume  3m    kubelet, gke-dev-cluster-default-pool-99eadb69-pnkk  MountVolume.SetUp succeeded for volume "prometheus-config-volume"
  Normal   SuccessfulMountVolume  3m    kubelet, gke-dev-cluster-default-pool-99eadb69-pnkk  MountVolume.SetUp succeeded for volume "prometheus-rules-volume"
  Normal   SuccessfulMountVolume  3m    kubelet, gke-dev-cluster-default-pool-99eadb69-pnkk  MountVolume.SetUp succeeded for volume "default-token-fwscs"
  Warning  FailedMount            1m    kubelet, gke-dev-cluster-default-pool-99eadb69-pnkk  Unable to mount volumes for pod "prometheus-deployment-7b44cd4c5f-r2f9r_default(628fc942-2be6-11e8-a139-42010a800233)": timeout expired waiting for volumes to attach/mount for pod "default"/"prometheus-deployment-7b44cd4c5f-r2f9r". list of unattached/unmounted volumes=[prometheus-storage-volume]
  Warning  FailedMount            1m    kubelet, gke-dev-cluster-default-pool-99eadb69-pnkk  MountVolume.SetUp failed for volume "seldon-volume" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/628fc942-2be6-11e8-a139-42010a800233/volumes/kubernetes.io~nfs/seldon-volume --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs 10.35.240.4:/seldon/ /var/lib/kubelet/pods/628fc942-2be6-11e8-a139-42010a800233/volumes/kubernetes.io~nfs/seldon-volume
Output: Running scope as unit run-ra64210a7e6d44594ae08976ea0249a75.scope.
Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs 10.35.240.4:/seldon/ /var/lib/kubelet/pods/628fc942-2be6-11e8-a139-42010a800233/volumes/kubernetes.io~nfs/seldon-volume]
Output: mount.nfs: Connection timed out

Everything looks the same between the nfs PV and PVC from the demo and my seldon-volume and seldon-claim:

$ kubectl get pvc
NAME                               STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs                                Bound     nfs                                        1Mi        RWX                           17s
nfs-pv                             Bound     pvc-ec8c3b52-2be9-11e8-a139-42010a800233   200Gi      RWO            standard       9m
seldon-claim                       Bound     seldon-volume                              50Gi       RWX                           9m

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                      STORAGECLASS   REASON    AGE
nfs                                        1Mi        RWX            Retain           Bound     default/nfs                                                         2s
pvc-ec8c3b52-2be9-11e8-a139-42010a800233   200Gi      RWO            Delete           Bound     default/nfs-pv                             standard                 8m
seldon-volume                              50Gi       RWX            Retain           Bound     default/seldon-claim                                                8m

Any ideas?

@ukclivecox
Contributor Author

Not sure. Looks like an NFS error.
I see you have nfs-pv and seldon-claim in your pvc list above?
Are you able to mount the NFS volume onto a busybox pod to ensure it's working OK?
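For anyone following along, a minimal test pod for that check could look like this sketch (assuming the claim is named seldon-claim as in the manifest above):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/test
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: seldon-claim

Then kubectl exec -it nfs-test -- sh and try writing a file under /mnt/test to confirm the mount works.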

@Raab70

Raab70 commented Mar 20, 2018

Yes, I went all the way through the example above, accessing that NFS server from busybox and confirming that it was working.

As an update, I followed this tutorial for setting up a single-node filer and got slightly further. The issue above is definitely DNS, because when I use the IP of the single-node filer in the same PV and PVC as above, it mounts properly. Since this is a working solution, I say go ahead and close; this doesn't appear to be a Seldon + NFS issue but an issue with that example NFS deployment I used.
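For reference, the workaround amounts to putting the service's ClusterIP (or the filer node's internal IP), rather than a DNS name, in the PV, since the kubelet performs the NFS mount on the host, where cluster DNS may not resolve. A sketch, assuming the filer sits behind a Service named nfs-server (an assumed name; adjust to whatever your setup creates):

# Find the ClusterIP of the Service in front of the NFS filer
kubectl get svc nfs-server -o jsonpath='{.spec.clusterIP}'

and use that IP in the nfs.server field of the PV above instead of a hostname.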

@ukclivecox
Contributor Author

OK. If you can create a working example, it would be great to add it to our docs.
