Shared Persistent Volume #471
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ to sign the CLA. Once you've signed, please reply here (e.g. "I signed it!") and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Hi @kevtaylor, thanks for your contribution 👍
@@ -775,6 +775,9 @@ worker:
  # See https://github.com/kubernetes-incubator/kube-aws/issues/208 for more information
  #elasticFileSystemId: fs-47a2c22e

  # Create shared persistent volume
  #sharedPersistentVolume: false
Just wondering, not intending to scope-creep this PR: are there any use cases that would lead us to want to create multiple shared persistent volumes, possibly with varying volume names and storage sizes?
sharedPersistentVolumes:
- name: shared-efs
provider: efs
size: 500Gi
- name: shared-ebs
provider: ebs
size: 100Gi
@mumoshu The use case we have is regarding batch processes that we run using the Chronos API. We create indexes which are produced by one batch job and need to be read or published by another job. An EFS volume is mounted across multiple instances, so if we have processes running across a cluster they can all see the shared volume. The current kube-aws offering assumes an existing EFS mount point, so we create a dynamic one and set it up to be available in Kubernetes. The 500Gb limit could be made configurable, but that would bloat the templates, so we figured you could modify it to your own spec.
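For illustration, a shared EFS-backed PersistentVolume of the kind described here could look like the following. This is only a sketch: the volume name, file-system ID, region, and size are hypothetical placeholders, not values taken from this PR (kube-aws derives the actual manifest from the EFS mount point it creates).

```yaml
# Hypothetical sketch of a shared, EFS-backed PersistentVolume.
# fs-12345678 and the DNS name are placeholders, not real values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-efs
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany          # EFS allows concurrent mounts from many nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:                       # EFS is exposed to Kubernetes via the NFS volume source
    server: fs-12345678.efs.us-west-2.amazonaws.com
    path: /
```

The `ReadWriteMany` access mode is what makes the "visible across the whole cluster" use case above work, since EBS-backed volumes only support single-node attachment.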
Thanks @kevtaylor!
@kevtaylor Btw, would you mind signing the CLA before merging this?
@kevtaylor Would you mind running
@mumoshu We understand the destruction of the volume if you take the whole cluster out, and I would assume that anyone who didn't want that could use the existing process of supplying their own ID. We are in the process of getting the CLA requirements and will push the requested format shortly.
Restart=on-failure
ExecStartPre=/opt/bin/set-efs-pv
ExecStartPre=/usr/bin/systemctl is-active kube-node-taint-and-uncordon.service
ExecStart=/opt/bin/load-efs-pv
Perhaps you're missing an entry for the `/opt/bin/load-efs-pv` script under `write_files:`? `set-efs-pv` does seem to exist, but no `load-efs-pv`.
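As a sketch, the missing entry would follow the usual cloud-config `write_files:` pattern used for the other helper scripts. The script body below is a hypothetical placeholder to show the shape of the entry, not the actual kube-aws implementation, and the manifest path is assumed:

```yaml
# Hypothetical cloud-config fragment. The script body and the manifest
# path are illustrative assumptions, not taken from this PR.
write_files:
  - path: /opt/bin/load-efs-pv
    permissions: 0700
    owner: root:root
    content: |
      #!/bin/bash -e
      # Register the pre-generated PersistentVolume manifest with the API server
      /usr/bin/kubectl apply -f /srv/kubernetes/manifests/efs-pv.yaml
```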
My colleague pushed the missing part up to the commit. Apologies.
Thanks! Once the other minor comments are addressed, I'll be ready to merge this.
Codecov Report
@@ Coverage Diff @@
## master #471 +/- ##
=======================================
Coverage 38.06% 38.06%
=======================================
Files 46 46
Lines 3145 3145
=======================================
Hits 1197 1197
Misses 1752 1752
Partials 196 196
Continue to review full report at Codecov.
@kevtaylor Could you reply with
Hi @jollinshead! Is @kevtaylor your colleague? Then, can you confirm that you have signed the CLA as an organization?
@mumoshu Hi. I am waiting for a representative from our organisation to sign the CLA. I'll post here when I get it. Sorry for the delay, but politics gets in the way of progress :-)
@kevtaylor Got it. Thanks for the reply and sorry for rushing you 👍
I have been approved as an individual.
Thanks! I'd appreciate it if you could resolve conflicts.
@kevtaylor LGTM. Thanks for your contribution 👍
* kubernetes-incubator/master:
  - Don't mount /var/lib/rkt into kubelet to avoid shared bind-mounts propagation
  - Fix to calico configuration file etcd endpoints
  - Fix hyperlink to restore script in Readme.md
  - Reference 'autosave' rather than 'export' in comments of cluster.yaml
  - 'Restore' feature to restore Kubernetes Resources from S3 backup
  - Add missing '/' when constructing the Autosave S3 put path
  - Shared Persistent Volume (kubernetes-retired#471)
  - Fix an incorrect variable name in the e2e/run script
  - Add documentation for administrating etcd cluster. Resolves kubernetes-retired#491
  - Use gzip base64 encoding for customFiles
  - New options: customFiles and customSystemdUnits
  - Add cluster.yaml details for apiEndpointName
  - Fix the dead-lock while bootstrapping etcd cluster when wait signal is enabled. Resolves kubernetes-retired#525
  - Fix elasticFileSystemId to be propagated to node pools. Resolves kubernetes-retired#487
  - Minor fixup for etcd unit files
  - Fix up apiEndpoints.loadBalancer config
- Add the new option `sharedPersistentVolume: true/false` to cluster.yaml
- When enabled, an EFS volume which is available across all zones is created
- A persistent volume definition which derives the EFS mount point is created on Kubernetes
SharedPersistentVolume=true/false in cluster.yaml
Creates a dynamic EFS volume available across all zones
Creates persistent volume definitions for Kubernetes, derived from the EFS mount point
Creates service scripts to create the persistent volume in the cluster
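Once such a shared volume exists, a pod would consume it through a `ReadWriteMany` claim. A hypothetical sketch, with the claim name and size being illustrative assumptions rather than values from this PR:

```yaml
# Hypothetical PersistentVolumeClaim binding to the shared EFS-backed volume.
# The name and requested size are illustrative, not taken from the PR.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-efs-claim
spec:
  accessModes:
    - ReadWriteMany        # multiple pods on different nodes can mount it concurrently
  resources:
    requests:
      storage: 500Gi
```

With the claim bound, batch jobs scheduled anywhere in the cluster can reference `shared-efs-claim` in their pod specs and see the same files, which matches the index-producing/index-consuming workflow described earlier in the thread.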