
efs-provisioner #91

Closed
pierluigilenoci opened this issue Oct 2, 2020 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pierluigilenoci

I am opening this ticket because I wanted information about efs-provisioner. The repo that contained it, https://github.com/kubernetes-retired/external-storage/, has been archived, and I have not found any information about the future of the project. Can you help me? Maybe @wongma7?

Ref: https://github.com/kubernetes-retired/external-storage/tree/master/aws/efs

@wongma7
Contributor

wongma7 commented Oct 2, 2020

I recommend moving to https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner. It is a generic, less opinionated version of the efs-provisioner (it does not try to validate stuff).

@pierluigilenoci
Author

I'll try this new provisioner. Thank you.

@chadlwilson

chadlwilson commented Nov 22, 2020

Hi @wongma7 - do you know of a migration guide, suggestion, or procedure somewhere for transitioning from the efs-provisioner to the nfs-subdir-external-provisioner for those with existing production systems and existing PVCs/PVs?

I'm trying to assess what we might be getting into with such a migration to a supported option (cf. switching to the EFS CSI driver, which is obviously an alternative).

Do you have any useful experience to share from your perspective @pierluigilenoci ?

@pierluigilenoci
Author

@chadlwilson I'm sorry, but I haven't started migrating to the new provisioner yet, so my experience is limited. I am waiting for the chart to be migrated before starting the work. Ref: https://github.com/helm/charts/issues/21103

@verwilst is the chart maintainer, maybe he can help you better.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 21, 2021
@pierluigilenoci
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 22, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 23, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 22, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@chadlwilson

chadlwilson commented Jul 22, 2021

I had forgotten about this issue, but if anyone else comes across it: in our case we found the migration to nfs-subdir-external-provisioner relatively painless. We needed to install it in parallel with a different storage class and create new PVCs for the relevant apps, but we were able to do so on the same underlying EFS volume, migrate the apps across, and then remove the old PVCs and PVs before uninstalling the efs-provisioner.

In our case we didn't have any data stored there that we couldn't afford to lose, which made things easier.
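For anyone attempting the same thing, a rough sketch of the parallel install might look like the below. These are not our exact commands; the release name, namespace, EFS DNS name and storage class name are placeholders to adapt to your own setup.

```sh
# Sketch only: install the new provisioner alongside efs-provisioner, pointing at the
# same underlying EFS filesystem but registering a separate storage class.
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm install nfs-subdir-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace storage \
  --set nfs.server=fs-12345678.efs.eu-west-1.amazonaws.com \
  --set nfs.path=/ \
  --set storageClass.name=nfs-subdir
```

New PVCs for the relevant apps then just reference `storageClassName: nfs-subdir`, while the old efs class keeps serving the existing PVs until everything has been moved across.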

A couple of differences to note:

  • no longer requires AWS creds (so no need for kube2iam etc.)
  • I found a hack to allow the new provisioner to 'adopt' the previous volumes under the efs storage class, which seemed to work fine, BUT it involved losing control of configuration values in the storage class (since these are immutable) and did not seem like a good idea, so we created a new storage class and new volumes in the end.
  • the new provisioner is distroless, so better from a security perspective
  • the new provisioner supports archive-on-delete, which has a different default behaviour to the old one (see the sketch after this list)
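To illustrate the last point, a storage class for the new provisioner might look roughly like the following. The provisioner name depends on how the chart was installed, and the archive-on-delete default can differ between versions, so treat this as a sketch rather than a drop-in manifest.

```sh
# Sketch only: the provisioner name must match what the chart registered, and
# archiveOnDelete controls whether released subdirs are renamed or deleted.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-subdir
provisioner: cluster.local/nfs-subdir-external-provisioner  # placeholder
parameters:
  archiveOnDelete: "false"  # set to "true" to archive instead of delete on reclaim
reclaimPolicy: Delete
EOF
```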

A hacky migration strategy for those who cannot lose data may be to exec into the old provisioner and copy files from the old volume subdir to the new subdir, being careful about permissions (if the provisioner shares the EFS volume it can see the subdirs created by the other provisioner, and it has a shell in the container).
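Very roughly, that copy step could look like this. The namespace, label selector and the /persistentvolumes mount path are assumptions about a fairly default deployment, so verify them against your own setup before running anything.

```sh
# Sketch only: copy the subdir backing an old PV into the subdir created for the
# new PVC, preserving ownership and permissions. Selector, namespace and mount
# path are assumptions; check them against your own deployment.
OLD_POD=$(kubectl -n storage get pods -l app=efs-provisioner -o name | head -n 1)

kubectl -n storage exec "$OLD_POD" -- \
  cp -a /persistentvolumes/old-pvc-subdir/. /persistentvolumes/new-pvc-subdir/
```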

@pierluigilenoci
Author

pierluigilenoci commented Jul 22, 2021

@chadlwilson to be honest, we have migrated to the Amazon EFS CSI Driver instead.
Ref: https://github.com/kubernetes-sigs/aws-efs-csi-driver
