Feature - Support for Pod-Level Identity Using AWS Credentials #334
Comments
Hi @aminmr, thanks for the feature request! We'll be happy to gather more feedback on it before starting the implementation. Generally, the use of static credentials is not recommended, but we do recognize that there are use cases where their usage is required. With IRSA, service accounts are annotated with the role they need to assume. If we decide to proceed with the implementation, we'll need to define a way of associating static credentials with the service account.
Hi @vladem, thanks for the response! I understand the concerns about static credentials. If you decide to move forward, I'm happy to help define how to associate them with the service account through annotations. Thanks!
We have successfully used pod-level IRSA to mount S3 buckets on EKS clusters. But as mentioned before, some S3-compatible storage solutions, such as Ceph or OCI buckets, cannot use this approach. Instead of configuring a single set of AWS credentials at the CSI driver level, an alternative could be configuring the AWS credentials per PersistentVolume. I am proposing the use of the `nodePublishSecretRef` field in the PersistentVolume object to specify the Kubernetes Secret that stores the required access key and secret key. We can introduce a new authenticationSource (in addition to `driver` and `pod`) called `persistentVolume`.
Workflow
CSI Driver Changes
Proposed Patch
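A PersistentVolume following the proposal above could look roughly like this. This is an illustrative sketch only: the `persistentVolume` value for `authenticationSource` is the proposed (not yet implemented) addition, and the bucket and secret names are placeholders.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: my-bucket                   # placeholder bucket name
      authenticationSource: persistentVolume  # proposed new value (assumption)
    nodePublishSecretRef:       # existing CSI field, reused to reference the credentials
      name: s3-credentials      # hypothetical Secret holding the access/secret keys
      namespace: default
```

`nodePublishSecretRef` is already part of the CSI PersistentVolume spec, so the driver would only need to learn to read credentials from the referenced Secret when `authenticationSource` is `persistentVolume`.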
@dannycjones @unexge What do you folks think about this approach of providing secrets per PV? The design is simple enough to build on the driver's existing capabilities. The changes captured in the patch are also small and could be contributed by the community in a feature branch.
Hey @pawanpnvda, in AWS we do not recommend the use of long-term credentials, but I understand this is needed outside AWS for S3-compatible services. We'll discuss it with our product team.
Thanks for opening this feature request. As previously mentioned, we do not recommend the use of long-term AWS credentials:
https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html. In general, our thinking about supporting S3-compatible products is captured in Mountpoint's README.
We decided to reconsider our decision for non-AWS/EKS use cases. While we maintain our stance against using long-term AWS credentials in EKS environments, we will consider adding an opt-in Helm flag that allows users outside of AWS to enable long-term credentials when they cannot obtain short-term credentials.
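If such an opt-in flag were added, enabling it might look something like this in the chart's values file. The flag name here is purely hypothetical; no such flag exists in the chart today.

```yaml
# values.yaml (hypothetical flag, not part of the current chart)
experimental:
  allowLongTermCredentials: true
```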
Thank you!
Hey @pawanpnvda, we'll look into the proposal as part of our design review. From an initial look, it seems reasonable to me.
/feature
Is your feature request related to a problem? Please describe.
The Mountpoint for Amazon S3 CSI Driver currently supports pod-level credentials only through IRSA. This creates limitations for users of S3-compatible storage solutions, such as Ceph. These users cannot utilize pod-level credentials effectively and are restricted to driver-level credential configurations.
Describe the solution you'd like in detail
I would like the ability to define pod-level credentials using the following AWS environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
For security purposes, these variables could be passed to the pods via Kubernetes secrets. This feature would improve flexibility for S3-compatible storage while allowing credentials to be managed at the pod level.
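For example, the credentials could be stored in a Kubernetes Secret along these lines. The Secret name and the credential values are placeholders, not anything the driver currently consumes.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials   # hypothetical name
  namespace: default
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: AKIAEXAMPLE              # placeholder value
  AWS_SECRET_ACCESS_KEY: example-secret-key   # placeholder value
  AWS_SESSION_TOKEN: example-session-token    # optional
```

Pods (or the driver, on the pod's behalf) could then consume these keys as environment variables, keeping the credential material out of pod specs and driver-level configuration.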
Describe alternatives you've considered
The only current alternative is to configure credentials at the driver level. However, this approach lacks the granularity needed for pod-level configurations and does not meet the requirements for non-EKS Kubernetes environments or S3-compatible backends.
I am also willing to contribute to the development of this feature if guidance and support from the maintainers are available.