
Wrong Error Returned When Volume is Attached #515

Closed
kainoaseto opened this issue May 30, 2020 · 9 comments · Fixed by #698
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

@kainoaseto

kainoaseto commented May 30, 2020

/kind bug

What happened?
When an EBS volume is already attached to another EC2 instance, ControllerPublishVolume returns the ALREADY_EXISTS error. It seems like FAILED_PRECONDITION should be returned instead, because the node the volume is attached to does not match the node_id in the request.

CSI Spec: https://github.com/container-storage-interface/spec/blob/master/spec.md#controllerpublishvolume-errors

What you expected to happen?
I expected a FAILED_PRECONDITION error to be returned, with ALREADY_EXISTS reserved for the case where the volume is already attached to my node/EC2 instance.

How to reproduce it (as minimally and precisely as possible)?

  1. Attach an EBS volume to an EC2 instance using any method; the easiest is likely to call `CreateVolume` and go through the normal attachment process, or to schedule a job.

  2. Call the ControllerPublishVolume RPC with the same volume_id but a different node_id than the existing attachment (see the sketch after this list).
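
For illustration, a minimal sketch of step 2 using the CSI Go protobuf bindings against the driver's controller socket (the socket path and the volume/instance IDs are placeholders, not values from this issue):

```go
package main

import (
	"context"
	"fmt"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

func main() {
	// Placeholder socket path; point this at wherever the controller plugin listens.
	conn, err := grpc.Dial("unix:///var/lib/csi/sockets/pluginproxy/csi.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// vol-aaa is assumed to already be attached to some other instance;
	// i-bbb is a different node, so per the spec this call should fail
	// with FAILED_PRECONDITION, but today it returns ALREADY_EXISTS.
	resp, err := csi.NewControllerClient(conn).ControllerPublishVolume(
		context.Background(),
		&csi.ControllerPublishVolumeRequest{
			VolumeId: "vol-aaa",
			NodeId:   "i-bbb",
			VolumeCapability: &csi.VolumeCapability{
				AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
				AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
			},
		})
	fmt.Println(resp, err)
}
```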

Anything else we need to know?:
I've located the line that produces this problem here:

if awsErr.Code() == "VolumeInUse" {

It would be ideal to keep returning ALREADY_EXISTS, but to first check which node the volume is attached to: if it matches the requested node_id, return ALREADY_EXISTS; otherwise return FAILED_PRECONDITION.
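
As a rough sketch of that check (not the driver's actual code; the helper name and wiring are hypothetical), the VolumeInUse branch could describe the volume and compare its current attachment to the requested node before choosing the gRPC code:

```go
package cloud

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// publishErrorForInUseVolume is a hypothetical helper: called after EC2
// returns VolumeInUse, it looks up the volume's current attachments and
// picks the error code the CSI spec calls for.
func publishErrorForInUseVolume(svc *ec2.EC2, volumeID, nodeID string) error {
	out, err := svc.DescribeVolumes(&ec2.DescribeVolumesInput{
		VolumeIds: []*string{aws.String(volumeID)},
	})
	if err != nil || len(out.Volumes) == 0 {
		return status.Errorf(codes.Internal, "could not describe volume %s: %v", volumeID, err)
	}
	for _, att := range out.Volumes[0].Attachments {
		if aws.StringValue(att.InstanceId) == nodeID {
			// Already attached to the node the caller asked for.
			return status.Errorf(codes.AlreadyExists, "volume %s is already attached to node %s", volumeID, nodeID)
		}
	}
	// Attached, but to a different instance: the spec's
	// "volume published to another node" case.
	return status.Errorf(codes.FailedPrecondition, "volume %s is attached to another node", volumeID)
}
```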

Environment

  • Kubernetes version (use kubectl version): N/A (Using Nomad v0.11.2)
  • Driver version: v0.6.0-dirty
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 10, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 10, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@AndyXiangLi
Contributor

/reopen
/assign
looking into this

@k8s-ci-robot
Contributor

@AndyXiangLi: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
/assign
looking into this

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ayberk
Contributor

ayberk commented Jan 13, 2021

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Jan 13, 2021
@k8s-ci-robot
Contributor

@ayberk: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ayberk
Contributor

ayberk commented Jan 13, 2021

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jan 13, 2021