Cloudstor volume plugin does not support NVMe block devices on Nitro-based instances #184
Comments
/cc @ddebroy
This should follow the work done in RexRay - any ideas when this might get implemented? Without this change, it holds up use of the new instance types, which puts this plugin on a dead-end path in terms of usage...
@kinghuang appears to have documented this very well, and we have been experiencing the same issue here for the past month. We would like to get onto the latest Nitro-based instances (C5/M5), as it's not sustainable to stay on 4-series instances much further into 2019. This hurts us on performance, which we've proven out on a few clusters that do not have storage requirements, and it has caused us to delay reserving 5-series instance types for the year ahead for much of our workload. Hopeful to get some traction here.
Bump to top... any news on this front?
Hi, |
@joeabbey Any chance we can get a comment from Docker on this? Will Cloudstor be updated to handle NVMe block devices on current generation EC2 instances?
Just reserved a few M5 instances, but noticed this issue. Any workarounds for this bug?
We had to downgrade our m5 to m4 and t3 to t2 to make this work ;-(
I've moved to REX-Ray EBS, but it doesn't handle copying volumes across zones like Cloudstor.
Same - have to choose between using older instances (m4/t2) and getting cross-AZ replication with Cloudstor, or using newer instances (m5/t3) and losing cross-AZ replication with RexRay. Would be good to hear if Docker is planning to support Cloudstor here, otherwise it's on a dead-end path...
I'll migrate to REX-Ray instead, since we can't downgrade to M4, having just reserved a few M5 instances for 3 years. 👎 Fortunately, the lack of cross-zone volume copying doesn't affect us.
Hi, I found a temporary solution: |
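(The details of that workaround weren't captured in this thread. Purely as a hypothetical illustration of the kind of approach sometimes used for this naming mismatch, and not necessarily what the commenter did, an NVMe device can be symlinked to the traditional device name a plugin expects:)

```sh
# Hypothetical illustration only; not the commenter's confirmed workaround.
# If the plugin expects the volume at /dev/xvdf but the kernel exposed it as
# /dev/nvme1n1, a symlink can bridge the naming difference:
sudo ln -sf /dev/nvme1n1 /dev/xvdf
# Amazon Linux automates this mapping with udev rules that derive the intended
# xvd/sd name from the NVMe controller's vendor-specific identify data.
```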
Is cloudstor no longer being developed? |
We gave up, migrated to Rexray, and accepted the lack of multi-AZ support. Very unfortunate that Docker Inc. didn't at least open-source the plugin if not carrying it forward, as it had some nice features.
We gave up and moved to Rexray as well. It has been a much better experience even though it took longer to get up and running. |
@dodwmd @dodgemich @respectTheCode too bad that no one from Docker can assist. I've also tried getting in touch with @joeabbey et al. to help with updating the Docker version, but no reply whatsoever. It doesn't feel very professional on Docker's side, IMO.
@enbohm did you ever get in contact with anyone? We've looked into using Rexray, but it also looks like it's not actively maintained anymore. We just upgraded from t2 to t3 and prepaid for the t3s for the next year, but now we've run into this issue and I'm running out of options...
@porshkevich Care to explain how you implemented this workaround? Thanks!
Summary
On current generation EC2 instances, EBS volumes are exposed as NVMe block devices, named /dev/nvme0n1, /dev/nvme1n1, and so on. The Cloudstor volume plugin doesn't appear to work correctly with EBS volumes exposed as NVMe block devices.
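To make the naming concrete, here is a minimal shell sketch (assuming shell access to a Nitro-based instance with EBS volumes attached; the device names are examples) showing how the volumes appear and how the EBS volume ID can be read back from the NVMe controller:

```sh
# Minimal sketch, assuming a Nitro-based instance with EBS volumes attached.
# Each EBS volume appears as an NVMe namespace (nvme0n1, nvme1n1, ...), and the
# controller's serial number carries the EBS volume ID (without the dash).
lsblk -d -o NAME,SIZE,TYPE          # e.g. nvme0n1, nvme1n1
for ctrl in /sys/class/nvme/nvme*; do
  printf '%s: model=%s serial=%s\n' \
    "${ctrl##*/}" "$(cat "$ctrl/model")" "$(cat "$ctrl/serial")"
done
```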
Expected behaviour
Cloudstor should be able to handle EBS volumes exposed as NVMe block devices.
Actual behaviour
An error occurs when mounting Docker volumes backed by EBS volumes that are exposed as NVMe block devices.
Information
Cloudstor correctly creates and attaches EBS volumes, but then cannot mount the volumes into containers.
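A quick way to see the mismatch on an affected node (a sketch, assuming the EBS volume has just been attached by Cloudstor; the device names are examples, and the idea that the plugin looks for a traditional /dev/xvd*-style name is an inference, not confirmed here):

```sh
# Sketch: the attached EBS volume is visible only under its NVMe name...
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT      # shows nvme1n1, but no xvd* entry
# ...and no traditional block-device node exists for it:
ls /dev/xvd* /dev/sd[b-z] 2>/dev/null || echo "no xvd*/sd* device nodes found"
```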
Steps to reproduce the behavior

1. Run Docker swarm nodes on a Nitro-based instance type such as m5.large.
2. Verify that the Cloudstor plugin is enabled: docker plugin inspect --format '{{ .Enabled }}' cloudstor:aws
3. Create a Cloudstor volume backed by EBS and attempt to mount it in a container; the mount fails (see the sketch below).
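A hypothetical end-to-end sketch of those steps (the volume name nvmetest is made up; backing, size, and ebstype are documented Cloudstor volume options; the pass/fail behaviour across instance generations is as reported in this thread):

```sh
# Hypothetical reproduction sketch, run on a Docker for AWS swarm whose nodes
# are Nitro-based (e.g. m5.large).
docker plugin inspect --format '{{ .Enabled }}' cloudstor:aws   # should print "true"

# Create an EBS-backed ("relocatable") Cloudstor volume and try to use it:
docker volume create -d "cloudstor:aws" \
  --opt backing=relocatable --opt size=10 --opt ebstype=gp2 nvmetest
docker run --rm -v nvmetest:/data alpine sh -c 'echo ok > /data/ok'
# On m4/t2 nodes this succeeds; on m5/t3 (NVMe) nodes the mount step fails.
```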