Busy azure-disk regularly fail to mount causing K8S Pod deployments to halt. #12
Sorry for the slow response time. There is a known performance issue here with the Azure Linux drivers. The Kubernetes repo might have more expertise on this, but I do know that Kubernetes 1.6 has a speed-up for attaching the disks. A workaround we have asked people to use in the short term is to preformat the disks to ext4. How to do that via the PVC, though, I'm not the most familiar with. @colemickens did I get this right? |
It's not the ext4 format being slow (which is addressed in 1.5.3+), since the PV was provisioned and in use; rather, the disk wasn't able to be detached and reattached when the Pod was rescheduled. @dbalaouras, I assume this is continuing to happen, even after 4-5 minutes? If not, as @JackQuincy indicated, the attach code has seen improvements in recent releases (1.5.3+, 1.6), and a full rewrite is coming in the future: kubernetes/kubernetes#41950 This should probably be filed as an upstream Issue on Kubernetes and diagnosed further there. @dbalaouras can you do that and link it here? It would be helpful if you can repro and include these things:
The instanceViews can be collected from https://resources.azure.com. Navigate through the hierarchy to the VM and then go to the child node "instanceView". You can also get it from the CLI: |
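With the cross-platform Azure CLI 2.0 ("az"), a command along these lines (resource group and VM name are placeholders) returns the same instanceView data:

```sh
# Dump the instanceView of an agent VM (placeholders for resource group / VM name)
az vm get-instance-view \
  --resource-group <resource-group> \
  --name <k8s-agent-vm-name> \
  --query instanceView
```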
Hi @colemickens and @JackQuincy - thanks for the feedback. Indeed, this error happens every 4-5 minutes; i.e. I see the following log:
I can also verify that the Pod is deployed successfully on a different node, right after I delete it and let the ReplicationController reschedule it. With respect to the "VM where the Pod was originally scheduled", I don't see the Pod being re-scheduled when this error occurs. Here's the event log of a problematic Pod, 20 seconds after I launched it:
And the full description 10 minutes later:
BTW, I just noticed that all the failing Pods I see in my cluster have the same error and try to use the same volume:
Then I inspected the VM in the Azure Portal and saw this:
I'm also attaching an Instance View of the VM where the failing Pods were scheduled (with some data removed): k8s-agent-1DA8A8DF-2-instance-view.json.zip
At this point, I'm not quite sure if this is a k8s issue; not sure what to report to Kubernetes. It looks more like an issue w/ the AzureDisk Volume Plugin, I guess? |
Exactly the same problem here! |
What version of Kubernetes are your clusters running? |
I need the instanceView from the machine where the disk was previously attached, otherwise I have no idea what's going on - the error message indicates that the detach never finished from the other node, and I need the instanceView to know if that's true. Also, please let me know the Kubernetes cluster version. I'm still confused about the timeline of events. Are the "DiskBlobNotFound" error messages occurring at the exact same time as the "failed to attach" errors? Is the disk actually detached from wherever it was scheduled? Please review my post above and provide all of the requested information. I cannot troubleshoot this any further without it, especially since I can't repro. This also should be filed as an Issue against the Kubernetes repo. Wait... are you attempting to run multiple pods with the same disk right now??? |
@colemickens no, I am not attempting to run multiple pods with the same disk. I know this is not possible; besides, I need a separate disk mounted to each single Pod, and I am using dynamic provisioning. For each deployment, I create a brand new (differently named each time) PersistentVolumeClaim using a StorageClass with Provisioner
After a few deployments, I start seeing this error... I can't be sure to which pod (if any) the disk was originally attached. During my initial (and quick) investigation, I could not find the disk mentioned in the error msg mounted to any other node. With respect to the
So let me do this maybe: I will try to reproduce in a fresh cluster and keep records of the disks that get created and the nodes they get mounted to. In the meantime, if you need me to collect any additional info, or if you have any hints on what could cause this issue, please let me know. |
"After a few deployments" means what? You spun up a bunch more PVC and Pods and then ran into it? Or you did a rolling update, and the Pod got rescheduled? If you're in Slack, please ping me, might be able to dig in a bit faster. |
@colemickens thx much for the help so far. Posting some updates here to keep this thread synced: Each "deployment" in my use case includes:
PVs usually get deleted after I uninstall the deployment (i.e. delete all the above resources ^^). I did notice the following though:
Finally: I am trying to reproduce the issue in a new cluster that runs k8s 1.5.3, but everything seems very stable so far! I will post any updates here. |
Hi @dbalaouras, does this still need to be kept open? If you've not repro'd, please close it; otherwise give an update. If I don't hear back, I'll go ahead and close this out. Thanks! |
@colemickens sure, I'll close it now and re-open it only if it appears again in 1.5.3. Thanks! |
I have this issue right now. The blob is not mounted on any agent, but it has failed to mount for 8 hours straight.
|
@zecke Would you be able to use https://resources.azure.com to look at the VMs in your cluster? Particularly to look at the
I'm wondering if a detach operation failed, leaving the disk leased out, but without Kubernetes having any way of knowing that it needs to try to re-detach from the VM. If you do find such a VM, obviously please let us know, but you should be able to mitigate by making any sort of modification to the VM. Something as simple as adding/removing a tag in the Portal can trigger it to complete the detachment operation. If this is the case, we can try to get some details from you privately to understand why the detach failed initially. Thank you very much for any information you can provide. |
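For example, a sketch with the Azure CLI and a throwaway tag (placeholder names; any benign change to the VM model should have the same effect as editing a tag in the Portal):

```sh
# Touch the VM model by adding and then removing a throwaway tag
az vm update -g <resource-group> -n <vm-name> --set tags.nudge=1
az vm update -g <resource-group> -n <vm-name> --remove tags.nudge
```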
Feel free to ping me out of band; a support request has been filed too. It is a Galera cluster with three disks, three k8s agents and three RCs/SVCs. Two disks are attached from the resources view's point of view, while nothing is mounted on the agents. The specific disk above is not mounted and can't be mounted. So apparently storage with k8s/Linux is not reliable. What other options do I have?
|
Just for the record, due to out-of-band communication: the upstream developer(s) have no interest in fixing it. To me it seems that if you want Kubernetes with persistent storage, then right now don't use Azure. I will not comment on this issue anymore. |
@colemickens @zecke I am having this problem as well. I am using the However, when I deploy my pod I get this error:
The reason for this is a
This blob isn't in use. I deleted the PVC, the PV, the Deployment and then the Page Blob from the storage account as well. Any advice on what to do? Here are the versions I am using as well:
|
I can assure everyone we're very interested in this being a smooth experience. I apologize for any miscommunication; I simply wanted to ensure the discussion was here on GitHub instead of in personal email. I will personally try to assist with driving this issue. @zecke Unfortunately, without the information I've requested above or access to your account, I will be unable to make any further guesses or recommendations. If you have opened an Issue with Azure Support, it will eventually make its way to our team, where we can potentially look into your subscription on your behalf. If you'd like me to try to assist sooner, please try to collect the requested information from https://resources.azure.com. @zecke To give a bit more detail on my suspicions... I believe that the disk is being unmounted but not detached. This would explain the behavior you're seeing - the disk is not mounted, but Azure complains that a lease is taken, or that a detach is currently in progress. Unfortunately, as I mentioned, I can only confirm this if you collect information from the
(Getting your server version would be helpful too. Newer versions have a number of stability improvements.) |
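For anyone who wants to check the lease theory directly, the lease state of the VHD page blob backing the volume can be inspected with the Azure CLI (a sketch; storage account, container and blob names are placeholders, and the storage account key must be supplied or exported):

```sh
# Show the lease status/state of the VHD blob backing the PersistentVolume
az storage blob show \
  --account-name <storage-account> \
  --account-key <storage-account-key> \
  --container-name vhds \
  --name <pv-volume-name>.vhd \
  --query properties.lease
```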
@rossedman Good information, let me ask some follow-up questions:
This one is more surprising; I've never seen someone have the PV provisioned and then immediately be told it's in use. For everyone's knowledge, there is a large rewrite coming for Kubernetes 1.7 that should improve the handling of a variety of edge cases. That having been said, if we can identify issues in 1.6, we can try to get those fixes into future 1.6.x releases as well. |
And re-opening this since people are clearly having some issues with |
@colemickens will get some more information today. Actually, we are migrating our k8s cluster out of dev/test and into a prod setup, so we could give you access to the dev/test cluster and try to do some troubleshooting there. Let me try to run through this process again and I will provide answers to the questions above. My suspicion is that the VHD mount is on the wrong node. The pod may be getting rescheduled to other nodes due to memory constraints or something else, and the disk is staying attached to the original host node. Can't confirm yet but will have more soon. |
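One way to check that suspicion is to list which data disks the Azure resource model thinks each agent VM currently holds (a sketch with placeholder names; the vhd URI field applies to unmanaged, blob-backed disks):

```sh
# List the data disks attached to each agent VM, according to the Azure resource model
for vm in <agent-vm-0> <agent-vm-1> <agent-vm-2>; do
  echo "== $vm =="
  az vm show -g <resource-group> -n "$vm" \
    --query "storageProfile.dataDisks[].{lun:lun, name:name, vhd:vhd.uri}" -o table
done
```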
@rossedman That is also my suspicion. There is code specifically built to retry the attachment operation, but I fear the same may not exist for detach. (If the detach fails but isn't specifically handled, then when K8s calls to retry the detach, I believe Azure will return immediate success...) This is why I'm so interested in seeing the
Thanks! |
@colemickens So here are a couple of findings that I have to report:
Files I'm Working With
This is just a PersistentVolumeClaim and a single postgres container. Nothing special. I have 3 agent nodes and 1 master. In the instance below the PV mounts to the third host agent.
After the claim is deployed:
InstanceView
When deploying the above, it times out and fails to mount:
If I do a
Here is the error message from the disk mount:
When deleting a PersistentVolume, it does not remove it from blob storage, but it does seem to unmount from the host node:
If I run |
@colemickens As a temporary fix I ended up using your kubernetes container that can create VHDs and I mounted that into my container. That process still works. |
@colemickens Hey, wondering if there is any update on this? Was this what you needed? Thanks |
Hi everyone, setup info:
Cluster deployed via ACS. I have some database server k8s deployments (a single pod per deployment, each with its own dynamically provisioned PVC). Problems arose after this specific event, as shown by the logs of the kube controller manager:
So judging by the logs that followed, some nodes went down, came back up, k8s rescheduled pods and we have the issue that was documented extensively above. We tried to force pod deletion in some cases to try to understand what was going on, which resulted in them being recreated (as they're managed by the deployments), and some volumes were correctly reattached.
This appears to be the same issue: the blob lease is locked and the volume cannot be attached to the rescheduled pod.
InstanceView of the VM
So in the instanceView the disk
Finally, the "statuses" message of the instanceView is the same as the one appearing in the Overview tab of the portal UI:
Which refers to another disk that is actually bound to another pod running in another instance without any problem. I don't understand why that node would be trying to acquire the lease on this volume... This is a heavy and unexpected blocker in the process of moving to k8s on Azure for us. Happy to discuss and help troubleshoot this further (Slack, email, on here or wherever), as it's a critical point to resolve before moving forward with Azure. @colemickens Please let me know. |
@teeterc I'm also seeing the same errors with PVC/PV and StatefulSets on multiple Kubernetes clusters since the maintenance reboots. My clusters were created using acs-engine with Kubernetes v1.7.5.
also with the mismatch between the requested volume ID and the volume ID in the error. The suggestion from @andyzhangx to restart the controller-manager has not helped. Nor has redeploying the k8s-master VMs or the affected k8s-agent VMs. |
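For anyone else wanting to try the controller-manager restart mentioned above: assuming the master components run as containers on the master VM (as they do on acs-engine clusters of this vintage), one rough way is:

```sh
# On the k8s master VM: find the kube-controller-manager container and restart it
docker ps --filter name=controller-manager --format '{{.ID}} {{.Names}}'
docker restart <container-id>
```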
Hi,
Good luck Radovan |
@ajhewett @teeterc You can follow the debugging on the other issue Azure/acs-engine#2002 and compare to your own errors. But it seems that something went wrong yesterday. @rcconsult That's an option if your disks are already mounted; mine are brand new, so basically I can't mount new disks on workers even if they weren't mounted anywhere previously. There has definitely been a problem mounting Managed and Non-Managed disks since yesterday. It's really critical for production clusters, guys... Yesterday was a crazy day... |
@rcconsult I have performed the same as your option 1 which has worked in some clusters (fortunately, the data in my volumes is replicated). However, I have a cluster where the problem is very persistent and k8s keeps trying to attach a deleted volume. |
@andyzhangx the k8s agents and master are not currently in a failed state. Some were in a failed state right after the maintenance reboots, but I performed a "Redeploy" from the portal on the failed VMs several hours ago. The errors attaching volumes have been occurring both before and after my redeploy. |
@ajhewett could you check the disk |
@andyzhangx the disk |
@andyzhangx the VM status is:
|
Same here... a 1.7.5 cluster with all of the stateful sets borked after the security updates. 3 of 6 VMs are stuck in the "Failed" state (though they are active nodes in k8s), and the other 3 are stuck in the "VM Stopping" state. Attempting the instructions at https://blogs.technet.microsoft.com/mckittrick/azure-vm-stuck-in-failed-state-arm/ for the Failed VMs just results in a 409 conflict error saying that the disk cannot be detached because it is in use. Attempting it on the "VM Stopping" VMs just freezes the PowerShell script indefinitely. It's kind of crazy and scary how poorly stateful sets on Azure work. It's doubly annoying because when everything is fine they work well -- they just can't survive any sort of unusual or exceptional situation. |
I now have multiple clusters with multiple containers having problems due to nodes in failed states and disk attach errors. I have to say this really needs looking at to make it robust, as it is currently very fragile. |
@ajhewett thanks for providing the info. Could you use https://blogs.technet.microsoft.com/mckittrick/azure-vm-stuck-in-failed-state-arm/ to update VM |
@andyzhangx many, many thanks! I executed
and without any further manual intervention the pod that was stuck in |
@ajhewett good to know. One thing to confirm, some of your VMs are in
|
Several VMs rebooted without any problems. Step 1 helped repair some failed VMs, but not all. Step 2 was additionally needed for some VMs even though they seemed healthy. However, I cannot be sure that only these 2 steps are sufficient, because I tried several other things before step 2, e.g. reducing the number of replicas in the statefulsets, deleting PVCs, and deleting the stuck pods so that the statefulset recreated them. BTW: I still have 7 k8s clusters with VMs that have not (yet) been rebooted. If a similar problem occurs I will first try steps 1 and 2 before anything else. |
@ajhewett thanks for the info. |
@andyzhangx the statefulsets use dynamically provisioned volumes (PVCs) with managed azure disks (storageclass managed-premium or managed-standard). Each pod gets its own disk, there is no disk sharing. The workload using statefulsets and PVCs is Elasticsearch. |
@andyzhangx I've tried your workaround @ #12 (comment) for my situation. However, it was not sufficient. The fundamental problem seems to be a disconnect between the Kubernetes view of each node (node fully up and in "Running" state) and the Azure view of each node VM (VM in "VM Stopping" or in "Failed" state), which never resolves because of the persistent disk leases not being cleanly released and re-acquired. The final solution that worked for me was to simply do the following for every agent node in my cluster one at a time:
While doing this, the newly restarted nodes may still fail to mount persistent disks as one of the subsequent un-restarted VMs may still be holding a lock on a disk. This resolves itself once all the VMs are restarted and are in "Running" state. |
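A per-node cycle of that kind might look roughly like this (a sketch, not necessarily the exact steps used above; node and VM names are placeholders):

```sh
# For each agent node, one at a time: drain it, restart the VM, then put it back in service
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
az vm restart -g <resource-group> -n <vm-name>    # or restart/redeploy from the portal
kubectl uncordon <node-name>
```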
I did find a workaround (sorry I didn't post earlier). I noticed that the failing pods were being deployed to the failed Azure nodes. So, I: 1: Scaled the kube cluster + sum(failed nodes) Everything came back online in about 10m. It seems that the Volumes that are failing to mount are isolated to the failing pods... still strange, however, that Pods were trying to mount unrelated stateful volumes. |
Update this thread:
1. disk attach error
Issue details: In some corner cases (detaching multiple disks on a node simultaneously), when scheduling a pod with an azure disk mount from one node to another, there could be lots of disk attach errors (no recovery) due to the disk not being released in time from the previous node. This issue is due to a lack of locking before the DetachDisk operation; there should actually be a central lock for both the AttachDisk and DetachDisk operations, so that only one AttachDisk or DetachDisk operation is allowed at a time. The disk attach error could look like the following:
Related issues
Mitigation:
In Azure Cloud Shell, run
In Azure CLI, run
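A command of that kind, with placeholder resource group and VM name, is typically a no-op update of the affected VM, which forces Azure to re-reconcile the VM's disk/lease state:

```sh
# Force the VM model to be re-applied so stale attach/detach state gets reconciled
az vm update -g <resource-group> -n <vm-name>
```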
Fix
|
@andyzhangx I also note that most of my nodes, even though there are no visible issues with stateful sets right now, are getting errors like this every second or so in my logs:
Investigating that pod:
Note: that pod is not running on this machine any more. So it seems like somewhere along the way these volume directories did not get cleaned up properly. |
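For reference, those orphaned directories live under the kubelet's data directory; assuming the default paths, the leftovers for the pod UID in the error can be inspected on the node with something like:

```sh
# On the affected node: look at what the kubelet thinks is left of the orphaned pod
ls -la /var/lib/kubelet/pods/<pod-uid>/volumes/
ls -la /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~azure-disk/
```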
@rocketraman this could be related to a pod deletion error caused by a race condition. If the error only exists in the kubelet logs, that should be OK, in my experience. |
So after your fix is deployed, this should not occur again, right? The log messages themselves seem easy to "solve" -- it appears that removing the orphaned pod directory with
The original issue is fixed by kubernetes/kubernetes#60183. Let me know if you have any questions, thanks. |
I've set up Azure Container Service with Kubernetes and I use dynamic provisioning of volumes (see details below) when deploying new Pods. Quite frequently (about 10% of the time) I get the following error, which halts the deployment:
The Pod deployment then halts forever, or until I delete the Pod and let the ReplicationController create a new one.
Any idea what is causing this?
Workflow
I have created the following StorageClass:
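A minimal sketch of such a StorageClass (placeholder values, assuming the built-in kubernetes.io/azure-disk provisioner):

```sh
kubectl create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: azure-standard
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: westeurope
EOF
```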
The storage account does contain a Blob service named vhds. When deploying a new Pod, I create a PVC that looks like this:
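A sketch of a claim like that (placeholder name; the beta storage-class annotation form matches clusters of this vintage):

```sh
kubectl create -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myapp-data-0001
  annotations:
    volume.beta.kubernetes.io/storage-class: azure-standard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```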
and finally use the PVC in the pods:
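And a sketch of mounting that claim into a pod (all names are placeholders):

```sh
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data-0001
EOF
```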