[release/1.7] Fix the race condition during GC of snapshots when client retries #10763

Conversation

k8s-infra-cherrypick-robot

This is an automated cherry-pick of #10723

/assign samuelkarp

When an upstream client (e.g. the kubelet) stops or restarts, its CRI
connection to containerd is interrupted. containerd treats this as a
context cancellation, which cancels any ongoing operation, including
an image pull. containerd's GC routine then tries to delete the
snapshots prepared for the image layer(s) of the cancelled pull.
However, if the upstream client immediately retries (or starts a new)
image pull, containerd begins a new pull and starts unpacking the
image layers into snapshots. This can race: the GC routine (from the
failed pull) tries to clean up the same snapshot that the new pull is
preparing, leading to the "parent snapshot does not exist: not found"
error.

Race Condition Scenario:
Assume an image with two layers, L1 and L2 (L1 being the bottom
layer), which are unpacked into snapshots S1 and S2 respectively.

During an image pull, containerd runs unpack(L1), which begins by
Stat()'ing L1's chainID. The Stat() fails because the chainID does not
exist yet, so Prepare(L1) is called. Once S1 is prepared, containerd
moves on to L2: unpack(L2) again Stat()s the chainID, which fails
because S2's chainID does not exist, and so Prepare(L2) is called.
However, if the pull is cancelled before Prepare(L2) runs, the GC
routine tries to clean up S1.
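
A minimal Go sketch of the racy "Stat() + early return" flow described
above (names such as unpackLayers and the "extract-" key prefix are
simplified for illustration, not the exact containerd unpacker code):

```go
package unpack

import (
	"context"
	"fmt"

	"github.com/containerd/containerd/snapshots"
)

// unpackLayers sketches the racy flow: each layer's chainID is Stat()'d
// first, and Prepare() is skipped when the snapshot already exists.
func unpackLayers(ctx context.Context, sn snapshots.Snapshotter, chainIDs []string) error {
	parent := ""
	for i, chainID := range chainIDs {
		// Racy early return: the snapshot may exist only because a
		// cancelled earlier pull created it. Skipping Prepare() means no
		// new lease is taken, so GC triggered by that earlier pull can
		// still delete the snapshot before the next iteration uses it
		// as a parent.
		if _, err := sn.Stat(ctx, chainID); err == nil {
			parent = chainID
			continue
		}
		if _, err := sn.Prepare(ctx, "extract-"+chainID, parent); err != nil {
			return fmt.Errorf("prepare layer %d: %w", i, err)
		}
		// ... apply the layer diff and Commit() the snapshot as chainID ...
		parent = chainID
	}
	return nil
}
```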

When the upstream client retries the image pull, containerd follows
the same sequence: unpack(L1) calls Stat(chainID) for L1. This time
the Stat() succeeds because S1 already exists (from the previous
pull), so containerd returns early and moves on to unpack(L2). At this
point GC cleans up S1, and when Prepare(L2) is called it fails with
the "parent snapshot does not exist: not found" error.
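
Condensed into a timeline (a restatement of the two paragraphs above):

```
pull #1: Stat(L1 chainID) -> not found -> Prepare(L1) -> S1 committed
pull #1: cancelled before Prepare(L2); its lease is released
pull #2: Stat(L1 chainID) -> found -> early return, no new lease on S1
GC:      collects S1 (nothing references it anymore)
pull #2: Prepare(L2) with parent S1 -> "parent snapshot does not exist:
         not found"
```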

Fix:
Removing the "Stat() + early return" fixes the race condition. During
the retried pull, even though L1's chainID already exists, containerd
no longer returns early and instead calls Prepare(L1). Since L1 is
already prepared, Prepare adds a new lease to S1 and returns
`ErrAlreadyExists`. This new lease prevents GC from cleaning up S1
while containerd processes L2 (unpack(L2) -> Prepare(L2)).
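
For contrast, a sketch of the fixed flow under the same simplified
names. It assumes, as the commit message describes, that Prepare() for
an already-unpacked layer adds the existing snapshot to the caller's
lease and returns `ErrAlreadyExists` (in containerd this goes through
the target snapshot ref mechanism, elided here):

```go
package unpack

import (
	"context"
	"fmt"

	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/snapshots"
)

// unpackLayersFixed always calls Prepare(). When the committed snapshot
// already exists, the metadata store attaches it to the current lease
// and returns ErrAlreadyExists, pinning it against GC for the rest of
// the unpack.
func unpackLayersFixed(ctx context.Context, sn snapshots.Snapshotter, chainIDs []string) error {
	parent := ""
	for i, chainID := range chainIDs {
		_, err := sn.Prepare(ctx, "extract-"+chainID, parent)
		if errdefs.IsAlreadyExists(err) {
			// Already unpacked by an earlier pull; now leased to us, so
			// GC cannot delete it while the remaining layers are
			// processed.
			parent = chainID
			continue
		}
		if err != nil {
			return fmt.Errorf("prepare layer %d: %w", i, err)
		}
		// ... apply the layer diff and Commit() the snapshot as chainID ...
		parent = chainID
	}
	return nil
}
```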

Fixes: containerd#3787

Signed-off-by: Saket Jajoo <saketjajoo@google.com>
@k8s-ci-robot

Hi @k8s-infra-cherrypick-robot. Thanks for your PR.

I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@samuelkarp
Member

/ok-to-test

@saketjajoo
Contributor

/retest

@mxpv mxpv merged commit b36fa65 into containerd:release/1.7 Oct 4, 2024
57 checks passed