misc: Fix cleanup with k3s / rke2 #586
Conversation
LGTM thanks @fidencio !
Force-pushed from c60bd96 to dc31680.
I don't think the E2E failures are related to my changes.
misc/snapshotter/snapshotter.sh (outdated)

```shell
for i in $(nsenter -t 1 -m ctr -n k8s.io snapshot --snapshotter nydus list | grep -v KEY | cut -d' ' -f1); do
    nsenter -t 1 -m ctr -n k8s.io snapshot --snapshotter nydus rm $i || true

local ctr_args=""
if [ "${CONTAINER_RUNTIME}" == "k3s" ] || [ "${CONTAINER_RUNTIME}" == "k3s-agent" ] || [ "${CONTAINER_RUNTIME}" == "rke2-agent" ] || [ "${CONTAINER_RUNTIME}" == "rke2-server" ]; then
```
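For context, a variable like `ctr_args` would typically be consumed by prepending an `--address` flag pointing at the k3s/rke2 containerd socket. A minimal sketch, assuming this is how the script uses it (the `ctr_args_for` helper is hypothetical; only the runtime list and socket path come from the diff):

```shell
#!/usr/bin/env bash
# Sketch (assumption): compute the extra ctr flags for a given runtime,
# mirroring the ctr_args logic in the diff above. k3s and rke2 ship their
# own containerd listening on /run/k3s/containerd/containerd.sock.
ctr_args_for() {
    local runtime="$1"
    local ctr_args=""
    if [ "${runtime}" == "k3s" ] || [ "${runtime}" == "k3s-agent" ] || \
       [ "${runtime}" == "rke2-agent" ] || [ "${runtime}" == "rke2-server" ]; then
        ctr_args="--address /run/k3s/containerd/containerd.sock "
    fi
    echo "${ctr_args}"
}

# With k3s the command carries the --address flag; with plain containerd
# it does not.
echo "ctr $(ctr_args_for k3s)-n k8s.io snapshot --snapshotter nydus list"
echo "ctr $(ctr_args_for containerd)-n k8s.io snapshot --snapshotter nydus list"
```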
Hi @fidencio!

```shell
if [[ " k3s k3s-agent rke2-agent rke2-server " =~ " ${CONTAINER_RUNTIME} " ]]; then
```

to simplify.
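The suggested idiom works because, in bash, a quoted right-hand side of `=~` is matched literally as a substring; padding both the list and the candidate with spaces turns it into an exact whole-word membership test. A quick demo (the `needs_k3s_socket` name is made up for illustration):

```shell
#!/usr/bin/env bash
# Demo of the suggested idiom: with a quoted right-hand side, =~ does a
# literal substring match, so the surrounding spaces make it an exact
# whole-word membership test (e.g. "k3" does not match "k3s").
needs_k3s_socket() {
    [[ " k3s k3s-agent rke2-agent rke2-server " =~ " $1 " ]]
}

for rt in k3s rke2-server containerd k3; do
    if needs_k3s_socket "${rt}"; then
        echo "${rt}: k3s socket"
    else
        echo "${rt}: default socket"
    fi
done
```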
hmmm, I'm not sure k3s/rke2 are strictly a `CONTAINER_RUNTIME`. Wouldn't it be better to introduce a new variable, e.g. `${CONTAINER_RUNTIME_SOCK:-}`, that the caller would export if it wants something different from the default? We would also avoid having code in the snapshotter to handle different Kubernetes distributions.
This follows the pattern already present in other parts of the script, @wainersm.
> hmmm, I'm not sure k3s/rke2 are strictly a `CONTAINER_RUNTIME`. Wouldn't it be better to introduce a new variable, e.g. `${CONTAINER_RUNTIME_SOCK:-}`, that the caller would export if it wants something different from the default? We would also avoid having code in the snapshotter to handle different Kubernetes distributions.
That's not exactly a good idea. We want the experience to just work for whoever is using whatever k8s distro.
The user usually doesn't know, and shouldn't need to know, about the containerd socket for k3s; things should just work. In the same way we have the overlays and yatta-yatta, it's our responsibility as developers to hide it from them, actually making their lives easier.
When performing the cleanup with k3s / rke2, we must specify the containerd socket address that's been created by them, instead of using the default one. For both k3s and rke2 the containerd socket address is the same: `/run/k3s/containerd/containerd.sock`.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make it simpler, following the same pattern introduced in the cleanup function.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
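Combining the two commits, the cleanup path ends up looking roughly like the sketch below. This is a sketch under assumptions, not the exact script: the command is only assembled and echoed here, since actually running it requires a live containerd and host namespace access via `nsenter`.

```shell
#!/usr/bin/env bash
set -euo pipefail
# Sketch (assumption): the cleanup command with the simplified membership
# test. CONTAINER_RUNTIME is hardcoded to k3s for the demo; the real
# script would read it from its environment and then execute the command.
CONTAINER_RUNTIME="k3s"

ctr_args=""
if [[ " k3s k3s-agent rke2-agent rke2-server " =~ " ${CONTAINER_RUNTIME} " ]]; then
    ctr_args="--address /run/k3s/containerd/containerd.sock "
fi

cmd="nsenter -t 1 -m ctr ${ctr_args}-n k8s.io snapshot --snapshotter nydus list"
echo "${cmd}"
```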
Force-pushed from dc31680 to 1b21030.
cc @imeoer

LGTM, thanks! @fidencio

@fidencio Will release