bryantbiggs changed the title from "Cached image tags do not account for AWS account ID and domain" to "Cached image tags do not account for the intended region's AWS account ID" on Sep 25, 2023 (updated to remove the reference to different partitions, since AMIs cannot be copied across partitions).
What happened:
The current implementation for caching images will pull the respective container images for the current region, which in `us-west-2` might look something like: `602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.9`
Then, to save space, the pulled image is tagged with the names that would be used in other regions when the AMI is copied across regions (i.e. the image is built in `us-west-2` and then copied/shared to, say, `eu-central-2`). However, the current implementation simply changes the region within the image tag, without taking into account the AWS account ID and domain name. So in the scenario above, the image would currently be tagged as:
`602401143452.dkr.ecr.eu-central-2.amazonaws.com/eks/pause:3.9`
But should actually be tagged as:
`900612956339.dkr.ecr.eu-central-2.amazonaws.com/eks/pause:3.9`
Coming from https://github.com/awslabs/amazon-eks-ami/blob/5d5db2f0eb99b9851ca064cad89460401caa9072/scripts/install-worker.sh#L470C13-L470C13
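For illustration, a minimal sketch of the kind of fix this would need (the function name and the region-to-account map below are hypothetical, not the actual install-worker.sh code) would derive the account ID, and ideally the registry domain, per target region instead of only substituting the region string:

```bash
#!/usr/bin/env bash
# Hypothetical helper, not the actual install-worker.sh implementation:
# build the full ECR URI for a target region, including that region's
# EKS add-on account ID, rather than only swapping the region string.

# Assumed (partial) map of region -> EKS ECR account ID; 602401143452
# serves most commercial regions, while eu-central-2 uses 900612956339.
declare -A EKS_ECR_ACCOUNT=(
  [us-west-2]="602401143452"
  [eu-central-2]="900612956339"
)

ecr_uri_for_region() {
  local region="$1" image="$2"
  # Fall back to the common account ID when the region is not in the map.
  local account="${EKS_ECR_ACCOUNT[$region]:-602401143452}"
  echo "${account}.dkr.ecr.${region}.amazonaws.com/${image}"
}

# Prints 900612956339.dkr.ecr.eu-central-2.amazonaws.com/eks/pause:3.9
ecr_uri_for_region "eu-central-2" "eks/pause:3.9"
```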
What you expected to happen:
Images should be tagged with the correct properties for the given AWS region (account ID, region, and domain).
How to reproduce it (as minimally and precisely as possible):
From an EKS node, run `ctr -n k8s.io image ls` and inspect the image names. For example, see the images below that currently have the account ID `602401143452` but instead should have `900612956339`:
```
[root@ip-10-0-26-154 bin]# ctr -n k8s.io image ls | grep eu-central-1
602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon-k8s-cni-init:v1.12.6-eksbuild.2    application/vnd.docker.distribution.manifest.list.v2+json sha256:9f55255618d5219ff9335b671a816e4438f8268b2a9b766c892e6ffe44208db4 21.2 MiB  linux/amd64,linux/arm64 io.cri-containerd.image=managed
602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon-k8s-cni-init:v1.15.0-eksbuild.2    application/vnd.docker.distribution.manifest.list.v2+json sha256:8e67736247864727fc3747e78f969f6cedfa4d1856305e2408e0f4d9235dcfc2 57.0 MiB  linux/amd64,linux/arm64 io.cri-containerd.image=managed
602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon-k8s-cni:v1.12.6-eksbuild.2         application/vnd.docker.distribution.manifest.list.v2+json sha256:fa1d851c35134381d660f6b571b1a3d4da0554e48e346d38f5fc5ca6420dd9cd 40.9 MiB  linux/amd64,linux/arm64 io.cri-containerd.image=managed
602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon-k8s-cni:v1.15.0-eksbuild.2         application/vnd.docker.distribution.manifest.list.v2+json sha256:84f5d00dcf365ffbcab602c154bf45a8c81ac1dd00972703c136b037adfd5fd9 42.9 MiB  linux/amd64,linux/arm64 io.cri-containerd.image=managed
602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/kube-proxy:v1.27.1-minimal-eksbuild.1 application/vnd.docker.distribution.manifest.list.v2+json sha256:c2ccb4b5e1f29bd91965c23662b3c76c31ed689c97ba7a7ec9293c4782997ee2 28.6 MiB  linux/amd64,linux/arm64 io.cri-containerd.image=managed
602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/kube-proxy:v1.27.4-minimal-eksbuild.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:a444bb7bd6cc1f60bb224db4b840a03076f45b8104b56251df7df1b5a0f17a05 28.6 MiB  linux/amd64,linux/arm64 io.cri-containerd.image=managed
602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/pause:3.5                             application/vnd.docker.distribution.manifest.list.v2+json sha256:529cf6b1b6e5b76e901abc43aee825badbd93f9c5ee5f1e316d46a83abbce5a2 291.7 KiB linux/amd64,linux/arm64 io.cri-containerd.image=managed
```
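As a quick check, something like the following sketch (assuming the node runs in `eu-central-2`, where the expected account ID is `900612956339`) lists any cached tags for that region that still carry a different account ID:

```bash
# List cached eu-central-2 image refs whose account ID is not the one
# assumed to serve ECR in that region (900612956339).
ctr -n k8s.io image ls -q \
  | grep 'eu-central-2' \
  | grep -v '^900612956339\.' \
  || echo "no mismatched tags found"
```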
Anything else we need to know?:
Environment:
- AWS Region:
- Instance Type(s):
- EKS Platform version (use `aws eks describe-cluster --name <name> --query cluster.platformVersion`):
- Kubernetes version (use `aws eks describe-cluster --name <name> --query cluster.version`):
- AMI Version:
- Kernel (e.g. `uname -a`):
- Release information (run `cat /etc/eks/release` on a node):