The Knative Serving controller can not access the same set of private registries as the K8s cluster #1996
Comments
I imagine that once Knative Build can "output" image digests, this functionality would go away from the Knative Serving controller. Is that correct?
Yes, but this same problem is shared by the support recently added to kaniko for authenticating via cluster identity.
Using a local (i.e. self-hosted) registry, as opposed to a private (authenticated) one, I'd say this issue isn't only about authentication. Possible solution 2 under "Additional Info" above won't suffice. Solution 1 will probably work, as it uses the nodes' docker daemon, but as explained below we must use registry URLs that differ between build and serving.

Builds can push to the registry over a regular …; however, TLS using a cluster-generated certificate isn't trusted, and Kaniko must run with ….

With regular k8s apps we can have images accessed through a proxy (auth directives removed) DaemonSet with …. Servings, or any other containers without a mounted docker.sock, cannot use ….

I have yet to test with a locally pushed image, as ….
I am curious to see who else faces this issue. It seems that we can push to an in-cluster repo, but Serving cannot pull.
EKS support is limited to k8s 1.10. Full ECR support is blocked by knative/serving#1996
This issue touches on a few problems, one of which is that ECR doesn't automagically work with Knative. I've written up a description of this problem here: google/go-containerregistry#355 The fix for this would ideally land upstream in kubernetes/kubernetes. I don't use ECR, so it's hard for me to test, but if anyone on AWS is willing to pick this up, I'm happy to provide any missing pointers.
I think this PR will fix it in the new version: kubernetes/kubernetes#75587
@bbrowning I set up a private registry, and DNS resolution failed with the error message below:
10.0.0.10 is the clusterIP of the kube-dns service in my cluster.
@yu2003w This looks like a k8s configuration issue on your end: .icp is not a valid TLD, so I assume it's something local you use for development (and it is failing to resolve).
@greghaynes Yes, you're correct. icpdev.icp is an entry in /etc/hosts.
Talked to @sebgoa about helping test @jonjohnsonjr's patch for ECR support. If we don't hear back by EoD Thursday (or hear back positively!), this will probably not land until 0.8.
If it helps anyone else work around this issue until a better solution lands, take a look at the ecr-helper and also this post on running it periodically. That is more oriented toward the build side, but it helped me figure out how to make it work for the serving side too. The missing piece was 1) creating a ….
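For reference, a minimal sketch of the kind of periodic secret refresh the ecr-helper approach automates. The registry URL, region, namespace, and secret name below are hypothetical placeholders, and this uses the current AWS CLI v2 `aws ecr get-login-password` command; ECR tokens expire after 12 hours, so this must be rerun regularly (e.g. from a CronJob):

```shell
# Hypothetical values; substitute your own account/region/namespace.
ECR_REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
NAMESPACE="default"

# Fetch a short-lived ECR token and store/refresh it as an image pull secret.
kubectl create secret docker-registry ecr-pull-secret \
  --namespace "$NAMESPACE" \
  --docker-server "$ECR_REGISTRY" \
  --docker-username AWS \
  --docker-password "$(aws ecr get-login-password --region us-east-1)" \
  --dry-run=client -o yaml | kubectl apply -f -
```

The `--dry-run=client -o yaml | kubectl apply` pattern makes the command idempotent, so repeated runs update the secret in place rather than failing on the second creation.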
@ryanbrainard if you are up for testing out some bits with ECR, I know @jonjohnsonjr has a patch that we hope will make things work.
@mattmoor @jonjohnsonjr For sure! We were just about to dive into coding up this workaround in a somewhat automated fashion (so far we've only tested it in manual bash scripts), but it would be great to avoid that. Can you point me at the code and any instructions? I'm also in the Knative Slack if that helps. cc: @jttyeung
The PoC is here; it basically carries a patch to K8s upstream that has landed at head.
@mattmoor I gave it a whirl, but it didn't work. Details are here: #4084 (comment)
@ryanbrainard ksvc seems to use the default service account. I patched it accordingly (as described here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account). Then I added the …. The result is unfortunately the same as before. As described above, when I start a Pod directly in my Kops cluster that pulls the image from ECR, everything works fine. Only with ksvc does it not work.
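For anyone following along, the service-account patch described above can be sketched as below (the secret name is a hypothetical placeholder; note the comment reports this alone was not sufficient, since the Serving controller's tag-to-digest resolution also needs registry credentials):

```shell
# Attach an existing image pull secret (hypothetical name "ecr-pull-secret")
# to the default service account, which Knative Revisions use unless a
# serviceAccountName is set on the revision template.
kubectl patch serviceaccount default \
  --namespace default \
  -p '{"imagePullSecrets": [{"name": "ecr-pull-secret"}]}'
```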
@mattmoor I dug a little deeper into trying the patch and got it to work! It turned out to be just an IAM issue with the controller pod (plus needing the patch applied). Details: #4084 (comment)
Expected Behavior
The Knative Serving controller is able to access the same set of private registries as the K8s cluster.
Actual Behavior
The Knative Serving Controller fails to authenticate itself to the same private registries as the K8s cluster.
Steps to Reproduce the Problem
An example would be a Kops cluster that is able to access a private AWS ECR registry, while the Knative Serving controller cannot.
A high-level reproduction: on such a cluster, deploy a Knative Service whose image is hosted in the private registry; the controller's tag-to-digest resolution fails with:
unsupported status code 401; body: Not Authorized
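One way to observe the failure (the revision name below is a hypothetical placeholder): because it is the controller's digest resolution that fails, the 401 typically surfaces in the Revision's status conditions rather than in pod events:

```shell
# Inspect the Ready condition of the failed Revision; the
# "unsupported status code 401" message should appear here.
kubectl get revision demo-00001 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
```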
The difference in behavior is due to Kops setting the --cloud-provider flag, which is not enabled by the Knative Serving controller when executing: https://github.com/kubernetes/kubernetes/blob/36877dafe40495cb43994464e2427355f99042c7/pkg/credentialprovider/aws/aws_credentials.go#L158

Additional Info
Possible solutions:
Workarounds
Specify the digest in the image address, which will prevent the controller from attempting to resolve the tag to a digest (see serving/pkg/reconciler/v1alpha1/revision/resolve.go, line 52 at b707183).
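A sketch of this workaround using the v1alpha1 schema current at the time (the service name, image path, and digest are hypothetical placeholders):

```shell
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: demo
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            # Digest-pinned reference: the controller skips tag resolution,
            # so it never needs registry credentials for this image.
            image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo@sha256:0000000000000000000000000000000000000000000000000000000000000000
EOF
```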
Configure the registry in registriesToSkip for the controller.

Both of the above workarounds lose the tag-to-digest resolution functionality.
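A hedged sketch of the second workaround, assuming the controller binary accepts a -registriesToSkip flag and its container already defines an args array (verify both against your Serving release before applying; the registry hostname is a placeholder):

```shell
# Append the private registry to the controller's skip list via a
# JSON-patch on the knative-serving controller Deployment.
kubectl -n knative-serving patch deployment controller --type json -p '[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args/-",
   "value": "-registriesToSkip=ko.local,dev.local,my.private.registry"}
]'
```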