2.0: Consider daemonization #821
I think CNI should be defined via gRPC, to be called by CRI or others. For plugins, calling via env vars is good; I think we can still use it in 2.0.
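(As a sketch of what a gRPC-defined CNI could look like, written here as a plain Go interface rather than a .proto file; all of the names — NetworkService, AttachRequest, and so on — are hypothetical, not part of any existing spec.)

```go
// Hypothetical shape of a gRPC-style CNI 2.0 service, expressed as a Go
// interface. Names and fields are illustrative only; nothing like this
// exists in the current spec.
package sketch

import "context"

type AttachRequest struct {
	ContainerID string
	NetNSPath   string
	IfName      string
	NetworkConf []byte // the network configuration, as today
}

type AttachResult struct {
	IPs    []string
	Routes []string
	DNS    []string
}

type NetworkService interface {
	// Add/Del/Check mirror the existing CNI verbs, but as RPCs on a
	// long-lived daemon instead of one exec per operation.
	Add(ctx context.Context, req *AttachRequest) (*AttachResult, error)
	Del(ctx context.Context, req *AttachRequest) error
	Check(ctx context.Context, req *AttachRequest) error
}
```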
Citation required? It seems to me that the problems with dynamic linking are entirely in the opposite direction; if you copy a dynamically-linked binary from your container to the root disk, it may not be able to find the shared libraries it needs any more. If you're talking about openssl FIPS compliance stuff specifically, I don't think that's a problem that can be solved at the CNI level; if a particular distribution (eg OCP) or cluster-infrastructure provider (eg EKS) needs to require specific things of all CNI plugins running in that environment, then either they need to build all the CNI plugins themselves, or they need to bless specific container images containing tested/approved CNI plugins. I don't see any way of getting around that.
To the best of my knowledge there is currently exactly one known bug, which is that if a goroutine running in a namespace spawns off a new goroutine, it is not guaranteed that the new goroutine will remain in the same namespace. Which seems like something golang needs to provide some fix for (though it will be complicated since sometimes you do want the current semantics).
It would be great if CNI 2.0 required all plugins to be run from containers, even if they aren't daemons, and did not require containers to make any modifications to the root disk.
Yes, good point. I wrote it backwards. s/executing/leaking-from-a-container/
Agreed, that is my understanding as well. I still don't love doing something that is, at best, unofficially supported by the runtime.
If we switch to gRPC, then the "interface" is a socket file, making no statement as to how plugins are executed (and thus enabling containerization). However, if we eschew gRPC and stick with an execution-based protocol, it would be interesting if the CNI flow included, somehow, the container runtime engine executing more containers (i.e. plugins) on behalf of CNI... Of course, many plugins need to store some state between invocations, so there is always that concern. But that is somewhat incidental.
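(A minimal sketch of the "interface is a socket file" idea: a plugin daemon serving gRPC on a unix socket. The socket path is a made-up example, and the commented-out registration call stands in for whatever generated service would actually be used.)

```go
// Minimal sketch: a plugin daemon exposing gRPC on a unix socket.
// The socket path and the registered service are assumptions for
// illustration; only the net/grpc plumbing is standard.
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	// The runtime only needs to know this path (or discover it from
	// config); it says nothing about how the plugin was started, so the
	// plugin can live entirely inside a container.
	lis, err := net.Listen("unix", "/run/cni/example-plugin.sock")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer()
	// pb.RegisterNetworkServiceServer(srv, &server{}) // hypothetical generated registration
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```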
Got a link for posterity?
I'd add:
* more consistent/predictable resource utilization
(I meant "did not require plugins to make any modifications")
Yes, that's what I meant. I mean, obviously the runtime knows how to run a binary out of a container, so...
I'm fine with plugins being allowed to write to the root disk, I just don't think CNI should require plugins to write to the root disk. ie, it should not require you to copy out a binary and it should not require you to write out a config file.
I don't have a link but I can explain the problem. There's a golang routine (`runtime.LockOSThread()`) that pins the calling goroutine to its OS thread, which is what you use when you need to switch the thread into another network namespace.

Long ago, people realized that this caused bugs, because although `LockOSThread()` keeps other goroutines from being scheduled onto your thread while it's locked, the thread could still end up back in the scheduler's pool (still in the wrong namespace) after the locked goroutine exited; the Go runtime was eventually changed so that such a thread just gets thrown away.

But more recently people discovered that the opposite problem exists; if you spawn off a new goroutine from your locked thread, that goroutine won't stay on the same thread. In particular, if you switch namespaces on a locked thread and then call something that spawns a goroutine, the new goroutine may end up running outside the namespace.

And this is not an unambiguous bug like the previous problem, because goroutines are ubiquitous and there are lots of circumstances where you wouldn't want the goroutine to inherit the special behavior from the original thread. So any fix for this is likely to require API changes of some sort (?) and it's not clear any fix is coming any time soon.
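(A small illustration of the second problem, using OS thread IDs instead of namespaces so it runs unprivileged; `unix.Gettid` comes from golang.org/x/sys. The point is only that the child goroutine is not pinned to the locked thread, so per-thread state such as a namespace switch does not carry over.)

```go
// Demonstrates that a goroutine spawned from a locked thread does not
// inherit the lock: it runs on a different OS thread, so any per-thread
// state (like a network namespace) is lost.
package main

import (
	"fmt"
	"runtime"
	"sync"

	"golang.org/x/sys/unix"
)

func main() {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	fmt.Println("locked goroutine on thread", unix.Gettid())

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// If the parent thread had been switched into a namespace, this
		// goroutine would very likely NOT be in that namespace.
		fmt.Println("child goroutine on thread", unix.Gettid())
	}()
	wg.Wait()
}
```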
(And you can't just say "well don't use the 'unsafe' functions" because there's no way to know from the outside which functions are unsafe. It's entirely possible that plugins that work today would break in the future if additional goroutines were added to functions in the libraries they call. Ugh.)
Add an advantage: in my environment, I deploy 2,000 nodes and 40,000 pods.
Sometimes many nodes retry CNI requests at once, all hitting kube-apiserver and getting 429 responses back. If the plugin ran as a daemon, we could limit each node's request rate with client-go's throttling and reduce the pressure on kube-apiserver.
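(As a sketch of the kind of throttling a node-local daemon could apply with client-go; the QPS/Burst values below are arbitrary examples, not recommendations.)

```go
// Sketch: a long-running daemon can put a single rate-limited client in
// front of all CNI operations on the node, instead of each exec'd plugin
// invocation hitting the apiserver independently.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func newThrottledClient() (*kubernetes.Clientset, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	// Arbitrary example values: at most 5 requests/second sustained,
	// bursts of 10. client-go delays requests client-side beyond this
	// instead of letting them pile up as 429s on the apiserver.
	cfg.QPS = 5
	cfg.Burst = 10
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newThrottledClient(); err != nil {
		log.Fatal(err)
	}
}
```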
How would plugin delegation work? i.e. something like
Background
The exec-based API can be awkward for a few reasons:
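(For context, this is roughly what the exec-based flow looks like from the runtime's side today: one exec per operation, parameters in CNI_* environment variables, network config as JSON on stdin. The plugin path and config below are placeholders, not anything this proposal prescribes.)

```go
// Rough shape of today's exec-based CNI invocation: the runtime execs
// the plugin binary once per operation, passing parameters via CNI_*
// environment variables and the network config as JSON on stdin.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	conf := []byte(`{"cniVersion":"1.0.0","name":"example","type":"bridge"}`)

	cmd := exec.Command("/opt/cni/bin/bridge") // placeholder plugin path
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=example-container",
		"CNI_NETNS=/var/run/netns/example",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = bytes.NewReader(conf)

	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("plugin failed: %v\n%s", err, out)
	}
	log.Printf("plugin result: %s", out)
}
```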
Considerations
Advantages
Bidirectionality?
Would we want plugins to be able to send events to the runtime? Or should we still model this as a request-response protocol?
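(A rough sketch of the piece that a pure request-response model would not have, assuming a daemonized plugin; every name here is invented for illustration and nothing like it exists in the spec.)

```go
// Hypothetical event stream a daemonized plugin could expose alongside
// the usual request-response verbs.
package sketch

import "context"

// Event is something the plugin wants to tell the runtime about,
// e.g. "interface went down" or "IP lease expired".
type Event struct {
	ContainerID string
	Reason      string
}

// Notifier is the bidirectional piece: the plugin initiates,
// the runtime listens.
type Notifier interface {
	// Events delivers plugin-initiated notifications until ctx is done.
	Events(ctx context.Context) (<-chan Event, error)
}
```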