
Support SPDY for exec #411

Closed
inikolaev opened this issue Feb 12, 2020 · 10 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@inikolaev
Contributor

I have a cluster in which I could exec into a pod using the Exec class, but recently it stopped working and now returns a 403 status code when doing so.

At the same time kubectl exec works fine, and after some digging it looks like there are two protocols that can be used: WebSocket and SPDY. kubectl uses SPDY, which is why it still works.
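
For reference, this is roughly what the failing code looks like (a minimal sketch only; package names are from the pre-8.x Java client, where ApiClient and Configuration live under io.kubernetes.client rather than io.kubernetes.client.openapi, and the namespace/pod names are placeholders):

import io.kubernetes.client.ApiClient;
import io.kubernetes.client.Configuration;
import io.kubernetes.client.Exec;
import io.kubernetes.client.util.Config;

public class ExecRepro {
    public static void main(String[] args) throws Exception {
        // Load a client from the default kubeconfig (the same config kubectl uses).
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);

        // The Exec helper opens the exec stream over a WebSocket, i.e. an HTTP GET
        // with an Upgrade header. This is the request that now comes back with 403,
        // while "kubectl exec" (SPDY) keeps working.
        Exec exec = new Exec();
        Process proc = exec.exec("namespace1", "some-pod",
                new String[] {"ls", "-l"}, /* stdin */ false, /* tty */ false);
        proc.waitFor();
    }
}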

@brendandburns
Contributor

brendandburns commented Feb 14, 2020

An HTTP 403 comes before any protocol (SPDY or WebSockets) is negotiated. So I don't think SPDY vs WebSockets is the reason that you are getting a 403.

My guess is that your kubeconfig is somehow incompatible with the Java client. What is the content of your kubeconfig (after removing secrets)?

@inikolaev
Contributor Author

Unfortunately I don't have access to the kubeconfig, but I can ask about it.

The thing is that it was working before, so it looks like something changed. As I understand it, WebSocket and SPDY negotiation start differently: WebSocket requires the first request to be a GET, while kubectl sends a POST which is then upgraded. Could it be that the corresponding Kubernetes role for pods/exec only allows POST requests and not GET?

I managed to find a few other issues that look similar, but I'm not well versed in K8s internals:

By the way, I got confused by these two methods: connectPostNamespacedPodExec and connectGetNamespacedPodExec. What are they supposed to be used for?

@inikolaev
Contributor Author

Here's an example kubeconfig:

apiVersion: v1
clusters:
- cluster:
    server: https://cluster1
  name: cluster1
- cluster:
    server: https://cluster2
  name: cluster2
contexts:
- context:
    cluster: cluster1
    namespace: namespace1
    user: username
  name: cluster1
- context:
    cluster: cluster2
    namespace: namespace2
    user: username
  name: cluster2
current-context: cluster1
kind: Config
preferences: {}
users:
- name: username
  user:
    token: some token here

and here's an extract from the Kubernetes role:

- apiGroups:
  - ""
  resources:
  - pods
  - pods/attach
  - pods/eviction
  - pods/exec
  verbs:
  - create
  - delete
  - deletecollection
  - patch
  - update

@brendandburns
Contributor

That could be the problem. You may need to add get as a verb to your role in order to allow the GET that initiates the WebSocket.
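
For example, based on the extract you posted, the rule would need something like this (just a sketch, adjust it to whichever role is actually bound to your user):

- apiGroups:
  - ""
  resources:
  - pods
  - pods/attach
  - pods/eviction
  - pods/exec
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - patch
  - update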

@brendandburns
Contributor

Regarding the question about connectPostNamespacedPodExec and connectGetNamespacedPodExec: they come out of the Swagger specification, and they perform either the GET or the POST for the exec.

But honestly, the generated API client doesn't understand WebSockets or SPDY, so they're not very useful by themselves.

@inikolaev
Contributor Author

That could be the problem. You may need to add get as a verb to your role in order to allow the GET that initiates the WebSocket.

Unfortunately this is not always possible. I'm trying to build a prototype that uses SPDY, but the library I'm using has already been abandoned in favor of HTTP/2, which K8s doesn't yet support for exec.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 15, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 14, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
