
latest version gives failure #98

Closed
electrical opened this issue Dec 14, 2016 · 6 comments

Comments

@electrical

Hi,

When using the latest version with the example yaml files I'm getting this error:

14/12/2016 16:40:31 Error: the server has asked for the client to provide credentials (post thirdpartyresources.extensions)

I'm running K8S 1.4.6 on top of Rancher 1.2.0

@dshulyak
Contributor

dshulyak commented Dec 15, 2016

You need to make sure that so-called in-cluster authentication works. Please check the following links:

http://kubernetes.io/docs/user-guide/service-accounts/
http://kubernetes.io/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod
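As a quick sanity check (just a sketch; the pod name below is a placeholder), you can confirm from inside one of the pods created from the example yaml files that the service account credentials are actually mounted:

kubectl exec <your-pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token

If the token file is missing, in-cluster authentication cannot work.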

For example, in my environment I have the following created in the default namespace:

kubectl get serviceaccounts
NAME      SECRETS   AGE
default   1         7m

kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-q42hg   kubernetes.io/service-account-token   3         8m

There is a high probability that you are running the apiserver without the ServiceAccount admission controller. In my setup I am using the following list for the apiserver:

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,PersistentVolumeLabel
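If you are not sure which admission controllers your apiserver was actually started with, one rough way to check (assuming shell access to the host running kube-apiserver; the exact process name and how it is launched may differ on Rancher) is:

ps aux | grep kube-apiserver | grep -o 'admission-control=[^ ]*'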

@electrical
Author

Hi @dshulyak, thanks for your quick response!

I checked the API server and it has these settings enabled for the admission control: NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota,ServiceAccount

I also have the serviceaccounts and secrets in the default namespace like you have.

I guess I'm missing something in the controller and scheduler yaml files that I insert into K8S to access the info?

@dshulyak
Contributor

@electrical There is no need to change anything from the examples except the iface. All the required auth information should be mounted into the pod.

I found a similar topic; one of the suggestions there is to ensure that the generated token is valid. The same check, but in the default namespace:
kubernetes/dashboard#374 (comment)
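A rough way to test the token directly (a sketch; the apiserver address is a placeholder, and the secret name is the one from my output above):

TOKEN=$(kubectl get secret default-token-q42hg -o jsonpath='{.data.token}' | base64 --decode)
curl -k -H "Authorization: Bearer $TOKEN" https://<apiserver>/api/v1/namespaces/default/pods

An invalid or stale token returns HTTP 401 Unauthorized.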

@electrical
Author

I've been doing some more digging but I'm unable to find any reason why it doesn't work.
The containers do have the service account mounted as a volume, so they can access the details.
The only thing I can think of is that the token is incorrect?
I tried their test but nothing indicates it's incorrect.
I can even do the curl call with a wrong bearer token.

@electrical
Author

Did a bit more digging. I'm getting a 'HTTP/1.1 401 Unauthorized' response when using the token of the serviceaccount secret. Would that mean that the token is invalid and needs to be re-generated?

@electrical
Author

Well, I re-created the secret in the kube-system namespace and it seems to work now :-) I guess it somehow got into a weird state.
Thanks for your patience!
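For anyone else who hits this, a rough sketch of recreating the token secret (names are placeholders): deleting the token secret makes the token controller generate a fresh one, and pods that mounted the old secret need to be recreated to pick up the new token.

kubectl -n kube-system delete secret <stale-service-account-token-secret>
kubectl -n kube-system get secrets
kubectl -n kube-system delete pod <pod-using-the-token>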
