
Always getting error: You must be logged in to the server (Unauthorized) #275

Closed
beetlikeyg083 opened this issue Nov 1, 2019 · 16 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@beetlikeyg083

I am currently playing around with AWS EKS,
but I always get error: You must be logged in to the server (Unauthorized) when trying to run the kubectl cluster-info command.

I have read a lot of AWS documentation and looked at lots of similar issues from people facing the same problem. Unfortunately, none of them resolved my problem.

So, this is what I did:

  1. Created a user named crop-portal for aws-cli access
  2. Created a role for EKS named crop-cluster
  3. Created the EKS cluster via the AWS console with the role crop-cluster, also named crop-cluster (cluster and role have the same name)
  4. Ran aws configure for the user crop-portal
  5. Ran aws eks update-kubeconfig --name crop-cluster to update the kube config
  6. Ran aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access
  7. Copied the accessKey, secretKey, and sessionToken into the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN accordingly (a sketch of steps 6-7 is shown after this list)
  8. Ran aws sts get-caller-identity, and now the result shows that the assumed role is in use:
{
    "UserId": "AROAXWZGX5HOBZPVGAUKC:botocore-session-1572604810",
    "Account": "529972849116",
    "Arn": "arn:aws:sts::529972849116:assumed-role/crop-cluster/botocore-session-1572604810"
}
  9. Ran kubectl cluster and always got error: You must be logged in to the server (Unauthorized)
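
A minimal shell sketch of steps 6-7, assuming the role ARN is the one reconstructed from the get-caller-identity output above and that jq is installed:

# Steps 6-7 as shell: assume the crop-cluster role and export its temporary credentials
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::529972849116:role/crop-cluster \
  --role-session-name eks-access \
  --query 'Credentials' --output json)

export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.SessionToken')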

When I run aws-iam-authenticator token -i crop-cluster, it gives me a token, and
when I run aws-iam-authenticator verify -t token -i crop-portal, it also passes:

&{ARN:arn:aws:sts::529972849116:assumed-role/crop-cluster/1572605554603576170 CanonicalARN:arn:aws:iam::529972849116:role/crop-cluster AccountID:529972849116 UserID:AROAXWZGX5HOBZPVGAUKC SessionName:1572605554603576170}

I don't know what is wrong or what I am missing. I have tried hard to get it working, but I really don't know what to do next.
Some people suggested creating the cluster with the AWS CLI instead of the GUI. I tried both methods and neither of them works; creating with the AWS CLI or with the GUI gives the same result for me.

Please, someone help :(

@swoldemi

swoldemi commented Nov 10, 2019

Have you tried running aws eks update-kubeconfig --name crop-cluster before the step where you run kubectl cluster and always get error: You must be logged in to the server (Unauthorized)?

Yes, aws-iam-authenticator is giving you a token, but kubectl looks at $HOME/.kube/config for a kubeconfig file. Running update-kubeconfig will automatically update your kubeconfig at $HOME/.kube/config.

I believe you just need to make sure you are making that call to update-kubeconfig while assuming the role that has access to the API server. Also make sure you're running kubectl cluster-info, not kubectl cluster 👍

It also looks like you passed the name of your user, not the name of the cluster, to verify -i.
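
A minimal sketch of that verify call using the cluster name instead, assuming the cluster is crop-cluster and jq is available to pull the token out of the JSON output:

# Generate a token for the cluster, then verify it against the same cluster name
TOKEN=$(aws-iam-authenticator token -i crop-cluster | jq -r '.status.token')
aws-iam-authenticator verify -t "$TOKEN" -i crop-cluster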

@beetlikeyg083
Author

@swoldemi I did. aws eks update-kubeconfig --name crop-cluster is in my step number 5.

@swoldemi

swoldemi commented Nov 13, 2019

@swoldemi I did. aws eks update-kubeconfig --name crop-cluster is in my step number 5.

Sorry, I guess I meant updating your kubeconfig after you assume the role? Assuming crop-cluster-arn is the ARN of the role. That way the call to update-kubeconfig is using the credentials you get from the assumed role. I have also had issues with the environment variables not being read correctly, so to be safe I would do it inline; you may be able to find a tool to generate this prefix for you or do some jq magic:

AWS_ACCESS_KEY_ID=<Key ID> AWS_SECRET_ACCESS_KEY=<Secret Access Key> \
AWS_SESSION_TOKEN=<Session Token> aws eks update-kubeconfig --name crop-cluster
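
For example, one way to generate that prefix with jq might be (a sketch; crop-cluster-arn below is still a placeholder for the real role ARN):

# Assume the role and export its credentials in one shot, then update the kubeconfig
eval "$(aws sts assume-role \
  --role-arn <crop-cluster-arn> \
  --role-session-name eks-access \
  --query 'Credentials' --output json | \
  jq -r '"export AWS_ACCESS_KEY_ID=\(.AccessKeyId) AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey) AWS_SESSION_TOKEN=\(.SessionToken)"')"

aws eks update-kubeconfig --name crop-cluster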

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 11, 2020
@yunghoy

yunghoy commented Feb 26, 2020

I've tried everything to solve this case: using the root ARN and a custom ARN, adding the user to the role, creating the cluster as root, creating the cluster with a custom ID, creating the cluster in the console with both IDs, using aws, using aws-iam-authenticator, using a custom profile in .aws/config, changing the token in .kube/config to a real token, etc., etc., etc.

aws sts get-caller-identity worked.
aws eks --region us-west-2 describe-cluster --name eks --query cluster worked.
aws eks --region us-west-2 update-kubeconfig --name eks --role-arn arn:aws:iam::*************:role/eksrole worked.

All the information seems correct, but I was still not able to get access to the cluster.

A lot of people have been talking about this error message in various communities since 2018.
Thus, I will keep this issue open.

@yunghoy

yunghoy commented Feb 26, 2020

If there are any AWS developers working on EKS here, please set up a fresh Linux machine, install kubectl, aws, and aws-iam-authenticator, create a new AWS account, and try to create a new EKS cluster. And could you update the EKS documentation on the AWS website?
Thanks.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 27, 2020
@eddiezane
Member

eddiezane commented Apr 13, 2020

Might not be the same cause, but I just ran into this, and in my case I had created my cluster on a different computer, as a different IAM user, than the computer/user I was trying to access the cluster with.

The instructions from here helped me figure out that I needed to add the user that didn't create the cluster to the configmap.

From 2nd computer:

aws sts get-caller-identity

{
    "UserId": "foo",
    "Account": "bar",
    "Arn": "arn:aws:iam::bar:user/SECOND"
}

// Grab ARN

From computer/user that created cluster:

kubectl -n kube-system edit configmap aws-auth

// Set mapUsers to
mapUsers: |
  - userarn: arn:aws:iam::bar:user/SECOND
    username: SECOND
    groups:
      - system:masters

If you're hitting this, my suspicion would be that you created the cluster as a different user/role than the one you are trying to access it with. Maybe instead of using the web console, try creating with the aws-cli or eksctl.

Hope this helps someone.
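
For what it's worth, the same mapping can probably also be added without hand-editing the ConfigMap, sketched here with eksctl (cluster name and region are placeholders):

# Map the second IAM user into system:masters via eksctl instead of kubectl edit
eksctl create iamidentitymapping \
  --cluster <cluster-name> \
  --region <region> \
  --arn arn:aws:iam::bar:user/SECOND \
  --username SECOND \
  --group system:masters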

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@maxzintel

Had a similar issue recently. It ended up being something simple that I may not have noticed for a while had there not been another engineer on the team who had run into it before.

The rolearn I had added was a copy & paste of the ARN shown in the console. So it looked something like this:

kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # ...
    - rolearn: arn:aws:iam::${number}:role/${path}/${role_name}
    # ...

Turns out you are not supposed to include the ARN path here (not sure why). Removing it, so that we ended up with - rolearn ... :role/${role_name}, fixed the issue for me.
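
A quick shell illustration of that difference, using a made-up role ARN:

# ARN as copied from the console, including a path segment
FULL_ARN="arn:aws:iam::111122223333:role/service-role/my-eks-role"

# aws-auth wants it without the path, i.e. arn:aws:iam::111122223333:role/my-eks-role
echo "$FULL_ARN" | sed -E 's#:role/.*/#:role/#'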

@yacinelazaar

@s1dequest Yeah the path for my roles is already empty and the issue still persists.

@francotel

francotel commented Dec 16, 2020

I have the same problem with an assumed role. I tried to add the role ARN, but not the assumed-role ARN, because it wouldn't let me, with something like this:

eksctl create iamidentitymapping \
  --cluster eks-cluster-tupana-dev \
  --arn "arn:aws:iam::AWS_ID:role/service-role/codebuild-images-profile-tupana-dev" \
  --username admin \
  --group system:masters \
  --profile tupana-dev --region us-east-1
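
If the path issue mentioned in the earlier comments applies here too, the same command with the /service-role/ path stripped from the ARN might be worth trying (just a sketch based on those comments):

eksctl create iamidentitymapping \
  --cluster eks-cluster-tupana-dev \
  --arn "arn:aws:iam::AWS_ID:role/codebuild-images-profile-tupana-dev" \
  --username admin \
  --group system:masters \
  --profile tupana-dev --region us-east-1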

@sarathm54

sarathm54 commented Apr 25, 2021

I got the same error and resolved it.
I had created the EKS cluster from the AWS console and configured the AWS CLI on my laptop with a different IAM user, which caused this same error. To resolve it, I created CLI access security credentials for the same user that created the EKS cluster; when I use those credentials to configure my AWS CLI, everything is fine. Now it's working perfectly.

thanks
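
A sketch of that setup, using a hypothetical profile name eks-creator for the user who created the cluster:

# Store the cluster creator's access key and secret under a named profile
aws configure --profile eks-creator

# Write the kubeconfig using that profile's credentials, then verify access
aws eks update-kubeconfig --name <cluster-name> --profile eks-creator
kubectl cluster-info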


@aditj95

aditj95 commented Aug 19, 2021

Using the assumed-role ARN didn't work for me. I had to use the actual "role" ARN (in IAM, click on the role and get the ARN), and remove the path in role/path/<ROLE_NAME>.

I.e.

groups: "system:masters" rolearn: "arn:aws:iam::<account>:role/<ROLE_NAME>" username: "<ROLE_NAME>"

@chitikenasatish

Just run the below command with the proper region and cluster name. It worked.
aws eks update-kubeconfig --region us-west-2 --name my-cluster
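
As a quick check afterwards (a sketch; the identity shown by get-caller-identity has to be one that the cluster's aws-auth ConfigMap knows about):

# Confirm which AWS identity kubectl will authenticate as
aws sts get-caller-identity

# Should now reach the API server without the Unauthorized error
kubectl cluster-info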
