
(eks): missing access to Kubernetes objects on EKS cluster creation #18843

Closed
robertd opened this issue Feb 6, 2022 · 19 comments · Fixed by #25606
Assignees
Labels
@aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service bug This issue is a bug. closed-for-staleness This issue was automatically closed because it hadn't received any attention in a while. p2 response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days.

Comments

@robertd
Contributor

robertd commented Feb 6, 2022

General Issue

Missing access to Kubernetes objects on EKS cluster creation

The Question

We're missing access to Kubernetes objects on EKS cluster creation.

[screenshots of the EKS console access error attached]

CDK CLI Version

2.0.0 (build 4b6ce31)

Framework Version

No response

Node.js Version

No response

OS

No response

Language

Typescript

Language Version

No response

Other information

No response

@robertd robertd added guidance Question that needs advice or information. needs-triage This issue or PR still needs to be triaged. labels Feb 6, 2022
@mvs5465

mvs5465 commented Feb 8, 2022

I was able to get past this by setting the mastersRole property equal to my IAM role. https://docs.aws.amazon.com/cdk/api/v2//docs/aws-cdk-lib.aws_eks-readme.html#masters-role
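For anyone hitting the same wall, a minimal sketch of that workaround (the construct ids and the role ARN below are placeholders, not from this issue):

```typescript
import * as iam from 'aws-cdk-lib/aws-iam';
import * as eks from 'aws-cdk-lib/aws-eks';

// Inside a Stack constructor: reference the IAM role you actually use
// in the console (placeholder ARN).
const consoleRole = iam.Role.fromRoleArn(this, 'ConsoleRole',
  'arn:aws:iam::123456789012:role/MyAssumedRole');

const cluster = new eks.Cluster(this, 'Cluster', {
  version: eks.KubernetesVersion.V1_21,
  // Maps this role to system:masters in the cluster's aws-auth ConfigMap.
  mastersRole: consoleRole,
});
```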

@peterwoodworth
Contributor

I'm not too sure what the context is here, was this working before and isn't working anymore? Which constructs are you using?

@peterwoodworth peterwoodworth added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. and removed needs-triage This issue or PR still needs to be triaged. labels Feb 8, 2022
@robertd
Contributor Author

robertd commented Feb 9, 2022

Sorry for being too vague with my original post @peterwoodworth (I've added a few more screenshots). AFAIK this has never worked. I'm using the Cluster construct. I tried creating a cluster using eksctl and I was able to see all the workloads and nodes in the console.

@robertd
Contributor Author

robertd commented Feb 9, 2022

@mvs5465 Thanks. I'll give it a try.

@robertd
Contributor Author

robertd commented Feb 9, 2022

@mvs5465 What does the default behavior mean, then? Will the masters role be created by default? If it is... how do we get it?
[screenshot attached]

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Feb 9, 2022
@pmkuny

pmkuny commented Feb 9, 2022

@robertd Are you using assumed credentials in the CLI and cdk deploy, by chance? I've run into something similar that's mentioned here #16888 - my understanding is that cdk deploy is using the deployment role instead of assumed credentials, so the Cluster is created with implicit admin set to that role, instead of any assumed role you're using in the CLI. In comparison, eksctl and Console creation use whatever assumed role you're executing with.

@robertd
Contributor Author

robertd commented Feb 10, 2022

@pmkuny Yes. I’m using assumed creds and I'll give that workaround mentioned in #16888 a try. Thanks for the suggestion.

@pmkuny

pmkuny commented Feb 18, 2022

@robertd Additionally, see this recently merged PR: #18963, which adds the ability to use a CLI-credentials-based synthesizer.

@robertd
Contributor Author

robertd commented Feb 19, 2022

@pmkuny Sweet. Looking forward to 2.13.0 release.

@robertd robertd closed this as completed Feb 19, 2022
@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

@robertd
Contributor Author

robertd commented Feb 21, 2022

@pmkuny I just tried 2.13.0 release and I'm still getting RBAC permission issues with my cluster.

const stack = new Stack(mockApp, 'testingKarpenterFargateEphemeralStack', {
  env,
  synthesizer: new CliCredentialsStackSynthesizer(),
});

[screenshot of the console access error attached]

@robertd robertd reopened this Feb 21, 2022
@pmkuny

pmkuny commented Feb 24, 2022

@robertd - Sorry to hear that. I think leaving this issue open then is appropriate.

Can I ask why you're trying to administer the EKS cluster with SSO credentials instead of using the vended role that CDK creates during cluster creation (assumable by the account principals)? After some more research, it seems that creating a separate role to administer the cluster (and potentially scoping down that CDK-created role) is the better practice. Just trying to understand your use case and how I can help.

@robertd
Contributor Author

robertd commented Mar 2, 2022

Hi @pmkuny. I am using cdk to create an EKS cluster after assuming the proper roles, and by default I'm seeing this message...

Your current user or role does not have access to Kubernetes objects on this EKS cluster. This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster's auth config map.

Can I ask why you're trying to administer the EKS cluster with SSO credentials instead of using the vended role that CDK creates during cluster creation (assumable by the account principals)?

Our GitLab runners have pre-created IAM roles that allow CDK to create EKS clusters for us. Unfortunately, we are not able to use CDK Pipelines yet (I'm not sure whether EKS cluster creation through CDK Pipelines would pick up the proper roles).

After some more research, it seems that this behavior of creating a separate role to administer the cluster (and additionally, potentially scoping down the separate role that CDK creates for EKS administration) is a better practice.

Would you mind providing a cdk example on how to do this?

@NGL321 NGL321 added @aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service bug This issue is a bug. needs-reproduction This issue needs reproduction. p2 and removed guidance Question that needs advice or information. labels Mar 7, 2022
@NGL321
Contributor

NGL321 commented Mar 7, 2022

Hey @robertd,

After looking through the comments I am having a hard time pinning down what is going on here. Out of consideration that this may be a problem with our EKS implementation I am going to treat it as a bug for now.

Could you provide an explicit sample of the code that is causing this error? Just a snippet of the resources responsible would be sufficient. Then I can dig a little deeper.

@pmkuny

pmkuny commented Mar 7, 2022

Someone from the CDK team, please correct me if I'm wrong here.

Hi @robertd -

  1. The reason you're seeing that error message in the Console is because, by default, the CDK uses a CloudFormation Execution role when deploying and this role is getting the implicit admin permissions. Per the documentation here:

When you create an Amazon EKS cluster, the AWS Identity and Access Management (IAM) entity user or role, such as a federated user that creates the cluster, is automatically granted system:masters permissions in the cluster's role-based access control (RBAC) configuration in the Amazon EKS control plane.

That means that the CloudFormation execution role is the one getting administrator permissions to the cluster, not your SSO-vended credentials, which is why you're seeing that error message in the AWS Console saying that you don't have access to the cluster.

To get around this, CDK also creates a role by default (see mastersRole) that is assumable by any principal in the account. Additionally, a CloudFormation Output is generated that lets you update your kubectl config with that role assumption (the names might be different, but here's an example):

aws eks update-kubeconfig --name InfrastructureCluster6F2DDFD7-e9c3d4982cb5418db407c5af74f892ec --region us-west-2 --role-arn arn:aws:iam::xxxxxxxxxx:role/AppStage-InfrastructureEk-InfrastructureClusterMas-FQ2JEKF3BTEC
  2. For your GitLab runners, you could do a couple of things:
    a. Role chaining: assume the default mastersRole that CDK creates for you from your GitLab IAM role, potentially scoping down its trust policy so that only the GitLab IAM role can assume it. By default, the trust policy allows any principal in the same account to assume it.
    b. Create an explicit IAM role using the IAM construct and pass it to the mastersRole property during cluster creation.
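A rough sketch of option (b), with the trust policy scoped so that only the runner can assume the role (the construct ids, account ID, and role names are hypothetical):

```typescript
import * as iam from 'aws-cdk-lib/aws-iam';
import * as eks from 'aws-cdk-lib/aws-eks';

// Inside a Stack constructor: an explicit masters role, assumable only by
// the GitLab runner role instead of every principal in the account.
const mastersRole = new iam.Role(this, 'EksMastersRole', {
  assumedBy: new iam.ArnPrincipal(
    'arn:aws:iam::123456789012:role/GitLabRunnerRole'), // placeholder ARN
});

const cluster = new eks.Cluster(this, 'Cluster', {
  version: eks.KubernetesVersion.V1_21,
  mastersRole,
});
```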

Recommendation
I would actually support the idea that the default implicit role being created be locked down so that ONLY cluster administrators within your organization could access it. I would then follow the instructions here to add permissions to the cluster RBAC system for your GitLab Runner IAM Roles. In my opinion, using the default administrator role to run your Runners, instead of creating an explicit role for them and adding them to the cluster, is a security risk.
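The "add permissions to the cluster RBAC system" step could be sketched with awsAuth.addRoleMapping, mapping the runner role into a non-admin group (the group, username, and ARN here are made up; the group still needs a Kubernetes RoleBinding defined separately):

```typescript
// Map the GitLab runner role into a custom RBAC group rather than
// system:masters; a (Cluster)RoleBinding for the "deployers" group
// then scopes what the runner may do inside the cluster.
const runnerRole = iam.Role.fromRoleArn(this, 'RunnerRole',
  'arn:aws:iam::123456789012:role/GitLabRunnerRole'); // placeholder ARN

cluster.awsAuth.addRoleMapping(runnerRole, {
  username: 'gitlab-runner',
  groups: ['deployers'],
});
```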

Finally, the fact that the new CliCredentialsStackSynthesizer is not working properly may be a separate bug for that feature.

@robertd
Contributor Author

robertd commented Mar 9, 2022

Thanks @pmkuny for the detailed overview. 🙌

@robertd
Contributor Author

robertd commented Mar 9, 2022

@NGL321 I'm not really doing anything fancy... just creating an EKS cluster with a custom nodegroup.

...
const cluster = new Cluster(stack, 'eks', {
  vpc,
  version: KubernetesVersion.V1_21,
  defaultCapacity: 0,
  vpcSubnets: [{
    subnetType: SubnetType.PRIVATE_WITH_NAT,
  }],
  tags,
});

const nodegroup = new Nodegroup(stack, 'NodeGroup', {
  cluster,
  minSize: 0,
  desiredSize: 1,
  instanceTypes: [
    InstanceType.of(InstanceClass.M5, InstanceSize.LARGE),
  ],
});

@NGL321 NGL321 removed their assignment Jun 24, 2022
@pahud
Contributor

pahud commented May 16, 2023

When we deploy the EKS cluster like this:

const cluster = new eks.Cluster(this, 'EksCluster', {
  vpc,
  version: eks.KubernetesVersion.V1_26,
  kubectlLayer: new KubectlLayer(this, 'KubectlLayer'),
  defaultCapacity: 0,
});

A cluster mastersRole will be created, which maps to the system:masters group in the cluster's RBAC configuration.

When you see the error in the console:
image

This means your current viewing IAM principal was not defined in the aws-auth ConfigMap.

We have some solutions here:

Option 1:

If you are viewing the console with an assumed role, add this role into the aws-auth system:masters by:

cluster.awsAuth.addMastersRole(your_assumed_role)
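`your_assumed_role` needs to be an `IRole`; one way to get a handle on an existing assumed role (the construct id and ARN are placeholders) is:

```typescript
// Reference the role you assume when viewing the console,
// then grant it system:masters.
const assumedRole = iam.Role.fromRoleArn(this, 'ViewerRole',
  'arn:aws:iam::123456789012:role/MyAssumedRole'); // placeholder ARN
cluster.awsAuth.addMastersRole(assumedRole);
```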

Make sure that the eks:AccessKubernetesApi and other necessary IAM permissions to view Kubernetes resources are assigned to the IAM principal that you're using. Check required permissions for more detail.

Consider writing a helper function like this:

function attachConsoleReadOnlyPolicies(scope: Construct, role: iam.Role) {
  // see https://docs.aws.amazon.com/eks/latest/userguide/view-kubernetes-resources.html#view-kubernetes-resources-permissions
  role.addToPolicy(new iam.PolicyStatement({
    actions: [
      'eks:ListFargateProfiles',
      'eks:DescribeNodegroup',
      'eks:ListNodegroups',
      'eks:ListUpdates',
      'eks:AccessKubernetesApi',
      'eks:ListAddons',
      'eks:DescribeCluster',
      'eks:DescribeAddonVersions',
      'eks:ListClusters',
      'eks:ListIdentityProviderConfigs',
      'iam:ListRoles',
    ],
    resources: ['*'],
  }));
  role.addToPolicy(new iam.PolicyStatement({
    actions: ['ssm:GetParameter'],
    resources: [Stack.of(scope).formatArn({
      service: 'ssm',
      resource: 'parameter',
      arnFormat: ArnFormat.SLASH_RESOURCE_NAME,
      resourceName: '*',
    })],
  }));
}

Attach required policies to your current role.

attachConsoleReadOnlyPolicies(this, your_assumed_role)

Option 2:

Create a custom mastersRole and pass it to props.mastersRole. Make sure this role has the necessary IAM policies and that your current IAM principal is allowed to assume it.

const mastersRole = new iam.Role(this, 'MastersRole', {
  assumedBy: new iam.AccountRootPrincipal(),
});

const cluster = new eks.Cluster(this, 'EksCluster', {
  vpc: getOrCreateVpc(this),
  version: eks.KubernetesVersion.V1_26,
  placeClusterHandlerInVpc: false,
  kubectlLayer: new KubectlLayer(this, 'KubectlLayer'),
  defaultCapacity: 0,
  mastersRole,
  outputMastersRoleArn: true,
});

attachConsoleReadOnlyPolicies(this, mastersRole);

Now, in the top-right corner of the EKS console, open the drop-down menu, click Switch Role, and enter the Account, Role, and Display name. Hit the Switch Role button and you are all set!

Make sure to switch back to your original identity when you leave the EKS console.

Option 3:

Create a dedicated role just for console browsing that any principal in the account can assume:

const cluster = new eks.Cluster(this, 'EksCluster', {
  vpc,
  version: eks.KubernetesVersion.V1_26,
  kubectlLayer: new KubectlLayer(this, 'KubectlLayer'),
});

// create a 2nd read-only masters role
const consoleAdminRole = new iam.Role(this, 'ConsoleReadOnlyRole', {
  assumedBy: new iam.AccountRootPrincipal(),
});

attachConsoleReadOnlyPolicies(this, consoleAdminRole);

// add this role to the system:masters RBAC group
cluster.awsAuth.addMastersRole(consoleAdminRole);
new CfnOutput(this, 'ConsoleReadOnlyRoleName', { value: consoleAdminRole.roleName });

Switch to this role in the console as described in Option 2.

@pahud pahud removed the needs-reproduction This issue needs reproduction. label May 16, 2023
@pahud pahud self-assigned this May 16, 2023
@pahud pahud added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label May 16, 2023
@github-actions

This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.

@github-actions github-actions bot added closing-soon This issue will automatically close in 4 days unless further comments are made. closed-for-staleness This issue was automatically closed because it hadn't received any attention in a while. and removed closing-soon This issue will automatically close in 4 days unless further comments are made. labels May 18, 2023
mergify bot pushed a commit that referenced this issue Jul 26, 2023
Improve the EKS doc in terms of the console access.

Closes #18843

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
bmoffatt pushed a commit to bmoffatt/aws-cdk that referenced this issue Jul 29, 2023
Improve the EKS doc in terms of the console access.

Closes aws#18843

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*