Pod setup error when using AWS: "CredentialRequiresARNError" #1262

Closed
phobos-dthorga opened this issue Nov 7, 2019 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@phobos-dthorga

Apologies if this has been posted already, but I couldn't find much on this particular error condition, if anything. I'm having a difficult time setting up external-dns via Rancher within Kubernetes, and I've tried pretty much everything I know at this point.

As you can see below [ 1, 2 ], I keep getting this error with my external-dns pods when trying to set up a configuration that makes use of AWS Route 53.

[ 1 ] - https://paste.gekkofyre.io/view/8e550193
[ 2 ] - https://imgur.com/a/5NYkR5R

You can also find my configuration at the link just below, with obvious private information omitted for security purposes.

[ 3 ] - https://paste.gekkofyre.io/view/5dbf5822#k1Qz8yTVKFOJL1s6y16uONn9l2fyikMh

Please note that we're making use of Rancher v2.3.2 to orchestrate our Kubernetes cluster, which currently consists of three nodes plus the Rancher controller. There is plenty of RAM, vCPU, and storage to go around, so I can't see any of that being an issue.

Our Kubernetes version is v1.16.2-00 across all nodes, and Docker itself is v5:18.09.9~3-0 as well. This is all running on the latest updated version of Ubuntu 18.04 LTS, which again is the same for all nodes and the Rancher controller itself. If anyone can offer assistance, it would be dearly appreciated. Thank you.

--
https://gekkofyre.io/
GekkoFyre Networks

@frittenlab

I am seeing exactly the same problem. This worked last week without a glitch. I am installing external-dns via the Helm chart. After downgrading the Helm chart to the image 0.5.16-debian-9-r8, everything works again. Before that, I was also seeing this error:

time="2019-11-08T16:23:39Z" level=fatal msg="CredentialRequiresARNError: credential type source_profile requires role_arn, profile default"

@cliedeman
Contributor

Ran into the same issue.

@frittenlab the downgrade worked for me too, thanks.

It's likely one of these PRs:
#1182
#1185
#1172

I will do a bisect if I get time.

@rivernews

Ran into the same issue when installing the external-dns chart via Helm. Looking at the external-dns pod log shows msg="CredentialRequiresARNError: credential type source_profile requires role_arn, profile default".

Can confirm the version mentioned by @ffledgling worked for me too. Our Kubernetes version is 1.14.8. For those using Terraform, you can lock the version to chart v2.6.1, which is the latest version using app version 0.5.16:

resource "helm_release" "project-external-dns" {
  name      = "external-dns"
  chart     = "stable/external-dns"
  version   = "v2.6.1"
  ...
}

@prageethw

Can confirm this appears after Helm chart version 2.10.1.

@acim

acim commented Dec 30, 2019

Same here: it works with Helm chart 2.10.1 and not with any later version. I am on the latest EKS version at the moment.

@Freyert

Freyert commented Jan 2, 2020

Looks like the S3 Terraform resource had a similar issue, and HashiCorp had to patch their aws-sdk-go wrapper to grab credentials differently.

Issue with helpful details: hashicorp/aws-sdk-go-base#4

hashicorp/aws-sdk-go-base#5

Ticket was reopened later: hashicorp/terraform#22732


ExternalDNS uses a command-line flag to assume the role:

https://github.com/helm/charts/blob/master/stable/external-dns/templates/deployment.yaml#L101-L103
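
Rendered, that works out to an extra container argument roughly like this (the ARN is a placeholder, and the exact surrounding args depend on your values):

args:
  - --provider=aws
  - --aws-assume-role=arn:aws:iam::123456789012:role/external-dns   # placeholder ARN supplied via the chart's assume-role value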

Where the flag is consumed and the role should be assumed:

if assumeRole != "" {
    log.Infof("Assuming role: %s", assumeRole)
    sess.Config.WithCredentials(stscreds.NewCredentials(sess, assumeRole))
}


Something worth trying:

Add the role ARN to the AWS config :)

https://github.com/helm/charts/blob/master/stable/external-dns/templates/_helpers.tpl#L113-L117
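
For illustration, a shared config where source_profile is legal because a role_arn is present might look roughly like this (profile names and the ARN are placeholders):

# ~/.aws/config (illustrative only; static credentials for "default" live in ~/.aws/credentials)
[default]
region = us-east-1

[profile external-dns]
role_arn = arn:aws:iam::123456789012:role/external-dns
source_profile = default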


Lol OK so my deployment isn't even trying to assume a role.


OK, so there's a second block where it tries to do this, which is definitely broken:

https://github.com/helm/charts/blob/master/stable/external-dns/templates/_helpers.tpl#L119-L123

For some reason the template doesn't take the first block, which would work. This second block tries to use source_profile without a role_arn.

OK, so here's how it happens:

Providing aws.region causes an AWS config to be attached with a [profile default] whose source_profile is default. I think what they're trying to do is refer to the credentials in the credentials file. If you omit aws.region, the config won't be attached.
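
In other words, the chart ends up rendering something roughly like the following (reconstructed for illustration), which the AWS SDK rejects because source_profile is only meaningful alongside role_arn:

# roughly what gets attached as the AWS config when aws.region is set
[profile default]
source_profile = default
region = us-east-1
# no role_arn here, hence "CredentialRequiresARNError: credential type source_profile requires role_arn, profile default"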

@Freyert

Freyert commented Jan 3, 2020

Fix delivered to the stable chart. Please confirm whether the newest version, 2.13.2, works for you.

🎉

@tomerleib

2.13.2 works!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 4, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 4, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
