
Helm Task printing all pods in cluster for each chart deployed in 'upgrade' or 'install' command #13175

Closed
SirAlvarex opened this issue Jun 24, 2020 · 4 comments


@SirAlvarex

Required Information

Entering this information will route you directly to the right team and expedite traction.

Question, Bug, or Feature?
Type: Bug

Enter Task Name: HelmDeploy

Task Version (V# not needed):
V0

Direct link:

pushDeploymentDataToEvidenceStore(kubectlCli, manifest, manifestUrls).then((result) => {

Environment

  • Server - Azure Pipelines
  • Agent - Hosted

Issue Description

On a successful upgrade, the Helm task appears to execute a set of commands to present the status of the cluster to the user, but the way this is done floods the logs on large deployments.

Starting at this line:

else if (command === "install" || command === "upgrade") {

The task appears to call helm to get the charts in a deployment. Then it loops through the charts and calls pushDeploymentDataToEvidenceStore. The first thing this does is list every pod in the cluster and send it to stdout.

We have a deployment that pushes out 91 charts. The end result is 91 repetitions of the following:

[command]/vsts/agent/_work/_tool/kubectl/1.18.4/x64/kubectl cluster-info

[command]/vsts/agent/_work/_tool/kubectl/1.18.4/x64/kubectl get pods -o json

This leads to us logging 1.3 million lines during a helm upgrade.
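For illustration, the loop boils down to something like this (a minimal sketch, not the task's actual code; the chart names are placeholders):

```typescript
import { execFileSync } from "child_process";

// Not the task's actual code: a minimal sketch of the per-chart loop.
// The chart names are placeholders; the task derives the real list from
// helm's output before pushing evidence data for each chart.
const charts: string[] = ["chart-1", "chart-2" /* ...through chart-91 */];

for (const chart of charts) {
    console.log(`pushing deployment evidence for ${chart}`);
    // Each iteration re-runs the same cluster-wide commands, so their
    // full output lands in the task log once per chart deployed.
    execFileSync("kubectl", ["cluster-info"], { stdio: "inherit" });
    execFileSync("kubectl", ["get", "pods", "-o", "json"], { stdio: "inherit" });
}
```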

It appears a PR merged 12 days ago is the direct cause of this: #12995

If I understand the change correctly, this previously would have happened only if --debug was attached as an optional argument.
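
In other words, I would have expected gating along these lines (a hypothetical sketch; HELM_ARGUMENTS just stands in for wherever the task reads its optional arguments):

```typescript
// Hypothetical sketch of that gating. HELM_ARGUMENTS is a stand-in for
// wherever the task actually reads its optional arguments input.
const extraArgs: string = process.env["HELM_ARGUMENTS"] ?? "";
const isDebug: boolean = extraArgs.split(/\s+/).includes("--debug");

if (isDebug) {
    // Only an explicit debug run would pay for kubectl cluster-info
    // and kubectl get pods -o json.
    console.log("collecting cluster status for --debug output");
}
```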

Task logs

I cannot attach the logs, as they are private to the company (internal Microsoft), but I hope the above makes sense.

@shigupt202
Contributor

@SirAlvarex We're sorry for the inconvenience. The issue is indeed caused by the PR you pointed out. Earlier, though, it would never have happened, whether --debug was used or not (because of a bug, this code path was never executed). It seems the commands kubectl cluster-info and kubectl get pods -o json are executed for every chart, which is not required. We'll work on fixing this as soon as possible.

shigupt202 self-assigned this Jun 25, 2020
@haljin commented Jun 26, 2020

At the same time, the task seems to print out the manifest, which includes secrets. Since we are using Helm to base64-encode the values of secrets, Azure DevOps does not recognize them as secrets and does not censor them in the output. That's a pretty major security breach, as it prints passwords out in the open.
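
To illustrate (a sketch with a made-up value, assuming Azure DevOps masks secrets by matching their literal value in the logs):

```typescript
// Made-up secret value, assuming the agent censors secrets by matching
// their literal text in the log stream.
const secret = "hunter2";                                  // registered as a pipeline secret
const encoded = Buffer.from(secret).toString("base64");    // "aHVudGVyMg=="

console.log(secret);   // matches the registered value: censored as "***"
console.log(encoded);  // different string: printed verbatim, leaking it
```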

@shigupt202
Contributor

Closing the issue. The fix should start rolling out next week.

@ghost commented Nov 12, 2021

This does not seem fixed. The install/upgrade command is still attempting to push my cluster data to the "EvidenceStore". The task doesn't fail, but it does print an error:

Release "my-app" does not exist. Installing it now.
NAME: my-app
LAST DEPLOYED: Fri Nov 12 15:34:40 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
/usr/bin/kubectl cluster-info
Error from server (Forbidden): services is forbidden: User "system:serviceaccount:default:azure-devops-agent" cannot list resource "services" in API group "" in the namespace "kube-system"

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

I don't wish to further expand my agent's permissions at this time, as my deployment succeeds regardless.
