
Garbage Collection for Assets #64

Closed
2 of 11 tasks
eladb opened this issue Dec 12, 2018 · 66 comments
Labels
bar-raiser/assigned · effort/medium (Reasonable effort required for implementation) · status/done (Implementation complete)

Comments

@eladb
Contributor

eladb commented Dec 12, 2018

Description

Assets uploaded to the CDK's S3 bucket and ECR repository are never deleted, which will incur long-term costs for users. We should come up with a story for how those assets can be garbage collected safely.

Initially we should offer cdk gc, which will track down unused assets (e.g. by tracing them back from deployed stacks) and offer to delete them. We can add an option to run this automatically after every deployment (either in the CLI or through CI/CD). Later we could even offer a construct that you deploy to your environment and it does this for you.

Proposed usage:

cdk gc [ENVIRONMENT...] [--list] [--type=s3|ecr]

Examples:

This command will find all orphaned S3 and ECR assets in a specific AWS environment and will delete them:

cdk gc aws://ACCOUNT/REGION

This command will garbage collect all assets in all environments that belong to the current CDK app (if cdk.json exists):

cdk gc

Just list orphaned assets:

cdk gc --list

Roles

Role             User
Proposed by      @eladb
Author(s)        @kaizen3031593
API Bar Raiser   @njlynch
Stakeholders     @rix0rrr @nija-at

See RFC Process for details

Workflow

  • Tracking issue created (label: status/proposed)
  • API bar raiser assigned (ping us at #aws-cdk-rfcs if needed)
  • Kick off meeting
  • RFC pull request submitted (label: status/review)
  • Community reach out (via Slack and/or Twitter)
  • API signed-off (label api-approved applied to pull request)
  • Final comments period (label: status/final-comments-period)
  • Approved and merged (label: status/approved)
  • Execution plan submitted (label: status/planning)
  • Plan approved and merged (label: status/implementing)
  • Implementation complete (label: status/done)

The author is responsible for progressing the RFC according to this checklist and applying the relevant labels to this issue so that the RFC table in the README gets updated.

@sam-goodwin

How do we generate keys when uploading? Just random?

If key generation were deterministic, we could use lifecycle rules for object versions: https://aws.amazon.com/about-aws/whats-new/2014/05/20/amazon-s3-now-supports-lifecycle-rules-for-versioning/

That way objects are only considered for deletion if they are not 'current'.
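
A minimal sketch of that idea in CDK TypeScript, assuming a hypothetical application-managed bucket standing in for the bootstrap staging bucket (which cdk bootstrap actually creates): with versioning enabled and deterministic keys, a lifecycle rule can expire only the non-current versions.

    import { App, Duration, Stack } from 'aws-cdk-lib';
    import { Bucket } from 'aws-cdk-lib/aws-s3';

    class AssetBucketStack extends Stack {
      constructor(scope: App, id: string) {
        super(scope, id);

        // Hypothetical bucket standing in for the CDK staging bucket.
        const assetsBucket = new Bucket(this, 'AssetsBucket', { versioned: true });

        // Keep the current version of every deterministic key, but expire
        // non-current versions 30 days after they are superseded.
        assetsBucket.addLifecycleRule({
          noncurrentVersionExpiration: Duration.days(30),
        });
      }
    }

    new AssetBucketStack(new App(), 'AssetBucketStack');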

@sam-goodwin

Now that I think about it, versioning would only help if we were regularly re-uploading files. That's probably not the case?

@eladb
Contributor Author

eladb commented Dec 12, 2018

Lifecycle rules sound like a good option for sure. The object keys are based on the hash of the asset's contents, to avoid re-uploading when the content hasn't changed (code).
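
For illustration only (this is not the CDK's exact hashing code; the real logic is in the linked source), a content-addressed key could be derived like this, so unchanged content always maps to the same key and is never re-uploaded:

    import { createHash } from 'crypto';
    import { readFileSync } from 'fs';

    // Hash the packaged asset; identical content yields an identical key,
    // so a repeat deploy of unchanged code uploads nothing new.
    function assetKey(assetZipPath: string): string {
      const hash = createHash('sha256').update(readFileSync(assetZipPath)).digest('hex');
      return `${hash}.zip`;
    }

    // Hypothetical local bundle path.
    console.log(assetKey('./lambda-bundle.zip'));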

@rix0rrr
Contributor

rix0rrr commented Dec 12, 2018

I'm not sure this works if there are two stacks, and only one is deployed with a new version of the asset.

In my mind, the other stack should still point to the old version of the asset (should not be automatically updated), but now the asset will be aged out after a while and the undeployed stack will magically break after a certain time.


Alternative idea (but much more involved and potentially not Free Tier): a Lambda that runs every N hours, enumerates all stacks, detects the assets that are in use (from stack parameters or in some other way), and clears out the rest?

@rix0rrr
Contributor

rix0rrr commented Dec 12, 2018

> Now that I think about it, versioning would only help if we were regularly re-uploading files. That's probably not the case?

I thought you meant expiration for non-current versions. The latest version still stays alive indefinitely.

But I think this runs afoul of reuse across stacks.

@eladb
Contributor Author

eladb commented Dec 12, 2018

👍 on a garbage-collecting Lambda that runs every week or so, with the ability to opt out and some cost warnings in the docs and online

@sam-goodwin

Seems risky. What if a deployment happens while crawling?

@eladb
Contributor Author

eladb commented Dec 12, 2018

Yeah, perhaps only collect old assets (a month old), and we could salt the object key so that once a whole month has passed, it becomes a new object?

Thinking out loud.... requires a design

@eladb eladb transferred this issue from aws/aws-cdk Jan 22, 2020
@eladb eladb added the effort/medium Reasonable effort required for implementation label Jan 22, 2020
@eladb eladb changed the title assets: garbage collection Asset Garbage Collector Jan 27, 2020
@eladb eladb added the status/proposed Newly proposed RFC label Jan 27, 2020
@eladb eladb changed the title Asset Garbage Collector Garbage Collection for Assets Feb 4, 2020
@thekevinbrown

I've got quite a few assets in my bucket now after a month or so of deploying from CD.

How do I determine which ones are in use, even manually? I can't seem to figure out the correlation between the names of the files in S3 and anything else I could use to determine what's being used. The lambdas don't point back at them in any way I can see.

I want to eventually write a script to do this safely for my use case, but absent a way of telling what's being used I'm stuck.

@ansman

ansman commented Feb 11, 2021

S3 now has lifecycle rules that can automatically delete objects a number of days after creation which might be a solution too.

@mlafeldt

mlafeldt commented Feb 11, 2021

> How do I determine which ones are in use, even manually?

The ones in use are those referenced in active CloudFormation stacks deployed via CDK.

Those stack templates will include something like this:

    "GenFeedFunc959C5085": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": {
            "Fn::Sub": "cdk-xxx-assets-${AWS::AccountId}-${AWS::Region}"
          },
          "S3Key": "5946c35f6797cf45370f262c1e5992bc54d8e7dd824e3d5aa32152a2d1e85e5d.zip"
        },

> S3 now has lifecycle rules that can automatically delete objects a number of days after creation which might be a solution too.

Unfortunately, that won't help, since old objects might still be in use, e.g. when a Lambda hasn't been deployed in a while.

(It doesn't help that all assets are stored in the same "folder" either.)

@ansman

ansman commented Feb 11, 2021

Doesn't lambda copy the resources during deployment?

@mlafeldt

> Doesn't lambda copy the resources during deployment?

The Lambda service will cache functions, but AFAIK there's no guarantee that they will be cached forever.

@ansman

ansman commented Feb 11, 2021

I'm fairly certain that Lambda only reads the assets during deployment and that they aren't needed afterwards. For example, you can deploy smaller assets to Lambda without using S3, and those aren't stored there.

@mlafeldt

Would love to read about the behavior in the docs somewhere. That 50 MB upload limit must exist for a reason. Haven't found anything so far though.

@ansman

ansman commented Feb 11, 2021

I can't find any concrete resources on this, but I haven't found any docs saying that you cannot delete the object after deployment. Also, my Lambdas don't have any permission to read my staging bucket, nor do they mention that the object came from S3, so I doubt the object needs to be kept around.

@kjpgit

kjpgit commented Apr 7, 2021

I would suggest writing S3 objects as transient/YYYY/MM/DD/HASH.zip and having a lifecycle policy to remove transient/* files after 3 days. You'd get caching for builds done in the same day. This is similar to the salted hash suggested, but a lot more explicit, observable, and not subject to collisions. Also, the hash could stay the same day to day, so as not to trigger a spurious Lambda redeploy.

The main issue here is that you need to pick your date prefix only once, and stick with it, for the whole build process. You don't want to upload files at Tue 23:59 and have a later process/function look under Wednesday for the object. Perhaps just having a --transient-asset-prefix argument to cdk deploy would be enough?
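
The lifecycle half of option 1 could look roughly like this in CDK TypeScript (a sketch assuming an application-managed bucket; the prefix and retention values are illustrative):

    import { App, Duration, Stack } from 'aws-cdk-lib';
    import { Bucket } from 'aws-cdk-lib/aws-s3';

    class TransientAssetBucketStack extends Stack {
      constructor(scope: App, id: string) {
        super(scope, id);

        const bucket = new Bucket(this, 'TransientAssets');

        // Objects are written under transient/YYYY/MM/DD/HASH.zip; anything
        // under the prefix older than 3 days is removed. Keys written today
        // are fresh immutable objects, so there is no race with deploys.
        bucket.addLifecycleRule({
          prefix: 'transient/',
          expiration: Duration.days(3),
        });
      }
    }

    new TransientAssetBucketStack(new App(), 'TransientAssetBucketStack');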

Option 2 for S3 is to check the last-modified date on objects, and just force a re-upload every N days. Then you could have a lifecycle policy to delete old objects (> N+1 days), and that shouldn't race with deploys. You'd need to be 100% sure the re-upload doesn't race with S3's lifecycle process, however. That's why I prefer the immutable objects in option 1; there is no chance of a race.

ECR is different because it's not transient, it is the long term backing store for Lambda. Just spitballing here, but if there was a "transient ECR repo" with a 7 day deletion policy, you could push new builds to that, and then during cloudformation deploy, those images would then be "copied/hardlinked" to a "runtime ECR repo" with lifetime managed by cloudformation, e.g. removed upon stack update/delete. Maybe the same thing could be accomplished if cloudformation could set ECR Tags that "pin" images in the transient repo that are in use, and tagged images are excluded from lifecycle rules. However, to avoid races, builds have to push something (e.g. at least some tiny change) to the transient ECR repo to refresh the image age (at least if the image is older than a day), so it won't be deleted right before cloudformation starts tracking / pinning it.
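
A rough sketch of the "transient ECR repo with a 7-day deletion policy" part in CDK TypeScript (repository name and retention are illustrative; the promotion/pinning half would still need the CloudFormation-side support described above):

    import { App, Duration, Stack } from 'aws-cdk-lib';
    import { Repository, TagStatus } from 'aws-cdk-lib/aws-ecr';

    class TransientEcrStack extends Stack {
      constructor(scope: App, id: string) {
        super(scope, id);

        // Hypothetical "transient" repository that builds push into; any image
        // older than 7 days expires, so only recently pushed images survive.
        const repo = new Repository(this, 'TransientImages');
        repo.addLifecycleRule({
          tagStatus: TagStatus.ANY,
          maxImageAge: Duration.days(7),
        });
      }
    }

    new TransientEcrStack(new App(), 'TransientEcrStack');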

@polothy

polothy commented May 6, 2021

Word of caution: doing just a time based deletion on assets is a little risky. We have had this scenario play out:

  • A stack is deployed and is healthy.
  • The stack's assets are removed from S3.
  • The stack is then updated at a later date (new assets in S3), but the update fails and triggers a rollback.
  • The rollback fails because CFN looks for the old assets in the prior template.
  • This puts your stack into the UPDATE_ROLLBACK_FAILED state. For us, we were able to get out of it by carefully skipping some rollback on resources. It's a rather scary state to have a stack in TBH.

So, unsure how you would even accomplish this, but ideally don't delete any S3 assets that are referenced in existing CFN templates.

@polothy

polothy commented May 6, 2021

> So, unsure how you would even accomplish this, but ideally don't delete any S3 assets that are referenced in existing CFN templates.

Sorry, was reading quickly - sounds like the RFC is going to try to do this, so yay!

@mrpackethead
Contributor

No need for the aggressive tone; we don't know if @awsmjs has any authority over hiring.

But I do agree that, since CDK is a tool of the for-profit AWS product, its maintainers should aim higher than the average volunteer-run open-source project.

I understand that staffing is a constraint and priorities have to be set, but the CDK team should then inform whoever is responsible for staffing that the team's size is inadequate for its responsibilities. And put such tickets on a backlog instead of closing them as if they do not represent a real need from the customers who are, in the end, the ones paying for this.

People have been polite for long enough. We spend millions of dollars a year with AWS. Time to put customers' interests first.


@eddie-atkinson

Look, folks, I appreciate that this is frustrating, but can we please not flame someone who's trying to do their job.

After a few months of reflection the solution for my use case of this construct was to separate the build and deploy stages of my service.

Essentially I had two CDK stacks: one provisioned an ECR repo where I could apply a lifecycle policy to images, and the other deployed my service.

Then, when it came to deployment, I simply deployed my first stack, parsed out the ECR repo ARN, and piped it into the call to cdk deploy for the other stack using CDK config. That way no CloudFormation dependency was created between the two stacks, but I could still build and deploy an image to a service in one fell swoop. This also has the neat side effect that if you need to redeploy your service due to a config change, you don't need to rebuild your image, which was the most time-consuming part of the process for me.

This approach is more in line with the build, release, run philosophy of the 12 Factor App, which I broadly agree with.
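
A minimal sketch of that wiring, assuming a hypothetical repoArn context key passed with -c on the second cdk deploy (the construct and stack names are illustrative):

    import { App, Stack } from 'aws-cdk-lib';
    import { Repository } from 'aws-cdk-lib/aws-ecr';

    const app = new App();

    // Stack 1: owns the ECR repository (and its image lifecycle policy).
    class BuildStack extends Stack {
      constructor(scope: App, id: string) {
        super(scope, id);
        new Repository(this, 'ServiceImages');
      }
    }

    // Stack 2: consumes the repository by ARN passed through CDK context,
    // so no CloudFormation dependency is created between the two stacks.
    class ServiceStack extends Stack {
      constructor(scope: App, id: string) {
        super(scope, id);
        // e.g. cdk deploy ServiceStack -c repoArn=arn:aws:ecr:...
        const repoArn = this.node.tryGetContext('repoArn');
        const repo = Repository.fromRepositoryArn(this, 'ServiceImages', repoArn);
        // ... define the service here, pointing at an image tag in `repo`.
      }
    }

    new BuildStack(app, 'BuildStack');
    new ServiceStack(app, 'ServiceStack');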

@evgenyka evgenyka reopened this Dec 18, 2023
@awsmjs awsmjs removed the roadmap label Dec 18, 2023
@evgenyka evgenyka added roadmap status/planning RFC implementation is being planned and removed status/proposed Newly proposed RFC status/accepted labels Dec 18, 2023
@kornicameister

Personally, I refined the default bootstrap solution that prepares accounts with the CDK toolkit. That includes reasonable retention policies, and I find it does a very good job.

@vkdot

vkdot commented Mar 7, 2024

It seems several solutions clean up the CDK staging bucket by listing deployed CloudFormation templates. This might not be ideal when multiple CDK projects are used in a single AWS account and we want a different setup per project.

I have tried a different approach that modifies the DefaultStackSynthesizer to use an S3 object key prefix and leverages the cloud assembly output from the CDK app's synth method to determine which assets should be kept after a successful deployment.

The full description and example Java code is at https://github.com/NewTownData/events-monolith/blob/main/infrastructure/docs/clean-up.md
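
The per-project prefix part of that approach looks roughly like this in CDK TypeScript (a sketch; the prefix value and stack name are illustrative, and the linked write-up has the full Java version):

    import { App, DefaultStackSynthesizer, Stack } from 'aws-cdk-lib';

    const app = new App();

    // Stage this project's file assets under their own key prefix in the
    // bootstrap bucket, so a per-project cleanup job can scope itself to it.
    new Stack(app, 'EventsMonolithStack', {
      synthesizer: new DefaultStackSynthesizer({
        bucketPrefix: 'events-monolith/',
      }),
    });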

@revmischa

Just speaking for myself here, I have a small team and just noticed we're wasting 500GB on CDK assets in various buckets.
I emptied the one for dev envs and everything seems to still work fine.
I think it can be solved with a lifecycle policy on the bucket. CDK itself creates the bucket, so maybe it could just apply some policy. Maybe just delete anything over 6 months old, to be conservative?

@kaizencc
Contributor

hi everyone, some updates for this feature:

  • aws/aws-cdk#31611 is released in CDK 2.163.0 under an --unstable flag :)
  • aws/aws-cdk#31841 is under development but should be out next week

between the two of them and any follow ups, we should be able to close out this feature as completed.

kaizencc added a commit that referenced this issue Oct 30, 2024
This is a request for comments about Asset Garbage Collection. See #64 for additional details.

APIs are signed off by @rix0rrr .

---

_By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license_

---------

Co-authored-by: Rico Hermans <rix0rrr@gmail.com>
@kaizencc
Contributor

The RFC for this issue has been approved and merged (#379). The implementation has also been completed and should be fully released this week in CDK 2.165.0 under the --unstable flag for the foreseeable future. Please let me know if anyone runs into issues when using it!

@kaizencc kaizencc added status/done Implementation complete and removed status/planning RFC implementation is being planned labels Nov 4, 2024
@TiagoVentosa

@kaizencc Very excited for this new feature, and just tried to run it now and...

npx cdk gc --unstable=gc --rollback-buffer-days 5
 ⏳  Garbage Collecting environment aws://<ACCOUNT>/eu-west-1...
Error refreshing stacks: AccessDenied: User: arn:aws:sts::<ACCOUNT>:assumed-role/<ROLE> is not authorized to perform: cloudformation:GetTemplateSummary on resource: arn:aws:cloudformation:eu-west-1:<ACCOUNT>:stack/<ORGANIZATION-STACK> with an explicit deny in a service control policy

My first instinct is to say that the gc command should probably ignore stacks that it doesn't have access to, but leave a warning in the console. Maybe even have a parameter to control this behaviour?

Should I open a separate ticket for this?

@kaizencc
Contributor

kaizencc commented Dec 3, 2024

#32322

@BwL1289

BwL1289 commented Dec 30, 2024

Thanks for building this feature. FYI - 5th day in a row running it and still going. Stops every 12 hours because my token gets revoked:

[19364.41%] 260064 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.

It would be great to fix the percentage and assets-deleted counters so they show real-time updates.

I can confirm it's working if I go to my ECR and S3 assets and count the number of assets now vs. what they were before I began.

@BwL1289

BwL1289 commented Dec 31, 2024

After more investigating, it looks like the ECR garbage collection is analyzing the same image IDs over and over.

Has anyone else seen this behavior?

@blimmer

blimmer commented Dec 31, 2024

I think this was just fixed yesterday: aws/aws-cdk#32679 / aws/aws-cdk#32498

@BwL1289

BwL1289 commented Dec 31, 2024

Thanks for the heads up. Will keep an eye out for a new release.

@BwL1289

BwL1289 commented Jan 5, 2025

Still seeing unexpected behavior.

In one account I see the following:

...
...
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[10:33:17] [sdk info] ECR.PutImage({"repositoryName":"cdk-hnb659fds-container-assets-<redacted>-<redacted>","imageDigest":"sha256:<redacted>","imageManifest":"{\n   \"schemaVersion\": 2,\n   \"mediaType\": \"<redacted>\",\n   \"config\": {\n      \"mediaType\": \"<redacted>\",\n      \"size\": 5901,\n      \"digest\": \"sha256:<redacted>\"\n   },\n   \"layers\": [\n      {\n         \"mediaType\": \"<redacted>\",\n         \"size\": 51425992,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 123664308,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 12743115,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 91031203,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 213512017,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 9206109,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 11062104,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 45374,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 29905,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 1361,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 346,\n         \"digest\": \"sha256:<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 220,\n         \"digest\": \"sha256:<redacted>\"\n      }\n   ]\n}","imageTag":"443-aws-cdk.isolated-<redacted>"}) -> OK
[10:33:17] Tagged 444 assets
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[10:33:17] Reading cached notices from /Users/benjaminleff/.cdk/cache/notices.json
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[57.46%] 1468 files scanned: 1362 assets (2105.36 GiB) tagged, 0 assets (0.00 GiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.
[100.00%] 4762 files scanned: 0 assets (0.00 MiB) tagged, 0 assets (0.00 MiB) deleted.

This continues seemingly indefinitely. It's also strange that the verbose flag stops working.

In another account I see the following:

...
...
[99.73%] 1839 files scanned: 2 assets (0.34 MiB) tagged, 0 assets (0.00 MiB) deleted.
[09:46:49] [sdk info] ECR.PutImage({"repositoryName":"cdk-hnb659fds-container-assets-<redacted>-<redacted>","imageDigest":"sha256:<redacted>
<redacted>","imageManifest":"{\n   \"schemaVersion\": 2,\n   \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\",\n   \"config\": {\n      \"mediaType\": \"application/vnd.docker.container.image.v1+json\",\n      \"size\": 9791,\n      \"digest\": \"sha256:<redacted>
<redacted>\"\n   },\n   \"layers\": [\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 51425992,\n         \"digest\": \"sha256:<redacted>
<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 6857210421,\n         \"digest\": \"sha256:<redacted>
<redacted>\"\n      },\n      {\n         \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\",\n         \"size\": 236,\n         \"digest\": \"sha256:<redacted>
<redacted>\"\n      }\n   ]\n}","imageTag":"186-aws-cdk.isolated-<redacted>"}) -> OK
[09:46:51] Tagged 187 assets
[21.78%] 294 files scanned: 187 assets (405.60 GiB) tagged, 0 assets (0.00 GiB) deleted.
[99.73%] 1839 files scanned: 2 assets (0.34 MiB) tagged, 0 assets (0.00 MiB) deleted.
[09:46:51] Reading cached notices from /Users/benjaminleff/.cdk/cache/notices.json
[99.73%] 1839 files scanned: 2 assets (0.34 MiB) tagged, 0 assets (0.00 MiB) deleted.
[99.73%] 1839 files scanned: 2 assets (0.34 MiB) tagged, 0 assets (0.00 MiB) deleted.
[99.73%] 1839 files scanned: 2 assets (0.34 MiB) tagged, 0 assets (0.00 MiB) deleted.
[99.73%] 1839 files scanned: 2 assets (0.34 MiB) tagged, 0 assets (0.00 MiB) deleted.
[99.73%] 1839 files scanned: 2 assets (0.34 MiB) tagged, 0 assets (0.00 MiB) deleted.
[99.73%] 1839 files scanned: 2 assets (0.34 MiB) tagged, 0 assets (0.00 MiB) deleted
...

This also continues seemingly indefinitely. Same with verbose flag behavior not working as expected.

It's possible I'm not allowing the operation to continue for long enough.

Any ideas?

@blimmer

blimmer commented Jan 5, 2025

Since this feature is past the RFC now, I think it'd be better to open bugs in the CDK repo. This RFC has a lot of subscribers and is likely not being actively monitored by maintainers anymore.

@BwL1289

BwL1289 commented Jan 5, 2025

@blimmer thanks. Opened aws/aws-cdk#32742.
