chore(release): 2.89.0 #26566
Merged
Uses a fake file system in `aws-lambda-nodejs` tests to avoid intermittent test failures when attempting to access the real file system. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ine (#26443) Expose `stateMachineRevisionId` as a readonly property on `StateMachine`, whose value is a reference to the `StateMachineRevisionId` attribute of the underlying CloudFormation resource. Closes #26440
…26455) Makes it less confusing
…or of trigger failures (#26450) `TriggerCustomResourceProvider` takes only the status code of the Invoke API call into account. https://github.com/aws/aws-cdk/blob/7a6f953fe5a4d7e0ba5833f06596b132c95e0709/packages/aws-cdk-lib/triggers/lib/lambda/index.ts#L69-L73 If `invocationType` is `EVENT`, the Lambda function is invoked asynchronously. In that case, if the function is invoked successfully, the trigger will succeed regardless of the result of the function execution. I added this consideration to the README. Closes #26341
Co-authored-by: Kaizen Conroy <36202692+kaizencc@users.noreply.github.com>
Add missing Aurora Engine Version 3_02_3. See Aurora docs at https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.3023.html
…o-op test (#26457) Test was a no-op due to a typo.
Policy names with slashes (`/`) are not allowed when bootstrapping. For example: ``` cdk bootstrap --custom-permissions-boundary aaa/bbb ``` would fail with: ``` Error: The permissions boundary name aaa/bbb does not match the IAM conventions. ``` This fix allows specifying paths in the policy name. Closes #26320.
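The shape of the fix can be sketched as follows. This is a hypothetical validation helper, not the actual CDK code, and the character class only approximates IAM's naming rules; the point is that the pattern now accepts optional path segments before the policy name.

```typescript
// Hypothetical sketch of the relaxed check: zero or more "/"-separated
// path segments, followed by the policy name itself.
const POLICY_NAME_WITH_PATH_RE = /^([\w+=,.@-]+\/)*[\w+=,.@-]{1,128}$/;

function isValidBoundaryName(name: string): boolean {
  // Before the fix, only the bare-name form (no "/") was accepted.
  return POLICY_NAME_WITH_PATH_RE.test(name);
}
```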
…26297) The reported issue with `diff` is that it returns a failure status when there are actual diffs, making it hard for pipeline automations to know what happened. The proposed solution adds a logging statement, in a format similar to the one used for deploy (which reports the total time), specifying how many stacks have differences, as presented below. As a result, pipelines can check for this logging statement to correctly distinguish the cases where there are no actual changes, where there are changes, and where there are failures, since on failures this statement will not be present: Case of no changes: ✨ Number of stacks with differences: 0 Case of changes in 5 stacks: ✨ Number of stacks with differences: 5 Closes #10417.
…26466) Ran into an issue where the construct trace was incorrect in larger projects, specifically where there are constructs that contain multiple constructs. To get the construct trace tree we first construct a new tree that only contains the path to the terminal construct (the one with the trace). We drop all the other constructs in the tree that don't relate. For example, if we had a tree like: ``` ->App -->MyStage --->MyStack1 ---->MyConstruct ----->Resource --->MyStack2 ---->MyConstruct ----->Resource --->MyStack3 ---->MyConstruct ----->Resource ``` And we want to get a new tree for `/App/MyStage/MyStack2/MyConstruct/Resource`, it should look like this: ``` ->App -->MyStage --->MyStack2 ---->MyConstruct ----->Resource ``` We weren't generating the new tree correctly and would always end up with the tree being the first item in the list. I've updated one of the tests, and also tested this on a more complex application.
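The pruning described above can be sketched roughly like this. The types and function name are illustrative, not the actual CDK internals; the sketch shows why matching on the path segment (rather than taking the first child) keeps the right branch.

```typescript
// Illustrative tree shape, not the real CDK tree metadata.
interface TreeNode {
  id: string;
  children: TreeNode[];
}

// Keep only the chain of nodes along `path`, e.g.
// ['App', 'MyStage', 'MyStack2', 'MyConstruct'].
function pruneToPath(node: TreeNode, path: string[]): TreeNode | undefined {
  if (node.id !== path[0]) {
    return undefined; // this branch is not on the path; drop it
  }
  if (path.length === 1) {
    return { id: node.id, children: [] }; // reached the terminal construct
  }
  for (const child of node.children) {
    const pruned = pruneToPath(child, path.slice(1));
    if (pruned !== undefined) {
      // The bug was effectively always keeping the first child here
      // instead of the child that matches the next path segment.
      return { id: node.id, children: [pruned] };
    }
  }
  return undefined;
}
```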
…refix (#26324) Optionally specify a prefix for the staging stack name, which will be baked into the stackId and stackName. The default prefix is `StagingStack`.
Implements `IGrantable` for cases where permissions need to be granted to a `Service` instance. For example: ``` declare const bucket: IBucket; const service = new apprunner.Service(this, 'Service', { source: apprunner.Source.fromEcrPublic({ imageConfiguration: { port: 8000 }, imageIdentifier: 'public.ecr.aws/aws-containers/hello-app-runner:latest', }), }); bucket.grantRead(service); ``` Closes #26089.
… allowed to be specified as subscription and dead-letter queue (#26110) To send messages from SNS to SQS queues encrypted with KMS, we need to grant the SNS service principal access to the key via the key policy. For this reason, we need to use a customer managed key, because we can't edit the key policy of an AWS managed key. However, CDK makes it easy to create such a non-functional subscription. To prevent CDK from creating such a subscription, I added a validation that throws an error when an SQS queue encrypted with the AWS managed KMS key is specified as a subscription or dead-letter queue. Closes #19796
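The essence of the validation can be sketched like this. The shapes and names are assumptions for illustration, not the real `aws-sns-subscriptions` code; the real check inspects the queue's KMS key, but the decision it makes is the same.

```typescript
// Simplified stand-in for the queue's encryption configuration.
interface QueueLike {
  encryptionType?: 'KMS_MANAGED' | 'KMS' | 'SQS_MANAGED' | 'UNENCRYPTED';
}

function validateQueueForSubscription(queue: QueueLike): void {
  // An AWS managed key's policy cannot be edited, so SNS can never be
  // granted access to it; fail at synth time instead of deploying a
  // subscription that silently drops messages.
  if (queue.encryptionType === 'KMS_MANAGED') {
    throw new Error(
      'SQS queue encrypted with the AWS managed KMS key cannot receive messages from SNS; '
      + 'use a customer managed key instead');
  }
}
```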
Fix issue with the order of the `-f` flag and the file path in the `rm` command of the `pnpm` esbuild bundling step, which removes `node_modules/.modules.yaml` from the output dir. This continues to cause the bundling step to fail for `pnpm` >= 8.4.0 with no external `node_modules` specified, per issue #26478. Solved by moving the `-f` flag before the file path in the `rm` command and updating the relevant unit test. Please note that I haven't adjusted the `del` command for the Windows env, as I'm not sure whether the same issue occurs there. Exemption Request: No changes to the integration test output of `aws-lambda-nodejs/test/integ.dependencies-pnpm.js`, and I don't feel this warrants a separate integration test. Closes #26478.
…s not include `Lambda.ClientExecutionTimeoutException` default Retry settings (#26474) According to the documentation, the best practice for Step Functions that invoke a Lambda function is as follows. https://docs.aws.amazon.com/step-functions/latest/dg/bp-lambda-serviceexception.html ``` "Retry": [ { "ErrorEquals": [ "Lambda.ClientExecutionTimeoutException", "Lambda.ServiceException", "Lambda.AWSLambdaException", "Lambda.SdkClientException"], "IntervalSeconds": 2, "MaxAttempts": 6, "BackoffRate": 2 } ] ``` I have made changes to align with the official documentation. Closes #26470.
Add support for [geolocation routing](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-geo.html) in Route53 RecordSets. This PR adds attribute `geoLocation` to `RecordSetOptions` and new class `GeoLocation`. This enables developers to use geolocation routing. The new feature can be used like this (more examples in README): ```ts new route53.ARecord(this, 'ARecordGeoLocationContinent', { zone: myZone, target: route53.RecordTarget.fromIpAddresses('1.2.3.0', '5.6.7.0'), geoLocation: route53.GeoLocation.continent('EU'), // Europe }); ``` Closes #9478.
This PR contains the implementation of `ScheduleGroup`. A `Schedule` is the main resource in Amazon EventBridge Scheduler; this PR adds `ScheduleGroup`, which can be used to group Schedules and on which a Schedule depends. Every AWS account comes with a default group for schedules. Customers can also create custom groups to organise schedules that share a common purpose or belong to the same environment. `Schedule` has a property `group` that determines which group the schedule is associated with. To be able to test adding schedules to the group, I added a property `group` to the private class `Schedule` and used `Lazy` functionality to update the schedule's `group` dynamically. Implementation is based on the RFC: https://github.com/aws/aws-cdk-rfcs/blob/master/text/0474-event-bridge-scheduler-l2.md Advances #23394
…stance.serverlessV2` (#26472) **Context** A recent feature release #25437 added support for Aurora Serverless V2 cluster instances. This change also introduced a new approach for defining cluster instances, deprecating `instanceProps` in the process. The new approach uses `ClusterInstance.provisioned()` and `ClusterInstance.serverlessV2()` to define instances and their parameters on a per-instance basis. A migration flag `isFromLegacyInstanceProps` was also added to the `ClusterInstance.provisioned()` constructor props to allow migration to this new approach without destructive changes to the generated CFN template. **Bug** Because the `DatabaseCluster` construct has not previously had official support for Serverless V2 instances, the same migration flag was not made available for `ClusterInstance.serverlessV2()`. This ignores the fact that many people have already provisioned serverless v2 instances using a common workaround described in #20197 (comment). People who have used this method previously have no clean migration path. This was previously raised in #25942. **Fix** This fix simply exposes the `isFromLegacyInstanceProps` flag on **both** `ProvisionedClusterInstanceProps` and `ServerlessV2ClusterInstanceProps`. The behaviour for this flag is already implemented and applied across both instance types, so this is a type-only change. I have however added a test to capture this upgrade path for Serverless V2 instances from the common workaround. Closes #25942.
Built the PR off of #26486 to hopefully avoid merge conflicts if we merge the other first
… accessed from a cross-env stack (#26308) Currently, `Secret.secretFullArn` returns the partial ARN if the secret is referenced between cross-env stacks. An obvious practical implication is that `grant*` methods will produce an incorrect ARN for the IAM policies, since a full ARN is required for cross-environment access. This PR partially fixes the issue: I reimplemented `arnForPolicies` to be lazily evaluated. It checks if the value is being accessed across environments and adds `-??????` to the ARN if it is. Now, this does not solve the underlying issue of `secretFullArn` returning the partial ARN. While it should return `undefined`, we have to check how the prop is accessed (same environment or cross-env) before we know whether to return the ARN or `undefined`. If we use a `Lazy` here, it still cannot return `undefined` (only `any` as the closest thing). So I don't think the underlying cause can be solved currently, which is why I opted for this partial fix that addresses the most practical consequence of the bug. This is a partial fix for #22468.
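The idea behind the `-??????` suffix can be sketched as follows. The helper below is hypothetical, not the real `Secret` internals: a full Secrets Manager ARN ends in a random six-character suffix, so appending six `?` wildcards to the partial ARN lets an IAM policy match the full ARN across environments.

```typescript
// Hypothetical helper illustrating the lazily-evaluated decision:
// same-env references can use the partial ARN directly, while
// cross-env policies get a wildcard for the unknown 6-char suffix.
function arnForPolicies(partialArn: string, isCrossEnv: boolean): string {
  return isCrossEnv ? `${partialArn}-??????` : partialArn;
}
```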
…validation (#26193) Setting `maxHealthyPercent` to a non-integer value was not raising synth-time errors, but was generating invalid CFN templates. This fix adds validation for both `maxHealthyPercent` and `minHealthyPercent`. Closes #26158.
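A minimal sketch of this kind of synth-time check is below. The function name and message are illustrative, not the exact `aws-ecs` implementation; the point is that the integer constraint is enforced at synth time rather than surfacing as an invalid CFN template later.

```typescript
// Reject non-integer percentages before they reach the CFN template.
function validateHealthyPercent(name: string, value?: number): void {
  if (value !== undefined && !Number.isInteger(value)) {
    throw new Error(`${name} must be an integer; got ${value}`);
  }
}
```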
…26349) This change adds the ability to specify the [`tags`](https://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html#:~:text=Required%3A%20No-,tags,-The%20tags%20that) property in the `BatchSubmitJob` construct. Closes #26336.
Adds support for [Neptune serverless](https://docs.aws.amazon.com/neptune/latest/userguide/neptune-serverless-using.html). Example of how to launch a Neptune serverless cluster: ``` new DatabaseCluster(stack, 'Database', { vpc, instanceType: InstanceType.SERVERLESS, clusterParameterGroup, removalPolicy: cdk.RemovalPolicy.DESTROY, serverlessScalingConfiguration: { minCapacity: 1, maxCapacity: 5, }, }); ``` Closes #26428
This is necessary for the new automation that handles community reviews. Thanks @tmokmss!
We discussed the need to document `exclude` patterns using a negation in [the PR](#26365 (comment)). I also could not find documentation for the `exclude` property itself, so I added it.
The SSM parameter does not offer any al2022 AMIs, as described in #26274. This PR marks `latestAmazonLinux2022()` as deprecated and uses `latestAmazonLinux2023()` instead. - [x] mark `latestAmazonLinux2022` as deprecated - [x] update the aws-ec2 README to use `latestAmazonLinux2023` instead Closes #26274
The blog post was wrong; added a few missing configuration bits and streamlined the date code. Also added a report for PRs.
We don't have a mechanism to keep a projen project up to date. It's better if all packages in a monorepo use the same tooling. Replaces #26468
…eline (#26496) Creates a workflow that will request a GitHub Environment deployment whenever CLI files change. The deployment has to be approved by a team member to run. The actual code to deploy is just a push to the `test-main-pipeline` branch, thus reusing the existing functionality. Workflow tested on my fork (screenshots of the detected change, the approval, and the submitted request are in the PR).
…eg-test (#26523) The original attempt didn't work: the push ended up being attempted by `github-actions[bot]`, which (correctly) failed. Try a different way that should also work.
Add support for Aurora Engine Version 3_03_1. AWS Release notes: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.3031.html
Improve the EKS docs regarding console access. Closes #18843
Try yet another authentication method for the push to the testing pipeline branch. This has worked on my fork with a PR from another fork.
Currently, we can't set the subscription filter name as a prop on the L2 `SubscriptionFilter` construct. This PR introduces a new prop, `filterName`, which lets us set a specific name without requiring escape hatches. Closes #26485
…oftwareUpdateOptions (#26403) The [`OffPeakWindowOptions`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_opensearchservice.CfnDomain.OffPeakWindowOptionsProperty.html) and [`SoftwareUpdateOptions`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_opensearchservice.CfnDomain.SoftwareUpdateOptionsProperty.html) are supported by OpenSearch Domain, but not by the CDK high-level construct. This change adds the corresponding properties to the `Domain` construct: ```ts const domain = new Domain(this, 'Domain', { version: EngineVersion.OPENSEARCH_1_3, offPeakWindowEnabled: true, // can be omitted if offPeakWindowStart is set offPeakWindowStart: { hours: 20, minutes: 0, }, enableAutoSoftwareUpdate: true, }); ``` Closes #26388.
Currently this is just part of the `Yarn upgrade` workflow. We want a separate workflow so the updates are visible in the CHANGELOG. This should also reduce potential failures in the `Yarn upgrade` workflow and vice versa, so they don't block each other. Tested on a fork: https://github.com/mrgrain/aws-cdk/actions/runs/5680204361
This ends up being a lot of work. Essentially, the `pull_request_review` trigger runs the action on the merge branch, which does not have the necessary secrets. This workflow was inspired by the GitHub [docs](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#using-data-from-the-triggering-workflow) on what to do in this situation. The PR linter takes the commit sha and PR number from the `pull_request_target` workflow and assumes that those values are available. So we will take those values from `pull_request_review`, upload them as artifacts, and then download them when the PR linter action is triggered by the workflow run. The code in `index.ts` hasn't been tested, but should work.
Ugh: https://github.com/aws/aws-cdk/actions/runs/5685197735
Updates AWS Service Spec packages to the latest versions.
Ran npm-check-updates and yarn upgrade to keep the `yarn.lock` file up-to-date.
Build on Windows fails due to the hard-coded file path separators (`'/'`) in the eslint rule `invalid-cfn-imports.ts`. The error message is "no such file or directory, open 'C:\aws-cdk\tools\@aws-cdk\pkglint\bin\pkglint.ts\package.json'" as follows: ``` $ npx lerna run build --scope=aws-cdk-lib lerna notice cli v7.1.4 lerna notice filter including "aws-cdk-lib" lerna info filter [ 'aws-cdk-lib' ] × 1/3 dependent project tasks failed (see below) √ 2/3 dependent project tasks succeeded [0 read from cache] ———————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— > @aws-cdk/pkglint:build yarn run v1.22.19 $ tsc -b && eslint . --ext=.ts && chmod +x bin/pkglint DeprecationWarning: 'originalKeywordKind' has been deprecated since v5.0.0 and will no longer be usable after v5.2.0. Use 'identifierToKeywordKind(identifier)' instead. Oops! Something went wrong! :( ESLint: 7.32.0 Error: Error while loading rule '@aws-cdk/invalid-cfn-imports': ENOENT: no such file or directory, open 'C:\aws-cdk\tools\@aws-cdk\pkglint\bin\pkglint.ts\package.json' Occurred while linting C:\aws-cdk\tools\@aws-cdk\pkglint\bin\pkglint.ts at Object.openSync (node:fs:603:3) at Object.readFileSync (node:fs:471:35) at isAlphaPackage (C:\aws-cdk\tools\@aws-cdk\eslint-plugin\lib\rules\invalid-cfn-imports.js:139:31) at currentFileIsInAlphaPackage (C:\aws-cdk\tools\@aws-cdk\eslint-plugin\lib\rules\invalid-cfn-imports.js:110:16) at Object.create (C:\aws-cdk\tools\@aws-cdk\eslint-plugin\lib\rules\invalid-cfn-imports.js:12:10) at createRuleListeners (C:\aws-cdk\node_modules\eslint\lib\linter\linter.js:765:21) at C:\aws-cdk\node_modules\eslint\lib\linter\linter.js:937:31 at Array.forEach (<anonymous>) at runRules (C:\aws-cdk\node_modules\eslint\lib\linter\linter.js:882:34) at Linter._verifyWithoutProcessors (C:\aws-cdk\node_modules\eslint\lib\linter\linter.js:1181:31) error Command failed with exit code 2. 
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command. ``` Replace all of them with `path.sep`, the cross-platform separator.
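The fix relies on Node's `path.sep`, which is `'\\'` on Windows and `'/'` elsewhere. A minimal sketch of the portable pattern (the helper name is illustrative, not taken from the eslint rule):

```typescript
import * as path from 'path';

// Split a file path into segments using the platform separator,
// instead of hard-coding '/', which breaks on Windows.
function segments(filePath: string): string[] {
  return filePath.split(path.sep);
}
```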
This is needed so these PRs will merge without manual intervention.
Documents some unknowns around running integ tests with hosted zones.
When mode is set to `non-blocking`, logs are stored in a buffer, and the setting `max-buffer-size` controls the size of this buffer. Being able to set this buffer is very important: the larger the buffer, the less likely there is to be log loss. Recently I performed benchmarking of `non-blocking` mode and found that the default buffer size is not sufficient to prevent log loss: moby/moby#45999 We're planning to run an education campaign to ensure ECS customers understand the risk of the default `blocking` mode, and the risk of `non-blocking` mode with the default buffer size. Therefore, we need folks to be able to control this setting in their CDK apps.
…ent test failures (#26551) Attempts to resolve intermittent failures in the cloud-assembly-schema test case.
See CHANGELOG