chore(release): 2.161.0 #31647

Merged
merged 46 commits into from
Oct 3, 2024
Conversation

aws-cdk-automation (Collaborator) commented Oct 3, 2024

See CHANGELOG

mrgrain and others added 30 commits September 24, 2024 09:19
…ed schema overrides (#31539)

### Reason for this change

We didn't receive automated updates for the following three resources since they had a temporary schema override:

```
AWS::ECS::CapacityProvider
AWS::Lambda::EventSourceMapping
AWS::Lambda::Function
```

This prevented new features, like EventSourceMapping tagging, and documentation updates from being applied to the AWS CDK.

### Description of changes

Remove the outdated temporary schemas.

### Description of how you validated changes

Manually verified that the current schemas from `@aws-cdk/awscdk-service-spec` are additive relative to the schema overrides. Specifically checked that the features the overrides were added for are now available upstream.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Issue # (if applicable)

Closes #31127 and #31425.

### Reason for this change



A restart policy can be specified in CloudFormation, but not in L2.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-restart-policy.html

### Description of changes



Add `enableRestartPolicy` and some properties to the container definition.
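
A minimal sketch of how the new properties might look on a container definition (the exact property names here are assumptions based on the description above):

```ts
import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';

declare const taskDefinition: ecs.FargateTaskDefinition;

// Hypothetical usage of the new restart policy properties
taskDefinition.addContainer('app', {
  image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
  enableRestartPolicy: true,                       // assumed property name
  restartIgnoredExitCodes: [0, 127],               // assumed: exit codes that do not trigger a restart
  restartAttemptPeriod: cdk.Duration.seconds(360), // assumed: minimum period between restart attempts
});
```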

### Description of how you validated changes



unit tests and integ tests.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…roup and logging properties into loggingConfig (#31488)

### Reason for this change

The `logging` and `logGroup` properties carry a restriction where a `logGroup` cannot be provided if `logging` is set to `false`. This was previously enforced through error handling, but we want to change this to make it impossible for a user to run into that scenario in the first place.

### Description of changes

BREAKING CHANGE: the `logging` and `logGroup` properties in `DestinationLoggingProps` have been removed and replaced with a single optional property `loggingConfig`, which accepts an instance of type `LoggingConfig`.

#### Details
Combine the `logging` and `logGroup` properties into a single new optional property called `loggingConfig`, which accepts an instance of type `LoggingConfig`.

`LoggingConfig` is an abstract class that is instantiated as either `EnableLogging` or `DisableLogging`, which can be used in the following three ways:

```ts
import * as logs from 'aws-cdk-lib/aws-logs';

const logGroup = new logs.LogGroup(this, 'Log Group');
declare const bucket: s3.Bucket;

// 1. Enable logging with no parameters - a log group will be created for you
const destinationWithLogging = new destinations.S3Bucket(bucket, {
  loggingConfig: new destinations.EnableLogging(),
});

// 2. Enable logging and pass in a logGroup to be used
const destinationWithLoggingAndMyLogGroup = new destinations.S3Bucket(bucket, {
  loggingConfig: new destinations.EnableLogging(logGroup),
});

// 3. Disable logging (does not accept any parameters so it is now impossible to provide a logGroup in this case)
const destinationWithoutLogging = new destinations.S3Bucket(bucket, {
  loggingConfig: new destinations.DisableLogging(),
});

```

### Description of how you validated changes
unit + integ test

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…yptionKey
When the Log Retention Lambda runs massively in parallel (on 70+ Lambdas at the same time), it can run into throttling problems and fail.

Raise the retry count and delays:

- Raise the default amount of retries from 5 -> 10
- Raise the sleep base from 100ms to 1s.
- Change the sleep calculation to apply the 60s limit *after* jitter instead of before (previously we would take a fraction of 60s; now we take a fraction of the accumulated wait time and, after calculating that, limit it to 60s), as sketched below.
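
A small sketch of the adjusted backoff calculation (names are illustrative, not the actual handler code):

```ts
// Retry/backoff behaviour described above
const MAX_RETRIES = 10;      // raised from 5
const BASE_DELAY_MS = 1_000; // raised from 100ms
const MAX_DELAY_MS = 60_000;

function sleepTimeMs(attempt: number): number {
  // Exponential backoff accumulated from the base delay
  const accumulated = BASE_DELAY_MS * 2 ** attempt;
  // Full jitter: take a random fraction of the accumulated wait time...
  const jittered = Math.random() * accumulated;
  // ...and only then cap it at 60s (previously the cap was applied before the jitter)
  return Math.min(jittered, MAX_DELAY_MS);
}
```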

Fixes #31338.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Stack tags are not rendered into the template; instead they are passed via an API call.

Verify that stack tags do not contain unresolved values, as they won't work.

Closes #28017.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…1328)

When two stacks in the same stage have the same name (which makes sense when deploying them to different environments), the CodePipeline action name is derived from the stack name and will be duplicated.

Detect if a graph node name is already in use and, if so, use environment information to try and make the name unique.

Closes #30960.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Out of our last 6 pipeline failures, 4 were:

```console
hotswap deployment for ecs service detects failed deployment and errors
    Timeout: test took more than 600s to complete
```

This is sporadic, so I'm trying to make the test more stable by extending the timeout, which will now be 1800s.

https://github.com/aws/aws-cdk/blob/16b74f337e351b177aaeed2d80c519ff264c3e11/packages/%40aws-cdk-testing/cli-integ/lib/with-cdk-app.ts#L16-L17

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…comingBytes (#31535)

### Issue

Closes #30034

### Reason for this change
- Create alarms on log ingestion to ensure it is working

### Description of changes

- Add metric methods for log group `IncomingLogEvents` and `IncomingBytes`, similar to existing metric methods like `metricInvocations` on Lambda functions (see the sketch below).
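
A minimal usage sketch, assuming the new methods follow the naming pattern of the existing log group metric helpers:

```ts
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import * as logs from 'aws-cdk-lib/aws-logs';

declare const logGroup: logs.LogGroup;

// Assumed method name, mirroring the description above
const incomingEvents = logGroup.metricIncomingLogEvents();

// Example: alarm when no log events are ingested for three periods
new cloudwatch.Alarm(logGroup, 'NoIngestionAlarm', {
  metric: incomingEvents,
  threshold: 1,
  evaluationPeriods: 3,
  comparisonOperator: cloudwatch.ComparisonOperator.LESS_THAN_THRESHOLD,
});
```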

### Description of how you validated changes
- validated with new unit tests and a new integration test

### Checklist
- [ ] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…30186)

### Issue # (if applicable)

Closes #27128

### Reason for this change
The `--quiet` flag on the `cdk diff` command prevents the stack name and default message from being printed when no diff exists.
If diffs exist, the stack name and the diffs are expected to be printed; but currently the stack name is not printed, making it difficult to determine which stack the diff is for.

For example:
```bash
$ cdk diff --quiet
Resources
[~] AWS::S3::Bucket MyFirstBucket MyFirstBucketB8884501
 ├─ [~] DeletionPolicy
 │   ├─ [-] Delete
 │   └─ [+] Retain
 └─ [~] UpdateReplacePolicy
     ├─ [-] Delete
     └─ [+] Retain


✨  Number of stacks with differences: 1
```

This PR fixes the behavior so that the stack name is printed when the `--quiet` flag is specified and diffs exist.

### Description of changes
Changed the position of the `fullDiff` function call.
It would be possible to output the stack name in the `printSecurityDiff` or `printStackDiff` functions, but since a message has already been output before these functions are called, the stack name must be output first.
I think it is more user-friendly to have all messages appear after the stack name, but if that is not the case, please point it out.

### Description of how you validated changes
I added a unit test to confirm that the stack name is printed when a diff exists.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
The CLI prints deployment errors 3 times. This is caused by catching an error, printing it, and then rethrowing it to another `catch` statement that catches the error, prints it, and throws it again.

In this PR, get rid of one catch and change the error that gets rethrown in a different case.

Also in this PR: fix the inconsistency of printing the progress of asset publishing. Compared to the progress of stack deployments, the stack name isn't bold and there is a single space offset.

(A little work to change the printing, a LOT of work to get the integration and regression tests to pass, since they all assert way too many specifics about the error messages that get printed to the screen.)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Issue # (if applicable)

Closes #<issue number here>.

### Reason for this change



Added Meta Llama 3.2 models.

- meta.llama3-2-1b-instruct-v1:0
- meta.llama3-2-3b-instruct-v1:0
- meta.llama3-2-11b-instruct-v1:0
- meta.llama3-2-90b-instruct-v1:0

Ref:

- https://aws.amazon.com/about-aws/whats-new/2024/09/llama-3-2-generative-ai-models-amazon-bedrock/
- https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html

### Description of changes



Added the models.

### Description of how you validated changes



### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…n provider (#31482)

### Issue # (if applicable)

N/A

### Reason for this change

The authentication providers and their logic in this module have code smells, so we have refactored them to bring the module more in line with CDK standards and best practices. In addition, the Digits authentication provider has been deprecated since September 2017, so it has been removed.

### Description of changes

* Any modules relating to the Digits auth have been removed, as the service itself is deprecated.
* The `IdentityPoolProviders` and `IdentityPoolAuthenticationProviders` interfaces have been merged, as there did not seem to be a reason to keep them separate, aside from differentiating third-party and internal providers.
* Some grammar, punctuation, formatting, and capitalization changes

### Description of how you validated changes

Unit tests and integration tests have been tweaked only as necessary to confirm these changes. Since they all still pass or show no need to be updated, we can confirm that this refactor does not affect them. The integration test has also been updated to reflect that the previous Google `clientSecret` prop is deprecated and to use `clientSecretValue` instead.

**BREAKING CHANGE**: The `IdentityPoolProviderType.DIGITS` and `IdentityPoolProviderUrl.DIGITS` enum values, and `IdentityPoolDigitsLoginProvider` interface have been removed, as well as the `digits` attribute of the `IdentityPoolAuthenticationProviders` interface.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ter (#31524)

### Issue # (if applicable)

Closes #31523 .

### Reason for this change

CloudFormation supports the [local write forwarding](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html#cfn-rds-dbcluster-enablelocalwriteforwarding) feature, but the AWS CDK does not.

### Description of changes

- Add `enableLocalWriteForwarding` to `DatabaseClusterBaseProps` (see the sketch below)
- Add validation that `engineType` is either `aurora` or `aurora-mysql`
  - Setting `engineType` to `aurora` also launches a MySQL-compatible Aurora cluster.
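
A minimal sketch of the new property, assuming it is exposed on the cluster props as described:

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';

declare const vpc: ec2.Vpc;

new rds.DatabaseCluster(this, 'Cluster', {
  engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
  writer: rds.ClusterInstance.provisioned('writer'),
  readers: [rds.ClusterInstance.provisioned('reader')],
  vpc,
  enableLocalWriteForwarding: true, // assumed property name; only valid for aurora / aurora-mysql engines
});
```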

### Description of how you validated changes

Add both unit and integ tests.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Closes #29378, #29377.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
#30563)

### Issue # (if applicable)

Closes #26847.

### Reason for this change
In the case of passwords generated by [DatabaseSecret](https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-redshift-alpha.DatabaseSecret.html), there may be a need to exclude certain characters. 

The original issue was to exclude the backtick character from passwords. 
However, the current default value of `excludeCharacters`, `'"@/\ ''`, matches the characters that are not supported in Redshift ([docs](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_USER.html#r_CREATE_USER-parameters)).

> It can use any ASCII characters with ASCII codes 33–126, except ' (single quotation mark), " (double quotation mark), \, /, or @.

Instead of including the backtick in the default value of `excludeCharacters`, it was considered appropriate to make it configurable.



### Description of changes
Add an `excludeCharacters` property to specify characters to exclude from generated passwords (a sketch follows).
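
A hypothetical sketch, assuming the property is surfaced where the admin secret is generated (the exact location of the property is an assumption):

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as redshift from '@aws-cdk/aws-redshift-alpha';

declare const vpc: ec2.Vpc;

new redshift.Cluster(this, 'Cluster', {
  masterUser: {
    masterUsername: 'admin',
    excludeCharacters: '"@/\\ \'`', // assumed: the previous defaults plus the backtick
  },
  vpc,
});
```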



### Description of how you validated changes
Add unit tests and integ tests.


### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Add new foundation model.

Ref:
* https://aws.amazon.com/about-aws/whats-new/2024/09/jamba-1-5-family-models-amazon-bedrock/
* https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html


Provider | Model name | Version | Model ID
-- | -- | -- | --
AI21 Labs | Jamba 1.5 Large | 1.x | ai21.jamba-1-5-large-v1:0
AI21 Labs | Jamba 1.5 Mini | 1.x | ai21.jamba-1-5-mini-v1:0


### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…baseInstanceReadReplica (#31579)

### Issue # (if applicable)

Closes #31061.

### Reason for this change
Calling `grantConnect()` on an instance of `DatabaseInstanceReadReplica` generates an incorrect policy that uses the full ARN of the instance instead of the `instanceResourceId` value. It should create a policy with the resource format `arn:aws:rds-db:region:account-id:dbuser:DbiResourceId/db-user-name` per [Creating and using an IAM policy for IAM database access](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html).

### Description of changes
Fixed the IAM policy that `grantConnect()` generates for `DatabaseInstanceReadReplica`. The change correctly sets `instanceResourceId` to the replica instance's `attrDbiResourceId`, which is then used to generate the IAM policy.
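
With the fix in place, granting access to a read replica should produce a policy scoped to the replica's resource ID, e.g.:

```ts
import * as iam from 'aws-cdk-lib/aws-iam';
import * as rds from 'aws-cdk-lib/aws-rds';

declare const readReplica: rds.DatabaseInstanceReadReplica;
declare const role: iam.Role;

// The generated policy resource now uses the replica's DbiResourceId:
// arn:aws:rds-db:<region>:<account>:dbuser:<DbiResourceId>/<db-user-name>
readReplica.grantConnect(role, 'my_db_user');
```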

### Description of how you validated changes
- Added new unit test.
- Updated existing integration test.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…pi arn (#31567)

### Issue # (if applicable)

Closes #31550.

### Reason for this change

When using a Lambda authorizer with a GraphQL API, the CDK automatically creates the `AWS::Lambda::Permission` required for the AppSync API to invoke the Lambda authorizer. However, it does not add a `SourceArn`.

This conflicts with the Control Tower policy [[CT.LAMBDA.PR.2]](https://docs.aws.amazon.com/controltower/latest/controlreference/lambda-rules.html#ct-lambda-pr-2-description), and it is generally good practice to scope permissions.

### Description of changes

Added new feature flag `APPSYNC_GRAPHQLAPI_SCOPE_LAMBDA_FUNCTION_PERMISSION`.

Currently, when using a Lambda authorizer with an AppSync GraphQL API, the AWS CDK automatically generates the necessary AWS::Lambda::Permission to allow the AppSync API to invoke the Lambda authorizer. This permission is overly permissive because it lacks a SourceArn, meaning it allows invocations from any source.

When this feature flag is enabled, the AWS::Lambda::Permission will be properly scoped with the SourceArn corresponding to the specific AppSync GraphQL API.
```ts
  ...
  config?.handler.addPermission(`${id}-appsync`, {
    principal: new ServicePrincipal('appsync.amazonaws.com'),
    action: 'lambda:InvokeFunction',
    sourceArn: this.arn, // <-- added when feature flag is enabled
  });
  ...
```

### Description of how you validated changes

Unit + integ tests with feature flag enabled. 

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Issue # (if applicable)

None

### Reason for this change

Adding an integ test for using ECS with Windows AMIs. This is currently missing and is therefore a test gap.

### Description of changes

The integ test creates an ECS cluster and an `Ec2Service` that runs on EC2 instances.

### Description of how you validated changes

N/A

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ment (#31452)

### Issue # (if applicable)

Closes #28579

### Reason for this change

The [CR lambda](https://github.com/aws/aws-cdk/blob/597228c1552a21f8dc7250a0be62160f838bb776/packages/%40aws-cdk/custom-resource-handlers/lib/aws-s3-deployment/bucket-deployment-handler/index.py#L138C14-L138C30) essentially sends the same data back in the response, which hits the response size limit at around 50 object uploads.

In particular, this is a limitation when using `servicecatalog.ProductStack`: if there are local assets beyond a certain number, the `Custom::CDKBucketDeployment` fails with the error "Response object is too long", which reflects a hard limit of 4096 bytes.

### Description of changes

1. Added a new property, `outputObjectKeys`, to control whether the custom resource returns the object keys and hits the 4096-byte limit even though the deployment operation is successful (see the sketch below).
2. The property is set to `false` by default for the Service Catalog product stack so that the error does not occur.
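
A minimal usage sketch of the new property (behavior as described above):

```ts
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';

declare const destinationBucket: s3.Bucket;

new s3deploy.BucketDeployment(this, 'Deployment', {
  sources: [s3deploy.Source.asset('./assets')],
  destinationBucket,
  // Skip returning the list of object keys in the custom resource response,
  // avoiding the 4096-byte response limit for deployments with many assets
  outputObjectKeys: false,
});
```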

### Description of how you validated changes

Validated using a sample stack with the property set and confirmed the behavior. Also, the existing deployments would be unaffected. 

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…1587)

### Issue # (if applicable)

N/A

### Reason for this change
- New warning following #31535 regarding the docstring

### Description of changes
- update params in the docstring to match the function declaration

### Description of how you validated changes



### Checklist
- [ ] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Add EventBridge API destination as a Pipes target.

CloudFormation groups EventBridge API destinations with API Gateway REST APIs as [PipeTargetHttpParameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-pipes-pipe-pipetargethttpparameters.html#cfn-pipes-pipe-pipetargethttpparameters-pathparametervalues), but I think separating them here, similar to [aws-events-targets](https://github.com/aws/aws-cdk/tree/main/packages/aws-cdk-lib/aws-events-targets/lib), makes more sense, as API Gateway requires `stage`, `path`, and `method` (see [here](https://github.com/aws/aws-cdk/blob/main/packages/aws-cdk-lib/aws-events-targets/lib/api-gateway.ts#L11-L32)).
### Reason for this change

We would like to be able to send customers a notice when issues with bootstrap templates are discovered.

### Description of changes

Currently, our notices mechanism can only match against CLI/Framework versions. In order to match against a bootstrap stack version, we need to hook into the deploy process, where we already perform bootstrap version checks.

There were two options to implement the change:

1. Bubble up the bootstrap stack version all the way up to the CLI entry-point, where notices are initialized.
2. Allow access to notices from anywhere in our CLI code base.

I opted for number 2 because it is less disruptive (in terms of files changed) and more flexible for future code that might want to take advantage of the notices mechanism.

The tricky thing is that notices depend on user configuration (i.e. `Configuration`), which we don't have access to in this part of the code. To make it work, I created a new `Notices` singleton class. It is instantiated in the CLI entry-point (via `Notices.create` with the user configuration) and can then be accessed from anywhere in the code (via `Notices.get()`).
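
An illustrative sketch of the singleton shape described above (not the actual CLI code; names other than `Notices.create`/`Notices.get` are assumptions):

```ts
class Notices {
  private static instance?: Notices;

  // Called once in the CLI entry-point with the user configuration
  public static create(configuration: unknown): Notices {
    Notices.instance = new Notices(configuration);
    return Notices.instance;
  }

  // Accessed from anywhere else in the code base, e.g. the deploy path
  public static get(): Notices {
    if (!Notices.instance) {
      throw new Error('Notices.create() must be called before Notices.get()');
    }
    return Notices.instance;
  }

  private constructor(private readonly configuration: unknown) {}
}
```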

This change resulted in a pretty big refactor to the notices code, but keeps everything else untouched.

### Docs

Documentation of enhanced notice authoring capabilities: cdklabs/aws-cdk-notices#631

### Description of how you validated changes

Added unit tests.

### Checklist
- [X] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ation-order-dependent (#31470)

### Issue # (if applicable)

Closes #31345.

### Reason for this change

Any stringified value containing an intrinsic will use a custom resource to resolve this value at deploy time.

Today, this custom resource's logical ID takes the form `CDKJsonStringify<number>`, where `<number>` is a counter incremented for each stringified value. This results in replacement updates for the custom resource when the order of construct instantiation is changed, like changing this:
```ts
const app = new App();
new SomeStack(app, 'Stack1');
new SomeStack(app, 'Stack2');
```

to:

```ts
const app = new App();
new SomeStack(app, 'Stack2');
new SomeStack(app, 'Stack1');
```

This only happens if `SomeStack` stringifies a token, which some CDK constructs will do automatically. These resource replacements won't affect customer infrastructure, but customers using a common setup as in #31345 will see diffs on the same application in different environments, which violates the repeatability promise of CDK.

### Description of changes

Generate a unique identifier from the token's value instead of from a counter. The logical ID is then no longer instantiation-order dependent.
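
An illustrative sketch of the idea (helper name and hash length are assumptions):

```ts
import * as crypto from 'crypto';

// Derive the custom resource's logical ID from the stringified token's value
// rather than from an instantiation counter, so reordering constructs does not
// change the logical ID.
function jsonStringifyLogicalId(tokenJson: string): string {
  const hash = crypto.createHash('sha256').update(tokenJson).digest('hex').slice(0, 8);
  return `CDKJsonStringify${hash}`;
}
```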

**This will cause diffs when upgrading**.

### Description of how you validated changes

Unit, integration, and manual tests.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Our current test only captures bootstrap notices; let's capture all of them.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…31571)

### Issue # (if applicable)

Closes #30067.

### Reason for this change

The fallback to the existing AWS SDK import misses a rare/flaky edge case where the npm install passes but the subsequent `require` fails.

### Description of changes

Fall back to the pre-existing AWS SDK if requiring the latest version fails.

### Description of how you validated changes

- Fixed no-op test "installs the latest SDK"
- Added test "falls back to installed sdk if require fails"

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
The new protocol for `--unstable` is documented here and is being used in #31611.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
1kaileychen and others added 16 commits October 1, 2024 22:04
…dd error checks (#31510)

### Issue # (if applicable)

Closes #31148

### Reason for this change
When adding an ARM-based instance, the optimal instance classes are set by default, which include x86-based instances. This causes errors since ARM and x86 instances can't be mixed.

### Description of changes
- `useOptimalInstanceClasses` is set to `false` for all ARM-based instances
- Throw an error when trying to mix x86 and ARM instances
  - Case 1: instantiating the class
  - Case 2: the `addInstanceClass` and `addInstanceType` functions
- Warn that `useOptimalInstanceClasses` is being set to `false` for ARM-based instances

### Description of how you validated changes
- Unit test where optimal instance classes don't get added for ARM-based instances
- Unit tests to verify the errors when instantiating the class
- Unit tests to verify the errors when `addInstanceClass` and `addInstanceType` add mixed instances
- Unit tests to verify the warning for ARM-based instances

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Add a CLI feature to roll a stuck change back.

This is mostly useful for deployments performed using `--no-rollback`: if a failure occurs, the stack gets stuck in an `UPDATE_FAILED` state from which there are 2 options:

- Try again using a new template
- Roll back to the last stable state

There used to be no way to perform the second operation using the CDK CLI, but there now is.

`cdk rollback` works in 2 situations:

- A paused fail state; it will initiate a fresh rollback (on `CREATE_FAILED`, `UPDATE_FAILED`).
- A paused rollback state; it will retry the rollback, optionally skipping some resources (on `UPDATE_ROLLBACK_FAILED` -- it seems there is no way to continue a rollback in `ROLLBACK_FAILED` state).

`cdk rollback --orphan <logicalid>` can be used to skip resource rollbacks that are causing problems.

`cdk rollback --force` will look up all failed resources and continue skipping them until the rollback has finished.

This change requires new bootstrap permissions, so the bootstrap stack is updated to add the following IAM permissions to the `deploy-action` role:

```
                  - cloudformation:RollbackStack
                  - cloudformation:ContinueUpdateRollback
```

These are necessary to call the 2 CloudFormation APIs that start and continue a rollback. 

Relates to (but does not close yet) #30546.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
#31600)

Closes #26495 

### Reason for this change

The `isTaggable` function of the `TagManager` class is currently broken in Python, as it can return `undefined` instead of `true` or `false`.

### Description of changes

In JS/TS, the logical AND operator (`&&`) returns the first falsy value it encounters, even if that value is `undefined` instead of `false` - so the current implementation of `isTaggable` allows for `undefined` to be returned if `tags` is undefined:

```ts
public static isTaggable(construct: any): construct is ITaggable {
  const tags = (construct as any).tags;
  return tags && typeof tags === 'object' && (tags as any)[TAG_MANAGER_SYM];
}
```

The fix is simply changing the return line to the following to handle cases where tags is `null` or `undefined`:

```ts
return tags !== undefined && tags !== null && typeof tags === 'object' && (tags as any)[TAG_MANAGER_SYM];
```

### Description of how you validated changes

Added a unit test to assert that `isTaggable` returns `false`, not `undefined`, for a non-taggable construct (and still returns `true` for a taggable construct).

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Add G6e instance type.

Ref: https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-ec2-g6e-instances/

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
There were two problems:

1. The test compares against bootstrap stack version `22`. This will break once we bump the bootstrap stack version. Changed range to `<1999` to include all possible future versions.
2. The CLI and Framework notices are not displayed because in the pipeline, their version is suffixed with `-rc.1`, and apparently `semver` doesn't match against those.

> `semver.satisfies('2.16.0-rc.0', '<99.0.0') // false`
> `semver.satisfies('2.16.0', '<99.0.0') // true`

I don't see a quick way around this, so I just removed those notices from the test. We have plenty of unit tests to cover this, so I'm not too concerned. Note that this means our notices mechanism isn't able to match against pre-releases; this has always been the case and is OK since we don't publish our pre-releases.

### Checklist
- [X] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…d large templates (#31597)

Closes #29936 

### Reason for this change

When running `cdk diff` on larger templates, the CDK needs to upload the diff template to S3 to create the ChangeSet. However, the CLI is currently not using the [file asset publishing role](https://github.com/aws/aws-cdk/blob/main/packages/aws-cdk/lib/api/bootstrap/bootstrap-template.yaml#L275) to do so and is instead using the IAM user/role configured by the user in the CLI. This means that if the user/role lacks S3 permissions, an `AccessDenied` error is thrown and users cannot see a full diff.

### Description of changes

This PR ensures that the `FileAssetPublishingRole` is used by `cdk diff` to upload assets to S3 before creating a ChangeSet by:
- Deleting the `makeBodyParameterAndUpload` function which was using the deprecated `publishAssets` function from [deployments.ts](https://github.com/aws/aws-cdk/blob/4b00ffeb86b3ebb9a0190c2842bd36ebb4043f52/packages/aws-cdk/lib/api/deployments.ts#L605)
- Building and publishing the template file assets inside the `uploadBodyParameterAndCreateChangeSet` function within `cloudformation.ts` instead

### Description of how you validated changes

Integ test that deploys a simple CDK app with a single IAM role, then runs `cdk diff` on a large template change adding 200 IAM roles. I asserted that the logs did not contain the S3 access-denied permission errors and did contain a statement for assuming the file publishing role. Reused the CDK app for the integ test from this [PR](#30568) by @sakurai-ryo, which tried fixing this issue by adding another bootstrap role (which we decided against).

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Issue # (if applicable)

None.

### Reason for this change

SageMaker Experiments supports PrivateLink access, but the AWS CDK does not support creating an interface VPC endpoint for it.

### Description of changes

Add an interface VPC endpoint for SageMaker Experiments.

### Description of how you validated changes

I executed the AWS CLI command as shown below.

```sh
$ aws ec2 describe-vpc-endpoint-services --filters Name=service-type,Values=Interface Name=owner,Values=amazon --region us-east-1 --query ServiceNames | grep sagemaker
    "aws.sagemaker.us-east-1.experiments", // added
    "aws.sagemaker.us-east-1.notebook",
    "aws.sagemaker.us-east-1.studio",
    "com.amazonaws.us-east-1.sagemaker.api",
    "com.amazonaws.us-east-1.sagemaker.featurestore-runtime",
    "com.amazonaws.us-east-1.sagemaker.metrics",
    "com.amazonaws.us-east-1.sagemaker.runtime",
    "com.amazonaws.us-east-1.sagemaker.runtime-fips",
```

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…31590)

### Reason for this change

This resolves #31339 (comment). In the current FunctionUrl implementation, there is no way to access the auth type, so logic that depends on it has no property to reference.

### Description of changes

Fix the construct to allow access to the function URL's `authType` (a sketch follows).
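
A minimal usage sketch, assuming the exposed property is named `authType`:

```ts
import * as lambda from 'aws-cdk-lib/aws-lambda';

declare const fn: lambda.Function;

const fnUrl = fn.addFunctionUrl({
  authType: lambda.FunctionUrlAuthType.AWS_IAM,
});

// Logic can now branch on the URL's auth type
if (fnUrl.authType === lambda.FunctionUrlAuthType.AWS_IAM) {
  // e.g. grant invoke permissions to a specific principal
}
```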

### Description of how you validated changes

Added a unit test and re-ran the integ tests.

### Reason for Exemption:
The fix introduces no changes to the resources being created.

### Clarification Request:
This fix only makes the internal `authType` property of the L2 construct publicly accessible and does not introduce any differences in the created resources.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…12.20 (#31604)

Add new cluster engines.

Ref: [Amazon Aurora supports PostgreSQL 16.4, 15.8, 14.13, 13.16, and 12.20](https://aws.amazon.com/about-aws/whats-new/2024/09/amazon-aurora-supports-postgresql-new-versions/)

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
)

### Issue # (if applicable)

Closes #27578, Closes #30740.

### Reason for this change

`cfn-include` only allows Intrinsics in resource update and create policies to wrap primitive values. If Intrinsics are included anywhere else, `cfn-include` silently drops them. 

CDK's type system [does not allow](https://github.com/aws/aws-cdk/blob/main/packages/aws-cdk-lib/core/lib/cfn-resource-policy.ts) intrinsics in resource policies unless they define a primitive value. `cfn-include` adheres to this type system and drops any resource policies that use an intrinsic to define a complex value. This is an example of a forbidden use of intrinsics:

```json
  "Resources": {
    "ResourceSignalIntrinsic": {
     // ....
      "CreationPolicy": {
        "ResourceSignal": {
          "Fn::If": [
            "UseCountParameter",
            {
              "Count": { "Ref": "CountParameter" }
            },
            5
          ]
        }
      }
    }
  }
}
```

This is forbidden because an intrinsic contains the `Count` property of this policy. CFN allows this, but CDK's type system does not permit it. 

### Description of changes

`cfn-include` will throw if any intrinsics break the type system, instead of silently dropping them.

CDK's type system is a useful constraint around these resource update/create policies because it allows constructs that modify them, like autoscaling, to not be token-aware. Tokens are not resolved at synthesis time, so it would be impossible to modify these policies with simple arithmetic if they contained tokens.

The CDK will never (or at least should not) generate a token that breaks this type system.

Thus, the only use case for allowing these tokens is `cfn-include`. Supporting these customers would require the CDK type system to allow them, which would in turn mean CDK L2s should handle such cases; but for L2 customers this use case does not happen. Instead, explicitly reject templates that don't conform to the type system.

Throwing here is a breaking change, so this is under a feature flag. 

Additionally, add a new property, `dehydratedResources`: a list of logical IDs that `cfn-include` will not parse. Those resources still exist in the final template (see the sketch below).
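
A minimal usage sketch of the new property (template file name and logical ID are placeholders):

```ts
import * as cfn_inc from 'aws-cdk-lib/cloudformation-include';

new cfn_inc.CfnInclude(this, 'Template', {
  templateFile: 'my-template.json',
  // Keep these logical IDs unparsed ("dehydrated") so intrinsics that break the
  // CDK type system are carried through to the final template verbatim
  dehydratedResources: ['ResourceSignalIntrinsic'],
});
```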

This does not impact L2 users.

### Description of how you validated changes

Unit testing. 

Manually verified that this does not impact any L2s. 

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Issue # (if applicable)

Closes #29547

### Reason for this change



### Description of changes
Add a DynamoDB interface endpoint and validation (a sketch follows).
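
A minimal usage sketch, assuming the new service entry follows the existing naming convention:

```ts
import * as ec2 from 'aws-cdk-lib/aws-ec2';

declare const vpc: ec2.Vpc;

vpc.addInterfaceEndpoint('DynamoDbEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.DYNAMODB, // assumed enum member name
});
```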

### Description of how you validated changes
Added unit tests and integ test.



### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Updates the L1 CloudFormation resource definitions with the latest changes from `@aws-cdk/aws-service-spec`

**L1 CloudFormation resource definition changes:**
```
├[~] service aws-amazonmq
│ └ resources
│    └[~] resource AWS::AmazonMQ::Configuration
│      └ attributes
│         └ Revision: - integer
│                     + string ⇐ integer
├[~] service aws-apigatewayv2
│ └ resources
│    └[~] resource AWS::ApiGatewayV2::Integration
│      ├ attributes
│      │  └[-] Id: string
│      └ types
│         └[~] type ResponseParameter
│           ├  - documentation: response parameter
│           │  + documentation: Supported only for HTTP APIs. You use response parameters to transform the HTTP response from a backend integration before returning the response to clients. Specify a key-value map from a selection key to response parameters. The selection key must be a valid HTTP status code within the range of 200-599. Response parameters are a key-value map. The key must match the pattern `<action>:<header>.<location>` or `overwrite.statuscode` . The action can be `append` , `overwrite` or `remove` . The value can be a static value, or map to response data, stage variables, or context variables that are evaluated at runtime. To learn more, see [Transforming API requests and responses](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-parameter-mapping.html) .
│           └ properties
│              ├ Destination: (documentation changed)
│              └ Source: (documentation changed)
├[~] service aws-autoscaling
│ └ resources
│    └[~] resource AWS::AutoScaling::ScalingPolicy
│      └ types
│         ├[~] type TargetTrackingMetricDataQuery
│         │ └  - documentation: The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
│         │    You can use `TargetTrackingMetricDataQuery` structures with a `PutScalingPolicy` operation when you specify a `TargetTrackingConfiguration` in the request.
│         │    You can call for a single metric or perform math expressions on multiple metrics. Any expressions used in a metric specification must eventually return a single time series.
│         │    For more information, see the [Create a target tracking scaling policy for Amazon EC2 Auto Scaling using metric math](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-target-tracking-metric-math.html) in the *Amazon EC2 Auto Scaling User Guide* .
│         │    + documentation: The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
│         │    You can use `TargetTrackingMetricDataQuery` structures with a [PutScalingPolicy](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_PutScalingPolicy.html) operation when you specify a [TargetTrackingConfiguration](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_TargetTrackingConfiguration.html) in the request.
│         │    You can call for a single metric or perform math expressions on multiple metrics. Any expressions used in a metric specification must eventually return a single time series.
│         │    For more information, see the [Create a target tracking scaling policy for Amazon EC2 Auto Scaling using metric math](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-target-tracking-metric-math.html) in the *Amazon EC2 Auto Scaling User Guide* .
│         └[~] type TargetTrackingMetricStat
│           └  - documentation: This structure defines the CloudWatch metric to return, along with the statistic and unit.
│              `TargetTrackingMetricStat` is a property of the `TargetTrackingMetricDataQuery` object.
│              For more information about the CloudWatch terminology below, see [Amazon CloudWatch concepts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html) in the *Amazon CloudWatch User Guide* .
│              + documentation: This structure defines the CloudWatch metric to return, along with the statistic and unit.
│              `TargetTrackingMetricStat` is a property of the [TargetTrackingMetricDataQuery](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_TargetTrackingMetricDataQuery.html) object.
│              For more information about the CloudWatch terminology below, see [Amazon CloudWatch concepts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html) in the *Amazon CloudWatch User Guide* .
├[~] service aws-b2bi
│ └ resources
│    ├[~] resource AWS::B2BI::Capability
│    │ └ types
│    │    └[~] type EdiConfiguration
│    │      └ properties
│    │         └[+] CapabilityDirection: string
│    ├[~] resource AWS::B2BI::Partnership
│    │ ├ properties
│    │ │  ├ Capabilities: - Array<string>
│    │ │  │               + Array<string> (required)
│    │ │  └[+] CapabilityOptions: CapabilityOptions
│    │ └ types
│    │    ├[+] type CapabilityOptions
│    │    │ ├  name: CapabilityOptions
│    │    │ └ properties
│    │    │    └OutboundEdi: OutboundEdiOptions
│    │    ├[+] type OutboundEdiOptions
│    │    │ ├  name: OutboundEdiOptions
│    │    │ └ properties
│    │    │    └X12: X12Envelope (required)
│    │    ├[+] type X12Delimiters
│    │    │ ├  name: X12Delimiters
│    │    │ └ properties
│    │    │    ├ComponentSeparator: string
│    │    │    ├DataElementSeparator: string
│    │    │    └SegmentTerminator: string
│    │    ├[+] type X12Envelope
│    │    │ ├  name: X12Envelope
│    │    │ └ properties
│    │    │    └Common: X12OutboundEdiHeaders
│    │    ├[+] type X12FunctionalGroupHeaders
│    │    │ ├  name: X12FunctionalGroupHeaders
│    │    │ └ properties
│    │    │    ├ApplicationSenderCode: string
│    │    │    ├ApplicationReceiverCode: string
│    │    │    └ResponsibleAgencyCode: string
│    │    ├[+] type X12InterchangeControlHeaders
│    │    │ ├  name: X12InterchangeControlHeaders
│    │    │ └ properties
│    │    │    ├SenderIdQualifier: string
│    │    │    ├SenderId: string
│    │    │    ├ReceiverIdQualifier: string
│    │    │    ├ReceiverId: string
│    │    │    ├RepetitionSeparator: string
│    │    │    ├AcknowledgmentRequestedCode: string
│    │    │    └UsageIndicatorCode: string
│    │    └[+] type X12OutboundEdiHeaders
│    │      ├  name: X12OutboundEdiHeaders
│    │      └ properties
│    │         ├InterchangeControlHeaders: X12InterchangeControlHeaders
│    │         ├FunctionalGroupHeaders: X12FunctionalGroupHeaders
│    │         ├Delimiters: X12Delimiters
│    │         └ValidateEdi: boolean
│    └[~] resource AWS::B2BI::Transformer
│      ├ properties
│      │  ├ EdiType: - EdiType (required)
│      │  │          + EdiType (deprecated=WARN)
│      │  ├ FileFormat: - string (required)
│      │  │             + string (deprecated=WARN)
│      │  ├[+] InputConversion: InputConversion
│      │  ├[+] Mapping: Mapping
│      │  ├ MappingTemplate: - string (required)
│      │  │                  + string (deprecated=WARN)
│      │  ├[+] OutputConversion: OutputConversion
│      │  ├ SampleDocument: - string
│      │  │                 + string (deprecated=WARN)
│      │  └[+] SampleDocuments: SampleDocuments
│      └ types
│         ├[+] type FormatOptions
│         │ ├  name: FormatOptions
│         │ └ properties
│         │    └X12: X12Details (required)
│         ├[+] type InputConversion
│         │ ├  name: InputConversion
│         │ └ properties
│         │    ├FromFormat: string (required)
│         │    └FormatOptions: FormatOptions
│         ├[+] type Mapping
│         │ ├  name: Mapping
│         │ └ properties
│         │    ├TemplateLanguage: string (required)
│         │    └Template: string
│         ├[+] type OutputConversion
│         │ ├  name: OutputConversion
│         │ └ properties
│         │    ├ToFormat: string (required)
│         │    └FormatOptions: FormatOptions
│         ├[+] type SampleDocumentKeys
│         │ ├  name: SampleDocumentKeys
│         │ └ properties
│         │    ├Input: string
│         │    └Output: string
│         └[+] type SampleDocuments
│           ├  name: SampleDocuments
│           └ properties
│              ├BucketName: string (required)
│              └Keys: Array<SampleDocumentKeys> (required)
├[~] service aws-batch
│ └ resources
│    └[~] resource AWS::Batch::JobDefinition
│      └ types
│         ├[~] type EcsProperties
│         │ └ properties
│         │    └ TaskProperties: (documentation changed)
│         └[~] type PodProperties
│           └ properties
│              ├ Containers: (documentation changed)
│              └ InitContainers: (documentation changed)
├[~] service aws-bedrock
│ └ resources
│    ├[~] resource AWS::Bedrock::Flow
│    │ └ types
│    │    ├[~] type KnowledgeBaseFlowNodeConfiguration
│    │    │ └ properties
│    │    │    └ ModelId: (documentation changed)
│    │    └[~] type PromptFlowNodeInlineConfiguration
│    │      └ properties
│    │         └ ModelId: (documentation changed)
│    ├[~] resource AWS::Bedrock::FlowVersion
│    │ └ types
│    │    ├[~] type KnowledgeBaseFlowNodeConfiguration
│    │    │ └ properties
│    │    │    └ ModelId: (documentation changed)
│    │    └[~] type PromptFlowNodeInlineConfiguration
│    │      └ properties
│    │         └ ModelId: (documentation changed)
│    ├[~] resource AWS::Bedrock::KnowledgeBase
│    │ ├ attributes
│    │ │  ├ CreatedAt: (documentation changed)
│    │ │  └ UpdatedAt: (documentation changed)
│    │ └ types
│    │    └[~] type KnowledgeBaseConfiguration
│    │      └ properties
│    │         └ VectorKnowledgeBaseConfiguration: (documentation changed)
│    ├[~] resource AWS::Bedrock::Prompt
│    │ └ types
│    │    └[~] type PromptVariant
│    │      └ properties
│    │         └ ModelId: (documentation changed)
│    └[~] resource AWS::Bedrock::PromptVersion
│      └ types
│         └[~] type PromptVariant
│           └ properties
│              └ ModelId: (documentation changed)
├[~] service aws-cloudformation
│ └ resources
│    └[~] resource AWS::CloudFormation::HookTypeConfig
│      └ properties
│         ├ Configuration: (documentation changed)
│         ├ TypeArn: (documentation changed)
│         └ TypeName: (documentation changed)
├[~] service aws-cloudtrail
│ └ resources
│    ├[~] resource AWS::CloudTrail::EventDataStore
│    │ └ types
│    │    ├[~] type AdvancedEventSelector
│    │    │ └  - documentation: Advanced event selectors let you create fine-grained selectors for CloudTrail management and data events. They help you control costs by logging only those events that are important to you. For more information about advanced event selectors, see [Logging management events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html) and [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .
│    │    │    You cannot apply both event selectors and advanced event selectors to a trail.
│    │    │    *Supported CloudTrail event record fields for management events*
│    │    │    - `eventCategory` (required)
│    │    │    - `eventSource`
│    │    │    - `readOnly`
│    │    │    *Supported CloudTrail event record fields for data events*
│    │    │    - `eventCategory` (required)
│    │    │    - `resources.type` (required)
│    │    │    - `readOnly`
│    │    │    - `eventName`
│    │    │    - `resources.ARN`
│    │    │    > For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .
│    │    │    + documentation: Advanced event selectors let you create fine-grained selectors for AWS CloudTrail management, data, and network activity events. They help you control costs by logging only those events that are important to you. For more information about configuring advanced event selectors, see the [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) , [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) , and [Logging management events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html) topics in the *AWS CloudTrail User Guide* .
│    │    │    You cannot apply both event selectors and advanced event selectors to a trail.
│    │    │    *Supported CloudTrail event record fields for management events*
│    │    │    - `eventCategory` (required)
│    │    │    - `eventSource`
│    │    │    - `readOnly`
│    │    │    *Supported CloudTrail event record fields for data events*
│    │    │    - `eventCategory` (required)
│    │    │    - `resources.type` (required)
│    │    │    - `readOnly`
│    │    │    - `eventName`
│    │    │    - `resources.ARN`
│    │    │    *Supported CloudTrail event record fields for network activity events*
│    │    │    > Network activity events is in preview release for CloudTrail and is subject to change. 
│    │    │    - `eventCategory` (required)
│    │    │    - `eventSource` (required)
│    │    │    - `eventName`
│    │    │    - `errorCode` - The only valid value for `errorCode` is `VpceAccessDenied` .
│    │    │    - `vpcEndpointId`
│    │    │    > For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .
│    │    └[~] type AdvancedFieldSelector
│    │      └ properties
│    │         └ Field: (documentation changed)
│    └[~] resource AWS::CloudTrail::Trail
│      ├ properties
│      │  └ AdvancedEventSelectors: (documentation changed)
│      └ types
│         ├[~] type AdvancedEventSelector
│         │ └  - documentation: Advanced event selectors let you create fine-grained selectors for CloudTrail management and data events. They help you control costs by logging only those events that are important to you. For more information about advanced event selectors, see [Logging management events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html) and [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .
│         │    You cannot apply both event selectors and advanced event selectors to a trail.
│         │    *Supported CloudTrail event record fields for management events*
│         │    - `eventCategory` (required)
│         │    - `eventSource`
│         │    - `readOnly`
│         │    *Supported CloudTrail event record fields for data events*
│         │    - `eventCategory` (required)
│         │    - `resources.type` (required)
│         │    - `readOnly`
│         │    - `eventName`
│         │    - `resources.ARN`
│         │    > For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .
│         │    + documentation: Advanced event selectors let you create fine-grained selectors for AWS CloudTrail management, data, and network activity events. They help you control costs by logging only those events that are important to you. For more information about configuring advanced event selectors, see the [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) , [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) , and [Logging management events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html) topics in the *AWS CloudTrail User Guide* .
│         │    You cannot apply both event selectors and advanced event selectors to a trail.
│         │    *Supported CloudTrail event record fields for management events*
│         │    - `eventCategory` (required)
│         │    - `eventSource`
│         │    - `readOnly`
│         │    *Supported CloudTrail event record fields for data events*
│         │    - `eventCategory` (required)
│         │    - `resources.type` (required)
│         │    - `readOnly`
│         │    - `eventName`
│         │    - `resources.ARN`
│         │    *Supported CloudTrail event record fields for network activity events*
│         │    > Network activity events is in preview release for CloudTrail and is subject to change. 
│         │    - `eventCategory` (required)
│         │    - `eventSource` (required)
│         │    - `eventName`
│         │    - `errorCode` - The only valid value for `errorCode` is `VpceAccessDenied` .
│         │    - `vpcEndpointId`
│         │    > For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .
│         ├[~] type AdvancedFieldSelector
│         │ └ properties
│         │    └ Field: (documentation changed)
│         └[~] type DataResource
│           └ properties
│              └ Type: (documentation changed)
├[~] service aws-datasync
│ └ resources
│    └[~] resource AWS::DataSync::LocationS3
│      └  - documentation: The `AWS::DataSync::LocationS3` resource specifies an endpoint for an Amazon S3 bucket.
│         For more information, see [Create an Amazon S3 location](https://docs.aws.amazon.com/datasync/latest/userguide/create-locations-cli.html#create-location-s3-cli) in the *AWS DataSync User Guide* .
│         + documentation: The `AWS::DataSync::LocationS3` resource specifies an endpoint for an Amazon S3 bucket.
│         For more information, see the [*AWS DataSync User Guide*](https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html) .
├[~] service aws-ec2
│ └ resources
│    ├[~] resource AWS::EC2::NatGateway
│    │ └ properties
│    │    └ SecondaryAllocationIds: (documentation changed)
│    ├[~] resource AWS::EC2::TransitGateway
│    │ └ properties
│    │    └[+] SecurityGroupReferencingSupport: string
│    ├[~] resource AWS::EC2::TransitGatewayAttachment
│    │ └ types
│    │    └[~] type Options
│    │      └ properties
│    │         └[+] SecurityGroupReferencingSupport: string
│    ├[~] resource AWS::EC2::TransitGatewayVpcAttachment
│    │ └ types
│    │    └[~] type Options
│    │      └ properties
│    │         └[+] SecurityGroupReferencingSupport: string
│    └[~] resource AWS::EC2::VPCEndpoint
│      └ properties
│         └ PolicyDocument: (documentation changed)
├[~] service aws-ecs
│ └ resources
│    ├[~] resource AWS::ECS::Service
│    │ └ types
│    │    └[~] type LogConfiguration
│    │      └ properties
│    │         └ Options: (documentation changed)
│    └[~] resource AWS::ECS::TaskDefinition
│      └ types
│         └[~] type LogConfiguration
│           └ properties
│              └ Options: (documentation changed)
├[~] service aws-eks
│ └ resources
│    └[~] resource AWS::EKS::Cluster
│      ├ properties
│      │  └[+] ZonalShiftConfig: ZonalShiftConfig
│      └ types
│         └[+] type ZonalShiftConfig
│           ├  documentation: The current zonal shift configuration to use for the cluster.
│           │  name: ZonalShiftConfig
│           └ properties
│              └Enabled: boolean
├[~] service aws-elasticloadbalancingv2
│ └ resources
│    └[~] resource AWS::ElasticLoadBalancingV2::Listener
│      └ properties
│         └ ListenerAttributes: (documentation changed)
├[~] service aws-glue
│ └ resources
│    ├[~] resource AWS::Glue::Crawler
│    ├[~] resource AWS::Glue::Job
│    │ └ properties
│    │    ├[+] JobMode: string
│    │    └[+] JobRunQueuingEnabled: boolean
│    └[+] resource AWS::Glue::UsageProfile
│      ├  name: UsageProfile
│      │  cloudFormationType: AWS::Glue::UsageProfile
│      │  documentation: Creates an AWS Glue usage profile.
│      │  tagInformation: {"tagPropertyName":"Tags","variant":"standard"}
│      ├ properties
│      │  ├Name: string (required, immutable)
│      │  ├Description: string
│      │  └Tags: Array<tag>
│      └ attributes
│         └CreatedOn: string
├[~] service aws-iotfleetwise
│ └ resources
│    └[~] resource AWS::IoTFleetWise::Campaign
│      └ properties
│         └ Action: - string (required)
│                   + string
├[~] service aws-iottwinmaker
│ └ resources
│    └[~] resource AWS::IoTTwinMaker::Scene
│      └ properties
│         └ WorkspaceId: (documentation changed)
├[~] service aws-iotwireless
│ └ resources
│    └[~] resource AWS::IoTWireless::WirelessDevice
│      └ types
│         └[~] type OtaaV10x
│           └  - documentation: undefined
│              + documentation: OTAA device object for v1.0.x
├[~] service aws-kinesisfirehose
│ └ resources
│    └[~] resource AWS::KinesisFirehose::DeliveryStream
│      ├ properties
│      │  ├ DeliveryStreamName: (documentation changed)
│      │  ├ DeliveryStreamType: (documentation changed)
│      │  ├ IcebergDestinationConfiguration: (documentation changed)
│      │  └ Tags: (documentation changed)
│      └ types
│         ├[~] type AmazonOpenSearchServerlessBufferingHints
│         │ └ properties
│         │    └ SizeInMBs: (documentation changed)
│         ├[~] type CatalogConfiguration
│         │ ├  - documentation: Describes the containers where the destination Apache Iceberg Tables are persisted.
│         │ │  Amazon Data Firehose is in preview release and is subject to change.
│         │ │  + documentation: Describes the containers where the destination Apache Iceberg Tables are persisted.
│         │ └ properties
│         │    └ CatalogArn: (documentation changed)
│         ├[~] type DestinationTableConfiguration
│         │ ├  - documentation: Describes the configuration of a destination in Apache Iceberg Tables.
│         │ │  Amazon Data Firehose is in preview release and is subject to change.
│         │ │  + documentation: Describes the configuration of a destination in Apache Iceberg Tables.
│         │ └ properties
│         │    ├ DestinationDatabaseName: (documentation changed)
│         │    ├ DestinationTableName: (documentation changed)
│         │    ├ S3ErrorOutputPrefix: (documentation changed)
│         │    └ UniqueKeys: (documentation changed)
│         ├[~] type ExtendedS3DestinationConfiguration
│         │ └ properties
│         │    ├ CloudWatchLoggingOptions: (documentation changed)
│         │    └ S3BackupMode: (documentation changed)
│         ├[~] type IcebergDestinationConfiguration
│         │ ├  - documentation: Specifies the destination configure settings for Apache Iceberg Table.
│         │ │  Amazon Data Firehose is in preview release and is subject to change.
│         │ │  + documentation: Specifies the destination configure settings for Apache Iceberg Table.
│         │ └ properties
│         │    ├ CatalogConfiguration: (documentation changed)
│         │    ├ DestinationTableConfigurationList: (documentation changed)
│         │    ├ RoleARN: (documentation changed)
│         │    └ s3BackupMode: (documentation changed)
│         ├[~] type RedshiftDestinationConfiguration
│         │ └ properties
│         │    ├ CloudWatchLoggingOptions: (documentation changed)
│         │    └ S3BackupMode: (documentation changed)
│         ├[~] type S3DestinationConfiguration
│         │ └ properties
│         │    └ CloudWatchLoggingOptions: (documentation changed)
│         ├[~] type SecretsManagerConfiguration
│         │ └ properties
│         │    ├ Enabled: (documentation changed)
│         │    └ SecretARN: (documentation changed)
│         ├[~] type SnowflakeBufferingHints
│         │ └ properties
│         │    └ SizeInMBs: (documentation changed)
│         └[~] type SplunkDestinationConfiguration
│           └ properties
│              └ CloudWatchLoggingOptions: (documentation changed)
├[~] service aws-lambda
│ └ resources
│    ├[~] resource AWS::Lambda::CodeSigningConfig
│    │ └ properties
│    │    └ Tags: (documentation changed)
│    ├[~] resource AWS::Lambda::EventSourceMapping
│    │ ├ properties
│    │ │  └ Tags: (documentation changed)
│    │ └ attributes
│    │    └ EventSourceMappingArn: (documentation changed)
│    ├[~] resource AWS::Lambda::Function
│    │ └ properties
│    │    └ Tags: (documentation changed)
│    └[~] resource AWS::Lambda::Permission
│      └ properties
│         └ Principal: (documentation changed)
├[~] service aws-logs
│ └ resources
│    └[~] resource AWS::Logs::QueryDefinition
│      └ properties
│         └ Name: (documentation changed)
├[~] service aws-mediaconnect
│ └ resources
│    └[~] resource AWS::MediaConnect::FlowOutput
│      └ properties
│         └ OutputStatus: (documentation changed)
├[~] service aws-medialive
│ └ resources
│    └[~] resource AWS::MediaLive::Channel
│      └ types
│         ├[~] type H264Settings
│         │ └ properties
│         │    └[+] MinQp: integer
│         └[~] type H265Settings
│           └ properties
│              └[+] MinQp: integer
├[~] service aws-organizations
│ └ resources
│    └[~] resource AWS::Organizations::Policy
│      └ properties
│         └ Content: (documentation changed)
├[~] service aws-pipes
│ └ resources
│    └[~] resource AWS::Pipes::Pipe
│      └ types
│         └[~] type PipeTargetTimestreamParameters
│           └ properties
│              └ TimestampFormat: (documentation changed)
├[~] service aws-quicksight
│ └ resources
│    ├[~] resource AWS::QuickSight::Analysis
│    │ └ types
│    │    ├[~] type DefaultDateTimePickerControlOptions
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type DefaultFilterDropDownControlOptions
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type DefaultRelativeDateTimeControlOptions
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type FilterDateTimePickerControl
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type FilterDropDownControl
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type FilterRelativeDateTimeControl
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    └[~] type ParameterDropDownControl
│    │      └ properties
│    │         └[+] CommitMode: string
│    ├[~] resource AWS::QuickSight::Dashboard
│    │ └ types
│    │    ├[~] type DefaultDateTimePickerControlOptions
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type DefaultFilterDropDownControlOptions
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type DefaultRelativeDateTimeControlOptions
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type FilterDateTimePickerControl
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type FilterDropDownControl
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    ├[~] type FilterRelativeDateTimeControl
│    │    │ └ properties
│    │    │    └[+] CommitMode: string
│    │    └[~] type ParameterDropDownControl
│    │      └ properties
│    │         └[+] CommitMode: string
│    ├[+] resource AWS::QuickSight::Folder
│    │ ├  name: Folder
│    │ │  cloudFormationType: AWS::QuickSight::Folder
│    │ │  documentation: Definition of the AWS::QuickSight::Folder Resource Type.
│    │ │  tagInformation: {"tagPropertyName":"Tags","variant":"standard"}
│    │ ├ properties
│    │ │  ├AwsAccountId: string (immutable)
│    │ │  ├FolderId: string (immutable)
│    │ │  ├FolderType: string (immutable)
│    │ │  ├Name: string
│    │ │  ├ParentFolderArn: string (immutable)
│    │ │  ├Permissions: Array<ResourcePermission>
│    │ │  ├SharingModel: string (immutable)
│    │ │  └Tags: Array<tag>
│    │ ├ attributes
│    │ │  ├Arn: string
│    │ │  ├CreatedTime: string
│    │ │  └LastUpdatedTime: string
│    │ └ types
│    │    └type ResourcePermission
│    │     ├  documentation: <p>Permission for the resource.</p>
│    │     │  name: ResourcePermission
│    │     └ properties
│    │        ├Principal: string (required)
│    │        └Actions: Array<string> (required)
│    └[~] resource AWS::QuickSight::Template
│      └ types
│         ├[~] type DefaultDateTimePickerControlOptions
│         │ └ properties
│         │    └[+] CommitMode: string
│         ├[~] type DefaultFilterDropDownControlOptions
│         │ └ properties
│         │    └[+] CommitMode: string
│         ├[~] type DefaultRelativeDateTimeControlOptions
│         │ └ properties
│         │    └[+] CommitMode: string
│         ├[~] type FilterDateTimePickerControl
│         │ └ properties
│         │    └[+] CommitMode: string
│         ├[~] type FilterDropDownControl
│         │ └ properties
│         │    └[+] CommitMode: string
│         ├[~] type FilterRelativeDateTimeControl
│         │ └ properties
│         │    └[+] CommitMode: string
│         └[~] type ParameterDropDownControl
│           └ properties
│              └[+] CommitMode: string
├[~] service aws-rds
│ └ resources
│    └[~] resource AWS::RDS::GlobalCluster
│      ├  - tagInformation: undefined
│      │  + tagInformation: {"tagPropertyName":"Tags","variant":"standard"}
│      └ properties
│         └[+] Tags: Array<tag>
├[~] service aws-route53resolver
│ └ resources
│    └[~] resource AWS::Route53Resolver::ResolverRule
│      └ types
│         └[~] type TargetAddress
│           └ properties
│              └ Protocol: (documentation changed)
├[~] service aws-s3
│ └ resources
│    └[~] resource AWS::S3::Bucket
│      └ types
│         ├[~] type ServerSideEncryptionByDefault
│         │ ├  - documentation: Describes the default server-side encryption to apply to new objects in the bucket. If a PUT Object request doesn't specify any server-side encryption, this default encryption will be applied. If you don't specify a customer managed key at configuration, Amazon S3 automatically creates an AWS KMS key in your AWS account the first time that you add an object encrypted with SSE-KMS to a bucket. By default, Amazon S3 uses this KMS key for SSE-KMS. For more information, see [PUT Bucket encryption](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTencryption.html) in the *Amazon S3 API Reference* .
│         │ │  > If you're specifying a customer managed KMS key, we recommend using a fully qualified KMS key ARN. If you use a KMS key alias instead, then AWS KMS resolves the key within the requester’s account. This behavior can result in data that's encrypted with a KMS key that belongs to the requester, and not the bucket owner.
│         │ │  + documentation: Describes the default server-side encryption to apply to new objects in the bucket. If a PUT Object request doesn't specify any server-side encryption, this default encryption will be applied. For more information, see [PutBucketEncryption](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTencryption.html) .
│         │ │  > - *General purpose buckets* - If you don't specify a customer managed key at configuration, Amazon S3 automatically creates an AWS KMS key ( `aws/s3` ) in your AWS account the first time that you add an object encrypted with SSE-KMS to a bucket. By default, Amazon S3 uses this KMS key for SSE-KMS.
│         │ │  > - *Directory buckets* - Your SSE-KMS configuration can only support 1 [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) ( `aws/s3` ) isn't supported.
│         │ │  > - *Directory buckets* - For directory buckets, there are only two supported options for server-side encryption: SSE-S3 and SSE-KMS.
│         │ └ properties
│         │    ├ KMSMasterKeyID: (documentation changed)
│         │    └ SSEAlgorithm: (documentation changed)
│         └[~] type ServerSideEncryptionRule
│           └  - documentation: Specifies the default server-side encryption configuration.
│              > If you're specifying a customer managed KMS key, we recommend using a fully qualified KMS key ARN. If you use a KMS key alias instead, then AWS KMS resolves the key within the requester’s account. This behavior can result in data that's encrypted with a KMS key that belongs to the requester, and not the bucket owner.
│              + documentation: Specifies the default server-side encryption configuration.
│              > - *General purpose buckets* - If you're specifying a customer managed KMS key, we recommend using a fully qualified KMS key ARN. If you use a KMS key alias instead, then AWS KMS resolves the key within the requester’s account. This behavior can result in data that's encrypted with a KMS key that belongs to the requester, and not the bucket owner.
│              > - *Directory buckets* - When you specify an [AWS KMS customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) for encryption in your directory bucket, only use the key ID or key ARN. The key alias format of the KMS key isn't supported.
├[~] service aws-s3express
│ └ resources
│    └[~] resource AWS::S3Express::DirectoryBucket
│      ├  - documentation: The `AWS::S3Express::DirectoryBucket` resource creates an Amazon S3 directory bucket in the same AWS Region where you create the AWS CloudFormation stack.
│      │  To control how AWS CloudFormation handles the bucket when the stack is deleted, you can set a deletion policy for your bucket. You can choose to *retain* the bucket or to *delete* the bucket. For more information, see [DeletionPolicy attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html) .
│      │  > You can only delete empty buckets. Deletion fails for buckets that have contents. 
│      │  - **Permissions** - The required permissions for CloudFormation to use are based on the operations that are performed on the stack.
│      │  - Create
│      │  - s3express:CreateBucket
│      │  - s3express:ListAllMyDirectoryBuckets
│      │  - Read
│      │  - s3express:ListAllMyDirectoryBuckets
│      │  - Delete
│      │  - s3express:DeleteBucket
│      │  - s3express:ListAllMyDirectoryBuckets
│      │  - List
│      │  - s3express:ListAllMyDirectoryBuckets
│      │  The following operations are related to `AWS::S3Express::DirectoryBucket` :
│      │  - [CreateBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)
│      │  - [ListDirectoryBuckets](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html)
│      │  - [DeleteBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)
│      │  + documentation: The `AWS::S3Express::DirectoryBucket` resource creates an Amazon S3 directory bucket in the same AWS Region where you create the AWS CloudFormation stack.
│      │  To control how AWS CloudFormation handles the bucket when the stack is deleted, you can set a deletion policy for your bucket. You can choose to *retain* the bucket or to *delete* the bucket. For more information, see [DeletionPolicy attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html) .
│      │  > You can only delete empty buckets. Deletion fails for buckets that have contents. 
│      │  - **Permissions** - The required permissions for CloudFormation to use are based on the operations that are performed on the stack.
│      │  - Create
│      │  - s3express:CreateBucket
│      │  - s3express:ListAllMyDirectoryBuckets
│      │  - Read
│      │  - s3express:ListAllMyDirectoryBuckets
│      │  - ec2:DescribeAvailabilityZones
│      │  - Delete
│      │  - s3express:DeleteBucket
│      │  - s3express:ListAllMyDirectoryBuckets
│      │  - List
│      │  - s3express:ListAllMyDirectoryBuckets
│      │  - PutBucketEncryption
│      │  - s3express:PutEncryptionConfiguration
│      │  - To set a directory bucket default encryption with SSE-KMS, you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and AWS KMS key policies for the target AWS KMS key.
│      │  - GetBucketEncryption
│      │  - s3express:GetBucketEncryption
│      │  - DeleteBucketEncryption
│      │  - s3express:PutEncryptionConfiguration
│      │  The following operations are related to `AWS::S3Express::DirectoryBucket` :
│      │  - [CreateBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)
│      │  - [ListDirectoryBuckets](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html)
│      │  - [DeleteBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)
│      ├ properties
│      │  ├[+] BucketEncryption: BucketEncryption
│      │  └ BucketName: (documentation changed)
│      ├ attributes
│      │  ├ Arn: (documentation changed)
│      │  └[+] AvailabilityZoneName: string
│      └ types
│         ├[+] type BucketEncryption
│         │ ├  documentation: Specifies default encryption for a bucket using server-side encryption with Amazon S3 managed keys (SSE-S3) or AWS KMS keys (SSE-KMS).
│         │ │  name: BucketEncryption
│         │ └ properties
│         │    └ServerSideEncryptionConfiguration: Array<ServerSideEncryptionRule> (required)
│         ├[+] type ServerSideEncryptionByDefault
│         │ ├  documentation: Specifies the default server-side encryption to apply to new objects in the bucket. If a PUT Object request doesn't specify any server-side encryption, this default encryption will be applied.
│         │ │  name: ServerSideEncryptionByDefault
│         │ └ properties
│         │    └SSEAlgorithm: string (required)
│         └[+] type ServerSideEncryptionRule
│           ├  documentation: Specifies the default server-side encryption configuration.
│           │  name: ServerSideEncryptionRule
│           └ properties
│              ├BucketKeyEnabled: boolean
│              └ServerSideEncryptionByDefault: ServerSideEncryptionByDefault
├[~] service aws-sagemaker
│ └ resources
│    └[~] resource AWS::SageMaker::ImageVersion
│      ├ properties
│      │  └[+] Version: integer
│      └ attributes
│         └ Version: (documentation changed)
├[~] service aws-secretsmanager
│ └ resources
│    ├[~] resource AWS::SecretsManager::RotationSchedule
│    │ ├  - documentation: Sets the rotation schedule and Lambda rotation function for a secret. For more information, see [How rotation works](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_how.html) .
│    │ │  For Amazon RDS master user credentials, see [AWS::RDS::DBCluster MasterUserSecret](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-dbcluster-masterusersecret.html) .
│    │ │  For Amazon Redshift admin user credentials, see [AWS::Redshift::Cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-redshift-cluster.html) .
│    │ │  For the rotation function, you have two options:
│    │ │  - You can create a new rotation function based on one of the [Secrets Manager rotation function templates](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_available-rotation-templates.html) by using `HostedRotationLambda` .
│    │ │  - You can choose an existing rotation function by using `RotationLambdaARN` .
│    │ │  For database secrets, if you define both the secret and the database or service in the AWS CloudFormation template, then you need to define the [AWS::SecretsManager::SecretTargetAttachment](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-secrettargetattachment.html) resource to populate the secret with the connection details of the database or service before you attempt to configure rotation.
│    │ │  + documentation: Sets the rotation schedule and Lambda rotation function for a secret. For more information, see [How rotation works](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_how.html) .
│    │ │  For Amazon RDS master user credentials, see [AWS::RDS::DBCluster MasterUserSecret](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-dbcluster-masterusersecret.html) .
│    │ │  For Amazon Redshift admin user credentials, see [AWS::Redshift::Cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-redshift-cluster.html) .
│    │ │  For the rotation function, you have two options:
│    │ │  - You can create a new rotation function based on one of the [Secrets Manager rotation function templates](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_available-rotation-templates.html) by using `HostedRotationLambda` .
│    │ │  - You can choose an existing rotation function by using `RotationLambdaARN` .
│    │ │  For database secrets, if you define both the secret and the database or service in the AWS CloudFormation template, then you need to define the [AWS::SecretsManager::SecretTargetAttachment](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-secrettargetattachment.html) resource to populate the secret with the connection details of the database or service before you attempt to configure rotation.
│    │ │  For a single secret, you can only define one rotation schedule with it.
│    │ └ properties
│    │    └ SecretId: (documentation changed)
│    └[~] resource AWS::SecretsManager::SecretTargetAttachment
│      ├  - documentation: The `AWS::SecretsManager::SecretTargetAttachment` resource completes the final link between a Secrets Manager secret and the associated database by adding the database connection information to the secret JSON. If you want to turn on automatic rotation for a database credential secret, the secret must contain the database connection information. For more information, see [JSON structure of Secrets Manager database credential secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_secret_json_structure.html) .
│      │  When you remove a `SecretTargetAttachment` from a stack, Secrets Manager removes the database connection information from the secret with a `PutSecretValue` call.
│      │  For Amazon RDS master user credentials, see [AWS::RDS::DBCluster MasterUserSecret](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-dbcluster-masterusersecret.html) .
│      │  For Amazon Redshift admin user credentials, see [AWS::Redshift::Cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-redshift-cluster.html) .
│      │  + documentation: The `AWS::SecretsManager::SecretTargetAttachment` resource completes the final link between a Secrets Manager secret and the associated database by adding the database connection information to the secret JSON. If you want to turn on automatic rotation for a database credential secret, the secret must contain the database connection information. For more information, see [JSON structure of Secrets Manager database credential secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_secret_json_structure.html) .
│      │  A single secret resource can only have one target attached to it.
│      │  When you remove a `SecretTargetAttachment` from a stack, Secrets Manager removes the database connection information from the secret with a `PutSecretValue` call.
│      │  For Amazon RDS master user credentials, see [AWS::RDS::DBCluster MasterUserSecret](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-dbcluster-masterusersecret.html) .
│      │  For Amazon Redshift admin user credentials, see [AWS::Redshift::Cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-redshift-cluster.html) .
│      └ properties
│         └ SecretId: (documentation changed)
├[~] service aws-securityhub
│ └ resources
│    ├[~] resource AWS::SecurityHub::AutomationRule
│    │ └ types
│    │    ├[~] type SeverityUpdate
│    │    │ └ properties
│    │    │    └ Normalized: (documentation changed)
│    │    └[~] type WorkflowUpdate
│    │      └ properties
│    │         └ Status: (documentation changed)
│    ├[~] resource AWS::SecurityHub::FindingAggregator
│    │ ├ properties
│    │ │  └ Regions: (documentation changed)
│    │ └ attributes
│    │    └ FindingAggregationRegion: (documentation changed)
│    └[~] resource AWS::SecurityHub::Insight
│      └ types
│         └[~] type AwsSecurityFindingFilters
│           └ properties
│              ├ SeverityNormalized: (documentation changed)
│              └ WorkflowStatus: (documentation changed)
├[~] service aws-ses
│ └ resources
│    └[~] resource AWS::SES::MailManagerRuleSet
│      └ types
│         └[~] type RuleStringToEvaluate
│           ├  - documentation: The string to evaluate in a string condition expression.
│           │  + documentation: The string to evaluate in a string condition expression.
│           │  > This data type is a UNION, so only one of the following members can be specified when used or returned.
│           └ properties
│              ├ Attribute: - string (required)
│              │            + string
│              └[+] MimeHeaderAttribute: string
├[~] service aws-sqs
│ └ resources
│    └[~] resource AWS::SQS::Queue
│      ├  - documentation: The `AWS::SQS::Queue` resource creates an Amazon SQS standard or FIFO queue.
│      │  Keep the following caveats in mind:
│      │  - If you don't specify the `FifoQueue` property, Amazon SQS creates a standard queue.
│      │  > You can't change the queue type after you create it and you can't convert an existing standard queue into a FIFO queue. You must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue. For more information, see [Moving from a standard queue to a FIFO queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-moving.html) in the *Amazon SQS Developer Guide* .
│      │  - If you don't provide a value for a property, the queue is created with the default value for the property.
│      │  - If you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
│      │  - To successfully create a new queue, you must provide a queue name that adheres to the [limits related to queues](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-queues.html) and is unique within the scope of your queues.
│      │  For more information about creating FIFO (first-in-first-out) queues, see [Creating an Amazon SQS queue ( AWS CloudFormation )](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/screate-queue-cloudformation.html) in the *Amazon SQS Developer Guide* .
│      │  + documentation: The `AWS::SQS::Queue` resource creates an Amazon SQS standard or FIFO queue.
│      │  Keep the following caveats in mind:
│      │  - If you don't specify the `FifoQueue` property, Amazon SQS creates a standard queue.
│      │  > You can't change the queue type after you create it and you can't convert an existing standard queue into a FIFO queue. You must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue. For more information, see [Moving from a standard queue to a FIFO queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-moving.html) in the *Amazon SQS Developer Guide* .
│      │  - If you don't provide a value for a property, the queue is created with the default value for the property.
│      │  - If you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
│      │  - To successfully create a new queue, you must provide a queue name that adheres to the [limits related to queues](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-queues.html) and is unique within the scope of your queues.
│      │  For more information about creating FIFO (first-in-first-out) queues, see [Creating an Amazon SQS queue ( AWS CloudFormation )](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/create-queue-cloudformation.html) in the *Amazon SQS Developer Guide* .
│      └ properties
│         ├ FifoQueue: (documentation changed)
│         ├ KmsMasterKeyId: (documentation changed)
│         └ QueueName: (documentation changed)
├[~] service aws-ssm
│ └ resources
│    └[~] resource AWS::SSM::PatchBaseline
│      └ properties
│         └ GlobalFilters: (documentation changed)
├[~] service aws-synthetics
│ └ resources
│    └[~] resource AWS::Synthetics::Canary
│      └ properties
│         └[+] ResourcesToReplicateTags: Array<string>
├[~] service aws-waf
│ └ resources
│    ├[~] resource AWS::WAF::ByteMatchSet
│    │ └ types
│    │    ├[~] type ByteMatchTuple
│    │    │ └  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │    │    > 
│    │    │    > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │    │    The bytes (typically a string that corresponds with ASCII characters) that you want AWS WAF to search for in web requests, the location in requests that you want AWS WAF to search, and other settings.
│    │    │    + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │    │    > 
│    │    │    > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │    │    > 
│    │    │    > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │    │    The bytes (typically a string that corresponds with ASCII characters) that you want AWS WAF to search for in web requests, the location in requests that you want AWS WAF to search, and other settings.
│    │    └[~] type FieldToMatch
│    │      └  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         Specifies where in a web request to look for `TargetString` .
│    │         + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │         > 
│    │         > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         Specifies where in a web request to look for `TargetString` .
│    ├[~] resource AWS::WAF::IPSet
│    │ ├  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │ │  > 
│    │ │  > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │ │  Contains one or more IP addresses or blocks of IP addresses specified in Classless Inter-Domain Routing (CIDR) notation. AWS WAF supports IPv4 address ranges: /8 and any range between /16 through /32. AWS WAF supports IPv6 address ranges: /24, /32, /48, /56, /64, and /128.
│    │ │  To specify an individual IP address, you specify the four-part IP address followed by a `/32` , for example, 192.0.2.0/32. To block a range of IP addresses, you can specify /8 or any range between /16 through /32 (for IPv4) or /24, /32, /48, /56, /64, or /128 (for IPv6). For more information about CIDR notation, see the Wikipedia entry [Classless Inter-Domain Routing](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) .
│    │ │  + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │ │  > 
│    │ │  > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │ │  > 
│    │ │  > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │ │  Contains one or more IP addresses or blocks of IP addresses specified in Classless Inter-Domain Routing (CIDR) notation. AWS WAF supports IPv4 address ranges: /8 and any range between /16 through /32. AWS WAF supports IPv6 address ranges: /24, /32, /48, /56, /64, and /128.
│    │ │  To specify an individual IP address, you specify the four-part IP address followed by a `/32` , for example, 192.0.2.0/32. To block a range of IP addresses, you can specify /8 or any range between /16 through /32 (for IPv4) or /24, /32, /48, /56, /64, or /128 (for IPv6). For more information about CIDR notation, see the Wikipedia entry [Classless Inter-Domain Routing](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) .
│    │ └ types
│    │    └[~] type IPSetDescriptor
│    │      └  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         Specifies the IP address type ( `IPV4` or `IPV6` ) and the IP address range (in CIDR format) that web requests originate from.
│    │         + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │         > 
│    │         > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         Specifies the IP address type ( `IPV4` or `IPV6` ) and the IP address range (in CIDR format) that web requests originate from.
│    ├[~] resource AWS::WAF::SizeConstraintSet
│    │ ├  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │ │  > 
│    │ │  > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │ │  A complex type that contains `SizeConstraint` objects, which specify the parts of web requests that you want AWS WAF to inspect the size of. If a `SizeConstraintSet` contains more than one `SizeConstraint` object, a request only needs to match one constraint to be considered a match.
│    │ │  + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │ │  > 
│    │ │  > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │ │  > 
│    │ │  > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │ │  A complex type that contains `SizeConstraint` objects, which specify the parts of web requests that you want AWS WAF to inspect the size of. If a `SizeConstraintSet` contains more than one `SizeConstraint` object, a request only needs to match one constraint to be considered a match.
│    │ └ types
│    │    └[~] type SizeConstraint
│    │      └  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         Specifies a constraint on the size of a part of the web request. AWS WAF uses the `Size` , `ComparisonOperator` , and `FieldToMatch` to build an expression in the form of " `Size` `ComparisonOperator` size in bytes of `FieldToMatch` ". If that expression is true, the `SizeConstraint` is considered to match.
│    │         + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │         > 
│    │         > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         Specifies a constraint on the size of a part of the web request. AWS WAF uses the `Size` , `ComparisonOperator` , and `FieldToMatch` to build an expression in the form of " `Size` `ComparisonOperator` size in bytes of `FieldToMatch` ". If that expression is true, the `SizeConstraint` is considered to match.
│    ├[~] resource AWS::WAF::SqlInjectionMatchSet
│    │ ├  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │ │  > 
│    │ │  > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │ │  A complex type that contains `SqlInjectionMatchTuple` objects, which specify the parts of web requests that you want AWS WAF to inspect for snippets of malicious SQL code and, if you want AWS WAF to inspect a header, the name of the header. If a `SqlInjectionMatchSet` contains more than one `SqlInjectionMatchTuple` object, a request needs to include snippets of SQL code in only one of the specified parts of the request to be considered a match.
│    │ │  + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │ │  > 
│    │ │  > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │ │  > 
│    │ │  > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │ │  A complex type that contains `SqlInjectionMatchTuple` objects, which specify the parts of web requests that you want AWS WAF to inspect for snippets of malicious SQL code and, if you want AWS WAF to inspect a header, the name of the header. If a `SqlInjectionMatchSet` contains more than one `SqlInjectionMatchTuple` object, a request needs to include snippets of SQL code in only one of the specified parts of the request to be considered a match.
│    │ └ types
│    │    └[~] type SqlInjectionMatchTuple
│    │      └  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         Specifies the part of a web request that you want AWS WAF to inspect for snippets of malicious SQL code and, if you want AWS WAF to inspect a header, the name of the header.
│    │         + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │         > 
│    │         > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         Specifies the part of a web request that you want AWS WAF to inspect for snippets of malicious SQL code and, if you want AWS WAF to inspect a header, the name of the header.
│    ├[~] resource AWS::WAF::WebACL
│    │ └ types
│    │    └[~] type WafAction
│    │      └  - documentation: > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) . With the latest version, AWS WAF has a single set of endpoints for regional and global use. 
│    │         For the action that is associated with a rule in a `WebACL` , specifies the action that you want AWS WAF to perform when a web request matches all of the conditions in a rule. For the default action in a `WebACL` , specifies the action that you want AWS WAF to take when a web request doesn't match all of the conditions in any of the rules in a `WebACL` .
│    │         + documentation: > Deprecation notice: AWS WAF Classic support will end on September 30, 2025.
│    │         > 
│    │         > This is *AWS WAF Classic* documentation. For more information, see [AWS WAF Classic](https://docs.aws.amazon.com/waf/latest/developerguide/classic-waf-chapter.html) in the developer guide.
│    │         > 
│    │         > *For the latest version of AWS WAF* , use the AWS WAF V2 API and see the …
…dk/aws-lambda-python-alpha/test/lambda-handler-custom-build (#31642)

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.18 to 1.26.19.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.19</h2>
<h2>🚀 urllib3 is fundraising for HTTP/2 support</h2>
<p><a href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3 is raising ~$40,000 USD</a> to release HTTP/2 support and ensure long-term sustainable maintenance of the project after a sharp decline in financial support for 2023. If your company or organization uses Python and would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and thousands of other projects <a href="https://opencollective.com/urllib3">please consider contributing financially</a> to ensure HTTP/2 support is developed sustainably and maintained for the long-haul.</p>
<p>Thank you for your support.</p>
<h2>Changes</h2>
<ul>
<li>Added the <code>Proxy-Authorization</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>.</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/urllib3/urllib3/compare/1.26.18...1.26.19">https://github.com/urllib3/urllib3/compare/1.26.18...1.26.19</a></p>
<p>Note that due to an issue with our release automation, no <code> multiple.intoto.jsonl</code> file is available for this release.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h1>1.26.19 (2024-06-17)</h1>
<ul>
<li>Added the <code>Proxy-Authorization</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>.</li>
<li>Fixed handling of OpenSSL 3.2.0 new error message for misconfiguring an HTTP proxy as HTTPS. (<code>[#3405](urllib3/urllib3#3405) &lt;https://github.com/urllib3/urllib3/issues/3405&gt;</code>__)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/d9d85c88aa644af56d5e129634e750ce76e1a765"><code>d9d85c8</code></a> Release 1.26.19</li>
<li><a href="https://github.com/urllib3/urllib3/commit/8528b63b6fe5cfd7b21942cf988670de68fcd8c0"><code>8528b63</code></a> [1.26] Fix downstream tests (<a href="https://redirect.github.com/urllib3/urllib3/issues/3409">#3409</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/40b6d1605814dd1db0a46e202d6e56f2e4c9a468"><code>40b6d16</code></a> Merge pull request from GHSA-34jh-p97f-mpxf</li>
<li><a href="https://github.com/urllib3/urllib3/commit/29cfd02f66376c61bd20f1725477925106321f68"><code>29cfd02</code></a> Fix handling of OpenSSL 3.2.0 new error message &quot;record layer failure&quot; (<a href="https://redirect.github.com/urllib3/urllib3/issues/3405">#3405</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/b60064388302f54a3455259ddab121618650a154"><code>b600643</code></a> [1.26] Bump RECENT_DATE (<a href="https://redirect.github.com/urllib3/urllib3/issues/3404">#3404</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/7e2d3890926d4788e219f63e2e36fbeb8714827f"><code>7e2d389</code></a> [1.26] Fix running CPython 2.7 tests in CI (<a href="https://redirect.github.com/urllib3/urllib3/issues/3137">#3137</a>)</li>
<li>See full diff in <a href="https://github.com/urllib3/urllib3/compare/1.26.18...1.26.19">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=1.26.18&new-version=1.26.19)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/aws/aws-cdk/network/alerts).

</details>
Add Kinesis data stream as a Pipes target.

It's nontrivial to get the data back out of the Kinesis data stream, but here are screenshots
showing that the data made it through during the integration test.

<img width="656" alt="Screenshot 2024-06-24 at 7 24 10 PM" src="https://github.com/aws/aws-cdk/assets/3310356/bc6e12a2-8fea-42a7-baaa-e8b5b5ea652f">

<img width="649" alt="Screenshot 2024-06-24 at 7 26 35 PM" src="https://github.com/aws/aws-cdk/assets/3310356/5224b0d9-a356-47e6-ab48-3551ff3b5078">
… runtimes (under feature flag) (#31639)

### Issue # (if applicable)

Closes #31610

### Reason for this change

For Node 18+ runtimes, AWS Lambda includes AWS SDK v3 by default, so the CDK excludes all the `@aws-sdk/*` packages from the bundle because they're expected to already be present. However, the CDK currently removes only the `@aws-sdk/*` packages when bundling for Node 18+ runtimes; it does not remove the `@smithy/*` packages. This can cause a version mismatch between the `@smithy/*` packages and the AWS SDK packages that AWS Lambda provides.

The mismatch can happen in the following scenario. This is a fairly rare edge case, but customers did encounter this issue.
```
/user-app/node_modules/
  - /@smithy/* (v123) <-- this gets used because it wasn't deleted
  - /@aws-sdk/*  (v123) <-- CDK removes `@aws-sdk/*` currently
/lambda-hidden-folder/node_modules
  - /@smithy/* (v456)
  - /@aws-sdk/* (v456) <-- this gets used as fallback since the module is removed from node_modules by CDK
```

### Description of changes

Add a feature flag. When the feature flag is set to true, the bundler also removes the `@smithy/*` packages.
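
As a hedged sketch of opting in, the feature flag can be passed as CDK context. The exact flag key isn't quoted in this PR, so the string below is a placeholder and should be checked against the release notes before use.

```ts
// Hedged sketch: the flag key below is a placeholder, not the confirmed name.
import * as path from 'path';
import * as cdk from 'aws-cdk-lib';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';

const app = new cdk.App({
  context: {
    // Placeholder flag name -- confirm the real key in the 2.161.0 release notes.
    '@aws-cdk/aws-lambda-nodejs:sdkV3ExcludeSmithyPackages': true,
  },
});

const stack = new cdk.Stack(app, 'SmithyExcludeStack');

new NodejsFunction(stack, 'Handler', {
  runtime: Runtime.NODEJS_18_X,
  entry: path.join(__dirname, 'handler.ts'), // hypothetical entry file
});
```

With the flag enabled, bundling for Node 18+ runtimes would exclude both `@aws-sdk/*` and `@smithy/*`, leaving the Lambda-provided versions to be used together.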

### Description of how you validated changes

Added unit tests and integration tests.

### Checklist
- [ ] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@aws-cdk-automation aws-cdk-automation added auto-approve pr/no-squash This PR should be merged instead of squash-merging it labels Oct 3, 2024
@aws-cdk-automation aws-cdk-automation requested a review from a team October 3, 2024 23:12
@github-actions github-actions bot added the p2 label Oct 3, 2024
@shikha372 shikha372 added pr/do-not-merge This PR should not be merged at this time. and removed pr/do-not-merge This PR should not be merged at this time. labels Oct 3, 2024
@aws-cdk-automation
Collaborator Author

AWS CodeBuild CI Report

  • CodeBuild project: AutoBuildv2Project1C6BFA3F-wQm2hXv2jqQv
  • Commit ID: 4e27cc3
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

Contributor

mergify bot commented Oct 3, 2024

Thank you for contributing! Your pull request will be automatically updated and merged without squashing (do not update manually, and be sure to allow changes to be pushed to your fork).

@mergify mergify bot merged commit be5ad8b into v2-release Oct 3, 2024
31 of 32 checks passed
@mergify mergify bot deleted the bump/2.161.0 branch October 3, 2024 23:43

github-actions bot commented Oct 3, 2024

Comments on closed issues and PRs are hard for our team to see.
If you need help, please open a new issue that references this one.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 3, 2024