diff --git a/website/docs/cdktf/python/d/appstream_image.html.markdown b/website/docs/cdktf/python/d/appstream_image.html.markdown new file mode 100644 index 00000000000..a6034a8ddd5 --- /dev/null +++ b/website/docs/cdktf/python/d/appstream_image.html.markdown @@ -0,0 +1,88 @@ +--- +subcategory: "AppStream 2.0" +layout: "aws" +page_title: "AWS: aws_appstream_image" +description: |- + Terraform data source for describing an AWS AppStream 2.0 Image. +--- + + + +# Data Source: aws_appstream_image + +Terraform data source for managing an AWS AppStream 2.0 Image. + +### Basic Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. +# +from imports.aws.data_aws_appstream_image import DataAwsAppstreamImage +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + DataAwsAppstreamImage(self, "test", + most_recent=True, + name="AppStream-WinServer2019-06-17-2024", + type="PUBLIC" + ) +``` + +## Argument Reference + +The following arguments are optional: + +* `name` - Name of the image being searched for. Cannot be used with `name_regex` or `arn`. +* `name_regex` - Regular expression matching the name of the image being searched for. Cannot be used with `arn` or `name`. +* `arn` - ARN of the image being searched for. Cannot be used with `name_regex` or `name`. +* `type` - Type of image. Must be one of `PUBLIC`, `PRIVATE`, or `SHARED`. +* `most_recent` - Boolean that, when set to `true`, causes the most recent image to be returned when multiple images match. If it is set to `false` and multiple images match, the data source returns an error. 
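The `most_recent` behavior described above can be sketched in plain Python. This is an illustrative sketch of the documented semantics only, not the provider's implementation; the image dictionaries and field names here are hypothetical:

```python
# Illustrative sketch of the documented `most_recent` semantics -- not the
# provider's actual implementation. Image dicts and field names are made up.
def select_image(images, most_recent):
    """Pick one image from the matches, mirroring the documented behavior."""
    if len(images) == 1:
        return images[0]
    if not most_recent:
        # Multiple matches with most_recent=false: the data source errors.
        raise ValueError("multiple images matched and most_recent is false")
    # With most_recent=true, the newest image (by creation time) wins.
    return max(images, key=lambda image: image["created_time"])

images = [
    {"name": "a", "created_time": "2023-01-01"},
    {"name": "b", "created_time": "2024-06-17"},
]
print(select_image(images, most_recent=True)["name"])  # b
```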
+ +## Attribute Reference + +This data source exports the following attributes in addition to the arguments above: + +* `application` - An application object that contains the following: + * `app_block_arn` - The app block ARN of the application. + * `created_time` - The time at which the application was created within the app block. + * `description` - The description of the application. + * `display_name` - The application name to display. + * `enabled` - Boolean indicating whether the application is enabled. + * `icon_s3_location` - A list named `icon_s3_location` that contains the following: + * `s3_bucket` - S3 bucket of the S3 object. + * `s3_key` - S3 key of the S3 object. + * `icon_url` - URL of the application icon. This URL may be time-limited. + * `instance_families` - List of the instance families of the application. + * `launch_parameters` - Arguments that are passed to the application at its launch. + * `launch_path` - Path to the application's executable in the instance. + * `metadata` - String-to-string map that contains additional attributes used to describe the application. + * `name` - Name of the application. + * `platforms` - Array of strings describing the platforms on which the application can run. Values will be from: WINDOWS | WINDOWS_SERVER_2016 | WINDOWS_SERVER_2019 | WINDOWS_SERVER_2022 | AMAZON_LINUX2 + * `working_directory` - Working directory for the application. +* `appstream_agent_version` - Version of the AppStream 2.0 agent to use for instances that are launched from this image. Has a maximum length of 100 characters. +* `arn` - ARN of the image. +* `base_image_arn` - ARN of the image from which this image was created. +* `created_time` - Time at which this image was created. +* `description` - Description of the image. +* `display_name` - Image name to display. +* `image_builder_name` - The name of the image builder that was used to create the private image. If the image is shared, then this value is null. 
+* `image_builder_supported` - Boolean to indicate whether an image builder can be launched from this image. +* `image_error` - Resource error object that describes the error, containing the following: + * `error_code` - Error code of the image. Values will be from: IAM_SERVICE_ROLE_MISSING_ENI_DESCRIBE_ACTION | IAM_SERVICE_ROLE_MISSING_ENI_CREATE_ACTION | IAM_SERVICE_ROLE_MISSING_ENI_DELETE_ACTION | NETWORK_INTERFACE_LIMIT_EXCEEDED | INTERNAL_SERVICE_ERROR | IAM_SERVICE_ROLE_IS_MISSING | MACHINE_ROLE_IS_MISSING | STS_DISABLED_IN_REGION | SUBNET_HAS_INSUFFICIENT_IP_ADDRESSES | IAM_SERVICE_ROLE_MISSING_DESCRIBE_SUBNET_ACTION | SUBNET_NOT_FOUND | IMAGE_NOT_FOUND | INVALID_SUBNET_CONFIGURATION | SECURITY_GROUPS_NOT_FOUND | IGW_NOT_ATTACHED | IAM_SERVICE_ROLE_MISSING_DESCRIBE_SECURITY_GROUPS_ACTION | FLEET_STOPPED | FLEET_INSTANCE_PROVISIONING_FAILURE | DOMAIN_JOIN_ERROR_FILE_NOT_FOUND | DOMAIN_JOIN_ERROR_ACCESS_DENIED | DOMAIN_JOIN_ERROR_LOGON_FAILURE | DOMAIN_JOIN_ERROR_INVALID_PARAMETER | DOMAIN_JOIN_ERROR_MORE_DATA | DOMAIN_JOIN_ERROR_NO_SUCH_DOMAIN | DOMAIN_JOIN_ERROR_NOT_SUPPORTED | DOMAIN_JOIN_NERR_INVALID_WORKGROUP_NAME | DOMAIN_JOIN_NERR_WORKSTATION_NOT_STARTED | DOMAIN_JOIN_ERROR_DS_MACHINE_ACCOUNT_QUOTA_EXCEEDED | DOMAIN_JOIN_NERR_PASSWORD_EXPIRED | DOMAIN_JOIN_INTERNAL_SERVICE_ERROR. + * `error_message` - Error message of the image. + * `error_timestamp` - Time when the error occurred. +* `image_permissions` - List of image permissions objects containing the following: + * `allow_fleet` - Boolean indicating whether the image can be used for a fleet. + * `allow_image_builder` - Boolean indicating whether the image can be used for an image builder. +* `platform` - Operating system platform of the image. Values will be from: WINDOWS | WINDOWS_SERVER_2016 | WINDOWS_SERVER_2019 | WINDOWS_SERVER_2022 | AMAZON_LINUX2 +* `public_image_released_date` - Release date of the base image if public. 
For private images, it is the release date of the base image that it was created from. +* `state` - Current state of image. Image starts in PENDING state which changes to AVAILABLE if creation passes and FAILED if it fails. Values will be from: PENDING | AVAILABLE | FAILED | COPYING | DELETING | CREATING | IMPORTING. +* `visibility` - Visibility type enum indicating whether the image is PUBLIC, PRIVATE, or SHARED. Valid values include: PUBLIC | PRIVATE | SHARED. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/backup_plan.html.markdown b/website/docs/cdktf/python/d/backup_plan.html.markdown index 20249978101..41592b3448a 100644 --- a/website/docs/cdktf/python/d/backup_plan.html.markdown +++ b/website/docs/cdktf/python/d/backup_plan.html.markdown @@ -43,7 +43,8 @@ This data source exports the following attributes in addition to the arguments a * `arn` - ARN of the backup plan. * `name` - Display name of a backup plan. +* `rule` - Rules of a backup plan. * `tags` - Metadata that you can assign to help organize the plans you create. * `version` - Unique, randomly generated, Unicode, UTF-8 encoded string that serves as the version ID of the backup plan. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ecr_lifecycle_policy_document.html.markdown b/website/docs/cdktf/python/d/ecr_lifecycle_policy_document.html.markdown index 491edb33761..19f031ffbe8 100644 --- a/website/docs/cdktf/python/d/ecr_lifecycle_policy_document.html.markdown +++ b/website/docs/cdktf/python/d/ecr_lifecycle_policy_document.html.markdown @@ -60,14 +60,14 @@ Each document configuration may have one or more `rule` blocks, which each accep * `action` (Optional) - Specifies the action type. * `type` (Required) - The supported value is `expire`. * `description` (Optional) - Describes the purpose of a rule within a lifecycle policy. -* `priority` (Required) - Sets the order in which rules are evaluated, lowest to highest. 
When you add rules to a lifecycle policy, you must give them each a unique value for `priority`. Values do not need to be sequential across rules in a policy. A rule with a `tag_status` value of any must have the highest value for `priority` and be evaluated last. +* `priority` (Required) - Sets the order in which rules are evaluated, lowest to highest. When you add rules to a lifecycle policy, you must give them each a unique value for `priority`. Values do not need to be sequential across rules in a policy. A rule with a `tag_status` value of "any" must have the highest value for `priority` and be evaluated last. * `selection` (Required) - Collects parameters describing the selection criteria for the ECR lifecycle policy: - * `tag_status` (Required) - Determines whether the lifecycle policy rule that you are adding specifies a tag for an image. Acceptable options are tagged, untagged, or any. If you specify any, then all images have the rule applied to them. If you specify tagged, then you must also specify a `tag_prefix_list` value. If you specify untagged, then you must omit `tag_prefix_list`. - * `tag_pattern_list` (Required if `tag_status` is set to tagged and `tag_prefix_list` isn't specified) - You must specify a comma-separated list of image tag patterns that may contain wildcards (*) on which to take action with your lifecycle policy. For example, if your images are tagged as prod, prod1, prod2, and so on, you would use the tag pattern list prod* to specify all of them. If you specify multiple tags, only the images with all specified tags are selected. There is a maximum limit of four wildcards (*) per string. For example, ["*test*1*2*3", "test*1*2*3*"] is valid but ["test*1*2*3*4*5*6"] is invalid. - * `tag_prefix_list` (Required if `tag_status` is set to tagged and `tag_pattern_list` isn't specified) - You must specify a comma-separated list of image tag prefixes on which to take action with your lifecycle policy. 
For example, if your images are tagged as prod, prod1, prod2, and so on, you would use the tag prefix prod to specify all of them. If you specify multiple tags, only images with all specified tags are selected. - * `count_type` (Required) - Specify a count type to apply to the images. If `count_type` is set to imageCountMoreThan, you also specify `count_number` to create a rule that sets a limit on the number of images that exist in your repository. If `count_type` is set to sinceImagePushed, you also specify `count_unit` and `count_number` to specify a time limit on the images that exist in your repository. - * `count_unit` (Required if `count_type` is set to sinceImagePushed) - Specify a count unit of days to indicate that as the unit of time, in addition to `count_number`, which is the number of days. - * `count_number` (Required) - Specify a count number. If the `count_type` used is imageCountMoreThan, then the value is the maximum number of images that you want to retain in your repository. If the `count_type` used is sinceImagePushed, then the value is the maximum age limit for your images. + * `tag_status` (Required) - Determines whether the lifecycle policy rule that you are adding specifies a tag for an image. Acceptable options are "tagged", "untagged", or "any". If you specify "any", then all images have the rule applied to them. If you specify "tagged", then you must also specify a `tag_prefix_list` value. If you specify "untagged", then you must omit `tag_prefix_list`. + * `tag_pattern_list` (Required if `tag_status` is set to "tagged" and `tag_prefix_list` isn't specified) - You must specify a comma-separated list of image tag patterns that may contain wildcards (\*) on which to take action with your lifecycle policy. For example, if your images are tagged as `prod`, `prod1`, `prod2`, and so on, you would use the tag pattern list `["prod*"]` to specify all of them. If you specify multiple tags, only the images with all specified tags are selected. 
There is a maximum limit of four wildcards (\*) per string. For example, `["*test*1*2*3", "test*1*2*3*"]` is valid but `["test*1*2*3*4*5*6"]` is invalid. + * `tag_prefix_list` (Required if `tag_status` is set to "tagged" and `tag_pattern_list` isn't specified) - You must specify a comma-separated list of image tag prefixes on which to take action with your lifecycle policy. For example, if your images are tagged as `prod`, `prod1`, `prod2`, and so on, you would use the tag prefix "prod" to specify all of them. If you specify multiple tags, only images with all specified tags are selected. + * `count_type` (Required) - Specify a count type to apply to the images. If `count_type` is set to "imageCountMoreThan", you also specify `count_number` to create a rule that sets a limit on the number of images that exist in your repository. If `count_type` is set to "sinceImagePushed", you also specify `count_unit` and `count_number` to specify a time limit on the images that exist in your repository. + * `count_unit` (Required if `count_type` is set to "sinceImagePushed") - Specify a count unit of days to indicate that as the unit of time, in addition to `count_number`, which is the number of days. + * `count_number` (Required) - Specify a count number. If the `count_type` used is "imageCountMoreThan", then the value is the maximum number of images that you want to retain in your repository. If the `count_type` used is "sinceImagePushed", then the value is the maximum age limit for your images. ## Attribute Reference @@ -75,4 +75,4 @@ This data source exports the following attributes in addition to the arguments a * `json` - The above arguments serialized as a standard JSON policy document. 
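For reference, the `json` attribute serializes the configured rules into ECR's lifecycle policy JSON document format. A hand-built sketch of the document produced for a single `sinceImagePushed` rule (shown as plain Python rather than provider output; the rule values are illustrative):

```python
import json

# Hand-built equivalent of the JSON policy document that the `json` attribute
# serializes for a single rule, following ECR's lifecycle policy format.
policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images older than 14 days",
            "selection": {
                "tagStatus": "untagged",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 14,
            },
            "action": {"type": "expire"},
        }
    ]
}

print(json.dumps(policy, indent=2))
```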
- \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/fsx_ontap_file_system.html.markdown b/website/docs/cdktf/python/d/fsx_ontap_file_system.html.markdown index f7924c3404f..20c9f3f2100 100644 --- a/website/docs/cdktf/python/d/fsx_ontap_file_system.html.markdown +++ b/website/docs/cdktf/python/d/fsx_ontap_file_system.html.markdown @@ -48,7 +48,9 @@ In addition to all arguments above, the following attributes are exported: * `daily_automatic_backup_start_time` - The preferred time (in `HH:MM` format) to take daily automatic backups, in the UTC time zone. * `deployment_type` - The file system deployment type. * `disk_iops_configuration` - The SSD IOPS configuration for the Amazon FSx for NetApp ONTAP file system, specifying the number of provisioned IOPS and the provision mode. See [Disk IOPS](#disk-iops) Below. -* `dns_name` - DNS name for the file system (e.g. `fs-12345678.corp.example.com`). +* `dns_name` - DNS name for the file system. + + **Note:** This attribute does not apply to FSx for ONTAP file systems and is consequently not set. You can access your FSx for ONTAP file system and volumes via a [Storage Virtual Machine (SVM)](fsx_ontap_storage_virtual_machine.html) using its DNS name or IP address. * `endpoint_ip_address_range` - (Multi-AZ only) Specifies the IP address range in which the endpoints to access your file system exist. * `endpoints` - The Management and Intercluster FileSystemEndpoints that are used to access data or to manage the file system using the NetApp ONTAP CLI, REST API, or NetApp SnapMirror. See [FileSystemEndpoints](#file-system-endpoints) below. * `ha_pairs` - The number of HA pairs for the file system. @@ -82,4 +84,4 @@ In addition to all arguments above, the following attributes are exported: * `DNSName` - The file system's DNS name. You can mount your file system using its DNS name. * `IpAddresses` - IP addresses of the file system endpoint. 
- \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/launch_configuration.html.markdown b/website/docs/cdktf/python/d/launch_configuration.html.markdown index 362c54c2fb9..b0cf10e5408 100644 --- a/website/docs/cdktf/python/d/launch_configuration.html.markdown +++ b/website/docs/cdktf/python/d/launch_configuration.html.markdown @@ -54,6 +54,7 @@ This data source exports the following attributes in addition to the arguments a * `http_put_response_hop_limit` - The desired HTTP PUT response hop limit for instance metadata requests. * `security_groups` - List of associated Security Group IDS. * `associate_public_ip_address` - Whether a Public IP address is associated with the instance. +* `primary_ipv6` - Whether the first IPv6 GUA will be made the primary IPv6 address. * `user_data` - User Data of the instance. * `enable_monitoring` - Whether Detailed Monitoring is Enabled. * `ebs_optimized` - Whether the launched EC2 instance will be EBS-optimized. @@ -89,4 +90,4 @@ This data source exports the following attributes in addition to the arguments a * `device_name` - Name of the device. * `virtual_name` - Virtual Name of the device. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/oam_link.html.markdown b/website/docs/cdktf/python/d/oam_link.html.markdown index b722810db51..fd4ea8a260e 100644 --- a/website/docs/cdktf/python/d/oam_link.html.markdown +++ b/website/docs/cdktf/python/d/oam_link.html.markdown @@ -44,10 +44,11 @@ The following arguments are required: This data source exports the following attributes in addition to the arguments above: * `arn` - ARN of the link. +* `id` - ARN of the link. * `label` - Label that is assigned to this link. * `label_template` - Human-readable name used to identify this source account when you are viewing data from it in the monitoring account. * `link_id` - ID string that AWS generated as part of the link ARN. 
* `resource_types` - Types of data that the source account shares with the monitoring account. * `sink_arn` - ARN of the sink that is used for this link. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/oam_sink.html.markdown b/website/docs/cdktf/python/d/oam_sink.html.markdown index 3b4bdc74de3..9b86f4bc6a6 100644 --- a/website/docs/cdktf/python/d/oam_sink.html.markdown +++ b/website/docs/cdktf/python/d/oam_sink.html.markdown @@ -44,8 +44,9 @@ The following arguments are required: This data source exports the following attributes in addition to the arguments above: * `arn` - ARN of the sink. +* `id` - ARN of the sink. * `name` - Name of the sink. * `sink_id` - Random ID string that AWS generated as part of the sink ARN. * `tags` - Tags assigned to the sink. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/transfer_connector.html.markdown b/website/docs/cdktf/python/d/transfer_connector.html.markdown new file mode 100644 index 00000000000..37112c1100d --- /dev/null +++ b/website/docs/cdktf/python/d/transfer_connector.html.markdown @@ -0,0 +1,67 @@ +--- +subcategory: "Transfer Family" +layout: "aws" +page_title: "AWS: aws_transfer_connector" +description: |- + Terraform data source for managing an AWS Transfer Family Connector. +--- + + + +# Data Source: aws_transfer_connector + +Terraform data source for managing an AWS Transfer Family Connector. + +### Basic Usage + +```python +# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +from constructs import Construct +from cdktf import TerraformStack +# +# Provider bindings are generated by running `cdktf get`. +# See https://cdk.tf/provider-generation for more details. 
+# +from imports.aws.data_aws_transfer_connector import DataAwsTransferConnector +class MyConvertedCode(TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + DataAwsTransferConnector(self, "test", + id="c-xxxxxxxxxxxxxx" + ) +``` + +## Argument Reference + +The following arguments are required: + +* `id` - (Required) Unique identifier for the connector. + +## Attribute Reference + +This data source exports the following attributes in addition to the arguments above: + +* `access_role` - ARN of the AWS Identity and Access Management role. +* `arn` - ARN of the Connector. +* `as2_config` - Structure containing the parameters for an AS2 connector object. Contains the following attributes: + * `basic_auth_secret_id` - Basic authentication for the AS2 connector API. Returns a null value if not set. + * `compression` - Specifies whether the AS2 file is compressed. Will be `ZLIB` or `DISABLED`. + * `encryption_algorithm` - Algorithm used to encrypt the file. Will be `AES128_CBC`, `AES192_CBC`, `AES256_CBC`, `DES_EDE3_CBC`, or `NONE`. + * `local_profile_id` - Unique identifier for the AS2 local profile. + * `mdn_response` - Used for outbound requests to tell whether the response is asynchronous or not. Will be either `SYNC` or `NONE`. + * `mdn_signing_algorithm` - Signing algorithm for the MDN response. Will be `SHA256`, `SHA384`, `SHA512`, `SHA1`, `NONE`, or `DEFAULT`. + * `message_subject` - Subject HTTP header attribute in outbound AS2 messages to the connector. + * `partner_profile_id` - Unique identifier used by the connector for the partner profile. + * `signing_algorithm` - Algorithm used for signing AS2 messages sent with the connector. +* `logging_role` - ARN of the IAM role that allows a connector to turn on CloudWatch logging for Amazon S3 events. +* `security_policy_name` - Name of the security policy. +* `service_managed_egress_ip_addresses` - List of egress IP addresses. 
+* `sftp_config` - Object containing the following attributes: + * `trusted_host_keys` - List of the public portions of the host keys that are used to identify the servers the connector is connected to. + * `user_secret_id` - Identifier for the secret in AWS Secrets Manager that contains the SFTP user's private key and/or password. +* `tags` - Object containing the following attributes: + * `key` - Name of the tag. + * `value` - Value associated with the tag key. +* `url` - URL of the partner's AS2 or SFTP endpoint. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/guides/custom-service-endpoints.html.markdown b/website/docs/cdktf/python/guides/custom-service-endpoints.html.markdown index 09cc96b839b..ac2f54ba115 100644 --- a/website/docs/cdktf/python/guides/custom-service-endpoints.html.markdown +++ b/website/docs/cdktf/python/guides/custom-service-endpoints.html.markdown @@ -158,6 +158,7 @@ class MyConvertedCode(TerraformStack):
  • costoptimizationhub
  • cur (or costandusagereportservice)
  • customerprofiles
  • +
  • databrew (or gluedatabrew)
  • dataexchange
  • datapipeline
  • datasync
  • @@ -431,4 +432,4 @@ class MyConvertedCode(TerraformStack): ) ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/index.html.markdown b/website/docs/cdktf/python/index.html.markdown index b77968bc9d8..9e9485a4108 100644 --- a/website/docs/cdktf/python/index.html.markdown +++ b/website/docs/cdktf/python/index.html.markdown @@ -13,7 +13,7 @@ Use the Amazon Web Services (AWS) provider to interact with the many resources supported by AWS. You must configure the provider with the proper credentials before you can use it. -Use the navigation to the left to read about the available resources. There are currently 1381 resources and 560 data sources available in the provider. +Use the navigation to the left to read about the available resources. There are currently 1387 resources and 560 data sources available in the provider. To learn the basics of Terraform using this provider, follow the hands-on [get started tutorials](https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code?in=terraform/aws-get-started&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS). Interact with AWS services, @@ -811,4 +811,4 @@ Approaches differ per authentication providers: There used to be no better way to get account ID out of the API when using the federated account until `sts:GetCallerIdentity` was introduced. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/amplify_domain_association.html.markdown b/website/docs/cdktf/python/r/amplify_domain_association.html.markdown index 7c43742dfc7..0f8215db59c 100644 --- a/website/docs/cdktf/python/r/amplify_domain_association.html.markdown +++ b/website/docs/cdktf/python/r/amplify_domain_association.html.markdown @@ -62,11 +62,17 @@ class MyConvertedCode(TerraformStack): This resource supports the following arguments: * `app_id` - (Required) Unique ID for an Amplify app. 
+* `certificate_settings` - (Optional) The type of SSL/TLS certificate to use for your custom domain. If you don't specify a certificate type, Amplify uses the default certificate that it provisions and manages for you. * `domain_name` - (Required) Domain name for the domain association. * `enable_auto_sub_domain` - (Optional) Enables the automated creation of subdomains for branches. * `sub_domain` - (Required) Setting for the subdomain. Documented below. * `wait_for_verification` - (Optional) If enabled, the resource will wait for the domain association status to change to `PENDING_DEPLOYMENT` or `AVAILABLE`. Setting this to `false` will skip the process. Default: `true`. +The `certificate_settings` configuration block supports the following arguments: + +* `type` - (Required) The certificate type. Valid values are `AMPLIFY_MANAGED` and `CUSTOM`. +* `custom_certificate_arn` - (Optional) The Amazon Resource Name (ARN) for the custom certificate. + The `sub_domain` configuration block supports the following arguments: * `branch_name` - (Required) Branch name setting for the subdomain. @@ -109,4 +115,4 @@ Using `terraform import`, import Amplify domain association using `app_id` and ` % terraform import aws_amplify_domain_association.app d2ypk4k47z8u6/example.com ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/cloudformation_stack_set_instance.html.markdown b/website/docs/cdktf/python/r/cloudformation_stack_set_instance.html.markdown index 2ba88ae5036..d8116e3313a 100644 --- a/website/docs/cdktf/python/r/cloudformation_stack_set_instance.html.markdown +++ b/website/docs/cdktf/python/r/cloudformation_stack_set_instance.html.markdown @@ -125,7 +125,7 @@ This resource supports the following arguments: * `stack_set_name` - (Required) Name of the StackSet. * `account_id` - (Optional) Target AWS Account ID to create a Stack based on the StackSet. Defaults to current account. 
-* `deployment_targets` - (Optional) The AWS Organizations accounts to which StackSets deploys. StackSets doesn't deploy stack instances to the organization management account, even if the organization management account is in your organization or in an OU in your organization. Drift detection is not possible for this argument. See [deployment_targets](#deployment_targets-argument-reference) below. +* `deployment_targets` - (Optional) AWS Organizations accounts to which StackSets deploys. StackSets doesn't deploy stack instances to the organization management account, even if the organization management account is in your organization or in an OU in your organization. Drift detection is not possible for this argument. See [deployment_targets](#deployment_targets-argument-reference) below. * `parameter_overrides` - (Optional) Key-value map of input parameters to override from the StackSet for this Instance. * `region` - (Optional) Target AWS Region to create a Stack based on the StackSet. Defaults to current region. * `retain_stack` - (Optional) During Terraform resource destroy, remove Instance from StackSet while keeping the Stack and its associated resources. Must be enabled in Terraform state _before_ destroy operation to take effect. You cannot reassociate a retained Stack or add an existing, saved Stack to a new StackSet. Defaults to `false`. @@ -136,25 +136,28 @@ This resource supports the following arguments: The `deployment_targets` configuration block supports the following arguments: -* `organizational_unit_ids` - (Optional) The organization root ID or organizational unit (OU) IDs to which StackSets deploys. +* `organizational_unit_ids` - (Optional) Organization root ID or organizational unit (OU) IDs to which StackSets deploys. +* `account_filter_type` - (Optional) Limit deployment targets to individual accounts or include additional accounts with provided OUs. Valid values: `INTERSECTION`, `DIFFERENCE`, `UNION`, `NONE`. 
+* `accounts` - (Optional) List of accounts to deploy stack set updates. +* `accounts_url` - (Optional) S3 URL of the file containing the list of accounts. ### `operation_preferences` Argument Reference The `operation_preferences` configuration block supports the following arguments: -* `failure_tolerance_count` - (Optional) The number of accounts, per Region, for which this operation can fail before AWS CloudFormation stops the operation in that Region. -* `failure_tolerance_percentage` - (Optional) The percentage of accounts, per Region, for which this stack operation can fail before AWS CloudFormation stops the operation in that Region. -* `max_concurrent_count` - (Optional) The maximum number of accounts in which to perform this operation at one time. -* `max_concurrent_percentage` - (Optional) The maximum percentage of accounts in which to perform this operation at one time. -* `region_concurrency_type` - (Optional) The concurrency type of deploying StackSets operations in Regions, could be in parallel or one Region at a time. Valid values are `SEQUENTIAL` and `PARALLEL`. -* `region_order` - (Optional) The order of the Regions in where you want to perform the stack operation. +* `failure_tolerance_count` - (Optional) Number of accounts, per Region, for which this operation can fail before AWS CloudFormation stops the operation in that Region. +* `failure_tolerance_percentage` - (Optional) Percentage of accounts, per Region, for which this stack operation can fail before AWS CloudFormation stops the operation in that Region. +* `max_concurrent_count` - (Optional) Maximum number of accounts in which to perform this operation at one time. +* `max_concurrent_percentage` - (Optional) Maximum percentage of accounts in which to perform this operation at one time. +* `region_concurrency_type` - (Optional) Concurrency type of deploying StackSets operations in Regions, could be in parallel or one Region at a time. Valid values are `SEQUENTIAL` and `PARALLEL`. 
+* `region_order` - (Optional) Order of the Regions in where you want to perform the stack operation. ## Attribute Reference This resource exports the following attributes in addition to the arguments above: * `id` - Unique identifier for the resource. If `deployment_targets` is set, this is a comma-delimited string combining stack set name, organizational unit IDs (`/`-delimited), and region (ie. `mystack,ou-123/ou-456,us-east-1`). Otherwise, this is a comma-delimited string combining stack set name, AWS account ID, and region (ie. `mystack,123456789012,us-east-1`). -* `organizational_unit_id` - The organization root ID or organizational unit (OU) ID in which the stack is deployed. +* `organizational_unit_id` - Organization root ID or organizational unit (OU) ID in which the stack is deployed. * `stack_id` - Stack identifier. * `stack_instance_summaries` - List of stack instances created from an organizational unit deployment target. This will only be populated when `deployment_targets` is set. See [`stack_instance_summaries`](#stack_instance_summaries-attribute-reference). @@ -243,4 +246,4 @@ Using `terraform import`, import CloudFormation StackSet Instances when acting a % terraform import aws_cloudformation_stack_set_instance.example example,ou-sdas-123123123/ou-sdas-789789789,us-east-1,DELEGATED_ADMIN ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_capacity_reservation.html.markdown b/website/docs/cdktf/python/r/ec2_capacity_reservation.html.markdown index 6b56d1de741..80d63abc83c 100644 --- a/website/docs/cdktf/python/r/ec2_capacity_reservation.html.markdown +++ b/website/docs/cdktf/python/r/ec2_capacity_reservation.html.markdown @@ -61,6 +61,14 @@ This resource exports the following attributes in addition to the arguments abov * `arn` - The ARN of the Capacity Reservation. 
* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `update` - (Default `10m`) +- `delete` - (Default `10m`) + ## Import In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import Capacity Reservations using the `id`. For example: @@ -86,4 +94,4 @@ Using `terraform import`, import Capacity Reservations using the `id`. For examp % terraform import aws_ec2_capacity_reservation.web cr-0123456789abcdef0 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment.html.markdown index 7b98e70f984..050c5d2c2a3 100644 --- a/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment.html.markdown +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment.html.markdown @@ -75,9 +75,16 @@ This resource supports the following arguments: * `peer_account_id` - (Optional) Account ID of EC2 Transit Gateway to peer with. Defaults to the account ID the [AWS provider][1] is currently connected to. * `peer_region` - (Required) Region of EC2 Transit Gateway to peer with. * `peer_transit_gateway_id` - (Required) Identifier of EC2 Transit Gateway to peer with. +* `options` - (Optional) Describes whether dynamic routing is enabled or disabled for the transit gateway peering request. See [options](#options) below for more details. * `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Peering Attachment. 
If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `transit_gateway_id` - (Required) Identifier of EC2 Transit Gateway. +### options + +The `options` block supports the following: + +* `dynamic_routing` - (Optional) Indicates whether dynamic routing is enabled or disabled. Supports `enable` and `disable`. + ## Attribute Reference This resource exports the following attributes in addition to the arguments above: @@ -112,4 +119,4 @@ Using `terraform import`, import `aws_ec2_transit_gateway_peering_attachment` us [1]: /docs/providers/aws/index.html - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/eks_cluster.html.markdown b/website/docs/cdktf/python/r/eks_cluster.html.markdown index 3d751d15369..8b84fa1b4f9 100644 --- a/website/docs/cdktf/python/r/eks_cluster.html.markdown +++ b/website/docs/cdktf/python/r/eks_cluster.html.markdown @@ -290,6 +290,7 @@ The following arguments are required: The following arguments are optional: * `access_config` - (Optional) Configuration block for the access config associated with your cluster, see [Amazon EKS Access Entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html). +* `bootstrap_self_managed_addons` - (Optional) Install default unmanaged add-ons, such as `aws-cni`, `kube-proxy`, and CoreDNS, during cluster creation. If `false`, you must manually install desired add-ons. Changing this value will force a new cluster to be created. Defaults to `true`. * `enabled_cluster_log_types` - (Optional) List of the desired control plane logging to enable. For more information, see [Amazon EKS Control Plane Logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html).
* `encryption_config` - (Optional) Configuration block with encryption configuration for the cluster. Only available on Kubernetes 1.13 and above clusters created after March 6, 2020. Detailed below. * `kubernetes_network_config` - (Optional) Configuration block with kubernetes network configuration for the cluster. Detailed below. If removed, Terraform will only perform drift detection if a configuration value is provided. @@ -427,4 +428,4 @@ Using `terraform import`, import EKS Clusters using the `name`. For example: % terraform import aws_eks_cluster.my_cluster my_cluster ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/fsx_ontap_file_system.html.markdown b/website/docs/cdktf/python/r/fsx_ontap_file_system.html.markdown index 28023de5eea..c5119443d52 100644 --- a/website/docs/cdktf/python/r/fsx_ontap_file_system.html.markdown +++ b/website/docs/cdktf/python/r/fsx_ontap_file_system.html.markdown @@ -90,7 +90,9 @@ This resource supports the following arguments: This resource exports the following attributes in addition to the arguments above: * `arn` - Amazon Resource Name of the file system. -* `dns_name` - DNS name for the file system, e.g., `fs-12345678.fsx.us-west-2.amazonaws.com` +* `dns_name` - DNS name for the file system. + + **Note:** This attribute does not apply to FSx for ONTAP file systems and is consequently not set. You can access your FSx for ONTAP file system and volumes via a [Storage Virtual Machine (SVM)](fsx_ontap_storage_virtual_machine.html) using its DNS name or IP address. * `endpoints` - The endpoints that are used to access data or to manage the file system using the NetApp ONTAP CLI, REST API, or NetApp SnapMirror. See [Endpoints](#endpoints) below. 
* `id` - Identifier of the file system, e.g., `fs-12345678` * `network_interface_ids` - Set of Elastic Network Interface identifiers from which the file system is accessible The first network interface returned is the primary network interface. @@ -168,4 +170,4 @@ class MyConvertedCode(TerraformStack): ) ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/iot_authorizer.html.markdown b/website/docs/cdktf/python/r/iot_authorizer.html.markdown index 49e818fecc1..d97b22b1c21 100644 --- a/website/docs/cdktf/python/r/iot_authorizer.html.markdown +++ b/website/docs/cdktf/python/r/iot_authorizer.html.markdown @@ -31,6 +31,9 @@ class MyConvertedCode(TerraformStack): name="example", signing_disabled=False, status="ACTIVE", + tags={ + "Name": "example" + }, token_key_name="Token-Header", token_signing_public_keys={ "Key1": Token.as_string( @@ -46,6 +49,7 @@ class MyConvertedCode(TerraformStack): * `name` - (Required) The name of the authorizer. * `signing_disabled` - (Optional) Specifies whether AWS IoT validates the token signature in an authorization request. Default: `false`. * `status` - (Optional) The status of Authorizer request at creation. Valid values: `ACTIVE`, `INACTIVE`. Default: `ACTIVE`. +* `tags` - (Optional) Map of tags to assign to this resource. If configured with a provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `token_key_name` - (Optional) The name of the token key used to extract the token from the HTTP headers. This value is required if signing is enabled in your authorizer. * `token_signing_public_keys` - (Optional) The public keys used to verify the digital signature returned by your custom authentication service. This value is required if signing is enabled in your authorizer. 
@@ -54,6 +58,7 @@ class MyConvertedCode(TerraformStack): This resource exports the following attributes in addition to the arguments above: * `arn` - The ARN of the authorizer. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block). ## Import @@ -80,4 +85,4 @@ Using `terraform import`, import IOT Authorizers using the name. For example: % terraform import aws_iot_authorizer.example example ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/iot_topic_rule.html.markdown b/website/docs/cdktf/python/r/iot_topic_rule.html.markdown index 9c05c3a08fd..c1bb93d31ad 100644 --- a/website/docs/cdktf/python/r/iot_topic_rule.html.markdown +++ b/website/docs/cdktf/python/r/iot_topic_rule.html.markdown @@ -108,6 +108,7 @@ The `cloudwatch_alarm` object takes the following arguments: The `cloudwatch_logs` object takes the following arguments: +* `batch_mode` - (Optional) Whether to send the payload, which contains a JSON array of records, to CloudWatch via a batch call. * `log_group_name` - (Required) The CloudWatch log group name. * `role_arn` - (Required) The IAM role ARN that allows access to the CloudWatch alarm. @@ -275,4 +276,4 @@ Using `terraform import`, import IoT Topic Rules using the `name`.
For example: % terraform import aws_iot_topic_rule.rule ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/mwaa_environment.html.markdown b/website/docs/cdktf/python/r/mwaa_environment.html.markdown index 5c1c6189380..21b5e6155c9 100644 --- a/website/docs/cdktf/python/r/mwaa_environment.html.markdown +++ b/website/docs/cdktf/python/r/mwaa_environment.html.markdown @@ -159,6 +159,7 @@ This resource supports the following arguments: * `airflow_configuration_options` - (Optional) The `airflow_configuration_options` parameter specifies airflow override options. Check the [Official documentation](https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html#configuring-env-variables-reference) for all possible configuration options. * `airflow_version` - (Optional) Airflow version of your environment, will be set by default to the latest version that MWAA supports. * `dag_s3_path` - (Required) The relative path to the DAG folder on your Amazon S3 storage bucket. For example, dags. For more information, see [Importing DAGs on Amazon MWAA](https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-dag-import.html). +* `endpoint_management` - (Optional) Defines whether the VPC endpoints configured for the environment are created and managed by the customer or by AWS. If set to `SERVICE`, Amazon MWAA will create and manage the required VPC endpoints in your VPC. If set to `CUSTOMER`, you must create and manage the VPC endpoints for your VPC. Defaults to `SERVICE` if not set. * `environment_class` - (Optional) Environment class for the cluster. Possible options are `mw1.small`, `mw1.medium`, `mw1.large`. Will be set by default to `mw1.small`. Please check the [AWS Pricing](https://aws.amazon.com/de/managed-workflows-for-apache-airflow/pricing/) for more information about the environment classes.
* `execution_role_arn` - (Required) The Amazon Resource Name (ARN) of the task execution role that the Amazon MWAA and its environment can assume. Check the [official AWS documentation](https://docs.aws.amazon.com/mwaa/latest/userguide/mwaa-create-role.html) for the detailed role specification. * `kms_key` - (Optional) The Amazon Resource Name (ARN) of your KMS key that you want to use for encryption. Will be set to the ARN of the managed KMS key `aws/airflow` by default. Please check the [Official Documentation](https://docs.aws.amazon.com/mwaa/latest/userguide/custom-keys-certs.html) for more information. @@ -250,4 +251,4 @@ Using `terraform import`, import MWAA Environment using `Name`. For example: % terraform import aws_mwaa_environment.example MyAirflowEnvironment ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/oam_link.html.markdown b/website/docs/cdktf/python/r/oam_link.html.markdown index 592d8b4d770..4f37d968db5 100644 --- a/website/docs/cdktf/python/r/oam_link.html.markdown +++ b/website/docs/cdktf/python/r/oam_link.html.markdown @@ -55,6 +55,7 @@ The following arguments are optional: This resource exports the following attributes in addition to the arguments above: * `arn` - ARN of the link. +* `id` - ARN of the link. * `label` - Label that is assigned to this link. * `link_id` - ID string that AWS generated as part of the link ARN. * `sink_arn` - ARN of the sink that is used for this link. 
@@ -92,4 +93,4 @@ Using `terraform import`, import CloudWatch Observability Access Manager Link us % terraform import aws_oam_link.example arn:aws:oam:us-west-2:123456789012:link/link-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/oam_sink.html.markdown b/website/docs/cdktf/python/r/oam_sink.html.markdown index df144bc07b3..2ee2f1c021c 100644 --- a/website/docs/cdktf/python/r/oam_sink.html.markdown +++ b/website/docs/cdktf/python/r/oam_sink.html.markdown @@ -51,6 +51,7 @@ The following arguments are optional: This resource exports the following attributes in addition to the arguments above: * `arn` - ARN of the Sink. +* `id` - ARN of the Sink. * `sink_id` - ID string that AWS generated as part of the sink ARN. ## Timeouts @@ -86,4 +87,4 @@ Using `terraform import`, import CloudWatch Observability Access Manager Sink us % terraform import aws_oam_sink.example arn:aws:oam:us-west-2:123456789012:sink/sink-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/oam_sink_policy.html.markdown b/website/docs/cdktf/python/r/oam_sink_policy.html.markdown index 132a53f87d9..c93c22a4396 100644 --- a/website/docs/cdktf/python/r/oam_sink_policy.html.markdown +++ b/website/docs/cdktf/python/r/oam_sink_policy.html.markdown @@ -70,6 +70,7 @@ The following arguments are required: This resource exports the following attributes in addition to the arguments above: * `arn` - ARN of the Sink. +* `id` - ARN of the sink to attach this policy to. * `sink_id` - ID string that AWS generated as part of the sink ARN. 
## Timeouts @@ -104,4 +105,4 @@ Using `terraform import`, import CloudWatch Observability Access Manager Sink Po % terraform import aws_oam_sink_policy.example arn:aws:oam:us-west-2:123456789012:sink/sink-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/appstream_image.html.markdown b/website/docs/cdktf/typescript/d/appstream_image.html.markdown new file mode 100644 index 00000000000..b626fc5c9b3 --- /dev/null +++ b/website/docs/cdktf/typescript/d/appstream_image.html.markdown @@ -0,0 +1,91 @@ +--- +subcategory: "AppStream 2.0" +layout: "aws" +page_title: "AWS: aws_appstream_image" +description: |- + Terraform data source for describing an AWS AppStream 2.0 Appstream Image. +--- + + + +# Data Source: aws_appstream_image + +Terraform data source for managing an AWS AppStream 2.0 Image. + +### Basic Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. + */ +import { DataAwsAppstreamImage } from "./.gen/providers/aws/data-aws-appstream-image"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new DataAwsAppstreamImage(this, "test", { + mostRecent: true, + name: "AppStream-WinServer2019-06-17-2024", + type: "PUBLIC", + }); + } +} + +``` + +## Argument Reference + +The following arguments are optional: + +* `name` - Name of the image being searched for. Cannot be used with name_regex or arn. +* `nameRegex` - Regular expression name of the image being searched for. Cannot be used with arn or name. +* `arn` - Arn of the image being searched for. Cannot be used with name_regex or name. +* `type` - The type of image which must be (PUBLIC, PRIVATE, or SHARED). 
+* `mostRecent` - Boolean. If set to `true` and multiple images match, the most recent image is returned. If set to `false` and multiple images match, the data source returns an error. + +## Attribute Reference + +This data source exports the following attributes in addition to the arguments above: + +* `application` - An application object that contains the following: + * `app_block_arn` - The app block ARN of the application. + * `createdTime` - The time at which the application was created within the app block. + * `description` - The description of the application. + * `displayName` - The application name to display. + * `enabled` - Whether the application is enabled. + * `icon_s3_location` - A list named icon_s3_location that contains the following: + * `s3Bucket` - S3 bucket of the S3 object. + * `s3Key` - S3 key of the S3 object. + * `iconUrl` - URL of the application icon. This URL may be time-limited. + * `instance_families` - List of the instance families of the application. + * `launch_parameters` - Arguments that are passed to the application at its launch. + * `launchPath` - Path to the application's executable in the instance. + * `metadata` - String to string map that contains additional attributes used to describe the application. + * `Name` - Name of the application. + * `platforms` - Array of strings describing the platforms on which the application can run. + Values will be from: WINDOWS | WINDOWS_SERVER_2016 | WINDOWS_SERVER_2019 | WINDOWS_SERVER_2022 | AMAZON_LINUX2 + * `workingDirectory` - Working directory for the application. +* `appstreamAgentVersion` - Version of the AppStream 2.0 agent to use for instances that are launched from this image. Has a maximum length of 100 characters. +* `arn` - ARN of the image. +* `baseImageArn` - ARN of the image from which the image was created. +* `createdTime` - Time at which this image was created. +* `description` - Description of image.
+* `displayName` - Image name to display. +* `imageBuilderName` - The name of the image builder that was used to create the private image. If the image is shared, then the value is null. +* `imageBuilderSupported` - Boolean to indicate whether an image builder can be launched from this image. +* `image error` - Resource error object that describes the error containing the following: + * `errorCode` - Error code of the image. Values will be from: IAM_SERVICE_ROLE_MISSING_ENI_DESCRIBE_ACTION | IAM_SERVICE_ROLE_MISSING_ENI_CREATE_ACTION | IAM_SERVICE_ROLE_MISSING_ENI_DELETE_ACTION | NETWORK_INTERFACE_LIMIT_EXCEEDED | INTERNAL_SERVICE_ERROR | IAM_SERVICE_ROLE_IS_MISSING | MACHINE_ROLE_IS_MISSING | STS_DISABLED_IN_REGION | SUBNET_HAS_INSUFFICIENT_IP_ADDRESSES | IAM_SERVICE_ROLE_MISSING_DESCRIBE_SUBNET_ACTION | SUBNET_NOT_FOUND | IMAGE_NOT_FOUND | INVALID_SUBNET_CONFIGURATION | SECURITY_GROUPS_NOT_FOUND | IGW_NOT_ATTACHED | IAM_SERVICE_ROLE_MISSING_DESCRIBE_SECURITY_GROUPS_ACTION | FLEET_STOPPED | FLEET_INSTANCE_PROVISIONING_FAILURE | DOMAIN_JOIN_ERROR_FILE_NOT_FOUND | DOMAIN_JOIN_ERROR_ACCESS_DENIED | DOMAIN_JOIN_ERROR_LOGON_FAILURE | DOMAIN_JOIN_ERROR_INVALID_PARAMETER | DOMAIN_JOIN_ERROR_MORE_DATA | DOMAIN_JOIN_ERROR_NO_SUCH_DOMAIN | DOMAIN_JOIN_ERROR_NOT_SUPPORTED | DOMAIN_JOIN_NERR_INVALID_WORKGROUP_NAME | DOMAIN_JOIN_NERR_WORKSTATION_NOT_STARTED | DOMAIN_JOIN_ERROR_DS_MACHINE_ACCOUNT_QUOTA_EXCEEDED | DOMAIN_JOIN_NERR_PASSWORD_EXPIRED | DOMAIN_JOIN_INTERNAL_SERVICE_ERROR + * `errorMessage` - Error message of the image. + * `error_timestamp` - Time when the error occurred. +* `imagePermissions` - List of strings describing the image permissions containing the following: + * `allow_fleet` - Boolean indicating if the image can be used for a fleet. + * `allow_image_builder` - Indicates whether the image can be used for an image builder. +* `platform` - Operating system platform of the image.
Values will be from: WINDOWS | WINDOWS_SERVER_2016 | WINDOWS_SERVER_2019 | WINDOWS_SERVER_2022 | AMAZON_LINUX2 +* `public_image_released_date` - Release date of base image if public. For private images, it is the release date of the base image that it was created from. +* `state` - Current state of image. Image starts in PENDING state which changes to AVAILABLE if creation passes and FAILED if it fails. Values will be from: PENDING | AVAILABLE | FAILED | COPYING | DELETING | CREATING | IMPORTING. +* `visibility` - Visibility type enum indicating whether the image is PUBLIC, PRIVATE, or SHARED. Valid values include: PUBLIC | PRIVATE | SHARED. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/backup_plan.html.markdown b/website/docs/cdktf/typescript/d/backup_plan.html.markdown index 20e86c5a1b1..2f1aa3ff2f5 100644 --- a/website/docs/cdktf/typescript/d/backup_plan.html.markdown +++ b/website/docs/cdktf/typescript/d/backup_plan.html.markdown @@ -46,7 +46,8 @@ This data source exports the following attributes in addition to the arguments a * `arn` - ARN of the backup plan. * `name` - Display name of a backup plan. +* `rule` - Rules of a backup plan. * `tags` - Metadata that you can assign to help organize the plans you create. * `version` - Unique, randomly generated, Unicode, UTF-8 encoded string that serves as the version ID of the backup plan. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ecr_lifecycle_policy_document.html.markdown b/website/docs/cdktf/typescript/d/ecr_lifecycle_policy_document.html.markdown index b3f0b1948c4..1323d065140 100644 --- a/website/docs/cdktf/typescript/d/ecr_lifecycle_policy_document.html.markdown +++ b/website/docs/cdktf/typescript/d/ecr_lifecycle_policy_document.html.markdown @@ -69,14 +69,14 @@ Each document configuration may have one or more `rule` blocks, which each accep * `action` (Optional) - Specifies the action type. 
* `type` (Required) - The supported value is `expire`. * `description` (Optional) - Describes the purpose of a rule within a lifecycle policy. -* `priority` (Required) - Sets the order in which rules are evaluated, lowest to highest. When you add rules to a lifecycle policy, you must give them each a unique value for `priority`. Values do not need to be sequential across rules in a policy. A rule with a `tagStatus` value of any must have the highest value for `priority` and be evaluated last. +* `priority` (Required) - Sets the order in which rules are evaluated, lowest to highest. When you add rules to a lifecycle policy, you must give them each a unique value for `priority`. Values do not need to be sequential across rules in a policy. A rule with a `tagStatus` value of "any" must have the highest value for `priority` and be evaluated last. * `selection` (Required) - Collects parameters describing the selection criteria for the ECR lifecycle policy: - * `tagStatus` (Required) - Determines whether the lifecycle policy rule that you are adding specifies a tag for an image. Acceptable options are tagged, untagged, or any. If you specify any, then all images have the rule applied to them. If you specify tagged, then you must also specify a `tagPrefixList` value. If you specify untagged, then you must omit `tagPrefixList`. - * `tagPatternList` (Required if `tagStatus` is set to tagged and `tagPrefixList` isn't specified) - You must specify a comma-separated list of image tag patterns that may contain wildcards (*) on which to take action with your lifecycle policy. For example, if your images are tagged as prod, prod1, prod2, and so on, you would use the tag pattern list prod* to specify all of them. If you specify multiple tags, only the images with all specified tags are selected. There is a maximum limit of four wildcards (*) per string. For example, ["*test*1*2*3", "test*1*2*3*"] is valid but ["test*1*2*3*4*5*6"] is invalid. 
- * `tagPrefixList` (Required if `tagStatus` is set to tagged and `tagPatternList` isn't specified) - You must specify a comma-separated list of image tag prefixes on which to take action with your lifecycle policy. For example, if your images are tagged as prod, prod1, prod2, and so on, you would use the tag prefix prod to specify all of them. If you specify multiple tags, only images with all specified tags are selected. - * `countType` (Required) - Specify a count type to apply to the images. If `countType` is set to imageCountMoreThan, you also specify `countNumber` to create a rule that sets a limit on the number of images that exist in your repository. If `countType` is set to sinceImagePushed, you also specify `countUnit` and `countNumber` to specify a time limit on the images that exist in your repository. - * `countUnit` (Required if `countType` is set to sinceImagePushed) - Specify a count unit of days to indicate that as the unit of time, in addition to `countNumber`, which is the number of days. - * `countNumber` (Required) - Specify a count number. If the `countType` used is imageCountMoreThan, then the value is the maximum number of images that you want to retain in your repository. If the `countType` used is sinceImagePushed, then the value is the maximum age limit for your images. + * `tagStatus` (Required) - Determines whether the lifecycle policy rule that you are adding specifies a tag for an image. Acceptable options are "tagged", "untagged", or "any". If you specify "any", then all images have the rule applied to them. If you specify "tagged", then you must also specify a `tagPrefixList` value. If you specify "untagged", then you must omit `tagPrefixList`. + * `tagPatternList` (Required if `tagStatus` is set to "tagged" and `tagPrefixList` isn't specified) - You must specify a comma-separated list of image tag patterns that may contain wildcards (\*) on which to take action with your lifecycle policy. 
For example, if your images are tagged as `prod`, `prod1`, `prod2`, and so on, you would use the tag pattern list `["prod\*"]` to specify all of them. If you specify multiple tags, only the images with all specified tags are selected. There is a maximum limit of four wildcards (\*) per string. For example, `["*test*1*2*3", "test*1*2*3*"]` is valid but `["test*1*2*3*4*5*6"]` is invalid. + * `tagPrefixList` (Required if `tagStatus` is set to "tagged" and `tagPatternList` isn't specified) - You must specify a comma-separated list of image tag prefixes on which to take action with your lifecycle policy. For example, if your images are tagged as `prod`, `prod1`, `prod2`, and so on, you would use the tag prefix "prod" to specify all of them. If you specify multiple tags, only images with all specified tags are selected. + * `countType` (Required) - Specify a count type to apply to the images. If `countType` is set to "imageCountMoreThan", you also specify `countNumber` to create a rule that sets a limit on the number of images that exist in your repository. If `countType` is set to "sinceImagePushed", you also specify `countUnit` and `countNumber` to specify a time limit on the images that exist in your repository. + * `countUnit` (Required if `countType` is set to "sinceImagePushed") - Specify a count unit of days to indicate that as the unit of time, in addition to `countNumber`, which is the number of days. + * `countNumber` (Required) - Specify a count number. If the `countType` used is "imageCountMoreThan", then the value is the maximum number of images that you want to retain in your repository. If the `countType` used is "sinceImagePushed", then the value is the maximum age limit for your images. ## Attribute Reference @@ -84,4 +84,4 @@ This data source exports the following attributes in addition to the arguments a * `json` - The above arguments serialized as a standard JSON policy document. 
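The rule semantics above can be combined in a CDKTF stack. A minimal sketch, assuming AWS provider bindings generated with `cdktf get` (the exact property shapes, such as whether `selection` and `action` are single objects or one-element lists, depend on your generated bindings):

```typescript
// Sketch only: assumes provider bindings generated via `cdktf get`.
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
import { DataAwsEcrLifecyclePolicyDocument } from "./.gen/providers/aws/data-aws-ecr-lifecycle-policy-document";

class LifecyclePolicyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new DataAwsEcrLifecyclePolicyDocument(this, "example", {
      rule: [
        {
          priority: 1,
          description: "Expire untagged images older than 14 days",
          selection: {
            tagStatus: "untagged",
            countType: "sinceImagePushed",
            countUnit: "days",
            countNumber: 14,
          },
          action: { type: "expire" },
        },
        {
          // A rule with tagStatus "any" must carry the highest priority.
          priority: 100,
          description: "Keep at most 50 images overall",
          selection: {
            tagStatus: "any",
            countType: "imageCountMoreThan",
            countNumber: 50,
          },
          action: { type: "expire" },
        },
      ],
    });
  }
}
```

The resulting `json` attribute can then be passed to an `aws_ecr_lifecycle_policy` resource.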
- \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/fsx_ontap_file_system.html.markdown b/website/docs/cdktf/typescript/d/fsx_ontap_file_system.html.markdown index 9db204fddae..03703c90877 100644 --- a/website/docs/cdktf/typescript/d/fsx_ontap_file_system.html.markdown +++ b/website/docs/cdktf/typescript/d/fsx_ontap_file_system.html.markdown @@ -51,7 +51,9 @@ In addition to all arguments above, the following attributes are exported: * `dailyAutomaticBackupStartTime` - The preferred time (in `HH:MM` format) to take daily automatic backups, in the UTC time zone. * `deploymentType` - The file system deployment type. * `diskIopsConfiguration` - The SSD IOPS configuration for the Amazon FSx for NetApp ONTAP file system, specifying the number of provisioned IOPS and the provision mode. See [Disk IOPS](#disk-iops) Below. -* `dnsName` - DNS name for the file system (e.g. `fs-12345678.corp.example.com`). +* `dnsName` - DNS name for the file system. + + **Note:** This attribute does not apply to FSx for ONTAP file systems and is consequently not set. You can access your FSx for ONTAP file system and volumes via a [Storage Virtual Machine (SVM)](fsx_ontap_storage_virtual_machine.html) using its DNS name or IP address. * `endpointIpAddressRange` - (Multi-AZ only) Specifies the IP address range in which the endpoints to access your file system exist. * `endpoints` - The Management and Intercluster FileSystemEndpoints that are used to access data or to manage the file system using the NetApp ONTAP CLI, REST API, or NetApp SnapMirror. See [FileSystemEndpoints](#file-system-endpoints) below. * `haPairs` - The number of HA pairs for the file system. @@ -85,4 +87,4 @@ In addition to all arguments above, the following attributes are exported: * `DNSName` - The file system's DNS name. You can mount your file system using its DNS name. * `IpAddresses` - IP addresses of the file system endpoint. 
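Because `dnsName` is not set for ONTAP file systems, a lookup typically reads the `endpoints` attribute (or an SVM's DNS name) instead. A minimal sketch, assuming generated provider bindings, a placeholder file system ID, and the usual `get(...)` accessors for computed lists:

```typescript
// Sketch only: assumes provider bindings generated via `cdktf get`;
// the file system ID is a placeholder.
import { Construct } from "constructs";
import { TerraformStack, TerraformOutput } from "cdktf";
import { DataAwsFsxOntapFileSystem } from "./.gen/providers/aws/data-aws-fsx-ontap-file-system";

class OntapEndpointsStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const fs = new DataAwsFsxOntapFileSystem(this, "example", {
      id: "fs-12345678",
    });
    // The usable DNS names live under `endpoints`, not the
    // top-level dnsName attribute.
    new TerraformOutput(this, "management_dns", {
      value: fs.endpoints.get(0).management.get(0).dnsName,
    });
  }
}
```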
- \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/launch_configuration.html.markdown b/website/docs/cdktf/typescript/d/launch_configuration.html.markdown index 47a24a81c2e..09607d53386 100644 --- a/website/docs/cdktf/typescript/d/launch_configuration.html.markdown +++ b/website/docs/cdktf/typescript/d/launch_configuration.html.markdown @@ -57,6 +57,7 @@ This data source exports the following attributes in addition to the arguments a * `httpPutResponseHopLimit` - The desired HTTP PUT response hop limit for instance metadata requests. * `securityGroups` - List of associated Security Group IDS. * `associatePublicIpAddress` - Whether a Public IP address is associated with the instance. +* `primary_ipv6` - Whether the first IPv6 GUA will be made the primary IPv6 address. * `userData` - User Data of the instance. * `enableMonitoring` - Whether Detailed Monitoring is Enabled. * `ebsOptimized` - Whether the launched EC2 instance will be EBS-optimized. @@ -92,4 +93,4 @@ This data source exports the following attributes in addition to the arguments a * `deviceName` - Name of the device. * `virtualName` - Virtual Name of the device. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/oam_link.html.markdown b/website/docs/cdktf/typescript/d/oam_link.html.markdown index 193219ead56..2cd76e82208 100644 --- a/website/docs/cdktf/typescript/d/oam_link.html.markdown +++ b/website/docs/cdktf/typescript/d/oam_link.html.markdown @@ -48,10 +48,11 @@ The following arguments are required: This data source exports the following attributes in addition to the arguments above: * `arn` - ARN of the link. +* `id` - ARN of the link. * `label` - Label that is assigned to this link. * `labelTemplate` - Human-readable name used to identify this source account when you are viewing data from it in the monitoring account. * `linkId` - ID string that AWS generated as part of the link ARN. 
* `resourceTypes` - Types of data that the source account shares with the monitoring account. * `sinkArn` - ARN of the sink that is used for this link. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/oam_sink.html.markdown b/website/docs/cdktf/typescript/d/oam_sink.html.markdown index 6b083e4e4e0..2551c341b58 100644 --- a/website/docs/cdktf/typescript/d/oam_sink.html.markdown +++ b/website/docs/cdktf/typescript/d/oam_sink.html.markdown @@ -48,8 +48,9 @@ The following arguments are required: This data source exports the following attributes in addition to the arguments above: * `arn` - ARN of the sink. +* `id` - ARN of the sink. * `name` - Name of the sink. * `sinkId` - Random ID string that AWS generated as part of the sink ARN. * `tags` - Tags assigned to the sink. - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/transfer_connector.html.markdown b/website/docs/cdktf/typescript/d/transfer_connector.html.markdown new file mode 100644 index 00000000000..7b94943c718 --- /dev/null +++ b/website/docs/cdktf/typescript/d/transfer_connector.html.markdown @@ -0,0 +1,70 @@ +--- +subcategory: "Transfer Family" +layout: "aws" +page_title: "AWS: aws_transfer_connector" +description: |- + Terraform data source for managing an AWS Transfer Family Connector. +--- + + + +# Data Source: aws_transfer_connector + +Terraform data source for managing an AWS Transfer Family Connector. + +### Basic Usage + +```typescript +// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug +import { Construct } from "constructs"; +import { TerraformStack } from "cdktf"; +/* + * Provider bindings are generated by running `cdktf get`. + * See https://cdk.tf/provider-generation for more details. 
+ */ +import { DataAwsTransferConnector } from "./.gen/providers/aws/data-aws-transfer-connector"; +class MyConvertedCode extends TerraformStack { + constructor(scope: Construct, name: string) { + super(scope, name); + new DataAwsTransferConnector(this, "test", { + id: "c-xxxxxxxxxxxxxx", + }); + } +} + +``` + +## Argument Reference + +The following arguments are required: + +* `id` - (Required) Unique identifier for the connector. + +## Attribute Reference + +This data source exports the following attributes in addition to the arguments above: + +* `accessRole` - ARN of the AWS Identity and Access Management role. +* `arn` - ARN of the Connector. +* `as2Config` - Structure containing the parameters for an AS2 connector object. Contains the following attributes: + * `basic_auth_secret_id` - Basic authentication for AS2 connector API. Returns a null value if not set. + * `compression` - Specifies whether the AS2 file is compressed. Will be ZLIB or DISABLED. + * `encryptionAlgorithm` - Algorithm used to encrypt file. Will be AES128_CBC or AES192_CBC or AES256_CBC or DES_EDE3_CBC or NONE. + * `localProfileId` - Unique identifier for AS2 local profile. + * `mdnResponse` - Used for outbound requests to tell if response is asynchronous or not. Will be either SYNC or NONE. + * `mdnSigningAlgorithm` - Signing algorithm for MDN response. Will be SHA256 or SHA384 or SHA512 or SHA1 or NONE or DEFAULT. + * `messageSubject` - Subject HTTP header attribute in outbound AS2 messages to the connector. + * `partnerProfileId` - Unique identifier used by connector for partner profile. + * `signingAlgorithm` - Algorithm used for signing AS2 messages sent with the connector. +* `loggingRole` - ARN of the IAM role that allows a connector to turn on CloudWatch logging for Amazon S3 events. +* `securityPolicyName` - Name of security policy. +* `serviceManagedEgressIpAddresses` - List of egress IP addresses.
+* `sftpConfig` - Object containing the following attributes: + * `trustedHostKeys` - List of the public portions of the host keys that are used to identify the servers the connector is connected to. + * `userSecretId` - Identifier for the secret in AWS Secrets Manager that contains the SFTP user's private key and/or password. +* `tags` - Object containing the following attributes: + * `key` - Name of the tag. + * `value` - Value associated with the tag key. +* `url` - URL of the partner's AS2 or SFTP endpoint. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/custom-service-endpoints.html.markdown b/website/docs/cdktf/typescript/guides/custom-service-endpoints.html.markdown index f235bf7fd88..a3727fc1b0e 100644 --- a/website/docs/cdktf/typescript/guides/custom-service-endpoints.html.markdown +++ b/website/docs/cdktf/typescript/guides/custom-service-endpoints.html.markdown @@ -166,6 +166,7 @@ class MyConvertedCode extends TerraformStack {
  • costoptimizationhub
  • cur (or costandusagereportservice)
  • customerprofiles
  • + databrew (or gluedatabrew)
  • dataexchange
  • datapipeline
  • datasync
  • @@ -447,4 +448,4 @@ class MyConvertedCode extends TerraformStack { ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/index.html.markdown b/website/docs/cdktf/typescript/index.html.markdown index 64ad0a1a65f..98741c2321c 100644 --- a/website/docs/cdktf/typescript/index.html.markdown +++ b/website/docs/cdktf/typescript/index.html.markdown @@ -13,7 +13,7 @@ Use the Amazon Web Services (AWS) provider to interact with the many resources supported by AWS. You must configure the provider with the proper credentials before you can use it. -Use the navigation to the left to read about the available resources. There are currently 1381 resources and 560 data sources available in the provider. +Use the navigation to the left to read about the available resources. There are currently 1387 resources and 560 data sources available in the provider. To learn the basics of Terraform using this provider, follow the hands-on [get started tutorials](https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code?in=terraform/aws-get-started&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS). Interact with AWS services, @@ -853,4 +853,4 @@ Approaches differ per authentication providers: There used to be no better way to get account ID out of the API when using the federated account until `sts:GetCallerIdentity` was introduced. 
- \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/amplify_domain_association.html.markdown b/website/docs/cdktf/typescript/r/amplify_domain_association.html.markdown index 0358cd7d5aa..f0d94693ad0 100644 --- a/website/docs/cdktf/typescript/r/amplify_domain_association.html.markdown +++ b/website/docs/cdktf/typescript/r/amplify_domain_association.html.markdown @@ -72,11 +72,17 @@ class MyConvertedCode extends TerraformStack { This resource supports the following arguments: * `appId` - (Required) Unique ID for an Amplify app. +* `certificateSettings` - (Optional) The type of SSL/TLS certificate to use for your custom domain. If you don't specify a certificate type, Amplify uses the default certificate that it provisions and manages for you. * `domainName` - (Required) Domain name for the domain association. * `enableAutoSubDomain` - (Optional) Enables the automated creation of subdomains for branches. * `subDomain` - (Required) Setting for the subdomain. Documented below. * `waitForVerification` - (Optional) If enabled, the resource will wait for the domain association status to change to `PENDING_DEPLOYMENT` or `AVAILABLE`. Setting this to `false` will skip the process. Default: `true`. +The `certificateSettings` configuration block supports the following arguments: + +* `type` - (Required) The certificate type. Valid values are `AMPLIFY_MANAGED` and `CUSTOM`. +* `customCertificateArn` - (Optional) The Amazon Resource Name (ARN) for the custom certificate. + The `subDomain` configuration block supports the following arguments: * `branchName` - (Required) Branch name setting for the subdomain.
@@ -126,4 +132,4 @@ Using `terraform import`, import Amplify domain association using `appId` and `d % terraform import aws_amplify_domain_association.app d2ypk4k47z8u6/example.com ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/cloudformation_stack_set_instance.html.markdown b/website/docs/cdktf/typescript/r/cloudformation_stack_set_instance.html.markdown index e2cc8a8dd8b..1eec1ff6d2a 100644 --- a/website/docs/cdktf/typescript/r/cloudformation_stack_set_instance.html.markdown +++ b/website/docs/cdktf/typescript/r/cloudformation_stack_set_instance.html.markdown @@ -163,7 +163,7 @@ This resource supports the following arguments: * `stackSetName` - (Required) Name of the StackSet. * `accountId` - (Optional) Target AWS Account ID to create a Stack based on the StackSet. Defaults to current account. -* `deploymentTargets` - (Optional) The AWS Organizations accounts to which StackSets deploys. StackSets doesn't deploy stack instances to the organization management account, even if the organization management account is in your organization or in an OU in your organization. Drift detection is not possible for this argument. See [deployment_targets](#deployment_targets-argument-reference) below. +* `deploymentTargets` - (Optional) AWS Organizations accounts to which StackSets deploys. StackSets doesn't deploy stack instances to the organization management account, even if the organization management account is in your organization or in an OU in your organization. Drift detection is not possible for this argument. See [deployment_targets](#deployment_targets-argument-reference) below. * `parameterOverrides` - (Optional) Key-value map of input parameters to override from the StackSet for this Instance. * `region` - (Optional) Target AWS Region to create a Stack based on the StackSet. Defaults to current region. 
* `retainStack` - (Optional) During Terraform resource destroy, remove Instance from StackSet while keeping the Stack and its associated resources. Must be enabled in Terraform state _before_ destroy operation to take effect. You cannot reassociate a retained Stack or add an existing, saved Stack to a new StackSet. Defaults to `false`. @@ -174,25 +174,28 @@ This resource supports the following arguments: The `deploymentTargets` configuration block supports the following arguments: -* `organizationalUnitIds` - (Optional) The organization root ID or organizational unit (OU) IDs to which StackSets deploys. +* `organizationalUnitIds` - (Optional) Organization root ID or organizational unit (OU) IDs to which StackSets deploys. +* `account_filter_type` - (Optional) Limit deployment targets to individual accounts or include additional accounts with provided OUs. Valid values: `INTERSECTION`, `DIFFERENCE`, `UNION`, `NONE`. +* `accounts` - (Optional) List of accounts to deploy stack set updates. +* `accounts_url` - (Optional) S3 URL of the file containing the list of accounts. ### `operationPreferences` Argument Reference The `operationPreferences` configuration block supports the following arguments: -* `failureToleranceCount` - (Optional) The number of accounts, per Region, for which this operation can fail before AWS CloudFormation stops the operation in that Region. -* `failureTolerancePercentage` - (Optional) The percentage of accounts, per Region, for which this stack operation can fail before AWS CloudFormation stops the operation in that Region. -* `maxConcurrentCount` - (Optional) The maximum number of accounts in which to perform this operation at one time. -* `maxConcurrentPercentage` - (Optional) The maximum percentage of accounts in which to perform this operation at one time. -* `regionConcurrencyType` - (Optional) The concurrency type of deploying StackSets operations in Regions, could be in parallel or one Region at a time. 
Valid values are `SEQUENTIAL` and `PARALLEL`. -* `regionOrder` - (Optional) The order of the Regions in where you want to perform the stack operation. +* `failureToleranceCount` - (Optional) Number of accounts, per Region, for which this operation can fail before AWS CloudFormation stops the operation in that Region. +* `failureTolerancePercentage` - (Optional) Percentage of accounts, per Region, for which this stack operation can fail before AWS CloudFormation stops the operation in that Region. +* `maxConcurrentCount` - (Optional) Maximum number of accounts in which to perform this operation at one time. +* `maxConcurrentPercentage` - (Optional) Maximum percentage of accounts in which to perform this operation at one time. +* `regionConcurrencyType` - (Optional) Concurrency type of deploying StackSets operations in Regions, could be in parallel or one Region at a time. Valid values are `SEQUENTIAL` and `PARALLEL`. +* `regionOrder` - (Optional) Order of the Regions where you want to perform the stack operation. ## Attribute Reference This resource exports the following attributes in addition to the arguments above: * `id` - Unique identifier for the resource. If `deploymentTargets` is set, this is a comma-delimited string combining stack set name, organizational unit IDs (`/`-delimited), and region (ie. `mystack,ou-123/ou-456,us-east-1`). Otherwise, this is a comma-delimited string combining stack set name, AWS account ID, and region (ie. `mystack,123456789012,us-east-1`). -* `organizationalUnitId` - The organization root ID or organizational unit (OU) ID in which the stack is deployed. +* `organizationalUnitId` - Organization root ID or organizational unit (OU) ID in which the stack is deployed. * `stackId` - Stack identifier. * `stackInstanceSummaries` - List of stack instances created from an organizational unit deployment target. This will only be populated when `deploymentTargets` is set.
See [`stackInstanceSummaries`](#stack_instance_summaries-attribute-reference). @@ -302,4 +305,4 @@ Using `terraform import`, import CloudFormation StackSet Instances when acting a % terraform import aws_cloudformation_stack_set_instance.example example,ou-sdas-123123123/ou-sdas-789789789,us-east-1,DELEGATED_ADMIN ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_capacity_reservation.html.markdown b/website/docs/cdktf/typescript/r/ec2_capacity_reservation.html.markdown index 11bb6f8df8b..ca2c5f7e466 100644 --- a/website/docs/cdktf/typescript/r/ec2_capacity_reservation.html.markdown +++ b/website/docs/cdktf/typescript/r/ec2_capacity_reservation.html.markdown @@ -64,6 +64,14 @@ This resource exports the following attributes in addition to the arguments abov * `arn` - The ARN of the Capacity Reservation. * `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `update` - (Default `10m`) +- `delete` - (Default `10m`) + ## Import In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import Capacity Reservations using the `id`. For example: @@ -96,4 +104,4 @@ Using `terraform import`, import Capacity Reservations using the `id`. 
For example: % terraform import aws_ec2_capacity_reservation.web cr-0123456789abcdef0 ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment.html.markdown index 349c5e5e71d..b57a0b9dc76 100644 --- a/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment.html.markdown +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment.html.markdown @@ -78,9 +78,16 @@ This resource supports the following arguments: * `peerAccountId` - (Optional) Account ID of EC2 Transit Gateway to peer with. Defaults to the account ID the [AWS provider][1] is currently connected to. * `peerRegion` - (Required) Region of EC2 Transit Gateway to peer with. * `peerTransitGatewayId` - (Required) Identifier of EC2 Transit Gateway to peer with. +* `options` - (Optional) Describes whether dynamic routing is enabled or disabled for the transit gateway peering request. See [options](#options) below for more details. * `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Peering Attachment. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `transitGatewayId` - (Required) Identifier of EC2 Transit Gateway. +### options + +The `options` block supports the following: + +* `dynamicRouting` - (Optional) Indicates whether dynamic routing is enabled or disabled. Supports `enable` and `disable`.
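As a quick illustration of the `options` block, the sketch below enables dynamic routing on a peering attachment. This is an assumption-laden example, not part of the generated docs: the stack name and both transit gateway IDs are placeholders, and the import path follows the `cdktf get` binding convention used in the other examples on this page.

```typescript
// Sketch only — the transit gateway IDs below are placeholder values.
// Provider bindings are generated by running `cdktf get`.
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
import { Ec2TransitGatewayPeeringAttachment } from "./.gen/providers/aws/ec2-transit-gateway-peering-attachment";

class PeeringWithDynamicRouting extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new Ec2TransitGatewayPeeringAttachment(this, "example", {
      peerRegion: "us-west-2",
      peerTransitGatewayId: "tgw-0123456789abcdef0", // placeholder peer TGW
      transitGatewayId: "tgw-0fedcba9876543210", // placeholder local TGW
      options: {
        dynamicRouting: "enable",
      },
    });
  }
}
```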
+ ## Attribute Reference This resource exports the following attributes in addition to the arguments above: @@ -122,4 +129,4 @@ Using `terraform import`, import `aws_ec2_transit_gateway_peering_attachment` us [1]: /docs/providers/aws/index.html - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/eks_cluster.html.markdown b/website/docs/cdktf/typescript/r/eks_cluster.html.markdown index ed259c882c2..95560c6c201 100644 --- a/website/docs/cdktf/typescript/r/eks_cluster.html.markdown +++ b/website/docs/cdktf/typescript/r/eks_cluster.html.markdown @@ -353,6 +353,7 @@ The following arguments are required: The following arguments are optional: * `accessConfig` - (Optional) Configuration block for the access config associated with your cluster, see [Amazon EKS Access Entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html). +* `bootstrap_self_managed_addons` - (Optional) Install default unmanaged add-ons, such as `aws-cni`, `kube-proxy`, and CoreDNS during cluster creation. If `false`, you must manually install desired add-ons. Changing this value will force a new cluster to be created. Defaults to `true`. * `enabledClusterLogTypes` - (Optional) List of the desired control plane logging to enable. For more information, see [Amazon EKS Control Plane Logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html). * `encryptionConfig` - (Optional) Configuration block with encryption configuration for the cluster. Only available on Kubernetes 1.13 and above clusters created after March 6, 2020. Detailed below. * `kubernetesNetworkConfig` - (Optional) Configuration block with kubernetes network configuration for the cluster. Detailed below. If removed, Terraform will only perform drift detection if a configuration value is provided. @@ -493,4 +494,4 @@ Using `terraform import`, import EKS Clusters using the `name`. 
For example: % terraform import aws_eks_cluster.my_cluster my_cluster ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/fsx_ontap_file_system.html.markdown b/website/docs/cdktf/typescript/r/fsx_ontap_file_system.html.markdown index 3fd89f4c5f3..587540fcb9e 100644 --- a/website/docs/cdktf/typescript/r/fsx_ontap_file_system.html.markdown +++ b/website/docs/cdktf/typescript/r/fsx_ontap_file_system.html.markdown @@ -96,7 +96,9 @@ This resource supports the following arguments: This resource exports the following attributes in addition to the arguments above: * `arn` - Amazon Resource Name of the file system. -* `dnsName` - DNS name for the file system, e.g., `fs-12345678.fsx.us-west-2.amazonaws.com` +* `dnsName` - DNS name for the file system. + + **Note:** This attribute does not apply to FSx for ONTAP file systems and is consequently not set. You can access your FSx for ONTAP file system and volumes via a [Storage Virtual Machine (SVM)](fsx_ontap_storage_virtual_machine.html) using its DNS name or IP address. * `endpoints` - The endpoints that are used to access data or to manage the file system using the NetApp ONTAP CLI, REST API, or NetApp SnapMirror. See [Endpoints](#endpoints) below. * `id` - Identifier of the file system, e.g., `fs-12345678` * `networkInterfaceIds` - Set of Elastic Network Interface identifiers from which the file system is accessible. The first network interface returned is the primary network interface.
@@ -189,4 +191,4 @@ class MyConvertedCode extends TerraformStack { ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/iot_authorizer.html.markdown b/website/docs/cdktf/typescript/r/iot_authorizer.html.markdown index 50d4f7b3525..b2151b3a701 100644 --- a/website/docs/cdktf/typescript/r/iot_authorizer.html.markdown +++ b/website/docs/cdktf/typescript/r/iot_authorizer.html.markdown @@ -31,6 +31,9 @@ class MyConvertedCode extends TerraformStack { name: "example", signingDisabled: false, status: "ACTIVE", + tags: { + Name: "example", + }, tokenKeyName: "Token-Header", tokenSigningPublicKeys: { Key1: Token.asString( @@ -50,6 +53,7 @@ class MyConvertedCode extends TerraformStack { * `name` - (Required) The name of the authorizer. * `signingDisabled` - (Optional) Specifies whether AWS IoT validates the token signature in an authorization request. Default: `false`. * `status` - (Optional) The status of Authorizer request at creation. Valid values: `ACTIVE`, `INACTIVE`. Default: `ACTIVE`. +* `tags` - (Optional) Map of tags to assign to this resource. If configured with a provider [`defaultTags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `tokenKeyName` - (Optional) The name of the token key used to extract the token from the HTTP headers. This value is required if signing is enabled in your authorizer. * `tokenSigningPublicKeys` - (Optional) The public keys used to verify the digital signature returned by your custom authentication service. This value is required if signing is enabled in your authorizer. @@ -58,6 +62,7 @@ class MyConvertedCode extends TerraformStack { This resource exports the following attributes in addition to the arguments above: * `arn` - The ARN of the authorizer. 
+* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block). ## Import @@ -87,4 +92,4 @@ Using `terraform import`, import IoT Authorizers using the name. For example: % terraform import aws_iot_authorizer.example example ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/iot_topic_rule.html.markdown b/website/docs/cdktf/typescript/r/iot_topic_rule.html.markdown index b2ecbd66f0b..d602fadb24f 100644 --- a/website/docs/cdktf/typescript/r/iot_topic_rule.html.markdown +++ b/website/docs/cdktf/typescript/r/iot_topic_rule.html.markdown @@ -115,6 +115,7 @@ The `cloudwatchAlarm` object takes the following arguments: The `cloudwatchLogs` object takes the following arguments: +* `batchMode` - (Optional) The payload, which contains a JSON array of records, will be sent to CloudWatch via a batch call. * `logGroupName` - (Required) The CloudWatch log group name. * `roleArn` - (Required) The IAM role ARN that allows access to the CloudWatch alarm. @@ -285,4 +286,4 @@ Using `terraform import`, import IoT Topic Rules using the `name`. For example: % terraform import aws_iot_topic_rule.rule ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/mwaa_environment.html.markdown b/website/docs/cdktf/typescript/r/mwaa_environment.html.markdown index fae3c144c29..498d09188fa 100644 --- a/website/docs/cdktf/typescript/r/mwaa_environment.html.markdown +++ b/website/docs/cdktf/typescript/r/mwaa_environment.html.markdown @@ -171,6 +171,7 @@ This resource supports the following arguments: * `airflowConfigurationOptions` - (Optional) The `airflowConfigurationOptions` parameter specifies airflow override options.
Check the [Official documentation](https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html#configuring-env-variables-reference) for all possible configuration options. * `airflowVersion` - (Optional) Airflow version of your environment, will be set by default to the latest version that MWAA supports. * `dagS3Path` - (Required) The relative path to the DAG folder on your Amazon S3 storage bucket. For example, dags. For more information, see [Importing DAGs on Amazon MWAA](https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-dag-import.html). +* `endpointManagement` - (Optional) Defines whether the VPC endpoints configured for the environment are created and managed by the customer or by AWS. If set to `SERVICE`, Amazon MWAA will create and manage the required VPC endpoints in your VPC. If set to `CUSTOMER`, you must create and manage the VPC endpoints for your VPC. Defaults to `SERVICE` if not set. * `environmentClass` - (Optional) Environment class for the cluster. Possible options are `mw1.small`, `mw1.medium`, `mw1.large`. Will be set by default to `mw1.small`. Please check the [AWS Pricing](https://aws.amazon.com/de/managed-workflows-for-apache-airflow/pricing/) for more information about the environment classes. * `executionRoleArn` - (Required) The Amazon Resource Name (ARN) of the task execution role that the Amazon MWAA and its environment can assume. Check the [official AWS documentation](https://docs.aws.amazon.com/mwaa/latest/userguide/mwaa-create-role.html) for the detailed role specification. * `kmsKey` - (Optional) The Amazon Resource Name (ARN) of your KMS key that you want to use for encryption. Will be set to the ARN of the managed KMS key `aws/airflow` by default. Please check the [Official Documentation](https://docs.aws.amazon.com/mwaa/latest/userguide/custom-keys-certs.html) for more information. @@ -269,4 +270,4 @@ Using `terraform import`, import MWAA Environment using `Name`. For example:
For example: % terraform import aws_mwaa_environment.example MyAirflowEnvironment ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/oam_link.html.markdown b/website/docs/cdktf/typescript/r/oam_link.html.markdown index 8463066b743..ecce5518ee6 100644 --- a/website/docs/cdktf/typescript/r/oam_link.html.markdown +++ b/website/docs/cdktf/typescript/r/oam_link.html.markdown @@ -58,6 +58,7 @@ The following arguments are optional: This resource exports the following attributes in addition to the arguments above: * `arn` - ARN of the link. +* `id` - ARN of the link. * `label` - Label that is assigned to this link. * `linkId` - ID string that AWS generated as part of the link ARN. * `sinkArn` - ARN of the sink that is used for this link. @@ -102,4 +103,4 @@ Using `terraform import`, import CloudWatch Observability Access Manager Link us % terraform import aws_oam_link.example arn:aws:oam:us-west-2:123456789012:link/link-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/oam_sink.html.markdown b/website/docs/cdktf/typescript/r/oam_sink.html.markdown index d9c6ab0752f..83e197721ac 100644 --- a/website/docs/cdktf/typescript/r/oam_sink.html.markdown +++ b/website/docs/cdktf/typescript/r/oam_sink.html.markdown @@ -54,6 +54,7 @@ The following arguments are optional: This resource exports the following attributes in addition to the arguments above: * `arn` - ARN of the Sink. +* `id` - ARN of the Sink. * `sinkId` - ID string that AWS generated as part of the sink ARN. 
## Timeouts @@ -96,4 +97,4 @@ Using `terraform import`, import CloudWatch Observability Access Manager Sink us % terraform import aws_oam_sink.example arn:aws:oam:us-west-2:123456789012:sink/sink-id ``` - \ No newline at end of file + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/oam_sink_policy.html.markdown b/website/docs/cdktf/typescript/r/oam_sink_policy.html.markdown index 0a156f024b1..62a3f6d2427 100644 --- a/website/docs/cdktf/typescript/r/oam_sink_policy.html.markdown +++ b/website/docs/cdktf/typescript/r/oam_sink_policy.html.markdown @@ -77,6 +77,7 @@ The following arguments are required: This resource exports the following attributes in addition to the arguments above: * `arn` - ARN of the Sink. +* `id` - ARN of the Sink. * `sinkId` - ID string that AWS generated as part of the sink ARN. ## Timeouts @@ -118,4 +119,4 @@ Using `terraform import`, import CloudWatch Observability Access Manager Sink Po % terraform import aws_oam_sink_policy.example arn:aws:oam:us-west-2:123456789012:sink/sink-id ``` - \ No newline at end of file + \ No newline at end of file