From 9b6210c105599a50e63bf1e63017816dad6603a1 Mon Sep 17 00:00:00 2001
From: Russell Cohen

amplify:
- The description of the repository authentication protocol now reads: "This is for internal use. The Amplify service uses this parameter to specify the authentication protocol to use to access the Git repository for an Amplify app." It previously read: "The authentication protocol to use to access the Git repository for an Amplify app," followed by guidance on the value to specify for a GitHub repository.
- OAuth token (create and update app): previously "The OAuth token for a third-party source control system for an Amplify app. The OAuth token is used to create a webhook and a read-only deploy key. The OAuth token is not stored." Now: "The OAuth token for a third-party source control system for an Amplify app. The OAuth token is used to create a webhook and a read-only deploy key using SSH cloning. The OAuth token is not stored. Use the OAuth token for repository providers other than GitHub. To authorize access to GitHub as your repository provider, use the personal access token instead. You must specify either the OAuth token or the personal access token. Existing Amplify apps deployed from a GitHub repository using OAuth continue to work with CI/CD. However, we strongly recommend that you migrate these apps to use the GitHub App. For more information, see Migrating an existing OAuth app to the Amplify GitHub App in the Amplify User Guide."
- Personal access token (create and update app): previously "The personal access token for a third-party source control system for an Amplify app. The personal access token is used to create a webhook and a read-only deploy key. The token is not stored." Now: "The personal access token for a GitHub repository for an Amplify app. The personal access token is used to authorize access to a GitHub repository using the Amplify GitHub App. The token is not stored. You must specify either the personal access token or the OAuth token," followed by the same GitHub App migration note.

appflow:
- New OAuth 2.0 connector metadata descriptions: "The OAuth 2.0 credentials required for OAuth 2.0 authentication"; "The key of the custom parameter required for OAuth 2.0 authentication"; "Indicates whether the custom parameter for OAuth 2.0 authentication is required"; "The label of the custom parameter used for OAuth 2.0 authentication"; "A description about the custom parameter used for OAuth 2.0 authentication"; "Indicates whether this authentication custom parameter is a sensitive field"; "Contains default values for this authentication parameter that are supplied by the connector"; "Indicates whether the custom parameter is used with TokenUrl or AuthUrl"; "Custom parameter required for OAuth 2.0 authentication."
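The Amplify token rules above (the personal access token authorizes GitHub via the Amplify GitHub App, the OAuth token covers other repository providers, and exactly one of the two must be specified) can be sketched as a small validation helper. This is a hypothetical Python helper, not part of any Amplify SDK; the `accessToken` and `oauthToken` field names are assumptions mirroring the Amplify create-app request shape.

```python
def build_amplify_app_params(name, repository, oauth_token=None, access_token=None):
    """Build Amplify create-app-style parameters, enforcing the token rules.

    Per the updated docs: use access_token for GitHub repositories (Amplify
    GitHub App) and oauth_token for other providers; never both, never neither.
    Field names are assumed from the API request shape.
    """
    if (oauth_token is None) == (access_token is None):
        raise ValueError("specify exactly one of oauth_token or access_token")
    params = {"name": name, "repository": repository}
    if access_token is not None:
        # GitHub repositories, authorized through the Amplify GitHub App.
        params["accessToken"] = access_token
    else:
        # Other providers; used to create a webhook and an SSH deploy key.
        params["oauthToken"] = oauth_token
    return params
```

The mutual-exclusion check mirrors the "you must specify either" language in the documentation; neither token is ever stored by the service.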
appflow (continued):
- Additional OAuth 2.0 descriptions: "OAuth 2.0 grant types supported by the connector"; "List of custom parameters required for OAuth 2.0 authentication"; "The OAuth 2.0 grant type used by the connector for OAuth 2.0 authentication"; "Associates your token URL with a map of properties that you define. Use this parameter to provide any additional details that the connector requires to authenticate your request."

appstream:
- USB device filter strings: "The USB device filter strings that specify which USB devices a user can redirect to the fleet streaming session, when using the Windows native client. This is allowed but not required for Elastic fleets." On the fleet resource itself the shorter form is used: "The USB device filter strings associated with the fleet."
- Session scripts: "The S3 location of the session scripts configuration zip file. This only applies to Elastic fleets." (repeated across the create, describe, and update shapes)

athena:
- Canned ACL: "The Amazon S3 canned ACL that Athena should specify when storing query results." Currently only a single canned ACL is supported.
- Query error details: "An integer value that provides specific information about an Athena query error. For the meaning of specific values, see the Error Type Reference in the Amazon Athena User Guide"; "True if the query might succeed if resubmitted"; "Contains a short description of the error that occurred."
- Encryption settings: "Indicates whether Amazon S3 server-side encryption with Amazon S3-managed keys is used. If a query runs in a workgroup and the workgroup overrides client-side settings, then the workgroup's setting for encryption is used. It specifies whether query results must be encrypted, for all queries that run in this workgroup." Also: "If query results are encrypted in Amazon S3, indicates the encryption option used and the key information."
- Expected bucket owner: "The Amazon Web Services account ID that you expect to be the owner of the Amazon S3 bucket specified by ResultConfiguration$OutputLocation. If set, Athena uses the value for its calls to Amazon S3. This is a client-side setting. If workgroup settings override client-side settings, then the query uses the setting specified for the workgroup."
- Removing client-side settings on workgroup update: "If set to \"true\", indicates that the previously specified query results location (also known as a client-side setting) for queries in this workgroup should be ignored and set to null. If set to \"false\" or not set, and a value is present in the update request, the workgroup's setting is updated with the new value." Likewise: "If set to \"true\", removes the Amazon Web Services account ID previously specified for ResultConfiguration$ExpectedBucketOwner. If set to \"false\" or not set, and a value is present in the update request, the workgroup's setting is updated with the new value."
- UpdateNamedQuery: "Updates a NamedQuery object. The database or workgroup cannot be updated." (whitespace-only change)

auditmanager:
- DeleteAssessmentReport: the summary changed from "Deletes an assessment report from an assessment in Audit Manager" to "Deletes an assessment report in Audit Manager," with added detail: when you run the operation, two actions occur. The specified assessment report that's stored in your S3 bucket is deleted, and the associated metadata that's stored in Audit Manager is deleted. If Audit Manager can't access the assessment report in your S3 bucket, the report isn't deleted; this scenario happens when Audit Manager receives an access error from Amazon S3.
- Keyword input: "The method of input for the keyword" is now "The input method for the keyword."
- Keyword value: previously "The value of the keyword that's used to search CloudTrail logs, Config rules, Security Hub checks, and Amazon Web Services API names when mapping a control data source." Now: "The value of the keyword that's used when mapping a control data source. For example, this can be a CloudTrail event name, a rule name for Config, a Security Hub control, or the name of an Amazon Web Services API call."
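The Athena result-configuration behavior above (client-side output location, optional encryption, an expected bucket owner, and workgroup overrides taking precedence) can be sketched as a builder for the client-side settings block. This is an illustrative Python sketch, not SDK code; the `OutputLocation`, `EncryptionConfiguration`, `EncryptionOption`, `KmsKey`, and `ExpectedBucketOwner` key names are assumptions mirroring the Athena ResultConfiguration request shape, and the `SSE_S3`/`SSE_KMS`/`CSE_KMS` option names are assumed from the Athena API.

```python
def result_configuration(output_location, encryption_option="SSE_S3",
                         kms_key=None, expected_bucket_owner=None):
    """Build an Athena ResultConfiguration-style block (client-side settings).

    If the query runs in a workgroup that overrides client-side settings,
    the workgroup's own values are used instead of anything built here.
    """
    enc = {"EncryptionOption": encryption_option}
    if encryption_option in ("SSE_KMS", "CSE_KMS"):
        # KMS-based options also need key information.
        if kms_key is None:
            raise ValueError("KMS-based encryption requires a KMS key ARN")
        enc["KmsKey"] = kms_key
    cfg = {"OutputLocation": output_location, "EncryptionConfiguration": enc}
    if expected_bucket_owner is not None:
        # The account you expect to own the S3 bucket at OutputLocation.
        cfg["ExpectedBucketOwner"] = expected_bucket_owner
    return cfg
```

Omitting `ExpectedBucketOwner` (or the whole block) corresponds to the "not set" case the workgroup-update flags describe.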
auditmanager (continued):
- The keyword value description also explains how to form the value when mapping a data source to a rule in Config. For managed rules, you can use the rule identifier (for example, the managed rule name s3-bucket-acl-prohibited). For custom rules, you form the value from the rule name (for example, the custom rule name my-custom-config-rule). For service-linked rules, you likewise form the value from the rule name (for example, CustomRuleForAccount-conformance-pack-szsm1uv0w, securityhub-api-gw-cache-encrypted-101104e1, or OrgConfigRule-s3-bucket-versioning-enabled-dbgzf8ba).

autoscaling:
- Health check grace period: the short form is now "The duration of the health check grace period, in seconds." The long form reads: "The amount of time, in seconds, that Amazon EC2 Auto Scaling waits before checking the health status of an EC2 instance that has come into service and marking it unhealthy due to a failed Elastic Load Balancing or custom health check. This is useful if your instances do not immediately pass these health checks after they enter service." It is required if you are adding a load balancer health check.
- Desired capacity type: "The unit of measurement for the value specified for desired capacity."
- Default instance warmup: the short form is "The duration of the default instance warmup, in seconds." The long form: "The amount of time, in seconds, until a newly launched instance can contribute to the Amazon CloudWatch metrics. This delay lets an instance finish initializing before Amazon EC2 Auto Scaling aggregates instance metrics, resulting in more reliable usage data. Set this value equal to the amount of time that it takes for resource consumption to become stable after an instance comes into service. To manage your warm-up settings at the group level, we recommend that you set the default instance warmup, even if its value is set to 0 seconds. This also optimizes the performance of scaling policies that scale continuously, such as target tracking and step scaling policies. If you need to remove a value that you previously set, include the property and specify the reset value. Default: None."
- Default cooldown: previously "The amount of time, in seconds, after a scaling activity completes before another scaling activity can start," with a default value. Now: "Only needed if you use simple scaling policies. The amount of time, in seconds, between one scaling activity ending and another one starting due to simple scaling policies. For more information, see Scaling cooldowns for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide."
- Placement group: previously "The name of an existing placement group into which to launch your instances, if any. A placement group is a logical grouping of instances within a single Availability Zone. You cannot specify multiple Availability Zones and a placement group. For more information, see Placement Groups in the Amazon EC2 User Guide for Linux Instances." Now: "The name of an existing placement group into which to launch your instances. For more information, see Placement groups in the Amazon EC2 User Guide for Linux Instances. A cluster placement group is a logical grouping of instances within a single Availability Zone. You cannot specify multiple Availability Zones and a cluster placement group."
- ClassicLink parameters: the descriptions for the ClassicLink-enabled VPC ID and its security group IDs now carry the note "EC2-Classic retires on August 15, 2022. This parameter is not supported after that date," and the links to "Linking EC2-Classic instances to a VPC" in the Amazon EC2 Auto Scaling User Guide were removed, leaving only the ClassicLink topic in the Amazon EC2 User Guide for Linux Instances.
- Customized metric specification: "Represents a CloudWatch metric of your choosing for a target tracking scaling policy to use with Amazon EC2 Auto Scaling. To create your customized metric specification: Add values for each required parameter from CloudWatch. You can use an existing metric, or a new metric that you create. To use your own metric, you must first publish the metric to CloudWatch. For more information, see Publish custom metrics in the Amazon CloudWatch User Guide. Choose a metric that changes proportionally with capacity. The value of the metric should increase or decrease in inverse proportion to the number of capacity units. That is, the value of the metric should decrease when capacity increases. For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts. Each individual service provides information about the metrics, namespace, and dimensions they use. For more information, see Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide."
- Filters: "The name of the filter." The valid values depend on which API operation you're using with the filter: DescribeAutoScalingGroups or DescribeTags.
- Lifecycle hook heartbeat timeout: "The maximum time, in seconds, that an instance can remain in a wait state. The maximum is 172800 seconds (48 hours) or 100 times the heartbeat timeout, whichever is smaller."
- Simple scaling cooldown: previously "The duration of the policy's cooldown period, in seconds. When a cooldown period is specified here, it overrides the default cooldown period defined for the Auto Scaling group." Now: "A cooldown period, in seconds, that applies to a specific simple scaling policy. When a cooldown period is specified here, it overrides the default cooldown. Default: None."
- Estimated instance warmup (scaling policy): "Not needed if the default instance warmup is defined for the group. The estimated time, in seconds, until a newly launched instance can contribute to the CloudWatch metrics. This warm-up period applies to instances launched due to a specific target tracking or step scaling policy. When a warm-up period is specified here, it overrides the default instance warmup. The default is to use the value for the default instance warmup defined for the group." It previously defaulted to the value of the default cooldown period for the Auto Scaling group.
- Instance refresh minimum healthy percentage: previously "The amount of capacity in the Auto Scaling group that must remain healthy during an instance refresh to allow the operation to continue." Now: "The amount of capacity in the Auto Scaling group that must pass your group's health checks to allow the operation to continue. The value is expressed as a percentage of the desired capacity of the Auto Scaling group (rounded up to the nearest integer). Setting the minimum healthy percentage to 100 percent limits the rate of replacement to one instance at a time. In contrast, setting it to 0 percent has the effect of replacing all instances at the same time."
- Instance refresh warmup: previously "The number of seconds until a newly launched instance is configured and ready to use. During this time, Amazon EC2 Auto Scaling does not immediately move on to the next replacement. The default is to use the value for the health check grace period defined for the group." Now: "Not needed if the default instance warmup is defined for the group. The duration of the instance warmup, in seconds. The default is to use the value for the default instance warmup defined for the group."
- The health check grace period, placement group, desired capacity type, and default instance warmup descriptions above are repeated on the corresponding update operation.

batch:
- "The Amazon Resource Name (ARN) of the underlying Amazon ECS cluster used by the compute environment." (unchanged)
- The compute environment type descriptions were updated, the "Compute Environments" linked title is now "Compute environments," and the service role reads: "The service role associated with the compute environment that allows Batch to make calls to Amazon Web Services API operations on your behalf."
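The group-level warm-up guidance above (set the default instance warmup even when it is 0 seconds, alongside the health check grace period) can be sketched as a parameter builder. This is an illustrative Python sketch, not SDK code; the `DefaultInstanceWarmup` and `HealthCheckGracePeriod` key names are assumptions mirroring the Auto Scaling group update request shape, and the helper itself is hypothetical.

```python
def warmup_settings(group_name, default_instance_warmup=0,
                    health_check_grace_period=300):
    """Build group-level warm-up settings for an Auto Scaling group update.

    The docs recommend setting the default instance warmup at the group
    level, even when it is 0 seconds, so that continuously scaling policies
    (target tracking, step scaling) behave predictably.
    """
    if default_instance_warmup < 0 or health_check_grace_period < 0:
        raise ValueError("durations are expressed in non-negative seconds")
    return {
        "AutoScalingGroupName": group_name,
        # Seconds before a new instance contributes to CloudWatch metrics.
        "DefaultInstanceWarmup": default_instance_warmup,
        # Seconds before failed ELB/custom health checks mark it unhealthy.
        "HealthCheckGracePeriod": health_check_grace_period,
    }
```

Explicitly passing 0 for the warmup, rather than omitting the key, is the point of the recommendation: the setting then exists at the group level and policy-level warm-ups can override it.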
For more information, see Batch service IAM role in the\n Batch User Guide. Specifies the infrastructure update policy for the compute environment. For more information about\n infrastructure updates, see Updating compute\n environments in the Batch User Guide. The type of compute environment: If you choose The type of compute environment: If you choose The allocation strategy to use for the compute resource if not enough instances of the best fitting instance\n type can be allocated. This might be because of availability of the instance type in the Region or Amazon EC2 service limits. For more\n information, see Allocation Strategies\n in the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. Batch selects an instance type that best fits the needs of the jobs with a preference for the lowest-cost\n instance type. If additional instances of the selected instance type aren't available, Batch waits for the\n additional instances to be available. If there aren't enough instances available, or if the user is reaching\n Amazon EC2 service limits\n then additional jobs aren't run until the currently running jobs have completed. This allocation strategy keeps\n costs lower but can limit scaling. If you are using Spot Fleets with Batch will select additional instance types that are large enough to meet the requirements of the jobs in\n the queue, with a preference for instance types with a lower cost per unit vCPU. If additional instances of the\n previously selected instance types aren't available, Batch will select new instance types. Batch will select one or more instance types that are large enough to meet the requirements of the jobs in\n the queue, with a preference for instance types that are less likely to be interrupted. This allocation strategy\n is only available for Spot Instance compute resources. 
With both The allocation strategy to use for the compute resource if not enough instances of the best fitting instance\n type can be allocated. This might be because of availability of the instance type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies\n in the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. Batch selects an instance type that best fits the needs of the jobs with a preference for the lowest-cost\n instance type. If additional instances of the selected instance type aren't available, Batch waits for the\n additional instances to be available. If there aren't enough instances available, or if the user is reaching\n Amazon EC2 service limits\n then additional jobs aren't run until the currently running jobs have completed. This allocation strategy keeps\n costs lower but can limit scaling. If you are using Spot Fleets with Batch will select additional instance types that are large enough to meet the requirements of the jobs in\n the queue, with a preference for instance types with a lower cost per unit vCPU. If additional instances of the\n previously selected instance types aren't available, Batch will select new instance types. Batch will select one or more instance types that are large enough to meet the requirements of the jobs in\n the queue, with a preference for instance types that are less likely to be interrupted. This allocation strategy\n is only available for Spot Instance compute resources. With both The minimum number of Amazon EC2 vCPUs that an environment should maintain (even if the compute environment is\n This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. 
The minimum number of Amazon EC2 vCPUs that an environment should maintain (even if the compute environment is\n This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The desired number of Amazon EC2 vCPUS in the compute environment. Batch modifies this value between the minimum\n and maximum values, based on job queue demand. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The desired number of Amazon EC2 vCPUS in the compute environment. Batch modifies this value between the minimum\n and maximum values, based on job queue demand. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The instances types that can be launched. You can specify instance families to launch any instance type within\n those families (for example, This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. When you create a compute environment, the instance types that you select for the compute environment must\n share the same architecture. For example, you can't mix x86 and ARM instances in the same compute\n environment. Currently, The instances types that can be launched. You can specify instance families to launch any instance type within\n those families (for example, This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. When you create a compute environment, the instance types that you select for the compute environment must\n share the same architecture. For example, you can't mix x86 and ARM instances in the same compute\n environment. Currently, The Amazon Machine Image (AMI) ID used for instances launched in the compute environment. This parameter is\n overridden by the This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. 
The AMI that you choose for a compute environment must match the architecture of the instance types that\n you intend to use for that compute environment. For example, if your compute environment uses A1 instance types,\n the compute resource AMI that you choose must support ARM instances. Amazon ECS vends both x86 and ARM versions of the\n Amazon ECS-optimized Amazon Linux 2 AMI. For more information, see Amazon ECS-optimized\n Amazon Linux 2 AMI\n in the Amazon Elastic Container Service Developer Guide. The Amazon Machine Image (AMI) ID used for instances launched in the compute environment. This parameter is\n overridden by the This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The AMI that you choose for a compute environment must match the architecture of the instance types that\n you intend to use for that compute environment. For example, if your compute environment uses A1 instance types,\n the compute resource AMI that you choose must support ARM instances. Amazon ECS vends both x86 and ARM versions of the\n Amazon ECS-optimized Amazon Linux 2 AMI. For more information, see Amazon ECS-optimized\n Amazon Linux 2 AMI\n in the Amazon Elastic Container Service Developer Guide. The VPC subnets where the compute resources are launched. These subnets must be within the same VPC. Fargate\n compute resources can contain up to 16 subnets. For more information, see VPCs and Subnets in the Amazon VPC User\n Guide. The VPC subnets where the compute resources are launched. These subnets must be within the same VPC. Fargate\n compute resources can contain up to 16 subnets. For more information, see VPCs and subnets in the Amazon VPC User\n Guide. The Amazon EC2 key pair that's used for instances launched in the compute environment. You can use this key pair to\n log in to your instances with SSH. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. 
The Amazon EC2 key pair that's used for instances launched in the compute environment. You can use this key pair to\n log in to your instances with SSH. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment. You can specify the short name\n or full Amazon Resource Name (ARN) of an instance profile. For example,\n This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment. You can specify the short name\n or full Amazon Resource Name (ARN) of an instance profile. For example,\n This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. Key-value pair tags to be applied to EC2 resources that are launched in the compute environment. For Batch,\n these take the form of \"String1\": \"String2\", where String1 is the tag key and String2 is the tag value−for\n example, This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. Key-value pair tags to be applied to EC2 resources that are launched in the compute environment. For Batch,\n these take the form of \"String1\": \"String2\", where String1 is the tag key and String2 is the tag value−for\n example, This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The Amazon EC2 placement group to associate with your compute resources. If you intend to submit multi-node parallel\n jobs to your compute environment, you should consider creating a cluster placement group and associate it with your\n compute resources. This keeps your multi-node parallel job on a logical grouping of instances within a single\n Availability Zone with high network flow potential. 
For more information, see Placement Groups in the Amazon EC2 User Guide for\n Linux Instances. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The Amazon EC2 placement group to associate with your compute resources. If you intend to submit multi-node parallel\n jobs to your compute environment, you should consider creating a cluster placement group and associate it with your\n compute resources. This keeps your multi-node parallel job on a logical grouping of instances within a single\n Availability Zone with high network flow potential. For more information, see Placement groups in the Amazon EC2 User Guide for\n Linux Instances. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that\n instance type before instances are launched. For example, if your maximum percentage is 20%, then the Spot price must\n be less than 20% of the current On-Demand price for that Amazon EC2 instance. You always pay the lowest (market) price and\n never more than your maximum percentage. If you leave this field empty, the default value is 100% of the On-Demand\n price. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that\n instance type before instances are launched. For example, if your maximum percentage is 20%, then the Spot price must\n be less than 20% of the current On-Demand price for that Amazon EC2 instance. You always pay the lowest (market) price and\n never more than your maximum percentage. If you leave this field empty, the default value is 100% of the On-Demand\n price. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. 
The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied to a This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. To tag your Spot Instances on creation, the Spot Fleet IAM role specified here must use the newer AmazonEC2SpotFleetTaggingRole managed policy. The previously recommended AmazonEC2SpotFleetRole managed policy doesn't have the required permissions to tag Spot\n Instances. For more information, see Spot Instances not tagged on creation in the\n Batch User Guide. The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied to a This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. To tag your Spot Instances on creation, the Spot Fleet IAM role specified here must use the newer AmazonEC2SpotFleetTaggingRole managed policy. The previously recommended AmazonEC2SpotFleetRole managed policy doesn't have the required permissions to tag Spot\n Instances. For more information, see Spot instances not tagged on creation in the\n Batch User Guide. The launch template to use for your compute resources. Any other compute resource parameters that you specify in\n a CreateComputeEnvironment API operation override the same parameters in the launch template. You\n must specify either the launch template ID or launch template name in the request, but not both. For more\n information, see Launch Template Support in\n the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The launch template to use for your compute resources. Any other compute resource parameters that you specify in\n a CreateComputeEnvironment API operation override the same parameters in the launch template. You\n must specify either the launch template ID or launch template name in the request, but not both. For more\n information, see Launch template support in\n the Batch User Guide. 
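The launch template constraint above (ID or name, but never both) can be sketched as a small helper that builds the `launchTemplate` block of a `computeResources` structure. The template name is a placeholder; the field names follow the Batch API:

```python
def launch_template_spec(template_id=None, template_name=None, version="$Default"):
    """Build a launchTemplate block; the API accepts the ID or the name, not both."""
    if (template_id is None) == (template_name is None):
        raise ValueError("specify exactly one of launchTemplateId or launchTemplateName")
    spec = {"version": version}
    if template_id is not None:
        spec["launchTemplateId"] = template_id
    else:
        spec["launchTemplateName"] = template_name
    return spec

print(launch_template_spec(template_name="my-batch-lt"))
# → {'version': '$Default', 'launchTemplateName': 'my-batch-lt'}
```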
This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. Provides information used to select Amazon Machine Images (AMIs) for EC2 instances in the compute environment. If One or two values can be provided. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. An object representing a Batch compute resource. For more information, see Compute environments in the Batch User Guide. The minimum number of Amazon EC2 vCPUs that an environment should maintain. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. The minimum number of Amazon EC2 vCPUs that an environment should maintain (even if the compute environment is This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. The desired number of Amazon EC2 vCPUs in the compute environment. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. The desired number of Amazon EC2 vCPUs in the compute environment. Batch modifies this value between the minimum and maximum values based on job queue demand. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. The VPC subnets where the compute resources are launched. Fargate compute resources can contain up to 16 subnets. Providing an empty list will be handled as if this parameter wasn't specified and no change is made. This can't be specified for EC2 compute resources.
For more information, see VPCs and Subnets in the Amazon VPC User\n Guide. The VPC subnets where the compute resources are launched. Fargate compute resources can contain up to 16\n subnets. For Fargate compute resources, providing an empty list will be handled as if this parameter wasn't\n specified and no change is made. For EC2 compute resources, providing an empty list removes the VPC subnets from the\n compute resource. For more information, see VPCs and subnets in the Amazon VPC User Guide. When updating a compute environment, changing the VPC subnets requires an infrastructure update of the compute\n environment. For more information, see Updating compute environments in the Batch User Guide. The Amazon EC2 security groups associated with instances launched in the compute environment. This parameter is\n required for Fargate compute resources, where it can contain up to 5 security groups. This can't be specified for\n EC2 compute resources. Providing an empty list is handled as if this parameter wasn't specified and no change is\n made. The Amazon EC2 security groups associated with instances launched in the compute environment. This parameter is\n required for Fargate compute resources, where it can contain up to 5 security groups. For Fargate compute\n resources, providing an empty list is handled as if this parameter wasn't specified and no change is made. For EC2\n compute resources, providing an empty list removes the security groups from the compute resource. When updating a compute environment, changing the EC2 security groups requires an infrastructure update of the\n compute environment. For more information, see Updating compute environments in the\n Batch User Guide. The allocation strategy to use for the compute resource if not enough instances of the best fitting instance\n type can be allocated. This might be because of availability of the instance type in the Region or Amazon EC2 service limits. 
For more information, see Allocation strategies in the Batch User Guide. When updating a compute environment, changing the allocation strategy requires an infrastructure update of the compute environment. For more information, see Updating compute environments in the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. Batch will select additional instance types that are large enough to meet the requirements of the jobs in the queue, with a preference for instance types with a lower cost per unit vCPU. If additional instances of the previously selected instance types aren't available, Batch will select new instance types. Batch will select one or more instance types that are large enough to meet the requirements of the jobs in the queue, with a preference for instance types that are less likely to be interrupted. This allocation strategy is only available for Spot Instance compute resources. With both The instance types that can be launched. You can specify instance families to launch any instance type within those families (for example, When updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see Updating compute environments in the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. When you create a compute environment, the instance types that you select for the compute environment must share the same architecture. For example, you can't mix x86 and ARM instances in the same compute environment. Currently, The Amazon EC2 key pair that's used for instances launched in the compute environment. You can use this key pair to log in to your instances with SSH. To remove the Amazon EC2 key pair, set this value to an empty string.
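The allocation strategies described above map to string constants in `computeResources["allocationStrategy"]`. A small validity check as a sketch; the constant names follow the Batch API, while the helper itself is illustrative:

```python
# Allocation strategies for EC2/Spot compute resources (Batch API constants).
ALLOCATION_STRATEGIES = {
    "BEST_FIT",                 # pick the best-fitting instance type
    "BEST_FIT_PROGRESSIVE",     # also try additional low-cost-per-vCPU types
    "SPOT_CAPACITY_OPTIMIZED",  # prefer types least likely to be interrupted
}

def check_allocation_strategy(strategy: str, provisioning_model: str) -> str:
    """Validate a strategy choice for a compute resource type ("EC2" or "SPOT")."""
    if strategy not in ALLOCATION_STRATEGIES:
        raise ValueError(f"unknown strategy: {strategy}")
    # SPOT_CAPACITY_OPTIMIZED is only available for Spot Instance resources.
    if strategy == "SPOT_CAPACITY_OPTIMIZED" and provisioning_model != "SPOT":
        raise ValueError("SPOT_CAPACITY_OPTIMIZED requires a SPOT compute resource")
    return strategy

check_allocation_strategy("BEST_FIT_PROGRESSIVE", "EC2")
```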
When updating a compute environment, changing the EC2 key pair requires an infrastructure update of the compute environment. For more information, see Updating compute environments in the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment. You can specify the short name or full Amazon Resource Name (ARN) of an instance profile. For example, When updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see Updating compute environments in the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. Key-value pair tags to be applied to EC2 resources that are launched in the compute environment. For Batch, these take the form of "String1": "String2", where String1 is the tag key and String2 is the tag value, for example, When updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see Updating compute environments in the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified. The Amazon EC2 placement group to associate with your compute resources. If you intend to submit multi-node parallel jobs to your compute environment, you should consider creating a cluster placement group and associating it with your compute resources. This keeps your multi-node parallel job on a logical grouping of instances within a single Availability Zone with high network flow potential. For more information, see Placement groups in the Amazon EC2 User Guide for Linux Instances. When updating a compute environment, changing the placement group requires an infrastructure update of the compute environment.
For more information, see Updating compute environments in the\n Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that\n instance type before instances are launched. For example, if your maximum percentage is 20%, then the Spot price must\n be less than 20% of the current On-Demand price for that Amazon EC2 instance. You always pay the lowest (market) price and\n never more than your maximum percentage. When updating a compute environment, changing the bid percentage requires an infrastructure update of the\n compute environment. For more information, see Updating compute environments in the\n Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The updated launch template to use for your compute resources. You must specify either the launch template ID or\n launch template name in the request, but not both. For more information, see Launch template support in the Batch User Guide.\n To remove the custom launch template and use the default launch template, set When updating a compute environment, changing the launch template requires an infrastructure update of the\n compute environment. For more information, see Updating compute environments in the\n Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. Provides information used to select Amazon Machine Images (AMIs) for EC2 instances in the compute environment.\n If When updating a compute environment, changing this setting requires an infrastructure update of the compute\n environment. For more information, see Updating compute environments in the Batch User Guide. To remove the EC2 configuration\n and any custom AMI ID specified in One or two values can be provided. 
This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. Specifies whether the AMI ID is updated to the latest one that's supported by Batch when the compute\n environment has an infrastructure update. The default value is If an AMI ID is specified in the When updating a compute environment, changing this setting requires an infrastructure update of the compute\n environment. For more information, see Updating compute environments in the Batch User Guide. The type of compute environment: If you choose When updating a compute environment, changing the type of a compute environment requires an infrastructure\n update of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide. The Amazon Machine Image (AMI) ID used for instances launched in the compute environment. This parameter is\n overridden by the When updating a compute environment, changing the AMI ID requires an infrastructure update of the compute\n environment. For more information, see Updating compute environments in the Batch User Guide. This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be\n specified. The AMI that you choose for a compute environment must match the architecture of the instance types that\n you intend to use for that compute environment. For example, if your compute environment uses A1 instance types,\n the compute resource AMI that you choose must support ARM instances. Amazon ECS vends both x86 and ARM versions of the\n Amazon ECS-optimized Amazon Linux 2 AMI. For more information, see Amazon ECS-optimized\n Amazon Linux 2 AMI\n in the Amazon Elastic Container Service Developer Guide. An object representing the attributes of a compute environment that can be updated. For more information, see\n Compute Environments in the\n Batch User Guide. An object representing the attributes of a compute environment that can be updated. 
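The AMI-refresh behavior described above can be sketched as an `UpdateComputeEnvironment` payload. Field names follow the Batch API; the environment name is a placeholder, and this is a minimal sketch rather than a complete update request:

```python
# Sketch of an UpdateComputeEnvironment payload that asks Batch to re-select
# the latest supported AMI at the next infrastructure update.
update_request = {
    "computeEnvironment": "MyCE",  # placeholder name
    "computeResources": {
        # When true, the AMI ID is refreshed on infrastructure updates.
        # An AMI pinned explicitly via imageId or ec2Configuration is not
        # replaced by this flag.
        "updateToLatestImageVersion": True,
    },
}
# With boto3: boto3.client("batch").update_compute_environment(**update_request)
```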
For more information, see\n Updating compute environments\n in the Batch User Guide. The Amazon Resource Name (ARN) of the\n execution\n role that Batch can assume. For more information, see Batch execution IAM role in the\n Batch User Guide. The Amazon Resource Name (ARN) of the execution role that Batch can assume. For more information, see Batch execution IAM role in the\n Batch User Guide. The log configuration specification for the container. This parameter maps to Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers might be available in future releases of the Amazon ECS\n container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your\n container instance. To check the Docker Remote API version on your container instance, log into your\n container instance and run the following command: The Amazon ECS container agent running on a container instance must register the logging drivers available on that\n instance with the The log configuration specification for the container. This parameter maps to Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers might be available in future releases of the Amazon ECS\n container agent. This parameter requires version 1.18 of the Docker Remote API or greater on your\n container instance. 
To check the Docker Remote API version on your container instance, log into your\n container instance and run the following command: The Amazon ECS container agent running on a container instance must register the logging drivers available on that\n instance with the This parameter is deprecated, use This parameter is deprecated, use This parameter is deprecated, use This parameter is deprecated, use The Amazon Resource Name (ARN) of the IAM role that the container can assume for Amazon Web Services permissions. For more information, see\n IAM Roles for Tasks\n in the Amazon Elastic Container Service Developer Guide. The Amazon Resource Name (ARN) of the IAM role that the container can assume for Amazon Web Services permissions. For more information, see\n IAM roles for tasks\n in the Amazon Elastic Container Service Developer Guide. The log configuration specification for the container. This parameter maps to Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). This parameter requires version 1.18 of the Docker Remote API or greater on your\n container instance. To check the Docker Remote API version on your container instance, log into your\n container instance and run the following command: The Amazon ECS container agent running on a container instance must register the logging drivers available on that\n instance with the The log configuration specification for the container. This parameter maps to Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). This parameter requires version 1.18 of the Docker Remote API or greater on your\n container instance. 
To check the Docker Remote API version on your container instance, log into your\n container instance and run the following command: The Amazon ECS container agent running on a container instance must register the logging drivers available on that\n instance with the The set of compute environments mapped to a job queue and their order relative to each other. The job scheduler\n uses this parameter to determine which compute environment should run a specific job. Compute environments must be in\n the All compute environments that are associated with a job queue must share the same architecture. Batch doesn't\n support mixing compute environment architecture types in a single job queue. The set of compute environments mapped to a job queue and their order relative to each other. The job scheduler\n uses this parameter to determine which compute environment runs a specific job. Compute environments must be in\n the All compute environments that are associated with a job queue must share the same architecture. Batch doesn't\n support mixing compute environment architecture types in a single job queue. The tags that you apply to the scheduling policy to help you categorize and organize your resources. Each tag\n consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in Amazon Web Services General\n Reference. These tags can be updated or removed using the TagResource and UntagResource API operations. Contains the parameters for Contains the parameters for Describes one or more of your compute environments. If you're using an unmanaged compute environment, you can use the Describes one or more of your compute environments. If you're using an unmanaged compute environment, you can use the Contains the parameters for The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the\n The Amazon EFS access point ID to use. 
If an access point is specified, the root directory value specified in the Whether or not to use the Batch job IAM role defined in a job definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. The value must be between 0 and 65,535. For more information, see EFS mount helper in the Amazon Elastic File System User Guide. The image type to match with the instance type to select an AMI. If the Amazon Linux 2: Default for all non-GPU instance families. Amazon Linux 2 (GPU): Default for all GPU instance families (for example Amazon Linux. Amazon Linux is reaching the end-of-life of standard support. For more information, see Amazon Linux AMI. The AMI ID used for instances launched in the compute environment that match the image type.
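The EFS fields above combine into one volume entry in a job definition's `containerProperties`. A minimal sketch, assuming the Batch API field names; the file system, access point, and volume IDs are placeholders:

```python
# Sketch of an EFS volume entry for a Batch job definition (containerProperties
# -> volumes). All IDs are placeholders.
efs_volume = {
    "name": "shared-efs",
    "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",
        "rootDirectory": "/",            # overridden when an access point is set
        "transitEncryption": "ENABLED",  # must be enabled when IAM auth is used
        "transitEncryptionPort": 2999,   # optional, 0-65535; otherwise the EFS
                                         # mount helper picks the port
        "authorizationConfig": {
            "accessPointId": "fsap-0abc12345678",  # placeholder access point
            "iam": "ENABLED",  # mount using the Batch job IAM role
        },
    },
}
```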
This setting\n overrides the The AMI ID used for instances launched in the compute environment that match the image type. This setting\n overrides the The AMI that you choose for a compute environment must match the architecture of the instance types that\n you intend to use for that compute environment. For example, if your compute environment uses A1 instance types,\n the compute resource AMI that you choose must support ARM instances. Amazon ECS vends both x86 and ARM versions of the\n Amazon ECS-optimized Amazon Linux 2 AMI. For more information, see Amazon ECS-optimized\n Amazon Linux 2 AMI\n in the Amazon Elastic Container Service Developer Guide. Default parameters or parameter substitution placeholders that are set in the job definition. Parameters are\n specified as a key-value pair mapping. Parameters in a Default parameters or parameter substitution placeholders that are set in the job definition. Parameters are\n specified as a key-value pair mapping. Parameters in a The current status for the job. If your jobs don't progress to The current status for the job. If your jobs don't progress to The job definition that's used by this job. The Amazon Resource Name (ARN) of the job definition that's used by this job. The version number of the launch template, If the value is After the compute environment is created, the launch template version that's used isn't changed, even if the\n Default: The version number of the launch template, If the value is If the AMI ID that's used in a compute environment is from the launch template, the AMI isn't changed when the\n compute environment is updated. It's only changed if the Default: This allows you to tune a container's memory swappiness behavior. A Consider the following when you use a per-container swap configuration. Swap space must be enabled and allocated on the container instance for the containers to use. The Amazon ECS optimized AMIs don't have swap enabled by default. 
You must enable swap on the instance to use this\n feature. For more information, see Instance Store Swap Volumes in the\n Amazon EC2 User Guide for Linux Instances or How do I allocate memory to work as swap space in an\n Amazon EC2 instance by using a swap file?\n The swap space parameters are only supported for job definitions using EC2 resources. If the This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be\n provided. This allows you to tune a container's memory swappiness behavior. A Consider the following when you use a per-container swap configuration. Swap space must be enabled and allocated on the container instance for the containers to use. The Amazon ECS optimized AMIs don't have swap enabled by default. You must enable swap on the instance to use this\n feature. For more information, see Instance store swap volumes in the\n Amazon EC2 User Guide for Linux Instances or How do I allocate memory to work as swap space in an\n Amazon EC2 instance by using a swap file?\n The swap space parameters are only supported for job definitions using EC2 resources. If the This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be\n provided. The This token should be treated as an opaque identifier that's only used to\n retrieve the next items in a list and not for other programmatic purposes. Contains the parameters for Contains the parameters for The log driver to use for the container. The valid values listed for this parameter are log drivers that the\n Amazon ECS container agent can communicate with by default. The supported log drivers are Jobs that are running on Fargate resources are restricted to the Specifies the Amazon CloudWatch Logs logging driver. For more information, see Using the awslogs Log Driver in the\n Batch User Guide and Amazon CloudWatch Logs logging driver in the Docker documentation. Specifies the Fluentd logging driver. 
For more information, including usage and options, see Fluentd logging driver in the Docker\n documentation. Specifies the Graylog Extended Format (GELF) logging driver. For more information, including usage and\n options, see Graylog Extended Format logging\n driver in the Docker documentation. Specifies the journald logging driver. For more information, including usage and options, see Journald logging driver in the Docker\n documentation. Specifies the JSON file logging driver. For more information, including usage and options, see JSON File logging driver in the Docker\n documentation. Specifies the Splunk logging driver. For more information, including usage and options, see Splunk logging driver in the Docker\n documentation. Specifies the syslog logging driver. For more information, including usage and options, see Syslog logging driver in the Docker\n documentation. If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you\n can fork the Amazon ECS container agent project that's available on\n GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that\n you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this\n software. This parameter requires version 1.18 of the Docker Remote API or greater on your\n container instance. To check the Docker Remote API version on your container instance, log into your\n container instance and run the following command: The log driver to use for the container. The valid values listed for this parameter are log drivers that the\n Amazon ECS container agent can communicate with by default. The supported log drivers are Jobs that are running on Fargate resources are restricted to the Specifies the Amazon CloudWatch Logs logging driver. 
For more information, see Using the awslogs log driver in the\n Batch User Guide and Amazon CloudWatch Logs logging driver in the Docker documentation. Specifies the Fluentd logging driver. For more information, including usage and options, see Fluentd logging driver in the Docker\n documentation. Specifies the Graylog Extended Format (GELF) logging driver. For more information, including usage and\n options, see Graylog Extended Format logging\n driver in the Docker documentation. Specifies the journald logging driver. For more information, including usage and options, see Journald logging driver in the Docker\n documentation. Specifies the JSON file logging driver. For more information, including usage and options, see JSON File logging driver in the Docker\n documentation. Specifies the Splunk logging driver. For more information, including usage and options, see Splunk logging driver in the Docker\n documentation. Specifies the syslog logging driver. For more information, including usage and options, see Syslog logging driver in the Docker\n documentation. If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you\n can fork the Amazon ECS container agent project that's available on\n GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that\n you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this\n software. This parameter requires version 1.18 of the Docker Remote API or greater on your\n container instance. To check the Docker Remote API version on your container instance, log into your\n container instance and run the following command: The secrets to pass to the log configuration. For more information, see Specifying Sensitive Data in the\n Batch User Guide. The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the\n Batch User Guide. 
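The log driver list and `secretOptions` described above come together in a job definition's `containerProperties.logConfiguration` block. A minimal sketch; the log group, option names, and secret ARN are placeholders, and the Fargate-supported set reflects the restriction the text mentions as I understand it from the Batch documentation:

```python
# Sketch of a containerProperties.logConfiguration block using the awslogs
# driver plus a secretOptions entry. All names and ARNs are placeholders.
log_configuration = {
    "logDriver": "awslogs",  # supported drivers: awslogs, fluentd, gelf,
                             # journald, json-file, splunk, syslog
    "options": {
        "awslogs-group": "/batch/my-job",
        "awslogs-stream-prefix": "attempt",
    },
    "secretOptions": [
        # Resolve a sensitive driver option from Secrets Manager at runtime
        # instead of embedding it in the job definition.
        {"name": "LOG_API_KEY",
         "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:log-key"},
    ],
}

VALID_DRIVERS = {"awslogs", "fluentd", "gelf", "journald",
                 "json-file", "splunk", "syslog"}
```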
The quantity of the specified resource to reserve for the container. The values vary based on the The number of physical GPUs to reserve for the container. The number of GPUs reserved for all containers in a job shouldn't exceed the number of available GPUs on the compute resource that the job is launched on. GPUs are not available for jobs that are running on Fargate resources. The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Memory management in the Batch User Guide. For jobs that are running on Fargate resources, then The number of vCPUs reserved for the container.
This parameter maps to

For jobs that are running on Fargate resources, then

The tags that you apply to the scheduling policy to categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services resources in Amazon Web Services General Reference.

A list of container overrides in the JSON format that specify the name of a container in the specified job definition and the overrides it receives. You can override the default command for a container, which is specified in the job definition or the Docker image, with a

Contains the parameters for

The maximum number of vCPUs expected to be used for an unmanaged compute environment. Do not specify this parameter for a managed compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter is not provided for a fair share job queue, no vCPU capacity will be reserved.
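The GPU, memory, and vCPU reservations described above are expressed as a list of type/value pairs on the container. A minimal sketch of assembling that list, assuming the Batch convention that each entry is a dict with string-typed values; the helper name and the specific numbers are illustrative:

```python
# Sketch of a Batch-style resourceRequirements list. Values are passed
# as strings per the API convention; the numbers below are examples.
def make_resource_requirements(vcpus, memory_mib, gpus=0):
    reqs = [
        {"type": "VCPU", "value": str(vcpus)},
        {"type": "MEMORY", "value": str(memory_mib)},
    ]
    if gpus:
        # GPUs are not available for jobs running on Fargate resources.
        reqs.append({"type": "GPU", "value": str(gpus)})
    return reqs

print(make_resource_requirements(4, 8192, gpus=1))
```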
The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see Batch service IAM role in the Batch User Guide.

If the compute environment has a service-linked role, it can't be changed to use a regular IAM role. Likewise, if the compute environment has a regular IAM role, it can't be changed to use a service-linked role. To update the parameters for the compute environment that require an infrastructure update to change, the AWSServiceRoleForBatch service-linked role must be used. For more information, see Updating compute environments in the Batch User Guide.

If your specified role has a path other than

Depending on how you created your Batch service role, its ARN might contain the

Specifies the updated infrastructure update policy for the compute environment. For more information about infrastructure updates, see Updating compute environments in the Batch User Guide.

The priority of the job queue. Job queues with a higher priority (or a higher integer value for the

Details the set of compute environments mapped to a job queue and their order relative to each other.
This is one of the parameters used by the job scheduler to determine which compute environment runs a given job. Compute environments must be in the

All compute environments that are associated with a job queue must share the same architecture. Batch doesn't support mixing compute environment architecture types in a single job queue.

Specifies whether jobs are automatically terminated when the compute environment infrastructure is updated. The default value is

Specifies the job timeout, in minutes, when the compute environment infrastructure is updated. The default value is 30.

Specifies the infrastructure update policy for the compute environment. For more information about infrastructure updates, see Infrastructure updates in the Batch User Guide.

The fair share policy.

Contains the parameters for

Defines the Amazon Braket job to be created. Specifies the container image the job uses and the paths to the Python scripts used for entry and training.

The Amazon Braket API Reference provides information about the operations and structures supported in Amazon Braket.

Additional Resources:

Cancels an Amazon Braket job.

Cancels the specified task.

Creates an Amazon Braket job.

Definition of the Amazon Braket job to be created.
Specifies the container image the job uses and information about the Python scripts used for entry and training.

The path to the S3 location where you want to store job artifacts and the encryption key used to store them.

Algorithm-specific parameters used by an Amazon Braket job that influence the quality of the training job. The values are set with a string of JSON key:value pairs, where the key is the name of the hyperparameter and the value is the value of the hyperparameter.

The quantum processing unit (QPU) or simulator used to create an Amazon Braket job.

A tag object that consists of a key and an optional value, used to manage metadata for Amazon Braket resources.

Creates a quantum task.

The primary quantum processing unit (QPU) or simulator used to create and run an Amazon Braket job.

Configures the quantum processing units (QPUs) or simulator used to create and run an Amazon Braket job.

Retrieves the devices available in Amazon Braket.
For backwards compatibility with older versions of BraketSchemas, OpenQASM information is omitted from GetDevice API calls. To get this information, the user agent needs to present a recent version of the BraketSchemas (1.8.0 or later). The Braket SDK automatically reports this for you. If you do not see OpenQASM results in the GetDevice response when using a Braket SDK, you may need to set the AWS_EXECUTION_ENV environment variable to configure the user agent. See the code examples provided below for how to do this for the AWS CLI, Boto3, and the Go, Java, and JavaScript/TypeScript SDKs.

Retrieves the specified Amazon Braket job.

Algorithm-specific parameters used by an Amazon Braket job that influence the quality of the training job. The values are set with a string of JSON key:value pairs, where the key is the name of the hyperparameter and the value is the value of the hyperparameter.

The path to the S3 location where job artifacts are stored and the encryption key used to store them there.

Definition of the Amazon Braket job created. Specifies the container image the job uses, information about the Python scripts used for entry and training, and the user-defined metrics used to evaluate the job.

The resource instances to use while running the hybrid job on Amazon Braket.
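Because hyperparameters are passed as a map of string keys to string values, numeric settings have to be serialized before submission. A small sketch of building such a map; the hyperparameter names and values here are made up for illustration and are not part of any Braket algorithm:

```python
import json

# Braket-style hyperparameters: a flat map of string keys to string
# values. The names and numbers below are purely illustrative.
raw = {"learning_rate": 0.01, "shots": 1000}

# Coerce every value to a string before sending.
hyperparameters = {k: str(v) for k, v in raw.items()}

assert all(isinstance(v, str) for v in hyperparameters.values())
print(json.dumps(hyperparameters, sort_keys=True))
```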
A tag object that consists of a key and an optional value, used to manage metadata for Amazon Braket resources.

Retrieves the specified quantum task.

Configures the type of resource instances to use while running an Amazon Braket hybrid job.

Configures the number of resource instances to use while running an Amazon Braket job on Amazon Braket. The default value is 1.

Configures the resource instances to use while running the Amazon Braket hybrid job on Amazon Braket.

The request processing has failed because of an unknown error, exception, or failure.

(Optional) The local directory where checkpoints are written. The default directory is

Identifies the S3 path where you want Amazon Braket to store checkpoints. For example,

The AWS Key Management Service (AWS KMS) key that Amazon Braket uses to encrypt the job training artifacts at rest using Amazon S3 server-side encryption.

Identifies the S3 path where you want Amazon Braket to store the job training artifacts. For example,

Provides summary information about the primary device used by an Amazon Braket job.
A tag object that consists of a key and an optional value, used to manage metadata for Amazon Braket resources.

Shows the tags associated with this resource.

Contains information about the Python scripts used for entry and by an Amazon Braket job.

Searches for devices using the specified filters.

A token used for pagination of results returned in the response. Use the token returned from the previous request to continue results where the previous request ended.

An array of

A token used for pagination of results, or null if there are no additional results. Use the token value in a subsequent request to continue results where the previous request ended.

Searches for Amazon Braket jobs that match the specified filter values.

A token used for pagination of results, or

Searches for tasks that match the specified filter values.

A token used for pagination of results returned in the response. Use the token returned from the previous request to continue results where the previous request ended.

An array of

A token used for pagination of results, or null if there are no additional results. Use the token value in a subsequent request to continue results where the previous request ended.
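The token-based pagination described for these Search operations follows a common loop: pass no token on the first call, then pass back each returned token until it is null. A sketch against a stubbed page-fetcher (not the real Braket client; the stub and its data are hypothetical):

```python
def fetch_all(fetch_page):
    """Collect items across pages. fetch_page(token) returns
    (items, next_token), where next_token is None when there are
    no additional results."""
    items, token = [], None
    while True:
        page, token = fetch_page(token)
        items.extend(page)
        if token is None:
            return items

# Stub standing in for a paginated Search call: two pages of results.
pages = {None: ([1, 2], "t1"), "t1": ([3], None)}
print(fetch_all(lambda tok: pages[tok]))  # [1, 2, 3]
```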
Add a tag to the specified resource.

Specify the

Specify the

Remove tags from a resource.

Specify the

Specify the

The service encountered an unexpected error.

The number of customer requests exceeds the request rate limit.

A configuration for a set of custom HTTP response headers.

A configuration for enabling the

A configuration for a set of security-related HTTP response headers. CloudFront adds these headers to HTTP responses that it sends for requests that match a cache behavior associated with this response headers policy.

A Boolean that determines whether CloudFront adds the

A number 0–100 (inclusive) that specifies the percentage of responses that you want CloudFront to add the

A configuration for enabling the

You can use the

Adds one or more tags to a trail or event data store, up to a limit of 50. Overwrites an existing tag's value when a new value is specified for an existing tag key. Tag key names must be unique for a trail; you cannot have two keys with the same name but different values. If you specify a key without a value, the tag will be created with the specified key and a value of null.
You can tag a trail or event data store that applies to all Amazon Web Services Regions only from the Region in which the trail or event data store was created (also known as its home region).

Specifies the ARN of the trail or event data store to which one or more tags will be added. The format of a trail ARN is:

Specifies the tags to add to a trail or event data store.

This exception is thrown when the specified resource is not ready for an operation. This can occur when you try to run an operation on a resource before CloudTrail has time to fully load the resource. If this exception occurs, wait a few minutes, and then try the operation again.

This field is being deprecated. Indicates whether the event data store is protected from termination.

This field is being deprecated. The status of an event data store. Values are

This field is being deprecated. The advanced event selectors that were used to select events for the data store.

This field is being deprecated. Indicates whether the event data store includes events from all regions, or only from the region in which it was created.

This field is being deprecated.
Indicates that an event data store is collecting logged events for an organization.

This field is being deprecated. The retention period, in days.

This field is being deprecated. The timestamp of the event data store's creation.

This field is being deprecated. The timestamp showing when an event data store was updated, if applicable.

The event data store is inactive.

This exception is thrown when the IAM user or role that is used to create the organization resource lacks one or more required permissions for creating an organization resource in a required service.

A date range for the query was specified that is not valid. Be sure that the start time is chronologically before the end time. For more information about writing a query, see Create or edit a query in the CloudTrail User Guide.

Lists the tags for the trail or event data store in the current region.

Specifies a list of trail and event data store ARNs whose tags will be listed. The list has a limit of 20 ARNs.

Specifies a list of tags to return.
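As the tagging description above notes, a key supplied without a value is stored as a tag with a null value. A sketch of assembling a CloudTrail-style tags list that preserves that behavior; the helper name and the example keys are hypothetical:

```python
# Sketch of building a CloudTrail-style TagsList. A key passed with
# value None becomes a tag entry without a Value, which the service
# stores with a null value per the description above.
def make_tags(**kwargs):
    tags = []
    for key, value in kwargs.items():
        tag = {"Key": key}
        if value is not None:
            tag["Value"] = value
        tags.append(tag)
    return tags

print(make_tags(team="analytics", audited=None))
```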
This exception is thrown when the Amazon Web Services account making the request to create or update an organization trail or event data store is not the management account for an organization in Organizations. For more information, see Prepare For Creating a Trail For Your Organization or Create an event data store.

This exception is thrown when Organizations is not configured to support all features. All features must be enabled in Organizations to support creating an organization trail or event data store.

Removes the specified tags from a trail or event data store.

Specifies the ARN of the trail or event data store from which tags should be removed.

Example trail ARN format:

Example event data store ARN format:

Specifies the tags to remove from a trail or event data store.

Configuration of the answering machine detection.

Associates a contact flow with a phone number claimed to your Amazon Connect instance.

A unique identifier for the phone number.

The identifier of the Amazon Connect instance. You can find the instanceId in the ARN of the instance.

The identifier of the contact flow.

The phone number. Phone numbers are formatted

The ISO country code.

The type of phone number.

Information about available phone numbers.
Claims an available phone number to your Amazon Connect instance.

The Amazon Resource Name (ARN) for Amazon Connect instances that phone numbers are claimed to.

The phone number you want to claim. Phone numbers are formatted

The description of the phone number.

The tags used to organize, track, or control access for this resource.

A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.

A unique identifier for the phone number.

The Amazon Resource Name (ARN) of the phone number.

The phone number. Phone numbers are formatted

The ISO country code.

The type of phone number.

The description of the phone number.

The Amazon Resource Name (ARN) for Amazon Connect instances that phone numbers are claimed to.

The tags used to organize, track, or control access for this resource.

The status of the phone number.

Information about a phone number that has been claimed to your Amazon Connect instance.

A list of conditions which would be applied together with an

A list of conditions which would be applied together with an

A leaf node condition which can be used to specify a tag condition.

An object that can be used to specify Tag conditions inside the

Top level list specifies conditions that need to be applied with

Inner list specifies conditions that need to be applied with

Gets details and status of a phone number that's claimed to your Amazon Connect instance.

A unique identifier for the phone number.

Information about a phone number that's been claimed to your Amazon Connect instance.

Removes the contact flow association from a phone number claimed to your Amazon Connect instance, if a contact flow association exists.

A unique identifier for the phone number.

The identifier of the Amazon Connect instance. You can find the instanceId in the ARN of the instance.

Contains information about a hierarchy group.
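The exact phone number format string is elided in the descriptions above; Amazon Connect generally expects E.164-formatted numbers, and the check below is a sketch under that assumption rather than a statement of the API contract:

```python
import re

# E.164 sketch: a leading '+', a first digit 1-9, and at most
# 15 digits in total. This is an assumption about the elided
# format above, not a value taken from this document.
E164 = re.compile(r"^\+[1-9]\d{1,14}$")

def looks_like_e164(number):
    return bool(E164.match(number))

print(looks_like_e164("+12065550100"))  # True
print(looks_like_e164("206-555-0100"))  # False
```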
The value in the hierarchy group condition.

The type of hierarchy group match.

A leaf node condition which can be used to specify a hierarchy group condition.

Configuration information of a Kinesis video stream.

A unique identifier for the phone number.

The Amazon Resource Name (ARN) of the phone number.

The phone number. Phone numbers are formatted

The ISO country code.

The type of phone number.

The Amazon Resource Name (ARN) for Amazon Connect instances that phone numbers are claimed to.

Information about phone numbers that have been claimed to your Amazon Connect instance.

Provides information about the prompts for the specified Amazon Connect instance.

Lists phone numbers claimed to your Amazon Connect instance. For more information about phone numbers, see Set Up Phone Numbers for Your Contact Center in the Amazon Connect Administrator Guide.

The identifier of the Amazon Connect instance.

The Amazon Resource Name (ARN) for Amazon Connect instances that phone numbers are claimed to.

If

The maximum number of results to return per page.

The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results.

The maximum number of results to return per page.

The ISO country code.

The type of phone number.

The prefix of the phone number. If provided, it must contain

Information about the prompts.

If there are additional results, this is the token for the next set of results.

Information about phone numbers that have been claimed to your Amazon Connect instances.

Provides information about the prompts for the specified Amazon Connect instance.

The identifier of the Amazon Connect instance.

The token for the next set of results.
Use the value returned in the previous response in the next request to retrieve the next set of results.

The maximum number of results to return per page.

Information about the prompts.

If there are additional results, this is the token for the next set of results.

Contains information about a phone number for a quick connect.

The status.

The status message.

The status of the phone number.

Changes the current status of a user or agent in Amazon Connect. If the agent is currently handling a contact, this sets the agent's next status. For more information, see Agent status and Set your next status in the Amazon Connect Administrator Guide.

The identifier of the user.

The identifier of the Amazon Connect instance. You can find the instanceId in the ARN of the instance.

The identifier of the agent status.

Releases a phone number previously claimed to an Amazon Connect instance.

A unique identifier for the phone number.

A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.

Information about the Amazon Simple Storage Service (Amazon S3) storage type.

Searches for available phone numbers that you can claim to your Amazon Connect instance.

The Amazon Resource Name (ARN) for Amazon Connect instances that phone numbers are claimed to.

The ISO country code.

The type of phone number.

The prefix of the phone number. If provided, it must contain

The maximum number of results to return per page.

The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results.

If there are additional results, this is the token for the next set of results.

A list of available phone numbers that you can claim for your Amazon Connect instance.

Searches users in an Amazon Connect instance, with optional filtering.

The identifier of the Amazon Connect instance. You can find the instanceId in the ARN of the instance.

The token for the next set of results.
Use the value returned in the previous response in the next request to retrieve the next set of results.

The maximum number of results to return per page.

Filters to be applied to search results.

Information about the users.

If there are additional results, this is the token for the next set of results.

The total number of users who matched your search query.

The name of the field in the string condition.

The value of the string.

The type of comparison to be made when evaluating the string condition.

A leaf node condition which can be used to specify a string condition, for example,

The tag key in the tag condition.

The tag value in the tag condition.

A leaf node condition which can be used to specify a tag condition, for example,

Adds the specified tags to the specified resource. The supported resource types are users, routing profiles, queues, quick connects, contact flows, agent status, hours of operation, and phone number. For sample policies that use tags, see Amazon Connect Identity-Based Policy Examples in the Amazon Connect Administrator Guide.

Updates your claimed phone number from its current Amazon Connect instance to another Amazon Connect instance in the same Region.

A unique identifier for the phone number.

The Amazon Resource Name (ARN) for Amazon Connect instances that phone numbers are claimed to.

A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.

A unique identifier for the phone number.

The Amazon Resource Name (ARN) of the phone number.

Contains information about the identity of a user.

The user's first name.

The user's last name.

The user's first name and last name.
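The string and tag leaf conditions above nest under "and"/"or" lists, with the top-level list and inner lists combined as the descriptions indicate. A sketch of composing a SearchUsers-style criteria object; the field names and values are illustrative assumptions about the shape, not values from this document:

```python
# Sketch of a search-criteria object: leaf StringConditions nested
# under OrConditions/AndConditions lists. Field names and values are
# hypothetical examples of the pattern described above.
def string_condition(field, value, comparison="CONTAINS"):
    return {"StringCondition": {"FieldName": field,
                                "Value": value,
                                "ComparisonType": comparison}}

# Top-level OR of two AND groups ("top level list" / "inner list").
search_criteria = {
    "OrConditions": [
        {"AndConditions": [string_condition("name", "jane")]},
        {"AndConditions": [string_condition("name", "john")]},
    ]
}
print(len(search_criteria["OrConditions"]))  # 2
```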
Contains information about the quick connect configuration settings for a user. The contact flow must be of type Transfer to Agent.

A list of conditions which would be applied together with an

A list of conditions which would be applied together with an

A leaf node condition which can be used to specify a string condition.

A leaf node condition which can be used to specify a hierarchy group condition.

The search criteria to be used to return users.

Filters to be applied to search results.

The Amazon Resource Name (ARN) of the user.

The directory identifier of the user.

The identifier of the user's hierarchy group.

The identifier of the user's summary.

The user's first name and last name.

The identifier of the user's routing profile.

The identifiers of the user's security profiles.

The tags used to organize, track, or control access for this resource.

The name of the user.

Information about the returned users.

Accepts one or more interface VPC endpoint connection requests to your VPC endpoint service.

Associates a set of DHCP options (that you've previously created) with the specified VPC, or associates no DHCP options with the VPC. After you associate the options with the VPC, any existing instances and all new instances that you launch in that VPC use the options. You don't need to restart or relaunch the instances. They automatically pick up the changes within a few hours, depending on how frequently the instance renews its DHCP lease. You can explicitly renew the lease using the operating system on the instance. For more information, see DHCP options sets in the Amazon Virtual Private Cloud User Guide.

Attaches an internet gateway or a virtual private gateway to a VPC, enabling connectivity between the internet and the VPC. For more information about your VPC and internet gateway, see the Amazon Virtual Private Cloud User Guide.
Cancels an active conversion task. The task can be the import of an instance or volume. The action removes all artifacts of the conversion, including a partially uploaded volume or instance. If the conversion is complete or is in the process of transferring the final disk image, the command fails and returns an exception. For more information, see Importing a Virtual Machine Using the Amazon EC2 CLI.

Cancels an active export task. The request removes all artifacts of the export, including any partially-created Amazon S3 objects. If the export task is complete or is in the process of transferring the final disk image, the command fails and returns an error.

Provides information to Amazon Web Services about your VPN customer gateway device. The customer gateway is the appliance at your end of the VPN connection. (The device on the Amazon Web Services side of the VPN connection is the virtual private gateway.) You must provide the internet-routable IP address of the customer gateway's external interface. The IP address must be static and can be behind a device performing network address translation (NAT).

For devices that use Border Gateway Protocol (BGP), you can also provide the device's BGP Autonomous System Number (ASN). You can use an existing ASN assigned to your network. If you don't have an ASN already, you can use a private ASN (in the 64512 - 65534 range). Amazon EC2 supports all 4-byte ASN numbers in the range of 1 - 2147483647, with the exception of the following:

7224 - reserved in the

9059 - reserved in the

17943 - reserved in the

10124 - reserved in the

For more information, see Amazon Web Services Site-to-Site VPN in the Amazon Web Services Site-to-Site VPN User Guide.

To create more than one customer gateway with the same VPN type, IP address, and BGP ASN, specify a unique device name for each customer gateway.
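The ASN rules above (4-byte range 1 - 2147483647, private range 64512 - 65534, a handful of reserved values) can be sketched as a simple check. The Regions in which each value is reserved are elided in the text, so the sketch conservatively treats the listed numbers as reserved everywhere:

```python
# Reserved ASNs listed above. The Regions they apply to are elided
# in the description, so they are treated as globally reserved here.
RESERVED_ASNS = {7224, 9059, 17943, 10124}

def asn_usable(asn):
    """True if asn falls in EC2's supported 4-byte range and is
    not one of the reserved values listed above."""
    return 1 <= asn <= 2147483647 and asn not in RESERVED_ASNS

def is_private_asn(asn):
    """True if asn is in the private range suggested above."""
    return 64512 <= asn <= 65534

print(asn_usable(65000), is_private_asn(65000))  # True True
print(asn_usable(7224))  # False
```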
Identical requests\n return information about the existing customer gateway and do not create new\n customer gateways. Provides information to Amazon Web Services about your VPN customer gateway device. The\n customer gateway is the appliance at your end of the VPN connection. (The device on the\n Amazon Web Services side of the VPN connection is the virtual private gateway.) You\n must provide the internet-routable IP address of the customer gateway's external\n interface. The IP address must be static and can be behind a device performing network\n address translation (NAT). For devices that use Border Gateway Protocol (BGP), you can also provide the device's\n BGP Autonomous System Number (ASN). You can use an existing ASN assigned to your network.\n If you don't have an ASN already, you can use a private ASN. For more information, see \n Customer gateway \n options for your Site-to-Site VPN connection in the Amazon Web Services Site-to-Site VPN User Guide. To create more than one customer gateway with the same VPN type, IP address, and\n BGP ASN, specify a unique device name for each customer gateway. An identical request\n returns information about the existing customer gateway; it doesn't create a new customer\n gateway. Creates an Amazon EBS-backed AMI from an Amazon EBS-backed instance \n \tthat is either running or stopped. By default, Amazon EC2 shuts down and reboots the instance before creating the AMI to ensure that everything on \n the instance is stopped and in a consistent state during the creation process. If you're confident that your \n instance is in a consistent state appropriate for AMI creation, use the NoReboot \n parameter to prevent Amazon EC2 from shutting down and rebooting the instance. If you customized your instance with instance store volumes or Amazon EBS volumes in addition to the root device volume, the \n \tnew AMI contains block device mapping information for those volumes. 
When you launch an instance from this new AMI, \n \tthe instance automatically launches with those additional volumes. For more information, see Creating Amazon EBS-Backed Linux AMIs \n\t\t\t\tin the Amazon Elastic Compute Cloud User Guide. Creates an Amazon EBS-backed AMI from an Amazon EBS-backed instance \n \tthat is either running or stopped. By default, when Amazon EC2 creates the new AMI, it reboots the instance so that it can \n\t\t\t\t\ttake snapshots of the attached volumes while data is at rest, in order to ensure a consistent \n\t\t\t\t\tstate. You can set the If you choose to bypass the shutdown and reboot process by setting the If you customized your instance with instance store volumes or Amazon EBS volumes in addition to the root device volume, the \n \tnew AMI contains block device mapping information for those volumes. When you launch an instance from this new AMI, \n \tthe instance automatically launches with those additional volumes. For more information, see Creating Amazon EBS-Backed Linux AMIs \n\t\t\t\tin the Amazon Elastic Compute Cloud User Guide. By default, Amazon EC2 attempts to shut down and reboot the instance before creating the image. \n If the By default, when Amazon EC2 creates the new AMI, it reboots the instance so that it can \n\t\t\t\t\ttake snapshots of the attached volumes while data is at rest, in order to ensure a consistent \n\t\t\t\t\tstate. You can set the If you choose to bypass the shutdown and reboot process by setting the Default: Create an IPAM. Amazon VPC IP Address Manager (IPAM) is a VPC feature that you can use to automate your IP address management workflows including assigning, tracking, troubleshooting, and auditing IP addresses across Amazon Web Services Regions and accounts throughout your Amazon Web Services Organization. For more information, see Create an IPAM in the Amazon VPC IPAM User Guide.\n Create an IPAM.
Amazon VPC IP Address Manager (IPAM) is a VPC feature that you can use\n to automate your IP address management workflows including assigning, tracking,\n troubleshooting, and auditing IP addresses across Amazon Web Services Regions and accounts\n throughout your Amazon Web Services Organization. For more information, see Create an IPAM in the Amazon VPC IPAM User Guide.\n Creates an ED25519 or 2048-bit RSA key pair with the specified name. Amazon EC2 stores the public\n key and displays the private key for you to save to a file. The private key is returned\n as an unencrypted PEM encoded PKCS#1 private key. If a key with the specified name\n already exists, Amazon EC2 returns an error. The key pair returned to you is available only in the Amazon Web Services Region in which you create it.\n If you prefer, you can create your own key pair using a third-party tool and upload it\n to any Region using ImportKeyPair. You can have up to 5,000 key pairs per Amazon Web Services Region. For more information, see Amazon EC2 key pairs in the\n Amazon Elastic Compute Cloud User Guide. Creates an ED25519 or 2048-bit RSA key pair with the specified name and in the\n specified PEM or PPK format. Amazon EC2 stores the public key and displays the private\n key for you to save to a file. The private key is returned as an unencrypted PEM encoded\n PKCS#1 private key or an unencrypted PPK formatted private key for use with PuTTY. If a\n key with the specified name already exists, Amazon EC2 returns an error. The key pair returned to you is available only in the Amazon Web Services Region in which you create it.\n If you prefer, you can create your own key pair using a third-party tool and upload it\n to any Region using ImportKeyPair. You can have up to 5,000 key pairs per Amazon Web Services Region. For more information, see Amazon EC2 key pairs in the\n Amazon Elastic Compute Cloud User Guide. The type of key pair. 
Note that ED25519 keys are not supported for Windows instances, EC2 Instance Connect, and EC2 Serial Console. Default: The type of key pair. Note that ED25519 keys are not supported for Windows instances. Default: The tags to apply to the new key pair. The format of the key pair. Default: Creates a launch template. A launch template contains the parameters to launch an\n instance. When you launch an instance using RunInstances, you can\n specify a launch template instead of providing the launch parameters in the request. For\n more information, see Launching an instance from a\n launch template in the Amazon Elastic Compute Cloud User Guide. Creates a launch template. A launch template contains the parameters to launch an\n instance. When you launch an instance using RunInstances, you can\n specify a launch template instead of providing the launch parameters in the request. For\n more information, see Launching an instance from a\n launch template in the Amazon Elastic Compute Cloud User Guide. If you want to clone an existing launch template as the basis for creating a new\n launch template, you can use the Amazon EC2 console. The API, SDKs, and CLI do not support\n cloning a template. For more information, see Create a launch template from an existing launch template in the\n Amazon Elastic Compute Cloud User Guide. Creates an entry (a rule) in a network ACL with the specified rule number. Each network ACL has a set of numbered ingress rules \n\t\t and a separate set of numbered egress rules. When determining whether a packet should be allowed in or out of a subnet associated \n\t\t with the ACL, we process the entries in the ACL according to the rule numbers, in ascending order. Each network ACL has a set of \n\t\t ingress rules and a separate set of egress rules. We recommend that you leave room between the rule numbers (for example, 100, 110, 120, ...), and not number them one right after the \n\t\t other (for example, 101, 102, 103, ...). 
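The rule-numbering advice above (100, 110, 120, ... rather than 101, 102, 103, ...) exists so that a new rule can slot between two existing ones without renumbering. A minimal sketch of choosing such a number; the helper name is hypothetical, since rule numbers are values you pick client-side and pass to CreateNetworkAclEntry, not something the API computes:

```python
def rule_number_between(existing, lower, upper):
    """Pick an unused rule number strictly between two existing network
    ACL rule numbers (e.g. 105 between 100 and 110) by taking the
    midpoint. Returns None when no number remains in the gap."""
    used = set(existing)
    candidate = (lower + upper) // 2
    if candidate in used or not (lower < candidate < upper):
        return None
    return candidate
```

With gapped numbering, `rule_number_between([100, 110, 120], 100, 110)` yields 105; with consecutive numbering such as 101, 102, 103 there is no room, which is exactly the situation the documentation recommends avoiding.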
This makes it easier to add a rule between existing ones without having to renumber the rules. After you add an entry, you can't modify it; you must either replace it, or create an entry and delete the old one. For more information about network ACLs, see Network ACLs in the Amazon Virtual Private Cloud User Guide. Adds or overwrites only the specified tags for the specified Amazon EC2 resource or\n resources. When you specify an existing tag key, the value is overwritten with\n the new value. Each resource can have a maximum of 50 tags. Each tag consists of a key and\n optional value. Tag keys must be unique per resource. For more information about tags, see Tagging Your Resources in the\n Amazon Elastic Compute Cloud User Guide. For more information about\n creating IAM policies that control users' access to resources based on tags, see Supported\n Resource-Level Permissions for Amazon EC2 API Actions in the Amazon\n Elastic Compute Cloud User Guide. Creates a VPC endpoint for a specified service. An endpoint enables you to create a\n private connection between your VPC and the service. The service may be provided by Amazon Web Services,\n an Amazon Web Services Marketplace Partner, or another Amazon Web Services account. For more information, \n see VPC Endpoints in the\n Amazon Virtual Private Cloud User Guide. A An A Use DescribeVpcEndpointServices to get a list of supported\n services. Creates a VPC endpoint for a specified service. An endpoint enables you to create a\n private connection between your VPC and the service. The service may be provided by Amazon Web Services,\n an Amazon Web Services Marketplace Partner, or another Amazon Web Services account. For more information, \n see the Amazon Web Services PrivateLink Guide. Creates a VPC endpoint service configuration to which service consumers (Amazon Web Services accounts,\n IAM users, and IAM roles) can connect. 
To create an endpoint service configuration, you must first create one of the\n following for your service: A Network Load Balancer. Service consumers connect to your service using an\n interface endpoint. A Gateway Load Balancer. Service consumers connect to your service using a\n Gateway Load Balancer endpoint. For more information, see VPC Endpoint Services in the\n Amazon Virtual Private Cloud User Guide. If you set the private DNS name, you must prove that you own the private DNS domain\n name. For more information, see VPC Endpoint Service\n Private DNS Name Verification in the\n Amazon Virtual Private Cloud User Guide. Creates a VPC endpoint service to which service consumers (Amazon Web Services accounts,\n IAM users, and IAM roles) can connect. Before you create an endpoint service, you must create one of the following for your service: A Network Load Balancer. \n Service consumers connect to your service using an interface endpoint. A Gateway Load Balancer. \n Service consumers connect to your service using a Gateway Load Balancer endpoint. If you set the private DNS name, you must prove that you own the private DNS domain\n name. For more information, see the Amazon Web Services PrivateLink \n\t Guide. Indicates whether requests from service consumers to create an endpoint to your service must\n be accepted. To accept a request, use AcceptVpcEndpointConnections. Indicates whether requests from service consumers to create an endpoint to your service must\n be accepted manually. Creates a static route associated with a VPN connection between an existing virtual\n private gateway and a VPN customer gateway. The static route allows traffic to be routed\n from the virtual private gateway to the VPN customer gateway. For more information, see Amazon Web Services Site-to-Site VPN in the Amazon Web Services Site-to-Site VPN\n User Guide. Deletes the specified customer gateway. You must delete the VPN connection before you\n can delete the customer gateway. 
Deletes the specified set of DHCP options. You must disassociate the set of DHCP options before you can delete it. You can disassociate the set of DHCP options by associating either a new set of options or the default set of options with the VPC. Deletes the specified internet gateway. You must detach the internet gateway from the\n\t\t\tVPC before you can delete it. Delete an IPAM. Deleting an IPAM removes all monitored data associated with the IPAM including the historical data for CIDRs. You cannot delete an IPAM if there are CIDRs provisioned to pools or if there are allocations in the pools within the IPAM. To deprovision pool \n CIDRs, see DeprovisionIpamPoolCidr. To release allocations, see ReleaseIpamPoolAllocation.\n For more information, see Delete an IPAM in the Amazon VPC IPAM User Guide.\n Delete an IPAM. Deleting an IPAM removes all monitored data associated with the IPAM including the historical data for CIDRs. For more information, see Delete an IPAM in the Amazon VPC IPAM User Guide.\n Deletes the specified key pair by removing the public key from Amazon EC2. Deletes the specified network ACL. You can't delete the ACL if it's associated with any subnets. You can't delete the default network ACL. Deletes the specified ingress or egress entry (rule) from the specified network ACL. Deletes the specified network interface. You must detach the network interface before you can delete it. Deletes the specified placement group. You must terminate all instances in the\n placement group before you can delete the placement group. For more information, see\n Placement groups in the Amazon EC2 User Guide. Deletes the specified route from the specified route table. Deletes the specified route table. You must disassociate the route table from any subnets before you can delete it. You can't delete the main route table. Deletes a security group.
If you attempt to delete a security group that is associated with an instance, or is\n\t\t\t referenced by another security group, the operation fails with\n\t\t\t\t Deletes the specified snapshot. When you make periodic snapshots of a volume, the snapshots are incremental, and only the\n blocks on the device that have changed since your last snapshot are saved in the new snapshot.\n When you delete a snapshot, only the data not needed for any other snapshot is removed. So\n regardless of which prior snapshots have been deleted, all active snapshots will have access\n to all the information needed to restore the volume. You cannot delete a snapshot of the root device of an EBS volume used by a registered AMI.\n You must first de-register the AMI before you can delete the snapshot. For more information, see Delete an Amazon EBS snapshot in the\n Amazon Elastic Compute Cloud User Guide. Deletes the data feed for Spot Instances. Deletes the specified subnet. You must terminate all running instances in the subnet before you can delete the subnet. Deletes the specified set of tags from the specified set of resources. To list the current tags, use DescribeTags. For more information about tags, see \n Tagging Your Resources \n in the Amazon Elastic Compute Cloud User Guide. Deletes the specified EBS volume. The volume must be in the The volume can remain in the For more information, see Delete an Amazon EBS volume in the\n Amazon Elastic Compute Cloud User Guide. Deletes the specified VPC. You must detach or delete all gateways and resources that are associated with the VPC before you can delete it. For example, you must terminate all instances running in the VPC, delete all security groups associated with the VPC (except the default one), delete all route tables associated with the VPC (except the default one), and so on. Deletes the specified VPN connection. 
If you're deleting the VPC and its associated components, we recommend that you detach\n the virtual private gateway from the VPC and delete the VPC before deleting the VPN\n connection. If you believe that the tunnel credentials for your VPN connection have been\n compromised, you can delete the VPN connection and create a new one that has new keys,\n without needing to delete the VPC or virtual private gateway. If you create a new VPN\n connection, you must reconfigure the customer gateway device using the new configuration\n information returned with the new VPN connection ID. For certificate-based authentication, delete all Certificate Manager (ACM) private\n certificates used for the Amazon Web Services-side tunnel endpoints for the VPN\n connection before deleting the VPN connection. Deletes the specified static route associated with a VPN connection between an\n existing virtual private gateway and a VPN customer gateway. The static route allows\n traffic to be routed from the virtual private gateway to the VPN customer\n gateway. Deletes the specified virtual private gateway. You must first detach the virtual\n private gateway from the VPC. Note that you don't need to delete the virtual private\n gateway if you plan to delete and recreate the VPN connection between your VPC and your\n network. Deregisters the specified AMI. After you deregister an AMI, it can't be used to \n launch new instances. If you deregister an AMI that matches a Recycle Bin retention rule, the AMI is \n retained in the Recycle Bin for the specified retention period. For more information, \n see Recycle\n Bin in the Amazon Elastic Compute Cloud User Guide. When you deregister an AMI, it doesn't affect any instances that you've already \n launched from the AMI. You'll continue to incur usage costs for those instances until \n you terminate them. 
When you deregister an Amazon EBS-backed AMI, it doesn't affect the snapshot that was\n\t\t\tcreated for the root volume of the instance during the AMI creation process. When you\n\t\t\tderegister an instance store-backed AMI, it doesn't affect the files that you uploaded\n\t\t\tto Amazon S3 when you created the AMI. The filters. The filters. The filters. The filters. Checks whether you have the required permissions for the action, without actually making the request, \n and provides an error response. If you have the required permissions, the error response is If Default: One or more filters. One or more filters. One or more filters. One or more filters. Detaches an internet gateway from a VPC, disabling connectivity between the internet\n\t\t\tand the VPC. The VPC must not contain any running instances with Elastic IP addresses or\n\t\t\tpublic IPv4 addresses. Detaches a network interface from an instance. Detaches a virtual private gateway from a VPC.
You do this if you're planning to turn\n off the VPC and not use it anymore. You can confirm a virtual private gateway has been\n completely detached from a VPC by describing the virtual private gateway (any\n attachments to the virtual private gateway are also described). You must wait for the attachment's state to switch to Disables a virtual private gateway (VGW) from propagating routes to a specified route\n table of a VPC. Disassociates an Elastic IP address from the instance or network interface it's associated with. An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more\n\t\t\tinformation, see Elastic IP\n\t\t\t\tAddresses in the Amazon Elastic Compute Cloud User Guide. This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error. Disassociates a subnet or gateway from a route table. After you perform this action, the subnet no longer uses the routes in the route table.\n\t\t\t\tInstead, it uses the routes in the VPC's main route table. For more information\n\t\t\t\tabout route tables, see Route\n\t\t\t\ttables in the Amazon Virtual Private Cloud User Guide. The ARN of the Outpost on which the snapshot is stored. The ARN of the Outpost on which the snapshot is stored. This parameter is only supported on Enables a virtual private gateway (VGW) to propagate routes to the specified route\n table of a VPC. Enables I/O operations for a volume that had I/O operations disabled because the data on\n the volume was potentially inconsistent. The MD5 public key fingerprint as specified in section 4 of RFC 4716. For RSA key pairs, the key fingerprint is the MD5 public key fingerprint as specified in section 4 of RFC 4716. For ED25519 key pairs, the key fingerprint is the base64-encoded SHA-256 digest, which is the default for OpenSSH, starting with OpenSSH 6.8. The SHA-1 digest of the DER encoded private key. 
For RSA key pairs, the key fingerprint is the SHA-1 digest of the DER encoded private key. For ED25519 key pairs, the key fingerprint is the base64-encoded SHA-256 digest, which is the default for OpenSSH, starting with OpenSSH 6.8. If you used CreateKeyPair to create the key pair: For RSA key pairs, the key fingerprint is the SHA-1 digest of the DER encoded private key. \n For ED25519 key pairs, the key fingerprint is the base64-encoded SHA-256 digest, which \n is the default for OpenSSH, starting with OpenSSH 6.8. If you used ImportKeyPair to provide Amazon Web Services the public key: For RSA key pairs, the key fingerprint is the MD5 public key fingerprint as specified in section 4 of RFC 4716. For ED25519 key pairs, the key fingerprint is the base64-encoded SHA-256\n digest, which is the default for OpenSSH, starting with OpenSSH 6.8. If you used CreateKeyPair to create the key pair: For RSA key pairs, the key fingerprint is the SHA-1 digest of the DER encoded private key. For ED25519 key pairs, the key fingerprint is the base64-encoded SHA-256 digest, which \n is the default for OpenSSH, starting with OpenSSH 6.8. If you used ImportKeyPair to provide Amazon Web Services the public key: For RSA key pairs, the key fingerprint is the MD5 public key fingerprint as specified in section 4 of RFC 4716. For ED25519 key pairs, the key fingerprint is the base64-encoded SHA-256\n digest, which is the default for OpenSSH, starting with OpenSSH 6.8. Any tags applied to the key pair. The public key material. If you used Amazon EC2 to create the key pair, this is the date and time when the key\n was created, in ISO\n 8601 date-time format, in the UTC time zone. If you imported an existing key pair to Amazon EC2, this is the date and time the key\n was imported, in ISO\n 8601 date-time format, in the UTC time zone. The instance type. The instance type. Only one instance type can be specified.
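The two fingerprint formats described above can be computed from raw key bytes with standard hashing. A sketch, assuming the input is the decoded key blob; the helper names are illustrative and this is not the AWS implementation:

```python
import base64
import hashlib

def md5_fingerprint(key_blob: bytes) -> str:
    """Colon-separated MD5 fingerprint (the RFC 4716 section 4 style
    used for imported RSA public keys), computed over the key blob."""
    hex_digest = hashlib.md5(key_blob).hexdigest()
    return ":".join(hex_digest[i:i + 2] for i in range(0, len(hex_digest), 2))

def sha256_fingerprint(key_blob: bytes) -> str:
    """Base64-encoded SHA-256 digest (the OpenSSH default since 6.8,
    used for ED25519 keys); OpenSSH prints it without '=' padding."""
    return base64.b64encode(hashlib.sha256(key_blob).digest()).decode().rstrip("=")
```

For example, the blob `b"test"` yields the MD5 form `09:8f:6b:cd:...` and a 43-character base64 SHA-256 digest, matching the two shapes the documentation distinguishes.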
You can\n specify that resources should receive longer IDs (17-character IDs) when they are\n created. This request can only be used to modify longer ID settings for resource types that\n are within the opt-in period. Resources currently in their opt-in period include:\n This setting applies to the IAM user who makes the request; it does not apply to the\n entire Amazon Web Services account. By default, an IAM user defaults to the same settings as the root user. If\n you're using this action as the root user, then these settings apply to the entire account,\n unless an IAM user explicitly overrides these settings for themselves. For more information,\n see Resource IDs \n in the Amazon Elastic Compute Cloud User Guide. Resources created with longer IDs are visible to all IAM roles and users, regardless\n of these settings and provided that they have permission to use the relevant\n Modifies the ID format of a resource for a specified IAM user, IAM role, or the root\n user for an account; or all IAM users, IAM roles, and the root user for an account. You can\n specify that resources should receive longer IDs (17-character IDs) when they are created. This request can only be used to modify longer ID settings for resource types that are\n within the opt-in period. Resources currently in their opt-in period include:\n For more information, see Resource IDs in the\n Amazon Elastic Compute Cloud User Guide. This setting applies to the principal specified in the request; it does not apply to the\n principal that makes the request. Resources created with longer IDs are visible to all IAM roles and users, regardless of these\n settings and provided that they have permission to use the relevant Modifies the specified attribute of the specified AMI. You can specify only one attribute at a time.\n You can use the Images with an Amazon Web Services Marketplace product code cannot be made public. 
To enable the SriovNetSupport enhanced networking attribute of an image, enable SriovNetSupport on an instance \n and create an AMI from the instance. Modifies the specified attribute of the specified instance. You can specify only one\n attribute at a time. \n Note: Using this action to change the security groups\n associated with an elastic network interface (ENI) attached to an instance in a VPC can\n result in an error if the instance has more than one ENI. To change the security groups\n associated with an ENI attached to an instance that has multiple ENIs, we recommend that\n you use the ModifyNetworkInterfaceAttribute action. To modify some attributes, the instance must be stopped. For more information, see\n Modify a stopped instance in the\n Amazon EC2 User Guide. Modifies the specified network interface attribute. You can specify only one\n attribute at a time. You can use this action to attach and detach security groups from\n an existing EC2 instance. Adds or removes permission settings for the specified snapshot. You may add or remove\n specified Amazon Web Services account IDs from a snapshot's list of create volume permissions, but you cannot\n do both in a single operation. If you need to both add and remove account IDs for a snapshot,\n you must use multiple operations. You can make up to 500 modifications to a snapshot in a single operation. Encrypted snapshots and snapshots with Amazon Web Services Marketplace product codes cannot be made\n public. Snapshots encrypted with your default KMS key cannot be shared with other accounts. For more information about modifying snapshot permissions, see Share a snapshot in the\n Amazon Elastic Compute Cloud User Guide. Modifies a subnet attribute. You can only modify one attribute at a time. Use this action to modify subnets on Amazon Web Services Outposts. 
To modify a subnet on an Outpost rack, set both\n To modify a subnet on an Outpost server, set either\n For more information about Amazon Web Services Outposts, see the following: \n Outpost servers\n \n Outpost racks\n The type of hostnames to assign to instances in the subnet at launch. For IPv4 only subnets, an\n instance DNS name must be based on the instance IPv4 address. For IPv6 only subnets, an instance\n DNS name must be based on the instance ID. For dual-stack subnets, you can specify whether DNS\n names use the instance IPv4 address or the instance ID. The type of hostname to assign to instances in the subnet at launch. For IPv4-only and dual-stack (IPv4 and IPv6) subnets, an\n instance DNS name can be based on the instance IPv4 address (ip-name) or the instance ID (resource-name). For IPv6 only subnets, an instance\n DNS name must be based on the instance ID (resource-name). You can modify several parameters of an existing EBS volume, including volume size, volume\n type, and IOPS capacity. If your EBS volume is attached to a current-generation EC2 instance\n type, you might be able to apply these changes without stopping the instance or detaching the\n volume from it. For more information about modifying EBS volumes, see Amazon EBS Elastic Volumes (Linux instances) \n or Amazon EBS Elastic Volumes (Windows instances). When you complete a resize operation on your volume, you need to extend the volume's\n file-system size to take advantage of the new storage capacity. For more information, see Extend a Linux file system or \n Extend a Windows file system. You can use CloudWatch Events to check the status of a modification to an EBS volume. For\n information about CloudWatch Events, see the Amazon CloudWatch Events User Guide. You can also track the status of a\n modification using DescribeVolumesModifications. For information\n about tracking status changes using either method, see Monitor the progress of volume modifications. 
With previous-generation instance types, resizing an EBS volume might require detaching and\n reattaching the volume or stopping and restarting the instance. If you reach the maximum volume modification rate per volume limit, you must wait\n at least six hours before applying further modifications to the affected EBS volume. You can modify several parameters of an existing EBS volume, including volume size, volume\n type, and IOPS capacity. If your EBS volume is attached to a current-generation EC2 instance\n type, you might be able to apply these changes without stopping the instance or detaching the\n volume from it. For more information about modifying EBS volumes, see Amazon EBS Elastic Volumes (Linux instances) \n or Amazon EBS Elastic Volumes (Windows instances). When you complete a resize operation on your volume, you need to extend the volume's\n file-system size to take advantage of the new storage capacity. For more information, see Extend a Linux file system or \n Extend a Windows file system. You can use CloudWatch Events to check the status of a modification to an EBS volume. For\n information about CloudWatch Events, see the Amazon CloudWatch Events User Guide. You can also track the status of a\n modification using DescribeVolumesModifications. For information\n about tracking status changes using either method, see Monitor the progress of volume modifications. With previous-generation instance types, resizing an EBS volume might require detaching and\n reattaching the volume or stopping and restarting the instance. After modifying a volume, you must wait at least six hours and ensure that the volume \n is in the Modifies a volume attribute. 
By default, all I/O operations for the volume are suspended when the data on the volume is\n determined to be potentially inconsistent, to prevent undetectable, latent data corruption.\n The I/O access to the volume can be resumed by first enabling I/O access and then checking the\n data consistency on your volume. You can change the default behavior to resume I/O operations. We recommend that you change\n this only for boot volumes or for volumes that are stateless or disposable. Modifies the specified attribute of the specified VPC. Modifies attributes of a specified VPC endpoint. The attributes that you can modify\n depend on the type of VPC endpoint (interface, gateway, or Gateway Load Balancer). For more information, see\n VPC\n Endpoints in the Amazon Virtual Private Cloud User Guide. Modifies attributes of a specified VPC endpoint. The attributes that you can modify\n depend on the type of VPC endpoint (interface, gateway, or Gateway Load Balancer). For more information, \n see the Amazon Web Services PrivateLink \n Guide. Modifies the attributes of your VPC endpoint service configuration. You can change the\n Network Load Balancers or Gateway Load Balancers for your service, and you can specify whether acceptance is\n required for requests to connect to your endpoint service through an interface VPC\n endpoint. If you set or modify the private DNS name, you must prove that you own the private DNS\n domain name. For more information, see VPC Endpoint Service\n Private DNS Name Verification in the\n Amazon Virtual Private Cloud User Guide. Modifies the attributes of your VPC endpoint service configuration. You can change the\n Network Load Balancers or Gateway Load Balancers for your service, and you can specify whether acceptance is\n required for requests to connect to your endpoint service through an interface VPC\n endpoint. If you set or modify the private DNS name, you must prove that you own the private DNS\n domain name. 
Modifies the permissions for your VPC endpoint service. You can add or remove permissions for service consumers (IAM users, IAM roles, and Amazon Web Services accounts) to connect to your endpoint service. If you grant permissions to all principals, the service is public. Any users who know the name of a public service can send a request to attach an endpoint. If the service does not require manual approval, attachments are automatically approved. Information about the private DNS name for the service endpoint. For more information about these parameters, see VPC Endpoint Service Private DNS Name Verification in the Amazon Virtual Private Cloud User Guide. Information about the private DNS name for the service endpoint. Requests a reboot of the specified instances. This operation is asynchronous; it only queues a request to reboot the specified instances. The operation succeeds if the instances are valid and belong to you. Requests to reboot terminated instances are ignored. If an instance does not cleanly shut down within a few minutes, Amazon EC2 performs a hard reboot. For more information about troubleshooting, see Troubleshoot an unreachable instance in the Amazon EC2 User Guide. Releases the specified Elastic IP address. [EC2-Classic, default VPC] Releasing an Elastic IP address automatically disassociates it from any instance that it's associated with. To disassociate an Elastic IP address without releasing it, use DisassociateAddress.
[Nondefault VPC] You must use DisassociateAddress to disassociate the Elastic IP address before you can release it. Otherwise, Amazon EC2 returns an error (InvalidIPAddress.InUse). After releasing an Elastic IP address, it is released to the IP address pool. Be sure to update your DNS records and any servers or devices that communicate with the address. If you attempt to release an Elastic IP address that you already released, you'll get an AuthFailure error. [EC2-VPC] After you release an Elastic IP address for use in a VPC, you might be able to recover it. For more information, see AllocateAddress. Replaces an entry (rule) in a network ACL. For more information, see Network ACLs in the Amazon Virtual Private Cloud User Guide. Replaces an existing route within a route table in a VPC. You must provide only one of the following: internet gateway, virtual private gateway, NAT instance, NAT gateway, VPC peering connection, network interface, egress-only internet gateway, or transit gateway. For more information, see Route tables in the Amazon Virtual Private Cloud User Guide. Submits feedback about the status of an instance. The instance must be in the running state. Use of this action does not change the value returned by DescribeInstanceStatus. The information to include in the launch template. The information to include in the launch template. You must specify at least one parameter for the launch template data. Creates a Spot Fleet request. The Spot Fleet request specifies the total target capacity and the On-Demand target capacity. Amazon EC2 calculates the difference between the total capacity and On-Demand capacity, and launches the difference as Spot capacity. You can submit a single request that includes multiple launch specifications that vary by instance type, AMI, Availability Zone, or subnet. By default, the Spot Fleet requests Spot Instances in the Spot Instance pool where the price per unit is the lowest.
Each launch specification can include its own instance weighting that reflects the value of the instance type to your application workload. Alternatively, you can specify that the Spot Fleet distribute the target capacity across the Spot pools included in its launch specifications. By ensuring that the Spot Instances in your Spot Fleet are in different Spot pools, you can improve the availability of your fleet. You can specify tags for the Spot Fleet request and instances launched by the fleet. You cannot tag other resource types in a Spot Fleet request because only the spot-fleet-request and instance resource types are supported. For more information, see Spot Fleet requests in the Amazon EC2 User Guide for Linux Instances. Creates a Spot Fleet request. The Spot Fleet request specifies the total target capacity and the On-Demand target capacity. Amazon EC2 calculates the difference between the total capacity and On-Demand capacity, and launches the difference as Spot capacity. You can submit a single request that includes multiple launch specifications that vary by instance type, AMI, Availability Zone, or subnet. By default, the Spot Fleet requests Spot Instances in the Spot Instance pool where the price per unit is the lowest. Each launch specification can include its own instance weighting that reflects the value of the instance type to your application workload. Alternatively, you can specify that the Spot Fleet distribute the target capacity across the Spot pools included in its launch specifications. By ensuring that the Spot Instances in your Spot Fleet are in different Spot pools, you can improve the availability of your fleet. You can specify tags for the Spot Fleet request and instances launched by the fleet. You cannot tag other resource types in a Spot Fleet request because only the spot-fleet-request and instance resource types are supported. For more information, see Spot Fleet requests in the Amazon EC2 User Guide for Linux Instances.
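The capacity split described above (EC2 launches the difference between the total target capacity and the On-Demand target capacity as Spot capacity) can be sketched as a quick check. This helper is illustrative only and is not part of the EC2 API:

```python
def spot_capacity(total_target_capacity: int, on_demand_target_capacity: int) -> int:
    """Return the Spot capacity a Spot Fleet request would launch.

    Per the RequestSpotFleet documentation, Amazon EC2 calculates the
    difference between the total capacity and the On-Demand capacity,
    and launches the difference as Spot capacity.
    """
    if on_demand_target_capacity > total_target_capacity:
        raise ValueError("On-Demand target capacity cannot exceed total target capacity")
    return total_target_capacity - on_demand_target_capacity


# A fleet with a total target capacity of 10 and an On-Demand target of 4
# launches 6 units of Spot capacity.
assert spot_capacity(10, 4) == 6
```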
We strongly discourage using the RequestSpotFleet API because it is a legacy API with no planned investment. For options for requesting Spot Instances, see Which is the best Spot request method to use? in the Amazon EC2 User Guide for Linux Instances. Creates a Spot Instance request. For more information, see Spot Instance requests in the Amazon EC2 User Guide for Linux Instances. Creates a Spot Instance request. For more information, see Spot Instance requests in the Amazon EC2 User Guide for Linux Instances. We strongly discourage using the RequestSpotInstances API because it is a legacy API with no planned investment. For options for requesting Spot Instances, see Which is the best Spot request method to use? in the Amazon EC2 User Guide for Linux Instances. The instance type. The instance type. Only one instance type can be specified. Resets an attribute of an AMI to its default value. Resets an attribute of an instance to its default value. Resets a network interface attribute. You can specify only one attribute at a time. Resets permission settings for the specified snapshot. For more information about modifying snapshot permissions, see Share a snapshot in the Amazon Elastic Compute Cloud User Guide. The user data to make available to the instance. For more information, see Run commands on your Linux instance at launch and Run commands on your Windows instance at launch. If you are using a command line tool, base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide base64-encoded text. User data is limited to 16 KB. The user data script to make available to the instance. For more information, see Run commands on your Linux instance at launch and Run commands on your Windows instance at launch. If you are using a command line tool, base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide base64-encoded text.
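The user-data rules described above (a command line tool base64-encodes for you; when calling the API directly you must supply base64-encoded text, and user data is limited to 16 KB) can be sketched as a small helper. This function is illustrative and not part of any AWS SDK:

```python
import base64

MAX_USER_DATA_BYTES = 16 * 1024  # the 16 KB limit stated in the EC2 documentation


def encode_user_data(script: str) -> str:
    """Base64-encode a user-data script for a direct API call.

    Raises ValueError if the raw script exceeds the 16 KB user-data limit.
    """
    raw = script.encode("utf-8")
    if len(raw) > MAX_USER_DATA_BYTES:
        raise ValueError(f"user data is {len(raw)} bytes; the limit is {MAX_USER_DATA_BYTES}")
    return base64.b64encode(raw).decode("ascii")


encoded = encode_user_data("#!/bin/bash\necho hello")
# Round-trips back to the original script text.
assert base64.b64decode(encoded).decode("utf-8") == "#!/bin/bash\necho hello"
```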
User data is limited to 16 KB. Sends a diagnostic interrupt to the specified Amazon EC2 instance to trigger a kernel panic (on Linux instances), or a blue screen/stop error (on Windows instances). For instances based on Intel and AMD processors, the interrupt is received as a non-maskable interrupt (NMI). In general, the operating system crashes and reboots when a kernel panic or stop error is triggered. The operating system can also be configured to perform diagnostic tasks, such as generating a memory dump file, loading a secondary kernel, or obtaining a call trace. Before sending a diagnostic interrupt to your instance, ensure that its operating system is configured to perform the required diagnostic tasks. For more information about configuring your operating system to generate a crash dump when a kernel panic or stop error occurs, see Send a diagnostic interrupt (for advanced users) (Linux instances) or Send a diagnostic interrupt (for advanced users) (Windows instances). The state of the Spot Instance request. Spot status information helps track your Spot Instance requests. For more information, see Spot status in the Amazon EC2 User Guide for Linux Instances. The state of the Spot Instance request. Spot request status information helps track your Spot Instance requests. For more information, see Spot request status in the Amazon EC2 User Guide for Linux Instances. The status code. For a list of status codes, see Spot status codes in the Amazon EC2 User Guide for Linux Instances. The status code. For a list of status codes, see Spot request status codes in the Amazon EC2 User Guide for Linux Instances. Initiates the verification process to prove that the service provider owns the private DNS name domain for the endpoint service. The service provider must successfully perform the verification before the consumer can use the name to access the service.
Before the service provider runs this command, they must add a record to the DNS server. For more information, see Adding a TXT Record to Your Domain's DNS Server in the Amazon VPC User Guide.

Initiates the verification process to prove that the service provider owns the private DNS name domain for the endpoint service. The service provider must successfully perform the verification before the consumer can use the name to access the service. Before the service provider runs this command, they must add a record to the DNS server.

Unassigns one or more secondary private IP addresses, or IPv4 Prefix Delegation prefixes, from a network interface.

The name of the compute and memory capacity node type for the cluster.

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

General purpose:
    Current generation:
        M6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). For region availability, see Supported Node Types.
        M5 node types
        M4 node types
        T4g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward)
        T3 node types
        T2 node types
    Previous generation: (not recommended)
        T1 node types
        M1 node types
        M3 node types

Compute optimized:
    Previous generation: (not recommended)
        C1 node types

Memory optimized with data tiering:
    Current generation:
        R6gd node types (available only for Redis engine version 6.2 onward)

Memory optimized:
    Current generation:
        R6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). For region availability, see Supported Node Types.
        R5 node types
        R4 node types
    Previous generation: (not recommended)
        M2 node types
        R3 node types

Additional node type info:
    All current generation instance types are created in Amazon VPC by default.
    Redis append-only files (AOF) are not supported for T1 or T2 instances.
    Redis Multi-AZ with automatic failover is not supported on T1 instances.
    Redis configuration variables

The name of the compute and memory capacity node type for the cluster. The supported node types are the same as in the list above, except that every "Previous generation:" note reads "(not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.)".

Represents an individual cache node within a cluster. Each cache node runs its own instance of the cluster's protocol-compliant caching software - either Memcached or Redis. The supported node types are the same as in the list above, with the "(not recommended)" previous-generation note.

Represents an individual cache node within a cluster. Each cache node runs its own instance of the cluster's protocol-compliant caching software - either Memcached or Redis. The supported node types are the same as in the list above, with the expanded previous-generation note.

The compute and memory capacity of the nodes in the node group (shard). The supported node types are the same as in the list above, but without the "Memory optimized with data tiering" section, and with the "(not recommended)" previous-generation note.

The compute and memory capacity of the nodes in the node group (shard). The supported node types are the same as in the list above, but without the "Memory optimized with data tiering" section, and with the expanded previous-generation note.

The compute and memory capacity of the nodes in the node group (shard). The supported node types are the same as in the full list above (including the "Memory optimized with data tiering" section), with the "(not recommended)" previous-generation note.

The compute and memory capacity of the nodes in the node group (shard). The supported node types are the same as in the full list above, with the expanded previous-generation note.

The cache node type filter value. Use this parameter to show only those reservations matching the specified cache node type. The supported node types are the same as in the full list above, with the "(not recommended)" previous-generation note.

The cache node type filter value. Use this parameter to show only those reservations matching the specified cache node type. The supported node types are the same as in the full list above, with the expanded previous-generation note.

The cache node type filter value. Use this parameter to show only the available offerings matching the specified cache node type. The supported node types are the same as in the full list above, with the "(not recommended)" previous-generation note.

The cache node type filter value. Use this parameter to show only the available offerings matching the specified cache node type. The supported node types are the same as in the full list above, with the expanded previous-generation note.

The cache node type for the reserved cache nodes. The supported node types are the same as in the full list above, with the "(not recommended)" previous-generation note.

The cache node type for the reserved cache nodes. The supported node types are the same as in the full list above, with the expanded previous-generation note.
\n C1 node types:\n\t\t\t Memory optimized with data tiering: Current generation: \n R6gd node types (available only for Redis engine version 6.2 onward). \t\n\t\t \n\t\t Memory optimized: Current generation: \n R6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). \t\n\t\t\t\t\t\t\t For region availability, see Supported Node Types\n \n R5 node types:\n \t\t\t\t\t \n R4 node types:\n \t\t\t\t\t Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) \n M2 node types:\t\t\t\t\t\t\n \t\t\t\t\t \n R3 node types:\n \t\t\t\t\t \n Additional node type info\n All current generation instance types are created in Amazon VPC by default. Redis append-only files (AOF) are not supported for T1 or T2 instances. Redis Multi-AZ with automatic failover is not supported on T1 instances. Redis configuration variables The cache node type for the reserved cache node. The following node types are supported by ElastiCache. \n\t\t\t\tGenerally speaking, the current generation types provide more memory and computational power\n\t\t\tat lower cost when compared to their equivalent previous generation counterparts. 
General purpose: Current generation: \n M6g node types: (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): \t For region availability, see Supported Node Types\n \n M5 node types:\n \t\t\t\t\t\t \n M4 node types:\n \t\t\t\t\t\t \n T4g node types (available only for Redis engine version 5.0.6 onward and Memcached engine version 1.5.16 onward):\n\t\t\t\t\t \n T3 node types:\n\t\t\t\t\t \n T2 node types:\n\t\t\t\t\t Previous generation: (not recommended) \n T1 node types:\n\t\t\t\t\t \n M1 node types:\n\t\t\t\t\t\t \n M3 node types:\n \t\t\t\t\t\t Compute optimized: Previous generation: (not recommended) \n C1 node types:\n\t\t\t Memory optimized with data tiering: Current generation: \n R6gd node types (available only for Redis engine version 6.2 onward). \t\n\t\t \n\t\t Memory optimized: Current generation: \n R6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). \t\n\t\t\t\t\t\t\t For region availability, see Supported Node Types\n \n R5 node types:\n \t\t\t\t\t \n R4 node types:\n \t\t\t\t\t Previous generation: (not recommended) \n M2 node types:\t\t\t\t\t\t\n \t\t\t\t\t \n R3 node types:\n \t\t\t\t\t \n Additional node type info\n All current generation instance types are created in Amazon VPC by default. Redis append-only files (AOF) are not supported for T1 or T2 instances. Redis Multi-AZ with automatic failover is not supported on T1 instances. Redis configuration variables The cache node type for the reserved cache node. The following node types are supported by ElastiCache. \n\t\t\t\tGenerally speaking, the current generation types provide more memory and computational power\n\t\t\tat lower cost when compared to their equivalent previous generation counterparts. 
General purpose: Current generation: \n M6g node types: (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): \t For region availability, see Supported Node Types\n \n M5 node types:\n \t\t\t\t\t\t \n M4 node types:\n \t\t\t\t\t\t \n T4g node types (available only for Redis engine version 5.0.6 onward and Memcached engine version 1.5.16 onward):\n\t\t\t\t\t \n T3 node types:\n\t\t\t\t\t \n T2 node types:\n\t\t\t\t\t Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) \n T1 node types:\n\t\t\t\t\t \n M1 node types:\n\t\t\t\t\t\t \n M3 node types:\n \t\t\t\t\t\t Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) \n C1 node types:\n\t\t\t Memory optimized with data tiering: Current generation: \n R6gd node types (available only for Redis engine version 6.2 onward). \t\n\t\t \n\t\t Memory optimized: Current generation: \n R6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). \t\n\t\t\t\t\t\t\t For region availability, see Supported Node Types\n \n R5 node types:\n \t\t\t\t\t \n R4 node types:\n \t\t\t\t\t Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) \n M2 node types:\t\t\t\t\t\t\n \t\t\t\t\t \n R3 node types:\n \t\t\t\t\t \n Additional node type info\n All current generation instance types are created in Amazon VPC by default. Redis append-only files (AOF) are not supported for T1 or T2 instances. Redis Multi-AZ with automatic failover is not supported on T1 instances. Redis configuration variables The name of the compute and memory capacity node type for the source cluster. The following node types are supported by ElastiCache. 
\n\t\t\t\tGenerally speaking, the current generation types provide more memory and computational power\n\t\t\tat lower cost when compared to their equivalent previous generation counterparts. General purpose: Current generation: \n M6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). \t\n For region availability, see Supported Node Types\n \n M5 node types:\n \t\t\t\t\t\t \n M4 node types:\n \t\t\t\t\t\t \n T4g node types (available only for Redis engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): \n\t\t\t\t\t \n T3 node types:\n\t\t\t\t\t \n T2 node types:\n\t\t\t\t\t Previous generation: (not recommended) \n T1 node types:\n\t\t\t\t\t \n M1 node types:\n\t\t\t\t\t\t \n M3 node types:\n \t\t\t\t\t\t Compute optimized: Previous generation: (not recommended) \n C1 node types:\n\t\t\t Memory optimized with data tiering: Current generation: \n R6gd node types (available only for Redis engine version 6.2 onward). \t\n\t\t \n\t\t Memory optimized: Current generation: \n R6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). \t\n\t\t For region availability, see Supported Node Types\n For region availability, see Supported Node Types\n \n R5 node types:\n \t\t\t\t\t \n R4 node types:\n \t\t\t\t\t Previous generation: (not recommended) \n M2 node types:\t\t\t\t\t\t\n \t\t\t\t\t \n R3 node types:\n \t\t\t\t\t \n Additional node type info\n All current generation instance types are created in Amazon VPC by default. Redis append-only files (AOF) are not supported for T1 or T2 instances. Redis Multi-AZ with automatic failover is not supported on T1 instances. Redis configuration variables The name of the compute and memory capacity node type for the source cluster. The following node types are supported by ElastiCache. 
\n\t\t\t\tGenerally speaking, the current generation types provide more memory and computational power\n\t\t\tat lower cost when compared to their equivalent previous generation counterparts. General purpose: Current generation: \n M6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). \t\n For region availability, see Supported Node Types\n \n M5 node types:\n \t\t\t\t\t\t \n M4 node types:\n \t\t\t\t\t\t \n T4g node types (available only for Redis engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): \n\t\t\t\t\t \n T3 node types:\n\t\t\t\t\t \n T2 node types:\n\t\t\t\t\t Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) \n T1 node types:\n\t\t\t\t\t \n M1 node types:\n\t\t\t\t\t\t \n M3 node types:\n \t\t\t\t\t\t Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) \n C1 node types:\n\t\t\t Memory optimized with data tiering: Current generation: \n R6gd node types (available only for Redis engine version 6.2 onward). \t\n\t\t \n\t\t Memory optimized: Current generation: \n R6g node types (available only for Redis engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward). \t\n\t\t For region availability, see Supported Node Types\n For region availability, see Supported Node Types\n \n R5 node types:\n \t\t\t\t\t \n R4 node types:\n \t\t\t\t\t Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) \n M2 node types:\t\t\t\t\t\t\n \t\t\t\t\t \n R3 node types:\n \t\t\t\t\t \n Additional node type info\n All current generation instance types are created in Amazon VPC by default. Redis append-only files (AOF) are not supported for T1 or T2 instances. 
Redis Multi-AZ with automatic failover is not supported on T1 instances. Redis configuration variables Specifies the FSx for ONTAP file system deployment type to use in creating the file system. \n Specifies the FSx for ONTAP file system deployment type to use in creating\n the file system. \n \n For information about the use cases for Multi-AZ and Single-AZ deployments, refer to\n Choosing Multi-AZ or\n Single-AZ file system deployment. Specifies the IP address range in which the endpoints to access your file system\n will be created. By default, Amazon FSx selects an unused IP address range for you\n from the 198.19.* range. The Endpoint IP address range you select for your file system\n must exist outside the VPC's CIDR range and must be at least /30 or larger. (Multi-AZ only) Specifies the IP address range in which the endpoints to access your\n file system will be created. By default, Amazon FSx selects an unused IP address\n range for you from the 198.19.* range. The Endpoint IP address range you select for your file system must exist outside\n the VPC's CIDR range and must be at least /30 or larger. Required when Required when Specifies the virtual private cloud (VPC) route tables in which your file system's\n endpoints will be created. You should specify all VPC route tables associated with the\n subnets in which your clients are located. By default, Amazon FSx selects your VPC's\n default route table. (Multi-AZ only) Specifies the virtual private cloud (VPC) route tables in which your\n file system's endpoints will be created. You should specify all VPC route tables\n associated with the subnets in which your clients are located. By default, Amazon FSx\n selects your VPC's default route table. Sets the throughput capacity for the file system that you're creating. \n Valid values are 128, 256, 512, 1024, and 2048 MBps. Sets the throughput capacity for the file system that you're creating. Valid values\n are 128, 256, 512, 1024, and 2048 MBps. 
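The endpoint IP address range constraints above (must exist outside the VPC's CIDR range and be at least /30 or larger) can be checked locally with Python's `ipaddress` module. This is an illustrative sketch of the documented rules, not part of the FSx API; the function name and boolean return convention are my own.

```python
import ipaddress

def valid_endpoint_ip_range(endpoint_cidr: str, vpc_cidr: str) -> bool:
    """Check an endpoint IP address range candidate against the documented rules."""
    endpoint = ipaddress.ip_network(endpoint_cidr)
    vpc = ipaddress.ip_network(vpc_cidr)
    # "/30 or larger" means a prefix length of 30 or less (a bigger block).
    if endpoint.prefixlen > 30:
        return False
    # The range must exist outside the VPC's CIDR range.
    if endpoint.overlaps(vpc):
        return False
    return True
```

For example, `valid_endpoint_ip_range("198.19.0.0/24", "10.0.0.0/16")` accepts a range in the default 198.19.* space, while a range inside the VPC's CIDR is rejected.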
The ONTAP configuration properties of the FSx for ONTAP file system that you are creating.

Specifies the FSx for ONTAP file system deployment type in use in the file system. For information about the use cases for Multi-AZ and Single-AZ deployments, refer to Choosing Multi-AZ or Single-AZ file system deployment.

(Multi-AZ only) The IP address range in which the endpoints to access your file system are created. The Endpoint IP address range you select for your file system must exist outside the VPC's CIDR range and must be at least /30 or larger. If you do not specify this optional parameter, Amazon FSx will automatically select a CIDR block for you.

(Multi-AZ only) The VPC route tables in which your file system's endpoints are created.

Registers a player's acceptance or rejection of a proposed FlexMatch match. A matchmaking configuration may require player acceptance; if so, then matches built with that configuration cannot be completed unless all players accept the proposed match within a specified time limit.

When FlexMatch builds a match, all the matchmaking tickets involved in the proposed match are placed into status

To register acceptance, specify the ticket ID, a response, and one or more players. Once all players have registered acceptance, the matchmaking tickets advance to status

If any player rejects the match, or if acceptances are not received before a specified timeout, the proposed match is dropped. The matchmaking tickets are then handled in one of two ways: For tickets where one or more players rejected the match or failed to respond, the ticket status is set to

Learn more: Add FlexMatch to a game client; FlexMatch events (reference)

Related actions: StartMatchmaking | DescribeMatchmaking | StopMatchmaking | AcceptMatch | StartMatchBackfill | All APIs by task
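The acceptance rules described above can be summarized in a small decision function: every required player must accept within the time limit, and any rejection or a timeout drops the proposed match. This is an illustrative sketch of the stated flow, not GameLift API code; the function name and return values are invented for the example.

```python
def resolve_proposed_match(responses: dict, required_players: list, timed_out: bool) -> str:
    """Apply the FlexMatch acceptance rules to a proposed match.

    responses maps player ID -> "ACCEPT" or "REJECT"; players who have not
    yet responded are absent from the map.
    """
    # Any rejection drops the proposed match immediately.
    if any(r == "REJECT" for r in responses.values()):
        return "dropped"
    # All required players accepted: the match can be completed.
    if all(responses.get(p) == "ACCEPT" for p in required_players):
        return "completed"
    # Missing acceptances: dropped at the timeout, otherwise still waiting.
    return "dropped" if timed_out else "waiting"
```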
Temporary key allowing access to the Amazon GameLift S3 account.

Temporary secret key allowing access to the Amazon GameLift S3 account.

Temporary access credentials used for uploading game build files to Amazon GameLift. They are valid for a limited time. If they expire before you upload your game build, get a new set by calling RequestUploadCredentials.

Indicates whether a TLS/SSL certificate is generated for a fleet. Valid values include:
- GENERATED - Generate a TLS/SSL certificate for this fleet.
- DISABLED - (default) Do not generate a TLS/SSL certificate for this fleet.

Creates an alias for a fleet. In most situations, you can use an alias ID in place of a fleet ID. An alias provides a level of abstraction for a fleet that is useful when redirecting player traffic from one fleet to another, such as when updating your game build.

Amazon GameLift supports two types of routing strategies for aliases: simple and terminal. A simple alias points to an active fleet. A terminal alias is used to display messaging or link to a URL instead of routing players to an active fleet. For example, you might use a terminal alias when a game version is no longer supported and you want to direct players to an upgrade site.

To create a fleet alias, specify an alias name, routing strategy, and optional description. Each simple alias can point to only one fleet, but a fleet can have multiple aliases. If successful, a new alias record is returned, including an alias ID and an ARN. You can reassign an alias to another fleet by calling

Related actions: CreateAlias | ListAliases | DescribeAlias | UpdateAlias | DeleteAlias | ResolveAlias | All APIs by task

Creates a new Amazon GameLift build resource for your game server binary files. Game server binaries must be combined into a zip file for use with Amazon GameLift.

When setting up a new game build for GameLift, we recommend using the Amazon Web Services CLI command upload-build. This helper command combines two tasks: (1) it uploads your build files from a file directory to a GameLift Amazon S3 location, and (2) it creates a new build resource.

The
- To create a new game build with build files that are in an Amazon S3 location under an Amazon Web Services account that you control. To use this option, you must first give Amazon GameLift access to the Amazon S3 bucket. With permissions in place, call
- To directly upload your build files to a GameLift Amazon S3 location. To use this option, first call

If successful, this operation creates a new build resource with a unique build ID and places it in

Learn more: Create a Build with Files in Amazon S3

Related actions: CreateBuild | ListBuilds | DescribeBuild | UpdateBuild | DeleteBuild | All APIs by task

Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region. If a

This element is returned only when the operation is called without a storage location. It contains credentials to use when you are uploading a build file to an Amazon S3 bucket that is owned by Amazon GameLift. Credentials have a limited life span. To refresh these credentials, call RequestUploadCredentials.

The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access your Amazon EC2 Auto Scaling groups.

Creates a multiplayer game session for players in a specific fleet location. This operation prompts an available server process to start a game session and retrieves connection information for the new game session. As an alternative, consider using the GameLift game session placement feature with StartGameSessionPlacement, which uses FleetIQ algorithms and queues to optimize the placement process.

When creating a game session, you specify exactly where you want to place it and provide a set of game session configuration settings. The fleet must be in

This operation can be used in the following ways:
- To create a game session on an instance in a fleet's home Region, provide a fleet or alias ID along with your game session configuration.
- To create a game session on an instance in a fleet's remote location, provide a fleet or alias ID and a location name, along with your game session configuration.

If successful, a workflow is initiated to start a new game session. A

Game session logs are retained for all active game sessions for 14 days. To access the logs, call GetGameSessionLogUrl to download the log files.

Available in Amazon GameLift Local.

Learn more: Start a game session

Related actions: CreateGameSession | DescribeGameSessions | DescribeGameSessionDetails | SearchGameSessions | UpdateGameSession | GetGameSessionLogUrl | StartGameSessionPlacement | DescribeGameSessionPlacement | StopGameSessionPlacement | All APIs by task

Reserves an open player slot in a game session for a player. New player sessions can be created in any game session with an open slot that is in

To create a player session, specify a game session ID, player ID, and optionally a set of player data. If successful, a slot is reserved in the game session for the player and a new PlayerSession object is returned with a player session ID. The player references the player session ID when sending a connection request to the game session, and the game server can use it to validate the player reservation with the GameLift service. Player sessions cannot be updated.

The maximum number of players per game session is 200. It is not adjustable.

Available in Amazon GameLift Local.

Related actions: CreatePlayerSession | CreatePlayerSessions | DescribePlayerSessions | StartGameSessionPlacement | DescribeGameSessionPlacement | All APIs by task

Reserves open slots in a game session for a group of players. New player sessions can be created in any game session with an open slot that is in

To create player sessions, specify a game session ID and a list of player IDs. Optionally, provide a set of player data for each player ID.
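The slot-reservation rule stated for player sessions (one open slot per requested player, with a fixed cap of 200 players per game session) can be sketched as a small check. The function and its arguments are illustrative, not the actual GameLift implementation.

```python
MAX_PLAYERS_PER_GAME_SESSION = 200  # fixed service limit; not adjustable

def reserve_slots(current_players: int, max_players: int, player_ids: list) -> int:
    """Return the new player count after reserving one slot per player ID."""
    if max_players > MAX_PLAYERS_PER_GAME_SESSION:
        raise ValueError("a game session cannot hold more than 200 players")
    open_slots = max_players - current_players
    if len(player_ids) > open_slots:
        # Not enough open slots: no reservations are made.
        raise RuntimeError("not enough open slots in the game session")
    return current_players + len(player_ids)
```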
If successful, a slot is reserved in the game session for each player, and new PlayerSession objects are returned with player session IDs. Each player\n references their player session ID when sending a connection request to the game\n session, and the game server can use it to validate the player reservation with the\n GameLift service. Player sessions cannot be updated. The maximum number of players per game session is 200. It is not adjustable.\n \n Available in Amazon GameLift Local.\n \n Related actions\n \n CreatePlayerSession |\n CreatePlayerSessions |\n DescribePlayerSessions |\n StartGameSessionPlacement | \n DescribeGameSessionPlacement |\n All APIs by task\n Map of string pairs, each specifying a player ID and a set of developer-defined\n information related to the player. Amazon Web Services does not use this data, so it can be formatted\n as needed for use in the game. Any player data strings for player IDs that are not\n included in the Map of string pairs, each specifying a player ID and a set of developer-defined\n information related to the player. Amazon GameLift does not use this data, so it can be formatted\n as needed for use in the game. Any player data strings for player IDs that are not\n included in the Creates a new script record for your Realtime Servers script. Realtime scripts are JavaScript that\n provide configuration settings and optional custom game logic for your game. The script\n is deployed when you create a Realtime Servers fleet to host your game sessions. Script logic is\n executed during an active game session. To create a new script record, specify a script name and provide the script file(s).\n The script files and all dependencies must be zipped into a single file. You can pull\n the zip file from either of these locations: A locally available directory. Use the ZipFile parameter for this\n option. An Amazon Simple Storage Service (Amazon S3) bucket under your Amazon Web Services account. 
Creates a new script record for your Realtime Servers script. Realtime scripts are JavaScript that provide configuration settings and optional custom game logic for your game. The script is deployed when you create a Realtime Servers fleet to host your game sessions. Script logic is executed during an active game session.

To create a new script record, specify a script name and provide the script file(s). The script files and all dependencies must be zipped into a single file. You can pull the zip file from either of these locations:

A locally available directory. Use the ZipFile parameter for this option.

An Amazon Simple Storage Service (Amazon S3) bucket under your Amazon Web Services account. Use the StorageLocation parameter for this option. You'll need an Identity and Access Management (IAM) role that allows the Amazon GameLift service to access your S3 bucket.

If the call is successful, a new script record is created with a unique script ID. If the script file is provided as a local file, the file is uploaded to an Amazon GameLift-owned S3 bucket and the script record's storage location reflects this location. If the script file is provided as an S3 bucket, Amazon GameLift accesses the file at this storage location as needed for deployment.

Learn more: Amazon GameLift Realtime Servers | Set Up a Role for Amazon GameLift Access

Related actions: CreateScript | ListScripts | DescribeScript | UpdateScript | DeleteScript | All APIs by task

The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the "key"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the ObjectVersion parameter to specify an earlier version.

The newly created script record with a unique script ID and ARN. The new script's storage location reflects an Amazon S3 location: (1) If the script was uploaded from an S3 bucket under your account, the storage location reflects the information that was provided in the CreateScript request; (2) If the script file was uploaded from a local zip file, the storage location reflects an S3 location controlled by the Amazon GameLift service.

Requests authorization to create or delete a peer connection between the VPC for your Amazon GameLift fleet and a virtual private cloud (VPC) in your Amazon Web Services account. VPC peering enables the game servers on your fleet to communicate directly with other Amazon Web Services resources. Once you've received authorization, call CreateVpcPeeringConnection to establish the peering connection. For more information, see VPC Peering with Amazon GameLift Fleets.

You can peer with VPCs that are owned by any Amazon Web Services account you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different Regions.

To request authorization to create a connection, call this operation from the Amazon Web Services account with the VPC that you want to peer to your Amazon GameLift fleet. For example, to enable your game servers to retrieve data from a DynamoDB table, use the account that manages that DynamoDB resource. Identify the following values: (1) The ID of the VPC that you want to peer with, and (2) the ID of the Amazon Web Services account that you use to manage Amazon GameLift. If successful, VPC peering is authorized for the specified VPC.

To request authorization to delete a connection, call this operation from the Amazon Web Services account with the VPC that is peered with your Amazon GameLift fleet. Identify the following values: (1) VPC ID that you want to delete the peering connection for, and (2) ID of the Amazon Web Services account that you use to manage Amazon GameLift.

The authorization remains valid for 24 hours unless it is canceled by a call to DeleteVpcPeeringAuthorization. You must create or delete the peering connection while the authorization is valid.

Related actions: CreateVpcPeeringAuthorization | DescribeVpcPeeringAuthorizations | DeleteVpcPeeringAuthorization | CreateVpcPeeringConnection | DescribeVpcPeeringConnections | DeleteVpcPeeringConnection | All APIs by task

Establishes a VPC peering connection between a virtual private cloud (VPC) in an Amazon Web Services account with the VPC for your Amazon GameLift fleet. VPC peering enables the game servers on your fleet to communicate directly with other Amazon Web Services resources. You can peer with VPCs in any Amazon Web Services account that you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different Regions. For more information, see VPC Peering with Amazon GameLift Fleets.

Before calling this operation to establish the peering connection, you first need to call CreateVpcPeeringAuthorization and identify the VPC you want to peer with. Once the authorization for the specified VPC is issued, you have 24 hours to establish the connection. These two operations handle all tasks necessary to peer the two VPCs, including acceptance, updating routing tables, etc.

To establish the connection, call this operation from the Amazon Web Services account that is used to manage the Amazon GameLift fleets. Identify the following values: (1) The ID of the fleet you want to enable a VPC peering connection for; (2) The Amazon Web Services account with the VPC that you want to peer with; and (3) The ID of the VPC you want to peer with. This operation is asynchronous. If successful, a VpcPeeringConnection request is created. You can use continuous polling to track the request's status using DescribeVpcPeeringConnections, or by monitoring fleet events for success or failure using DescribeFleetEvents.

Related actions: CreateVpcPeeringAuthorization | DescribeVpcPeeringAuthorizations | DeleteVpcPeeringAuthorization | CreateVpcPeeringConnection | DescribeVpcPeeringConnections | DeleteVpcPeeringConnection | All APIs by task

A unique identifier for the fleet. You can use either the fleet ID or ARN value. This tells Amazon GameLift which GameLift VPC to peer with.

A unique identifier for the Amazon Web Services account with the VPC that you want to peer your Amazon GameLift fleet with. You can find your Account ID in the Amazon Web Services Management Console under account settings.

Deletes a Realtime script. This operation permanently deletes the script record. If script files were uploaded, they are also deleted (files stored in an S3 bucket are not deleted).

To delete a script, specify the script ID. Before deleting a script, be sure to terminate all fleets that are deployed with the script being deleted. Fleet instances periodically check for script updates, and if the script record no longer exists, the instance will go into an error state and be unable to host game sessions.

Learn more: Amazon GameLift Realtime Servers

Related actions: CreateScript | ListScripts | DescribeScript | UpdateScript | DeleteScript | All APIs by task

Removes a VPC peering connection. To delete the connection, you must have a valid authorization for the VPC peering connection that you want to delete. You can check for an authorization by calling DescribeVpcPeeringAuthorizations or request a new one using CreateVpcPeeringAuthorization.

Once a valid authorization exists, call this operation from the Amazon Web Services account that is used to manage the Amazon GameLift fleets. Identify the connection to delete by the connection ID and fleet ID. If successful, the connection is removed.

Related actions: CreateVpcPeeringAuthorization | DescribeVpcPeeringAuthorizations | DeleteVpcPeeringAuthorization | CreateVpcPeeringConnection | DescribeVpcPeeringConnections | DeleteVpcPeeringConnection | All APIs by task

Retrieves a set of one or more game sessions in a specific fleet location. You can optionally filter the results by current game session status. Alternatively, use SearchGameSessions to request a set of active game sessions that are filtered by certain criteria. To retrieve the protection policy for game sessions, use DescribeGameSessionDetails.

This operation is not designed to be continually called to track game session status. This practice can cause you to exceed your API limit, which results in errors. Instead, you must configure an Amazon Simple Notification Service (SNS) topic to receive notifications from FlexMatch or queues. Continuously polling with DescribeGameSessions should only be used for games in development with low game session usage.

This operation can be used in the following ways:

To retrieve all game sessions that are currently running on all locations in a fleet, provide a fleet or alias ID, with an optional status filter. This approach returns all game sessions in the fleet's home Region and all remote locations.

To retrieve all game sessions that are currently running on a specific fleet location, provide a fleet or alias ID and a location name, with an optional status filter. The location can be the fleet's home Region or any remote location.

To retrieve a specific game session, provide the game session ID. This approach looks for the game session ID in all fleets that reside in the Amazon Web Services Region defined in the request.

Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSession object is returned for each game session that matches the request.

This operation is not designed to be continually called to track matchmaking ticket status. This practice can cause you to exceed your API limit, which results in errors. Instead, as a best practice, set up an Amazon Simple Notification Service topic to receive notifications, and provide the topic ARN in the matchmaking configuration. Continuously polling ticket status with DescribeGameSessions should only be used for games in development with low matchmaking usage.

Available in Amazon GameLift Local.

Learn more: Find a game session

Related actions: CreateGameSession | DescribeGameSessions | DescribeGameSessionDetails | SearchGameSessions | UpdateGameSession | GetGameSessionLogUrl | StartGameSessionPlacement | DescribeGameSessionPlacement | StopGameSessionPlacement | All APIs by task

Retrieves properties for one or more player sessions.

This action can be used in the following ways:

To retrieve a specific player session, provide the player session ID only.

To retrieve all player sessions in a game session, provide the game session ID only.

To retrieve all player sessions for a specific player, provide a player ID only.

To request player sessions, specify either a player session ID, game session ID, or player ID. You can filter this request by player session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a PlayerSession object is returned for each session that matches the request.

Available in Amazon GameLift Local.

Related actions: CreatePlayerSession | CreatePlayerSessions | DescribePlayerSessions | StartGameSessionPlacement | DescribeGameSessionPlacement | All APIs by task

Retrieves properties for a Realtime script.

To request a script record, specify the script ID. If successful, an object containing the script properties is returned.

Learn more: Amazon GameLift Realtime Servers

Related actions: CreateScript | ListScripts | DescribeScript | UpdateScript | DeleteScript | All APIs by task

Retrieves information on VPC peering connections. Use this operation to get peering information for all fleets or for one specific fleet ID.

To retrieve connection information, call this operation from the Amazon Web Services account that is used to manage the Amazon GameLift fleets. Specify a fleet ID or leave the parameter empty to retrieve all connection records. If successful, the retrieved information includes both active and pending connections. Active connections identify the IpV4 CIDR block that the VPC uses to connect.

Related actions: CreateVpcPeeringAuthorization | DescribeVpcPeeringAuthorizations | DeleteVpcPeeringAuthorization | CreateVpcPeeringConnection | DescribeVpcPeeringConnections | DeleteVpcPeeringConnection | All APIs by task
The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access your Amazon EC2 Auto Scaling groups.

A starting value for a range of allowed port numbers. For fleets using Windows and Linux builds, only ports 1026-60000 are valid.

An ending value for a range of allowed port numbers. Port numbers are end-inclusive. This value must be higher than FromPort. For fleets using Windows and Linux builds, only ports 1026-60000 are valid.

The version of the Amazon EC2 launch template to use. If no version is specified, the default version will be used. With Amazon EC2, you can specify a default version for a launch template. If none is set, the default is the first version created.

This data type is used with the GameLift FleetIQ and game server groups. An Amazon EC2 launch template that contains configuration settings and game server code to be deployed to all instances in a game server group. The launch template is specified when creating a new game server group with CreateGameServerGroup.
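The FromPort/ToPort rules above (end-inclusive, ToPort higher than FromPort, and only ports 1026-60000 valid for Windows and Linux builds) can be checked with a small validator. A minimal sketch; `port_range_is_valid` is a hypothetical helper, not part of any SDK.

```python
VALID_PORTS = range(1026, 60001)  # end-inclusive 1026-60000, per the docs

def port_range_is_valid(from_port: int, to_port: int) -> bool:
    # ToPort must be higher than FromPort, and both ends must fall in the
    # allowed range for Windows and Linux builds.
    return (
        to_port > from_port
        and from_port in VALID_PORTS
        and to_port in VALID_PORTS
    )
```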
The launch template is specified\n when creating a new game server group with CreateGameServerGroup. Retrieves script records for all Realtime scripts that are associated with the Amazon Web Services account in use. \n Learn more\n \n Amazon Web Services Realtime Servers\n \n Related actions\n \n CreateScript | \n ListScripts | \n DescribeScript | \n UpdateScript | \n DeleteScript | \n All APIs by task\n Retrieves script records for all Realtime scripts that are associated with the Amazon Web Services account in use. \n Learn more\n \n Amazon GameLift Realtime Servers\n \n Related actions\n \n CreateScript | \n ListScripts | \n DescribeScript | \n UpdateScript | \n DeleteScript | \n All APIs by task\n Port number for the game session. To connect to a Amazon Web Services server process, an app\n needs both the IP address and port number. Port number for the game session. To connect to a Amazon GameLift server process, an app\n needs both the IP address and port number. Creates or updates a scaling policy for a fleet. Scaling policies are used to\n automatically scale a fleet's hosting capacity to meet player demand. An active scaling\n policy instructs Amazon Web Services to track a fleet metric and automatically change the fleet's\n capacity when a certain threshold is reached. There are two types of scaling policies:\n target-based and rule-based. Use a target-based policy to quickly and efficiently manage\n fleet scaling; this option is the most commonly used. Use rule-based policies when you\n need to exert fine-grained control over auto-scaling. Fleets can have multiple scaling policies of each type in force at the same time;\n you can have one target-based policy, one or multiple rule-based scaling policies, or\n both. We recommend caution, however, because multiple auto-scaling policies can have\n unintended consequences. You can temporarily suspend all scaling policies for a fleet by calling StopFleetActions with the fleet action AUTO_SCALING. 
To resume scaling\n policies, call StartFleetActions with the same fleet action. To stop\n just one scaling policy--or to permanently remove it, you must delete the policy with\n DeleteScalingPolicy. Learn more about how to work with auto-scaling in Set Up Fleet Automatic\n Scaling. \n Target-based policy\n A target-based policy tracks a single metric: PercentAvailableGameSessions. This\n metric tells us how much of a fleet's hosting capacity is ready to host game sessions\n but is not currently in use. This is the fleet's buffer; it measures the additional\n player demand that the fleet could handle at current capacity. With a target-based\n policy, you set your ideal buffer size and leave it to Amazon Web Services to take whatever action\n is needed to maintain that target. For example, you might choose to maintain a 10% buffer for a fleet that has the\n capacity to host 100 simultaneous game sessions. This policy tells Amazon Web Services to take\n action whenever the fleet's available capacity falls below or rises above 10 game\n sessions. Amazon Web Services will start new instances or stop unused instances in order to return\n to the 10% buffer. To create or update a target-based policy, specify a fleet ID and name, and set the\n policy type to \"TargetBased\". Specify the metric to track (PercentAvailableGameSessions)\n and reference a TargetConfiguration object with your desired buffer\n value. Exclude all other parameters. On a successful request, the policy name is\n returned. The scaling policy is automatically in force as soon as it's successfully\n created. If the fleet's auto-scaling actions are temporarily suspended, the new policy\n will be in force once the fleet actions are restarted. \n Rule-based policy\n A rule-based policy tracks specified fleet metric, sets a threshold value, and\n specifies the type of action to initiate when triggered. With a rule-based policy, you\n can select from several available fleet metrics. 
Each policy specifies whether to scale\n up or scale down (and by how much), so you need one policy for each type of action. For example, a policy may make the following statement: \"If the percentage of idle\n instances is greater than 20% for more than 15 minutes, then reduce the fleet capacity\n by 10%.\" A policy's rule statement has the following structure: If To implement the example, the rule statement would look like this: If To create or update a scaling policy, specify a unique combination of name and\n fleet ID, and set the policy type to \"RuleBased\". Specify the parameter values for a\n policy rule statement. On a successful request, the policy name is returned. Scaling\n policies are automatically in force as soon as they're successfully created. If the\n fleet's auto-scaling actions are temporarily suspended, the new policy will be in force\n once the fleet actions are restarted. \n Related actions\n \n DescribeFleetCapacity | \n UpdateFleetCapacity | \n DescribeEC2InstanceLimits | \n PutScalingPolicy | \n DescribeScalingPolicies | \n DeleteScalingPolicy | \n StopFleetActions | \n StartFleetActions | \n All APIs by task\n Creates or updates a scaling policy for a fleet. Scaling policies are used to\n automatically scale a fleet's hosting capacity to meet player demand. An active scaling\n policy instructs Amazon GameLift to track a fleet metric and automatically change the fleet's\n capacity when a certain threshold is reached. There are two types of scaling policies:\n target-based and rule-based. Use a target-based policy to quickly and efficiently manage\n fleet scaling; this option is the most commonly used. Use rule-based policies when you\n need to exert fine-grained control over auto-scaling. Fleets can have multiple scaling policies of each type in force at the same time;\n you can have one target-based policy, one or multiple rule-based scaling policies, or\n both. 
We recommend caution, however, because multiple auto-scaling policies can have unintended consequences.

You can temporarily suspend all scaling policies for a fleet by calling StopFleetActions with the fleet action AUTO_SCALING. To resume scaling policies, call StartFleetActions with the same fleet action. To stop just one scaling policy--or to permanently remove it, you must delete the policy with DeleteScalingPolicy. Learn more about how to work with auto-scaling in Set Up Fleet Automatic Scaling.

Target-based policy

A target-based policy tracks a single metric: PercentAvailableGameSessions. This metric tells us how much of a fleet's hosting capacity is ready to host game sessions but is not currently in use. This is the fleet's buffer; it measures the additional player demand that the fleet could handle at current capacity. With a target-based policy, you set your ideal buffer size and leave it to Amazon GameLift to take whatever action is needed to maintain that target.

For example, you might choose to maintain a 10% buffer for a fleet that has the capacity to host 100 simultaneous game sessions. This policy tells Amazon GameLift to take action whenever the fleet's available capacity falls below or rises above 10 game sessions. Amazon GameLift will start new instances or stop unused instances in order to return to the 10% buffer.

To create or update a target-based policy, specify a fleet ID and name, and set the policy type to "TargetBased". Specify the metric to track (PercentAvailableGameSessions) and reference a TargetConfiguration object with your desired buffer value. Exclude all other parameters. On a successful request, the policy name is returned. The scaling policy is automatically in force as soon as it's successfully created. If the fleet's auto-scaling actions are temporarily suspended, the new policy will be in force once the fleet actions are restarted.
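The target-based flow described above can be sketched as a request payload. This is a minimal illustration only, not a definitive SDK call: the policy name and fleet ID are placeholders, and the field names follow the API shape described in this section.

```python
# Sketch of a target-based PutScalingPolicy request. With a fleet that can
# host 100 game sessions, a 10% buffer means keeping ~10 sessions available.
def build_target_based_policy(fleet_id: str, buffer_percent: float) -> dict:
    """Build the request payload for a target-based scaling policy."""
    return {
        "Name": "maintain-available-buffer",  # hypothetical policy name
        "FleetId": fleet_id,                  # placeholder fleet ID
        "PolicyType": "TargetBased",
        # A target-based policy tracks exactly this metric:
        "MetricName": "PercentAvailableGameSessions",
        "TargetConfiguration": {"TargetValue": buffer_percent},
        # Per the docs above, all rule-based parameters are excluded.
    }

request = build_target_based_policy("fleet-1234", 10.0)
print(request["PolicyType"])  # TargetBased
```

The same payload could be passed as keyword arguments to an SDK client; only the three scaling-specific fields distinguish it from a rule-based request.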
Rule-based policy

A rule-based policy tracks a specified fleet metric, sets a threshold value, and specifies the type of action to initiate when triggered. With a rule-based policy, you can select from several available fleet metrics. Each policy specifies whether to scale up or scale down (and by how much), so you need one policy for each type of action.

For example, a policy may make the following statement: "If the percentage of idle instances is greater than 20% for more than 15 minutes, then reduce the fleet capacity by 10%."

A policy's rule statement has the following structure: If

To implement the example, the rule statement would look like this: If

To create or update a scaling policy, specify a unique combination of name and fleet ID, and set the policy type to "RuleBased". Specify the parameter values for a policy rule statement. On a successful request, the policy name is returned. Scaling policies are automatically in force as soon as they're successfully created. If the fleet's auto-scaling actions are temporarily suspended, the new policy will be in force once the fleet actions are restarted.

Related actions

DescribeFleetCapacity | UpdateFleetCapacity | DescribeEC2InstanceLimits | PutScalingPolicy | DescribeScalingPolicies | DeleteScalingPolicy | StopFleetActions | StartFleetActions | All APIs by task

Name of the Amazon GameLift-defined metric that is used to trigger a scaling adjustment. For detailed descriptions of fleet metrics, see Monitor Amazon GameLift with Amazon CloudWatch.

ActivatingGameSessions -- Game sessions in the process of being created.
ActiveGameSessions -- Game sessions that are currently running.
ActiveInstances -- Fleet instances that are currently running at least one game session.
AvailableGameSessions -- Additional game sessions that the fleet could host simultaneously, given current capacity.
AvailablePlayerSessions -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.
CurrentPlayerSessions -- Player slots in active game sessions that are being used by a player or are reserved for a player.
IdleInstances -- Active instances that are currently hosting zero game sessions.
PercentAvailableGameSessions -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. Use this metric for a target-based scaling policy.
PercentIdleInstances -- Percentage of the total number of active instances that are hosting zero game sessions.
QueueDepth -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.
WaitTime -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.

Retrieves a fresh set of credentials for use when uploading a new set of game build files to Amazon GameLift's Amazon S3. This is done as part of the build creation process; see CreateBuild. To request new credentials, specify the build ID as returned with an initial

Learn more

Create a Build with Files in S3

Related actions

CreateBuild | ListBuilds | DescribeBuild | UpdateBuild | DeleteBuild | All APIs by task

The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access the S3 bucket.

The version of the file, if object versioning is turned on for the bucket. Amazon GameLift uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. If not set, the latest version of the file is retrieved.

The location in Amazon S3 where build or script files are stored for access by Amazon GameLift. This location is specified in CreateBuild, CreateScript, and UpdateScript requests.
Name of the Amazon GameLift-defined metric that is used to trigger a scaling adjustment. For detailed descriptions of fleet metrics, see Monitor Amazon GameLift with Amazon CloudWatch.

ActivatingGameSessions -- Game sessions in the process of being created.
ActiveGameSessions -- Game sessions that are currently running.
ActiveInstances -- Fleet instances that are currently running at least one game session.
AvailableGameSessions -- Additional game sessions that the fleet could host simultaneously, given current capacity.
AvailablePlayerSessions -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.
CurrentPlayerSessions -- Player slots in active game sessions that are being used by a player or are reserved for a player.
IdleInstances -- Active instances that are currently hosting zero game sessions.
PercentAvailableGameSessions -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. Use this metric for a target-based scaling policy.
PercentIdleInstances -- Percentage of the total number of active instances that are hosting zero game sessions.
QueueDepth -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.
WaitTime -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.
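The earlier example rule ("if the percentage of idle instances is greater than 20% for more than 15 minutes, reduce fleet capacity by 10%") might translate into a rule-based request like the following sketch. The policy name is a placeholder, and the exact operator and adjustment-type strings are assumptions based on the rule structure described in this section.

```python
# Sketch of a rule-based PutScalingPolicy request implementing the example
# rule using the PercentIdleInstances metric documented above.
def build_rule_based_policy(fleet_id: str) -> dict:
    """Build the request payload for a rule-based scaling policy."""
    return {
        "Name": "scale-down-idle",   # hypothetical policy name
        "FleetId": fleet_id,         # placeholder fleet ID
        "PolicyType": "RuleBased",
        "MetricName": "PercentIdleInstances",
        # "greater than 20% for more than 15 minutes":
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "Threshold": 20.0,
        "EvaluationPeriods": 15,     # minutes the condition must hold
        # "reduce the fleet capacity by 10%":
        "ScalingAdjustmentType": "PercentChangeInCapacity",
        "ScalingAdjustment": -10,    # negative value scales down
    }
```

Because each rule-based policy scales in only one direction, a matching scale-up policy (for example on low PercentAvailableGameSessions) would be a separate request.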
Places a request for a new game session in a queue (see CreateGameSessionQueue). When processing a placement request, Amazon GameLift searches for available resources on the queue's destinations, scanning each until it finds resources or the placement request times out.

A game session placement request can also request player sessions. When a new game session is successfully created, Amazon GameLift creates a player session for each player included in the request.

When placing a game session, by default Amazon GameLift tries each fleet in the order they are listed in the queue configuration. Ideally, a queue's destinations are listed in preference order.

Alternatively, when requesting a game session with players, you can also provide latency data for each player in relevant Regions. Latency data indicates the performance lag a player experiences when connected to a fleet in the Region. Amazon GameLift uses latency data to reorder the list of destinations to place the game session in a Region with minimal lag. If latency data is provided for multiple players, Amazon GameLift calculates each Region's average lag for all players and reorders to get the best game play across all players.

To place a new game session request, specify the following:

The queue name and a set of game session properties and settings
A unique ID (such as a UUID) for the placement.
You use this ID to track the status of the placement request
(Optional) A set of player data and a unique player ID for each player that you are joining to the new game session (player data is optional, but if you include it, you must also provide a unique ID for each player)
Latency data for all players (if you want to optimize game play for the players)

If successful, a new game session placement is created. To track the status of a placement request, call DescribeGameSessionPlacement and check the request's status. If the status is

Related actions

CreateGameSession | DescribeGameSessions | DescribeGameSessionDetails | SearchGameSessions | UpdateGameSession | GetGameSessionLogUrl | StartGameSessionPlacement | DescribeGameSessionPlacement | StopGameSessionPlacement | All APIs by task

A unique identifier for a matchmaking ticket. If no ticket ID is specified here, Amazon GameLift will generate one in the form of a UUID. Use this identifier to track the match backfill ticket status and retrieve match results.

A unique identifier for a matchmaking ticket. If no ticket ID is specified here, Amazon GameLift will generate one in the form of a UUID. Use this identifier to track the matchmaking ticket status and retrieve match results.
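The placement steps described above (queue name, unique placement ID, optional player data with unique player IDs, and per-Region latency data) can be sketched as a request builder. All names and values here are illustrative placeholders, not a definitive SDK call.

```python
import uuid

# Sketch of a StartGameSessionPlacement request. `players` maps a unique
# player ID to that player's observed latency (ms) in a hypothetical Region.
def build_placement_request(queue_name: str, players: dict) -> dict:
    """Build the request payload for a game session placement."""
    return {
        "GameSessionQueueName": queue_name,
        # Unique placement ID, used later to track status via
        # DescribeGameSessionPlacement:
        "PlacementId": str(uuid.uuid4()),
        "MaximumPlayerSessionCount": 8,
        # One entry per player to join the new game session:
        "DesiredPlayerSessions": [
            {"PlayerId": player_id} for player_id in players
        ],
        # Latency each player observes; used to pick a Region with minimal
        # average lag across all players. Region name is a placeholder.
        "PlayerLatencies": [
            {
                "PlayerId": pid,
                "RegionIdentifier": "us-east-1",
                "LatencyInMilliseconds": float(ms),
            }
            for pid, ms in players.items()
        ],
    }
```

A caller would poll DescribeGameSessionPlacement with the same PlacementId until the placement resolves or times out.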
Settings for a target-based scaling policy (see ScalingPolicy). A target-based policy tracks a particular fleet metric and specifies a target value for the metric. As player usage changes, the policy triggers Amazon GameLift to adjust capacity so that the metric returns to the target value. The target configuration specifies settings as needed for the target-based policy, including the target value.

Related actions

DescribeFleetCapacity | UpdateFleetCapacity | DescribeEC2InstanceLimits | PutScalingPolicy | DescribeScalingPolicies | DeleteScalingPolicy | StopFleetActions | StartFleetActions | All APIs by task

The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access your Amazon EC2 Auto Scaling groups.
Updates Realtime script metadata and content.

To update script metadata, specify the script ID and provide updated name and/or version values.

To update script content, provide an updated zip file by pointing to either a local file or an Amazon S3 bucket location. You can use either method regardless of how the original script was uploaded. Use the Version parameter to track updates to the script.

If the call is successful, the updated metadata is stored in the script record and a revised script is uploaded to the Amazon GameLift service. Once the script is updated and acquired by a fleet instance, the new version is used for all new game sessions.

Learn more

Amazon GameLift Realtime Servers

Related actions

CreateScript | ListScripts | DescribeScript | UpdateScript | DeleteScript | All APIs by task

The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the "key"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the

The newly created script record with a unique script ID. The new script's storage location reflects an Amazon S3 location: (1) If the script was uploaded from an S3 bucket under your account, the storage location reflects the information that was provided in the CreateScript request; (2) If the script file was uploaded from a local zip file, the storage location reflects an S3 location controlled by the Amazon GameLift service.

Represents an authorization for a VPC peering connection between the VPC for an Amazon GameLift fleet and another VPC on an account you have access to. This authorization must exist and be valid for the peering connection to be established. Authorizations are valid for 24 hours after they are issued.
Related actions

CreateVpcPeeringAuthorization | DescribeVpcPeeringAuthorizations | DeleteVpcPeeringAuthorization | CreateVpcPeeringConnection | DescribeVpcPeeringConnections | DeleteVpcPeeringConnection | All APIs by task

A unique identifier for the fleet. This ID determines the ID of the Amazon GameLift VPC for your fleet.

A unique identifier for the VPC that contains the Amazon GameLift fleet for this connection. This VPC is managed by Amazon GameLift and does not appear in your Amazon Web Services account.

Represents a peering connection between a VPC on one of your Amazon Web Services accounts and the VPC for your Amazon GameLift fleets. This record may be for an active peering connection or a pending connection that has not yet been established.

Related actions

CreateVpcPeeringAuthorization | DescribeVpcPeeringAuthorizations | DeleteVpcPeeringAuthorization | CreateVpcPeeringConnection | DescribeVpcPeeringConnections | DeleteVpcPeeringConnection | All APIs by task

Retrieves the details for the custom patterns specified by a list of names.
A list of names of the custom patterns that you want to retrieve.

A list of

A list of the names of custom patterns that were not found.

Specifies the connections used by a job.

Specifies whether the crawler should use AWS Lake Formation credentials instead of the IAM role credentials.

Specifies a custom CSV classifier for

Creates a custom pattern that is used to detect sensitive data across the columns and rows of your structured data. Each custom pattern you create specifies a regular expression and an optional list of context words. If no context words are passed, only a regular expression is checked.

A name for the custom pattern that allows it to be retrieved or deleted later. This name must be unique per Amazon Web Services account.

A regular expression string that is used for detecting sensitive data in a custom pattern.

A list of context words. If none of these context words are found within the vicinity of the regular expression, the data will not be detected as sensitive data. If no context words are passed, only a regular expression is checked.

The name of the custom pattern you created.

A name for the custom pattern that allows it to be retrieved or deleted later. This name must be unique per Amazon Web Services account.

A regular expression string that is used for detecting sensitive data in a custom pattern.

A list of context words. If none of these context words are found within the vicinity of the regular expression, the data will not be detected as sensitive data. If no context words are passed, only a regular expression is checked.

An object representing a custom pattern for detecting sensitive data across the columns and rows of your structured data.

Deletes a custom pattern by specifying its name.

The name of the custom pattern that you want to delete.

The name of the custom pattern you deleted.

Retrieves the details of a custom pattern by specifying its name.

The name of the custom pattern that you want to retrieve.
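A create request following the custom-pattern description above might look like this sketch; the pattern name, regex, and context words are invented for illustration.

```python
# Sketch of a "create custom pattern" request: a unique name, a detection
# regex, and optional context words that must appear near a match.
def build_custom_pattern(name: str) -> dict:
    """Build the payload for a custom sensitive-data detection pattern."""
    return {
        "Name": name,  # must be unique per AWS account
        # Illustrative pattern matching a made-up employee-ID format:
        "RegexString": r"EMP-\d{6}",
        # Without one of these words near a regex match, the value is not
        # flagged as sensitive data. With an empty list, only the regex
        # is checked.
        "ContextWords": ["employee", "badge", "staff"],
    }
```

Deleting or retrieving the pattern later only requires the same unique name.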
The name of the custom pattern that you retrieved.

A regular expression string that is used for detecting sensitive data in a custom pattern.

A list of context words if specified when you created the custom pattern. If none of these context words are found within the vicinity of the regular expression, the data will not be detected as sensitive data.

Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark. For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide. Jobs that are created without specifying a Glue version default to Glue 0.9.

This field populates only when an Auto Scaling job run completes, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for

Specifies whether to use AWS Lake Formation credentials for the crawler instead of the IAM role credentials. Required for cross-account crawls. For same-account crawls as the target data, this can be left as null.

Specifies AWS Lake Formation configuration settings for the crawler.

Lists all the custom patterns that have been created.

A paginated token to offset the results.

The maximum number of results to return.

A list of

A pagination token, if more results are available.

The agent through which the API request was made.

Represents the criteria to be used in the filter for querying findings.
You can only use the following attributes to query findings:
accountId
region
confidence
id
resource.accessKeyDetails.accessKeyId
resource.accessKeyDetails.principalId
resource.accessKeyDetails.userName
resource.accessKeyDetails.userType
resource.instanceDetails.iamInstanceProfile.id
resource.instanceDetails.imageId
resource.instanceDetails.instanceId
resource.instanceDetails.outpostArn
resource.instanceDetails.networkInterfaces.ipv6Addresses
resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress
resource.instanceDetails.networkInterfaces.publicDnsName
resource.instanceDetails.networkInterfaces.publicIp
resource.instanceDetails.networkInterfaces.securityGroups.groupId
resource.instanceDetails.networkInterfaces.securityGroups.groupName
resource.instanceDetails.networkInterfaces.subnetId
resource.instanceDetails.networkInterfaces.vpcId
resource.instanceDetails.tags.key
resource.instanceDetails.tags.value
resource.resourceType
service.action.actionType
service.action.awsApiCallAction.api
service.action.awsApiCallAction.callerType service.action.awsApiCallAction.errorCode service.action.awsApiCallAction.userAgent service.action.awsApiCallAction.remoteIpDetails.city.cityName service.action.awsApiCallAction.remoteIpDetails.country.countryName service.action.awsApiCallAction.remoteIpDetails.ipAddressV4 service.action.awsApiCallAction.remoteIpDetails.organization.asn service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg service.action.awsApiCallAction.serviceName service.action.dnsRequestAction.domain service.action.networkConnectionAction.blocked service.action.networkConnectionAction.connectionDirection service.action.networkConnectionAction.localPortDetails.port service.action.networkConnectionAction.protocol service.action.networkConnectionAction.localIpDetails.ipAddressV4 service.action.networkConnectionAction.remoteIpDetails.city.cityName service.action.networkConnectionAction.remoteIpDetails.country.countryName service.action.networkConnectionAction.remoteIpDetails.ipAddressV4 service.action.networkConnectionAction.remoteIpDetails.organization.asn service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg service.action.networkConnectionAction.remotePortDetails.port service.additionalInfo.threatListName resource.s3BucketDetails.publicAccess.effectivePermissions resource.s3BucketDetails.name resource.s3BucketDetails.tags.key resource.s3BucketDetails.tags.value resource.s3BucketDetails.type service.archived When this attribute is set to TRUE, only archived findings are listed. When it's set\n to FALSE, only unarchived findings are listed. When this attribute is not set, all\n existing findings are listed. service.resourceRole severity type updatedAt Type: ISO 8601 string format: YYYY-MM-DDTHH:MM:SS.SSSZ or YYYY-MM-DDTHH:MM:SSZ\n depending on whether the value contains milliseconds. Disassociates GuardDuty member accounts (to the current GuardDuty administrator account)\n specified by the account IDs. 
Disassociates GuardDuty member accounts (from the current GuardDuty administrator account) \n specified by the account IDs. Member accounts added through Invitation get deleted from the \n current GuardDuty administrator account after 30 days of disassociation. The ID of the asset property. A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required. The ID of the asset. The ARN of the asset, which has the following format. \n The name of the asset. The ID of the asset model used to create the asset. The date the asset was created, in Unix epoch time. The date the asset was last updated, in Unix epoch time. The current status of the asset. A list of asset hierarchies that each contain a Contains a summary of an associated asset. The default value of the asset model property attribute. All assets that you create from\n the asset model contain this attribute value. You can update an attribute's value after you\n create an asset. For more information, see Updating attribute values in the\n IoT SiteWise User Guide. Contains an asset attribute property. For more information, see\n Attributes in the IoT SiteWise User Guide. Associates a group (batch) of assets with an IoT SiteWise Monitor project. The ID of the project to which to associate the assets. The IDs of the assets to be associated to the project. A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required. A list of associated error information, if any. Disassociates a group (batch) of assets from an IoT SiteWise Monitor project. The ID of the project from which to disassociate the assets. The IDs of the assets to be disassociated from the project. A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. 
Don't reuse this client token if a new idempotent request is required. A list of associated error information, if any. Gets aggregated values (for example, average, minimum, and maximum) for one or more asset properties. \n For more information, see Querying\n aggregates in the IoT SiteWise User Guide. The ID of the entry. The ID of the asset in which the asset property was created. The ID of the asset property. The alias that identifies the property, such as an OPC-UA server data stream path\n (for example, The data aggregating function. The time interval over which to aggregate data. The exclusive start of the range from which to query historical data, expressed in seconds in Unix epoch time. The inclusive end of the range from which to query historical data, expressed in seconds in Unix epoch time. The quality by which to filter asset data. The chronological sorting order of the requested information. Default: Contains information for an asset property aggregate entry that is associated with the \n BatchGetAssetPropertyAggregates API. To identify an asset property, you must specify one of the following: The A The error code. The associated error message. The ID of the entry. Contains error information for an asset property aggregate entry that is associated with the \n BatchGetAssetPropertyAggregates API. The error code. The date the error occurred, in Unix epoch time. Contains the error code and the timestamp for an asset property aggregate entry that is associated with the \n BatchGetAssetPropertyAggregates API. The list of asset property aggregate entries for the batch get request. \n You can specify up to 16 entries per request. The token to be used for the next set of paginated results. The maximum number of results to return for each paginated request. A result set is returned in the two cases, whichever occurs first. The size of the result set is less than 1 MB. 
The number of data points in the result set is less than the value of A list of the errors (if any) associated with the batch request. Each error entry\n contains the A list of entries that were processed successfully by this batch request. Each success entry\n contains the A list of entries that were not processed by this batch request. \n because these entries had been completely processed by previous paginated requests. \n Each skipped entry contains the The token for the next set of results, or null if there are no additional results. The ID of the entry. The completion status of each entry that is associated with the \n BatchGetAssetPropertyAggregates API. The error information, such as the error code and the timestamp. Contains information for an entry that has been processed by the previous \n BatchGetAssetPropertyAggregates request. The ID of the entry. The requested aggregated asset property values (for example, average, minimum, and maximum). Contains success information for an entry that is associated with the \n BatchGetAssetPropertyAggregates API. Gets the current value for one or more asset properties. For more information, see Querying\n current values in the IoT SiteWise User Guide. The ID of the entry. The ID of the asset in which the asset property was created. The ID of the asset property. The alias that identifies the property, such as an OPC-UA server data stream path\n (for example, Contains information for an asset property value entry that is associated with the \n BatchGetAssetPropertyValue API. To identify an asset property, you must specify one of the following: The A The error code. The associated error message. The ID of the entry. Contains error information for an asset property value entry that is associated with the \n BatchGetAssetPropertyValue API. The error code. The date the error occurred, in Unix epoch time. The error information, such as the error code and the timestamp. 
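The pagination contract described above (a page ends when the result set reaches 1 MB or the number of data points reaches the maximum-results value, and a nextToken is returned until the data is exhausted, with null meaning no more results) can be sketched as a generic drain loop. This is a sketch only: `drain_pages`, `fetch`, and the stubbed response dicts are stand-ins for a real client call such as BatchGetAssetPropertyAggregates, not actual SDK names:

```python
def drain_pages(fetch):
    """Collect success entries from a paginated batch API.

    `fetch(next_token)` stands in for one service call; it returns a dict
    with "successEntries" and, while more data remains, a "nextToken".
    """
    entries, token = [], None
    while True:
        page = fetch(token)
        entries.extend(page["successEntries"])
        token = page.get("nextToken")
        if token is None:  # null token: no additional results
            return entries

# Two stubbed pages simulating the service's paginated replies.
_pages = {
    None: {"successEntries": [{"entryId": "e1"}], "nextToken": "t1"},
    "t1": {"successEntries": [{"entryId": "e2"}]},
}

all_entries = drain_pages(lambda token: _pages[token])
```

The same loop shape applies to BatchGetAssetPropertyValueHistory, whose skipped-entries behavior is described below: entries already consumed by earlier pages simply stop appearing in later success lists.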
Gets the historical values for one or more asset properties. For more information, see Querying\n historical values in the IoT SiteWise User Guide. The ID of the entry. The ID of the asset in which the asset property was created. The ID of the asset property. The alias that identifies the property, such as an OPC-UA server data stream path\n (for example, The exclusive start of the range from which to query historical data, expressed in seconds in Unix epoch time. The inclusive end of the range from which to query historical data, expressed in seconds in Unix epoch time. The quality by which to filter asset data. The chronological sorting order of the requested information. Default: Contains information for an asset property historical value entry that is associated with the \n BatchGetAssetPropertyValueHistory API. To identify an asset property, you must specify one of the following: The A The error code. The associated error message. The ID of the entry. A list of the errors (if any) associated with the batch request. Each error entry\n contains the The error code. The date the error occurred, in Unix epoch time. The error information, such as the error code and the timestamp. The list of asset property historical value entries for the batch get request. \n You can specify up to 16 entries per request. The ID of the asset property. The token to be used for the next set of paginated results. A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required. The maximum number of results to return for each paginated request. A result set is returned in the two cases, whichever occurs first. The size of the result set is less than 1 MB. The number of data points in the result set is less than the value of The ID of the asset. The ARN of the asset, which has the following format. \n A list of the errors (if any) associated with the batch request. 
Each error entry\n contains the The name of the asset. A list of entries that were processed successfully by this batch request. Each success entry\n contains the The ID of the asset model used to create the asset. A list of entries that were not processed by this batch request. \n because these entries had been completely processed by previous paginated requests. \n Each skipped entry contains the The date the asset was created, in Unix epoch time. The token for the next set of results, or null if there are no additional results. The date the asset was last updated, in Unix epoch time. The ID of the entry. The current status of the asset. The completion status of each entry that is associated with the \n BatchGetAssetPropertyValueHistory API. A list of asset hierarchies that each contain a The error information, such as the error code and the timestamp. Contains a summary of an associated asset. Contains information for an entry that has been processed by the previous \n BatchGetAssetPropertyValueHistory request. The default value of the asset model property attribute. All assets that you create from\n the asset model contain this attribute value. You can update an attribute's value after you\n create an asset. For more information, see Updating attribute values in the\n IoT SiteWise User Guide. The ID of the entry. The requested historical values for the specified asset property. Contains an asset attribute property. For more information, see\n Attributes in the IoT SiteWise User Guide. Contains success information for an entry that is associated with the \n BatchGetAssetPropertyValueHistory API. The list of asset property value entries for the batch get request. \n You can specify up to 16 entries per request. Associates a group (batch) of assets with an IoT SiteWise Monitor project. The token to be used for the next set of paginated results. The ID of the project to which to associate the assets. A list of the errors (if any) associated with the batch request. 
Each error entry\n contains the The IDs of the assets to be associated to the project. A list of entries that were processed successfully by this batch request. Each success entry\n contains the A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required. A list of entries that were not processed by this batch request. \n because these entries had been completely processed by previous paginated requests. \n Each skipped entry contains the A list of associated error information, if any. The token for the next set of results, or null if there are no additional results. Disassociates a group (batch) of assets from an IoT SiteWise Monitor project. The ID of the project from which to disassociate the assets. The ID of the entry. The IDs of the assets to be disassociated from the project. The completion status of each entry that is associated with the \n BatchGetAssetPropertyValue request. A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required. The error information, such as the error code and the timestamp. Contains information for an entry that has been processed by the previous \n BatchGetAssetPropertyValue request. A list of associated error information, if any. The ID of the entry. Contains success information for an entry that is associated with the \n BatchGetAssetPropertyValue API. The ISO8601 DateTime of the earliest property value to return. For more information about the ISO8601 DateTime format, see the data type PropertyValue. The ISO8601 DateTime of the latest property value to return. For more information about the ISO8601 DateTime format, see the data type PropertyValue. An object that filters items in a list of component types. An object that filters items in a list of component types. Only one object is accepted as a valid input. 
A list of objects that filter the request. A list of objects that filter the request. Only one object is accepted as a valid input. ISO8601 DateTime of a value for a time series property. The time for when the property value was recorded in ISO 8601 format: YYYY-MM-DDThh:mm:ss[.SSSSSSSSS][Z/±HH:mm]. \n [YYYY]: year \n [MM]: month \n [DD]: day \n [hh]: hour \n [mm]: minute \n [ss]: seconds \n [.SSSSSSSSS]: additional precision, where precedence is maintained. For\n example: [.573123] is equal to 573123000 nanoseconds. \n Z: default timezone UTC \n ± HH:mm: time zone offset in Hours and Minutes. \n Required sub-fields: YYYY-MM-DDThh:mm:ss and [Z/±HH:mm] An object that specifies information about time series property values. An object that specifies information about time series property values. This object is used and consumed by the BatchPutPropertyValues action. Session keys for ABP v1.0.x The FCnt init value. Session keys for ABP v1.1 The FCnt init value. Connection status event configuration object for enabling or disabling LoRaWAN related event topics. Enum to denote whether the wireless gateway id connection status event topic is enabled or disabled\n . Connection status event configuration object for enabling or disabling topic. Connection status resource type event configuration object for enabling or disabling LoRaWAN related\n event topics. Connection status resource type event configuration object for enabling or disabling topic. Creates a new network analyzer configuration. Wireless device resources to add to the network analyzer configuration. Provide the Wireless gateway resources to add to the network analyzer configuration. Provide the The Amazon Resource Name of the new resource. Deletes a network analyzer configuration. The operation to delete queued messages. Remove queued messages from the downlink queue. Id of a given wireless device which messages will be deleted The ID of a given wireless device for which downlink messages will be deleted. 
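The ISO 8601 timestamp shape spelled out above (`YYYY-MM-DDThh:mm:ss[.SSSSSSSSS][Z/±HH:mm]`, with the date/time portion and a zone designator required and up to nine fractional-second digits optional) can be checked with a small validator. The function name and regex here are illustrative, not part of any AWS SDK:

```python
import re

# YYYY-MM-DDThh:mm:ss, optional .SSSSSSSSS (1-9 digits), then Z or ±HH:mm.
ISO8601_TS = re.compile(
    r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}"  # required date and time
    r"(?:\.\d{1,9})?"                        # optional sub-second precision
    r"(?:Z|[+-]\d{2}:\d{2})"                 # required zone: Z or offset
)

def is_valid_property_timestamp(value):
    """Illustrative check of the documented time-series timestamp format."""
    return ISO8601_TS.fullmatch(value) is not None
```

Note that `[.573123]` passes because trailing zeros are implied rather than written: six digits of precision denote 573123000 nanoseconds, as the text above explains.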
if messageID==\"*\", the queue for a particular wireless deviceId will be purged, otherwise, the specific message with messageId will be deleted If message ID is The wireless device type, it is either Sidewalk or LoRaWAN. The wireless device type, which can be either Sidewalk or LoRaWAN. Device registration state event configuration object for enabling or disabling Sidewalk related event\n topics. Enum to denote whether the wireless device id device registration state event topic is enabled or disabled. Device registration state event configuration object for enabling and disabling relevant topics. Device registration resource type state event configuration object for enabling or disabling Sidewalk\n related event topics. Device registration state resource type event configuration object for enabling or disabling topic. The messageId allocated by IoT Wireless for tracing purpose The message ID assigned by IoT Wireless to each downlink message, which helps identify the\n message. The transmit mode to use to send data to the wireless device. Can be: The transmit mode to use for sending data to the wireless device. This can be The timestamp that IoT Wireless received the message. The time at which IoT Wireless received the downlink message. The message in downlink queue. The message in the downlink queue. Resource identifier opted in for event messaging. Identifier type of the particular resource identifier for event configuration. Partner type of the resource if the identifier type is PartnerAccountId. Event configuration object for a single resource. Device registration state event configuration for an event configuration item. Proximity event configuration for an event configuration item. Join event configuration for an event configuration item. Connection status event configuration for an event configuration item. Object of all event configurations and the status of the event topics. The FCnt init value. Get the event configuration by resource types. 
Resource type event configuration for the device registration state event Resource type event configuration for the proximity event Resource type event configuration for the join event Resource type event configuration for the connection status event Get NetworkAnalyzer configuration. Get network analyzer configuration. List of WirelessDevices in the NetworkAnalyzerConfiguration. List of wireless gateway resources that have been added to the network analyzer configuration. List of WirelessGateways in the NetworkAnalyzerConfiguration. List of wireless gateway resources that have been added to the network analyzer configuration. The Amazon Resource Name of the new resource. Event configuration for the Proximity event Event configuration for the join event. Event configuration for the connection status event. The service type for which to get endpoint information about. Can be The service type for which to get endpoint information about. Can be Join event configuration object for enabling or disabling LoRaWAN related event topics. Enum to denote whether the wireless device id join event topic is enabled or disabled. Join event configuration object for enabling or disabling topic. Join resource type event configuration object for enabling or disabling LoRaWAN related\n event topics. Join resource type event configuration object for enabling or disabling topic. The token to use to get the next set of results, or null if there are no additional results. The list of device profiles. List event configurations where at least one event topic has been enabled. Resource type to filter event configurations. To retrieve the next set of results, the The token to use to get the next set of results, or null if there are no additional results. To retrieve the next set of results, the The list of device profiles. Event configurations of all events for a single resource. Lists the network analyzer configurations. 
To retrieve the next set of results, the The token to use to get the next set of results, or null if there are no additional results. The list of network analyzer configurations. The operation to list queued messages. List queued messages in the downlink queue. Id of a given wireless device which the downlink packets are targeted The ID of a given wireless device to which the downlink message packets are being sent. To retrieve the next set of results, the To retrieve the next set of results, the The wireless device type, it is either Sidewalk or LoRaWAN. The wireless device type, which can be either Sidewalk or LoRaWAN. To retrieve the next set of results, the To retrieve the next set of results, the The messages in downlink queue. The messages in the downlink queue. Enum to denote whether the gateway eui connection status event topic is enabled or disabled. Object for LoRaWAN connection status resource type event configuration. Enum to denote whether the wireless gateway connection status event topic is enabled or disabled. Object for LoRaWAN connection status resource type event configuration. LoRaWANGetServiceProfileInfo object. Enum to denote whether the dev eui join event topic is enabled or disabled. Object for LoRaWAN join resource type event configuration. Enum to denote whether the wireless device join event topic is enabled or disabled. Object for LoRaWAN join resource type event configuration. The ID of the service profile. ABP device object for update APIs for v1.1 ABP device object for update APIs for v1.0.x The log level for a log message. The log level for a log message. The log levels can be disabled, or set to NetworkAnalyzer configuration name. Name of the network analyzer configuration. The Amazon Resource Name of the new resource. Network analyzer configurations. Proximity event configuration object for enabling or disabling Sidewalk related event topics. Enum to denote whether the wireless device id proximity event topic is enabled or disabled. 
Proximity event configuration object for enabling and disabling relevant topics. Proximity resource type event configuration object for enabling and disabling wireless device topic. Proximity resource type event configuration object for enabling or disabling topic. Enum to denote whether the wireless device join event topic is enabled or disabled. Sidewalk resource type event configuration object for enabling or disabling topic. Trace Content for resources. Trace content for your wireless gateway and wireless device resources. The FCnt init value. ABP device object for LoRaWAN specification v1.0.x The FCnt init value. ABP device object for LoRaWAN specification v1.1 Update the event configuration by resource types. Device registration state resource type event configuration object for enabling and disabling wireless\n gateway topic. Proximity resource type event configuration object for enabling and disabling wireless gateway topic. Join resource type event configuration object for enabling and disabling wireless device topic. Connection status resource type event configuration object for enabling and disabling wireless gateway topic. Update NetworkAnalyzer configuration. Update network analyzer configuration. WirelessDevices to add into NetworkAnalyzerConfiguration. Wireless device resources to add to the network analyzer configuration. Provide the \n WirelessDevices to remove from NetworkAnalyzerConfiguration. Wireless device resources to remove from the network analyzer configuration. Provide the \n WirelessGateways to add into NetworkAnalyzerConfiguration. Wireless gateway resources to add to the network analyzer configuration. Provide the \n WirelessGateways to remove from NetworkAnalyzerConfiguration. Wireless gateway resources to remove from the network analyzer configuration. 
Provide the \n Event configuration for the Proximity event Event configuration for the join event Event configuration for the connection status event WirelessDevice FrameInfo for trace content. FrameInfo of your wireless device resources for the trace content. Use FrameInfo to debug\n the communication between your LoRaWAN end devices and the network server. Configuration information for an Amazon VPC to connect to your Box. For \n more information, see Configuring a VPC. Configuration information for an Amazon VPC to connect to your Box. For \n more information, see Configuring a VPC. Provides the configuration information to connect to Box as your data source. Provides the configuration information to connect to Quip as your \n data source. Document metadata files that contain information such as the\n document access control information, source URI, document author,\n and custom attributes. Each metadata file contains metadata about a\n single document.TOKEN
. For an Amazon Web Services CodeCommit repository,\n specify SIGV4
. For GitLab and Bitbucket repositories, specify\n SSH
.TOKEN
for a GitHub\n repository, SIGV4
for an Amazon Web Services CodeCommit repository, and\n SSH
for GitLab and Bitbucket repositories.oauthToken
for repository providers other than GitHub, such as\n Bitbucket or CodeCommit. To authorize access to GitHub as your repository provider, use\n accessToken
.oauthToken
or accessToken
when you\n create a new app.accessToken
for GitHub repositories only. To authorize access to a\n repository provider such as Bitbucket or CodeCommit, use oauthToken
.accessToken
or oauthToken
when you\n create a new app.oauthToken
for repository providers other than GitHub, such as\n Bitbucket or CodeCommit.accessToken
.oauthToken
or accessToken
when you\n update an app.accessToken
for GitHub repositories only. To authorize access to a\n repository provider such as Bitbucket or CodeCommit, use oauthToken
.accessToken
or oauthToken
when you\n update an app.BUCKET_OWNER_FULL_CONTROL
. If a query runs in a workgroup and the\n workgroup overrides client-side settings, then the Amazon S3 canned ACL\n specified in the workgroup's settings is used for all queries that run in the workgroup.\n For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 User\n Guide.BUCKET_OWNER_FULL_CONTROL
. If a query runs in a workgroup and the\n workgroup overrides client-side settings, then the Amazon S3 canned ACL\n specified in the workgroup's settings is used for all queries that run in the workgroup.\n For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 User Guide.SSE-S3
), server-side encryption with KMS-managed keys\n (SSE-KMS
), or client-side encryption with KMS-managed keys (CSE-KMS) is\n used.SSE_S3
), server-side encryption with KMS-managed keys\n (SSE_KMS
), or client-side encryption with KMS-managed keys\n (CSE_KMS
) is used.SSE-KMS
and CSE-KMS
, this is the KMS key ARN or\n ID.SSE_KMS
and CSE_KMS
, this is the KMS key ARN or\n ID.SSE-KMS
or CSE-KMS
) and key\n information.SSE_KMS
or CSE_KMS
) and key\n information.QueryString
contains the SQL statements that\n make up the query.QueryString
contains the SQL statements that make up the\n query.SSE-KMS
or CSE-KMS
) and key information.\n This is a client-side setting. If workgroup settings override client-side settings, then\n the query uses the encryption configuration that is specified for the workgroup, and\n also uses the location for storing query results specified in the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration and Workgroup Settings Override Client-Side Settings.SSE_KMS
or CSE_KMS
) and key information.\n This is a client-side setting. If workgroup settings override client-side settings, then\n the query uses the encryption configuration that is specified for the workgroup, and\n also uses the location for storing query results specified in the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration and Workgroup Settings Override Client-Side Settings.ExpectedBucketOwner
when it\n makes Amazon S3 calls to your specified output location. If the\n ExpectedBucketOwner
\n Amazon Web Services account ID does not match the actual owner of the Amazon S3\n bucket, the call fails with a permissions error.ExpectedBucketOwner
setting that is specified for\n the workgroup, and also uses the location for storing query results specified in the\n workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration\n and Workgroup Settings Override Client-Side Settings.ExpectedBucketOwner
when it\n makes Amazon S3 calls to your specified output location. If the\n ExpectedBucketOwner
\n Amazon Web Services account ID does not match the actual owner of the Amazon S3\n bucket, the call fails with a permissions error.ExpectedBucketOwner
setting that is specified for\n the workgroup, and also uses the location for storing query results specified in the\n workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration\n and Workgroup Settings Override Client-Side Settings.OutputLocation
in ResultConfigurationUpdates
(the\n client-side setting), the OutputLocation
in the workgroup's\n ResultConfiguration
will be updated with the new value. For more\n information, see Workgroup Settings Override\n Client-Side Settings.OutputLocation
in ResultConfigurationUpdates
(the\n client-side setting), the OutputLocation
in the workgroup's\n ResultConfiguration
will be updated with the new value. For more\n information, see Workgroup Settings Override\n Client-Side Settings.ExpectedBucketOwner
when it\n makes Amazon S3 calls to your specified output location. If the\n ExpectedBucketOwner
\n Amazon Web Services account ID does not match the actual owner of the Amazon S3\n bucket, the call fails with a permissions error.ExpectedBucketOwner
setting that is specified for the workgroup, and\n also uses the location for storing query results specified in the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration and Workgroup Settings Override Client-Side Settings.ExpectedBucketOwner
when it\n makes Amazon S3 calls to your specified output location. If the\n ExpectedBucketOwner
\n Amazon Web Services account ID does not match the actual owner of the Amazon S3\n bucket, the call fails with a permissions error.ExpectedBucketOwner
setting that is specified for the workgroup, and\n also uses the location for storing query results specified in the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration and Workgroup Settings Override Client-Side Settings.ExpectedBucketOwner
in\n ResultConfigurationUpdates
(the client-side setting), the\n ExpectedBucketOwner
in the workgroup's ResultConfiguration
\n is updated with the new value. For more information, see Workgroup Settings Override\n Client-Side Settings.ExpectedBucketOwner
in\n ResultConfigurationUpdates
(the client-side setting), the\n ExpectedBucketOwner
in the workgroup's ResultConfiguration
\n is updated with the new value. For more information, see Workgroup Settings Override\n Client-Side Settings.DeleteAssessmentReport
operation, Audit Manager attempts to delete the following data:\n
\n DeleteAssessmentReport
operation doesn’t\n fail. Instead, it proceeds to delete the associated metadata only. You must then delete the\n assessment report from the S3 bucket yourself. 403 (Forbidden)
or\n 404 (Not Found)
error from Amazon S3. To avoid this, make sure that\n your S3 bucket is available, and that you configured the correct permissions for Audit Manager to delete resources in your S3 bucket. For an example permissions policy that\n you can use, see Assessment report destination permissions in the Audit Manager User Guide. For information about the issues that could cause a 403\n (Forbidden)
or 404 (Not Found
) error from Amazon S3, see\n List of Error Codes in the Amazon Simple Storage Service API\n Reference. keywordValue
that you specify depends on the type of rule:\n
"
}
}
},
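The keywordValue convention for custom Config rules described in this section (add the `Custom_` prefix to the rule name; for service-linked rules, also remove the ID suffix that appears at the end of the rule name) can be illustrated with a small helper. `derive_keyword_value` is a hypothetical name written for this sketch, not part of any Audit Manager or Config API:

```python
def derive_keyword_value(rule_name, suffix_id=None):
    """Illustrative only: build the keywordValue for a custom Config rule.

    For a service-linked rule, pass the trailing ID so it is stripped
    (along with its separating hyphen) before the Custom_ prefix is added.
    """
    if suffix_id and rule_name.endswith("-" + suffix_id):
        rule_name = rule_name[: -(len(suffix_id) + 1)]
    return "Custom_" + rule_name
```

For example, a custom rule named `my-custom-config-rule` yields the keywordValue `Custom_my-custom-config-rule`, matching the example given above.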
diff --git a/aws/sdk/aws-models/autoscaling.json b/aws/sdk/aws-models/autoscaling.json
index 6fb25d9b68..2d80fed1b2 100644
--- a/aws/sdk/aws-models/autoscaling.json
+++ b/aws/sdk/aws-models/autoscaling.json
@@ -394,6 +394,9 @@
"input": {
"target": "com.amazonaws.autoscaling#AttachInstancesQuery"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.autoscaling#ResourceContentionFault"
@@ -607,7 +610,7 @@
"HealthCheckGracePeriod": {
"target": "com.amazonaws.autoscaling#HealthCheckGracePeriod",
"traits": {
- "smithy.api#documentation": "keywordValue
. You can find the rule identifier from the list of Config managed rules.\n
\n keywordValue
: S3_BUCKET_ACL_PROHIBITED
\n keywordValue
\n by adding the Custom_
prefix to the rule name. This prefix distinguishes\n the rule from a managed rule.\n
\n keywordValue
: Custom_my-custom-config-rule
\n keywordValue
by adding the Custom_
prefix to the rule\n name. In addition, you remove the suffix ID that appears at the end of the rule\n name.\n
\n keywordValue
:\n Custom_CustomRuleForAccount-conformance-pack
\n keywordValue
:\n Custom_securityhub-api-gw-cache-encrypted
\n keywordValue
:\n Custom_OrgConfigRule-s3-bucket-versioning-enabled
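The `keywordValue` naming rules above can be sketched as a small helper. This is an illustrative sketch only: the function name is hypothetical, and the suffix-ID removal assumes the ID is the final hyphen-delimited token of a service-linked rule name.

```python
def keyword_value_for_rule(rule_name, managed=False, service_linked=False):
    """Derive an Audit Manager keywordValue from a Config rule name.

    - Managed rules use their rule identifier as-is.
    - Custom rules get the Custom_ prefix, which distinguishes them
      from managed rules.
    - Service-linked rules also drop the trailing suffix ID (assumed
      here to be the last hyphen-delimited token) before prefixing.
    """
    if managed:
        return rule_name  # e.g. "S3_BUCKET_ACL_PROHIBITED"
    if service_linked:
        rule_name = rule_name.rsplit("-", 1)[0]
    return "Custom_" + rule_name
```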
\n DesiredCapacityType
for attribute-based instance type selection\n only. For more information, see Creating\n an Auto Scaling group using attribute-based instance type selection in the\n Amazon EC2 Auto Scaling User Guide.units
, which translates into number of\n instances.units
| vcpu
| memory-mib
\n DesiredCapacityType
for attribute-based instance type selection\n only.300
. This setting applies\n when using simple scaling policies, but not when using other scaling policies or\n scheduled scaling. For more information, see Scaling cooldowns for Amazon EC2 Auto Scaling\n in the Amazon EC2 Auto Scaling User Guide.300
seconds0
. For more information, see Health\n check grace period in the Amazon EC2 Auto Scaling User Guide.ELB
health check.InService
state. For more\n information, see Health\n check grace period in the Amazon EC2 Auto Scaling User Guide.0
secondsDesiredCapacityType
for attribute-based instance type selection\n only. For more information, see Creating\n an Auto Scaling group using attribute-based instance type selection in the\n Amazon EC2 Auto Scaling User Guide.units
, which translates into number of\n instances.units
| vcpu
| memory-mib
\n InService
state. For more information, see Set\n the default instance warmup for an Auto Scaling group in the\n Amazon EC2 Auto Scaling User Guide.-1
for the value. However, we strongly recommend keeping the\n default instance warmup enabled by specifying a minimum value of\n 0
.ClassicLinkVPCId
parameter, you must specify this\n parameter.ClassicLinkVPCId
parameter, you must specify this\n parameter.\n
\n Name
depend on which API operation you're using with\n the filter (DescribeAutoScalingGroups or DescribeTags).Name
include the following: \n
\n tag-key
- Accepts tag keys. The results only include information\n about the Auto Scaling groups associated with these tag keys. tag-value
- Accepts tag values. The results only include\n information about the Auto Scaling groups associated with these tag values. tag:
- Accepts the key/value combination of the tag.\n Use the tag key in the filter name and the tag value as the filter value. The\n results only include information about the Auto Scaling groups associated with the\n specified key/value combination. Name
include the following: \n
"
+ "smithy.api#documentation": "auto-scaling-group
- Accepts the names of Auto Scaling groups. The\n results only include information about the tags associated with these Auto Scaling\n groups. key
- Accepts tag keys. The results only include information\n about the tags associated with these tag keys. value
- Accepts tag values. The results only include information\n about the tags associated with these tag values. propagate-at-launch
- Accepts a Boolean value, which specifies\n whether tags propagate to instances at launch. The results only include\n information about the tags associated with the specified Boolean value. Name
depend on which API operation you're using with\n the filter (DescribeAutoScalingGroups or DescribeTags).Name
include the following: \n
\n tag-key
- Accepts tag keys. The results only include information\n about the Auto Scaling groups associated with these tag keys. tag-value
- Accepts tag values. The results only include\n information about the Auto Scaling groups associated with these tag values. tag:
- Accepts the key/value combination of the tag.\n Use the tag key in the filter name and the tag value as the filter value. The\n results only include information about the Auto Scaling groups associated with the\n specified key/value combination. Name
include the following: \n
"
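The filter names above differ between the two API operations. A minimal sketch of how the two filter sets might be constructed; the tag keys, values, and group name below are placeholders, not values from this model.

```python
# Filters accepted by DescribeAutoScalingGroups: tag-key, tag-value,
# and tag:<key>. The key/value pairs here are placeholders.
asg_filters = [
    {"Name": "tag-key", "Values": ["environment"]},
    {"Name": "tag:environment", "Values": ["production"]},
]

# Filters accepted by DescribeTags: auto-scaling-group, key, value,
# and propagate-at-launch.
tag_filters = [
    {"Name": "auto-scaling-group", "Values": ["example-asg"]},
    {"Name": "propagate-at-launch", "Values": ["true"]},
]

# With boto3 these would be passed as, for example:
#   boto3.client("autoscaling").describe_auto_scaling_groups(Filters=asg_filters)
#   boto3.client("autoscaling").describe_tags(Filters=tag_filters)
```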
}
},
"Values": {
@@ -4517,13 +4595,13 @@
"ClassicLinkVPCId": {
"target": "com.amazonaws.autoscaling#XmlStringMaxLen255",
"traits": {
- "smithy.api#documentation": "auto-scaling-group
- Accepts the names of Auto Scaling groups. The\n results only include information about the tags associated with these Auto Scaling\n groups. key
- Accepts tag keys. The results only include information\n about the tags associated with these tag keys. value
- Accepts tag values. The results only include information\n about the tags associated with these tag values. propagate-at-launch
- Accepts a Boolean value, which specifies\n whether tags propagate to instances at launch. The results only include\n information about the tags associated with the specified Boolean value. ClassicLinkVPCId
.ClassicLinkVPCId
.Pending:Wait
or Terminating:Wait
state. The maximum is\n 172800 seconds (48 hours) or 100 times HeartbeatTimeout
, whichever is\n smaller.HeartbeatTimeout
, whichever is\n smaller.\n
",
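The lifecycle-hook wait limit described above (172800 seconds, or 100 times `HeartbeatTimeout`, whichever is smaller) works out to a simple `min`; a sketch with an illustrative helper name:

```python
def max_lifecycle_wait(heartbeat_timeout_s):
    """Maximum time (seconds) an instance can remain in the
    Pending:Wait or Terminating:Wait state: the smaller of
    172800 s (48 hours) or 100 x HeartbeatTimeout."""
    return min(172800, 100 * heartbeat_timeout_s)
```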
+ "smithy.api#documentation": "ASGAverageCPUUtilization
- Average CPU utilization of the Auto Scaling\n group.ASGAverageNetworkIn
- Average number of bytes received (per\n instance per minute) for the Auto Scaling group.ASGAverageNetworkOut
- Average number of bytes sent out (per\n instance per minute) for the Auto Scaling group.ALBRequestCountPerTarget
- Average Application Load Balancer request count (per\n target per minute) for your Auto Scaling group.\n
",
"smithy.api#required": {}
}
},
@@ -6216,6 +6294,9 @@
"input": {
"target": "com.amazonaws.autoscaling#PutNotificationConfigurationType"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.autoscaling#LimitExceededFault"
@@ -6330,7 +6411,7 @@
"Cooldown": {
"target": "com.amazonaws.autoscaling#Cooldown",
"traits": {
- "smithy.api#documentation": "ASGAverageCPUUtilization
- Average CPU utilization of the Auto Scaling\n group.ASGAverageNetworkIn
- Average number of bytes received on all\n network interfaces by the Auto Scaling group.ASGAverageNetworkOut
- Average number of bytes sent out on all\n network interfaces by the Auto Scaling group.ALBRequestCountPerTarget
- Average Application Load Balancer request count per target\n for your Auto Scaling group.SimpleScaling
. For more information, see\n Scaling\n cooldowns for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.SimpleScaling
. For more information, see\n Scaling\n cooldowns for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.TargetTrackingScaling
or\n StepScaling
.TargetTrackingScaling
or\n StepScaling
.EstimatedInstanceWarmup
\n falls back to the value of default cooldown.90
.90
.InstanceWarmup
falls\n back to the value of the health check grace period.300
. This setting applies\n when using simple scaling policies, but not when using other scaling policies or\n scheduled scaling. For more information, see Scaling cooldowns for Amazon EC2 Auto Scaling\n in the Amazon EC2 Auto Scaling User Guide.0
. For more information, see Health\n check grace period in the Amazon EC2 Auto Scaling User Guide.ELB
health check.InService
state. For more\n information, see Health\n check grace period in the Amazon EC2 Auto Scaling User Guide.DesiredCapacityType
for attribute-based instance type selection\n only. For more information, see Creating\n an Auto Scaling group using attribute-based instance type selection in the\n Amazon EC2 Auto Scaling User Guide.units
, which translates into number of\n instances.units
| vcpu
| memory-mib
\n InService
state. For more information, see Set\n the default instance warmup for an Auto Scaling group in the\n Amazon EC2 Auto Scaling User Guide.-1
for the value. However, we strongly recommend keeping the\n default instance warmup enabled by specifying a minimum value of\n 0
.MANAGED
or UNMANAGED
. For more information, see\n Compute Environments in the\n Batch User Guide.MANAGED
or UNMANAGED
. For more information, see\n Compute environments in the\n Batch User Guide.EC2
, SPOT
, FARGATE
, or\n FARGATE_SPOT
. For more information, see Compute Environments in the\n Batch User Guide.SPOT
, you must also specify an Amazon EC2 Spot Fleet role with the\n spotIamFleetRole
parameter. For more information, see Amazon EC2 Spot Fleet role in the\n Batch User Guide.EC2
, SPOT
, FARGATE
, or\n FARGATE_SPOT
. For more information, see Compute environments in the\n Batch User Guide.SPOT
, you must also specify an Amazon EC2 Spot Fleet role with the\n spotIamFleetRole
parameter. For more information, see Amazon EC2 spot fleet role in the\n Batch User Guide.\n
\n BEST_FIT
then the Spot Fleet IAM\n Role must be specified.BEST_FIT_PROGRESSIVE
and SPOT_CAPACITY_OPTIMIZED
strategies, Batch might\n need to go above maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.\n
\n BEST_FIT
then the Spot Fleet IAM\n Role must be specified. Compute resources that use a BEST_FIT
allocation strategy don't support\n infrastructure updates and can't update some parameters. For more information, see Updating compute environments in the\n Batch User Guide.BEST_FIT_PROGRESSIVE
and SPOT_CAPACITY_OPTIMIZED
strategies, Batch might\n need to go above maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.DISABLED
).DISABLED
).c5
or p3
), or you can specify specific sizes within a family\n (such as c5.8xlarge
). You can also choose optimal
to select instance types (from the C4,\n M4, and R4 instance families) that match the demand of your job queues.optimal
uses instance types from the C4, M4, and R4 instance families. In Regions that\n don't have instance types from those instance families, instance types from the C5, M5, and R5 instance families are\n used.c5
or p3
), or you can specify specific sizes within a family\n (such as c5.8xlarge
). You can also choose optimal
to select instance types (from the C4,\n M4, and R4 instance families) that match the demand of your job queues.optimal
uses instance types from the C4, M4, and R4 instance families. In Regions that\n don't have instance types from those instance families, instance types from the C5, M5, and R5 instance families are\n used.imageIdOverride
member of the Ec2Configuration
structure.imageIdOverride
member of the Ec2Configuration
structure.\n ecsInstanceRole\n
or\n arn:aws:iam::
.\n For more information, see Amazon ECS Instance\n Role in the Batch User Guide.\n ecsInstanceRole\n
or\n arn:aws:iam::
.\n For more information, see Amazon ECS instance\n role in the Batch User Guide.{ \"Name\": \"Batch Instance - C4OnDemand\" }
. This is helpful for recognizing your Batch\n instances in the Amazon EC2 console. These tags can't be updated or removed after the compute environment is created. Any\n changes to these tags require that you create a new compute environment and remove the old compute environment. These\n tags aren't seen when using the Batch ListTagsForResource
API operation.{ \"Name\": \"Batch Instance - C4OnDemand\" }
. This is helpful for recognizing your Batch\n instances in the Amazon EC2 console. These tags can't be updated or removed after the compute environment is created. Any\n changes to these tags require that you create a new compute environment and remove the old compute environment. These\n tags aren't seen when using the Batch ListTagsForResource
API operation.SPOT
compute environment. This role is\n required if the allocation strategy is set to BEST_FIT
or if the allocation strategy isn't specified. For\n more information, see Amazon EC2 Spot Fleet\n Role in the Batch User Guide.SPOT
compute environment. This role is\n required if the allocation strategy is set to BEST_FIT
or if the allocation strategy isn't specified. For\n more information, see Amazon EC2 spot fleet\n role in the Batch User Guide.Ec2Configuration
isn't specified, the default is ECS_AL2
.Ec2Configuration
isn't specified, the default is ECS_AL2
.DISABLED
).BEST_FIT
isn't supported when updating a compute\n environment.\n
\n BEST_FIT_PROGRESSIVE
and SPOT_CAPACITY_OPTIMIZED
strategies, Batch might\n need to go above maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.c5
or p3
), or you can specify specific sizes within a family\n (such as c5.8xlarge
). You can also choose optimal
to select instance types (from the C4,\n M4, and R4 instance families) that match the demand of your job queues.optimal
uses instance types from the C4, M4, and R4 instance families. In Regions that\n don't have instance types from those instance families, instance types from the C5, M5, and R5 instance families are\n used.\n ecsInstanceRole\n
or\n arn:aws:iam::
.\n For more information, see Amazon ECS instance\n role in the Batch User Guide.{ \"Name\": \"Batch Instance - C4OnDemand\" }
. This is helpful for recognizing your Batch\n instances in the Amazon EC2 console. These tags aren't seen when using the Batch ListTagsForResource
API\n operation.launchTemplateId
or\n launchTemplateName
member of the launch template specification to an empty string. Removing the launch\n template from a compute environment will not remove the AMI specified in the launch template. In order to update the\n AMI specified in a launch template, the updateToLatestImageVersion
parameter must be set to\n true
.Ec2Configuration
isn't specified, the default is ECS_AL2
.imageIdOverride
, set this value to an empty string.false
.imageId
or imageIdOverride
parameters or by the\n launch template specified in the launchTemplate
parameter, this parameter is ignored. For more\n information on updating AMI IDs during an infrastructure update, see Updating the AMI ID in\n the Batch User Guide.EC2
, SPOT
, FARGATE
, or\n FARGATE_SPOT
. For more information, see Compute environments in the\n Batch User Guide.SPOT
, you must also specify an Amazon EC2 Spot Fleet role with the\n spotIamFleetRole
parameter. For more information, see Amazon EC2 spot fleet role in the\n Batch User Guide.imageIdOverride
member of the Ec2Configuration
structure. To remove the\n custom AMI ID and use the default AMI ID, set this value to an empty string.LogConfig
in the Create a container section of the\n Docker Remote API and the --log-driver
option to docker run.\n By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a\n different logging driver than the Docker daemon by specifying a log driver with this parameter in the container\n definition. To use a different logging driver for a container, the log system must be configured properly on the\n container instance. Or, alternatively, it must be configured on a different log server for remote logging options.\n For more information on the options for different supported log drivers, see Configure logging drivers in the Docker\n documentation.sudo docker version | grep \"Server API version\"
\n ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that\n instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the\n Amazon Elastic Container Service Developer Guide.LogConfig
in the Create a container section of the\n Docker Remote API and the --log-driver
option to docker run.\n By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a\n different logging driver than the Docker daemon by specifying a log driver with this parameter in the container\n definition. To use a different logging driver for a container, the log system must be configured properly on the\n container instance. Or, alternatively, it must be configured on a different log server for remote logging options.\n For more information on the options for different supported log drivers, see Configure logging drivers in the Docker\n documentation.sudo docker version | grep \"Server API version\"
\n ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that\n instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the\n Amazon Elastic Container Service Developer Guide.resourceRequirements
to override the vcpus
parameter\n that's set in the job definition. It's not supported for jobs running on Fargate resources. For jobs running on EC2\n resources, it overrides the vcpus
parameter set in the job definition, but doesn't override any vCPU\n requirement specified in the resourceRequirements
structure in the job definition. To override vCPU\n requirements that are specified in the resourceRequirements
structure in the job definition,\n resourceRequirements
must be specified in the SubmitJob
request, with type
\n set to VCPU
and value
set to the new value. For more information, see Can't override\n job definition resource requirements in the Batch User Guide.resourceRequirements
to override the vcpus
parameter\n that's set in the job definition. It's not supported for jobs running on Fargate resources. For jobs running on EC2\n resources, it overrides the vcpus
parameter set in the job definition, but doesn't override any vCPU\n requirement specified in the resourceRequirements
structure in the job definition. To override vCPU\n requirements that are specified in the resourceRequirements
structure in the job definition,\n resourceRequirements
must be specified in the SubmitJob
request, with type
\n set to VCPU
and value
set to the new value. For more information, see Can't override job\n definition resource requirements in the Batch User Guide.resourceRequirements
to override the memory requirements\n specified in the job definition. It's not supported for jobs running on Fargate resources. For jobs running on EC2\n resources, it overrides the memory
parameter set in the job definition, but doesn't override any memory\n requirement specified in the resourceRequirements
structure in the job definition. To override memory\n requirements that are specified in the resourceRequirements
structure in the job definition,\n resourceRequirements
must be specified in the SubmitJob
request, with type
\n set to MEMORY
and value
set to the new value. For more information, see Can't override\n job definition resource requirements in the Batch User Guide.resourceRequirements
to override the memory requirements\n specified in the job definition. It's not supported for jobs running on Fargate resources. For jobs running on EC2\n resources, it overrides the memory
parameter set in the job definition, but doesn't override any memory\n requirement specified in the resourceRequirements
structure in the job definition. To override memory\n requirements that are specified in the resourceRequirements
structure in the job definition,\n resourceRequirements
must be specified in the SubmitJob
request, with type
\n set to MEMORY
and value
set to the new value. For more information, see Can't override job\n definition resource requirements in the Batch User Guide.LogConfig
in the Create a container section of the\n Docker Remote API and the --log-driver
option to docker run.\n By default, containers use the same logging driver that the Docker daemon uses. However the container might use a\n different logging driver than the Docker daemon by specifying a log driver with this parameter in the container\n definition. To use a different logging driver for a container, the log system must be configured properly on the\n container instance (or on a different log server for remote logging options). For more information on the options for\n different supported log drivers, see Configure\n logging drivers in the Docker documentation.sudo docker version | grep \"Server API version\"
\n ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that\n instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the\n Amazon Elastic Container Service Developer Guide.LogConfig
in the Create a container section of the\n Docker Remote API and the --log-driver
option to docker run.\n By default, containers use the same logging driver that the Docker daemon uses. However the container might use a\n different logging driver than the Docker daemon by specifying a log driver with this parameter in the container\n definition. To use a different logging driver for a container, the log system must be configured properly on the\n container instance (or on a different log server for remote logging options). For more information on the options for\n different supported log drivers, see Configure\n logging drivers in the Docker documentation.sudo docker version | grep \"Server API version\"
\n ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that\n instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the\n Amazon Elastic Container Service Developer Guide.VALID
state before you can associate them with a job queue. You can associate up to three compute\n environments with a job queue. All of the compute environments must be either EC2 (EC2
or\n SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute\n environments can't be mixed.VALID
state before you can associate them with a job queue. You can associate up to three compute\n environments with a job queue. All of the compute environments must be either EC2 (EC2
or\n SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute\n environments can't be mixed.CreateSchedulingPolicy
.DeleteSchedulingPolicy
.DescribeComputeEnvironment
\n operation to determine the ecsClusterArn
that you should launch your Amazon ECS container instances\n into.DescribeComputeEnvironment
\n operation to determine the ecsClusterArn
that you launch your Amazon ECS container instances\n into.DescribeSchedulingPolicies
.EFSVolumeConfiguration
must either be omitted or set to /
which will enforce the path set\n on the EFS access point. If an access point is used, transit encryption must be enabled in the\n EFSVolumeConfiguration
. For more information, see Working with Amazon EFS Access Points in the\n Amazon Elastic File System User Guide.EFSVolumeConfiguration
must either be omitted or set to /
which will enforce the path set\n on the EFS access point. If an access point is used, transit encryption must be enabled in the\n EFSVolumeConfiguration
. For more information, see Working with Amazon EFS access points in the\n Amazon Elastic File System User Guide.EFSVolumeConfiguration
. If this parameter is\n omitted, the default value of DISABLED
is used. For more information, see Using Amazon EFS Access Points in the\n Batch User Guide. EFS IAM authorization requires that TransitEncryption
be\n ENABLED
and that a JobRoleArn
is specified.EFSVolumeConfiguration
. If this parameter is\n omitted, the default value of DISABLED
is used. For more information, see Using Amazon EFS access points in the\n Batch User Guide. EFS IAM authorization requires that TransitEncryption
be\n ENABLED
and that a JobRoleArn
is specified.imageIdOverride
parameter\n isn't specified, then a recent Amazon ECS-optimized Amazon Linux 2 AMI\n (ECS_AL2
) is used.\n
",
+ "smithy.api#documentation": "P4
and G4
) and\n can be used for all non Amazon Web Services Graviton-based instance types.imageIdOverride
parameter\n isn't specified, then a recent Amazon ECS-optimized Amazon Linux 2 AMI\n (ECS_AL2
) is used. If a new image type is specified in an update, but neither an imageId
nor a imageIdOverride
parameter is specified, then the latest Amazon ECS optimized AMI for that image type\n that's supported by Batch is used.\n
",
"smithy.api#required": {}
}
},
"imageIdOverride": {
"target": "com.amazonaws.batch#ImageIdOverride",
"traits": {
- "smithy.api#documentation": "P4
and G4
) and\n can be used for all non Amazon Web Services Graviton-based instance types.imageId
set in the computeResource
object.imageId
set in the computeResource
object.SubmitJob
request override any corresponding\n parameter defaults from the job definition. For more information about specifying parameters, see Job Definition Parameters in the\n Batch User Guide.SubmitJob
request override any corresponding\n parameter defaults from the job definition. For more information about specifying parameters, see Job definition parameters in the\n Batch User Guide.STARTING
, see Jobs Stuck in RUNNABLE Status in the\n troubleshooting section of the Batch User Guide.STARTING
, see Jobs stuck in RUNNABLE status in the\n troubleshooting section of the Batch User Guide.$Latest
, or $Default
.$Latest
, the latest version of the launch template is used. If the value is\n $Default
, the default version of the launch template is used.$Default
or $Latest
version for the launch template is updated. To use a new launch\n template version, create a new compute environment, add the new compute environment to the existing job queue,\n remove the old compute environment from the job queue, and delete the old compute environment.$Default
.$Latest
, or $Default
.$Latest
, the latest version of the launch template is used. If the value is\n $Default
, the default version of the launch template is used.updateToLatestImageVersion
parameter for the compute\n environment is set to true
. During an infrastructure update, if either $Latest
or\n $Default
is specified, Batch re-evaluates the launch template version, and it might use a different\n version of the launch template. This is the case even if the launch template isn't specified in the update. When\n updating a compute environment, changing the launch template requires an infrastructure update of the compute\n environment. For more information, see Updating compute environments in the\n Batch User Guide.$Default
.swappiness
value of\n 0
causes swapping not to happen unless absolutely necessary. A swappiness
value of\n 100
causes pages to be swapped very aggressively. Accepted values are whole numbers between\n 0
and 100
. If the swappiness
parameter isn't specified, a default value of\n 60
is used. If a value isn't specified for maxSwap
, then this parameter is ignored. If\n maxSwap
is set to 0, the container doesn't use swap. This parameter maps to the\n --memory-swappiness
option to docker run.\n
\n maxSwap
and swappiness
parameters are omitted from a job definition, each\n container will have a default swappiness
value of 60, and the total swap usage will be limited to two\n times the memory reservation of the container.swappiness
value of\n 0
causes swapping not to happen unless absolutely necessary. A swappiness
value of\n 100
causes pages to be swapped very aggressively. Accepted values are whole numbers between\n 0
and 100
. If the swappiness
parameter isn't specified, a default value of\n 60
is used. If a value isn't specified for maxSwap
, then this parameter is ignored. If\n maxSwap
is set to 0, the container doesn't use swap. This parameter maps to the\n --memory-swappiness
option to docker run.\n
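A sketch of the corresponding `linuxParameters` fragment in a job definition's container properties; the values are illustrative, and the defaults described above apply when fields are omitted:

```python
# swappiness 0 avoids swapping unless absolutely necessary; 100 swaps
# aggressively; omitting it defaults to 60. Setting maxSwap to 0
# disables swap for the container entirely.
linux_parameters = {
    "maxSwap": 1024,   # total swap (MiB) the container may use
    "swappiness": 0,   # whole number between 0 and 100
}
```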
\n maxSwap
and swappiness
parameters are omitted from a job definition, each\n container will have a default swappiness
value of 60, and the total swap usage will be limited to two\n times the memory reservation of the container.nextToken
value that's returned from a previous paginated ListSchedulingPolicies
\n request where maxResults
was used and the results exceeded the value of that parameter. Pagination\n continues from the end of the previous results that returned the nextToken
value. This value is\n null
when there are no more results to\n return.ListSchedulingPolicies
.ListTagsForResource
.awslogs
, fluentd
, gelf
,\n json-file
, journald
, logentries
, syslog
, and\n splunk
.awslogs
and splunk
\n log drivers.\n
\n sudo docker version | grep \"Server API version\"
\n awslogs
, fluentd
, gelf
,\n json-file
, journald
, logentries
, syslog
, and\n splunk
.awslogs
and splunk
\n log drivers.\n
\n sudo docker version | grep \"Server API version\"
\n type
specified.\n
",
+ "smithy.api#documentation": "Memory
in the Create a container section of the\n Docker Remote API and the --memory
option to docker run.\n You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for\n multi-node parallel (MNP) jobs. It must be specified for each node at least once. This parameter maps to\n Memory
in the Create a container section of the Docker Remote API and the\n --memory
option to docker run.value
is the hard limit (in MiB), and\n must match one of the supported values and the VCPU
values must be one of the values supported for\n that memory value.\n
\n VCPU
= 0.25VCPU
= 0.25 or 0.5VCPU
= 0.25, 0.5, or 1VCPU
= 0.5, or 1VCPU
= 0.5, 1, or 2VCPU
= 1 or 2VCPU
= 1, 2, or 4VCPU
= 2 or 4VCPU
= 4CpuShares
in the\n Create a container section of the Docker Remote API and the --cpu-shares
option to\n docker run. Each vCPU is equivalent to 1,024 CPU shares. For EC2\n resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be\n specified for each node at least once.value
must match one of the supported\n values and the MEMORY
values must be one of the values supported for that VCPU
value.\n The supported values are 0.25, 0.5, 1, 2, and 4.\n
\n MEMORY
= 512, 1024, or 2048MEMORY
= 1024, 2048, 3072, or 4096MEMORY
= 2048, 3072, 4096, 5120, 6144, 7168, or 8192MEMORY
= 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384MEMORY
= 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456,\n 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720type
specified.\n
",
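The supported Fargate vCPU/memory pairs listed above can be captured in a small lookup table. This is a sketch transcribed from the lists above; the table and helper names are illustrative, not part of any AWS SDK.

```python
# vCPU value -> supported MEMORY values (MiB), per the lists above.
FARGATE_MEMORY_MIB = {
    "0.25": [512, 1024, 2048],
    "0.5": list(range(1024, 4097, 1024)),    # 1024..4096
    "1": list(range(2048, 8193, 1024)),      # 2048..8192
    "2": list(range(4096, 16385, 1024)),     # 4096..16384
    "4": list(range(8192, 30721, 1024)),     # 8192..30720
}

def is_valid_fargate_combo(vcpu, memory_mib):
    """Return True if the VCPU/MEMORY pair is a supported Fargate combination."""
    return memory_mib in FARGATE_MEMORY_MIB.get(vcpu, [])
```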
"smithy.api#required": {}
}
},
@@ -3627,7 +3743,7 @@
"tags": {
"target": "com.amazonaws.batch#TagrisTagsMap",
"traits": {
- "smithy.api#documentation": "Memory
in the Create a container section of the\n Docker Remote API and the --memory
option to docker run.\n You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for\n multi-node parallel (MNP) jobs. It must be specified for each node at least once. This parameter maps to\n Memory
in the Create a container section of the Docker Remote API and the\n --memory
option to docker run.value
is the hard limit (in MiB), and\n must match one of the supported values and the VCPU
values must be one of the values supported for\n that memory value.\n
\n VCPU
= 0.25VCPU
= 0.25 or 0.5VCPU
= 0.25, 0.5, or 1VCPU
= 0.5, or 1VCPU
= 0.5, 1, or 2VCPU
= 1 or 2VCPU
= 1, 2, or 4VCPU
= 2 or 4VCPU
= 4CpuShares
in the\n Create a container section of the Docker Remote API and the --cpu-shares
option to\n docker run. Each vCPU is equivalent to 1,024 CPU shares. For EC2\n resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be\n specified for each node at least once.value
must match one of the supported\n values and the MEMORY
values must be one of the values supported for that VCPU
value.\n The supported values are 0.25, 0.5, 1, 2, and 4.\n
\n MEMORY
= 512, 1024, or 2048MEMORY
= 1024, 2048, 3072, or 4096MEMORY
= 2048, 3072, 4096, 5120, 6144, 7168, or 8192MEMORY
= 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384MEMORY
= 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456,\n 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720command
override. You can also override\n existing environment variables on a container or add new environment variables to it with an environment
\n override.command
override. You can also override\n existing environment variables on a container or add new environment variables to it with an environment
\n override.TagResource
.UntagResource
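As a quick illustration of the Fargate vCPU/memory pairing rules above, here is a small Python sketch (not part of the Batch API; the table is hard-coded from the values listed in this documentation) that checks whether a MEMORY/VCPU combination is supported:

```python
# Supported Fargate MEMORY (MiB) values for each VCPU value,
# hard-coded from the documentation above.
FARGATE_MEMORY_BY_VCPU = {
    0.25: [512, 1024, 2048],
    0.5: [1024, 2048, 3072, 4096],
    1: list(range(2048, 8192 + 1, 1024)),
    2: list(range(4096, 16384 + 1, 1024)),
    4: list(range(8192, 30720 + 1, 1024)),
}

def is_supported_fargate_combo(vcpu: float, memory_mib: int) -> bool:
    """Return True if the MEMORY/VCPU pair is a supported Fargate combination."""
    return memory_mib in FARGATE_MEMORY_BY_VCPU.get(vcpu, [])
```

Note the memory values advance in 1024-MiB steps within each vCPU bucket, which is why `range` with a 1024 step reproduces the lists above.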
If your specified role has a path other than /, then you must either specify the full role ARN (recommended) or prefix the role name with the path.
  Depending on how you created your Batch service role, its ARN might contain the service-role path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn't use the service-role path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
  Job queues with a higher priority (or a higher integer value for the priority parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT). EC2 and Fargate compute environments can't be mixed.
  Compute environments must be in the VALID state before you can associate them with a job queue. All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT). EC2 and Fargate compute environments can't be mixed.
  If this parameter isn't specified, the default is false.
  To update an existing scheduling policy, use UpdateSchedulingPolicy.
",
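The job-queue rules above (priority ordering, unmixed compute environment types) can be sketched as a request payload. This is an illustrative structure only; the queue name and compute environment ARN are placeholders:

```python
# Hypothetical CreateJobQueue-style request body illustrating the priority
# and compute-environment ordering rules described above.
create_job_queue_request = {
    "jobQueueName": "my-queue",  # placeholder name
    "state": "ENABLED",
    "priority": 10,              # higher values are evaluated first
    "computeEnvironmentOrder": [
        {
            "order": 1,
            # All compute environments in one queue must be either EC2-based
            # (EC2/SPOT) or Fargate-based (FARGATE/FARGATE_SPOT), never mixed.
            "computeEnvironment": "arn:aws:batch:us-east-1:111122223333:compute-environment/fargate-ce",
        }
    ],
}
```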
"smithy.api#title": "Braket"
},
"version": "2019-09-01",
@@ -124,9 +122,8 @@
"traits": {
"smithy.api#documentation": "The container path to checkpoint data. The default is /opt/braket/checkpoints/.
  The S3 location, specified as s3://bucket-name/key-name-prefix.
  An array of DeviceSummary objects for devices that match the specified filter values.
  A token used for pagination of results, or null if there are no additional results. Use the token value in a subsequent request to continue results where the previous request ended.
  An array of QuantumTaskSummary objects for tasks that match the specified filters.
  Specify the resourceArn of the resource to which a tag will be added.
  Specify the resourceArn for the resource from which to remove the tags.
  A configuration for enabling the Server-Timing header in HTTP responses sent from CloudFront. CloudFront adds this header to HTTP responses that it sends in response to requests that match a cache behavior that's associated with this response headers policy.
  A number 0-100 (inclusive) that specifies the percentage of responses that CloudFront adds the Server-Timing header to. When you set the sampling rate to 100, CloudFront adds the Server-Timing header to the HTTP response for every request that matches the cache behavior that this response headers policy is attached to. When you set it to 50, CloudFront adds the header to 50% of the responses for requests that match the cache behavior. You can set the sampling rate to any number 0-100 with up to four decimal places.
  You can use the Server-Timing header to view metrics that can help you gain insights about the behavior and performance of CloudFront. For example, you can see which cache layer served a cache hit, or the first byte latency from the origin when there was a cache miss. You can use the metrics in the Server-Timing header to troubleshoot issues or test the efficiency of your CloudFront configuration. For more information, see Server-Timing header in the Amazon CloudFront Developer Guide.
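The sampling-rate constraint above (0-100, at most four decimal places) can be sketched as a small config fragment plus a local check. Field names follow the CloudFront response headers policy API; values are illustrative:

```python
# Sketch of a CloudFront ResponseHeadersPolicy ServerTimingHeadersConfig.
server_timing_headers_config = {
    "Enabled": True,
    # Percentage of matching responses that get the Server-Timing header;
    # any number 0-100 with up to four decimal places.
    "SamplingRate": 50.0,
}

def is_valid_sampling_rate(rate: float) -> bool:
    """Check the documented constraint: 0-100, at most four decimal places."""
    return 0 <= rate <= 100 and round(rate, 4) == rate
```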
\n arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
\n ENABLED
and PENDING_DELETION
.ENABLED
and PENDING_DELETION
.UpdatedTimestamp
is always either the same or newer than the time shown in CreatedTimestamp
.UpdatedTimestamp
is always either the same or newer than the time shown in CreatedTimestamp
.arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
\n arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
\n arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
\n arn:aws:cloudtrail:us-east-2:12345678910:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
\n [+] [country code] [subscriber number including area code]
.[+] [country code] [subscriber number including area code]
.[+] [country code] [subscriber number including area code]
.OR
condition. AND
condition.SearchFilter
. This accepts an\n OR
of AND
(List of List) input where: \n
"
+ }
+ },
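The "OR of AND (List of List)" shape described above can be sketched as nested lists, where the outer list is OR-joined and each inner list is AND-joined. Field names and values here are illustrative placeholders, not a verbatim Connect API payload:

```python
# Illustrative "OR of AND" (list-of-lists) search filter: the outer list is
# OR-joined; each inner list of conditions is AND-joined.
or_of_and_filter = [
    # (status == Available AND routing_profile == Basic) ...
    [{"field": "status", "value": "Available"},
     {"field": "routing_profile", "value": "Basic"}],
    # ... OR (status == Offline)
    [{"field": "status", "value": "Offline"}],
]

def matches(record: dict, or_of_and: list) -> bool:
    """A record matches if ANY inner AND-group is fully satisfied."""
    return any(
        all(record.get(c["field"]) == c["value"] for c in group)
        for group in or_of_and
    )
```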
"com.amazonaws.connect#CreateAgentStatus": {
"type": "operation",
"input": {
@@ -3770,7 +4097,10 @@
"input": {
"target": "com.amazonaws.connect#DeleteContactFlowRequest"
},
- "errors": [
+ "output": {
+ "target": "smithy.api#Unit"
+ },
+ "errors": [
{
"target": "com.amazonaws.connect#AccessDeniedException"
},
@@ -3887,6 +4217,9 @@
"input": {
"target": "com.amazonaws.connect#DeleteHoursOfOperationRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.connect#InternalServiceException"
@@ -3939,6 +4272,9 @@
"input": {
"target": "com.amazonaws.connect#DeleteInstanceRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.connect#InternalServiceException"
@@ -3977,6 +4313,9 @@
"input": {
"target": "com.amazonaws.connect#DeleteIntegrationAssociationRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.connect#InternalServiceException"
@@ -4026,6 +4365,9 @@
"input": {
"target": "com.amazonaws.connect#DeleteQuickConnectRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.connect#InternalServiceException"
@@ -4078,6 +4420,9 @@
"input": {
"target": "com.amazonaws.connect#DeleteSecurityProfileRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.connect#AccessDeniedException"
@@ -4136,6 +4481,9 @@
"input": {
"target": "com.amazonaws.connect#DeleteUseCaseRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.connect#InternalServiceException"
@@ -4193,6 +4541,9 @@
"input": {
"target": "com.amazonaws.connect#DeleteUserRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.connect#InternalServiceException"
@@ -4224,6 +4575,9 @@
"input": {
"target": "com.amazonaws.connect#DeleteUserHierarchyGroupRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.connect#InternalServiceException"
@@ -4907,6 +5261,64 @@
}
}
},
+ "com.amazonaws.connect#DescribePhoneNumber": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#DescribePhoneNumberRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connect#DescribePhoneNumberResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#AccessDeniedException"
+ },
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+          "smithy.api#documentation": "A list of conditions which would be applied together with an OR operator.
  A list of conditions which would be applied together with an AND operator.
  The phone number, in the format [+] [country code] [subscriber number including area code].
  If the TargetArn input is not provided, this API lists numbers claimed to all the Amazon Connect instances belonging to your account.
  The prefix of the phone number. Include the + as part of the country code.
  A leaf node condition which can be used to specify a string condition, for example, username = 'abc'.
  A leaf node condition which can be used to specify a tag condition, for example, HAVE BPO = 123.
  A list of conditions which would be applied together with an OR condition.
  A list of conditions which would be applied together with an AND condition.
  Supported Regions: us-east-1 Region, eu-west-1 Region, ap-southeast-1 Region, ap-northeast-1 Region.
  You can set the NoReboot parameter to true in the API request, or use the --no-reboot option in the CLI to prevent Amazon EC2 from shutting down and rebooting the instance.
  If you choose to bypass the shutdown and reboot process by setting the NoReboot parameter to true in the API request, or by using the --no-reboot option in the CLI, we can't guarantee the file system integrity of the created image.
  If the No Reboot option is set, Amazon EC2 doesn't shut down the instance before creating the image. Without a reboot, the AMI will be crash consistent (all the volumes are snapshotted at the same time), but not application consistent (all the operating system buffers are not flushed to disk before the snapshots are created).
  Default: false (follow standard reboot process)
  The type of key pair. Default: rsa.
  The format of the key pair. Default: pem.
  A gateway endpoint serves as a target for a route in your route table for traffic destined for the Amazon Web Service. You can specify an endpoint policy to attach to the endpoint, which will control access to the service from your VPC. You can also specify the VPC route tables that use the endpoint.
  An interface endpoint is a network interface in your subnet that serves as an endpoint for communicating with the specified service. You can specify the subnets in which to create an endpoint, and the security groups to associate with the endpoint network interface.
  A GatewayLoadBalancer endpoint is a network interface in your subnet that serves as an endpoint for communicating with a Gateway Load Balancer that you've configured as a VPC endpoint service.
  If you attempt to delete a security group that is associated with an instance, or is referenced by another security group, the operation fails with InvalidGroup.InUse in EC2-Classic or DependencyViolation in EC2-VPC.
  The volume must be in the available state (not attached to an instance).
  The volume can remain in the deleting state for several minutes.
",
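The NoReboot trade-off above can be sketched as a request payload (shown as a plain dict rather than a live API call; the instance ID and image name are placeholders):

```python
# Illustrative CreateImage request parameters. NoReboot=True skips the
# shutdown/reboot, giving a crash-consistent but not application-consistent
# AMI, per the documentation above.
create_image_params = {
    "InstanceId": "i-0123456789abcdef0",  # placeholder
    "Name": "my-server-backup",           # placeholder
    "NoReboot": True,
}
```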
+                    "smithy.api#documentation": "architecture - The image architecture (i386 | x86_64 | arm64).
  block-device-mapping.delete-on-termination - A Boolean value that indicates whether the Amazon EBS volume is deleted on instance termination.
  block-device-mapping.device-name - The device name specified in the block device mapping (for example, /dev/sdh or xvdh).
  block-device-mapping.snapshot-id - The ID of the snapshot used for the Amazon EBS volume.
  block-device-mapping.volume-size - The volume size of the Amazon EBS volume, in GiB.
  block-device-mapping.volume-type - The volume type of the Amazon EBS volume (io1 | io2 | gp2 | gp3 | sc1 | st1 | standard).
  block-device-mapping.encrypted - A Boolean that indicates whether the Amazon EBS volume is encrypted.
  description - The description of the image (provided during image creation).
  ena-support - A Boolean that indicates whether enhanced networking with ENA is enabled.
  hypervisor - The hypervisor type (ovm | xen).
  image-id - The ID of the image.
  image-type - The image type (machine | kernel | ramdisk).
  is-public - A Boolean that indicates whether the image is public.
  kernel-id - The kernel ID.
  manifest-location - The location of the image manifest.
  name - The name of the AMI (provided during image creation).
  owner-alias - The owner alias (amazon | aws-marketplace). The valid aliases are defined in an Amazon-maintained list. This is not the Amazon Web Services account alias that can be set using the IAM console. We recommend that you use the Owner request parameter instead of this filter.
  owner-id - The Amazon Web Services account ID of the owner. We recommend that you use the Owner request parameter instead of this filter.
  platform - The platform. To only list Windows-based AMIs, use windows.
  product-code - The product code.
  product-code.type - The type of the product code (marketplace).
  ramdisk-id - The RAM disk ID.
  root-device-name - The device name of the root device volume (for example, /dev/sda1).
  root-device-type - The type of the root device volume (ebs | instance-store).
  state - The state of the image (available | pending | failed).
  state-reason-code - The reason code for the state change.
  state-reason-message - The message for the state change.
  sriov-net-support - A value of simple indicates that enhanced networking with the Intel 82599 VF interface is enabled.
  tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
  tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
  virtualization-type - The virtualization type (paravirtual | hvm).
",
"smithy.api#xmlName": "Filter"
}
},
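As a usage sketch, the image filter names above map onto a request's Filters parameter as Name/Values pairs (built here as a plain structure; the values are placeholders):

```python
# Build a DescribeImages-style Filters list from the documented filter names.
# Values are placeholders for illustration.
filters = [
    {"Name": "architecture", "Values": ["x86_64"]},
    {"Name": "state", "Values": ["available"]},
    {"Name": "tag:Owner", "Values": ["TeamA"]},
]

# e.g. with boto3 (not executed here):
# images = ec2.describe_images(Owners=["123456789012"], Filters=filters)
```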
@@ -23286,7 +23376,7 @@
"Filters": {
"target": "com.amazonaws.ec2#FilterList",
"traits": {
-                    "smithy.api#documentation": "architecture - The image architecture (i386 | x86_64 | arm64).
  block-device-mapping.delete-on-termination - A Boolean value that indicates whether the Amazon EBS volume is deleted on instance termination.
  block-device-mapping.device-name - The device name specified in the block device mapping (for example, /dev/sdh or xvdh).
  block-device-mapping.snapshot-id - The ID of the snapshot used for the Amazon EBS volume.
  block-device-mapping.volume-size - The volume size of the Amazon EBS volume, in GiB.
  block-device-mapping.volume-type - The volume type of the Amazon EBS volume (io1 | io2 | gp2 | gp3 | sc1 | st1 | standard).
  block-device-mapping.encrypted - A Boolean that indicates whether the Amazon EBS volume is encrypted.
  creation-date - The time when the image was created, in the ISO 8601 format in the UTC time zone (YYYY-MM-DDThh:mm:ss.sssZ), for example, 2021-09-29T11:04:43.305Z. You can use a wildcard (*), for example, 2021-09-29T*, which matches an entire day.
  description - The description of the image (provided during image creation).
  ena-support - A Boolean that indicates whether enhanced networking with ENA is enabled.
  hypervisor - The hypervisor type (ovm | xen).
  image-id - The ID of the image.
  image-type - The image type (machine | kernel | ramdisk).
  is-public - A Boolean that indicates whether the image is public.
  kernel-id - The kernel ID.
  manifest-location - The location of the image manifest.
  name - The name of the AMI (provided during image creation).
  owner-alias - The owner alias (amazon | aws-marketplace). The valid aliases are defined in an Amazon-maintained list. This is not the Amazon Web Services account alias that can be set using the IAM console. We recommend that you use the Owner request parameter instead of this filter.
  owner-id - The Amazon Web Services account ID of the owner. We recommend that you use the Owner request parameter instead of this filter.
  platform - The platform. To only list Windows-based AMIs, use windows.
  product-code - The product code.
  product-code.type - The type of the product code (marketplace).
  ramdisk-id - The RAM disk ID.
  root-device-name - The device name of the root device volume (for example, /dev/sda1).
  root-device-type - The type of the root device volume (ebs | instance-store).
  state - The state of the image (available | pending | failed).
  state-reason-code - The reason code for the state change.
  state-reason-message - The message for the state change.
  sriov-net-support - A value of simple indicates that enhanced networking with the Intel 82599 VF interface is enabled.
  tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
  tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
  virtualization-type - The virtualization type (paravirtual | hvm).
",
+                    "smithy.api#documentation": "affinity - The affinity setting for an instance running on a Dedicated Host (default | host).
  architecture - The instance architecture (i386 | x86_64 | arm64).
  availability-zone - The Availability Zone of the instance.
  block-device-mapping.attach-time - The attach time for an EBS volume mapped to the instance, for example, 2010-09-15T17:15:20.000Z.
  block-device-mapping.delete-on-termination - A Boolean that indicates whether the EBS volume is deleted on instance termination.
  block-device-mapping.device-name - The device name specified in the block device mapping (for example, /dev/sdh or xvdh).
  block-device-mapping.status - The status for the EBS volume (attaching | attached | detaching | detached).
  block-device-mapping.volume-id - The volume ID of the EBS volume.
  client-token - The idempotency token you provided when you launched the instance.
  dns-name - The public DNS name of the instance.
  group-id - The ID of the security group for the instance. EC2-Classic only.
  group-name - The name of the security group for the instance. EC2-Classic only.
  hibernation-options.configured - A Boolean that indicates whether the instance is enabled for hibernation. A value of true means that the instance is enabled for hibernation.
  host-id - The ID of the Dedicated Host on which the instance is running, if applicable.
  hypervisor - The hypervisor type of the instance (ovm | xen). The value xen is used for both Xen and Nitro hypervisors.
  iam-instance-profile.arn - The instance profile associated with the instance. Specified as an ARN.
  image-id - The ID of the image used to launch the instance.
  instance-id - The ID of the instance.
  instance-lifecycle - Indicates whether this is a Spot Instance or a Scheduled Instance (spot | scheduled).
  instance-state-code - The state of the instance, as a 16-bit unsigned integer. The high byte is used for internal purposes and should be ignored. The low byte is set based on the state represented. The valid values are: 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), and 80 (stopped).
  instance-state-name - The state of the instance (pending | running | shutting-down | terminated | stopping | stopped).
  instance-type - The type of instance (for example, t2.micro).
  instance.group-id - The ID of the security group for the instance.
  instance.group-name - The name of the security group for the instance.
  ip-address - The public IPv4 address of the instance.
  kernel-id - The kernel ID.
  key-name - The name of the key pair used when the instance was launched.
  launch-index - When launching multiple instances, this is the index for the instance in the launch group (for example, 0, 1, 2, and so on).
  launch-time - The time when the instance was launched, in the ISO 8601 format in the UTC time zone (YYYY-MM-DDThh:mm:ss.sssZ), for example, 2021-09-29T11:04:43.305Z. You can use a wildcard (*), for example, 2021-09-29T*, which matches an entire day.
  metadata-options.http-tokens - The metadata request authorization state (optional | required).
  metadata-options.http-put-response-hop-limit - The HTTP metadata request put response hop limit (integer, possible values 1 to 64).
  metadata-options.http-endpoint - Enable or disable metadata access on the HTTP endpoint (enabled | disabled).
  monitoring-state - Indicates whether detailed monitoring is enabled (disabled | enabled).
  network-interface.addresses.private-ip-address - The private IPv4 address associated with the network interface.
  network-interface.addresses.primary - Specifies whether the IPv4 address of the network interface is the primary private IPv4 address.
  network-interface.addresses.association.public-ip - The ID of the association of an Elastic IP address (IPv4) with a network interface.
  network-interface.addresses.association.ip-owner-id - The owner ID of the private IPv4 address associated with the network interface.
  network-interface.association.public-ip - The address of the Elastic IP address (IPv4) bound to the network interface.
  network-interface.association.ip-owner-id - The owner of the Elastic IP address (IPv4) associated with the network interface.
  network-interface.association.allocation-id - The allocation ID returned when you allocated the Elastic IP address (IPv4) for your network interface.
  network-interface.association.association-id - The association ID returned when the network interface was associated with an IPv4 address.
  network-interface.attachment.attachment-id - The ID of the interface attachment.
  network-interface.attachment.instance-id - The ID of the instance to which the network interface is attached.
  network-interface.attachment.instance-owner-id - The owner ID of the instance to which the network interface is attached.
  network-interface.attachment.device-index - The device index to which the network interface is attached.
  network-interface.attachment.status - The status of the attachment (attaching | attached | detaching | detached).
  network-interface.attachment.attach-time - The time that the network interface was attached to an instance.
  network-interface.attachment.delete-on-termination - Specifies whether the attachment is deleted when an instance is terminated.
  network-interface.availability-zone - The Availability Zone for the network interface.
  network-interface.description - The description of the network interface.
  network-interface.group-id - The ID of a security group associated with the network interface.
  network-interface.group-name - The name of a security group associated with the network interface.
  network-interface.ipv6-addresses.ipv6-address - The IPv6 address associated with the network interface.
  network-interface.mac-address - The MAC address of the network interface.
  network-interface.network-interface-id - The ID of the network interface.
  network-interface.owner-id - The ID of the owner of the network interface.
  network-interface.private-dns-name - The private DNS name of the network interface.
  network-interface.requester-id - The requester ID for the network interface.
  network-interface.requester-managed - Indicates whether the network interface is being managed by Amazon Web Services.
  network-interface.status - The status of the network interface (available | in-use).
  network-interface.source-dest-check - Whether the network interface performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the network interface to perform network address translation (NAT) in your VPC.
  network-interface.subnet-id - The ID of the subnet for the network interface.
  network-interface.vpc-id - The ID of the VPC for the network interface.
  outpost-arn - The Amazon Resource Name (ARN) of the Outpost.
  owner-id - The Amazon Web Services account ID of the instance owner.
  placement-group-name - The name of the placement group for the instance.
  placement-partition-number - The partition in which the instance is located.
  platform - The platform. To list only Windows instances, use windows.
  private-dns-name - The private IPv4 DNS name of the instance.
  private-ip-address - The private IPv4 address of the instance.
  product-code - The product code associated with the AMI used to launch the instance.
  product-code.type - The type of product code (devpay | marketplace).
  ramdisk-id - The RAM disk ID.
  reason - The reason for the current state of the instance (for example, shows \"User Initiated [date]\" when you stop or terminate the instance). Similar to the state-reason-code filter.
  requester-id - The ID of the entity that launched the instance on your behalf (for example, Amazon Web Services Management Console, Auto Scaling, and so on).
  reservation-id - The ID of the instance's reservation. A reservation ID is created any time you launch an instance. A reservation ID has a one-to-one relationship with an instance launch request, but can be associated with more than one instance if you launch multiple instances using the same launch request. For example, if you launch one instance, you get one reservation ID. If you launch ten instances using the same launch request, you also get one reservation ID.
  root-device-name - The device name of the root device volume (for example, /dev/sda1).
  root-device-type - The type of the root device volume (ebs | instance-store).
  source-dest-check - Indicates whether the instance performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the instance to perform network address translation (NAT) in your VPC.
  spot-instance-request-id - The ID of the Spot Instance request.
  state-reason-code - The reason code for the state change.
  state-reason-message - A message that describes the state change.
  subnet-id - The ID of the subnet for the instance.
  tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
  tag-key - The key of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific key, regardless of the tag value.
  tenancy - The tenancy of an instance (dedicated | default | host).
  virtualization-type - The virtualization type of the instance (paravirtual | hvm).
  vpc-id - The ID of the VPC that the instance is running in.
",
"smithy.api#xmlName": "Filter"
}
},
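The instance-state-code filter above describes a 16-bit value whose high byte is internal. A small sketch of decoding it, using the state table given in the filter documentation:

```python
# Decode the low byte of the 16-bit instance-state-code, per the filter
# documentation above (the high byte is internal and should be ignored).
STATE_NAMES = {
    0: "pending", 16: "running", 32: "shutting-down",
    48: "terminated", 64: "stopping", 80: "stopped",
}

def instance_state_name(state_code: int) -> str:
    return STATE_NAMES[state_code & 0xFF]
```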
@@ -23831,6 +23921,12 @@
"smithy.api#documentation": "affinity
- The affinity setting for an instance running on a\n Dedicated Host (default
| host
).architecture
- The instance architecture (i386
|\n x86_64
| arm64
).availability-zone
- The Availability Zone of the instance.block-device-mapping.attach-time
- The attach time for an EBS\n volume mapped to the instance, for example,\n 2010-09-15T17:15:20.000Z
.block-device-mapping.delete-on-termination
- A Boolean that\n indicates whether the EBS volume is deleted on instance termination.block-device-mapping.device-name
- The device name specified in the\n block device mapping (for example, /dev/sdh
or\n xvdh
).block-device-mapping.status
- The status for the EBS volume\n (attaching
| attached
| detaching
|\n detached
).block-device-mapping.volume-id
- The volume ID of the EBS\n volume.capacity-reservation-id
- The ID of the Capacity Reservation into which the\n instance was launched.client-token
- The idempotency token you provided when you launched\n the instance.dns-name
- The public DNS name of the instance.group-id
- The ID of the security group for the instance.\n EC2-Classic only.group-name
- The name of the security group for the instance.\n EC2-Classic only.hibernation-options.configured
- A Boolean that indicates whether\n the instance is enabled for hibernation. A value of true
means that\n the instance is enabled for hibernation. host-id
- The ID of the Dedicated Host on which the instance is\n running, if applicable.hypervisor
- The hypervisor type of the instance\n (ovm
| xen
). The value xen
is used\n for both Xen and Nitro hypervisors.iam-instance-profile.arn
- The instance profile associated with\n the instance. Specified as an ARN.image-id
- The ID of the image used to launch the\n instance.instance-id
- The ID of the instance.instance-lifecycle
- Indicates whether this is a Spot Instance or\n a Scheduled Instance (spot
| scheduled
).instance-state-code
- The state of the instance, as a 16-bit\n unsigned integer. The high byte is used for internal purposes and should be\n ignored. The low byte is set based on the state represented. The valid values\n are: 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64\n (stopping), and 80 (stopped).instance-state-name
- The state of the instance\n (pending
| running
| shutting-down
|\n terminated
| stopping
|\n stopped
).instance-type
- The type of instance (for example,\n t2.micro
).instance.group-id
- The ID of the security group for the\n instance. instance.group-name
- The name of the security group for the\n instance. ip-address
- The public IPv4 address of the instance.kernel-id
- The kernel ID.key-name
- The name of the key pair used when the instance was\n launched.launch-index
- When launching multiple instances, this is the\n index for the instance in the launch group (for example, 0, 1, 2, and so on).\n launch-time
- The time when the instance was launched, in the ISO\n 8601 format in the UTC time zone (YYYY-MM-DDThh:mm:ss.sssZ), for example,\n 2021-09-29T11:04:43.305Z
. You can use a wildcard\n (*
), for example, 2021-09-29T*
, which matches an\n entire day.metadata-options.http-tokens
- The metadata request authorization\n state (optional
| required
)metadata-options.http-put-response-hop-limit
- The http metadata\n request put response hop limit (integer, possible values 1
to\n 64
)metadata-options.http-endpoint
- Enable or disable metadata\n access on http endpoint (enabled
| disabled
)monitoring-state
- Indicates whether detailed monitoring is\n enabled (disabled
| enabled
).network-interface.addresses.private-ip-address
- The private IPv4\n address associated with the network interface.network-interface.addresses.primary
- Specifies whether the IPv4\n address of the network interface is the primary private IPv4 address.network-interface.addresses.association.public-ip
- The ID of the\n association of an Elastic IP address (IPv4) with a network interface.network-interface.addresses.association.ip-owner-id
- The owner\n ID of the private IPv4 address associated with the network interface.network-interface.association.public-ip
- The address of the\n Elastic IP address (IPv4) bound to the network interface.network-interface.association.ip-owner-id
- The owner of the\n Elastic IP address (IPv4) associated with the network interface.network-interface.association.allocation-id
- The allocation ID\n returned when you allocated the Elastic IP address (IPv4) for your network\n interface.network-interface.association.association-id
- The association ID\n returned when the network interface was associated with an IPv4 address.network-interface.attachment.attachment-id
- The ID of the\n interface attachment.network-interface.attachment.instance-id
- The ID of the instance\n to which the network interface is attached.network-interface.attachment.instance-owner-id
- The owner ID of\n the instance to which the network interface is attached.network-interface.attachment.device-index
- The device index to\n which the network interface is attached.network-interface.attachment.status
- The status of the\n attachment (attaching
| attached
|\n detaching
| detached
).network-interface.attachment.attach-time
- The time that the\n network interface was attached to an instance.network-interface.attachment.delete-on-termination
- Specifies\n whether the attachment is deleted when an instance is terminated.network-interface.availability-zone
- The Availability Zone for\n the network interface.network-interface.description
- The description of the network\n interface.network-interface.group-id
- The ID of a security group\n associated with the network interface.network-interface.group-name
- The name of a security group\n associated with the network interface.network-interface.ipv6-addresses.ipv6-address
- The IPv6 address\n associated with the network interface.network-interface.mac-address
- The MAC address of the network\n interface.network-interface.network-interface-id
- The ID of the network\n interface.network-interface.owner-id
- The ID of the owner of the network\n interface.network-interface.private-dns-name
- The private DNS name of the\n network interface.network-interface.requester-id
- The requester ID for the network\n interface.network-interface.requester-managed
- Indicates whether the\n network interface is being managed by Amazon Web Services.network-interface.status
- The status of the network interface\n (available
| in-use
).network-interface.source-dest-check
- Whether the network\n interface performs source/destination checking. A value of true
\n means that checking is enabled, and false
means that checking is\n disabled. The value must be false
for the network interface to\n perform network address translation (NAT) in your VPC.network-interface.subnet-id
- The ID of the subnet for the\n network interface.network-interface.vpc-id
- The ID of the VPC for the network\n interface.outpost-arn
- The Amazon Resource Name (ARN) of the\n Outpost.owner-id
- The Amazon Web Services account ID of the instance\n owner.placement-group-name
- The name of the placement group for the\n instance.placement-partition-number
- The partition in which the instance is\n located.platform
- The platform. To list only Windows instances, use\n windows
.private-dns-name
- The private IPv4 DNS name of the\n instance.private-ip-address
- The private IPv4 address of the\n instance.product-code
- The product code associated with the AMI used to\n launch the instance.product-code.type
- The type of product code (devpay
|\n marketplace
).ramdisk-id
- The RAM disk ID.reason
- The reason for the current state of the instance (for\n example, shows \"User Initiated [date]\" when you stop or terminate the instance).\n Similar to the state-reason-code filter.requester-id
- The ID of the entity that launched the instance on\n your behalf (for example, Amazon Web Services Management Console, Auto Scaling, and so\n on).reservation-id
- The ID of the instance's reservation. A\n reservation ID is created any time you launch an instance. A reservation ID has\n a one-to-one relationship with an instance launch request, but can be associated\n with more than one instance if you launch multiple instances using the same\n launch request. For example, if you launch one instance, you get one reservation\n ID. If you launch ten instances using the same launch request, you also get one\n reservation ID.root-device-name
- The device name of the root device volume (for\n example, /dev/sda1
).root-device-type
- The type of the root device volume\n (ebs
| instance-store
).source-dest-check
- Indicates whether the instance performs\n source/destination checking. A value of true
means that checking is\n enabled, and false
means that checking is disabled. The value must\n be false
for the instance to perform network address translation\n (NAT) in your VPC. spot-instance-request-id
- The ID of the Spot Instance\n request.state-reason-code
- The reason code for the state change.state-reason-message
- A message that describes the state\n change.subnet-id
- The ID of the subnet for the instance.tag:
- The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value.\n For example, to find all resources that have a tag with the key Owner
and the value TeamA
, specify tag:Owner
for the filter name and TeamA
for the filter value.tag-key
- The key of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific key, regardless of the tag value.tenancy
- The tenancy of an instance (dedicated
|\n default
| host
).virtualization-type
- The virtualization type of the instance\n (paravirtual
| hvm
).vpc-id
- The ID of the VPC that the instance is running in.DryRunOperation
. \n Otherwise, it is UnauthorizedOperation
.true
, the public key material is included in the response.false
\n \n
",
+ "smithy.api#documentation": "local-address
- The local address.local-bgp-asn
- The Border Gateway Protocol (BGP) Autonomous System Number (ASN) \n of the local gateway.local-gateway-id
- The ID of the local gateway.local-gateway-virtual-interface-id
- The ID of the virtual interface.local-gateway-virtual-interface-group-id
- The ID of the virtual interface group.owner-id
- The ID of the Amazon Web Services account that owns the local gateway virtual interface.peer-address
- The peer address.peer-bgp-asn
- The peer BGP ASN.vlan
- The ID of the VLAN.\n
",
"smithy.api#xmlName": "Filter"
}
},
@@ -24734,6 +24830,27 @@
}
],
"minDelay": 15
+ },
+ "NatGatewayDeleted": {
+ "acceptors": [
+ {
+ "state": "success",
+ "matcher": {
+ "output": {
+ "path": "NatGateways[].State",
+ "expected": "deleted",
+ "comparator": "allStringEquals"
+ }
+ }
+ },
+ {
+ "state": "success",
+ "matcher": {
+ "errorType": "NatGatewayNotFound"
+ }
+ }
+ ],
+ "minDelay": 15
}
}
}
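The `NatGatewayDeleted` waiter added above succeeds under either of two acceptors: every element of `NatGateways[].State` equals `deleted` (the `allStringEquals` comparator), or the call fails with a `NatGatewayNotFound` error. A minimal sketch of that acceptor logic, using a hypothetical helper that is not part of any SDK:

```python
# Hedged sketch of the NatGatewayDeleted waiter acceptors above.
# check_nat_gateway_deleted is a hypothetical helper, not an SDK function.

def check_nat_gateway_deleted(response=None, error_type=None):
    """Return True when either acceptor matches: all gateway states are
    'deleted', or the call failed with NatGatewayNotFound."""
    # Second acceptor: a NatGatewayNotFound error also counts as success.
    if error_type == "NatGatewayNotFound":
        return True
    # First acceptor: path "NatGateways[].State" with allStringEquals,
    # which requires a non-empty result where every value matches.
    states = [g.get("State") for g in (response or {}).get("NatGateways", [])]
    return bool(states) and all(s == "deleted" for s in states)
```

Note that `allStringEquals` does not match an empty result list, which the `bool(states)` guard reproduces.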
@@ -27555,7 +27672,7 @@
"Filters": {
"target": "com.amazonaws.ec2#FilterList",
"traits": {
- "smithy.api#documentation": "local-address
- The local address.local-bgp-asn
- The Border Gateway Protocol (BGP) Autonomous System Number (ASN) \n of the local gateway.local-gateway-id
- The ID of the local gateway.local-gateway-virtual-interface-id
- The ID of the virtual interface.owner-id
- The ID of the Amazon Web Services account that owns the local gateway virtual interface.peer-address
- The peer address.peer-bgp-asn
- The peer BGP ASN.vlan
- The ID of the VLAN.\n
",
+ "smithy.api#documentation": "availability-zone-group
- The Availability Zone group.create-time
- The time stamp when the Spot Instance request was\n created.fault-code
- The fault code related to the request.fault-message
- The fault message related to the request.instance-id
- The ID of the instance that fulfilled the\n request.launch-group
- The Spot Instance launch group.launch.block-device-mapping.delete-on-termination
- Indicates\n whether the EBS volume is deleted on instance termination.launch.block-device-mapping.device-name
- The device name for the\n volume in the block device mapping (for example, /dev/sdh
or\n xvdh
).launch.block-device-mapping.snapshot-id
- The ID of the snapshot\n for the EBS volume.launch.block-device-mapping.volume-size
- The size of the EBS\n volume, in GiB.launch.block-device-mapping.volume-type
- The type of EBS volume:\n gp2
for General Purpose SSD, io1
or\n io2
for Provisioned IOPS SSD, st1
for Throughput\n Optimized HDD, sc1
for Cold HDD, or standard
for\n Magnetic.launch.group-id
- The ID of the security group for the\n instance.launch.group-name
- The name of the security group for the\n instance.launch.image-id
- The ID of the AMI.launch.instance-type
- The type of instance (for example,\n m3.medium
).launch.kernel-id
- The kernel ID.launch.key-name
- The name of the key pair the instance launched\n with.launch.monitoring-enabled
- Whether detailed monitoring is\n enabled for the Spot Instance.launch.ramdisk-id
- The RAM disk ID.launched-availability-zone
- The Availability Zone in which the\n request is launched.network-interface.addresses.primary
- Indicates whether the IP\n address is the primary private IP address.network-interface.delete-on-termination
- Indicates whether the\n network interface is deleted when the instance is terminated.network-interface.description
- A description of the network\n interface.network-interface.device-index
- The index of the device for the\n network interface attachment on the instance.network-interface.group-id
- The ID of the security group\n associated with the network interface.network-interface.network-interface-id
- The ID of the network\n interface.network-interface.private-ip-address
- The primary private IP\n address of the network interface.network-interface.subnet-id
- The ID of the subnet for the\n instance.product-description
- The product description associated with the\n instance (Linux/UNIX
| Windows
).spot-instance-request-id
- The Spot Instance request ID.spot-price
- The maximum hourly price for any Spot Instance\n launched to fulfill the request.state
- The state of the Spot Instance request (open
\n | active
| closed
| cancelled
|\n failed
). Spot request status information can help you track\n your Amazon EC2 Spot Instance requests. For more information, see Spot\n request status in the Amazon EC2 User Guide for Linux Instances.status-code
- The short code describing the most recent\n evaluation of your Spot Instance request.status-message
- The message explaining the status of the Spot\n Instance request.tag:
- The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value.\n For example, to find all resources that have a tag with the key Owner
and the value TeamA
, specify tag:Owner
for the filter name and TeamA
for the filter value.tag-key
- The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.type
- The type of Spot Instance request (one-time
|\n persistent
).valid-from
- The start date of the request.valid-until
- The end date of the request.\n
",
"smithy.api#xmlName": "Filter"
}
},
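Filter-style parameters like the ones documented in this hunk (for example `state`, `launch-group`, or `tag:Owner` with value `TeamA`) are sent to the EC2 API as a list of Name/Values pairs. A minimal, hypothetical helper sketching that shape (not part of any AWS SDK; tag filters such as `tag:Owner` would need to be passed with their literal name rather than as a keyword):

```python
# Hypothetical helper that builds the [{'Name': ..., 'Values': [...]}] shape
# the EC2 Filter parameter expects. Illustration only, not an SDK function.

def build_filters(**criteria):
    """Turn keyword criteria into an EC2-style list of filters."""
    filters = []
    for name, values in criteria.items():
        # EC2 filter names use hyphens, not underscores.
        api_name = name.replace("_", "-")
        if not isinstance(values, (list, tuple)):
            values = [values]
        filters.append({"Name": api_name, "Values": list(values)})
    return filters
```

For example, `build_filters(state=["open", "active"])` yields a single filter matching Spot Instance requests in either state.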
@@ -30611,6 +30728,9 @@
"input": {
"target": "com.amazonaws.ec2#DetachInternetGatewayRequest"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"traits": {
"smithy.api#documentation": "availability-zone-group
- The Availability Zone group.create-time
- The time stamp when the Spot Instance request was\n created.fault-code
- The fault code related to the request.fault-message
- The fault message related to the request.instance-id
- The ID of the instance that fulfilled the\n request.launch-group
- The Spot Instance launch group.launch.block-device-mapping.delete-on-termination
- Indicates\n whether the EBS volume is deleted on instance termination.launch.block-device-mapping.device-name
- The device name for the\n volume in the block device mapping (for example, /dev/sdh
or\n xvdh
).launch.block-device-mapping.snapshot-id
- The ID of the snapshot\n for the EBS volume.launch.block-device-mapping.volume-size
- The size of the EBS\n volume, in GiB.launch.block-device-mapping.volume-type
- The type of EBS volume:\n gp2
for General Purpose SSD, io1
or\n io2
for Provisioned IOPS SSD, st1
for Throughput\n Optimized HDD, sc1
for Cold HDD, or standard
for\n Magnetic.launch.group-id
- The ID of the security group for the\n instance.launch.group-name
- The name of the security group for the\n instance.launch.image-id
- The ID of the AMI.launch.instance-type
- The type of instance (for example,\n m3.medium
).launch.kernel-id
- The kernel ID.launch.key-name
- The name of the key pair the instance launched\n with.launch.monitoring-enabled
- Whether detailed monitoring is\n enabled for the Spot Instance.launch.ramdisk-id
- The RAM disk ID.launched-availability-zone
- The Availability Zone in which the\n request is launched.network-interface.addresses.primary
- Indicates whether the IP\n address is the primary private IP address.network-interface.delete-on-termination
- Indicates whether the\n network interface is deleted when the instance is terminated.network-interface.description
- A description of the network\n interface.network-interface.device-index
- The index of the device for the\n network interface attachment on the instance.network-interface.group-id
- The ID of the security group\n associated with the network interface.network-interface.network-interface-id
- The ID of the network\n interface.network-interface.private-ip-address
- The primary private IP\n address of the network interface.network-interface.subnet-id
- The ID of the subnet for the\n instance.product-description
- The product description associated with the\n instance (Linux/UNIX
| Windows
).spot-instance-request-id
- The Spot Instance request ID.spot-price
- The maximum hourly price for any Spot Instance\n launched to fulfill the request.state
- The state of the Spot Instance request (open
\n | active
| closed
| cancelled
|\n failed
). Spot request status information can help you track\n your Amazon EC2 Spot Instance requests. For more information, see Spot\n request status in the Amazon EC2 User Guide for Linux Instances.status-code
- The short code describing the most recent\n evaluation of your Spot Instance request.status-message
- The message explaining the status of the Spot\n Instance request.tag:
- The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value.\n For example, to find all resources that have a tag with the key Owner
and the value TeamA
, specify tag:Owner
for the filter name and TeamA
for the filter value.tag-key
- The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.type
- The type of Spot Instance request (one-time
|\n persistent
).valid-from
- The start date of the request.valid-until
- The end date of the request.detached
before you\n can delete the VPC or attach a different VPC to the virtual private gateway.BlockDeviceMapping
objects called \n by \n CreateImage.\n
",
"smithy.api#xmlName": "keyFingerprint"
}
},
@@ -49522,6 +49663,21 @@
"com.amazonaws.ec2#KernelId": {
"type": "string"
},
+ "com.amazonaws.ec2#KeyFormat": {
+ "type": "string",
+ "traits": {
+ "smithy.api#enum": [
+ {
+ "value": "pem",
+ "name": "pem"
+ },
+ {
+ "value": "ppk",
+ "name": "ppk"
+ }
+ ]
+ }
+ },
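The new `com.amazonaws.ec2#KeyFormat` enum above constrains key material format to `pem` or `ppk`. A hypothetical client-side validation sketch (the generated SDK enum types are not shown here):

```python
# Allowed values of the EC2 KeyFormat enum added above.
KEY_FORMATS = ("pem", "ppk")

def validate_key_format(value):
    """Return value unchanged if it is a known KeyFormat, else raise.
    Hypothetical helper for illustration, not a real SDK function."""
    if value not in KEY_FORMATS:
        raise ValueError(f"unknown KeyFormat {value!r}; expected one of {KEY_FORMATS}")
    return value
```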
"com.amazonaws.ec2#KeyNameStringList": {
"type": "list",
"member": {
@@ -49538,7 +49694,7 @@
"target": "com.amazonaws.ec2#String",
"traits": {
"aws.protocols#ec2QueryName": "KeyFingerprint",
- "smithy.api#documentation": "\n
",
"smithy.api#xmlName": "keyFingerprint"
}
},
@@ -49606,7 +49762,7 @@
"target": "com.amazonaws.ec2#String",
"traits": {
"aws.protocols#ec2QueryName": "KeyFingerprint",
- "smithy.api#documentation": "\n
\n \n
",
+ "smithy.api#documentation": "\n
\n \n
",
"smithy.api#xmlName": "keyFingerprint"
}
},
@@ -49633,6 +49789,22 @@
"smithy.api#documentation": "bundle
| conversion-task
| customer-gateway
| dhcp-options
|\n elastic-ip-allocation
| elastic-ip-association
|\n export-task
| flow-log
| image
|\n import-task
| internet-gateway
| network-acl
\n | network-acl-association
| network-interface
|\n network-interface-attachment
| prefix-list
|\n route-table
| route-table-association
|\n security-group
| subnet
|\n subnet-cidr-block-association
| vpc
|\n vpc-cidr-block-association
| vpc-endpoint
| vpc-peering-connection
| vpn-connection
| vpn-gateway
.Describe
command for the resource type.bundle
| conversion-task
| customer-gateway
| dhcp-options
|\n elastic-ip-allocation
| elastic-ip-association
|\n export-task
| flow-log
| image
|\n import-task
| internet-gateway
| network-acl
\n | network-acl-association
| network-interface
|\n network-interface-attachment
| prefix-list
|\n route-table
| route-table-association
|\n security-group
| subnet
|\n subnet-cidr-block-association
| vpc
|\n vpc-cidr-block-association
| vpc-endpoint
| vpc-peering-connection
| vpn-connection
| vpn-gateway
. Describe
\n command for the resource type.Attribute
parameter to specify the attribute or one of the following parameters: \n Description
or LaunchPermission
.\n
\n\t \n\t MapCustomerOwnedIpOnLaunch
and\n CustomerOwnedIpv4Pool
. These two parameters act as a single\n attribute.EnableLniAtDeviceIndex
or\n DisableLniAtDeviceIndex
.\n
"
}
@@ -55439,7 +55632,7 @@
"PrivateDnsHostnameTypeOnLaunch": {
"target": "com.amazonaws.ec2#HostnameType",
"traits": {
- "smithy.api#documentation": "in-use
or available
state before you can modify the same \n volume. This is sometimes referred to as a cooldown period.InvalidIPAddress.InUse
).AuthFailure
error if the address is already allocated to another Amazon Web Services account.running
state. If your experience with the instance differs from the\n instance status returned by DescribeInstanceStatus, use ReportInstanceStatus to report your experience with the instance. Amazon\n EC2 collects this information to improve the accuracy of status checks.spot-fleet-request
and instance
resource types are\n supported.spot-fleet-request
and instance
resource types are\n supported.kernel
or ramdisk
, the instance must be in a stopped\n state. To reset the sourceDestCheck
, the instance can be either running or\n stopped.sourceDestCheck
attribute controls whether source/destination\n checking is enabled. The default value is true
, which means checking is\n enabled. This value must be false
for a NAT instance to perform NAT. For\n more information, see NAT Instances in the\n Amazon VPC User Guide.\n
\n\t\t\t\t\n\t\t \n
\n cache.m6g.large
,\n\t\t\t\t\t\t\tcache.m6g.xlarge
,\n\t\t\t\t\t\t\tcache.m6g.2xlarge
,\n\t\t\t\t\t\t\tcache.m6g.4xlarge
,\n\t\t\t\t\t\t\tcache.m6g.8xlarge
,\n\t\t\t\t\t\t\tcache.m6g.12xlarge
,\n\t\t\t\t\t\t\tcache.m6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.m5.large
,\n \t\t\t\t\t\tcache.m5.xlarge
,\n \t\t\t\t\t\tcache.m5.2xlarge
,\n \t\t\t\t\t\tcache.m5.4xlarge
,\n \t\t\t\t\t\tcache.m5.12xlarge
,\n \t\t\t\t\t\tcache.m5.24xlarge
\n \t\t\t\t\t\t\n \t\t\t\t\t\t\n \t\t\t\t\t\t cache.m4.large
,\n \t\t\t\t\t\tcache.m4.xlarge
,\n \t\t\t\t\t\tcache.m4.2xlarge
,\n \t\t\t\t\t\tcache.m4.4xlarge
,\n \t\t\t\t\t\tcache.m4.10xlarge
\n cache.t4g.micro
,\n\t\t\t\t\t cache.t4g.small
,\n\t\t\t\t\t cache.t4g.medium
\n\t\t\t\t\t cache.t3.micro
, \n \t\t\t\t\t\tcache.t3.small
,\n \t\t\t\t\t\tcache.t3.medium
\n cache.t2.micro
, \n \t\t\t\t\t\tcache.t2.small
,\n \t\t\t\t\t\tcache.t2.medium
\n cache.t1.micro
\n cache.m1.small
, \n\t\t\t\t\t\t cache.m1.medium
, \n\t\t\t\t\t\t cache.m1.large
,\n\t\t\t\t\t\t cache.m1.xlarge
\n cache.m3.medium
,\n \t\t\t\t\t\tcache.m3.large
, \n \t\t\t\t\t\tcache.m3.xlarge
,\n \t\t\t\t\t\tcache.m3.2xlarge
\n \n
\n cache.c1.xlarge
\n \n
\n cache.r6gd.xlarge
,\n\t\t cache.r6gd.2xlarge
,\n\t\t cache.r6gd.4xlarge
,\n\t\t cache.r6gd.8xlarge
,\n\t\t cache.r6gd.12xlarge
,\n\t\t cache.r6gd.16xlarge
\n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n
\n cache.r6g.large
,\n\t\t\t\t\t\t\tcache.r6g.xlarge
,\n\t\t\t\t\t\t\tcache.r6g.2xlarge
,\n\t\t\t\t\t\t\tcache.r6g.4xlarge
,\n\t\t\t\t\t\t\tcache.r6g.8xlarge
,\n\t\t\t\t\t\t\tcache.r6g.12xlarge
,\n\t\t\t\t\t\t\tcache.r6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.r5.large
,\n \t\t\t\t\t cache.r5.xlarge
,\n \t\t\t\t\t cache.r5.2xlarge
,\n \t\t\t\t\t cache.r5.4xlarge
,\n \t\t\t\t\t cache.r5.12xlarge
,\n \t\t\t\t\t cache.r5.24xlarge
\n cache.r4.large
,\n \t\t\t\t\t cache.r4.xlarge
,\n \t\t\t\t\t cache.r4.2xlarge
,\n \t\t\t\t\t cache.r4.4xlarge
,\n \t\t\t\t\t cache.r4.8xlarge
,\n \t\t\t\t\t cache.r4.16xlarge
\n cache.m2.xlarge
, \n \t\t\t\t\t\tcache.m2.2xlarge
,\n \t\t\t\t\t\tcache.m2.4xlarge
\n cache.r3.large
, \n \t\t\t\t\t\tcache.r3.xlarge
,\n \t\t\t\t\t\tcache.r3.2xlarge
, \n \t\t\t\t\t\tcache.r3.4xlarge
,\n \t\t\t\t\t\tcache.r3.8xlarge
\n \n
"
+ "smithy.api#documentation": "appendonly
and \n\t\t\t\tappendfsync
are not supported on Redis version 2.8.22 and later.\n
\n\t\t\t\t\n\t\t \n
\n cache.m6g.large
,\n\t\t\t\t\t\t\tcache.m6g.xlarge
,\n\t\t\t\t\t\t\tcache.m6g.2xlarge
,\n\t\t\t\t\t\t\tcache.m6g.4xlarge
,\n\t\t\t\t\t\t\tcache.m6g.8xlarge
,\n\t\t\t\t\t\t\tcache.m6g.12xlarge
,\n\t\t\t\t\t\t\tcache.m6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.m5.large
,\n \t\t\t\t\t\tcache.m5.xlarge
,\n \t\t\t\t\t\tcache.m5.2xlarge
,\n \t\t\t\t\t\tcache.m5.4xlarge
,\n \t\t\t\t\t\tcache.m5.12xlarge
,\n \t\t\t\t\t\tcache.m5.24xlarge
\n \t\t\t\t\t\t\n \t\t\t\t\t\t\n \t\t\t\t\t\t cache.m4.large
,\n \t\t\t\t\t\tcache.m4.xlarge
,\n \t\t\t\t\t\tcache.m4.2xlarge
,\n \t\t\t\t\t\tcache.m4.4xlarge
,\n \t\t\t\t\t\tcache.m4.10xlarge
\n cache.t4g.micro
,\n\t\t\t\t\t cache.t4g.small
,\n\t\t\t\t\t cache.t4g.medium
\n\t\t\t\t\t cache.t3.micro
, \n \t\t\t\t\t\tcache.t3.small
,\n \t\t\t\t\t\tcache.t3.medium
\n cache.t2.micro
, \n \t\t\t\t\t\tcache.t2.small
,\n \t\t\t\t\t\tcache.t2.medium
\n cache.t1.micro
\n cache.m1.small
, \n\t\t\t\t\t\t cache.m1.medium
, \n\t\t\t\t\t\t cache.m1.large
,\n\t\t\t\t\t\t cache.m1.xlarge
\n cache.m3.medium
,\n \t\t\t\t\t\tcache.m3.large
, \n \t\t\t\t\t\tcache.m3.xlarge
,\n \t\t\t\t\t\tcache.m3.2xlarge
\n \n
\n cache.c1.xlarge
\n \n
\n cache.r6gd.xlarge
,\n\t\t cache.r6gd.2xlarge
,\n\t\t cache.r6gd.4xlarge
,\n\t\t cache.r6gd.8xlarge
,\n\t\t cache.r6gd.12xlarge
,\n\t\t cache.r6gd.16xlarge
\n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n
\n cache.r6g.large
,\n\t\t\t\t\t\t\tcache.r6g.xlarge
,\n\t\t\t\t\t\t\tcache.r6g.2xlarge
,\n\t\t\t\t\t\t\tcache.r6g.4xlarge
,\n\t\t\t\t\t\t\tcache.r6g.8xlarge
,\n\t\t\t\t\t\t\tcache.r6g.12xlarge
,\n\t\t\t\t\t\t\tcache.r6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.r5.large
,\n \t\t\t\t\t cache.r5.xlarge
,\n \t\t\t\t\t cache.r5.2xlarge
,\n \t\t\t\t\t cache.r5.4xlarge
,\n \t\t\t\t\t cache.r5.12xlarge
,\n \t\t\t\t\t cache.r5.24xlarge
\n cache.r4.large
,\n \t\t\t\t\t cache.r4.xlarge
,\n \t\t\t\t\t cache.r4.2xlarge
,\n \t\t\t\t\t cache.r4.4xlarge
,\n \t\t\t\t\t cache.r4.8xlarge
,\n \t\t\t\t\t cache.r4.16xlarge
\n cache.m2.xlarge
, \n \t\t\t\t\t\tcache.m2.2xlarge
,\n \t\t\t\t\t\tcache.m2.4xlarge
\n cache.r3.large
, \n \t\t\t\t\t\tcache.r3.xlarge
,\n \t\t\t\t\t\tcache.r3.2xlarge
, \n \t\t\t\t\t\tcache.r3.4xlarge
,\n \t\t\t\t\t\tcache.r3.8xlarge
\n \n
"
}
},
"Engine": {
@@ -1083,7 +1083,7 @@
}
},
"traits": {
- "smithy.api#documentation": "appendonly
and \n\t\t\t\tappendfsync
are not supported on Redis version 2.8.22 and later.\n
\n\t\t\t\t\n\t\t \n
\n cache.m6g.large
,\n\t\t\t\t\t\t\tcache.m6g.xlarge
,\n\t\t\t\t\t\t\tcache.m6g.2xlarge
,\n\t\t\t\t\t\t\tcache.m6g.4xlarge
,\n\t\t\t\t\t\t\tcache.m6g.8xlarge
,\n\t\t\t\t\t\t\tcache.m6g.12xlarge
,\n\t\t\t\t\t\t\tcache.m6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.m5.large
,\n \t\t\t\t\t\tcache.m5.xlarge
,\n \t\t\t\t\t\tcache.m5.2xlarge
,\n \t\t\t\t\t\tcache.m5.4xlarge
,\n \t\t\t\t\t\tcache.m5.12xlarge
,\n \t\t\t\t\t\tcache.m5.24xlarge
\n \t\t\t\t\t\t\n \t\t\t\t\t\t\n \t\t\t\t\t\t cache.m4.large
,\n \t\t\t\t\t\tcache.m4.xlarge
,\n \t\t\t\t\t\tcache.m4.2xlarge
,\n \t\t\t\t\t\tcache.m4.4xlarge
,\n \t\t\t\t\t\tcache.m4.10xlarge
\n cache.t4g.micro
,\n\t\t\t\t\t cache.t4g.small
,\n\t\t\t\t\t cache.t4g.medium
\n\t\t\t\t\t cache.t3.micro
, \n \t\t\t\t\t\tcache.t3.small
,\n \t\t\t\t\t\tcache.t3.medium
\n cache.t2.micro
, \n \t\t\t\t\t\tcache.t2.small
,\n \t\t\t\t\t\tcache.t2.medium
\n cache.t1.micro
\n cache.m1.small
, \n\t\t\t\t\t\t cache.m1.medium
, \n\t\t\t\t\t\t cache.m1.large
,\n\t\t\t\t\t\t cache.m1.xlarge
\n cache.m3.medium
,\n \t\t\t\t\t\tcache.m3.large
, \n \t\t\t\t\t\tcache.m3.xlarge
,\n \t\t\t\t\t\tcache.m3.2xlarge
\n \n
\n cache.c1.xlarge
\n \n
\n cache.r6gd.xlarge
,\n\t\t cache.r6gd.2xlarge
,\n\t\t cache.r6gd.4xlarge
,\n\t\t cache.r6gd.8xlarge
,\n\t\t cache.r6gd.12xlarge
,\n\t\t cache.r6gd.16xlarge
\n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n
\n cache.r6g.large
,\n\t\t\t\t\t\t\tcache.r6g.xlarge
,\n\t\t\t\t\t\t\tcache.r6g.2xlarge
,\n\t\t\t\t\t\t\tcache.r6g.4xlarge
,\n\t\t\t\t\t\t\tcache.r6g.8xlarge
,\n\t\t\t\t\t\t\tcache.r6g.12xlarge
,\n\t\t\t\t\t\t\tcache.r6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.r5.large
,\n \t\t\t\t\t cache.r5.xlarge
,\n \t\t\t\t\t cache.r5.2xlarge
,\n \t\t\t\t\t cache.r5.4xlarge
,\n \t\t\t\t\t cache.r5.12xlarge
,\n \t\t\t\t\t cache.r5.24xlarge
\n cache.r4.large
,\n \t\t\t\t\t cache.r4.xlarge
,\n \t\t\t\t\t cache.r4.2xlarge
,\n \t\t\t\t\t cache.r4.4xlarge
,\n \t\t\t\t\t cache.r4.8xlarge
,\n \t\t\t\t\t cache.r4.16xlarge
\n cache.m2.xlarge
, \n \t\t\t\t\t\tcache.m2.2xlarge
,\n \t\t\t\t\t\tcache.m2.4xlarge
\n cache.r3.large
, \n \t\t\t\t\t\tcache.r3.xlarge
,\n \t\t\t\t\t\tcache.r3.2xlarge
, \n \t\t\t\t\t\tcache.r3.4xlarge
,\n \t\t\t\t\t\tcache.r3.8xlarge
\n \n
"
+ "smithy.api#documentation": "appendonly
and \n\t\t\t\tappendfsync
are not supported on Redis version 2.8.22 and later.\n
\n\t\t\t\t\n\t\t \n
\n cache.m6g.large
,\n\t\t\t\t\t\t\tcache.m6g.xlarge
,\n\t\t\t\t\t\t\tcache.m6g.2xlarge
,\n\t\t\t\t\t\t\tcache.m6g.4xlarge
,\n\t\t\t\t\t\t\tcache.m6g.8xlarge
,\n\t\t\t\t\t\t\tcache.m6g.12xlarge
,\n\t\t\t\t\t\t\tcache.m6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.m5.large
,\n \t\t\t\t\t\tcache.m5.xlarge
,\n \t\t\t\t\t\tcache.m5.2xlarge
,\n \t\t\t\t\t\tcache.m5.4xlarge
,\n \t\t\t\t\t\tcache.m5.12xlarge
,\n \t\t\t\t\t\tcache.m5.24xlarge
\n \t\t\t\t\t\t\n \t\t\t\t\t\t\n \t\t\t\t\t\t cache.m4.large
,\n \t\t\t\t\t\tcache.m4.xlarge
,\n \t\t\t\t\t\tcache.m4.2xlarge
,\n \t\t\t\t\t\tcache.m4.4xlarge
,\n \t\t\t\t\t\tcache.m4.10xlarge
\n cache.t4g.micro
,\n\t\t\t\t\t cache.t4g.small
,\n\t\t\t\t\t cache.t4g.medium
\n\t\t\t\t\t cache.t3.micro
, \n \t\t\t\t\t\tcache.t3.small
,\n \t\t\t\t\t\tcache.t3.medium
\n cache.t2.micro
, \n \t\t\t\t\t\tcache.t2.small
,\n \t\t\t\t\t\tcache.t2.medium
\n cache.t1.micro
\n cache.m1.small
, \n\t\t\t\t\t\t cache.m1.medium
, \n\t\t\t\t\t\t cache.m1.large
,\n\t\t\t\t\t\t cache.m1.xlarge
\n cache.m3.medium
,\n \t\t\t\t\t\tcache.m3.large
, \n \t\t\t\t\t\tcache.m3.xlarge
,\n \t\t\t\t\t\tcache.m3.2xlarge
\n \n
\n cache.c1.xlarge
\n \n
\n cache.r6gd.xlarge
,\n\t\t cache.r6gd.2xlarge
,\n\t\t cache.r6gd.4xlarge
,\n\t\t cache.r6gd.8xlarge
,\n\t\t cache.r6gd.12xlarge
,\n\t\t cache.r6gd.16xlarge
\n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n\t\t \n
\n cache.r6g.large
,\n\t\t\t\t\t\t\tcache.r6g.xlarge
,\n\t\t\t\t\t\t\tcache.r6g.2xlarge
,\n\t\t\t\t\t\t\tcache.r6g.4xlarge
,\n\t\t\t\t\t\t\tcache.r6g.8xlarge
,\n\t\t\t\t\t\t\tcache.r6g.12xlarge
,\n\t\t\t\t\t\t\tcache.r6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.r5.large
,\n \t\t\t\t\t cache.r5.xlarge
,\n \t\t\t\t\t cache.r5.2xlarge
,\n \t\t\t\t\t cache.r5.4xlarge
,\n \t\t\t\t\t cache.r5.12xlarge
,\n \t\t\t\t\t cache.r5.24xlarge
\n cache.r4.large
,\n \t\t\t\t\t cache.r4.xlarge
,\n \t\t\t\t\t cache.r4.2xlarge
,\n \t\t\t\t\t cache.r4.4xlarge
,\n \t\t\t\t\t cache.r4.8xlarge
,\n \t\t\t\t\t cache.r4.16xlarge
\n cache.m2.xlarge
, \n \t\t\t\t\t\tcache.m2.2xlarge
,\n \t\t\t\t\t\tcache.m2.4xlarge
\n cache.r3.large
, \n \t\t\t\t\t\tcache.r3.xlarge
,\n \t\t\t\t\t\tcache.r3.2xlarge
, \n \t\t\t\t\t\tcache.r3.4xlarge
,\n \t\t\t\t\t\tcache.r3.8xlarge
\n \n
"
}
},
"com.amazonaws.elasticache#CacheNodeIdsList": {
@@ -2081,7 +2081,7 @@
"CacheNodeType": {
"target": "com.amazonaws.elasticache#String",
"traits": {
- "smithy.api#documentation": "appendonly
and \n\t\t\t\tappendfsync
are not supported on Redis version 2.8.22 and later.\n
\n\t\t\t\t\n\t\t \n
\n cache.m6g.large
,\n\t\t\t\t\t\t\tcache.m6g.xlarge
,\n\t\t\t\t\t\t\tcache.m6g.2xlarge
,\n\t\t\t\t\t\t\tcache.m6g.4xlarge
,\n\t\t\t\t\t\t\tcache.m6g.8xlarge
,\n\t\t\t\t\t\t\tcache.m6g.12xlarge
,\n\t\t\t\t\t\t\tcache.m6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.m5.large
,\n \t\t\t\t\t\tcache.m5.xlarge
,\n \t\t\t\t\t\tcache.m5.2xlarge
,\n \t\t\t\t\t\tcache.m5.4xlarge
,\n \t\t\t\t\t\tcache.m5.12xlarge
,\n \t\t\t\t\t\tcache.m5.24xlarge
\n \t\t\t\t\t\t\n \t\t\t\t\t\t\n \t\t\t\t\t\t cache.m4.large
,\n \t\t\t\t\t\tcache.m4.xlarge
,\n \t\t\t\t\t\tcache.m4.2xlarge
,\n \t\t\t\t\t\tcache.m4.4xlarge
,\n \t\t\t\t\t\tcache.m4.10xlarge
\n cache.t4g.micro
,\n\t\t\t\t\t cache.t4g.small
,\n\t\t\t\t\t cache.t4g.medium
\n\t\t\t\t\t cache.t3.micro
, \n \t\t\t\t\t\tcache.t3.small
,\n \t\t\t\t\t\tcache.t3.medium
\n cache.t2.micro
, \n \t\t\t\t\t\tcache.t2.small
,\n \t\t\t\t\t\tcache.t2.medium
\n cache.t1.micro
\n cache.m1.small
, \n\t\t\t\t\t\t cache.m1.medium
, \n\t\t\t\t\t\t cache.m1.large
,\n\t\t\t\t\t\t cache.m1.xlarge
\n cache.m3.medium
,\n \t\t\t\t\t\tcache.m3.large
, \n \t\t\t\t\t\tcache.m3.xlarge
,\n \t\t\t\t\t\tcache.m3.2xlarge
\n \n
\n cache.c1.xlarge
\n \n
\n cache.r6g.large
,\n\t\t\t\t\t\t\tcache.r6g.xlarge
,\n\t\t\t\t\t\t\tcache.r6g.2xlarge
,\n\t\t\t\t\t\t\tcache.r6g.4xlarge
,\n\t\t\t\t\t\t\tcache.r6g.8xlarge
,\n\t\t\t\t\t\t\tcache.r6g.12xlarge
,\n\t\t\t\t\t\t\tcache.r6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.r5.large
,\n \t\t\t\t\t cache.r5.xlarge
,\n \t\t\t\t\t cache.r5.2xlarge
,\n \t\t\t\t\t cache.r5.4xlarge
,\n \t\t\t\t\t cache.r5.12xlarge
,\n \t\t\t\t\t cache.r5.24xlarge
\n cache.r4.large
,\n \t\t\t\t\t cache.r4.xlarge
,\n \t\t\t\t\t cache.r4.2xlarge
,\n \t\t\t\t\t cache.r4.4xlarge
,\n \t\t\t\t\t cache.r4.8xlarge
,\n \t\t\t\t\t cache.r4.16xlarge
\n cache.m2.xlarge
, \n \t\t\t\t\t\tcache.m2.2xlarge
,\n \t\t\t\t\t\tcache.m2.4xlarge
\n cache.r3.large
, \n \t\t\t\t\t\tcache.r3.xlarge
,\n \t\t\t\t\t\tcache.r3.2xlarge
, \n \t\t\t\t\t\tcache.r3.4xlarge
,\n \t\t\t\t\t\tcache.r3.8xlarge
\n \n
"
+ "smithy.api#documentation": "appendonly
and \n\t\t\t\tappendfsync
are not supported on Redis version 2.8.22 and later.\n
\n\t\t\t\t\n\t\t \n
\n cache.m6g.large
,\n\t\t\t\t\t\t\tcache.m6g.xlarge
,\n\t\t\t\t\t\t\tcache.m6g.2xlarge
,\n\t\t\t\t\t\t\tcache.m6g.4xlarge
,\n\t\t\t\t\t\t\tcache.m6g.8xlarge
,\n\t\t\t\t\t\t\tcache.m6g.12xlarge
,\n\t\t\t\t\t\t\tcache.m6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.m5.large
,\n \t\t\t\t\t\tcache.m5.xlarge
,\n \t\t\t\t\t\tcache.m5.2xlarge
,\n \t\t\t\t\t\tcache.m5.4xlarge
,\n \t\t\t\t\t\tcache.m5.12xlarge
,\n \t\t\t\t\t\tcache.m5.24xlarge
\n \t\t\t\t\t\t\n \t\t\t\t\t\t\n \t\t\t\t\t\t cache.m4.large
,\n \t\t\t\t\t\tcache.m4.xlarge
,\n \t\t\t\t\t\tcache.m4.2xlarge
,\n \t\t\t\t\t\tcache.m4.4xlarge
,\n \t\t\t\t\t\tcache.m4.10xlarge
\n cache.t4g.micro
,\n\t\t\t\t\t cache.t4g.small
,\n\t\t\t\t\t cache.t4g.medium
\n\t\t\t\t\t cache.t3.micro
, \n \t\t\t\t\t\tcache.t3.small
,\n \t\t\t\t\t\tcache.t3.medium
\n cache.t2.micro
, \n \t\t\t\t\t\tcache.t2.small
,\n \t\t\t\t\t\tcache.t2.medium
\n cache.t1.micro
\n cache.m1.small
, \n\t\t\t\t\t\t cache.m1.medium
, \n\t\t\t\t\t\t cache.m1.large
,\n\t\t\t\t\t\t cache.m1.xlarge
\n cache.m3.medium
,\n \t\t\t\t\t\tcache.m3.large
, \n \t\t\t\t\t\tcache.m3.xlarge
,\n \t\t\t\t\t\tcache.m3.2xlarge
\n \n
\n cache.c1.xlarge
\n \n
\n cache.r6g.large
,\n\t\t\t\t\t\t\tcache.r6g.xlarge
,\n\t\t\t\t\t\t\tcache.r6g.2xlarge
,\n\t\t\t\t\t\t\tcache.r6g.4xlarge
,\n\t\t\t\t\t\t\tcache.r6g.8xlarge
,\n\t\t\t\t\t\t\tcache.r6g.12xlarge
,\n\t\t\t\t\t\t\tcache.r6g.16xlarge
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t cache.r5.large
,\n \t\t\t\t\t cache.r5.xlarge
,\n \t\t\t\t\t cache.r5.2xlarge
,\n \t\t\t\t\t cache.r5.4xlarge
,\n \t\t\t\t\t cache.r5.12xlarge
,\n \t\t\t\t\t cache.r5.24xlarge
\n cache.r4.large
,\n \t\t\t\t\t cache.r4.xlarge
,\n \t\t\t\t\t cache.r4.2xlarge
,\n \t\t\t\t\t cache.r4.4xlarge
,\n \t\t\t\t\t cache.r4.8xlarge
,\n \t\t\t\t\t cache.r4.16xlarge
\n cache.m2.xlarge
, \n \t\t\t\t\t\tcache.m2.2xlarge
,\n \t\t\t\t\t\tcache.m2.4xlarge
\n cache.r3.large
, \n \t\t\t\t\t\tcache.r3.xlarge
,\n \t\t\t\t\t\tcache.r3.2xlarge
, \n \t\t\t\t\t\tcache.r3.4xlarge
,\n \t\t\t\t\t\tcache.r3.8xlarge
\n \n
"
}
},
"Engine": {
@@ -2639,7 +2639,7 @@
"CacheNodeType": {
"target": "com.amazonaws.elasticache#String",
"traits": {
- "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
+ "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
}
},
"Engine": {
@@ -3344,6 +3344,9 @@
"input": {
"target": "com.amazonaws.elasticache#DeleteCacheParameterGroupMessage"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.elasticache#CacheParameterGroupNotFoundFault"
@@ -3382,6 +3385,9 @@
"input": {
"target": "com.amazonaws.elasticache#DeleteCacheSecurityGroupMessage"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.elasticache#CacheSecurityGroupNotFoundFault"
@@ -3420,6 +3426,9 @@
"input": {
"target": "com.amazonaws.elasticache#DeleteCacheSubnetGroupMessage"
},
+ "output": {
+ "target": "smithy.api#Unit"
+ },
"errors": [
{
"target": "com.amazonaws.elasticache#CacheSubnetGroupInUse"
@@ -4562,7 +4571,7 @@
"CacheNodeType": {
"target": "com.amazonaws.elasticache#String",
"traits": {
- "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
+ "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
}
},
"Duration": {
@@ -4641,7 +4650,7 @@
"CacheNodeType": {
"target": "com.amazonaws.elasticache#String",
"traits": {
- "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
+ "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
}
},
"Duration": {
@@ -8715,7 +8724,7 @@
"CacheNodeType": {
"target": "com.amazonaws.elasticache#String",
"traits": {
- "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
+ "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
}
},
"StartTime": {
@@ -8875,7 +8884,7 @@
"CacheNodeType": {
"target": "com.amazonaws.elasticache#String",
"traits": {
- "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
+ "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
}
},
"Duration": {
@@ -9445,7 +9454,7 @@
"CacheNodeType": {
"target": "com.amazonaws.elasticache#String",
"traits": {
- "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
+ "smithy.api#documentation": "appendonly and appendfsync are not supported on Redis version 2.8.22 and later. Supported node types: cache.m6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.m5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.m4.(large|xlarge|2xlarge|4xlarge|10xlarge), cache.t4g.(micro|small|medium), cache.t3.(micro|small|medium), cache.t2.(micro|small|medium), cache.t1.micro, cache.m1.(small|medium|large|xlarge), cache.m3.(medium|large|xlarge|2xlarge), cache.c1.xlarge, cache.r6gd.(xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r6g.(large|xlarge|2xlarge|4xlarge|8xlarge|12xlarge|16xlarge), cache.r5.(large|xlarge|2xlarge|4xlarge|12xlarge|24xlarge), cache.r4.(large|xlarge|2xlarge|4xlarge|8xlarge|16xlarge), cache.m2.(xlarge|2xlarge|4xlarge), cache.r3.(large|xlarge|2xlarge|4xlarge|8xlarge)"
}
},
"Engine": {
diff --git a/aws/sdk/aws-models/fsx.json b/aws/sdk/aws-models/fsx.json
index 071d7dbb54..95d1d84a19 100644
--- a/aws/sdk/aws-models/fsx.json
+++ b/aws/sdk/aws-models/fsx.json
@@ -1607,14 +1607,14 @@
"DeploymentType": {
"target": "com.amazonaws.fsx#OntapDeploymentType",
"traits": {
- "smithy.api#documentation": "MULTI_AZ_1 is the supported ONTAP deployment type. MULTI_AZ_1 - (Default) A high availability file system configured for Multi-AZ redundancy to tolerate temporary Availability Zone (AZ) unavailability. SINGLE_AZ_1 - A file system configured for Single-AZ redundancy. Required when DeploymentType is set to MULTI_AZ_1. This specifies the subnet in which you want the preferred file server to be located."
+ "smithy.api#documentation": "MULTI_AZ_1 is the supported ONTAP deployment type. MULTI_AZ_1 - (Default) A high availability file system configured for Multi-AZ redundancy to tolerate temporary Availability Zone (AZ) unavailability. SINGLE_AZ_1 - A file system configured for Single-AZ redundancy. Required when DeploymentType is set to MULTI_AZ_1. This specifies the subnet in which you want the preferred file server to be located."
- "smithy.api#documentation": "… REQUIRES_ACCEPTANCE. This is a trigger for your game to get acceptance from all players in the ticket. Acceptances are only valid for tickets when they are in this status; all other acceptances result in an error. … PLACING, where a new game session is created for the match. … SEARCHING to find a new match. For tickets where one or more players failed to respond, the ticket status is set to CANCELLED, and processing is terminated. A new matchmaking request for these players can be submitted as needed."
+ "smithy.api#documentation": "… REQUIRES_ACCEPTANCE. This is a trigger for your game to get acceptance from all players in the ticket. Acceptances are only valid for tickets when they are in this status; all other acceptances result in an error. … PLACING, where a new game session is created for the match. … CANCELLED, and processing is terminated. For tickets where players have accepted or not yet responded, the ticket status is returned to SEARCHING to find a new match. A new matchmaking request for these players can be submitted as needed.",
"smithy.api#required": {}
}
}
@@ -565,7 +565,7 @@
}
],
"traits": {
- "smithy.api#documentation": "UpdateAlias
.UpdateAlias
.CreateBuild
operation can be used in the following scenarios:\n
\n CreateBuild
and specify a build name, operating system, and the\n Amazon S3 storage location of your game build.CreateBuild
and specify a build name and\n operating system. This operation creates a new build resource and also returns an\n Amazon S3 location with temporary access credentials. Use the credentials to manually\n upload your build files to the specified Amazon S3 location. For more information,\n see Uploading Objects in the Amazon S3 Developer\n Guide. Build files can be uploaded to the GameLift Amazon S3 location\n once only; they can't be updated. INITIALIZED
status. A build must be in READY
\n status before you can create fleets with it.CreateBuild
operation can be used in the following scenarios:\n
\n CreateBuild
and specify a build name, operating system, and the\n Amazon S3 storage location of your game build.CreateBuild
and specify a build name and\n operating system. This operation creates a new build resource and also returns an\n Amazon S3 location with temporary access credentials. Use the credentials to manually\n upload your build files to the specified Amazon S3 location. For more information,\n see Uploading Objects in the Amazon S3 Developer\n Guide. Build files can be uploaded to the GameLift Amazon S3 location\n once only; they can't be updated. INITIALIZED
status. A build must be in READY
\n status before you can create fleets with it.StorageLocation
is specified, the size of your file\n can be found in your Amazon S3 bucket. Amazon Web Services will report a SizeOnDisk
of 0. \n StorageLocation
is specified, the size of your file\n can be found in your Amazon S3 bucket. Amazon GameLift will report a SizeOnDisk
of 0. \n ACTIVE
status before a game session can be created in it. \n
\n GameSession
object is returned containing the game session\n configuration and status. When the status is ACTIVE
, game session\n connection information is provided and player sessions can be created for the game\n session. By default, newly created game sessions are open to new players. You can\n restrict new player access by using UpdateGameSession to change the\n game session's player session creation policy.ACTIVE
status before a game session can be created in it. \n
\n GameSession
object is returned containing the game session\n configuration and status. When the status is ACTIVE
, game session\n connection information is provided and player sessions can be created for the game\n session. By default, newly created game sessions are open to new players. You can\n restrict new player access by using UpdateGameSession to change the\n game session's player session creation policy.ACTIVE
status and\n has a player creation policy of ACCEPT_ALL
. You can add a group of players\n to a game session with CreatePlayerSessions. ACTIVE
status and\n has a player creation policy of ACCEPT_ALL
. You can add a group of players\n to a game session with CreatePlayerSessions. ACTIVE
status and\n has a player creation policy of ACCEPT_ALL
. To add a single player to a\n game session, use CreatePlayerSession. ACTIVE
status and\n has a player creation policy of ACCEPT_ALL
. To add a single player to a\n game session, use CreatePlayerSession. PlayerIds
parameter are ignored. PlayerIds
parameter are ignored. \n
\n \n
\n ObjectVersion
parameter to specify an earlier\n version. ObjectVersion
parameter to specify an earlier\n version. DescribeGameSessions
should only be used for games in development with \n low game session usage.\n \n
\n GameSession
object is returned for each game session\n that matches the request.DescribeGameSessions
should only be used for games in development with \n low game session usage.\n \n
\n GameSession
object is returned for each game session\n that matches the request.\n
\n PlayerSession
object is returned for each session that\n matches the request.\n
\n PlayerSession
object is returned for each session that\n matches the request.FromPort
.FromPort
.[MetricName]
is [ComparisonOperator]
\n [Threshold]
for [EvaluationPeriods]
minutes, then\n [ScalingAdjustmentType]
to/by\n [ScalingAdjustment]
.[PercentIdleInstances]
is [GreaterThanThreshold]
\n [20]
for [15]
minutes, then\n [PercentChangeInCapacity]
to/by [10]
.[MetricName]
is [ComparisonOperator]
\n [Threshold]
for [EvaluationPeriods]
minutes, then\n [ScalingAdjustmentType]
to/by\n [ScalingAdjustment]
.[PercentIdleInstances]
is [GreaterThanThreshold]
\n [20]
for [15]
minutes, then\n [PercentChangeInCapacity]
to/by [10]
.\n
",
+ "smithy.api#documentation": "\n
",
"smithy.api#required": {}
}
},
@@ -9715,7 +9715,7 @@
}
],
"traits": {
- "smithy.api#documentation": "CreateBuild
request. If successful, a new set of credentials are\n returned, along with the S3 storage location associated with the build ID.CreateBuild
request. If successful, a new set of credentials are\n returned, along with the S3 storage location associated with the build ID.\n
"
+ "smithy.api#documentation": "\n
"
}
},
"PolicyType": {
@@ -10506,7 +10506,7 @@
}
],
"traits": {
- "smithy.api#documentation": "\n
\n FULFILLED
, a new game session has been created and a game session\n ARN and Region are referenced. If the placement request times out, you can resubmit the\n request or retry it with a different queue. \n
\n FULFILLED
, a new game session has been created and a game session\n ARN and Region are referenced. If the placement request times out, you can resubmit the\n request or retry it with a different queue. ObjectVersion
parameter to specify an earlier\n version. ObjectVersion
parameter to specify an earlier\n version. CustomEntityType
objects representing the custom patterns that have been created.CreateClassifier
to create.G.1X
and 2 for G.2X
workers). This value may be different than the executionEngineRuntime
* MaxCapacity
as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity
. Therefore, it is possible that the value of DPUSeconds
is less than executionEngineRuntime
* MaxCapacity
.CustomEntityType
objects representing custom patterns.\n
",
+ "smithy.api#documentation": "\n
",
"smithy.api#jsonName": "findingCriteria",
"smithy.api#required": {}
}
@@ -2306,7 +2307,7 @@
}
],
"traits": {
- "smithy.api#documentation": "arn:${Partition}:iotsitewise:${Region}:${Account}:asset/${AssetId}
\n hierarchyId
. A hierarchy specifies allowed parent/child asset relationships./company/windfarm/3/turbine/7/temperature
). For more information, see\n Mapping industrial data streams to asset properties in the\n IoT SiteWise User Guide.ASCENDING
\n \n
"
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyAggregatesErrorCode": {
+ "type": "string",
+ "traits": {
+ "smithy.api#enum": [
+ {
+ "value": "ResourceNotFoundException",
+ "name": "ResourceNotFoundException"
+ },
+ {
+ "value": "InvalidRequestException",
+ "name": "InvalidRequestException"
+ },
+ {
+ "value": "AccessDeniedException",
+ "name": "AccessDeniedException"
+ }
+ ]
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyAggregatesErrorEntries": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.iotsitewise#BatchGetAssetPropertyAggregatesErrorEntry"
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyAggregatesErrorEntry": {
+ "type": "structure",
+ "members": {
+ "errorCode": {
+ "target": "com.amazonaws.iotsitewise#BatchGetAssetPropertyAggregatesErrorCode",
+ "traits": {
+ "smithy.api#documentation": "assetId
and propertyId
of an asset property.propertyAlias
, which is a data stream alias (for example,\n /company/windfarm/3/turbine/7/temperature
). To define an asset property's alias, see UpdateAssetProperty.\n
"
+ }
+ }
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyAggregatesResponse": {
+ "type": "structure",
+ "members": {
+ "errorEntries": {
+ "target": "com.amazonaws.iotsitewise#BatchGetAssetPropertyAggregatesErrorEntries",
+ "traits": {
+ "smithy.api#documentation": "maxResults
. \n The maximum value of maxResults
is 4000.entryId
of the entry that failed.entryId
of the entry that succeeded and the latest query result.entryId
of the entry that skipped./company/windfarm/3/turbine/7/temperature
). For more information, see\n Mapping industrial data streams to asset properties in the\n IoT SiteWise User Guide.\n
"
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueErrorCode": {
+ "type": "string",
+ "traits": {
+ "smithy.api#enum": [
+ {
+ "value": "ResourceNotFoundException",
+ "name": "ResourceNotFoundException"
+ },
+ {
+ "value": "InvalidRequestException",
+ "name": "InvalidRequestException"
+ },
+ {
+ "value": "AccessDeniedException",
+ "name": "AccessDeniedException"
+ }
+ ]
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueErrorEntries": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueErrorEntry"
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueErrorEntry": {
+ "type": "structure",
+ "members": {
+ "errorCode": {
+ "target": "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueErrorCode",
+ "traits": {
+ "smithy.api#documentation": "assetId
and propertyId
of an asset property.propertyAlias
, which is a data stream alias (for example,\n /company/windfarm/3/turbine/7/temperature
). To define an asset property's alias, see UpdateAssetProperty./company/windfarm/3/turbine/7/temperature
). For more information, see\n Mapping industrial data streams to asset properties in the\n IoT SiteWise User Guide.ASCENDING
\n \n
"
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueHistoryErrorCode": {
+ "type": "string",
+ "traits": {
+ "smithy.api#enum": [
+ {
+ "value": "ResourceNotFoundException",
+ "name": "ResourceNotFoundException"
+ },
+ {
+ "value": "InvalidRequestException",
+ "name": "InvalidRequestException"
+ },
+ {
+ "value": "AccessDeniedException",
+ "name": "AccessDeniedException"
+ }
+ ]
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueHistoryErrorEntries": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueHistoryErrorEntry"
+ }
+ },
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueHistoryErrorEntry": {
+ "type": "structure",
+ "members": {
+ "errorCode": {
+ "target": "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueHistoryErrorCode",
+ "traits": {
+ "smithy.api#documentation": "assetId
and propertyId
of an asset property.propertyAlias
, which is a data stream alias (for example,\n /company/windfarm/3/turbine/7/temperature
). To define an asset property's alias, see UpdateAssetProperty.entryId
of the entry that failed.\n
"
}
}
}
},
- "com.amazonaws.iotsitewise#AssociatedAssetsSummaries": {
- "type": "list",
- "member": {
- "target": "com.amazonaws.iotsitewise#AssociatedAssetsSummary"
- }
- },
- "com.amazonaws.iotsitewise#AssociatedAssetsSummary": {
+ "com.amazonaws.iotsitewise#BatchGetAssetPropertyValueHistoryResponse": {
"type": "structure",
"members": {
- "id": {
- "target": "com.amazonaws.iotsitewise#ID",
- "traits": {
- "smithy.api#documentation": "maxResults
. \n The maximum value of maxResults
is 4000.arn:${Partition}:iotsitewise:${Region}:${Account}:asset/${AssetId}
\n entryId
of the entry that failed.entryId
of the entry that succeeded and the latest query result.entryId
of the entry that skipped.hierarchyId
. A hierarchy specifies allowed parent/child asset relationships.entryId
of the entry that failed.entryId
of the entry that succeeded and the latest query result.entryId
of the entry that skipped.\n
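The batch responses described above return three parallel lists keyed by `entryId` (failed, succeeded, skipped). A small sketch of how a caller might index such a response — the field names follow the shapes in this model, but the helper itself is hypothetical, not SDK code:

```python
# Hypothetical helper: index a BatchGetAssetPropertyValueHistory-style
# response by entryId, so each request entry maps to its outcome.
def index_by_entry_id(response):
    outcome = {}
    for entry in response.get("errorEntries", []):
        outcome[entry["entryId"]] = ("error", entry.get("errorCode"))
    for entry in response.get("successEntries", []):
        outcome[entry["entryId"]] = ("success", entry.get("assetPropertyValueHistory"))
    for entry in response.get("skippedEntries", []):
        outcome[entry["entryId"]] = ("skipped", entry.get("completionStatus"))
    return outcome

resp = {
    "errorEntries": [{"entryId": "e1", "errorCode": "ResourceNotFoundException"}],
    "successEntries": [{"entryId": "e2", "assetPropertyValueHistory": []}],
    "skippedEntries": [{"entryId": "e3", "completionStatus": "ERROR"}],
}
print(index_by_entry_id(resp)["e1"][0])  # error
```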
\n WirelessDeviceId
of the resource to add in the input array.WirelessGatewayId
of the resource to add in the input array.\"*\"
, it clears the entire downlink queue for a given\n device, specified by the wireless device ID. Otherwise, the downlink message with the\n specified message ID will be deleted.0
for UM (unacknowledged mode) or 1
for AM (acknowledged mode).0
for UM (unacknowledged mode)\n or 1
for AM (acknowledged mode).CUPS
for the Configuration and Update Server endpoint, or LNS
for the LoRaWAN Network Server endpoint.CUPS
for the\n Configuration and Update Server endpoint, or LNS
for the LoRaWAN Network Server endpoint or\n CLAIM
for the global endpoint.nextToken
value from a previous response;\n otherwise null to receive the first set of results.nextToken
value from a previous response;\n otherwise null to receive the first set of results.nextToken
value from a previous response; otherwise null to receive the first set of results.nextToken
value from a previous response; otherwise null to receive the first set of results.nextToken
value from a previous response; otherwise \n null to receive the first set of results.nextToken
value from a previous response; otherwise null to receive the first set of results.nextToken
value from a previous response; \n otherwise null to receive the first set of results.ERROR
to display\n less verbose logs containing only error information, or to INFO
for more detailed logs.WirelessDeviceId
of the resource to add in the input array.WirelessDeviceId
of the resources to remove in the input array.WirelessGatewayId
of the resource to add in the input array.WirelessGatewayId
of the resources to remove in the input array.
The Amazon Resource Name (ARN) of a Secrets Manager secret that contains the\n key-value pairs\n that are\n required to connect to your Quip file system. Windows is currently the\n only supported type. The secret must contain a JSON structure with the following\n keys:
\nusername—The Active Directory user name, along with the Domain Name\n System (DNS) domain\n name. For example,\n user@corp.example.com.\n The Active Directory user account must have read and mounting access to the Quip\n file system for Windows.
\npassword—The password of the Active Directory user account with \n read and mounting access to the Quip Windows file system.
\nSpecify whether to crawl file comments in your Quip data source. \n You can specify one or more of these options.
" + } + }, + "CrawlChatRooms": { + "target": "com.amazonaws.kendra#Boolean", + "traits": { + "smithy.api#documentation": "Specify whether to crawl chat rooms in your Quip data source. \n You can specify one or more of these options.
" + } + }, + "CrawlAttachments": { + "target": "com.amazonaws.kendra#Boolean", + "traits": { + "smithy.api#documentation": "Specify whether to crawl attachments in your Quip data source. \n You can specify one or more of these options.
" + } + }, + "FolderIds": { + "target": "com.amazonaws.kendra#FolderIdList", + "traits": { + "smithy.api#documentation": "The identifier of the Quip folder IDs to index.
" + } + }, + "ThreadFieldMappings": { + "target": "com.amazonaws.kendra#DataSourceToIndexFieldMappingList", + "traits": { + "smithy.api#documentation": "A list of field mappings to apply when indexing Quip threads.
" + } + }, + "MessageFieldMappings": { + "target": "com.amazonaws.kendra#DataSourceToIndexFieldMappingList", + "traits": { + "smithy.api#documentation": "A list of field mappings to apply when indexing Quip messages.
" + } + }, + "AttachmentFieldMappings": { + "target": "com.amazonaws.kendra#DataSourceToIndexFieldMappingList", + "traits": { + "smithy.api#documentation": "A list of field mappings to apply when indexing Quip attachments.
" + } + }, + "InclusionPatterns": { + "target": "com.amazonaws.kendra#DataSourceInclusionsExclusionsStrings", + "traits": { + "smithy.api#documentation": "A list of regular expression patterns to include certain files in your Quip file\n system. Files that match the patterns are included in the index. Files that don't match\n the patterns are excluded from the index. If a file matches both an inclusion pattern\n and an exclusion pattern, the exclusion pattern takes\n precedence,\n and the file isn't included in the index.
" + } + }, + "ExclusionPatterns": { + "target": "com.amazonaws.kendra#DataSourceInclusionsExclusionsStrings", + "traits": { + "smithy.api#documentation": "A list of regular expression patterns to exclude certain files in your Quip file\n system. Files that match the patterns are excluded from the index. Files that don’t\n match the patterns are included in the index. If a file matches both an inclusion\n pattern and an exclusion pattern, the exclusion pattern takes\n precedence,\n and the file isn't included in the index.
" + } + }, + "VpcConfiguration": { + "target": "com.amazonaws.kendra#DataSourceVpcConfiguration", + "traits": { + "smithy.api#documentation": "Configuration information for connecting to an Amazon Virtual Private Cloud\n (VPC)\n for your Quip. Your Quip instance must reside inside your VPC.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Provides the configuration information to connect to Quip as your data source.
" + } + }, "com.amazonaws.kendra#ReadAccessType": { "type": "string", "traits": { @@ -10273,6 +10414,9 @@ "input": { "target": "com.amazonaws.kendra#StopDataSourceSyncJobRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kendra#AccessDeniedException" @@ -10336,6 +10480,9 @@ "input": { "target": "com.amazonaws.kendra#SubmitFeedbackRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kendra#AccessDeniedException" @@ -10903,6 +11050,9 @@ "input": { "target": "com.amazonaws.kendra#UpdateDataSourceRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kendra#AccessDeniedException" @@ -10993,6 +11143,9 @@ "input": { "target": "com.amazonaws.kendra#UpdateExperienceRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kendra#AccessDeniedException" @@ -11065,6 +11218,9 @@ "input": { "target": "com.amazonaws.kendra#UpdateIndexRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kendra#AccessDeniedException" @@ -11157,6 +11313,9 @@ "input": { "target": "com.amazonaws.kendra#UpdateQuerySuggestionsBlockListRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kendra#AccessDeniedException" @@ -11229,6 +11388,9 @@ "input": { "target": "com.amazonaws.kendra#UpdateQuerySuggestionsConfigRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kendra#AccessDeniedException" @@ -11300,6 +11462,9 @@ "input": { "target": "com.amazonaws.kendra#UpdateThesaurusRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kendra#AccessDeniedException" diff --git a/aws/sdk/aws-models/kms.json b/aws/sdk/aws-models/kms.json index 96064356c9..6808c45563 100644 --- a/aws/sdk/aws-models/kms.json +++ b/aws/sdk/aws-models/kms.json @@ -160,7 
+160,7 @@ } ], "traits": { - "smithy.api#documentation": "Cancels the deletion of a KMS key. When this operation succeeds, the key state of the KMS\n key is Disabled
. To enable the KMS key, use EnableKey.
For more information about scheduling and canceling deletion of a KMS key, see Deleting KMS keys in the\n Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n Required permissions: kms:CancelKeyDeletion (key policy)
\n\n Related operations: ScheduleKeyDeletion\n
" + "smithy.api#documentation": "Cancels the deletion of a KMS key. When this operation succeeds, the key state of the KMS\n key is Disabled
. To enable the KMS key, use EnableKey.
For more information about scheduling and canceling deletion of a KMS key, see Deleting KMS keys in the\n Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n Required permissions: kms:CancelKeyDeletion (key policy)
\n\n Related operations: ScheduleKeyDeletion\n
" } }, "com.amazonaws.kms#CancelKeyDeletionRequest": { @@ -409,6 +409,9 @@ "input": { "target": "com.amazonaws.kms#CreateAliasRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kms#AlreadyExistsException" @@ -433,7 +436,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a friendly name for a KMS key.
\nAdding, deleting, or updating an alias can allow or deny permission to the KMS key. For details, see Using ABAC in KMS in the Key Management Service Developer Guide.
\nYou can use an alias to identify a KMS key in the KMS console, in the DescribeKey operation and in cryptographic operations, such as Encrypt and\n GenerateDataKey. You can also change the KMS key that's associated with\n the alias (UpdateAlias) or delete the alias (DeleteAlias)\n at any time. These operations don't affect the underlying KMS key.
\nYou can associate the alias with any customer managed key in the same Amazon Web Services Region. Each\n alias is associated with only one KMS key at a time, but a KMS key can have multiple aliases.\n A valid KMS key is required. You can't create an alias without a KMS key.
\nThe alias must be unique in the account and Region, but you can have aliases with the same\n name in different Regions. For detailed information about aliases, see Using aliases in the\n Key Management Service Developer Guide.
\nThis operation does not return a response. To get the alias that you created, use the\n ListAliases operation.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on an alias in a different Amazon Web Services account.
\n\n\n Required permissions\n
\n\n kms:CreateAlias on\n the alias (IAM policy).
\n\n kms:CreateAlias on\n the KMS key (key policy).
\nFor details, see Controlling access to aliases in the\n Key Management Service Developer Guide.
\n\n Related operations:\n
\n\n DeleteAlias\n
\n\n ListAliases\n
\n\n UpdateAlias\n
\nCreates a friendly name for a KMS key.
\nAdding, deleting, or updating an alias can allow or deny permission to the KMS key. For details, see ABAC in KMS in the Key Management Service Developer Guide.
\nYou can use an alias to identify a KMS key in the KMS console, in the DescribeKey operation and in cryptographic operations, such as Encrypt and\n GenerateDataKey. You can also change the KMS key that's associated with\n the alias (UpdateAlias) or delete the alias (DeleteAlias)\n at any time. These operations don't affect the underlying KMS key.
\nYou can associate the alias with any customer managed key in the same Amazon Web Services Region. Each\n alias is associated with only one KMS key at a time, but a KMS key can have multiple aliases.\n A valid KMS key is required. You can't create an alias without a KMS key.
\nThe alias must be unique in the account and Region, but you can have aliases with the same\n name in different Regions. For detailed information about aliases, see Using aliases in the\n Key Management Service Developer Guide.
\nThis operation does not return a response. To get the alias that you created, use the\n ListAliases operation.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on an alias in a different Amazon Web Services account.
\n\n\n Required permissions\n
\n\n kms:CreateAlias on\n the alias (IAM policy).
\n\n kms:CreateAlias on\n the KMS key (key policy).
\nFor details, see Controlling access to aliases in the\n Key Management Service Developer Guide.
\n\n Related operations:\n
\n\n DeleteAlias\n
\n\n ListAliases\n
\n\n UpdateAlias\n
\nAdds a grant to a KMS key.
\nA grant is a policy instrument that allows Amazon Web Services principals to use\n KMS keys in cryptographic operations. It also can allow them to view a KMS key (DescribeKey) and create and manage grants. When authorizing access to a KMS key,\n grants are considered along with key policies and IAM policies. Grants are often used for\n temporary permissions because you can create one, use its permissions, and delete it without\n changing your key policies or IAM policies.
\nFor detailed information about grants, including grant terminology, see Using grants in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\nThe CreateGrant
operation returns a GrantToken
and a\n GrantId
.
When you create, retire, or revoke a grant, there might be a brief delay, usually less than five minutes, until the grant is available throughout KMS. This state is known as eventual consistency. Once the grant has achieved eventual consistency, the grantee\n principal can use the permissions in the grant without identifying the grant.
\nHowever, to use the permissions in the grant immediately, use the\n GrantToken
that CreateGrant
returns. For details, see Using a\n grant token in the \n Key Management Service Developer Guide\n .
The CreateGrant
operation also returns a GrantId
. You can\n use the GrantId
and a key identifier to identify the grant in the RetireGrant and RevokeGrant operations. To find the grant\n ID, use the ListGrants or ListRetirableGrants\n operations.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes.\n To perform this operation on a KMS key in a different Amazon Web Services account, specify the key\n ARN in the value of the KeyId
parameter.
\n Required permissions: kms:CreateGrant (key policy)
\n\n Related operations:\n
\n\n ListGrants\n
\n\n ListRetirableGrants\n
\n\n RetireGrant\n
\n\n RevokeGrant\n
\nAdds a grant to a KMS key.
\nA grant is a policy instrument that allows Amazon Web Services principals to use\n KMS keys in cryptographic operations. It also can allow them to view a KMS key (DescribeKey) and create and manage grants. When authorizing access to a KMS key,\n grants are considered along with key policies and IAM policies. Grants are often used for\n temporary permissions because you can create one, use its permissions, and delete it without\n changing your key policies or IAM policies.
\nFor detailed information about grants, including grant terminology, see Grants in KMS in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\nThe CreateGrant
operation returns a GrantToken
and a\n GrantId
.
When you create, retire, or revoke a grant, there might be a brief delay, usually less than five minutes, until the grant is available throughout KMS. This state is known as eventual consistency. Once the grant has achieved eventual consistency, the grantee\n principal can use the permissions in the grant without identifying the grant.
\nHowever, to use the permissions in the grant immediately, use the\n GrantToken
that CreateGrant
returns. For details, see Using a\n grant token in the \n Key Management Service Developer Guide\n .
The CreateGrant
operation also returns a GrantId
. You can\n use the GrantId
and a key identifier to identify the grant in the RetireGrant and RevokeGrant operations. To find the grant\n ID, use the ListGrants or ListRetirableGrants\n operations.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes.\n To perform this operation on a KMS key in a different Amazon Web Services account, specify the key\n ARN in the value of the KeyId
parameter.
\n Required permissions: kms:CreateGrant (key policy)
\n\n Related operations:\n
\n\n ListGrants\n
\n\n ListRetirableGrants\n
\n\n RetireGrant\n
\n\n RevokeGrant\n
\nA list of operations that the grant permits.
\nThe operation must be supported on the KMS key. For example, you cannot create a grant for\n a symmetric KMS key that allows the Sign operation, or a grant for an\n asymmetric KMS key that allows the GenerateDataKey operation. If you try,\n KMS returns a ValidationError
exception. For details, see Grant\n operations in the Key Management Service Developer Guide.
A list of operations that the grant permits.
\nThis list must include only operations that are permitted in a grant. Also, the operation\n must be supported on the KMS key. For example, you cannot create a grant for a symmetric encryption KMS key that allows the Sign operation, or a grant for an asymmetric KMS key\n that allows the GenerateDataKey operation. If you try, KMS returns a\n ValidationError
exception. For details, see Grant operations in the\n Key Management Service Developer Guide.
Specifies a grant constraint.
\nKMS supports the EncryptionContextEquals
and\n EncryptionContextSubset
grant constraints. Each constraint value can include up\n to 8 encryption context pairs. The encryption context value in each constraint cannot exceed\n 384 characters.
These grant constraints allow the permissions in the grant only when the encryption\n context in the request matches (EncryptionContextEquals
) or includes\n (EncryptionContextSubset
) the encryption context specified in this structure.\n For information about grant constraints, see Using grant\n constraints in the Key Management Service Developer Guide. For more information about encryption context,\n see Encryption\n Context in the \n Key Management Service Developer Guide\n .
The encryption context grant constraints are supported only on operations that include an\n encryption context. You cannot use an encryption context grant constraint for cryptographic\n operations with asymmetric KMS keys or for management operations, such as DescribeKey or RetireGrant.
" + "smithy.api#documentation": "Specifies a grant constraint.
\nKMS supports the EncryptionContextEquals
and\n EncryptionContextSubset
grant constraints. Each constraint value can include up\n to 8 encryption context pairs. The encryption context value in each constraint cannot exceed\n 384 characters. For information about grant constraints, see Using grant\n constraints in the Key Management Service Developer Guide. For more information about encryption context,\n see Encryption\n context in the \n Key Management Service Developer Guide\n .
The encryption context grant constraints allow the permissions in the grant only when the\n encryption context in the request matches (EncryptionContextEquals
) or includes\n (EncryptionContextSubset
) the encryption context specified in this structure.
The encryption context grant constraints are supported only on grant operations that\n include an EncryptionContext
parameter, such as cryptographic operations on\n symmetric encryption KMS keys. Grants with grant constraints can include the DescribeKey and RetireGrant operations, but the constraint\n doesn't apply to these operations. If a grant with a grant constraint includes the\n CreateGrant
operation, the constraint requires that any grants created with the\n CreateGrant
permission have an equally strict or stricter encryption context\n constraint.
You cannot use an encryption context grant constraint for cryptographic operations with\n asymmetric KMS keys or HMAC KMS keys. These keys don't support an encryption context.
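The two constraint types described above can be modeled in a short Python sketch. This is an illustrative model of the documented semantics, not KMS code; the function names are invented for the example:

```python
# Illustrative model of KMS grant constraint evaluation (not the KMS
# implementation). A constraint and a request's encryption context are
# both plain key-value maps.

def equals_constraint(required, request_ctx):
    """EncryptionContextEquals: the request context must match exactly."""
    return request_ctx == required

def subset_constraint(required, request_ctx):
    """EncryptionContextSubset: the request context must include every
    required pair, and may carry additional pairs."""
    return all(request_ctx.get(k) == v for k, v in required.items())

constraint = {"dept": "IT"}
# A request with an extra pair satisfies Subset but not Equals.
subset_constraint(constraint, {"dept": "IT", "purpose": "test"})  # True
equals_constraint(constraint, {"dept": "IT", "purpose": "test"})  # False
```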
\n " } }, "GrantTokens": { @@ -680,7 +683,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a unique customer managed KMS key in your Amazon Web Services account and\n Region.
\nKMS is replacing the term customer master key (CMK) with KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term.
\nYou can use the CreateKey
operation to create symmetric or asymmetric KMS\n keys.
\n Symmetric KMS keys contain a 256-bit symmetric key\n that never leaves KMS unencrypted. To use the KMS key, you must call KMS. You can use\n a symmetric KMS key to encrypt and decrypt small amounts of data, but they are typically\n used to generate data keys and data key pairs. For details,\n see GenerateDataKey and GenerateDataKeyPair.
\n\n Asymmetric KMS keys can contain an RSA key pair or an\n Elliptic Curve (ECC) key pair. The private key in an asymmetric KMS key never leaves KMS\n unencrypted. However, you can use the GetPublicKey operation to download\n the public key so it can be used outside of KMS. KMS keys with RSA key pairs can be used\n to encrypt or decrypt data or sign and verify messages (but not both). KMS keys with ECC\n key pairs can be used only to sign and verify messages.
\nFor information about symmetric and asymmetric KMS keys, see Using Symmetric and Asymmetric KMS keys in the Key Management Service Developer Guide.
\n\n\nTo create different types of KMS keys, use the following guidance:
\n\nTo create an asymmetric KMS key, use the KeySpec
parameter to specify\n the type of key material in the KMS key. Then, use the KeyUsage
parameter\n to determine whether the KMS key will be used to encrypt and decrypt or sign and verify.\n You can't change these properties after the KMS key is created.
\n
When creating a symmetric KMS key, you don't need to specify the\n KeySpec
or KeyUsage
parameters. The default value for\n KeySpec
, SYMMETRIC_DEFAULT
, and the default value for\n KeyUsage
, ENCRYPT_DECRYPT
, are the only valid values for\n symmetric KMS keys.
\n
To create a multi-Region primary key in the local Amazon Web Services Region,\n use the MultiRegion
parameter with a value of True
. To create\n a multi-Region replica key, that is, a KMS key with the same key ID\n and key material as a primary key, but in a different Amazon Web Services Region, use the ReplicateKey operation. To change a replica key to a primary key, and its\n primary key to a replica key, use the UpdatePrimaryRegion\n operation.
This operation supports multi-Region keys, a KMS feature that lets you create multiple\n interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key\n material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt\n it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Using multi-Region keys in the Key Management Service Developer Guide.
\nYou can create symmetric and asymmetric multi-Region keys and multi-Region keys with\n imported key material. You cannot create multi-Region keys in a custom key store.
\n\n
To import your own key material, begin by creating a symmetric KMS key with no key\n material. To do this, use the Origin
parameter of CreateKey
\n with a value of EXTERNAL
. Next, use GetParametersForImport operation to get a public key and import token, and use the public key to encrypt\n your key material. Then, use ImportKeyMaterial with your import token\n to import the key material. For step-by-step instructions, see Importing Key Material in the \n Key Management Service Developer Guide\n . You\n cannot import the key material into an asymmetric KMS key.
To create a multi-Region primary key with imported key material, use the\n Origin
parameter of CreateKey
with a value of\n EXTERNAL
and the MultiRegion
parameter with a value of\n True
. To create replicas of the multi-Region primary key, use the ReplicateKey operation. For more information about multi-Region keys, see Using multi-Region keys in the Key Management Service Developer Guide.
\n
To create a symmetric KMS key in a custom key store, use the\n CustomKeyStoreId
parameter to specify the custom key store. You must also\n use the Origin
parameter with a value of AWS_CLOUDHSM
. The\n CloudHSM cluster that is associated with the custom key store must have at least two active\n HSMs in different Availability Zones in the Amazon Web Services Region.
You cannot create an asymmetric KMS key in a custom key store. For information about\n custom key stores in KMS see Using Custom Key Stores in\n the \n Key Management Service Developer Guide\n .
\n\n Cross-account use: No. You cannot use this operation to\n create a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:CreateKey (IAM policy). To use the\n Tags
parameter, kms:TagResource (IAM policy). For examples and information about related\n permissions, see Allow a user to create\n KMS keys in the Key Management Service Developer Guide.
\n Related operations:\n
\n\n DescribeKey\n
\n\n ListKeys\n
\n\n ScheduleKeyDeletion\n
\nCreates a unique customer managed KMS key in your Amazon Web Services account and\n Region.
\nIn addition to the required parameters, you can use the optional parameters to specify a key policy, description, tags, and other useful elements for any key type.
\nKMS is replacing the term customer master key (CMK) with KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term.
\nTo create different types of KMS keys, use the following guidance:
\n\nTo create a symmetric encryption KMS key, you aren't required to specify any parameters. The default value for\n KeySpec
, SYMMETRIC_DEFAULT
, and the default value for\n KeyUsage
, ENCRYPT_DECRYPT
, create a symmetric encryption KMS key.
If you need a key for basic encryption and decryption or you \n are creating a KMS key to protect your resources in an Amazon Web Services service, create a symmetric encryption KMS key. The key material in a symmetric encryption key never leaves KMS unencrypted. You can use a symmetric encryption KMS key to encrypt and decrypt data up to 4,096 bytes, but they are typically used to generate data keys and data key pairs. For details, see GenerateDataKey and GenerateDataKeyPair.
\n\n
To create an asymmetric KMS key, use the KeySpec
parameter to specify\n the type of key material in the KMS key. Then, use the KeyUsage
parameter\n to determine whether the KMS key will be used to encrypt and decrypt or sign and verify.\n You can't change these properties after the KMS key is created.
Asymmetric KMS keys contain an RSA key pair or an Elliptic Curve (ECC) key pair. The private key in an asymmetric \n KMS key never leaves KMS unencrypted. However, you can use the GetPublicKey operation to download the public key\n so it can be used outside of KMS. KMS keys with RSA key pairs can be used to encrypt or decrypt data or sign and verify messages (but not both). \n KMS keys with ECC key pairs can be used only to sign and verify messages. \n For information about asymmetric KMS keys, see Asymmetric KMS keys in the Key Management Service Developer Guide.
\n\n
To create an HMAC KMS key, set the KeySpec
parameter to a\n key spec value for HMAC KMS keys. Then set the KeyUsage
parameter to\n GENERATE_VERIFY_MAC
. You must set the key usage even though\n GENERATE_VERIFY_MAC
is the only valid key usage value for HMAC KMS keys.\n You can't change these properties after the KMS key is created.
HMAC KMS keys are symmetric keys that never leave KMS unencrypted. You can use\n HMAC keys to generate (GenerateMac) and verify (VerifyMac) HMAC codes for messages up to 4096 bytes.
\nHMAC KMS keys are not supported in all Amazon Web Services Regions. If you try to create an HMAC\n KMS key in an Amazon Web Services Region in which HMAC keys are not supported, the\n CreateKey
operation returns an\n UnsupportedOperationException
. For a list of Regions in which HMAC KMS keys\n are supported, see HMAC keys in\n KMS in the Key Management Service Developer Guide.
\n
To create a multi-Region primary key in the local Amazon Web Services Region,\n use the MultiRegion
parameter with a value of True
. To create\n a multi-Region replica key, that is, a KMS key with the same key ID\n and key material as a primary key, but in a different Amazon Web Services Region, use the ReplicateKey operation. To change a replica key to a primary key, and its\n primary key to a replica key, use the UpdatePrimaryRegion\n operation.
You can create multi-Region KMS keys for all supported KMS key types: symmetric\n encryption KMS keys, HMAC KMS keys, asymmetric encryption KMS keys, and asymmetric\n signing KMS keys. You can also create multi-Region keys with imported key material.\n However, you can't create multi-Region keys in a custom key store.
\nThis operation supports multi-Region keys, a KMS feature that lets you create multiple\n interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key\n material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt\n it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
\n\n
To import your own key material, begin by creating a symmetric encryption KMS key with no key\n material. To do this, use the Origin
parameter of CreateKey
\n with a value of EXTERNAL
. Next, use GetParametersForImport operation to get a public key and import token, and use the public key to encrypt\n your key material. Then, use ImportKeyMaterial with your import token\n to import the key material. For step-by-step instructions, see Importing Key Material in the \n Key Management Service Developer Guide\n .
This feature supports only symmetric encryption KMS keys, including multi-Region symmetric encryption KMS keys. You cannot import key\n material into any other type of KMS key.
\nTo create a multi-Region primary key with imported key material, use the\n Origin
parameter of CreateKey
with a value of\n EXTERNAL
and the MultiRegion
parameter with a value of\n True
. To create replicas of the multi-Region primary key, use the ReplicateKey operation. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
\n
To create a symmetric encryption KMS key in a custom key store, use the\n CustomKeyStoreId
parameter to specify the custom key store. You must also\n use the Origin
parameter with a value of AWS_CLOUDHSM
. The\n CloudHSM cluster that is associated with the custom key store must have at least two active\n HSMs in different Availability Zones in the Amazon Web Services Region.
Custom key stores support only symmetric encryption KMS keys. You cannot create an\n HMAC KMS key or an asymmetric KMS key in a custom key store. For information about\n custom key stores in KMS see Custom key stores in KMS in\n the \n Key Management Service Developer Guide\n .
\n\n Cross-account use: No. You cannot use this operation to\n create a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:CreateKey (IAM policy). To use the\n Tags
parameter, kms:TagResource (IAM policy). For examples and information about related\n permissions, see Allow a user to create\n KMS keys in the Key Management Service Developer Guide.
\n Related operations:\n
\n\n DescribeKey\n
\n\n ListKeys\n
\n\n ScheduleKeyDeletion\n
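The per-key-type guidance above can be condensed into a small helper. This is an illustrative sketch: the dict keys are the documented CreateKey request parameters, but the helper and its informal key-kind labels are invented for the example.

```python
# Hypothetical helper assembling CreateKey request parameters following
# the guidance above. KeySpec, KeyUsage, Origin, and MultiRegion are
# documented CreateKey fields; the helper itself is illustrative.

def create_key_params(kind):
    if kind == "symmetric":
        # Defaults apply: SYMMETRIC_DEFAULT / ENCRYPT_DECRYPT.
        return {}
    if kind == "rsa-signing":
        return {"KeySpec": "RSA_2048", "KeyUsage": "SIGN_VERIFY"}
    if kind == "hmac":
        # GENERATE_VERIFY_MAC must be set explicitly for HMAC keys.
        return {"KeySpec": "HMAC_256", "KeyUsage": "GENERATE_VERIFY_MAC"}
    if kind == "multi-region":
        return {"MultiRegion": True}
    if kind == "imported":
        # Key with no key material; import it with ImportKeyMaterial.
        return {"Origin": "EXTERNAL"}
    raise ValueError("unknown key kind: %s" % kind)
```

With an AWS SDK such as boto3, the result could be splatted into the call, e.g. `kms.create_key(**create_key_params("hmac"))`, assuming credentials and permissions are in place.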
\nDetermines the cryptographic operations for which you can use the KMS key. The default value is\n ENCRYPT_DECRYPT
. This parameter is required only for asymmetric KMS keys. You\n can't change the KeyUsage
value after the KMS key is created.
Select only one valid value.
\nFor symmetric KMS keys, omit the parameter or specify\n ENCRYPT_DECRYPT
.
For asymmetric KMS keys with RSA key material, specify ENCRYPT_DECRYPT
or\n SIGN_VERIFY
.
For asymmetric KMS keys with ECC key material, specify\n SIGN_VERIFY
.
Determines the cryptographic operations for which you can use the KMS key. The default value is\n ENCRYPT_DECRYPT
. This parameter is optional when you are creating a symmetric\n encryption KMS key; otherwise, it is required. You\n can't change the KeyUsage
value after the KMS key is created.
Select only one valid value.
\nFor symmetric encryption KMS keys, omit the parameter or specify\n ENCRYPT_DECRYPT
.
For HMAC KMS keys (symmetric), specify GENERATE_VERIFY_MAC
.
For asymmetric KMS keys with RSA key material, specify ENCRYPT_DECRYPT
or\n SIGN_VERIFY
.
For asymmetric KMS keys with ECC key material, specify\n SIGN_VERIFY
.
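The valid-value rules in the list above can be expressed as a lookup table. This is illustrative only; the family names on the left are informal labels for this sketch, not KMS enum values.

```python
# Illustrative mapping of the valid KeyUsage values listed above to
# each key family; not an official KMS validation routine.

VALID_KEY_USAGE = {
    "SYMMETRIC_ENCRYPTION": {"ENCRYPT_DECRYPT"},
    "HMAC": {"GENERATE_VERIFY_MAC"},
    "RSA": {"ENCRYPT_DECRYPT", "SIGN_VERIFY"},  # one or the other, not both
    "ECC": {"SIGN_VERIFY"},
}

def usage_is_valid(key_family, key_usage):
    return key_usage in VALID_KEY_USAGE.get(key_family, set())
```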
Specifies the type of KMS key to create. The default value,\n SYMMETRIC_DEFAULT
, creates a KMS key with a 256-bit symmetric key for encryption\n and decryption. For help choosing a key spec for your KMS key, see How to Choose Your KMS key\n Configuration in the \n Key Management Service Developer Guide\n .
The KeySpec
determines whether the KMS key contains a symmetric key or an\n asymmetric key pair. It also determines the encryption algorithms or signing algorithms that\n the KMS key supports. You can't change the KeySpec
after the KMS key is created.\n To further restrict the algorithms that can be used with the KMS key, use a condition key in\n its key policy or IAM policy. For more information, see kms:EncryptionAlgorithm or kms:Signing Algorithm in the \n Key Management Service Developer Guide\n .
\n Amazon Web Services services that\n are integrated with KMS use symmetric KMS keys to protect your data. These\n services do not support asymmetric KMS keys. For help determining whether a KMS key is\n symmetric or asymmetric, see Identifying Symmetric and Asymmetric\n KMS keys in the Key Management Service Developer Guide.
\nKMS supports the following key specs for KMS keys:
\nSymmetric key (default)
\n\n SYMMETRIC_DEFAULT
(AES-256-GCM)
Asymmetric RSA key pairs
\n\n RSA_2048
\n
\n RSA_3072
\n
\n RSA_4096
\n
Asymmetric NIST-recommended elliptic curve key pairs
\n\n ECC_NIST_P256
(secp256r1)
\n ECC_NIST_P384
(secp384r1)
\n ECC_NIST_P521
(secp521r1)
Other asymmetric elliptic curve key pairs
\n\n ECC_SECG_P256K1
(secp256k1), commonly used for\n cryptocurrencies.
Specifies the type of KMS key to create. The default value,\n SYMMETRIC_DEFAULT
, creates a KMS key with a 256-bit symmetric key for encryption\n and decryption. For help choosing a key spec for your KMS key, see Choosing a KMS key type in\n the \n Key Management Service Developer Guide\n .
The KeySpec
determines whether the KMS key contains a symmetric key or an\n asymmetric key pair. It also determines the algorithms that the KMS key supports. You can't\n change the KeySpec
after the KMS key is created. To further restrict the\n algorithms that can be used with the KMS key, use a condition key in its key policy or IAM\n policy. For more information, see kms:EncryptionAlgorithm, kms:MacAlgorithm or kms:Signing Algorithm in the \n Key Management Service Developer Guide\n .
\n Amazon Web Services services that\n are integrated with KMS use symmetric encryption KMS keys to protect your data.\n These services do not support asymmetric KMS keys or HMAC KMS keys.
\nKMS supports the following key specs for KMS keys:
\nSymmetric encryption key (default)
\n\n SYMMETRIC_DEFAULT
(AES-256-GCM)
HMAC keys (symmetric)
\n\n HMAC_224
\n
\n HMAC_256
\n
\n HMAC_384
\n
\n HMAC_512
\n
Asymmetric RSA key pairs
\n\n RSA_2048
\n
\n RSA_3072
\n
\n RSA_4096
\n
Asymmetric NIST-recommended elliptic curve key pairs
\n\n ECC_NIST_P256
(secp256r1)
\n ECC_NIST_P384
(secp384r1)
\n ECC_NIST_P521
(secp521r1)
Other asymmetric elliptic curve key pairs
\n\n ECC_SECG_P256K1
(secp256k1), commonly used for\n cryptocurrencies.
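For reference, the specs enumerated above (including the HMAC specs this change introduces) can be collected into a simple membership check. The set below is a sketch transcribed from the list, not an authoritative enum:

```python
# Key specs listed above, gathered into a lookup set (illustrative).
KEY_SPECS = {
    "SYMMETRIC_DEFAULT",                                # AES-256-GCM
    "HMAC_224", "HMAC_256", "HMAC_384", "HMAC_512",     # HMAC keys
    "RSA_2048", "RSA_3072", "RSA_4096",                 # RSA key pairs
    "ECC_NIST_P256", "ECC_NIST_P384", "ECC_NIST_P521",  # NIST curves
    "ECC_SECG_P256K1",                                  # secp256k1
}

def is_valid_key_spec(spec):
    return spec in KEY_SPECS
```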
The source of the key material for the KMS key. You cannot change the origin after you\n create the KMS key. The default is AWS_KMS
, which means that KMS creates the\n key material.
To create a KMS key with no key material (for imported key material), set the value to\n EXTERNAL
. For more information about importing key material into KMS, see\n Importing Key\n Material in the Key Management Service Developer Guide. This value is valid only for symmetric KMS\n keys.
To create a KMS key in a KMS custom key store and create its key material in the\n associated CloudHSM cluster, set this value to AWS_CLOUDHSM
. You must also use the\n CustomKeyStoreId
parameter to identify the custom key store. This value is\n valid only for symmetric KMS keys.
The source of the key material for the KMS key. You cannot change the origin after you\n create the KMS key. The default is AWS_KMS
, which means that KMS creates the\n key material.
To create a KMS key with no key material (for imported key material), set the value to\n EXTERNAL
. For more information about importing key material into KMS, see\n Importing Key\n Material in the Key Management Service Developer Guide. This value is valid only for symmetric encryption KMS keys.
To create a KMS key in a KMS custom key store and create its key material in the\n associated CloudHSM cluster, set this value to AWS_CLOUDHSM
. You must also use the\n CustomKeyStoreId
parameter to identify the custom key store. This value is\n valid only for symmetric encryption KMS keys.
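A minimal sketch of the Origin rules above, assuming only the three documented values; the validation helper is invented for illustration:

```python
# Illustrative check of the documented Origin constraints: EXTERNAL and
# AWS_CLOUDHSM are valid only for symmetric encryption KMS keys.

SYMMETRIC_ONLY_ORIGINS = {"EXTERNAL", "AWS_CLOUDHSM"}

def origin_allowed(origin, key_spec):
    if origin == "AWS_KMS":  # default: KMS creates the key material
        return True
    return origin in SYMMETRIC_ONLY_ORIGINS and key_spec == "SYMMETRIC_DEFAULT"
```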
Creates the KMS key in the specified custom key store and the key material in its\n associated CloudHSM cluster. To create a KMS key in a custom key store, you must also specify the\n Origin
parameter with a value of AWS_CLOUDHSM
. The CloudHSM cluster\n that is associated with the custom key store must have at least two active HSMs, each in a\n different Availability Zone in the Region.
This parameter is valid only for symmetric KMS keys and regional KMS keys. You cannot\n create an asymmetric KMS key or a multi-Region key in a custom key store.
\nTo find the ID of a custom key store, use the DescribeCustomKeyStores operation.
\nThe response includes the custom key store ID and the ID of the CloudHSM cluster.
\nThis operation is part of the Custom Key Store feature in KMS, which\ncombines the convenience and extensive integration of KMS with the isolation and control of a\nsingle-tenant key store.
" + "smithy.api#documentation": "Creates the KMS key in the specified custom key store and the key material in its\n associated CloudHSM cluster. To create a KMS key in a custom key store, you must also specify the\n Origin
parameter with a value of AWS_CLOUDHSM
. The CloudHSM cluster\n that is associated with the custom key store must have at least two active HSMs, each in a\n different Availability Zone in the Region.
This parameter is valid only for symmetric encryption KMS keys in a single Region. You \n cannot create any other type of KMS key in a custom key store.
\nTo find the ID of a custom key store, use the DescribeCustomKeyStores operation.
\nThe response includes the custom key store ID and the ID of the CloudHSM cluster.
\nThis operation is part of the Custom Key Store feature in KMS, which\ncombines the convenience and extensive integration of KMS with the isolation and control of a\nsingle-tenant key store.
" } }, "BypassPolicyLockoutSafetyCheck": { @@ -740,13 +743,13 @@ "Tags": { "target": "com.amazonaws.kms#TagList", "traits": { - "smithy.api#documentation": "Assigns one or more tags to the KMS key. Use this parameter to tag the KMS key when it is\n created. To tag an existing KMS key, use the TagResource operation.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see Using ABAC in KMS in the Key Management Service Developer Guide.
\nTo use this parameter, you must have kms:TagResource permission in an IAM policy.
\nEach tag consists of a tag key and a tag value. Both the tag key and the tag value are\n required, but the tag value can be an empty (null) string. You cannot have more than one tag\n on a KMS key with the same tag key. If you specify an existing tag key with a different tag\n value, KMS replaces the current tag value with the specified one.
\nWhen you add tags to an Amazon Web Services resource, Amazon Web Services generates a cost allocation\n report with usage and costs aggregated by tags. Tags can also be used to control access to a KMS key. For details,\n see Tagging Keys.
" + "smithy.api#documentation": "Assigns one or more tags to the KMS key. Use this parameter to tag the KMS key when it is\n created. To tag an existing KMS key, use the TagResource operation.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see ABAC in KMS in the Key Management Service Developer Guide.
\nTo use this parameter, you must have kms:TagResource permission in an IAM policy.
\nEach tag consists of a tag key and a tag value. Both the tag key and the tag value are\n required, but the tag value can be an empty (null) string. You cannot have more than one tag\n on a KMS key with the same tag key. If you specify an existing tag key with a different tag\n value, KMS replaces the current tag value with the specified one.
\nWhen you add tags to an Amazon Web Services resource, Amazon Web Services generates a cost allocation\n report with usage and costs aggregated by tags. Tags can also be used to control access to a KMS key. For details,\n see Tagging Keys.
" } }, "MultiRegion": { "target": "com.amazonaws.kms#NullableBooleanType", "traits": { - "smithy.api#documentation": "Creates a multi-Region primary key that you can replicate into other Amazon Web Services Regions. You\n cannot change this value after you create the KMS key.
\nFor a multi-Region key, set this parameter to True
. For a single-Region KMS\n key, omit this parameter or set it to False
. The default value is\n False
.
This operation supports multi-Region keys, an KMS feature that lets you create multiple\n interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key\n material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt\n it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Using multi-Region keys in the Key Management Service Developer Guide.
\nThis value creates a primary key, not a replica. To create a\n replica key, use the ReplicateKey operation.
\nYou can create a symmetric or asymmetric multi-Region key, and you can create a\n multi-Region key with imported key material. However, you cannot create a multi-Region key in\n a custom key store.
" + "smithy.api#documentation": "Creates a multi-Region primary key that you can replicate into other Amazon Web Services Regions. You\n cannot change this value after you create the KMS key.
\nFor a multi-Region key, set this parameter to True
. For a single-Region KMS\n key, omit this parameter or set it to False
. The default value is\n False
.
This operation supports multi-Region keys, an KMS feature that lets you create multiple\n interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key\n material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt\n it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
\nThis value creates a primary key, not a replica. To create a\n replica key, use the ReplicateKey operation.
\nYou can create a symmetric or asymmetric multi-Region key, and you can create a\n multi-Region key with imported key material. However, you cannot create a multi-Region key in\n a custom key store.
" } } } @@ -942,6 +945,22 @@ { "value": "SYMMETRIC_DEFAULT", "name": "SYMMETRIC_DEFAULT" + }, + { + "value": "HMAC_224", + "name": "HMAC_224" + }, + { + "value": "HMAC_256", + "name": "HMAC_256" + }, + { + "value": "HMAC_384", + "name": "HMAC_384" + }, + { + "value": "HMAC_512", + "name": "HMAC_512" } ] } @@ -1040,7 +1059,7 @@ } ], "traits": { - "smithy.api#documentation": "Decrypts ciphertext that was encrypted by a KMS key using any of the following\n operations:
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nYou can use this operation to decrypt ciphertext that was encrypted under a symmetric or\n asymmetric KMS key. When the KMS key is asymmetric, you must specify the KMS key and the\n encryption algorithm that was used to encrypt the ciphertext. For information about symmetric and asymmetric KMS keys, see Using Symmetric and Asymmetric KMS keys in the Key Management Service Developer Guide.
\nThe Decrypt operation also decrypts ciphertext that was encrypted outside of KMS by the\n public key in an KMS asymmetric KMS key. However, it cannot decrypt ciphertext produced by\n other libraries, such as the Amazon Web Services\n Encryption SDK or Amazon S3 client-side encryption.\n These libraries return a ciphertext format that is incompatible with KMS.
\nIf the ciphertext was encrypted under a symmetric KMS key, the KeyId
\n parameter is optional. KMS can get this information from metadata that it adds to the\n symmetric ciphertext blob. This feature adds durability to your implementation by ensuring\n that authorized users can decrypt ciphertext decades after it was encrypted, even if they've\n lost track of the key ID. However, specifying the KMS key is always recommended as a best\n practice. When you use the KeyId
parameter to specify a KMS key, KMS only uses\n the KMS key you specify. If the ciphertext was encrypted under a different KMS key, the\n Decrypt
operation fails. This practice ensures that you use the KMS key that\n you intend.
Whenever possible, use key policies to give users permission to call the\n Decrypt
operation on a particular KMS key, instead of using IAM policies.\n Otherwise, you might create an IAM user policy that gives the user Decrypt
\n permission on all KMS keys. This user could decrypt ciphertext that was encrypted by KMS keys\n in other accounts if the key policy for the cross-account KMS key permits it. If you must use\n an IAM policy for Decrypt
permissions, limit the user to particular KMS keys or\n particular trusted accounts. For details, see Best practices for IAM\n policies in the Key Management Service Developer Guide.
Applications in Amazon Web Services Nitro Enclaves can call this operation by using the Amazon Web Services Nitro Enclaves Development Kit. For information about the supporting parameters, see How Amazon Web Services Nitro Enclaves use KMS in the Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:Decrypt (key policy)
\n\n Related operations:\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\n\n ReEncrypt\n
\nDecrypts ciphertext that was encrypted by a KMS key using any of the following\n operations:
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nYou can use this operation to decrypt ciphertext that was encrypted under a symmetric encryption KMS key or an\n asymmetric encryption KMS key. When the KMS key is asymmetric, you must specify the KMS key and the\n encryption algorithm that was used to encrypt the ciphertext. For information about asymmetric KMS keys, see Asymmetric KMS keys in the Key Management Service Developer Guide.
\nThe Decrypt
operation also decrypts ciphertext that was encrypted outside of KMS by the\n public key in an KMS asymmetric KMS key. However, it cannot decrypt symmetric ciphertext produced by\n other libraries, such as the Amazon Web Services\n Encryption SDK or Amazon S3 client-side encryption.\n These libraries return a ciphertext format that is incompatible with KMS.
If the ciphertext was encrypted under a symmetric encryption KMS key, the KeyId
\n parameter is optional. KMS can get this information from metadata that it adds to the\n symmetric ciphertext blob. This feature adds durability to your implementation by ensuring\n that authorized users can decrypt ciphertext decades after it was encrypted, even if they've\n lost track of the key ID. However, specifying the KMS key is always recommended as a best\n practice. When you use the KeyId
parameter to specify a KMS key, KMS only uses\n the KMS key you specify. If the ciphertext was encrypted under a different KMS key, the\n Decrypt
operation fails. This practice ensures that you use the KMS key that\n you intend.
Whenever possible, use key policies to give users permission to call the\n Decrypt
operation on a particular KMS key, instead of using IAM policies.\n Otherwise, you might create an IAM user policy that gives the user Decrypt
\n permission on all KMS keys. This user could decrypt ciphertext that was encrypted by KMS keys\n in other accounts if the key policy for the cross-account KMS key permits it. If you must use\n an IAM policy for Decrypt
permissions, limit the user to particular KMS keys or\n particular trusted accounts. For details, see Best practices for IAM\n policies in the Key Management Service Developer Guide.
Applications in Amazon Web Services Nitro Enclaves can call this operation by using the Amazon Web Services Nitro Enclaves Development Kit. For information about the supporting parameters, see How Amazon Web Services Nitro Enclaves use KMS in the Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:Decrypt (key policy)
\n\n Related operations:\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\n\n ReEncrypt\n
\nSpecifies the encryption context to use when decrypting the data.\n An encryption context is valid only for cryptographic operations with a symmetric KMS key. The standard asymmetric encryption algorithms that KMS uses do not support an encryption context.
\nAn encryption context is a collection of non-secret key-value pairs that represents additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is optional when encrypting with a symmetric KMS key, but it is highly recommended.
\nFor more information, see\n Encryption\n Context in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Specifies the encryption context to use when decrypting the data.\n An encryption context is valid only for cryptographic operations with a symmetric encryption KMS key. The standard asymmetric encryption algorithms and HMAC algorithms that KMS uses do not support an encryption context.
\nAn encryption context is a collection of non-secret key-value pairs that represent additional authenticated data. \nWhen you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported\nonly on operations with symmetric encryption KMS keys. On operations with symmetric encryption KMS keys, an encryption context is optional, but it is strongly recommended.
\nFor more information, see\nEncryption context in the Key Management Service Developer Guide.
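The exact-match rule above can be illustrated with a local sketch. The comparison below is only a model of the contract, not how KMS implements the check (KMS binds the encryption context into the ciphertext as additional authenticated data):

```python
def contexts_match(encrypting_ctx: dict, decrypting_ctx: dict) -> bool:
    """Model of the contract: decryption succeeds only when the context
    supplied on Decrypt is an exact, case-sensitive match."""
    return encrypting_ctx == decrypting_ctx

# Same pairs, same case: the Decrypt request can succeed.
assert contexts_match({"Department": "Finance"}, {"Department": "Finance"})

# A case difference or a missing pair is a mismatch; KMS would fail the
# request with InvalidCiphertextException.
assert not contexts_match({"Department": "Finance"},
                          {"department": "finance"})
assert not contexts_match({"Department": "Finance", "Project": "Alpha"},
                          {"Department": "Finance"})
```
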
" } }, "GrantTokens": { @@ -1068,13 +1087,13 @@ "KeyId": { "target": "com.amazonaws.kms#KeyIdType", "traits": { - "smithy.api#documentation": "Specifies the KMS key that KMS uses to decrypt the ciphertext. Enter a key ID of the KMS\n key that was used to encrypt the ciphertext.
\n\nThis parameter is required only when the ciphertext was encrypted under an asymmetric KMS\n key. If you used a symmetric KMS key, KMS can get the KMS key from metadata that it adds to\n the symmetric ciphertext blob. However, it is always recommended as a best practice. This\n practice ensures that you use the KMS key that you intend.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
" + "smithy.api#documentation": "Specifies the KMS key that KMS uses to decrypt the ciphertext.
\n \nEnter a key ID of the KMS\n key that was used to encrypt the ciphertext. If you identify a different KMS key, the Decrypt
operation throws an IncorrectKeyException
.
This parameter is required only when the ciphertext was encrypted under an asymmetric KMS\n key. If you used a symmetric encryption KMS key, KMS can get the KMS key from metadata that it adds to\n the symmetric ciphertext blob. However, specifying the KMS key is always recommended as a best practice. This\n practice ensures that you use the KMS key that you intend.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
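The four example formats above can be distinguished mechanically. A heuristic sketch (assuming the standard `aws` partition; note that only the ARN forms, key ARN and alias ARN, work cross-account):

```python
def identifier_kind(key_id: str) -> str:
    """Heuristic classifier for the four accepted identifier formats.
    Assumes the standard "aws" partition; not an exhaustive validator."""
    if key_id.startswith("arn:aws:kms:"):
        return "alias ARN" if ":alias/" in key_id else "key ARN"
    if key_id.startswith("alias/"):
        return "alias name"
    return "key ID"

assert identifier_kind("1234abcd-12ab-34cd-56ef-1234567890ab") == "key ID"
assert identifier_kind(
    "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
) == "key ARN"
assert identifier_kind("alias/ExampleAlias") == "alias name"
assert identifier_kind(
    "arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias"
) == "alias ARN"
```
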
" } }, "EncryptionAlgorithm": { "target": "com.amazonaws.kms#EncryptionAlgorithmSpec", "traits": { - "smithy.api#documentation": "Specifies the encryption algorithm that will be used to decrypt the ciphertext. Specify\n the same algorithm that was used to encrypt the data. If you specify a different algorithm,\n the Decrypt
operation fails.
This parameter is required only when the ciphertext was encrypted under an asymmetric KMS\n key. The default value, SYMMETRIC_DEFAULT
, represents the only supported\n algorithm that is valid for symmetric KMS keys.
Specifies the encryption algorithm that will be used to decrypt the ciphertext. Specify\n the same algorithm that was used to encrypt the data. If you specify a different algorithm,\n the Decrypt
operation fails.
This parameter is required only when the ciphertext was encrypted under an asymmetric KMS\n key. The default value, SYMMETRIC_DEFAULT
, represents the only supported\n algorithm that is valid for symmetric encryption KMS keys.
Deletes the specified alias.
\nAdding, deleting, or updating an alias can allow or deny permission to the KMS key. For details, see Using ABAC in KMS in the Key Management Service Developer Guide.
\nBecause an alias is not a property of a KMS key, you can delete and change the aliases of\n a KMS key without affecting the KMS key. Also, aliases do not appear in the response from the\n DescribeKey operation. To get the aliases of all KMS keys, use the ListAliases operation.
\nEach KMS key can have multiple aliases. To change the alias of a KMS key, use DeleteAlias to delete the current alias and CreateAlias to\n create a new alias. To associate an existing alias with a different KMS key, call UpdateAlias.
\n\n Cross-account use: No. You cannot perform this operation on an alias in a different Amazon Web Services account.
\n\n Required permissions\n
\n\n kms:DeleteAlias on\n the alias (IAM policy).
\n\n kms:DeleteAlias on\n the KMS key (key policy).
\nFor details, see Controlling access to aliases in the\n Key Management Service Developer Guide.
\n\n Related operations:\n
\n\n CreateAlias\n
\n\n ListAliases\n
\n\n UpdateAlias\n
\nDeletes the specified alias.
\nAdding, deleting, or updating an alias can allow or deny permission to the KMS key. For details, see ABAC in KMS in the Key Management Service Developer Guide.
\nBecause an alias is not a property of a KMS key, you can delete and change the aliases of\n a KMS key without affecting the KMS key. Also, aliases do not appear in the response from the\n DescribeKey operation. To get the aliases of all KMS keys, use the ListAliases operation.
\nEach KMS key can have multiple aliases. To change the alias of a KMS key, use DeleteAlias to delete the current alias and CreateAlias to\n create a new alias. To associate an existing alias with a different KMS key, call UpdateAlias.
\n\n Cross-account use: No. You cannot perform this operation on an alias in a different Amazon Web Services account.
\n\n Required permissions\n
\n\n kms:DeleteAlias on\n the alias (IAM policy).
\n\n kms:DeleteAlias on\n the KMS key (key policy).
\nFor details, see Controlling access to aliases in the\n Key Management Service Developer Guide.
\n\n Related operations:\n
\n\n CreateAlias\n
\n\n ListAliases\n
\n\n UpdateAlias\n
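A minimal sketch of the alias naming convention that governs which aliases you can manage: this assumes the documented `alias/` prefix and the reserved `alias/aws/` prefix for Amazon Web Services managed keys, and is an illustration rather than the service's validation logic.

```python
def is_deletable_alias(alias_name: str) -> bool:
    """Sketch: customer aliases use the "alias/" prefix; "alias/aws/" is
    reserved for Amazon Web Services managed keys, which you cannot
    delete yourself."""
    return (alias_name.startswith("alias/")
            and not alias_name.startswith("alias/aws/"))

assert is_deletable_alias("alias/ExampleAlias")
assert not is_deletable_alias("alias/aws/s3")   # reserved, AWS managed
assert not is_deletable_alias("ExampleAlias")   # missing the required prefix
```
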
\nDeletes key material that you previously imported. This operation makes the specified KMS\n key unusable. For more information about importing key material into KMS, see Importing Key Material\n in the Key Management Service Developer Guide.
\nWhen the specified KMS key is in the PendingDeletion
state, this operation\n does not change the KMS key's state. Otherwise, it changes the KMS key's state to\n PendingImport
.
After you delete key material, you can use ImportKeyMaterial to reimport\n the same key material into the KMS key.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:DeleteImportedKeyMaterial (key policy)
\n\n Related operations:\n
\n\n ImportKeyMaterial\n
\nDeletes key material that you previously imported. This operation makes the specified KMS\n key unusable. For more information about importing key material into KMS, see Importing Key Material\n in the Key Management Service Developer Guide.
\nWhen the specified KMS key is in the PendingDeletion
state, this operation\n does not change the KMS key's state. Otherwise, it changes the KMS key's state to\n PendingImport
.
After you delete key material, you can use ImportKeyMaterial to reimport\n the same key material into the KMS key.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:DeleteImportedKeyMaterial (key policy)
\n\n Related operations:\n
\n\n ImportKeyMaterial\n
\nProvides detailed information about a KMS key. You can run DescribeKey
on a\n customer managed\n key or an Amazon Web Services managed key.
This detailed information includes the key ARN, creation date (and deletion date, if\n applicable), the key state, and the origin and expiration date (if any) of the key material.\n It includes fields, like KeySpec
, that help you distinguish symmetric from\n asymmetric KMS keys. It also provides information that is particularly important to asymmetric\n keys, such as the key usage (encryption or signing) and the encryption algorithms or signing\n algorithms that the KMS key supports. For KMS keys in custom key stores, it includes\n information about the custom key store, such as the key store ID and the CloudHSM cluster ID. For\n multi-Region keys, it displays the primary key and all related replica keys.
\n DescribeKey
does not return the following information:
Aliases associated with the KMS key. To get this information, use ListAliases.
\nWhether automatic key rotation is enabled on the KMS key. To get this information, use\n GetKeyRotationStatus. Also, some key states prevent a KMS key from\n being automatically rotated. For details, see How Automatic Key Rotation\n Works in Key Management Service Developer Guide.
\nTags on the KMS key. To get this information, use ListResourceTags.
\nKey policies and grants on the KMS key. To get this information, use GetKeyPolicy and ListGrants.
\nIf you call the DescribeKey
operation on a predefined Amazon Web Services\n alias, that is, an Amazon Web Services alias with no key ID, KMS creates an Amazon Web Services managed\n key. Then, it associates the alias with the new KMS key, and returns the\n KeyId
and Arn
of the new KMS key in the response.
\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:DescribeKey (key policy)
\n\n Related operations:\n
\n\n GetKeyPolicy\n
\n\n GetKeyRotationStatus\n
\n\n ListAliases\n
\n\n ListGrants\n
\n\n ListKeys\n
\n\n ListResourceTags\n
\n\n ListRetirableGrants\n
\nProvides detailed information about a KMS key. You can run DescribeKey
on a\n customer managed\n key or an Amazon Web Services managed key.
This detailed information includes the key ARN, creation date (and deletion date, if\n applicable), the key state, and the origin and expiration date (if any) of the key material.\n It includes fields, like KeySpec
, that help you distinguish different types of KMS keys. It also displays the key usage (encryption, signing, or generating and verifying MACs) and the algorithms that the KMS key supports. For KMS keys in custom key stores, it includes\n information about the custom key store, such as the key store ID and the CloudHSM cluster ID. For\n multi-Region keys, it displays the primary key and all related replica keys.
\n DescribeKey
does not return the following information:
Aliases associated with the KMS key. To get this information, use ListAliases.
\nWhether automatic key rotation is enabled on the KMS key. To get this information, use\n GetKeyRotationStatus. Also, some key states prevent a KMS key from\n being automatically rotated. For details, see How Automatic Key Rotation\n Works in Key Management Service Developer Guide.
\nTags on the KMS key. To get this information, use ListResourceTags.
\nKey policies and grants on the KMS key. To get this information, use GetKeyPolicy and ListGrants.
\nIn general, DescribeKey
is a non-mutating operation. It returns data about\n KMS keys, but doesn't change them. However, Amazon Web Services services use DescribeKey
to\n create Amazon Web Services\n managed keys from a predefined Amazon Web Services alias with no key\n ID.
\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:DescribeKey (key policy)
\n\n Related operations:\n
\n\n GetKeyPolicy\n
\n\n GetKeyRotationStatus\n
\n\n ListAliases\n
\n\n ListGrants\n
\n\n ListKeys\n
\n\n ListResourceTags\n
\n\n ListRetirableGrants\n
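The fields called out above can be pictured with a trimmed, hypothetical response. The field names below follow the documented `KeyMetadata` shape (`KeyId`, `Arn`, `KeySpec`, `KeyUsage`, `KeyState`, `Origin`), but the values are made up:

```python
# Hypothetical, heavily trimmed KeyMetadata as DescribeKey might return it.
key_metadata = {
    "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
    "Arn": "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    "KeySpec": "RSA_2048",
    "KeyUsage": "SIGN_VERIFY",
    "KeyState": "Enabled",
    "Origin": "AWS_KMS",
}

def is_symmetric_encryption_key(metadata: dict) -> bool:
    """SYMMETRIC_DEFAULT is the KeySpec of symmetric encryption KMS keys;
    any other KeySpec indicates an asymmetric or HMAC key."""
    return metadata.get("KeySpec") == "SYMMETRIC_DEFAULT"

# This example key is an asymmetric signing key, per KeySpec and KeyUsage.
assert not is_symmetric_encryption_key(key_metadata)
```
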
\nSets the state of a KMS key to disabled. This change temporarily prevents use of the KMS\n key for cryptographic operations.
\nFor more information about how key state affects the use of a KMS key, see Key state: Effect on your KMS\n key in the \n Key Management Service Developer Guide\n .
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:DisableKey (key policy)
\n\n Related operations: EnableKey\n
" + "smithy.api#documentation": "Sets the state of a KMS key to disabled. This change temporarily prevents use of the KMS\n key for cryptographic operations.
\nFor more information about how key state affects the use of a KMS key, see Key states of KMS keys in the \n Key Management Service Developer Guide\n .
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:DisableKey (key policy)
\n\n Related operations: EnableKey\n
" } }, "com.amazonaws.kms#DisableKeyRequest": { @@ -1419,6 +1447,9 @@ "input": { "target": "com.amazonaws.kms#DisableKeyRotationRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kms#DependencyTimeoutException" @@ -1443,7 +1474,7 @@ } ], "traits": { - "smithy.api#documentation": "Disables automatic\n rotation of the key material for the specified symmetric KMS key.
\nYou cannot enable automatic rotation of asymmetric KMS keys, KMS keys with imported key material, or KMS keys in a custom key store. To enable or disable automatic rotation of a set of related multi-Region keys, set the property on the primary key.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:DisableKeyRotation (key policy)
\n\n Related operations:\n
\n\n EnableKeyRotation\n
\n\n GetKeyRotationStatus\n
\nDisables automatic\n rotation of the key material for the specified symmetric encryption KMS key.
\nYou cannot enable automatic rotation of asymmetric KMS keys, HMAC KMS keys, KMS keys with imported key material, or KMS keys in a custom key store. To enable or disable automatic rotation of a set of related multi-Region keys, set the property on the primary key.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:DisableKeyRotation (key policy)
\n\n Related operations:\n
\n\n EnableKeyRotation\n
\n\n GetKeyRotationStatus\n
\nIdentifies a symmetric KMS key. You cannot enable or disable automatic rotation of asymmetric\n KMS keys, KMS keys with imported key material, or KMS keys in a\n custom key store.
\nSpecify the key ID or key ARN of the KMS key.
\nFor example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
", + "smithy.api#documentation": "Identifies a symmetric encryption KMS key. You cannot enable or disable automatic rotation of asymmetric\n KMS keys, HMAC KMS keys, KMS keys with imported key material, or KMS keys in a\n custom key store.
\nSpecify the key ID or key ARN of the KMS key.
\nFor example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
", "smithy.api#required": {} } } @@ -1519,6 +1550,9 @@ "input": { "target": "com.amazonaws.kms#EnableKeyRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kms#DependencyTimeoutException" @@ -1540,7 +1574,7 @@ } ], "traits": { - "smithy.api#documentation": "Sets the key state of a KMS key to enabled. This allows you to use the KMS key for\n cryptographic operations.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:EnableKey (key policy)
\n\n Related operations: DisableKey\n
" + "smithy.api#documentation": "Sets the key state of a KMS key to enabled. This allows you to use the KMS key for\n cryptographic operations.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:EnableKey (key policy)
\n\n Related operations: DisableKey\n
" } }, "com.amazonaws.kms#EnableKeyRequest": { @@ -1560,6 +1594,9 @@ "input": { "target": "com.amazonaws.kms#EnableKeyRotationRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kms#DependencyTimeoutException" @@ -1584,7 +1621,7 @@ } ], "traits": { - "smithy.api#documentation": "Enables automatic rotation\n of the key material for the specified symmetric KMS key.
\nYou cannot enable automatic rotation of asymmetric KMS keys, KMS keys with imported key material, or KMS keys in a custom key store. To enable or disable automatic rotation of a set of related multi-Region keys, set the property on the primary key.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:EnableKeyRotation (key policy)
\n\n Related operations:\n
\n\n DisableKeyRotation\n
\n\n GetKeyRotationStatus\n
\nEnables automatic rotation\n of the key material for the specified symmetric encryption KMS key.
\nYou cannot enable automatic rotation of asymmetric KMS keys, HMAC KMS keys, KMS keys with imported key material, or KMS keys in a custom key store. To enable or disable automatic rotation of a set of related multi-Region keys, set the property on the primary key.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:EnableKeyRotation (key policy)
\n\n Related operations:\n
\n\n DisableKeyRotation\n
\n\n GetKeyRotationStatus\n
\nIdentifies a symmetric KMS key. You cannot enable automatic rotation of asymmetric KMS keys, KMS keys with imported key material, or KMS keys in a custom key store. To enable or disable automatic rotation of a set of related multi-Region keys, set the property on the primary key.
\nSpecify the key ID or key ARN of the KMS key.
\nFor example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
", + "smithy.api#documentation": "Identifies a symmetric encryption KMS key. You cannot enable automatic rotation of asymmetric KMS keys, HMAC KMS keys, KMS keys with imported key material, or KMS keys in a custom key store. To enable or disable automatic rotation of a set of related multi-Region keys, set the property on the primary key.
\nSpecify the key ID or key ARN of the KMS key.
\nFor example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
", "smithy.api#required": {} } } @@ -1634,7 +1671,7 @@ } ], "traits": { - "smithy.api#documentation": "Encrypts plaintext into ciphertext by using a KMS key. The Encrypt
operation\n has two primary use cases:
You can encrypt small amounts of arbitrary data, such as a personal identifier or\n database password, or other sensitive information.
\nYou can use the Encrypt
operation to move encrypted data from one Amazon Web Services\n Region to another. For example, in Region A, generate a data key and use the plaintext key\n to encrypt your data. Then, in Region A, use the Encrypt
operation to encrypt\n the plaintext data key under a KMS key in Region B. Now, you can move the encrypted data\n and the encrypted data key to Region B. When necessary, you can decrypt the encrypted data\n key and the encrypted data entirely within in Region B.
You don't need to use the Encrypt
operation to encrypt a data key. The GenerateDataKey and GenerateDataKeyPair operations return a\n plaintext data key and an encrypted copy of that data key.
When you encrypt data, you must specify a symmetric or asymmetric KMS key to use in the\n encryption operation. The KMS key must have a KeyUsage
value of\n ENCRYPT_DECRYPT.
To find the KeyUsage
of a KMS key, use the DescribeKey operation.
If you use a symmetric KMS key, you can use an encryption context to add additional\n security to your encryption operation. If you specify an EncryptionContext
when\n encrypting data, you must specify the same encryption context (a case-sensitive exact match)\n when decrypting the data. Otherwise, the request to decrypt fails with an\n InvalidCiphertextException
. For more information, see Encryption\n Context in the Key Management Service Developer Guide.
If you specify an asymmetric KMS key, you must also specify the encryption algorithm. The\n algorithm must be compatible with the KMS key type.
\nWhen you use an asymmetric KMS key to encrypt or reencrypt data, be sure to record the KMS key and encryption algorithm that you choose. You will be required to provide the same KMS key and encryption algorithm when you decrypt the data. If the KMS key and algorithm do not match the values used to encrypt the data, the decrypt operation fails.
\nYou are not required to supply the key ID and encryption algorithm when you decrypt with symmetric KMS keys because KMS stores this information in the ciphertext blob. KMS cannot store metadata in ciphertext generated with asymmetric keys. The standard format for asymmetric key ciphertext does not include configurable fields.
\nThe maximum size of the data that you can encrypt varies with the type of KMS key and the\n encryption algorithm that you choose.
\nSymmetric KMS keys
\n\n SYMMETRIC_DEFAULT
: 4096 bytes
\n RSA_2048
\n
\n RSAES_OAEP_SHA_1
: 214 bytes
\n RSAES_OAEP_SHA_256
: 190 bytes
\n RSA_3072
\n
\n RSAES_OAEP_SHA_1
: 342 bytes
\n RSAES_OAEP_SHA_256
: 318 bytes
\n RSA_4096
\n
\n RSAES_OAEP_SHA_1
: 470 bytes
\n RSAES_OAEP_SHA_256
: 446 bytes
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes.\n To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:Encrypt (key policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nEncrypts plaintext of up to 4,096 bytes using a KMS key. You can use a symmetric or\n asymmetric KMS key with a KeyUsage
of ENCRYPT_DECRYPT
.
You can use this operation to encrypt small amounts of arbitrary data, such as a personal identifier or\n database password, or other sensitive information. You don't need to use the Encrypt
operation to encrypt a data key. The GenerateDataKey and GenerateDataKeyPair operations return a\n plaintext data key and an encrypted copy of that data key.
If you use a symmetric encryption KMS key, you can use an encryption context to add additional\n security to your encryption operation. If you specify an EncryptionContext
when\n encrypting data, you must specify the same encryption context (a case-sensitive exact match)\n when decrypting the data. Otherwise, the request to decrypt fails with an\n InvalidCiphertextException
. For more information, see Encryption\n Context in the Key Management Service Developer Guide.
If you specify an asymmetric KMS key, you must also specify the encryption algorithm. The\n algorithm must be compatible with the KMS key type.
\nWhen you use an asymmetric KMS key to encrypt or reencrypt data, be sure to record the KMS key and encryption algorithm that you choose. You will be required to provide the same KMS key and encryption algorithm when you decrypt the data. If the KMS key and algorithm do not match the values used to encrypt the data, the decrypt operation fails.
\nYou are not required to supply the key ID and encryption algorithm when you decrypt with symmetric encryption KMS keys because KMS stores this information in the ciphertext blob. KMS cannot store metadata in ciphertext generated with asymmetric keys. The standard format for asymmetric key ciphertext does not include configurable fields.
\nThe maximum size of the data that you can encrypt varies with the type of KMS key and the\n encryption algorithm that you choose.
\nSymmetric encryption KMS keys
\n\n SYMMETRIC_DEFAULT
: 4096 bytes
\n RSA_2048
\n
\n RSAES_OAEP_SHA_1
: 214 bytes
\n RSAES_OAEP_SHA_256
: 190 bytes
\n RSA_3072
\n
\n RSAES_OAEP_SHA_1
: 342 bytes
\n RSAES_OAEP_SHA_256
: 318 bytes
\n RSA_4096
\n
\n RSAES_OAEP_SHA_1
: 470 bytes
\n RSAES_OAEP_SHA_256
: 446 bytes
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes.\n To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:Encrypt (key policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nIdentifies the KMS key to use in the encryption operation.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", + "smithy.api#documentation": "Identifies the KMS key to use in the encryption operation. The KMS key must have a\n KeyUsage
of ENCRYPT_DECRYPT
. To find the KeyUsage
of\n a KMS key, use the DescribeKey operation.
To specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", "smithy.api#required": {} } }, @@ -1657,7 +1694,7 @@ "EncryptionContext": { "target": "com.amazonaws.kms#EncryptionContextType", "traits": { - "smithy.api#documentation": "Specifies the encryption context that will be used to encrypt the data.\n An encryption context is valid only for cryptographic operations with a symmetric KMS key. The standard asymmetric encryption algorithms that KMS uses do not support an encryption context.
\nAn encryption context is a collection of non-secret key-value pairs that represents additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is optional when encrypting with a symmetric KMS key, but it is highly recommended.
\nFor more information, see\n Encryption\n Context in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Specifies the encryption context that will be used to encrypt the data.\n An encryption context is valid only for cryptographic operations with a symmetric encryption KMS key. The standard asymmetric encryption algorithms and HMAC algorithms that KMS uses do not support an encryption context.
\nAn encryption context is a collection of non-secret key-value pairs that represent additional authenticated data. \nWhen you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported\nonly on operations with symmetric encryption KMS keys. On operations with symmetric encryption KMS keys, an encryption context is optional, but it is strongly recommended.
\nFor more information, see\nEncryption context in the Key Management Service Developer Guide.
" } }, "GrantTokens": { @@ -1669,7 +1706,7 @@ "EncryptionAlgorithm": { "target": "com.amazonaws.kms#EncryptionAlgorithmSpec", "traits": { - "smithy.api#documentation": "Specifies the encryption algorithm that KMS will use to encrypt the plaintext message.\n The algorithm must be compatible with the KMS key that you specify.
\nThis parameter is required only for asymmetric KMS keys. The default value,\n SYMMETRIC_DEFAULT
, is the algorithm used for symmetric KMS keys. If you are\n using an asymmetric KMS key, we recommend RSAES_OAEP_SHA_256.
Specifies the encryption algorithm that KMS will use to encrypt the plaintext message.\n The algorithm must be compatible with the KMS key that you specify.
\nThis parameter is required only for asymmetric KMS keys. The default value,\n SYMMETRIC_DEFAULT
, is the algorithm used for symmetric encryption KMS keys. If you are\n using an asymmetric KMS key, we recommend RSAES_OAEP_SHA_256.
Generates a unique symmetric data key for client-side encryption. This operation returns a\n plaintext copy of the data key and a copy that is encrypted under a KMS key that you specify.\n You can use the plaintext key to encrypt your data outside of KMS and store the encrypted\n data key with the encrypted data.
\n\n\n GenerateDataKey
returns a unique data key for each request. The bytes in the\n plaintext key are not related to the caller or the KMS key.
To generate a data key, specify the symmetric KMS key that will be used to encrypt the\n data key. You cannot use an asymmetric KMS key to generate data keys. To get the type of your\n KMS key, use the DescribeKey operation. You must also specify the length of\n the data key. Use either the KeySpec
or NumberOfBytes
parameters\n (but not both). For 128-bit and 256-bit data keys, use the KeySpec
parameter.
To get only an encrypted copy of the data key, use GenerateDataKeyWithoutPlaintext. To generate an asymmetric data key pair, use\n the GenerateDataKeyPair or GenerateDataKeyPairWithoutPlaintext operation. To get a cryptographically secure\n random byte string, use GenerateRandom.
\n\nYou can use the optional encryption context to add additional security to the encryption\n operation. If you specify an EncryptionContext
, you must specify the same\n encryption context (a case-sensitive exact match) when decrypting the encrypted data key.\n Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the\n Key Management Service Developer Guide.
Applications in Amazon Web Services Nitro Enclaves can call this operation by using the Amazon Web Services Nitro Enclaves Development Kit. For information about the supporting parameters, see How Amazon Web Services Nitro Enclaves use KMS in the Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n How to use your data\n key\n
\nWe recommend that you use the following pattern to encrypt data locally in your\n application. You can write your own code or use a client-side encryption library, such as the\n Amazon Web Services Encryption SDK, the\n Amazon DynamoDB Encryption Client,\n or Amazon S3\n client-side encryption to do these tasks for you.
\nTo encrypt data outside of KMS:
\nUse the GenerateDataKey
operation to get a data key.
Use the plaintext data key (in the Plaintext
field of the response) to\n encrypt your data outside of KMS. Then erase the plaintext data key from memory.
Store the encrypted data key (in the CiphertextBlob
field of the\n response) with the encrypted data.
To decrypt data outside of KMS:
\nUse the Decrypt operation to decrypt the encrypted data key. The\n operation returns a plaintext copy of the data key.
\nUse the plaintext data key to decrypt data outside of KMS, then erase the plaintext\n data key from memory.
\n\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateDataKey (key policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKeyPair\n
\nReturns a unique symmetric data key for use outside of KMS. This operation returns a\n plaintext copy of the data key and a copy that is encrypted under a symmetric encryption KMS\n key that you specify. The bytes in the plaintext key are random; they are not related to the caller or the KMS\n key. You can use the plaintext key to encrypt your data outside of KMS and store the\n encrypted data key with the encrypted data.
\n\nTo generate a data key, specify the symmetric encryption KMS key that will be used to\n encrypt the data key. You cannot use an asymmetric KMS key to encrypt data keys. To get the\n type of your KMS key, use the DescribeKey operation. You must also specify\n the length of the data key. Use either the KeySpec
or NumberOfBytes
\n parameters (but not both). For 128-bit and 256-bit data keys, use the KeySpec
\n parameter.
To get only an encrypted copy of the data key, use GenerateDataKeyWithoutPlaintext. To generate an asymmetric data key pair, use\n the GenerateDataKeyPair or GenerateDataKeyPairWithoutPlaintext operation. To get a cryptographically secure\n random byte string, use GenerateRandom.
\n\nYou can use an optional encryption context to add additional security to the encryption\n operation. If you specify an EncryptionContext
, you must specify the same\n encryption context (a case-sensitive exact match) when decrypting the encrypted data key.\n Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the\n Key Management Service Developer Guide.
Applications in Amazon Web Services Nitro Enclaves can call this operation by using the Amazon Web Services Nitro Enclaves Development Kit. For information about the supporting parameters, see How Amazon Web Services Nitro Enclaves use KMS in the Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n How to use your data\n key\n
\nWe recommend that you use the following pattern to encrypt data locally in your\n application. You can write your own code or use a client-side encryption library, such as the\n Amazon Web Services Encryption SDK, the\n Amazon DynamoDB Encryption Client,\n or Amazon S3\n client-side encryption to do these tasks for you.
\nTo encrypt data outside of KMS:
\nUse the GenerateDataKey
operation to get a data key.
Use the plaintext data key (in the Plaintext
field of the response) to\n encrypt your data outside of KMS. Then erase the plaintext data key from memory.
Store the encrypted data key (in the CiphertextBlob
field of the\n response) with the encrypted data.
To decrypt data outside of KMS:
\nUse the Decrypt operation to decrypt the encrypted data key. The\n operation returns a plaintext copy of the data key.
\nUse the plaintext data key to decrypt data outside of KMS, then erase the plaintext\n data key from memory.
\n\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateDataKey (key policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKeyPair\n
\nGenerates a unique asymmetric data key pair. The GenerateDataKeyPair
\n operation returns a plaintext public key, a plaintext private key, and a copy of the private\n key that is encrypted under the symmetric KMS key you specify. You can use the data key pair\n to perform asymmetric cryptography and implement digital signatures outside of KMS.
You can use the public key that GenerateDataKeyPair
returns to encrypt data\n or verify a signature outside of KMS. Then, store the encrypted private key with the data.\n When you are ready to decrypt data or sign a message, you can use the Decrypt operation to decrypt the encrypted private key.
To generate a data key pair, you must specify a symmetric KMS key to encrypt the private\n key in a data key pair. You cannot use an asymmetric KMS key or a KMS key in a custom key\n store. To get the type and origin of your KMS key, use the DescribeKey\n operation.
\nUse the KeyPairSpec
parameter to choose an RSA or Elliptic Curve (ECC) data\n key pair. KMS recommends that you use ECC key pairs for signing, and use RSA key pairs for\n either encryption or signing, but not both. However, KMS cannot enforce any restrictions on\n the use of data key pairs outside of KMS.
If you are using the data key pair to encrypt data, or for any operation where you don't\n immediately need a private key, consider using the GenerateDataKeyPairWithoutPlaintext operation.\n GenerateDataKeyPairWithoutPlaintext
returns a plaintext public key and an\n encrypted private key, but omits the plaintext private key that you need only to decrypt\n ciphertext or sign a message. Later, when you need to decrypt the data or sign a message, use\n the Decrypt operation to decrypt the encrypted private key in the data key\n pair.
\n GenerateDataKeyPair
returns a unique data key pair for each request. The\n bytes in the keys are not related to the caller or the KMS key that is used to encrypt the\n private key. The public key is a DER-encoded X.509 SubjectPublicKeyInfo, as specified in\n RFC 5280. The private key is a\n DER-encoded PKCS8 PrivateKeyInfo, as specified in RFC 5958.
You can use the optional encryption context to add additional security to the encryption\n operation. If you specify an EncryptionContext
, you must specify the same\n encryption context (a case-sensitive exact match) when decrypting the encrypted data key.\n Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the\n Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateDataKeyPair (key policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\nReturns a unique asymmetric data key pair for use outside of KMS. This operation returns\n a plaintext public key, a plaintext private key, and a copy of the private key that is\n encrypted under the symmetric encryption KMS key you specify. You can use the data key pair to\n perform asymmetric cryptography and implement digital signatures outside of KMS. The bytes\n in the keys are random; they are not related to the caller or to the KMS key that is used to encrypt the\n private key.
\n\nYou can use the public key that GenerateDataKeyPair
returns to encrypt data\n or verify a signature outside of KMS. Then, store the encrypted private key with the data.\n When you are ready to decrypt data or sign a message, you can use the Decrypt operation to decrypt the encrypted private key.
To generate a data key pair, you must specify a symmetric encryption KMS key to encrypt\n the private key in a data key pair. You cannot use an asymmetric KMS key or a KMS key in a\n custom key store. To get the type and origin of your KMS key, use the DescribeKey operation.
\nUse the KeyPairSpec
parameter to choose an RSA or Elliptic Curve (ECC) data\n key pair. KMS recommends that you use ECC key pairs for signing, and use RSA key pairs for\n either encryption or signing, but not both. However, KMS cannot enforce any restrictions on\n the use of data key pairs outside of KMS.
If you are using the data key pair to encrypt data, or for any operation where you don't\n immediately need a private key, consider using the GenerateDataKeyPairWithoutPlaintext operation.\n GenerateDataKeyPairWithoutPlaintext
returns a plaintext public key and an\n encrypted private key, but omits the plaintext private key that you need only to decrypt\n ciphertext or sign a message. Later, when you need to decrypt the data or sign a message, use\n the Decrypt operation to decrypt the encrypted private key in the data key\n pair.
\n GenerateDataKeyPair
returns a unique data key pair for each request. The\n bytes in the keys are random; they are not related to the caller or the KMS key that is used to encrypt the\n private key. The public key is a DER-encoded X.509 SubjectPublicKeyInfo, as specified in\n RFC 5280. The private key is a\n DER-encoded PKCS8 PrivateKeyInfo, as specified in RFC 5958.
You can use an optional encryption context to add additional security to the encryption\n operation. If you specify an EncryptionContext
, you must specify the same\n encryption context (a case-sensitive exact match) when decrypting the encrypted data key.\n Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the\n Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateDataKeyPair (key policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\nSpecifies the encryption context that will be used when encrypting the private key in the\n data key pair.
\nAn encryption context is a collection of non-secret key-value pairs that represents additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is optional when encrypting with a symmetric KMS key, but it is highly recommended.
\nFor more information, see\n Encryption\n Context in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Specifies the encryption context that will be used when encrypting the private key in the\n data key pair.
\nAn encryption context is a collection of non-secret key-value pairs that represent additional authenticated data. \nWhen you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported\nonly on operations with symmetric encryption KMS keys. On operations with symmetric encryption KMS keys, an encryption context is optional, but it is strongly recommended.
\nFor more information, see\nEncryption context in the Key Management Service Developer Guide.
" } }, "KeyId": { "target": "com.amazonaws.kms#KeyIdType", "traits": { - "smithy.api#documentation": "Specifies the symmetric KMS key that encrypts the private key in the data key pair. You\n cannot specify an asymmetric KMS key or a KMS key in a custom key store. To get the type and\n origin of your KMS key, use the DescribeKey operation.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", + "smithy.api#documentation": "Specifies the symmetric encryption KMS key that encrypts the private key in the data key\n pair. You cannot specify an asymmetric KMS key or a KMS key in a custom key store. To get the\n type and origin of your KMS key, use the DescribeKey operation.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", "smithy.api#required": {} } }, @@ -1900,7 +1937,7 @@ "PublicKey": { "target": "com.amazonaws.kms#PublicKeyType", "traits": { - "smithy.api#documentation": "The public key (in plaintext).
" + "smithy.api#documentation": "The public key (in plaintext). When you use the HTTP API or the Amazon Web Services CLI, the value is Base64-encoded. Otherwise, it is not Base64-encoded.
" } }, "KeyId": { @@ -1955,7 +1992,7 @@ } ], "traits": { - "smithy.api#documentation": "Generates a unique asymmetric data key pair. The\n GenerateDataKeyPairWithoutPlaintext
operation returns a plaintext public key\n and a copy of the private key that is encrypted under the symmetric KMS key you specify.\n Unlike GenerateDataKeyPair, this operation does not return a plaintext\n private key.
You can use the public key that GenerateDataKeyPairWithoutPlaintext
returns\n to encrypt data or verify a signature outside of KMS. Then, store the encrypted private key\n with the data. When you are ready to decrypt data or sign a message, you can use the Decrypt operation to decrypt the encrypted private key.
To generate a data key pair, you must specify a symmetric KMS key to encrypt the private\n key in a data key pair. You cannot use an asymmetric KMS key or a KMS key in a custom key\n store. To get the type and origin of your KMS key, use the DescribeKey\n operation.
\nUse the KeyPairSpec
parameter to choose an RSA or Elliptic Curve (ECC) data\n key pair. KMS recommends that you use ECC key pairs for signing, and use RSA key pairs for\n either encryption or signing, but not both. However, KMS cannot enforce any restrictions on\n the use of data key pairs outside of KMS.
\n GenerateDataKeyPairWithoutPlaintext
returns a unique data key pair for each\n request. The bytes in the key are not related to the caller or KMS key that is used to encrypt\n the private key. The public key is a DER-encoded X.509 SubjectPublicKeyInfo, as specified in\n RFC 5280.
You can use the optional encryption context to add additional security to the encryption\n operation. If you specify an EncryptionContext
, you must specify the same\n encryption context (a case-sensitive exact match) when decrypting the encrypted data key.\n Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the\n Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateDataKeyPairWithoutPlaintext (key\n policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nReturns a unique asymmetric data key pair for use outside of KMS. This operation returns\n a plaintext public key and a copy of the private key that is encrypted under the symmetric\n encryption KMS key you specify. Unlike GenerateDataKeyPair, this operation\n does not return a plaintext private key. The bytes in the keys are random; they are not related to the caller\n or to the KMS key that is used to encrypt the private key.
\nYou can use the public key that GenerateDataKeyPairWithoutPlaintext
returns\n to encrypt data or verify a signature outside of KMS. Then, store the encrypted private key\n with the data. When you are ready to decrypt data or sign a message, you can use the Decrypt operation to decrypt the encrypted private key.
To generate a data key pair, you must specify a symmetric encryption KMS key to encrypt\n the private key in a data key pair. You cannot use an asymmetric KMS key or a KMS key in a\n custom key store. To get the type and origin of your KMS key, use the DescribeKey operation.
\nUse the KeyPairSpec
parameter to choose an RSA or Elliptic Curve (ECC) data\n key pair. KMS recommends that you use ECC key pairs for signing, and use RSA key pairs for\n either encryption or signing, but not both. However, KMS cannot enforce any restrictions on\n the use of data key pairs outside of KMS.
\n GenerateDataKeyPairWithoutPlaintext
returns a unique data key pair for each\n request. The bytes in the key are not related to the caller or KMS key that is used to encrypt\n the private key. The public key is a DER-encoded X.509 SubjectPublicKeyInfo, as specified in\n RFC 5280.
You can use an optional encryption context to add additional security to the encryption\n operation. If you specify an EncryptionContext
, you must specify the same\n encryption context (a case-sensitive exact match) when decrypting the encrypted data key.\n Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the\n Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateDataKeyPairWithoutPlaintext (key\n policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nSpecifies the encryption context that will be used when encrypting the private key in the\n data key pair.
\nAn encryption context is a collection of non-secret key-value pairs that represents additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is optional when encrypting with a symmetric KMS key, but it is highly recommended.
\nFor more information, see\n Encryption\n Context in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Specifies the encryption context that will be used when encrypting the private key in the\n data key pair.
\nAn encryption context is a collection of non-secret key-value pairs that represent additional authenticated data. \nWhen you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported\nonly on operations with symmetric encryption KMS keys. On operations with symmetric encryption KMS keys, an encryption context is optional, but it is strongly recommended.
\nFor more information, see\nEncryption context in the Key Management Service Developer Guide.
" } }, "KeyId": { "target": "com.amazonaws.kms#KeyIdType", "traits": { - "smithy.api#documentation": "Specifies the KMS key that encrypts the private key in the data key pair. You must specify\n a symmetric KMS key. You cannot use an asymmetric KMS key or a KMS key in a custom key store.\n To get the type and origin of your KMS key, use the DescribeKey operation.\n
\nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", + "smithy.api#documentation": "Specifies the symmetric encryption KMS key that encrypts the private key in the data key\n pair. You cannot specify an asymmetric KMS key or a KMS key in a custom key store. To get the\n type and origin of your KMS key, use the DescribeKey operation.
\nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", "smithy.api#required": {} } }, @@ -2001,7 +2038,7 @@ "PublicKey": { "target": "com.amazonaws.kms#PublicKeyType", "traits": { - "smithy.api#documentation": "The public key (in plaintext).
" + "smithy.api#documentation": "The public key (in plaintext). When you use the HTTP API or the Amazon Web Services CLI, the value is Base64-encoded. Otherwise, it is not Base64-encoded.
" } }, "KeyId": { @@ -2024,14 +2061,14 @@ "KeyId": { "target": "com.amazonaws.kms#KeyIdType", "traits": { - "smithy.api#documentation": "Identifies the symmetric KMS key that encrypts the data key.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", + "smithy.api#documentation": "Specifies the symmetric encryption KMS key that encrypts the data key. You cannot specify\n an asymmetric KMS key or a KMS key in a custom key store. To get the type and origin of your\n KMS key, use the DescribeKey operation.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", "smithy.api#required": {} } }, "EncryptionContext": { "target": "com.amazonaws.kms#EncryptionContextType", "traits": { - "smithy.api#documentation": "Specifies the encryption context that will be used when encrypting the data key.
\nAn encryption context is a collection of non-secret key-value pairs that represents additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is optional when encrypting with a symmetric KMS key, but it is highly recommended.
\nFor more information, see\n Encryption\n Context in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Specifies the encryption context that will be used when encrypting the data key.
\nAn encryption context is a collection of non-secret key-value pairs that represent additional authenticated data. \nWhen you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported\nonly on operations with symmetric encryption KMS keys. On operations with symmetric encryption KMS keys, an encryption context is optional, but it is strongly recommended.
\nFor more information, see\nEncryption context in the Key Management Service Developer Guide.
" } }, "NumberOfBytes": { @@ -2112,7 +2149,7 @@ } ], "traits": { - "smithy.api#documentation": "Generates a unique symmetric data key. This operation returns a data key that is encrypted\n under a KMS key that you specify. To request an asymmetric data key pair, use the GenerateDataKeyPair or GenerateDataKeyPairWithoutPlaintext\n operations.
\n\n GenerateDataKeyWithoutPlaintext
is identical to the GenerateDataKey operation except that it returns only the encrypted copy of the\n data key. This operation is useful for systems that need to encrypt data at some point, but\n not immediately. When you need to encrypt the data, you call the Decrypt\n operation on the encrypted copy of the key.
It's also useful in distributed systems with different levels of trust. For example, you\n might store encrypted data in containers. One component of your system creates new containers\n and stores an encrypted data key with each container. Then, a different component puts the\n data into the containers. That component first decrypts the data key, uses the plaintext data\n key to encrypt data, puts the encrypted data into the container, and then destroys the\n plaintext data key. In this system, the component that creates the containers never sees the\n plaintext data key.
\n\n GenerateDataKeyWithoutPlaintext
returns a unique data key for each request.\n The bytes in the keys are not related to the caller or KMS key that is used to encrypt the\n private key.
To generate a data key, you must specify the symmetric KMS key that is used to encrypt the\n data key. You cannot use an asymmetric KMS key to generate a data key. To get the type of your\n KMS key, use the DescribeKey operation.
\n\nIf the operation succeeds, you will find the encrypted copy of the data key in the\n CiphertextBlob
field.
You can use the optional encryption context to add additional security to the encryption\n operation. If you specify an EncryptionContext
, you must specify the same\n encryption context (a case-sensitive exact match) when decrypting the encrypted data key.\n Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the\n Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateDataKeyWithoutPlaintext (key\n policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nReturns a unique symmetric data key for use outside of KMS. This operation returns a\n data key that is encrypted under a symmetric encryption KMS key that you specify. The bytes in\n the key are random; they are not related to the caller or to the KMS key.
\n\n GenerateDataKeyWithoutPlaintext
is identical to the GenerateDataKey operation except that it does not return a plaintext copy of the\n data key.
This operation is useful for systems that need to encrypt data at some point, but not\n immediately. When you need to encrypt the data, you call the Decrypt\n operation on the encrypted copy of the key. It's also useful in distributed systems with\n different levels of trust. For example, you might store encrypted data in containers. One\n component of your system creates new containers and stores an encrypted data key with each\n container. Then, a different component puts the data into the containers. That component first\n decrypts the data key, uses the plaintext data key to encrypt data, puts the encrypted data\n into the container, and then destroys the plaintext data key. In this system, the component\n that creates the containers never sees the plaintext data key.
\nTo request an asymmetric data key pair, use the GenerateDataKeyPair or\n GenerateDataKeyPairWithoutPlaintext operations.
\n\nTo generate a data key, you must specify the symmetric encryption KMS key that is used to\n encrypt the data key. You cannot use an asymmetric KMS key or a key in a custom key store to generate a data key. To get the\n type of your KMS key, use the DescribeKey operation.
\nIf the operation succeeds, you will find the encrypted copy of the data key in the\n CiphertextBlob
field.
You can use an optional encryption context to add additional security to the encryption\n operation. If you specify an EncryptionContext
, you must specify the same\n encryption context (a case-sensitive exact match) when decrypting the encrypted data key.\n Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the\n Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateDataKeyWithoutPlaintext (key\n policy)
\n\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nThe identifier of the symmetric KMS key that encrypts the data key.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", + "smithy.api#documentation": "Specifies the symmetric encryption KMS key that encrypts the data key. You cannot specify\n an asymmetric KMS key or a KMS key in a custom key store. To get the type and origin of your\n KMS key, use the DescribeKey operation.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", "smithy.api#required": {} } }, "EncryptionContext": { "target": "com.amazonaws.kms#EncryptionContextType", "traits": { - "smithy.api#documentation": "Specifies the encryption context that will be used when encrypting the data key.
\nAn encryption context is a collection of non-secret key-value pairs that represents additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is optional when encrypting with a symmetric KMS key, but it is highly recommended.
\nFor more information, see\n Encryption\n Context in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Specifies the encryption context that will be used when encrypting the data key.
\nAn encryption context is a collection of non-secret key-value pairs that represent additional authenticated data. \nWhen you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported\nonly on operations with symmetric encryption KMS keys. On operations with symmetric encryption KMS keys, an encryption context is optional, but it is strongly recommended.
\nFor more information, see\nEncryption context in the Key Management Service Developer Guide.
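The exact-match rule in the encryption-context documentation above can be illustrated with a toy model. The `toy_encrypt`/`toy_decrypt` helpers and `InvalidCiphertextError` below are hypothetical stand-ins for KMS behavior, not the KMS API: decryption succeeds only when the supplied context is an exact, case-sensitive match for the one used at encryption, with only pair order free to vary.

```python
class InvalidCiphertextError(Exception):
    """Stand-in for KMS's InvalidCiphertextException."""

# ciphertext token -> (plaintext, encryption context); a toy in-memory stand-in, not KMS
_vault = {}

def toy_encrypt(plaintext: bytes, context: dict) -> str:
    """Record the plaintext together with the encryption context supplied at encrypt time."""
    token = f"ct-{len(_vault)}"
    _vault[token] = (plaintext, dict(context))
    return token

def toy_decrypt(token: str, context: dict) -> bytes:
    """Return the plaintext only if the context is an exact, case-sensitive match."""
    plaintext, stored = _vault[token]
    if stored != dict(context):  # dict equality ignores pair order, nothing else
        raise InvalidCiphertextError("encryption context mismatch")
    return plaintext

t = toy_encrypt(b"data key", {"Purpose": "Test", "Dept": "IT"})
assert toy_decrypt(t, {"Dept": "IT", "Purpose": "Test"}) == b"data key"  # order may differ
```

A mismatch in key case, value case, or any pair raises the error, mirroring the decrypt failure described above.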
" } }, "KeySpec": { @@ -2168,6 +2205,96 @@ } } }, + "com.amazonaws.kms#GenerateMac": { + "type": "operation", + "input": { + "target": "com.amazonaws.kms#GenerateMacRequest" + }, + "output": { + "target": "com.amazonaws.kms#GenerateMacResponse" + }, + "errors": [ + { + "target": "com.amazonaws.kms#DisabledException" + }, + { + "target": "com.amazonaws.kms#InvalidGrantTokenException" + }, + { + "target": "com.amazonaws.kms#InvalidKeyUsageException" + }, + { + "target": "com.amazonaws.kms#KeyUnavailableException" + }, + { + "target": "com.amazonaws.kms#KMSInternalException" + }, + { + "target": "com.amazonaws.kms#KMSInvalidStateException" + }, + { + "target": "com.amazonaws.kms#NotFoundException" + } + ], + "traits": { + "smithy.api#documentation": "Generates a hash-based message authentication code (HMAC) for a message using an HMAC KMS\n key and a MAC algorithm that the key supports. The MAC algorithm computes the HMAC for the\n message and the key as described in RFC 2104.
\nYou can use the HMAC that this operation generates with the VerifyMac\n operation to demonstrate that the original message has not changed. Also, because a secret key\n is used to create the hash, you can verify that the party that generated the hash has the\n required secret key. This operation is part of KMS support for HMAC KMS keys.\n For details, see HMAC keys in KMS in the \n Key Management Service Developer Guide\n .
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GenerateMac (key policy)
\n\n Related operations: VerifyMac\n
" + } + }, + "com.amazonaws.kms#GenerateMacRequest": { + "type": "structure", + "members": { + "Message": { + "target": "com.amazonaws.kms#PlaintextType", + "traits": { + "smithy.api#documentation": "The message to be hashed. Specify a message of up to 4,096 bytes.
\n\n GenerateMac
and VerifyMac do not provide special handling\n for message digests. If you generate an HMAC for a hash digest of a message, you must verify\n the HMAC of the same hash digest.
The HMAC KMS key to use in the operation. The MAC algorithm computes the HMAC for the message and the key as described in RFC 2104.
\nTo identify an HMAC KMS key, use the DescribeKey operation and see the\n KeySpec
field in the response.
The MAC algorithm used in the operation.
\n The algorithm must be compatible with the HMAC KMS key that you specify. To find the MAC\n algorithms that your HMAC KMS key supports, use the DescribeKey operation\n and see the MacAlgorithms
field in the DescribeKey
response.
A list of grant tokens.
\nUse a grant token when your permission to call this operation comes from a new grant that has not yet achieved eventual consistency. For more information, see Grant token and Using a grant token in the\n Key Management Service Developer Guide.
" + } + } + } + }, + "com.amazonaws.kms#GenerateMacResponse": { + "type": "structure", + "members": { + "Mac": { + "target": "com.amazonaws.kms#CiphertextType", + "traits": { + "smithy.api#documentation": "The hash-based message authentication code (HMAC) for the given message, key, and MAC\n algorithm.
" + } + }, + "MacAlgorithm": { + "target": "com.amazonaws.kms#MacAlgorithmSpec", + "traits": { + "smithy.api#documentation": "The MAC algorithm that was used to generate the HMAC.
" + } + }, + "KeyId": { + "target": "com.amazonaws.kms#KeyIdType", + "traits": { + "smithy.api#documentation": "The HMAC KMS key used in the operation.
" + } + } + } + }, "com.amazonaws.kms#GenerateRandom": { "type": "operation", "input": { @@ -2310,7 +2437,7 @@ } ], "traits": { - "smithy.api#documentation": "Gets a Boolean value that indicates whether automatic rotation of the key material is\n enabled for the specified KMS key.
\nYou cannot enable automatic rotation of asymmetric KMS keys, KMS keys with imported key material, or KMS keys in a custom key store. To enable or disable automatic rotation of a set of related multi-Region keys, set the property on the primary key. The key rotation status for these KMS keys is always\n false
.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\nDisabled: The key rotation status does not change when you disable a KMS key. However,\n while the KMS key is disabled, KMS does not rotate the key material.
\nPending deletion: While a KMS key is pending deletion, its key rotation status is\n false
and KMS does not rotate the key material. If you cancel the\n deletion, the original key rotation status is restored.
\n Cross-account use: Yes. To perform this operation on a KMS key in a different Amazon Web Services account, specify the key\n ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GetKeyRotationStatus (key policy)
\n\n Related operations:\n
\n\n DisableKeyRotation\n
\n\n EnableKeyRotation\n
\nGets a Boolean value that indicates whether automatic rotation of the key material is\n enabled for the specified KMS key.
\nYou cannot enable automatic rotation of asymmetric KMS keys, HMAC KMS keys, KMS keys with imported key material, or KMS keys in a custom key store. To enable or disable automatic rotation of a set of related multi-Region keys, set the property on the primary key. The key rotation status for these KMS keys is always\n false
.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\nDisabled: The key rotation status does not change when you disable a KMS key. However,\n while the KMS key is disabled, KMS does not rotate the key material.
\nPending deletion: While a KMS key is pending deletion, its key rotation status is\n false
and KMS does not rotate the key material. If you cancel the\n deletion, the original key rotation status is restored.
\n Cross-account use: Yes. To perform this operation on a KMS key in a different Amazon Web Services account, specify the key\n ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GetKeyRotationStatus (key policy)
\n\n Related operations:\n
\n\n DisableKeyRotation\n
\n\n EnableKeyRotation\n
\nReturns the items you need to import key material into a symmetric, customer managed KMS\n key. For more information about importing key material into KMS, see Importing Key Material\n in the Key Management Service Developer Guide.
\nThis operation returns a public key and an import token. Use the public key to encrypt the\n symmetric key material. Store the import token to send with a subsequent ImportKeyMaterial request.
\nYou must specify the key ID of the symmetric KMS key into which you will import key\n material. This KMS key's Origin
must be EXTERNAL
. You must also\n specify the wrapping algorithm and type of wrapping key (public key) that you will use to\n encrypt the key material. You cannot perform this operation on an asymmetric KMS key or on any KMS key in a different Amazon Web Services account.
To import key material, you must use the public key and import token from the same\n response. These items are valid for 24 hours. The expiration date and time appear in the\n GetParametersForImport
response. You cannot use an expired token in an ImportKeyMaterial request. If your key and token expire, send another\n GetParametersForImport
request.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:GetParametersForImport (key policy)
\n\n Related operations:\n
\n\n ImportKeyMaterial\n
\nReturns the items you need to import key material into a symmetric encryption KMS key. For\n more information about importing key material into KMS, see Importing key material in the\n Key Management Service Developer Guide.
\nThis operation returns a public key and an import token. Use the public key to encrypt the\n symmetric key material. Store the import token to send with a subsequent ImportKeyMaterial request.
\nYou must specify the key ID of the symmetric encryption KMS key into which you will import\n key material. This KMS key's Origin
must be EXTERNAL
. You must also\n specify the wrapping algorithm and type of wrapping key (public key) that you will use to\n encrypt the key material. You cannot perform this operation on an asymmetric KMS key, an HMAC KMS key, or on any KMS key in a different Amazon Web Services account.
To import key material, you must use the public key and import token from the same\n response. These items are valid for 24 hours. The expiration date and time appear in the\n GetParametersForImport
response. You cannot use an expired token in an ImportKeyMaterial request. If your key and token expire, send another\n GetParametersForImport
request.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:GetParametersForImport (key policy)
\n\n Related operations:\n
\n\n ImportKeyMaterial\n
\nThe identifier of the symmetric KMS key into which you will import key material. The\n Origin
of the KMS key must be EXTERNAL
.
Specify the key ID or key ARN of the KMS key.
\nFor example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
", + "smithy.api#documentation": "The identifier of the symmetric encryption KMS key into which you will import key material. The\n Origin
of the KMS key must be EXTERNAL
.
Specify the key ID or key ARN of the KMS key.
\nFor example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
", "smithy.api#required": {} } }, @@ -2464,7 +2591,7 @@ } ], "traits": { - "smithy.api#documentation": "Returns the public key of an asymmetric KMS key. Unlike the private key of a asymmetric\n KMS key, which never leaves KMS unencrypted, callers with kms:GetPublicKey
\n permission can download the public key of an asymmetric KMS key. You can share the public key\n to allow others to encrypt messages and verify signatures outside of KMS.\n For information about symmetric and asymmetric KMS keys, see Using Symmetric and Asymmetric KMS keys in the Key Management Service Developer Guide.
You do not need to download the public key. Instead, you can use the public key within\n KMS by calling the Encrypt, ReEncrypt, or Verify operations with the identifier of an asymmetric KMS key. When you use the\n public key within KMS, you benefit from the authentication, authorization, and logging that\n are part of every KMS operation. You also reduce of risk of encrypting data that cannot be\n decrypted. These features are not effective outside of KMS. For details, see Special\n Considerations for Downloading Public Keys.
\nTo help you use the public key safely outside of KMS, GetPublicKey
returns\n important information about the public key in the response, including:
\n KeySpec: The type of key material in the public key, such as\n RSA_4096
or ECC_NIST_P521
.
\n KeyUsage: Whether the key is used for encryption or signing.
\n\n EncryptionAlgorithms or SigningAlgorithms: A list of the encryption algorithms or the signing\n algorithms for the key.
\nAlthough KMS cannot enforce these restrictions on external operations, it is crucial\n that you use this information to prevent the public key from being used improperly. For\n example, you can prevent a public signing key from being used encrypt data, or prevent a\n public key from being used with an encryption algorithm that is not supported by KMS. You\n can also avoid errors, such as using the wrong signing algorithm in a verification\n operation.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use:\n Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GetPublicKey (key policy)
\n\n Related operations: CreateKey\n
" + "smithy.api#documentation": "Returns the public key of an asymmetric KMS key. Unlike the private key of a asymmetric\n KMS key, which never leaves KMS unencrypted, callers with kms:GetPublicKey
\n permission can download the public key of an asymmetric KMS key. You can share the public key\n to allow others to encrypt messages and verify signatures outside of KMS.\n For information about asymmetric KMS keys, see Asymmetric KMS keys in the Key Management Service Developer Guide.
You do not need to download the public key. Instead, you can use the public key within\n KMS by calling the Encrypt, ReEncrypt, or Verify operations with the identifier of an asymmetric KMS key. When you use the\n public key within KMS, you benefit from the authentication, authorization, and logging that\n are part of every KMS operation. You also reduce of risk of encrypting data that cannot be\n decrypted. These features are not effective outside of KMS. For details, see Special\n Considerations for Downloading Public Keys.
\nTo help you use the public key safely outside of KMS, GetPublicKey
returns\n important information about the public key in the response, including:
\n KeySpec: The type of key material in the public key, such as\n RSA_4096
or ECC_NIST_P521
.
\n KeyUsage: Whether the key is used for encryption or signing.
\n\n EncryptionAlgorithms or SigningAlgorithms: A list of the encryption algorithms or the signing\n algorithms for the key.
\nAlthough KMS cannot enforce these restrictions on external operations, it is crucial\n that you use this information to prevent the public key from being used improperly. For\n example, you can prevent a public signing key from being used encrypt data, or prevent a\n public key from being used with an encryption algorithm that is not supported by KMS. You\n can also avoid errors, such as using the wrong signing algorithm in a verification\n operation.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use:\n Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:GetPublicKey (key policy)
\n\n Related operations: CreateKey\n
" } }, "com.amazonaws.kms#GetPublicKeyRequest": { @@ -2552,7 +2679,7 @@ } }, "traits": { - "smithy.api#documentation": "Use this structure to allow cryptographic operations in the grant only when the operation request\n includes the specified encryption context.
\nKMS applies the grant constraints only to cryptographic operations that support an\n encryption context, that is, all cryptographic operations with a symmetric KMS key. Grant\n constraints are not applied to operations that do not support an encryption context, such as\n cryptographic operations with asymmetric KMS keys and management operations, such as DescribeKey or RetireGrant.
\nIn a cryptographic operation, the encryption context in the decryption operation must be\n an exact, case-sensitive match for the keys and values in the encryption context of the\n encryption operation. Only the order of the pairs can vary.
\nHowever, in a grant constraint, the key in each key-value pair is not case sensitive,\n but the value is case sensitive.
\nTo avoid confusion, do not use multiple encryption context pairs that differ only by\n case. To require a fully case-sensitive encryption context, use the\n kms:EncryptionContext:
and kms:EncryptionContextKeys
conditions\n in an IAM or key policy. For details, see kms:EncryptionContext: in the \n Key Management Service Developer Guide\n .
Use this structure to allow cryptographic operations in the grant only when the operation request\n includes the specified encryption context.
\nKMS applies the grant constraints only to cryptographic operations that support an\n encryption context, that is, all cryptographic operations with a symmetric encryption KMS key. Grant\n constraints are not applied to operations that do not support an encryption context, such as\n cryptographic operations with HMAC KMS keys or asymmetric KMS keys, and management operations, such as DescribeKey or RetireGrant.
\nIn a cryptographic operation, the encryption context in the decryption operation must be\n an exact, case-sensitive match for the keys and values in the encryption context of the\n encryption operation. Only the order of the pairs can vary.
\nHowever, in a grant constraint, the key in each key-value pair is not case sensitive,\n but the value is case sensitive.
\nTo avoid confusion, do not use multiple encryption context pairs that differ only by\n case. To require a fully case-sensitive encryption context, use the\n kms:EncryptionContext:
and kms:EncryptionContextKeys
conditions\n in an IAM or key policy. For details, see kms:EncryptionContext: in the \n Key Management Service Developer Guide\n .
Imports key material into an existing symmetric KMS KMS key that was created without key\n material. After you successfully import key material into a KMS key, you can reimport\n the same key material into that KMS key, but you cannot import different key\n material.
\nYou cannot perform this operation on an asymmetric KMS key or on any KMS key in a different Amazon Web Services account. For more information about creating KMS keys with no key material\n and then importing key material, see Importing Key Material in the\n Key Management Service Developer Guide.
\nBefore using this operation, call GetParametersForImport. Its response\n includes a public key and an import token. Use the public key to encrypt the key material.\n Then, submit the import token from the same GetParametersForImport
\n response.
When calling this operation, you must specify the following values:
\nThe key ID or key ARN of a KMS key with no key material. Its Origin
must\n be EXTERNAL
.
To create a KMS key with no key material, call CreateKey and set the\n value of its Origin
parameter to EXTERNAL
. To get the\n Origin
of a KMS key, call DescribeKey.)
The encrypted key material. To get the public key to encrypt the key material, call\n GetParametersForImport.
\nThe import token that GetParametersForImport returned. You must use\n a public key and token from the same GetParametersForImport
response.
Whether the key material expires and if so, when. If you set an expiration date, KMS\n deletes the key material from the KMS key on the specified date, and the KMS key becomes\n unusable. To use the KMS key again, you must reimport the same key material. The only way\n to change an expiration date is by reimporting the same key material and specifying a new\n expiration date.
\nWhen this operation is successful, the key state of the KMS key changes from\n PendingImport
to Enabled
, and you can use the KMS key.
If this operation fails, use the exception to help determine the problem. If the error is\n related to the key material, the import token, or wrapping key, use GetParametersForImport to get a new public key and import token for the KMS key\n and repeat the import procedure. For help, see How To Import Key\n Material in the Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:ImportKeyMaterial (key policy)
\n\n Related operations:\n
\nImports key material into an existing symmetric encryption KMS key that was created\n without key material. After you successfully import key material into a KMS key, you can\n reimport the same key material into that KMS key, but you cannot import different\n key material.
\nYou cannot perform this operation on an asymmetric KMS key, an HMAC KMS key, or on any KMS key in a different Amazon Web Services account. For more information about\n creating KMS keys with no key material and then importing key material, see Importing Key Material\n in the Key Management Service Developer Guide.
\nBefore using this operation, call GetParametersForImport. Its response\n includes a public key and an import token. Use the public key to encrypt the key material.\n Then, submit the import token from the same GetParametersForImport
\n response.
When calling this operation, you must specify the following values:
\nThe key ID or key ARN of a KMS key with no key material. Its Origin
must\n be EXTERNAL
.
To create a KMS key with no key material, call CreateKey and set the\n value of its Origin
parameter to EXTERNAL
. To get the\n Origin
of a KMS key, call DescribeKey.)
The encrypted key material. To get the public key to encrypt the key material, call\n GetParametersForImport.
\nThe import token that GetParametersForImport returned. You must use\n a public key and token from the same GetParametersForImport
response.
Whether the key material expires and if so, when. If you set an expiration date, KMS\n deletes the key material from the KMS key on the specified date, and the KMS key becomes\n unusable. To use the KMS key again, you must reimport the same key material. The only way\n to change an expiration date is by reimporting the same key material and specifying a new\n expiration date.
\nWhen this operation is successful, the key state of the KMS key changes from\n PendingImport
to Enabled
, and you can use the KMS key.
If this operation fails, use the exception to help determine the problem. If the error is\n related to the key material, the import token, or wrapping key, use GetParametersForImport to get a new public key and import token for the KMS key\n and repeat the import procedure. For help, see How To Import Key\n Material in the Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:ImportKeyMaterial (key policy)
\n\n Related operations:\n
\nThe identifier of the symmetric KMS key that receives the imported key material. The KMS\n key's Origin
must be EXTERNAL
. This must be the same KMS key\n specified in the KeyID
parameter of the corresponding GetParametersForImport request.
Specify the key ID or key ARN of the KMS key.
\nFor example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
", + "smithy.api#documentation": "The identifier of the symmetric encryption KMS key that receives the imported key\n material. This must be the same KMS key specified in the KeyID
parameter of the\n corresponding GetParametersForImport request. The Origin
of the\n KMS key must be EXTERNAL
. You cannot perform this operation on an asymmetric KMS\n key, an HMAC KMS key, a KMS key in a custom key store, or on a KMS key in a different\n Amazon Web Services account
Specify the key ID or key ARN of the KMS key.
\nFor example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
", "smithy.api#required": {} } }, @@ -2830,7 +2965,7 @@ "code": "IncorrectKeyException", "httpResponseCode": 400 }, - "smithy.api#documentation": "The request was rejected because the specified KMS key cannot decrypt the data. The\n KeyId
in a Decrypt request and the SourceKeyId
\n in a ReEncrypt request must identify the same KMS key that was used to\n encrypt the ciphertext.
The request was rejected because the specified KMS key cannot decrypt the data. The\n KeyId
in a Decrypt
request and the SourceKeyId
\n in a ReEncrypt
request must identify the same KMS key that was used to\n encrypt the ciphertext.
The request was rejected for one of the following reasons:
\nThe KeyUsage
value of the KMS key is incompatible with the API\n operation.
The encryption algorithm or signing algorithm specified for the operation is\n incompatible with the type of key material in the KMS key (KeySpec
).
For encrypting, decrypting, re-encrypting, and generating data keys, the\n KeyUsage
must be ENCRYPT_DECRYPT
. For signing and verifying, the\n KeyUsage
must be SIGN_VERIFY
. To find the KeyUsage
of\n a KMS key, use the DescribeKey operation.
To find the encryption or signing algorithms supported for a particular KMS key, use the\n DescribeKey operation.
", + "smithy.api#documentation": "The request was rejected for one of the following reasons:
\nThe KeyUsage
value of the KMS key is incompatible with the API\n operation.
The encryption algorithm or signing algorithm specified for the operation is\n incompatible with the type of key material in the KMS key (KeySpec
).
For encrypting, decrypting, re-encrypting, and generating data keys, the\n KeyUsage
must be ENCRYPT_DECRYPT
. For signing and verifying\n messages, the KeyUsage
must be SIGN_VERIFY
. For generating and\n verifying message authentication codes (MACs), the KeyUsage
must be\n GENERATE_VERIFY_MAC
. To find the KeyUsage
of a KMS key, use the\n DescribeKey operation.
To find the encryption or signing algorithms supported for a particular KMS key, use the\n DescribeKey operation.
", "smithy.api#error": "client", "smithy.api#httpError": 400 } @@ -3022,6 +3157,23 @@ "smithy.api#httpError": 500 } }, + "com.amazonaws.kms#KMSInvalidMacException": { + "type": "structure", + "members": { + "message": { + "target": "com.amazonaws.kms#ErrorMessageType" + } + }, + "traits": { + "aws.protocols#awsQueryError": { + "code": "KMSInvalidMac", + "httpResponseCode": 400 + }, + "smithy.api#documentation": "The request was rejected because the HMAC verification failed. HMAC verification\n fails when the HMAC computed by using the specified message, HMAC KMS key, and MAC algorithm does not match the HMAC specified in the request.
", + "smithy.api#error": "client", + "smithy.api#httpError": 400 + } + }, "com.amazonaws.kms#KMSInvalidSignatureException": { "type": "structure", "members": { @@ -3051,7 +3203,7 @@ "code": "KMSInvalidStateException", "httpResponseCode": 409 }, - "smithy.api#documentation": "The request was rejected because the state of the specified resource is not valid for this\n request.
\nFor more information about how key state affects the use of a KMS key, see Key state: Effect on your KMS\n key in the \n Key Management Service Developer Guide\n .
", + "smithy.api#documentation": "The request was rejected because the state of the specified resource is not valid for this\n request.
\nFor more information about how key state affects the use of a KMS key, see Key states of KMS keys in the \n Key Management Service Developer Guide\n .
", "smithy.api#error": "client", "smithy.api#httpError": 409 } @@ -3155,7 +3307,7 @@ "KeyState": { "target": "com.amazonaws.kms#KeyState", "traits": { - "smithy.api#documentation": "The current status of the KMS key.
\nFor more information about how key state affects the use of a KMS key, see Key state: Effect on your KMS\n key in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "The current status of the KMS key.
\nFor more information about how key state affects the use of a KMS key, see Key states of KMS keys in the Key Management Service Developer Guide.
" } }, "DeletionDate": { @@ -3230,7 +3382,7 @@ "MultiRegion": { "target": "com.amazonaws.kms#NullableBooleanType", "traits": { - "smithy.api#documentation": "Indicates whether the KMS key is a multi-Region (True
) or regional\n (False
) key. This value is True
for multi-Region primary and\n replica keys and False
for regional KMS keys.
For more information about multi-Region keys, see Using multi-Region keys in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Indicates whether the KMS key is a multi-Region (True
) or regional\n (False
) key. This value is True
for multi-Region primary and\n replica keys and False
for regional KMS keys.
For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
" } }, "MultiRegionConfiguration": { @@ -3244,6 +3396,12 @@ "traits": { "smithy.api#documentation": "The waiting period before the primary key in a multi-Region key is deleted. This waiting\n period begins when the last of its replica keys is deleted. This value is present only when\n the KeyState
of the KMS key is PendingReplicaDeletion
. That\n indicates that the KMS key is the primary key in a multi-Region key, it is scheduled for\n deletion, and it still has existing replica keys.
When a single-Region KMS key or a multi-Region replica key is scheduled for deletion, its\n deletion date is displayed in the DeletionDate
field. However, when the primary\n key in a multi-Region key is scheduled for deletion, its waiting period doesn't begin until\n all of its replica keys are deleted. This value displays that waiting period. When the last\n replica key in the multi-Region key is deleted, the KeyState
of the scheduled\n primary key changes from PendingReplicaDeletion
to PendingDeletion
\n and the deletion date appears in the DeletionDate
field.
The message authentication code (MAC) algorithm that the HMAC KMS key supports.
\nThis value is present only when the KeyUsage
of the KMS key is\n GENERATE_VERIFY_MAC
.
Gets a list of all grants for the specified KMS key.
\nYou must specify the KMS key in all requests. You can filter the grant list by grant ID or\n grantee principal.
\nFor detailed information about grants, including grant terminology, see Using grants in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\nThe GranteePrincipal
field in the ListGrants
response usually contains the\n user or role designated as the grantee principal in the grant. However, when the grantee\n principal in the grant is an Amazon Web Services service, the GranteePrincipal
field contains\n the service\n principal, which might represent several different grantee principals.
\n Cross-account use: Yes. To perform this operation on a KMS key in a different Amazon Web Services account, specify the key\n ARN in the value of the KeyId
parameter.
\n Required permissions: kms:ListGrants (key policy)
\n\n Related operations:\n
\n\n CreateGrant\n
\n\n ListRetirableGrants\n
\n\n RetireGrant\n
\n\n RevokeGrant\n
\nGets a list of all grants for the specified KMS key.
\nYou must specify the KMS key in all requests. You can filter the grant list by grant ID or\n grantee principal.
\nFor detailed information about grants, including grant terminology, see Grants in KMS in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\nThe GranteePrincipal
field in the ListGrants
response usually contains the\n user or role designated as the grantee principal in the grant. However, when the grantee\n principal in the grant is an Amazon Web Services service, the GranteePrincipal
field contains\n the service\n principal, which might represent several different grantee principals.
\n Cross-account use: Yes. To perform this operation on a KMS key in a different Amazon Web Services account, specify the key\n ARN in the value of the KeyId
parameter.
\n Required permissions: kms:ListGrants (key policy)
\n\n Related operations:\n
\n\n CreateGrant\n
\n\n ListRetirableGrants\n
\n\n RetireGrant\n
\n\n RevokeGrant\n
\nA list of tags. Each tag consists of a tag key and a tag value.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see Using ABAC in KMS in the Key Management Service Developer Guide.
\nA list of tags. Each tag consists of a tag key and a tag value.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see ABAC in KMS in the Key Management Service Developer Guide.
\nReturns information about all grants in the Amazon Web Services account and Region that have the\n specified retiring principal.
\nYou can specify any principal in your Amazon Web Services account. The grants that are returned include\n grants for KMS keys in your Amazon Web Services account and other Amazon Web Services accounts. You might use this\n operation to determine which grants you may retire. To retire a grant, use the RetireGrant operation.
\nFor detailed information about grants, including grant terminology, see Using grants in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\n\n Cross-account use: You must specify a principal in your\n Amazon Web Services account. However, this operation can return grants in any Amazon Web Services account. You do not need\n kms:ListRetirableGrants
permission (or any other additional permission) in any\n Amazon Web Services account other than your own.
\n Required permissions: kms:ListRetirableGrants (IAM policy) in your\n Amazon Web Services account.
\n\n Related operations:\n
\n\n CreateGrant\n
\n\n ListGrants\n
\n\n RetireGrant\n
\n\n RevokeGrant\n
\nReturns information about all grants in the Amazon Web Services account and Region that have the\n specified retiring principal.
\nYou can specify any principal in your Amazon Web Services account. The grants that are returned include\n grants for KMS keys in your Amazon Web Services account and other Amazon Web Services accounts. You might use this\n operation to determine which grants you may retire. To retire a grant, use the RetireGrant operation.
\nFor detailed information about grants, including grant terminology, see Grants in KMS in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\n\n Cross-account use: You must specify a principal in your\n Amazon Web Services account. However, this operation can return grants in any Amazon Web Services account. You do not need\n kms:ListRetirableGrants
permission (or any other additional permission) in any\n Amazon Web Services account other than your own.
\n Required permissions: kms:ListRetirableGrants (IAM policy) in your\n Amazon Web Services account.
\n\n Related operations:\n
\n\n CreateGrant\n
\n\n ListGrants\n
\n\n RetireGrant\n
\n\n RevokeGrant\n
\nDecrypts ciphertext and then reencrypts it entirely within KMS. You can use this\n operation to change the KMS key under which data is encrypted, such as when you manually\n rotate a KMS key or change the KMS key that protects a ciphertext. You can also use\n it to reencrypt ciphertext under the same KMS key, such as to change the encryption\n context of a ciphertext.
\nThe ReEncrypt
operation can decrypt ciphertext that was encrypted by using an\n KMS KMS key in an KMS operation, such as Encrypt or GenerateDataKey. It can also decrypt ciphertext that was encrypted by using the\n public key of an asymmetric KMS key\n outside of KMS. However, it cannot decrypt ciphertext produced by other libraries, such as\n the Amazon Web Services Encryption SDK or\n Amazon S3\n client-side encryption. These libraries return a ciphertext format that is\n incompatible with KMS.
When you use the ReEncrypt
operation, you need to provide information for the\n decrypt operation and the subsequent encrypt operation.
If your ciphertext was encrypted under an asymmetric KMS key, you must use the\n SourceKeyId
parameter to identify the KMS key that encrypted the\n ciphertext. You must also supply the encryption algorithm that was used. This information\n is required to decrypt the data.
If your ciphertext was encrypted under a symmetric KMS key, the\n SourceKeyId
parameter is optional. KMS can get this information from\n metadata that it adds to the symmetric ciphertext blob. This feature adds durability to\n your implementation by ensuring that authorized users can decrypt ciphertext decades after\n it was encrypted, even if they've lost track of the key ID. However, specifying the source\n KMS key is always recommended as a best practice. When you use the\n SourceKeyId
parameter to specify a KMS key, KMS uses only the KMS key you\n specify. If the ciphertext was encrypted under a different KMS key, the\n ReEncrypt
operation fails. This practice ensures that you use the KMS key\n that you intend.
To reencrypt the data, you must use the DestinationKeyId
parameter\n specify the KMS key that re-encrypts the data after it is decrypted. You can select a\n symmetric or asymmetric KMS key. If the destination KMS key is an asymmetric KMS key, you\n must also provide the encryption algorithm. The algorithm that you choose must be\n compatible with the KMS key.
When you use an asymmetric KMS key to encrypt or reencrypt data, be sure to record the KMS key and encryption algorithm that you choose. You will be required to provide the same KMS key and encryption algorithm when you decrypt the data. If the KMS key and algorithm do not match the values used to encrypt the data, the decrypt operation fails.
\nYou are not required to supply the key ID and encryption algorithm when you decrypt with symmetric KMS keys because KMS stores this information in the ciphertext blob. KMS cannot store metadata in ciphertext generated with asymmetric keys. The standard format for asymmetric key ciphertext does not include configurable fields.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes.\n The source KMS key and destination KMS key can be in different Amazon Web Services accounts. Either or both\n KMS keys can be in a different account than the caller. To specify a KMS key in a different\n account, you must use its key ARN or alias ARN.
\n\n\n Required permissions:
\n\n kms:ReEncryptFrom\n permission on the source KMS key (key policy)
\n\n kms:ReEncryptTo\n permission on the destination KMS key (key policy)
\nTo permit reencryption from or to a KMS key, include the \"kms:ReEncrypt*\"
\n permission in your key policy. This permission is\n automatically included in the key policy when you use the console to create a KMS key. But you\n must include it manually when you create a KMS key programmatically or when you use the PutKeyPolicy operation to set a key policy.
\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nDecrypts ciphertext and then reencrypts it entirely within KMS. You can use this\n operation to change the KMS key under which data is encrypted, such as when you manually\n rotate a KMS key or change the KMS key that protects a ciphertext. You can also use\n it to reencrypt ciphertext under the same KMS key, such as to change the encryption\n context of a ciphertext.
\nThe ReEncrypt
operation can decrypt ciphertext that was encrypted by using a\n KMS key in an KMS operation, such as Encrypt or GenerateDataKey. It can also decrypt ciphertext that was encrypted by using the\n public key of an asymmetric KMS key\n outside of KMS. However, it cannot decrypt ciphertext produced by other libraries, such as\n the Amazon Web Services Encryption SDK or\n Amazon S3\n client-side encryption. These libraries return a ciphertext format that is\n incompatible with KMS.
When you use the ReEncrypt
operation, you need to provide information for the\n decrypt operation and the subsequent encrypt operation.
If your ciphertext was encrypted under an asymmetric KMS key, you must use the\n SourceKeyId
parameter to identify the KMS key that encrypted the\n ciphertext. You must also supply the encryption algorithm that was used. This information\n is required to decrypt the data.
If your ciphertext was encrypted under a symmetric encryption KMS key, the\n SourceKeyId
parameter is optional. KMS can get this information from\n metadata that it adds to the symmetric ciphertext blob. This feature adds durability to\n your implementation by ensuring that authorized users can decrypt ciphertext decades after\n it was encrypted, even if they've lost track of the key ID. However, specifying the source\n KMS key is always recommended as a best practice. When you use the\n SourceKeyId
parameter to specify a KMS key, KMS uses only the KMS key you\n specify. If the ciphertext was encrypted under a different KMS key, the\n ReEncrypt
operation fails. This practice ensures that you use the KMS key\n that you intend.
To reencrypt the data, you must use the DestinationKeyId
parameter\n specify the KMS key that re-encrypts the data after it is decrypted. If the destination\n KMS key is an asymmetric KMS key, you must also provide the encryption algorithm. The\n algorithm that you choose must be compatible with the KMS key.
When you use an asymmetric KMS key to encrypt or reencrypt data, be sure to record the KMS key and encryption algorithm that you choose. You will be required to provide the same KMS key and encryption algorithm when you decrypt the data. If the KMS key and algorithm do not match the values used to encrypt the data, the decrypt operation fails.
\nYou are not required to supply the key ID and encryption algorithm when you decrypt with symmetric encryption KMS keys because KMS stores this information in the ciphertext blob. KMS cannot store metadata in ciphertext generated with asymmetric keys. The standard format for asymmetric key ciphertext does not include configurable fields.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes.\n The source KMS key and destination KMS key can be in different Amazon Web Services accounts. Either or both\n KMS keys can be in a different account than the caller. To specify a KMS key in a different\n account, you must use its key ARN or alias ARN.
\n\n\n Required permissions:
\n\n kms:ReEncryptFrom\n permission on the source KMS key (key policy)
\n\n kms:ReEncryptTo\n permission on the destination KMS key (key policy)
\nTo permit reencryption from or to a KMS key, include the \"kms:ReEncrypt*\"
\n permission in your key policy. This permission is\n automatically included in the key policy when you use the console to create a KMS key. But you\n must include it manually when you create a KMS key programmatically or when you use the PutKeyPolicy operation to set a key policy.
\n Related operations:\n
\n\n Decrypt\n
\n\n Encrypt\n
\n\n GenerateDataKey\n
\n\n GenerateDataKeyPair\n
\nSpecifies the encryption context to use to decrypt the ciphertext. Enter the same\n encryption context that was used to encrypt the ciphertext.
\nAn encryption context is a collection of non-secret key-value pairs that represents additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is optional when encrypting with a symmetric KMS key, but it is highly recommended.
\nFor more information, see\n Encryption\n Context in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Specifies the encryption context to use to decrypt the ciphertext. Enter the same\n encryption context that was used to encrypt the ciphertext.
\nAn encryption context is a collection of non-secret key-value pairs that represent additional authenticated data. \nWhen you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported\nonly on operations with symmetric encryption KMS keys. On operations with symmetric encryption KMS keys, an encryption context is optional, but it is strongly recommended.
\nFor more information, see\nEncryption context in the Key Management Service Developer Guide.
" } }, "SourceKeyId": { "target": "com.amazonaws.kms#KeyIdType", "traits": { - "smithy.api#documentation": "Specifies the KMS key that KMS will use to decrypt the ciphertext before it is\n re-encrypted. Enter a key ID of the KMS key that was used to encrypt the ciphertext.
\nThis parameter is required only when the ciphertext was encrypted under an asymmetric KMS\n key. If you used a symmetric KMS key, KMS can get the KMS key from metadata that it adds to\n the symmetric ciphertext blob. However, it is always recommended as a best practice. This\n practice ensures that you use the KMS key that you intend.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
" + "smithy.api#documentation": "Specifies the KMS key that KMS will use to decrypt the ciphertext before it is\n re-encrypted.
\nEnter a key ID of the KMS key that was used to encrypt the ciphertext. If you identify a different KMS key, the ReEncrypt
operation throws an IncorrectKeyException
.
This parameter is required only when the ciphertext was encrypted under an asymmetric KMS\n key. If you used a symmetric encryption KMS key, KMS can get the KMS key from metadata that\n it adds to the symmetric ciphertext blob. However, it is always recommended as a best\n practice. This practice ensures that you use the KMS key that you intend.
\n \nTo specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
" } }, "DestinationKeyId": { "target": "com.amazonaws.kms#KeyIdType", "traits": { - "smithy.api#documentation": "A unique identifier for the KMS key that is used to reencrypt the data. Specify a\n symmetric or asymmetric KMS key with a KeyUsage
value of\n ENCRYPT_DECRYPT
. To find the KeyUsage
value of a KMS key, use the\n DescribeKey operation.
To specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", + "smithy.api#documentation": "A unique identifier for the KMS key that is used to reencrypt the data. Specify a\n symmetric encryption KMS key or an asymmetric KMS key with a KeyUsage
value of\n ENCRYPT_DECRYPT
. To find the KeyUsage
value of a KMS key, use the\n DescribeKey operation.
To specify a KMS key, use its key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a KMS key in a different Amazon Web Services account, you must use the key ARN or alias ARN.
For example:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
\n
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", "smithy.api#required": {} } }, "DestinationEncryptionContext": { "target": "com.amazonaws.kms#EncryptionContextType", "traits": { - "smithy.api#documentation": "Specifies that encryption context to use when the reencrypting the data.
\nA destination encryption context is valid only when the destination KMS key is a symmetric\n KMS key. The standard ciphertext format for asymmetric KMS keys does not include fields for\n metadata.
\nAn encryption context is a collection of non-secret key-value pairs that represents additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is optional when encrypting with a symmetric KMS key, but it is highly recommended.
\nFor more information, see\n Encryption\n Context in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "Specifies that encryption context to use when the reencrypting the data.
\nA destination encryption context is valid only when the destination KMS key is a symmetric encryption KMS key. The standard ciphertext format for asymmetric KMS keys does not include fields for\n metadata.
\nAn encryption context is a collection of non-secret key-value pairs that represent additional authenticated data. \nWhen you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported\nonly on operations with symmetric encryption KMS keys. On operations with symmetric encryption KMS keys, an encryption context is optional, but it is strongly recommended.
\nFor more information, see\nEncryption context in the Key Management Service Developer Guide.
" } }, "SourceEncryptionAlgorithm": { "target": "com.amazonaws.kms#EncryptionAlgorithmSpec", "traits": { - "smithy.api#documentation": "Specifies the encryption algorithm that KMS will use to decrypt the ciphertext before it\n is reencrypted. The default value, SYMMETRIC_DEFAULT
, represents the algorithm\n used for symmetric KMS keys.
Specify the same algorithm that was used to encrypt the ciphertext. If you specify a\n different algorithm, the decrypt attempt fails.
\nThis parameter is required only when the ciphertext was encrypted under an asymmetric KMS\n key.
" + "smithy.api#documentation": "Specifies the encryption algorithm that KMS will use to decrypt the ciphertext before it\n is reencrypted. The default value, SYMMETRIC_DEFAULT
, represents the algorithm\n used for symmetric encryption KMS keys.
Specify the same algorithm that was used to encrypt the ciphertext. If you specify a\n different algorithm, the decrypt attempt fails.
\nThis parameter is required only when the ciphertext was encrypted under an asymmetric KMS\n key.
" } }, "DestinationEncryptionAlgorithm": { "target": "com.amazonaws.kms#EncryptionAlgorithmSpec", "traits": { - "smithy.api#documentation": "Specifies the encryption algorithm that KMS will use to reecrypt the data after it has\n decrypted it. The default value, SYMMETRIC_DEFAULT
, represents the encryption\n algorithm used for symmetric KMS keys.
This parameter is required only when the destination KMS key is an asymmetric KMS\n key.
" + "smithy.api#documentation": "Specifies the encryption algorithm that KMS will use to reecrypt the data after it has\n decrypted it. The default value, SYMMETRIC_DEFAULT
, represents the encryption\n algorithm used for symmetric encryption KMS keys.
This parameter is required only when the destination KMS key is an asymmetric KMS\n key.
" } }, "GrantTokens": { @@ -4333,7 +4543,7 @@ } ], "traits": { - "smithy.api#documentation": "Replicates a multi-Region key into the specified Region. This operation creates a\n multi-Region replica key based on a multi-Region primary key in a different Region of the same\n Amazon Web Services partition. You can create multiple replicas of a primary key, but each must be in a\n different Region. To create a multi-Region primary key, use the CreateKey\n operation.
\nThis operation supports multi-Region keys, an KMS feature that lets you create multiple\n interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key\n material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt\n it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Using multi-Region keys in the Key Management Service Developer Guide.
\nA replica key is a fully-functional KMS key that can be used\n independently of its primary and peer replica keys. A primary key and its replica keys share\n properties that make them interoperable. They have the same key ID and key material. They also\n have the same key\n spec, key\n usage, key\n material origin, and automatic key rotation status. KMS automatically synchronizes these shared\n properties among related multi-Region keys. All other properties of a replica key can differ,\n including its key\n policy, tags, aliases, and key\n state. KMS pricing and quotas for KMS keys apply to each primary key and replica\n key.
\nWhen this operation completes, the new replica key has a transient key state of\n Creating
. This key state changes to Enabled
(or\n PendingImport
) after a few seconds when the process of creating the new replica\n key is complete. While the key state is Creating
, you can manage key, but you\n cannot yet use it in cryptographic operations. If you are creating and using the replica key\n programmatically, retry on KMSInvalidStateException
or call\n DescribeKey
to check its KeyState
value before using it. For\n details about the Creating
key state, see Key state: Effect on your KMS key in the\n Key Management Service Developer Guide.
The CloudTrail log of a ReplicateKey
operation records a\n ReplicateKey
operation in the primary key's Region and a CreateKey operation in the replica key's Region.
If you replicate a multi-Region primary key with imported key material, the replica key is\n created with no key material. You must import the same key material that you imported into the\n primary key. For details, see Importing key material into multi-Region keys in the Key Management Service Developer Guide.
\nTo convert a replica key to a primary key, use the UpdatePrimaryRegion\n operation.
\n\n ReplicateKey
uses different default values for the KeyPolicy
\n and Tags
parameters than those used in the KMS console. For details, see the\n parameter descriptions.
\n Cross-account use: No. You cannot use this operation to\n create a replica key in a different Amazon Web Services account.
\n\n Required permissions:
\n\n kms:ReplicateKey
on the primary key (in the primary key's Region).\n Include this permission in the primary key's key policy.
\n kms:CreateKey
in an IAM policy in the replica Region.
To use the Tags
parameter, kms:TagResource
in an IAM policy\n in the replica Region.
\n Related operations\n
\n\n CreateKey\n
\n\n UpdatePrimaryRegion\n
\nReplicates a multi-Region key into the specified Region. This operation creates a\n multi-Region replica key based on a multi-Region primary key in a different Region of the same\n Amazon Web Services partition. You can create multiple replicas of a primary key, but each must be in a\n different Region. To create a multi-Region primary key, use the CreateKey\n operation.
\nThis operation supports multi-Region keys, an KMS feature that lets you create multiple\n interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key\n material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt\n it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
\nA replica key is a fully-functional KMS key that can be used\n independently of its primary and peer replica keys. A primary key and its replica keys share\n properties that make them interoperable. They have the same key ID and key material. They also\n have the same key\n spec, key\n usage, key\n material origin, and automatic key rotation status. KMS automatically synchronizes these shared\n properties among related multi-Region keys. All other properties of a replica key can differ,\n including its key\n policy, tags, aliases, and Key states of KMS keys. KMS pricing and quotas for KMS keys apply to each primary key and replica\n key.
\nWhen this operation completes, the new replica key has a transient key state of\n Creating
. This key state changes to Enabled
(or\n PendingImport
) after a few seconds when the process of creating the new replica\n key is complete. While the key state is Creating
, you can manage key, but you\n cannot yet use it in cryptographic operations. If you are creating and using the replica key\n programmatically, retry on KMSInvalidStateException
or call\n DescribeKey
to check its KeyState
value before using it. For\n details about the Creating
key state, see Key states of KMS keys in the\n Key Management Service Developer Guide.
You cannot create more than one replica of a primary key in any Region. If the Region\n already includes a replica of the key you're trying to replicate, ReplicateKey
\n returns an AlreadyExistsException
error. If the key state of the existing replica\n is PendingDeletion
, you can cancel the scheduled key deletion (CancelKeyDeletion) or wait for the key to be deleted. The new replica key you create\n will have the same shared properties as the original replica key.
The CloudTrail log of a ReplicateKey
operation records a\n ReplicateKey
operation in the primary key's Region and a CreateKey operation in the replica key's Region.
If you replicate a multi-Region primary key with imported key material, the replica key is\n created with no key material. You must import the same key material that you imported into the\n primary key. For details, see Importing key material into multi-Region keys in the Key Management Service Developer Guide.
\nTo convert a replica key to a primary key, use the UpdatePrimaryRegion\n operation.
\n\n ReplicateKey
uses different default values for the KeyPolicy
\n and Tags
parameters than those used in the KMS console. For details, see the\n parameter descriptions.
\n Cross-account use: No. You cannot use this operation to\n create a replica key in a different Amazon Web Services account.
\n\n Required permissions:
\n\n kms:ReplicateKey
on the primary key (in the primary key's Region).\n Include this permission in the primary key's key policy.
\n kms:CreateKey
in an IAM policy in the replica Region.
To use the Tags
parameter, kms:TagResource
in an IAM policy\n in the replica Region.
\n Related operations\n
\n\n CreateKey\n
\n\n UpdatePrimaryRegion\n
\nThe Region ID of the Amazon Web Services Region for this replica key.
\nEnter the Region ID, such as us-east-1
or ap-southeast-2
. For a\n list of Amazon Web Services Regions in which KMS is supported, see KMS service endpoints in the\n Amazon Web Services General Reference.
The replica must be in a different Amazon Web Services Region than its primary key and other replicas of\n that primary key, but in the same Amazon Web Services partition. KMS must be available in the replica\n Region. If the Region is not enabled by default, the Amazon Web Services account must be enabled in the\n Region.
\nFor information about Amazon Web Services partitions, see Amazon Resource Names (ARNs) in the\n Amazon Web Services General Reference. For information about enabling and disabling Regions, see Enabling a\n Region and Disabling a Region in the\n Amazon Web Services General Reference.
", + "smithy.api#documentation": "The Region ID of the Amazon Web Services Region for this replica key.
\nEnter the Region ID, such as us-east-1
or ap-southeast-2
. For a\n list of Amazon Web Services Regions in which KMS is supported, see KMS service endpoints in the\n Amazon Web Services General Reference.
HMAC KMS keys are not supported in all Amazon Web Services Regions. If you try to replicate an HMAC\n KMS key in an Amazon Web Services Region in which HMAC keys are not supported, the\n ReplicateKey
operation returns an UnsupportedOperationException
.\n For a list of Regions in which HMAC KMS keys are supported, see HMAC keys in KMS in the Key Management Service Developer Guide.
The replica must be in a different Amazon Web Services Region than its primary key and other replicas of\n that primary key, but in the same Amazon Web Services partition. KMS must be available in the replica\n Region. If the Region is not enabled by default, the Amazon Web Services account must be enabled in the\n Region. For information about Amazon Web Services partitions, see Amazon Resource Names (ARNs) in the\n Amazon Web Services General Reference. For information about enabling and disabling Regions, see Enabling a\n Region and Disabling a Region in the\n Amazon Web Services General Reference.
", "smithy.api#required": {} } }, @@ -4374,7 +4584,7 @@ "Tags": { "target": "com.amazonaws.kms#TagList", "traits": { - "smithy.api#documentation": "Assigns one or more tags to the replica key. Use this parameter to tag the KMS key when it\n is created. To tag an existing KMS key, use the TagResource\n operation.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see Using ABAC in KMS in the Key Management Service Developer Guide.
\nTo use this parameter, you must have kms:TagResource permission in an IAM policy.
\nTags are not a shared property of multi-Region keys. You can specify the same tags or\n different tags for each key in a set of related multi-Region keys. KMS does not synchronize\n this property.
\nEach tag consists of a tag key and a tag value. Both the tag key and the tag value are\n required, but the tag value can be an empty (null) string. You cannot have more than one tag\n on a KMS key with the same tag key. If you specify an existing tag key with a different tag\n value, KMS replaces the current tag value with the specified one.
\nWhen you add tags to an Amazon Web Services resource, Amazon Web Services generates a cost allocation\n report with usage and costs aggregated by tags. Tags can also be used to control access to a KMS key. For details,\n see Tagging Keys.
" + "smithy.api#documentation": "Assigns one or more tags to the replica key. Use this parameter to tag the KMS key when it\n is created. To tag an existing KMS key, use the TagResource\n operation.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see ABAC in KMS in the Key Management Service Developer Guide.
\nTo use this parameter, you must have kms:TagResource permission in an IAM policy.
\nTags are not a shared property of multi-Region keys. You can specify the same tags or\n different tags for each key in a set of related multi-Region keys. KMS does not synchronize\n this property.
\nEach tag consists of a tag key and a tag value. Both the tag key and the tag value are\n required, but the tag value can be an empty (null) string. You cannot have more than one tag\n on a KMS key with the same tag key. If you specify an existing tag key with a different tag\n value, KMS replaces the current tag value with the specified one.
\nWhen you add tags to an Amazon Web Services resource, Amazon Web Services generates a cost allocation\n report with usage and costs aggregated by tags. Tags can also be used to control access to a KMS key. For details,\n see Tagging Keys.
" } } } @@ -4385,7 +4595,7 @@ "ReplicaKeyMetadata": { "target": "com.amazonaws.kms#KeyMetadata", "traits": { - "smithy.api#documentation": "Displays details about the new replica key, including its Amazon Resource Name (key ARN) and\n key state. It also\n includes the ARN and Amazon Web Services Region of its primary key and other replica keys.
" + "smithy.api#documentation": "Displays details about the new replica key, including its Amazon Resource Name (key ARN) and\n Key states of KMS keys. It also\n includes the ARN and Amazon Web Services Region of its primary key and other replica keys.
" } }, "ReplicaPolicy": { @@ -4407,6 +4617,9 @@ "input": { "target": "com.amazonaws.kms#RetireGrantRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kms#DependencyTimeoutException" @@ -4431,7 +4644,7 @@ } ], "traits": { - "smithy.api#documentation": "Deletes a grant. Typically, you retire a grant when you no longer need its permissions. To\n identify the grant to retire, use a grant token, or both the grant ID and a\n key identifier (key ID or key ARN) of the KMS key. The CreateGrant operation\n returns both values.
\nThis operation can be called by the retiring principal for a grant,\n by the grantee principal if the grant allows the RetireGrant
\n operation, and by the Amazon Web Services account (root user) in which the grant is created. It can also be\n called by principals to whom permission for retiring a grant is delegated. For details, see\n Retiring and\n revoking grants in the Key Management Service Developer Guide.
For detailed information about grants, including grant terminology, see Using grants in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\n\n Cross-account use: Yes. You can retire a grant on a KMS\n key in a different Amazon Web Services account.
\n\n Required permissions::Permission to retire a grant is\n determined primarily by the grant. For details, see Retiring and revoking grants in\n the Key Management Service Developer Guide.
\n\n Related operations:\n
\n\n CreateGrant\n
\n\n ListGrants\n
\n\n ListRetirableGrants\n
\n\n RevokeGrant\n
\nDeletes a grant. Typically, you retire a grant when you no longer need its permissions. To\n identify the grant to retire, use a grant token, or both the grant ID and a\n key identifier (key ID or key ARN) of the KMS key. The CreateGrant operation\n returns both values.
\nThis operation can be called by the retiring principal for a grant,\n by the grantee principal if the grant allows the RetireGrant
\n operation, and by the Amazon Web Services account in which the grant is created. It can also be called by\n principals to whom permission for retiring a grant is delegated. For details, see Retiring and revoking\n grants in the Key Management Service Developer Guide.
For detailed information about grants, including grant terminology, see Grants in KMS in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\n\n Cross-account use: Yes. You can retire a grant on a KMS\n key in a different Amazon Web Services account.
\n\n Required permissions::Permission to retire a grant is\n determined primarily by the grant. For details, see Retiring and revoking grants in\n the Key Management Service Developer Guide.
\n\n Related operations:\n
\n\n CreateGrant\n
\n\n ListGrants\n
\n\n ListRetirableGrants\n
\n\n RevokeGrant\n
\nDeletes the specified grant. You revoke a grant to terminate the permissions that the\n grant allows. For more information, see Retiring and revoking grants in\n the \n Key Management Service Developer Guide\n .
\nWhen you create, retire, or revoke a grant, there might be a brief delay, usually less than five minutes, until the grant is available throughout KMS. This state is known as eventual consistency. For details, see Eventual consistency in\n the \n Key Management Service Developer Guide\n .
\nFor detailed information about grants, including grant terminology, see Using grants in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\n\n Cross-account use: Yes. To perform this operation on a KMS key in a different Amazon Web Services account, specify the key\n ARN in the value of the KeyId
parameter.
\n Required permissions: kms:RevokeGrant (key policy).
\n\n Related operations:\n
\n\n CreateGrant\n
\n\n ListGrants\n
\n\n ListRetirableGrants\n
\n\n RetireGrant\n
\nDeletes the specified grant. You revoke a grant to terminate the permissions that the\n grant allows. For more information, see Retiring and revoking grants in\n the \n Key Management Service Developer Guide\n .
\nWhen you create, retire, or revoke a grant, there might be a brief delay, usually less than five minutes, until the grant is available throughout KMS. This state is known as eventual consistency. For details, see Eventual consistency in\n the \n Key Management Service Developer Guide\n .
\nFor detailed information about grants, including grant terminology, see Grants in KMS in the\n \n Key Management Service Developer Guide\n . For examples of working with grants in several\n programming languages, see Programming grants.
\n\n Cross-account use: Yes. To perform this operation on a KMS key in a different Amazon Web Services account, specify the key\n ARN in the value of the KeyId
parameter.
\n Required permissions: kms:RevokeGrant (key policy).
\n\n Related operations:\n
\n\n CreateGrant\n
\n\n ListGrants\n
\n\n ListRetirableGrants\n
\n\n RetireGrant\n
\nSchedules the deletion of a KMS key. By default, KMS applies a waiting period of 30\n days, but you can specify a waiting period of 7-30 days. When this operation is successful,\n the key state of the KMS key changes to PendingDeletion
and the key can't be used\n in any cryptographic operations. It remains in this state for the duration of the waiting\n period. Before the waiting period ends, you can use CancelKeyDeletion to\n cancel the deletion of the KMS key. After the waiting period ends, KMS deletes the KMS key,\n its key material, and all KMS data associated with it, including all aliases that refer to\n it.
Deleting a KMS key is a destructive and potentially dangerous operation. When a KMS key\n is deleted, all data that was encrypted under the KMS key is unrecoverable. (The only\n exception is a multi-Region replica key.) To prevent the use of a KMS key without deleting\n it, use DisableKey.
\nIf you schedule deletion of a KMS key from a custom key store, when the waiting period\n expires, ScheduleKeyDeletion
deletes the KMS key from KMS. Then KMS makes a\n best effort to delete the key material from the associated CloudHSM cluster. However, you might\n need to manually delete the orphaned key\n material from the cluster and its backups.
You can schedule the deletion of a multi-Region primary key and its replica keys at any\n time. However, KMS will not delete a multi-Region primary key with existing replica keys. If\n you schedule the deletion of a primary key with replicas, its key state changes to\n PendingReplicaDeletion
and it cannot be replicated or used in cryptographic\n operations. This status can continue indefinitely. When the last of its replica keys is\n deleted (not just scheduled), the key state of the primary key changes to\n PendingDeletion
and its waiting period (PendingWindowInDays
)\n begins. For details, see Deleting multi-Region keys in the\n Key Management Service Developer Guide.
For more information about scheduling a KMS key for deletion, see Deleting KMS keys in the\n Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n\n Required permissions: kms:ScheduleKeyDeletion (key\n policy)
\n\n Related operations\n
\n\n CancelKeyDeletion\n
\n\n DisableKey\n
\nSchedules the deletion of a KMS key. By default, KMS applies a waiting period of 30\n days, but you can specify a waiting period of 7-30 days. When this operation is successful,\n the key state of the KMS key changes to PendingDeletion
and the key can't be used\n in any cryptographic operations. It remains in this state for the duration of the waiting\n period. Before the waiting period ends, you can use CancelKeyDeletion to\n cancel the deletion of the KMS key. After the waiting period ends, KMS deletes the KMS key,\n its key material, and all KMS data associated with it, including all aliases that refer to\n it.
Deleting a KMS key is a destructive and potentially dangerous operation. When a KMS key\n is deleted, all data that was encrypted under the KMS key is unrecoverable. (The only\n exception is a multi-Region replica key.) To prevent the use of a KMS key without deleting\n it, use DisableKey.
\nIf you schedule deletion of a KMS key from a custom key store, when the waiting period\n expires, ScheduleKeyDeletion
deletes the KMS key from KMS. Then KMS makes a\n best effort to delete the key material from the associated CloudHSM cluster. However, you might\n need to manually delete the orphaned key\n material from the cluster and its backups.
You can schedule the deletion of a multi-Region primary key and its replica keys at any\n time. However, KMS will not delete a multi-Region primary key with existing replica keys. If\n you schedule the deletion of a primary key with replicas, its key state changes to\n PendingReplicaDeletion
and it cannot be replicated or used in cryptographic\n operations. This status can continue indefinitely. When the last of its replica keys is\n deleted (not just scheduled), the key state of the primary key changes to\n PendingDeletion
and its waiting period (PendingWindowInDays
)\n begins. For details, see Deleting multi-Region keys in the\n Key Management Service Developer Guide.
For more information about scheduling a KMS key for deletion, see Deleting KMS keys in the\n Key Management Service Developer Guide.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n\n Required permissions: kms:ScheduleKeyDeletion (key\n policy)
\n\n Related operations\n
\n\n CancelKeyDeletion\n
\n\n DisableKey\n
\nThe current status of the KMS key.
\nFor more information about how key state affects the use of a KMS key, see Key state: Effect on your KMS\n key in the Key Management Service Developer Guide.
" + "smithy.api#documentation": "The current status of the KMS key.
\nFor more information about how key state affects the use of a KMS key, see Key states of KMS keys in the Key Management Service Developer Guide.
" } }, "PendingWindowInDays": { @@ -4616,7 +4832,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a digital\n signature for a message or message digest by using the private key in an asymmetric\n KMS key. To verify the signature, use the Verify operation, or use the\n public key in the same asymmetric KMS key outside of KMS. For information about symmetric and asymmetric KMS keys, see Using Symmetric and Asymmetric KMS keys in the Key Management Service Developer Guide.
\nDigital signatures are generated and verified by using asymmetric key pair, such as an RSA\n or ECC pair that is represented by an asymmetric KMS key. The key owner (or an authorized\n user) uses their private key to sign a message. Anyone with the public key can verify that the\n message was signed with that particular private key and that the message hasn't changed since\n it was signed.
\nTo use the Sign
operation, provide the following information:
Use the KeyId
parameter to identify an asymmetric KMS key with a\n KeyUsage
value of SIGN_VERIFY
. To get the\n KeyUsage
value of a KMS key, use the DescribeKey\n operation. The caller must have kms:Sign
permission on the KMS key.
Use the Message
parameter to specify the message or message digest to\n sign. You can submit messages of up to 4096 bytes. To sign a larger message, generate a\n hash digest of the message, and then provide the hash digest in the Message
\n parameter. To indicate whether the message is a full message or a digest, use the\n MessageType
parameter.
Choose a signing algorithm that is compatible with the KMS key.
\nWhen signing a message, be sure to record the KMS key and the signing algorithm. This\n information is required to verify the signature.
\nTo verify the signature that this operation generates, use the Verify\n operation. Or use the GetPublicKey operation to download the public key and\n then use the public key to verify the signature outside of KMS.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:Sign (key policy)
\n\n Related operations: Verify\n
" + "smithy.api#documentation": "Creates a digital\n signature for a message or message digest by using the private key in an asymmetric\n signing KMS key. To verify the signature, use the Verify operation, or use\n the public key in the same asymmetric KMS key outside of KMS. For information about asymmetric KMS keys, see Asymmetric KMS keys in the Key Management Service Developer Guide.
\nDigital signatures are generated and verified by using asymmetric key pair, such as an RSA\n or ECC pair that is represented by an asymmetric KMS key. The key owner (or an authorized\n user) uses their private key to sign a message. Anyone with the public key can verify that the\n message was signed with that particular private key and that the message hasn't changed since\n it was signed.
\nTo use the Sign
operation, provide the following information:
Use the KeyId
parameter to identify an asymmetric KMS key with a\n KeyUsage
value of SIGN_VERIFY
. To get the\n KeyUsage
value of a KMS key, use the DescribeKey\n operation. The caller must have kms:Sign
permission on the KMS key.
Use the Message
parameter to specify the message or message digest to\n sign. You can submit messages of up to 4096 bytes. To sign a larger message, generate a\n hash digest of the message, and then provide the hash digest in the Message
\n parameter. To indicate whether the message is a full message or a digest, use the\n MessageType
parameter.
Choose a signing algorithm that is compatible with the KMS key.
\nWhen signing a message, be sure to record the KMS key and the signing algorithm. This\n information is required to verify the signature.
\nTo verify the signature that this operation generates, use the Verify\n operation. Or use the GetPublicKey operation to download the public key and\n then use the public key to verify the signature outside of KMS.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:Sign (key policy)
\n\n Related operations: Verify\n
" } }, "com.amazonaws.kms#SignRequest": { @@ -4794,6 +5010,9 @@ "input": { "target": "com.amazonaws.kms#TagResourceRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.kms#InvalidArnException" @@ -4815,7 +5034,7 @@ } ], "traits": { - "smithy.api#documentation": "Adds or edits tags on a customer managed key.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see Using ABAC in KMS in the Key Management Service Developer Guide.
\nEach tag consists of a tag key and a tag value, both of which are case-sensitive strings.\n The tag value can be an empty (null) string. To add a tag, specify a new tag key and a tag\n value. To edit a tag, specify an existing tag key and a new tag value.
\nYou can use this operation to tag a customer managed key, but you cannot\n tag an Amazon Web Services\n managed key, an Amazon Web Services owned key, a custom key\n store, or an alias.
\nYou can also add tags to a KMS key while creating it (CreateKey) or\n replicating it (ReplicateKey).
\nFor information about using tags in KMS, see Tagging keys. For general information about\n tags, including the format and syntax, see Tagging Amazon Web Services resources in the Amazon\n Web Services General Reference.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:TagResource (key policy)
\n\n Related operations\n
\n\n CreateKey\n
\n\n ListResourceTags\n
\n\n ReplicateKey\n
\n\n UntagResource\n
\nAdds or edits tags on a customer managed key.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see ABAC in KMS in the Key Management Service Developer Guide.
\nEach tag consists of a tag key and a tag value, both of which are case-sensitive strings.\n The tag value can be an empty (null) string. To add a tag, specify a new tag key and a tag\n value. To edit a tag, specify an existing tag key and a new tag value.
\nYou can use this operation to tag a customer managed key, but you cannot\n tag an Amazon Web Services\n managed key, an Amazon Web Services owned key, a custom key\n store, or an alias.
\nYou can also add tags to a KMS key while creating it (CreateKey) or\n replicating it (ReplicateKey).
\nFor information about using tags in KMS, see Tagging keys. For general information about\n tags, including the format and syntax, see Tagging Amazon Web Services resources in the Amazon\n Web Services General Reference.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:TagResource (key policy)
\n\n Related operations\n
\n\n CreateKey\n
\n\n ListResourceTags\n
\n\n ReplicateKey\n
\n\n UntagResource\n
\nKey Management Service (KMS) is an encryption and key management web service. This guide describes\n the KMS operations that you can call programmatically. For general information about KMS,\n see the \n Key Management Service Developer Guide\n .
\nKMS is replacing the term customer master key (CMK) with KMS key and KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term.
\nAmazon Web Services provides SDKs that consist of libraries and sample code for various programming\n languages and platforms (Java, Ruby, .Net, macOS, Android, etc.). The SDKs provide a\n convenient way to create programmatic access to KMS and other Amazon Web Services services. For example,\n the SDKs take care of tasks such as signing requests (see below), managing errors, and\n retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to\n download and install them, see Tools for Amazon Web\n Services.
\nWe recommend that you use the Amazon Web Services SDKs to make programmatic API calls to KMS.
\nIf you need to use FIPS 140-2 validated cryptographic modules when communicating with\n Amazon Web Services, use the FIPS endpoint in your preferred Amazon Web Services Region. For more information about the\n available FIPS endpoints, see Service endpoints in the Key Management Service topic of the Amazon Web Services General Reference.
\nClients must support TLS (Transport Layer Security) 1.0. We recommend TLS 1.2. Clients\n must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral\n Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems\n such as Java 7 and later support these modes.
\n\n Signing Requests\n
\nRequests must be signed by using an access key ID and a secret access key. We strongly\n recommend that you do not use your Amazon Web Services account (root) access key ID and\n secret key for everyday work with KMS. Instead, use the access key ID and secret access key\n for an IAM user. You can also use the Amazon Web Services Security Token Service to generate temporary\n security credentials that you can use to sign requests.
\nAll KMS operations require Signature Version 4.
\n\n Logging API Requests\n
\nKMS supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your\n Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the\n information collected by CloudTrail, you can determine what requests were made to KMS, who made\n the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it\n on and find your log files, see the CloudTrail User Guide.
\n\n Additional Resources\n
\nFor more information about credentials and request signing, see the following:
\n\n Amazon Web Services\n Security Credentials - This topic provides general information about the types\n of credentials used to access Amazon Web Services.
\n\n Temporary\n Security Credentials - This section of the IAM User Guide\n describes how to create and use temporary security credentials.
\n\n Signature Version\n 4 Signing Process - This set of topics walks you through the process of signing\n a request using an access key ID and a secret access key.
\n\n Commonly Used API Operations\n
\nOf the API operations discussed in this guide, the following will prove the most useful\n for most applications. You will likely perform operations other than these, such as creating\n keys and assigning policies, by using the console.
\n\n Encrypt\n
\n\n Decrypt\n
\n\n GenerateDataKey\n
\nKey Management Service (KMS) is an encryption and key management web service. This guide describes\n the KMS operations that you can call programmatically. For general information about KMS,\n see the \n Key Management Service Developer Guide\n .
\nKMS is replacing the term customer master key (CMK) with KMS key and KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term.
\nAmazon Web Services provides SDKs that consist of libraries and sample code for various programming\n languages and platforms (Java, Ruby, .Net, macOS, Android, etc.). The SDKs provide a\n convenient way to create programmatic access to KMS and other Amazon Web Services services. For example,\n the SDKs take care of tasks such as signing requests (see below), managing errors, and\n retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to\n download and install them, see Tools for Amazon Web\n Services.
\nWe recommend that you use the Amazon Web Services SDKs to make programmatic API calls to KMS.
\nClients must support TLS (Transport Layer Security) 1.0. We recommend TLS 1.2. Clients\n must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral\n Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems\n such as Java 7 and later support these modes.
\n\n Signing Requests\n
\nRequests must be signed by using an access key ID and a secret access key. We strongly\n recommend that you do not use your Amazon Web Services account (root) access key ID and\n secret key for everyday work with KMS. Instead, use the access key ID and secret access key\n for an IAM user. You can also use the Amazon Web Services Security Token Service to generate temporary\n security credentials that you can use to sign requests.
\nAll KMS operations require Signature Version 4.
\n\n Logging API Requests\n
\nKMS supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your\n Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the\n information collected by CloudTrail, you can determine what requests were made to KMS, who made\n the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it\n on and find your log files, see the CloudTrail User Guide.
\n\n Additional Resources\n
\nFor more information about credentials and request signing, see the following:
\n\n Amazon Web Services\n Security Credentials - This topic provides general information about the types\n of credentials used to access Amazon Web Services.
\n\n Temporary\n Security Credentials - This section of the IAM User Guide\n describes how to create and use temporary security credentials.
\n\n Signature Version\n 4 Signing Process - This set of topics walks you through the process of signing\n a request using an access key ID and a secret access key.
\n\n Commonly Used API Operations\n
\nOf the API operations discussed in this guide, the following will prove the most useful\n for most applications. You will likely perform operations other than these, such as creating\n keys and assigning policies, by using the console.
\n\n Encrypt\n
\n\n Decrypt\n
\n\n GenerateDataKey\n
\nDeletes tags from a customer managed key. To delete a tag,\n specify the tag key and the KMS key.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see Using ABAC in KMS in the Key Management Service Developer Guide.
\nWhen it succeeds, the UntagResource
operation doesn't return any output.\n Also, if the specified tag key isn't found on the KMS key, it doesn't throw an exception or\n return a response. To confirm that the operation worked, use the ListResourceTags operation.
For information about using tags in KMS, see Tagging keys. For general information about\n tags, including the format and syntax, see Tagging Amazon Web Services resources in the Amazon\n Web Services General Reference.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:UntagResource (key policy)
\n\n Related operations\n
\n\n CreateKey\n
\n\n ListResourceTags\n
\n\n ReplicateKey\n
\n\n TagResource\n
\nDeletes tags from a customer managed key. To delete a tag,\n specify the tag key and the KMS key.
\nTagging or untagging a KMS key can allow or deny permission to the KMS key. For details, see ABAC in KMS in the Key Management Service Developer Guide.
\nWhen it succeeds, the UntagResource
operation doesn't return any output.\n Also, if the specified tag key isn't found on the KMS key, it doesn't throw an exception or\n return a response. To confirm that the operation worked, use the ListResourceTags operation.
For information about using tags in KMS, see Tagging keys. For general information about\n tags, including the format and syntax, see Tagging Amazon Web Services resources in the Amazon\n Web Services General Reference.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:UntagResource (key policy)
\n\n Related operations\n
\n\n CreateKey\n
\n\n ListResourceTags\n
\n\n ReplicateKey\n
\n\n TagResource\n
\nAssociates an existing KMS alias with a different KMS key. Each alias is associated with\n only one KMS key at a time, although a KMS key can have multiple aliases. The alias and the\n KMS key must be in the same Amazon Web Services account and Region.
\nAdding, deleting, or updating an alias can allow or deny permission to the KMS key. For details, see Using ABAC in KMS in the Key Management Service Developer Guide.
\nThe current and new KMS keys must be the same type (both symmetric or both asymmetric), and\n they must have the same key usage (ENCRYPT_DECRYPT
or SIGN_VERIFY
).\n This restriction prevents errors in code that uses aliases. If you must assign an alias to a\n different type of KMS key, use DeleteAlias to delete the old alias and CreateAlias to create a new alias.
You cannot use UpdateAlias
to change an alias name. To change an alias name,\n use DeleteAlias to delete the old alias and CreateAlias to\n create a new alias.
Because an alias is not a property of a KMS key, you can create, update, and delete the\n aliases of a KMS key without affecting the KMS key. Also, aliases do not appear in the\n response from the DescribeKey operation. To get the aliases of all KMS keys\n in the account, use the ListAliases operation.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n Required permissions\n
\n\n kms:UpdateAlias on\n the alias (IAM policy).
\n\n kms:UpdateAlias on\n the current KMS key (key policy).
\n\n kms:UpdateAlias on\n the new KMS key (key policy).
\nFor details, see Controlling access to aliases in the\n Key Management Service Developer Guide.
\n\n Related operations:\n
\n\n CreateAlias\n
\n\n DeleteAlias\n
\n\n ListAliases\n
\nAssociates an existing KMS alias with a different KMS key. Each alias is associated with\n only one KMS key at a time, although a KMS key can have multiple aliases. The alias and the\n KMS key must be in the same Amazon Web Services account and Region.
\nAdding, deleting, or updating an alias can allow or deny permission to the KMS key. For details, see ABAC in KMS in the Key Management Service Developer Guide.
\nThe current and new KMS keys must be the same type (both symmetric or both asymmetric), and\n they must have the same key usage (ENCRYPT_DECRYPT
or SIGN_VERIFY
).\n This restriction prevents errors in code that uses aliases. If you must assign an alias to a\n different type of KMS key, use DeleteAlias to delete the old alias and CreateAlias to create a new alias.
You cannot use UpdateAlias
to change an alias name. To change an alias name,\n use DeleteAlias to delete the old alias and CreateAlias to\n create a new alias.
Because an alias is not a property of a KMS key, you can create, update, and delete the\n aliases of a KMS key without affecting the KMS key. Also, aliases do not appear in the\n response from the DescribeKey operation. To get the aliases of all KMS keys\n in the account, use the ListAliases operation.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n Required permissions\n
\n\n kms:UpdateAlias on\n the alias (IAM policy).
\n\n kms:UpdateAlias on\n the current KMS key (key policy).
\n\n kms:UpdateAlias on\n the new KMS key (key policy).
\nFor details, see Controlling access to aliases in the\n Key Management Service Developer Guide.
\n\n Related operations:\n
\n\n CreateAlias\n
\n\n DeleteAlias\n
\n\n ListAliases\n
\nUpdates the description of a KMS key. To see the description of a KMS key, use DescribeKey.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:UpdateKeyDescription (key policy)
\n\n Related operations\n
\n\n CreateKey\n
\n\n DescribeKey\n
\nUpdates the description of a KMS key. To see the description of a KMS key, use DescribeKey.
\nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account\n use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
\n\n\n Required permissions: kms:UpdateKeyDescription (key policy)
\n\n Related operations\n
\n\n CreateKey\n
\n\n DescribeKey\n
\nChanges the primary key of a multi-Region key.
\nThis operation changes the replica key in the specified Region to a primary key and\n changes the former primary key to a replica key. For example, suppose you have a primary key\n in us-east-1
and a replica key in eu-west-2
. If you run\n UpdatePrimaryRegion
with a PrimaryRegion
value of\n eu-west-2
, the primary key is now the key in eu-west-2
, and the\n key in us-east-1
becomes a replica key. For details, see Updating the primary Region in the Key Management Service Developer Guide.
This operation supports multi-Region keys, a KMS feature that lets you create multiple\n interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key\n material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt\n it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Using multi-Region keys in the Key Management Service Developer Guide.
\nThe primary key of a multi-Region key is the source for properties\n that are always shared by primary and replica keys, including the key material, key ID, key spec, key usage, key material\n origin, and automatic\n key rotation. It's the only key that can be replicated. You cannot delete the primary\n key until all replica keys are deleted.
\nThe key ID and primary Region that you specify uniquely identify the replica key that will\n become the primary key. The primary Region must already have a replica key. This operation\n does not create a KMS key in the specified Region. To find the replica keys, use the DescribeKey operation on the primary key or any replica key. To create a replica\n key, use the ReplicateKey operation.
\nYou can run this operation while using the affected multi-Region keys in cryptographic\n operations. This operation should not delay, interrupt, or cause failures in cryptographic\n operations.
\nEven after this operation completes, the process of updating the primary Region might\n still be in progress for a few more seconds. Operations such as DescribeKey
might\n display both the old and new primary keys as replicas. The old and new primary keys have a\n transient key state of Updating
. The original key state is restored when the\n update is complete. While the key state is Updating
, you can use the keys in\n cryptographic operations, but you cannot replicate the new primary key or perform certain\n management operations, such as enabling or disabling these keys. For details about the\n Updating
key state, see Key state:\n Effect on your KMS key in the Key Management Service Developer Guide.
This operation does not return any output. To verify that the primary key is changed, use the\n DescribeKey operation.
\n\n Cross-account use: No. You cannot use this operation in a\n different Amazon Web Services account.
\n\n Required permissions:
\n\n kms:UpdatePrimaryRegion
on the current primary key (in the primary key's\n Region). Include this permission in the primary key's key policy.
\n kms:UpdatePrimaryRegion
on the current replica key (in the replica key's\n Region). Include this permission in the replica key's key policy.
\n Related operations\n
\n\n CreateKey\n
\n\n ReplicateKey\n
\nChanges the primary key of a multi-Region key.
\nThis operation changes the replica key in the specified Region to a primary key and\n changes the former primary key to a replica key. For example, suppose you have a primary key\n in us-east-1
and a replica key in eu-west-2
. If you run\n UpdatePrimaryRegion
with a PrimaryRegion
value of\n eu-west-2
, the primary key is now the key in eu-west-2
, and the\n key in us-east-1
becomes a replica key. For details, see Updating the primary Region in the Key Management Service Developer Guide.
This operation supports multi-Region keys, a KMS feature that lets you create multiple\n interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key\n material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt\n it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
\nThe primary key of a multi-Region key is the source for properties\n that are always shared by primary and replica keys, including the key material, key ID, key spec, key usage, key material\n origin, and automatic\n key rotation. It's the only key that can be replicated. You cannot delete the primary\n key until all replica keys are deleted.
\nThe key ID and primary Region that you specify uniquely identify the replica key that will\n become the primary key. The primary Region must already have a replica key. This operation\n does not create a KMS key in the specified Region. To find the replica keys, use the DescribeKey operation on the primary key or any replica key. To create a replica\n key, use the ReplicateKey operation.
\nYou can run this operation while using the affected multi-Region keys in cryptographic\n operations. This operation should not delay, interrupt, or cause failures in cryptographic\n operations.
\nEven after this operation completes, the process of updating the primary Region might\n still be in progress for a few more seconds. Operations such as DescribeKey
might\n display both the old and new primary keys as replicas. The old and new primary keys have a\n transient key state of Updating
. The original key state is restored when the\n update is complete. While the key state is Updating
, you can use the keys in\n cryptographic operations, but you cannot replicate the new primary key or perform certain\n management operations, such as enabling or disabling these keys. For details about the\n Updating
key state, see Key states of KMS keys in the Key Management Service Developer Guide.
This operation does not return any output. To verify that the primary key is changed, use the\n DescribeKey operation.
\n\n Cross-account use: No. You cannot use this operation in a\n different Amazon Web Services account.
\n\n Required permissions:
\n\n kms:UpdatePrimaryRegion
on the current primary key (in the primary key's\n Region). Include this permission in the primary key's key policy.
\n kms:UpdatePrimaryRegion
on the current replica key (in the replica key's\n Region). Include this permission in the replica key's key policy.
\n Related operations\n
\n\n CreateKey\n
\n\n ReplicateKey\n
\nVerifies a digital signature that was generated by the Sign operation.
\n \nVerification confirms that an authorized user signed the message with the specified KMS\n key and signing algorithm, and the message hasn't changed since it was signed. If the\n signature is verified, the value of the SignatureValid
field in the response is\n True
. If the signature verification fails, the Verify
operation\n fails with a KMSInvalidSignatureException
exception.
A digital signature is generated by using the private key in an asymmetric KMS key. The\n signature is verified by using the public key in the same asymmetric KMS key.\n For information about symmetric and asymmetric KMS keys, see Using Symmetric and Asymmetric KMS keys in the Key Management Service Developer Guide.
\nTo verify a digital signature, you can use the Verify
operation. Specify the\n same asymmetric KMS key, message, and signing algorithm that were used to produce the\n signature.
You can also verify the digital signature by using the public key of the KMS key outside\n of KMS. Use the GetPublicKey operation to download the public key in the\n asymmetric KMS key and then use the public key to verify the signature outside of KMS. The\n advantage of using the Verify
operation is that it is performed within KMS. As\n a result, it's easy to call, the operation is performed within the FIPS boundary, it is logged\n in CloudTrail, and you can use key policy and IAM policy to determine who is authorized to use\n the KMS key to verify signatures.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key state: Effect on your KMS key in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:Verify (key policy)
\n\n Related operations: Sign\n
" + "smithy.api#documentation": "Verifies a digital signature that was generated by the Sign operation.
\n \nVerification confirms that an authorized user signed the message with the specified KMS\n key and signing algorithm, and the message hasn't changed since it was signed. If the\n signature is verified, the value of the SignatureValid
field in the response is\n True
. If the signature verification fails, the Verify
operation\n fails with a KMSInvalidSignatureException
exception.
A digital signature is generated by using the private key in an asymmetric KMS key. The\n signature is verified by using the public key in the same asymmetric KMS key.\n For information about asymmetric KMS keys, see Asymmetric KMS keys in the Key Management Service Developer Guide.
\nTo verify a digital signature, you can use the Verify
operation. Specify the\n same asymmetric KMS key, message, and signing algorithm that were used to produce the\n signature.
You can also verify the digital signature by using the public key of the KMS key outside\n of KMS. Use the GetPublicKey operation to download the public key in the\n asymmetric KMS key and then use the public key to verify the signature outside of KMS. The\n advantage of using the Verify
operation is that it is performed within KMS. As\n a result, it's easy to call, the operation is performed within the FIPS boundary, it is logged\n in CloudTrail, and you can use key policy and IAM policy to determine who is authorized to use\n the KMS key to verify signatures.
The KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:Verify (key policy)
\n\n Related operations: Sign\n
" + } + }, + "com.amazonaws.kms#VerifyMac": { + "type": "operation", + "input": { + "target": "com.amazonaws.kms#VerifyMacRequest" + }, + "output": { + "target": "com.amazonaws.kms#VerifyMacResponse" + }, + "errors": [ + { + "target": "com.amazonaws.kms#DisabledException" + }, + { + "target": "com.amazonaws.kms#InvalidGrantTokenException" + }, + { + "target": "com.amazonaws.kms#InvalidKeyUsageException" + }, + { + "target": "com.amazonaws.kms#KeyUnavailableException" + }, + { + "target": "com.amazonaws.kms#KMSInternalException" + }, + { + "target": "com.amazonaws.kms#KMSInvalidMacException" + }, + { + "target": "com.amazonaws.kms#KMSInvalidStateException" + }, + { + "target": "com.amazonaws.kms#NotFoundException" + } + ], + "traits": { + "smithy.api#documentation": "Verifies the hash-based message authentication code (HMAC) for a specified message, HMAC\n KMS key, and MAC algorithm. To verify the HMAC, VerifyMac
computes an HMAC using\n the message, HMAC KMS key, and MAC algorithm that you specify, and compares the computed HMAC\n to the HMAC that you specify. If the HMACs are identical, the verification succeeds;\n otherwise, it fails.
Verification indicates that the message hasn't changed since the HMAC was calculated, and\n the specified key was used to generate and verify the HMAC.
\nThis operation is part of KMS support for HMAC KMS keys. For details, see\n HMAC keys in KMS in the Key Management Service Developer Guide.
\n \nThe KMS key that you use for this operation must be in a compatible key state. For\ndetails, see Key states of KMS keys in the Key Management Service Developer Guide.
\n\n Cross-account use: Yes. To perform this operation with a KMS key in a different Amazon Web Services account, specify\n the key ARN or alias ARN in the value of the KeyId
parameter.
\n Required permissions: kms:VerifyMac (key policy)
\n\n Related operations: GenerateMac\n
" + } + }, + "com.amazonaws.kms#VerifyMacRequest": { + "type": "structure", + "members": { + "Message": { + "target": "com.amazonaws.kms#PlaintextType", + "traits": { + "smithy.api#documentation": "The message that will be used in the verification. Enter the same message that was used to\n generate the HMAC.
\n\n GenerateMac and VerifyMac
do not provide special handling\n for message digests. If you generated an HMAC for a hash digest of a message, you must verify\n the HMAC for the same hash digest.
The KMS key that will be used in the verification.
\n \nEnter a key ID of the KMS\n key that was used to generate the HMAC. If you identify a different KMS key, the VerifyMac
operation fails.
The MAC algorithm that will be used in the verification. Enter the same MAC algorithm that was used to compute the HMAC. This algorithm must be supported by the HMAC KMS key identified by the KeyId
parameter.
The HMAC to verify. Enter the HMAC that was generated by the GenerateMac operation when you specified the same message, HMAC KMS key, and MAC algorithm as the values specified in this request.
", + "smithy.api#required": {} + } + }, + "GrantTokens": { + "target": "com.amazonaws.kms#GrantTokenList", + "traits": { + "smithy.api#documentation": "A list of grant tokens.
\nUse a grant token when your permission to call this operation comes from a new grant that has not yet achieved eventual consistency. For more information, see Grant token and Using a grant token in the\n Key Management Service Developer Guide.
" + } + } + } + }, + "com.amazonaws.kms#VerifyMacResponse": { + "type": "structure", + "members": { + "KeyId": { + "target": "com.amazonaws.kms#KeyIdType", + "traits": { + "smithy.api#documentation": "The HMAC KMS key used in the verification.
" + } + }, + "MacValid": { + "target": "com.amazonaws.kms#BooleanType", + "traits": { + "smithy.api#documentation": "A Boolean value that indicates whether the HMAC was verified. A value of \n True
indicates that the HMAC (Mac
) was generated with the specified\n Message
, HMAC KMS key (KeyID
) and\n MacAlgorithm
.
If the HMAC is not verified, the VerifyMac
operation fails with a\n KMSInvalidMacException
exception. This exception indicates that one or more of\n the inputs changed since the HMAC was computed.
The MAC algorithm used in the verification.
" + } + } } }, "com.amazonaws.kms#VerifyRequest": { diff --git a/aws/sdk/aws-models/lightsail.json b/aws/sdk/aws-models/lightsail.json index 7b61e4d6b8..3e888844b1 100644 --- a/aws/sdk/aws-models/lightsail.json +++ b/aws/sdk/aws-models/lightsail.json @@ -177,6 +177,61 @@ ] } }, + "com.amazonaws.lightsail#AccountLevelBpaSync": { + "type": "structure", + "members": { + "status": { + "target": "com.amazonaws.lightsail#AccountLevelBpaSyncStatus", + "traits": { + "smithy.api#documentation": "The status of the account-level BPA synchronization.
\n\nThe following statuses are possible:
\n\n InSync
- Account-level BPA is synchronized. The Amazon S3\n account-level BPA configuration applies to your Lightsail buckets.
\n NeverSynced
- Synchronization has not yet happened. The Amazon S3\n account-level BPA configuration does not apply to your Lightsail buckets.
\n Failed
- Synchronization failed. The Amazon S3 account-level BPA\n configuration does not apply to your Lightsail buckets.
\n Defaulted
- Synchronization failed and account-level BPA for your\n Lightsail buckets is defaulted to active.
You might need to complete further actions if the status is Failed
or\n Defaulted
. The message
parameter provides more information for\n those statuses.
The timestamp of when the account-level BPA configuration was last synchronized. This\n value is null when the account-level BPA configuration has not been synchronized.
" + } + }, + "message": { + "target": "com.amazonaws.lightsail#BPAStatusMessage", + "traits": { + "smithy.api#documentation": "A message that provides a reason for a Failed
or Defaulted
\n synchronization status.
The following messages are possible:
\n\n SYNC_ON_HOLD
- The synchronization has not yet happened. This status\n message occurs immediately after you create your first Lightsail bucket. This status\n message should change after the first synchronization happens, approximately 1 hour after\n the first bucket is created.
\n DEFAULTED_FOR_SLR_MISSING
- The synchronization failed because the\n required service-linked role is missing from your Amazon Web Services account. The\n account-level BPA configuration for your Lightsail buckets is defaulted to\n active until the synchronization can occur. This means that all\n your buckets are private and not publicly accessible. For more information about how to\n create the required service-linked role to allow synchronization, see Using Service-Linked Roles for Amazon Lightsail in the\n Amazon Lightsail Developer Guide.
\n DEFAULTED_FOR_SLR_MISSING_ON_HOLD
- The synchronization failed because\n the required service-linked role is missing from your Amazon Web Services account.\n Account-level BPA is not yet configured for your Lightsail buckets. Therefore, only the\n bucket access permissions and individual object access permissions apply to your\n Lightsail buckets. For more information about how to create the required service-linked\n role to allow synchronization, see Using Service-Linked Roles for Amazon Lightsail in the\n Amazon Lightsail Developer Guide.
\n Unknown
- The reason that synchronization failed is unknown. Contact\n Amazon Web Services Support for more information.
A Boolean value that indicates whether account-level block public access is affecting your\n Lightsail buckets.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Describes the synchronization status of the Amazon Simple Storage Service (Amazon S3)\n account-level block public access (BPA) feature for your Lightsail buckets.
\n\nThe account-level BPA feature of Amazon S3 provides centralized controls to limit\n public access to all Amazon S3 buckets in an account. BPA can make all Amazon S3 buckets in an Amazon Web Services account private regardless of the individual bucket and\n object permissions that are configured. Lightsail buckets take into account the\n Amazon S3 account-level BPA configuration when allowing or denying public access. To\n do this, Lightsail periodically fetches the account-level BPA configuration\n from Amazon S3. When the account-level BPA status is InSync
, the Amazon S3 account-level BPA configuration is synchronized and it applies to your Lightsail\n buckets. For more information about Amazon Simple Storage Service account-level BPA and how it affects\n Lightsail buckets, see Block public access for buckets in Amazon Lightsail in the\n Amazon Lightsail Developer Guide.
Attaches an SSL/TLS certificate to your Amazon Lightsail content delivery network (CDN)\n distribution.
\nAfter the certificate is attached, your distribution accepts HTTPS traffic for all of the\n domains that are associated with the certificate.
\nUse the CreateCertificate
action to create a certificate that you can attach\n to your distribution.
Only certificates created in the us-east-1
AWS Region can be attached to\n Lightsail distributions. Lightsail distributions are global resources that can reference\n an origin in any AWS Region, and distribute its content globally. However, all\n distributions are located in the us-east-1
Region.
Attaches an SSL/TLS certificate to your Amazon Lightsail content delivery network (CDN)\n distribution.
\nAfter the certificate is attached, your distribution accepts HTTPS traffic for all of the\n domains that are associated with the certificate.
\nUse the CreateCertificate
action to create a certificate that you can attach\n to your distribution.
Only certificates created in the us-east-1
\n Amazon Web Services Region can be attached to Lightsail distributions. Lightsail\n distributions are global resources that can reference an origin in any Amazon Web Services\n Region, and distribute its content globally. However, all distributions are located in the\n us-east-1
Region.
An object that describes the location of the bucket, such as the Amazon Web Services Region\n and Availability Zone.
" + } }, "name": { "target": "com.amazonaws.lightsail#BucketName", @@ -1175,13 +1256,13 @@ "ableToUpdateBundle": { "target": "com.amazonaws.lightsail#boolean", "traits": { - "smithy.api#documentation": "Indicates whether the bundle that is currently applied to a bucket can be changed to\n another bundle.
\n\nYou can update a bucket's bundle only one time within a monthly AWS billing\n cycle.
\n\nUse the UpdateBucketBundle action to change a\n bucket's bundle.
" + "smithy.api#documentation": "Indicates whether the bundle that is currently applied to a bucket can be changed to\n another bundle.
\n\nYou can update a bucket's bundle only one time within a monthly Amazon Web Services billing\n cycle.
\n\nUse the UpdateBucketBundle action to change a\n bucket's bundle.
" } }, "readonlyAccessAccounts": { "target": "com.amazonaws.lightsail#PartnerIdList", "traits": { - "smithy.api#documentation": "An array of strings that specify the AWS account IDs that have read-only access to the\n bucket.
" + "smithy.api#documentation": "An array of strings that specify the Amazon Web Services account IDs that have read-only\n access to the bucket.
" } }, "resourcesReceivingAccess": { @@ -1220,7 +1301,7 @@ "destination": { "target": "com.amazonaws.lightsail#BucketName", "traits": { - "smithy.api#documentation": "The name of the bucket where the access is saved. The destination can be a Lightsail\n bucket in the same account, and in the same AWS Region as the source bucket.
\nThis parameter is required when enabling the access log for a bucket, and should be\n omitted when disabling the access log.
\nThe name of the bucket where the access logs are saved. The destination can be a\n Lightsail bucket in the same account, and in the same Amazon Web Services Region as the\n source bucket.
\nThis parameter is required when enabling the access log for a bucket, and should be\n omitted when disabling the access log.
\nA list of objects describing the Availability Zone and AWS Region of the CloudFormation\n stack record.
" + "smithy.api#documentation": "A list of objects describing the Availability Zone and Amazon Web Services Region of the\n CloudFormation stack record.
" } }, "resourceType": { @@ -1981,7 +2062,10 @@ } }, "location": { - "target": "com.amazonaws.lightsail#ResourceLocation" + "target": "com.amazonaws.lightsail#ResourceLocation", + "traits": { + "smithy.api#documentation": "An object that describes the location of the contact method, such as the Amazon Web Services Region and Availability Zone.
" + } }, "resourceType": { "target": "com.amazonaws.lightsail#ResourceType", @@ -2174,7 +2258,7 @@ "location": { "target": "com.amazonaws.lightsail#ResourceLocation", "traits": { - "smithy.api#documentation": "An object that describes the location of the container service, such as the AWS Region\n and Availability Zone.
" + "smithy.api#documentation": "An object that describes the location of the container service, such as the Amazon Web Services Region and Availability Zone.
" } }, "resourceType": { @@ -2240,7 +2324,7 @@ "principalArn": { "target": "com.amazonaws.lightsail#string", "traits": { - "smithy.api#documentation": "The principal ARN of the container service.
\n\nThe principal ARN can be used to create a trust relationship between your standard AWS\n account and your Lightsail container service. This allows you to give your service\n permission to access resources in your standard AWS account.
" + "smithy.api#documentation": "The principal ARN of the container service.
\n\nThe principal ARN can be used to create a trust relationship between your standard Amazon Web Services account and your Lightsail container service. This allows you to give your\n service permission to access resources in your standard Amazon Web Services account.
" } }, "privateDomainName": { @@ -2817,7 +2901,7 @@ } ], "traits": { - "smithy.api#documentation": "Copies a manual snapshot of an instance or disk as another manual snapshot, or copies an\n automatic snapshot of an instance or disk as a manual snapshot. This operation can also be\n used to copy a manual or automatic snapshot of an instance or a disk from one AWS Region to\n another in Amazon Lightsail.
\nWhen copying a manual snapshot, be sure to define the source\n region
, source snapshot name
, and target snapshot name
\n parameters.
When copying an automatic snapshot, be sure to define the\n source region
, source resource name
, target snapshot\n name
, and either the restore date
or the use latest restorable\n auto snapshot
parameters.
Copies a manual snapshot of an instance or disk as another manual snapshot, or copies an\n automatic snapshot of an instance or disk as a manual snapshot. This operation can also be\n used to copy a manual or automatic snapshot of an instance or a disk from one Amazon Web Services Region to another in Amazon Lightsail.
\nWhen copying a manual snapshot, be sure to define the source\n region
, source snapshot name
, and target snapshot name
\n parameters.
When copying an automatic snapshot, be sure to define the\n source region
, source resource name
, target snapshot\n name
, and either the restore date
or the use latest restorable\n auto snapshot
parameters.
The AWS Region where the source manual or automatic snapshot is located.
", + "smithy.api#documentation": "The Amazon Web Services Region where the source manual or automatic snapshot is\n located.
", "smithy.api#required": {} } } @@ -3047,7 +3131,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates an SSL/TLS certificate for an Amazon Lightsail content delivery network (CDN)\n distribution and a container service.
\nAfter the certificate is valid, use the AttachCertificateToDistribution
\n action to use the certificate and its domains with your distribution. Or use the\n UpdateContainerService
action to use the certificate and its domains with your\n container service.
Only certificates created in the us-east-1
AWS Region can be attached to\n Lightsail distributions. Lightsail distributions are global resources that can reference\n an origin in any AWS Region, and distribute its content globally. However, all\n distributions are located in the us-east-1
Region.
Creates an SSL/TLS certificate for an Amazon Lightsail content delivery network (CDN)\n distribution and a container service.
\nAfter the certificate is valid, use the AttachCertificateToDistribution
\n action to use the certificate and its domains with your distribution. Or use the\n UpdateContainerService
action to use the certificate and its domains with your\n container service.
Only certificates created in the us-east-1
\n Amazon Web Services Region can be attached to Lightsail distributions. Lightsail\n distributions are global resources that can reference an origin in any Amazon Web Services\n Region, and distribute its content globally. However, all distributions are located in the\n us-east-1
Region.
Creates an email or SMS text message contact method.
\nA contact method is used to send you notifications about your Amazon Lightsail resources.\n You can add one email address and one mobile phone number contact method in each AWS Region.\n However, SMS text messaging is not supported in some AWS Regions, and SMS text messages\n cannot be sent to some countries/regions. For more information, see Notifications in Amazon Lightsail.
", + "smithy.api#documentation": "Creates an email or SMS text message contact method.
\nA contact method is used to send you notifications about your Amazon Lightsail resources.\n You can add one email address and one mobile phone number contact method in each Amazon Web Services Region. However, SMS text messaging is not supported in some Amazon Web Services\n Regions, and SMS text messages cannot be sent to some countries/regions. For more information,\n see Notifications in Amazon Lightsail.
", "smithy.api#http": { "method": "POST", "uri": "/ls/api/2016-11-28/CreateContactMethod", @@ -3209,7 +3293,7 @@ "protocol": { "target": "com.amazonaws.lightsail#ContactProtocol", "traits": { - "smithy.api#documentation": "The protocol of the contact method, such as Email
or SMS
(text\n messaging).
The SMS
protocol is supported only in the following AWS Regions.
US East (N. Virginia) (us-east-1
)
US West (Oregon) (us-west-2
)
Europe (Ireland) (eu-west-1
)
Asia Pacific (Tokyo) (ap-northeast-1
)
Asia Pacific (Singapore) (ap-southeast-1
)
Asia Pacific (Sydney) (ap-southeast-2
)
For a list of countries/regions where SMS text messages can be sent, and the latest AWS\n Regions where SMS text messaging is supported, see Supported Regions and Countries in the Amazon SNS Developer\n Guide.
\nFor more information about notifications in Amazon Lightsail, see Notifications in Amazon Lightsail.
", + "smithy.api#documentation": "The protocol of the contact method, such as Email
or SMS
(text\n messaging).
The SMS
protocol is supported only in the following Amazon Web Services\n Regions.
US East (N. Virginia) (us-east-1
)
US West (Oregon) (us-west-2
)
Europe (Ireland) (eu-west-1
)
Asia Pacific (Tokyo) (ap-northeast-1
)
Asia Pacific (Singapore) (ap-southeast-1
)
Asia Pacific (Sydney) (ap-southeast-2
)
For a list of countries/regions where SMS text messages can be sent, and the latest\n Amazon Web Services Regions where SMS text messaging is supported, see Supported Regions and Countries in the Amazon SNS Developer\n Guide.
\nFor more information about notifications in Amazon Lightsail, see Notifications in Amazon Lightsail.
", "smithy.api#required": {} } }, @@ -3293,7 +3377,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a deployment for your Amazon Lightsail container service.
\n\nA deployment specifies the containers that will be launched on the container service and\n their settings, such as the ports to open, the environment variables to apply, and the launch\n command to run. It also specifies the container that will serve as the public endpoint of the\n deployment and its settings, such as the HTTP or HTTPS port to use, and the health check\n configuration.
\n\nYou can deploy containers to your container service using container images from a public\n registry like Docker Hub, or from your local machine. For more information, see Creating container images for your Amazon Lightsail container services in the\n Amazon Lightsail Developer Guide.
", + "smithy.api#documentation": "Creates a deployment for your Amazon Lightsail container service.
\n\nA deployment specifies the containers that will be launched on the container service and\n their settings, such as the ports to open, the environment variables to apply, and the launch\n command to run. It also specifies the container that will serve as the public endpoint of the\n deployment and its settings, such as the HTTP or HTTPS port to use, and the health check\n configuration.
\n\nYou can deploy containers to your container service using container images from a public\n registry such as Amazon ECR Public, or from your local machine. For more information, see\n Creating container images for your Amazon Lightsail container services in the\n Amazon Lightsail Developer Guide.
", "smithy.api#http": { "method": "POST", "uri": "/ls/api/2016-11-28/container-services/{serviceName}/deployments", @@ -3392,7 +3476,7 @@ "serviceName": { "target": "com.amazonaws.lightsail#ContainerServiceName", "traits": { - "smithy.api#documentation": "The name for the container service.
\n\nThe name that you specify for your container service will make up part of its default\n domain. The default domain of a container service is typically\n https://
.\n If the name of your container service is container-service-1
, and it's located in\n the US East (Ohio) AWS region (us-east-2
), then the domain for your container\n service will be like the following example:\n https://container-service-1.ur4EXAMPLE2uq.us-east-2.cs.amazonlightsail.com
\n
The following are the requirements for container service names:
\n\nMust be unique within each AWS Region in your Lightsail account.
\nMust contain 1 to 63 characters.
\nMust contain only alphanumeric characters and hyphens.
\nA hyphen (-) can separate words but cannot be at the start or end of the name.
\nThe name for the container service.
\n\nThe name that you specify for your container service will make up part of its default\n domain. The default domain of a container service is typically\n https://
.\n If the name of your container service is container-service-1
, and it's located in\n the US East (Ohio) Amazon Web Services Region (us-east-2
), then the domain for your container\n service will be like the following example:\n https://container-service-1.ur4EXAMPLE2uq.us-east-2.cs.amazonlightsail.com
\n
The following are the requirements for container service names:
\n\nMust be unique within each Amazon Web Services Region in your Lightsail\n account.
\nMust contain 1 to 63 characters.
\nMust contain only alphanumeric characters and hyphens.
\nA hyphen (-) can separate words but cannot be at the start or end of the name.
\nThe IP address type for the load balancer.
\n\nThe possible values are ipv4
for IPv4 only, and dualstack
for\n IPv4 and IPv6.
The default value is dualstack
.
The name of the TLS policy to apply to the load balancer.
\n\nUse the GetLoadBalancerTlsPolicies action to get a list of TLS policy names that you can\n specify.
\n\nFor more information about load balancer TLS policies, see Load balancer TLS security policies in the Amazon Lightsail Developer\n Guide.
" + } } } }, @@ -5057,7 +5147,7 @@ } ], "traits": { - "smithy.api#documentation": "Deletes a Amazon Lightsail bucket.
\n\nWhen you delete your bucket, the bucket name is released and can be reused for a new\n bucket in your account or another AWS account.
\nDeletes an Amazon Lightsail bucket.</p>
\n\nWhen you delete your bucket, the bucket name is released and can be reused for a new\n bucket in your account or another Amazon Web Services account.
\nDeletes a contact method.
\nA contact method is used to send you notifications about your Amazon Lightsail resources.\n You can add one email address and one mobile phone number contact method in each AWS Region.\n However, SMS text messaging is not supported in some AWS Regions, and SMS text messages\n cannot be sent to some countries/regions. For more information, see Notifications in Amazon Lightsail.
", + "smithy.api#documentation": "Deletes a contact method.
\nA contact method is used to send you notifications about your Amazon Lightsail resources.\n You can add one email address and one mobile phone number contact method in each Amazon Web Services Region. However, SMS text messaging is not supported in some Amazon Web Services\n Regions, and SMS text messages cannot be sent to some countries/regions. For more information,\n see Notifications in Amazon Lightsail.
", "smithy.api#http": { "method": "POST", "uri": "/ls/api/2016-11-28/DeleteContactMethod", @@ -7393,7 +7483,7 @@ } ], "traits": { - "smithy.api#documentation": "Exports an Amazon Lightsail instance or block storage disk snapshot to Amazon Elastic Compute Cloud (Amazon EC2).\n This operation results in an export snapshot record that can be used with the create\n cloud formation stack
operation to create new Amazon EC2 instances.
Exported instance snapshots appear in Amazon EC2 as Amazon Machine Images (AMIs), and the\n instance system disk appears as an Amazon Elastic Block Store (Amazon EBS) volume. Exported disk snapshots appear in\n Amazon EC2 as Amazon EBS volumes. Snapshots are exported to the same Amazon Web Services Region in Amazon EC2 as the\n source Lightsail snapshot.
\n \nThe export snapshot
operation supports tag-based access control via resource\n tags applied to the resource identified by source snapshot name
. For more\n information, see the Amazon Lightsail Developer Guide.
Use the get instance snapshots
or get disk snapshots
\n operations to get a list of snapshots that you can export to Amazon EC2.
Exports an Amazon Lightsail instance or block storage disk snapshot to Amazon Elastic Compute Cloud (Amazon EC2).\n This operation results in an export snapshot record that can be used with the create\n cloud formation stack
operation to create new Amazon EC2 instances.
Exported instance snapshots appear in Amazon EC2 as Amazon Machine Images (AMIs), and the\n instance system disk appears as an Amazon Elastic Block Store (Amazon EBS) volume. Exported disk snapshots appear in\n Amazon EC2 as Amazon EBS volumes. Snapshots are exported to the same Amazon Web Services Region in\n Amazon EC2 as the source Lightsail snapshot.
\n \nThe export snapshot
operation supports tag-based access control via resource\n tags applied to the resource identified by source snapshot name
. For more\n information, see the Amazon Lightsail Developer Guide.
Use the get instance snapshots
or get disk snapshots
\n operations to get a list of snapshots that you can export to Amazon EC2.
Returns information about one or more Amazon Lightsail buckets.
\n\nFor more information about buckets, see Buckets in Amazon Lightsail in the Amazon Lightsail Developer\n Guide..
", + "smithy.api#documentation": "Returns information about one or more Amazon Lightsail buckets. The information returned\n includes the synchronization status of the Amazon Simple Storage Service (Amazon S3)\n account-level block public access feature for your Lightsail buckets.
\n\nFor more information about buckets, see Buckets in Amazon Lightsail in the Amazon Lightsail Developer\n Guide.
", "smithy.api#http": { "method": "POST", "uri": "/ls/api/2016-11-28/GetBuckets", @@ -8122,7 +8212,7 @@ "bucketName": { "target": "com.amazonaws.lightsail#BucketName", "traits": { - "smithy.api#documentation": "The name of the bucket for which to return information.
\n\nWhen omitted, the response includes all of your buckets in the AWS Region where the\n request is made.
" + "smithy.api#documentation": "The name of the bucket for which to return information.
\n\nWhen omitted, the response includes all of your buckets in the Amazon Web Services Region\n where the request is made.
" } }, "pageToken": { @@ -8153,6 +8243,12 @@ "traits": { "smithy.api#documentation": "The token to advance to the next page of results from your request.
\n\nA next page token is not returned if there are no more results to display.
\n\nTo get the next page of results, perform another GetBuckets
request and\n specify the next page token using the pageToken
parameter.
An object that describes the synchronization status of the Amazon S3 account-level\n block public access feature for your Lightsail buckets.
\n\nFor more information about this feature and how it affects Lightsail buckets, see Block public access for buckets in Amazon Lightsail.
" + } } } }, @@ -8270,7 +8366,7 @@ "certificateStatuses": { "target": "com.amazonaws.lightsail#CertificateStatusList", "traits": { - "smithy.api#documentation": "The status of the certificates for which to return information.
\nFor example, specify ISSUED
to return only certificates with an\n ISSUED
status.
When omitted, the response includes all of your certificates in the AWS Region where the\n request is made, regardless of their current status.
" + "smithy.api#documentation": "The status of the certificates for which to return information.
\nFor example, specify ISSUED
to return only certificates with an\n ISSUED
status.
When omitted, the response includes all of your certificates in the Amazon Web Services\n Region where the request is made, regardless of their current status.
" } }, "includeCertificateDetails": { @@ -8282,7 +8378,7 @@ "certificateName": { "target": "com.amazonaws.lightsail#CertificateName", "traits": { - "smithy.api#documentation": "The name for the certificate for which to return information.
\nWhen omitted, the response includes all of your certificates in the AWS Region where the\n request is made.
" + "smithy.api#documentation": "The name for the certificate for which to return information.
\nWhen omitted, the response includes all of your certificates in the Amazon Web Services\n Region where the request is made.
" } } } @@ -8395,7 +8491,7 @@ } ], "traits": { - "smithy.api#documentation": "Returns information about the configured contact methods. Specify a protocol in your\n request to return information about a specific contact method.
\nA contact method is used to send you notifications about your Amazon Lightsail resources.\n You can add one email address and one mobile phone number contact method in each AWS Region.\n However, SMS text messaging is not supported in some AWS Regions, and SMS text messages\n cannot be sent to some countries/regions. For more information, see Notifications in Amazon Lightsail.
", + "smithy.api#documentation": "Returns information about the configured contact methods. Specify a protocol in your\n request to return information about a specific contact method.
\nA contact method is used to send you notifications about your Amazon Lightsail resources.\n You can add one email address and one mobile phone number contact method in each Amazon Web Services Region. However, SMS text messaging is not supported in some Amazon Web Services\n Regions, and SMS text messages cannot be sent to some countries/regions. For more information,\n see Notifications in Amazon Lightsail.
", "smithy.api#http": { "method": "GET", "uri": "/ls/api/2016-11-28/GetContactMethods", @@ -8877,7 +8973,7 @@ "serviceName": { "target": "com.amazonaws.lightsail#ContainerServiceName", "traits": { - "smithy.api#documentation": "The name of the container service for which to return information.
\n\nWhen omitted, the response includes all of your container services in the AWS Region\n where the request is made.
", + "smithy.api#documentation": "The name of the container service for which to return information.
\n\nWhen omitted, the response includes all of your container services in the Amazon Web Services Region where the request is made.
", "smithy.api#httpQuery": "serviceName" } } @@ -9174,7 +9270,7 @@ } ], "traits": { - "smithy.api#documentation": "Returns the bundles that can be applied to your Amazon Lightsail content delivery network\n (CDN) distributions.
\nA distribution bundle specifies the monthly network transfer quota and monthly cost of\n your dsitribution.
", + "smithy.api#documentation": "Returns the bundles that can be applied to your Amazon Lightsail content delivery network\n (CDN) distributions.
\nA distribution bundle specifies the monthly network transfer quota and monthly cost of\n your distribution.
", "smithy.api#http": { "method": "POST", "uri": "/ls/api/2016-11-28/GetDistributionBundles", @@ -9413,7 +9509,7 @@ "distributionName": { "target": "com.amazonaws.lightsail#ResourceName", "traits": { - "smithy.api#documentation": "The name of the distribution for which to return information.
\n \nWhen omitted, the response includes all of your distributions in the AWS Region where\n the request is made.
" + "smithy.api#documentation": "The name of the distribution for which to return information.
\n \nWhen omitted, the response includes all of your distributions in the Amazon Web Services\n Region where the request is made.
" } }, "pageToken": { @@ -10582,6 +10678,68 @@ } } }, + "com.amazonaws.lightsail#GetLoadBalancerTlsPolicies": { + "type": "operation", + "input": { + "target": "com.amazonaws.lightsail#GetLoadBalancerTlsPoliciesRequest" + }, + "output": { + "target": "com.amazonaws.lightsail#GetLoadBalancerTlsPoliciesResult" + }, + "errors": [ + { + "target": "com.amazonaws.lightsail#AccessDeniedException" + }, + { + "target": "com.amazonaws.lightsail#AccountSetupInProgressException" + }, + { + "target": "com.amazonaws.lightsail#InvalidInputException" + }, + { + "target": "com.amazonaws.lightsail#ServiceException" + }, + { + "target": "com.amazonaws.lightsail#UnauthenticatedException" + } + ], + "traits": { + "smithy.api#documentation": "Returns a list of TLS security policies that you can apply to Lightsail load\n balancers.
\n\nFor more information about load balancer TLS security policies, see Load balancer TLS security policies in the Amazon Lightsail Developer\n Guide.
", + "smithy.api#http": { + "method": "POST", + "uri": "/ls/api/2016-11-28/GetLoadBalancerTlsPolicies", + "code": 200 + } + } + }, + "com.amazonaws.lightsail#GetLoadBalancerTlsPoliciesRequest": { + "type": "structure", + "members": { + "pageToken": { + "target": "com.amazonaws.lightsail#string", + "traits": { + "smithy.api#documentation": "The token to advance to the next page of results from your request.
\n\nTo get a page token, perform an initial GetLoadBalancerTlsPolicies
request.\n If your results are paginated, the response will return a next page token that you can specify\n as the page token in a subsequent request.
An array of objects that describe the TLS security policies that are available.
" + } + }, + "nextPageToken": { + "target": "com.amazonaws.lightsail#string", + "traits": { + "smithy.api#documentation": "The token to advance to the next page of results from your request.
\n\nA next page token is not returned if there are no more results to display.
\n\nTo get the next page of results, perform another GetLoadBalancerTlsPolicies
\n request and specify the next page token using the pageToken
parameter.
Lightsail throws this exception when user input does not conform to the validation rules\n of an input field.
\nDomain and distribution APIs are only available in the N. Virginia\n (us-east-1
) AWS Region. Please set your AWS Region configuration to\n us-east-1
to create, view, or edit these resources.
Lightsail throws this exception when user input does not conform to the validation rules\n of an input field.
\nDomain and distribution APIs are only available in the N. Virginia\n (us-east-1
) Amazon Web Services Region. Please set your Amazon Web Services\n Region configuration to us-east-1
to create, view, or edit these\n resources.
An object that describes the location of the distribution, such as the AWS Region and\n Availability Zone.
\nLightsail distributions are global resources that can reference an origin in any AWS\n Region, and distribute its content globally. However, all distributions are located in the\n us-east-1
Region.
An object that describes the location of the distribution, such as the Amazon Web Services\n Region and Availability Zone.
\nLightsail distributions are global resources that can reference an origin in any\n Amazon Web Services Region, and distribute its content globally. However, all distributions\n are located in the us-east-1
Region.
Amazon Lightsail is the easiest way to get started with Amazon Web Services (AWS) for developers\n who need to build websites or web applications. It includes everything you need to launch your\n project quickly - instances (virtual private servers), container services, storage buckets,\n managed databases, SSD-based block storage, static IP addresses, load balancers, content\n delivery network (CDN) distributions, DNS management of registered domains, and resource\n snapshots (backups) - for a low, predictable monthly price.
\n\nYou can manage your Lightsail resources using the Lightsail console, Lightsail API,\n AWS Command Line Interface (AWS CLI), or SDKs. For more information about Lightsail concepts\n and tasks, see the Amazon Lightsail Developer Guide.
\n\nThis API Reference provides detailed information about the actions, data types,\n parameters, and errors of the Lightsail service. For more information about the supported\n AWS Regions, endpoints, and service quotas of the Lightsail service, see Amazon Lightsail Endpoints and\n Quotas in the AWS General Reference.
", + "smithy.api#documentation": "Amazon Lightsail is the easiest way to get started with Amazon Web Services for developers who need to build websites or web applications. It includes\n everything you need to launch your project quickly - instances (virtual private servers),\n container services, storage buckets, managed databases, SSD-based block storage, static IP\n addresses, load balancers, content delivery network (CDN) distributions, DNS management of\n registered domains, and resource snapshots (backups) - for a low, predictable monthly\n price.</p>
\n\nYou can manage your Lightsail resources using the Lightsail console, Lightsail API,\n AWS Command Line Interface (AWS CLI), or SDKs. For more information about Lightsail concepts\n and tasks, see the Amazon Lightsail Developer Guide.
\n\nThis API Reference provides detailed information about the actions, data types,\n parameters, and errors of the Lightsail service. For more information about the supported\n Amazon Web Services Regions, endpoints, and service quotas of the Lightsail service, see\n Amazon Lightsail Endpoints\n and Quotas in the Amazon Web Services General Reference.
", "smithy.api#title": "Amazon Lightsail" }, "version": "2016-11-28", @@ -13669,6 +13827,9 @@ { "target": "com.amazonaws.lightsail#GetLoadBalancerTlsCertificates" }, + { + "target": "com.amazonaws.lightsail#GetLoadBalancerTlsPolicies" + }, { "target": "com.amazonaws.lightsail#GetOperation" }, @@ -13922,6 +14083,18 @@ "traits": { "smithy.api#documentation": "The IP address type of the load balancer.
\n\nThe possible values are ipv4
for IPv4 only, and dualstack
for\n IPv4 and IPv6.
A Boolean value that indicates whether HTTPS redirection is enabled for the load\n balancer.
" + } + }, + "tlsPolicyName": { + "target": "com.amazonaws.lightsail#ResourceName", + "traits": { + "smithy.api#documentation": "The name of the TLS security policy for the load balancer.
\n\nThe following TLS security policy names are possible:
\n\n TLS-2016-08
\n
\n TLS-FS-Res-1-2-2019-08
\n
The name of the TLS security policy.
\n\nThe following TLS security policy names are possible:
\n\n\n TLS-2016-08
\n
\n TLS-FS-Res-1-2-2019-08
\n
You can specify either of these values for the tlsSecurityPolicyName
request\n parameter in the CreateLoadBalancer action, and the attributeValue
request parameter in\n the UpdateLoadBalancerAttribute action.
A Boolean value that indicates whether the TLS security policy is the default.
" + } + }, + "description": { + "target": "com.amazonaws.lightsail#string", + "traits": { + "smithy.api#documentation": "The description of the TLS security policy.
" + } + }, + "protocols": { + "target": "com.amazonaws.lightsail#StringList", + "traits": { + "smithy.api#documentation": "The protocols used in a given TLS security policy.
\n\nThe following protocols are possible:
\n\n Protocol-TLSv1
\n
\n Protocol-TLSv1.1
\n
\n Protocol-TLSv1.2
\n
The ciphers used by the TLS security policy.
\nThe ciphers are listed in order of preference.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Describes the TLS security policies that are available for Lightsail load\n balancers.
\n\nFor more information about load balancer TLS security policies, see Load balancer TLS security policies in the Amazon Lightsail\n Developer Guide.
" + } + }, + "com.amazonaws.lightsail#LoadBalancerTlsPolicyList": { + "type": "list", + "member": { + "target": "com.amazonaws.lightsail#LoadBalancerTlsPolicy" + } + }, "com.amazonaws.lightsail#LogEvent": { "type": "structure", "members": { @@ -15037,7 +15262,7 @@ "location": { "target": "com.amazonaws.lightsail#ResourceLocation", "traits": { - "smithy.api#documentation": "The AWS Region and Availability Zone.
" + "smithy.api#documentation": "The Amazon Web Services Region and Availability Zone.
" } }, "isTerminal": { @@ -15881,7 +16106,7 @@ "contactProtocols": { "target": "com.amazonaws.lightsail#ContactProtocolsList", "traits": { - "smithy.api#documentation": "The contact protocols to use for the alarm, such as Email
, SMS
\n (text messaging), or both.
A notification is sent via the specified contact protocol if notifications are enabled for\n the alarm, and when the alarm is triggered.
\nA notification is not sent if a contact protocol is not specified, if the specified\n contact protocol is not configured in the AWS Region, or if notifications are not enabled\n for the alarm using the notificationEnabled
paramater.
Use the CreateContactMethod
action to configure a contact protocol in an\n AWS Region.
The contact protocols to use for the alarm, such as Email
, SMS
\n (text messaging), or both.
A notification is sent via the specified contact protocol if notifications are enabled for\n the alarm, and when the alarm is triggered.
\nA notification is not sent if a contact protocol is not specified, if the specified\n contact protocol is not configured in the Amazon Web Services Region, or if notifications are\n not enabled for the alarm using the notificationEnabled
parameter.</p>\n<p>Use the <code>CreateContactMethod</code>
Use the CreateContactMethod
action to configure a contact protocol in an\n Amazon Web Services Region.
An object that describes a container image that is registered to a Lightsail container\n service
" + } } } }, @@ -17342,7 +17570,7 @@ } ], "traits": { - "smithy.api#documentation": "Sends a verification request to an email contact method to ensure it's owned by the\n requester. SMS contact methods don't need to be verified.
\nA contact method is used to send you notifications about your Amazon Lightsail resources.\n You can add one email address and one mobile phone number contact method in each AWS Region.\n However, SMS text messaging is not supported in some AWS Regions, and SMS text messages\n cannot be sent to some countries/regions. For more information, see Notifications in Amazon Lightsail.
\nA verification request is sent to the contact method when you initially create it. Use\n this action to send another verification request if a previous verification request was\n deleted, or has expired.
\nNotifications are not sent to an email contact method until after it is verified, and\n confirmed as valid.
\nSends a verification request to an email contact method to ensure it's owned by the\n requester. SMS contact methods don't need to be verified.
\nA contact method is used to send you notifications about your Amazon Lightsail resources.\n You can add one email address and one mobile phone number contact method in each Amazon Web Services Region. However, SMS text messaging is not supported in some Amazon Web Services\n Regions, and SMS text messages cannot be sent to some countries/regions. For more information,\n see Notifications in Amazon Lightsail.
\nA verification request is sent to the contact method when you initially create it. Use\n this action to send another verification request if a previous verification request was\n deleted, or has expired.
\nNotifications are not sent to an email contact method until after it is verified, and\n confirmed as valid.
\nThe resource type.
\nThe possible values are Distribution
, Instance
, and\n LoadBalancer
.
Distribution-related APIs are available only in the N. Virginia (us-east-1
)\n AWS Region. Set your AWS Region configuration to us-east-1
to create, view,\n or edit distributions.
The resource type.
\nThe possible values are Distribution
, Instance
, and\n LoadBalancer
.
Distribution-related APIs are available only in the N. Virginia (us-east-1
)\n Amazon Web Services Region. Set your Amazon Web Services Region configuration to\n us-east-1
to create, view, or edit distributions.
Sets the Amazon Lightsail resources that can access the specified Lightsail\n bucket.
\n\nLightsail buckets currently support setting access for Lightsail instances in the same\n AWS Region.
", + "smithy.api#documentation": "Sets the Amazon Lightsail resources that can access the specified Lightsail\n bucket.
\n\nLightsail buckets currently support setting access for Lightsail instances in the same\n Amazon Web Services Region.
", "smithy.api#http": { "method": "POST", "uri": "/ls/api/2016-11-28/SetResourceAccessForBucket", @@ -18309,7 +18537,7 @@ } ], "traits": { - "smithy.api#documentation": "Updates an existing Amazon Lightsail bucket.
\n\nUse this action to update the configuration of an existing bucket, such as versioning,\n public accessibility, and the AWS accounts that can access the bucket.
", + "smithy.api#documentation": "Updates an existing Amazon Lightsail bucket.
\n\nUse this action to update the configuration of an existing bucket, such as versioning,\n public accessibility, and the Amazon Web Services accounts that can access the bucket.
", "smithy.api#http": { "method": "POST", "uri": "/ls/api/2016-11-28/UpdateBucket", @@ -18406,7 +18634,7 @@ "readonlyAccessAccounts": { "target": "com.amazonaws.lightsail#PartnerIdList", "traits": { - "smithy.api#documentation": "An array of strings to specify the AWS account IDs that can access the bucket.
\n\nYou can give a maximum of 10 AWS accounts access to a bucket.
" + "smithy.api#documentation": "An array of strings to specify the Amazon Web Services account IDs that can access the\n bucket.
\n\nYou can give a maximum of 10 Amazon Web Services accounts access to a bucket.
" } }, "accessLogConfig": { @@ -18582,7 +18810,7 @@ } ], "traits": { - "smithy.api#documentation": "Updates the bundle of your Amazon Lightsail content delivery network (CDN)\n distribution.
\nA distribution bundle specifies the monthly network transfer quota and monthly cost of\n your dsitribution.
\nUpdate your distribution's bundle if your distribution is going over its monthly network\n transfer quota and is incurring an overage fee.
\nYou can update your distribution's bundle only one time within your monthly AWS billing\n cycle. To determine if you can update your distribution's bundle, use the\n GetDistributions
action. The ableToUpdateBundle
parameter in the\n result will indicate whether you can currently update your distribution's bundle.
Updates the bundle of your Amazon Lightsail content delivery network (CDN)\n distribution.
\nA distribution bundle specifies the monthly network transfer quota and monthly cost of\n your distribution.
\nUpdate your distribution's bundle if your distribution is going over its monthly network\n transfer quota and is incurring an overage fee.
\nYou can update your distribution's bundle only one time within your monthly AWS billing\n cycle. To determine if you can update your distribution's bundle, use the\n GetDistributions
action. The ableToUpdateBundle
parameter in the\n result will indicate whether you can currently update your distribution's bundle.
An object that describes the result of the action, such as the status of the request, the\n timestamp of the request, and the resources affected by the request.
" + } } } }, @@ -18791,14 +19022,14 @@ "attributeName": { "target": "com.amazonaws.lightsail#LoadBalancerAttributeName", "traits": { - "smithy.api#documentation": "The name of the attribute you want to update. Valid values are below.
", + "smithy.api#documentation": "The name of the attribute you want to update.
", "smithy.api#required": {} } }, "attributeValue": { "target": "com.amazonaws.lightsail#StringMax256", "traits": { - "smithy.api#documentation": "The value that you want to specify for the attribute name.
", + "smithy.api#documentation": "The value that you want to specify for the attribute name.
\nThe following values are supported depending on what you specify for the\n attributeName
request parameter:
If you specify HealthCheckPath
for the attributeName
request\n parameter, then the attributeValue
request parameter must be the path to ping\n on the target (for example, /weather/us/wa/seattle
).
If you specify SessionStickinessEnabled
for the\n attributeName
request parameter, then the attributeValue
\n request parameter must be true
or false
.
If you specify SessionStickiness_LB_CookieDurationSeconds
for the\n attributeName
request parameter, then the attributeValue
      \n request parameter must be an integer that represents the cookie duration in\n seconds.
      
If you specify HttpsRedirectionEnabled
for the attributeName
\n request parameter, then the attributeValue
request parameter must be\n true
or false
.
If you specify TlsPolicyName
for the attributeName
request\n parameter, then the attributeValue
request parameter must be TLS\n version 1.0, 1.1, and 1.2
or TLS version 1.2
.
Amazon Lookout for Equipment is a machine learning service that uses advanced analytics to identify\n anomalies in machines from sensor data for use in predictive maintenance.
", + "smithy.api#title": "Amazon Lookout for Equipment" + }, "version": "2020-12-15", "operations": [ { @@ -78,6 +93,9 @@ { "target": "com.amazonaws.lookoutequipment#ListModels" }, + { + "target": "com.amazonaws.lookoutequipment#ListSensorStatistics" + }, { "target": "com.amazonaws.lookoutequipment#ListTagsForResource" }, @@ -99,22 +117,7 @@ { "target": "com.amazonaws.lookoutequipment#UpdateInferenceScheduler" } - ], - "traits": { - "aws.api#service": { - "sdkId": "LookoutEquipment", - "arnNamespace": "lookoutequipment", - "cloudFormationName": "LookoutEquipment", - "cloudTrailEventSource": "lookoutequipment.amazonaws.com", - "endpointPrefix": "lookoutequipment" - }, - "aws.auth#sigv4": { - "name": "lookoutequipment" - }, - "aws.protocols#awsJson1_0": {}, - "smithy.api#documentation": "Amazon Lookout for Equipment is a machine learning service that uses advanced analytics to identify\n anomalies in machines from sensor data for use in predictive maintenance.
", - "smithy.api#title": "Amazon Lookout for Equipment" - } + ] }, "com.amazonaws.lookoutequipment#AccessDeniedException": { "type": "structure", @@ -141,6 +144,9 @@ } } }, + "com.amazonaws.lookoutequipment#Boolean": { + "type": "boolean" + }, "com.amazonaws.lookoutequipment#BoundedLengthString": { "type": "string", "traits": { @@ -151,6 +157,37 @@ "smithy.api#pattern": "^[\\P{M}\\p{M}]{1,5000}$" } }, + "com.amazonaws.lookoutequipment#CategoricalValues": { + "type": "structure", + "members": { + "Status": { + "target": "com.amazonaws.lookoutequipment#StatisticalIssueStatus", + "traits": { + "smithy.api#documentation": "\nIndicates whether there is a potential data issue related to categorical values.\n
", + "smithy.api#required": {} + } + }, + "NumberOfCategory": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\nIndicates the number of categories in the data.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\nEntity that comprises information on categorical values in data.\n
" + } + }, + "com.amazonaws.lookoutequipment#ComponentName": { + "type": "string", + "traits": { + "smithy.api#length": { + "min": 1, + "max": 200 + }, + "smithy.api#pattern": "^[0-9a-zA-Z._\\-]{1,200}$" + } + }, "com.amazonaws.lookoutequipment#ComponentTimestampDelimiter": { "type": "string", "traits": { @@ -177,6 +214,28 @@ "smithy.api#httpError": 409 } }, + "com.amazonaws.lookoutequipment#CountPercent": { + "type": "structure", + "members": { + "Count": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\n\nIndicates the count of occurences of the given statistic.\n\n
", + "smithy.api#required": {} + } + }, + "Percentage": { + "target": "com.amazonaws.lookoutequipment#Float", + "traits": { + "smithy.api#documentation": "\n\nIndicates the percentage of occurances of the given statistic.\n\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nEntity that comprises information of count and percentage.\n\n
" + } + }, "com.amazonaws.lookoutequipment#CreateDataset": { "type": "operation", "input": { @@ -222,8 +281,7 @@ "DatasetSchema": { "target": "com.amazonaws.lookoutequipment#DatasetSchema", "traits": { - "smithy.api#documentation": "A JSON description of the data that is in each time series dataset, including names,\n column names, and data types.
", - "smithy.api#required": {} + "smithy.api#documentation": "A JSON description of the data that is in each time series dataset, including names,\n column names, and data types.
" } }, "ServerSideKmsKeyId": { @@ -587,7 +645,7 @@ "IngestionInputConfiguration": { "target": "com.amazonaws.lookoutequipment#IngestionInputConfiguration", "traits": { - "smithy.api#documentation": "Specifies information for the input data for the data inference job, including data S3\n location parameters.
" + "smithy.api#documentation": "Specifies information for the input data for the data inference job, including data Amazon S3\n location parameters.
" } }, "Status": { @@ -615,6 +673,58 @@ "smithy.api#documentation": "The configuration is the TargetSamplingRate
, which is the sampling rate of \n the data after post processing by \n Amazon Lookout for Equipment. For example, if you provide data that \n has been collected at a 1 second level and you want the system to resample \n the data at a 1 minute rate before training, the TargetSamplingRate
is 1 minute.
When providing a value for the TargetSamplingRate
, you must \n attach the prefix \"PT\" to the rate you want. The value for a 1 second rate \n is therefore PT1S, the value for a 15 minute rate \n is PT15M, and the value for a 1 hour rate \n is PT1H\n
\n\nParameter that gives information about insufficient data for sensors in the dataset. This includes information about those sensors that have complete data missing and those with a short date range.\n\n
", + "smithy.api#required": {} + } + }, + "MissingSensorData": { + "target": "com.amazonaws.lookoutequipment#MissingSensorData", + "traits": { + "smithy.api#documentation": "\n\nParameter that gives information about data that is missing over all the sensors in the input data.\n\n
", + "smithy.api#required": {} + } + }, + "InvalidSensorData": { + "target": "com.amazonaws.lookoutequipment#InvalidSensorData", + "traits": { + "smithy.api#documentation": "\n\nParameter that gives information about data that is invalid over all the sensors in the input data.\n\n
", + "smithy.api#required": {} + } + }, + "UnsupportedTimestamps": { + "target": "com.amazonaws.lookoutequipment#UnsupportedTimestamps", + "traits": { + "smithy.api#documentation": "\n\nParameter that gives information about unsupported timestamps in the input data.\n\n
", + "smithy.api#required": {} + } + }, + "DuplicateTimestamps": { + "target": "com.amazonaws.lookoutequipment#DuplicateTimestamps", + "traits": { + "smithy.api#documentation": "\n\nParameter that gives information about duplicate timestamps in the input data.\n\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nDataQualitySummary gives aggregated statistics over all the sensors about a completed ingestion job. It primarily gives more information about statistics over different incorrect data like MissingCompleteSensorData, MissingSensorData, UnsupportedDateFormats, InsufficientSensorData, DuplicateTimeStamps.\n\n
" + } + }, + "com.amazonaws.lookoutequipment#DataSizeInBytes": { + "type": "long", + "traits": { + "smithy.api#box": {}, + "smithy.api#range": { + "min": 0 + } + } + }, "com.amazonaws.lookoutequipment#DataUploadFrequency": { "type": "string", "traits": { @@ -886,7 +996,7 @@ } ], "traits": { - "smithy.api#documentation": "Provides information on a specific data ingestion job such as creation time, dataset\n ARN, status, and so on.
" + "smithy.api#documentation": "Provides information on a specific data ingestion job such as creation time, dataset\n ARN, and status.
" } }, "com.amazonaws.lookoutequipment#DescribeDataIngestionJobRequest": { @@ -945,6 +1055,39 @@ "traits": { "smithy.api#documentation": "Specifies the reason for failure when a data ingestion job has failed.
" } + }, + "DataQualitySummary": { + "target": "com.amazonaws.lookoutequipment#DataQualitySummary", + "traits": { + "smithy.api#documentation": "\nGives statistics about a completed ingestion job. These statistics primarily relate to quantifying incorrect data such as MissingCompleteSensorData, MissingSensorData, UnsupportedDateFormats, InsufficientSensorData, and DuplicateTimeStamps.\n
" + } + }, + "IngestedFilesSummary": { + "target": "com.amazonaws.lookoutequipment#IngestedFilesSummary" + }, + "StatusDetail": { + "target": "com.amazonaws.lookoutequipment#BoundedLengthString", + "traits": { + "smithy.api#documentation": "\n Provides details about status of the ingestion job that is currently in progress.\n
" + } + }, + "IngestedDataSize": { + "target": "com.amazonaws.lookoutequipment#DataSizeInBytes", + "traits": { + "smithy.api#documentation": "\n Indicates the size of the ingested dataset.\n
" + } + }, + "DataStartTime": { + "target": "com.amazonaws.lookoutequipment#Timestamp", + "traits": { + "smithy.api#documentation": "\n Indicates the earliest timestamp corresponding to data that was successfully ingested during this specific ingestion job.\n
" + } + }, + "DataEndTime": { + "target": "com.amazonaws.lookoutequipment#Timestamp", + "traits": { + "smithy.api#documentation": "\n Indicates the latest timestamp corresponding to data that was successfully ingested during this specific ingestion job.\n
" + } } } }, @@ -974,7 +1117,7 @@ } ], "traits": { - "smithy.api#documentation": "Provides a JSON description of the data that is in each time series dataset, including names, column names, and data types.
" + "smithy.api#documentation": "Provides a JSON description of the data in each time series dataset, including names, column names, and data types.
" } }, "com.amazonaws.lookoutequipment#DescribeDatasetRequest": { @@ -1039,6 +1182,36 @@ "traits": { "smithy.api#documentation": "Specifies the S3 location configuration for the data input for the data ingestion job.
" } + }, + "DataQualitySummary": { + "target": "com.amazonaws.lookoutequipment#DataQualitySummary", + "traits": { + "smithy.api#documentation": "\nGives statistics associated with the given dataset for the latest successful associated ingestion job id. These statistics primarily relate to quantifying incorrect data such as MissingCompleteSensorData, MissingSensorData, UnsupportedDateFormats, InsufficientSensorData, and DuplicateTimeStamps.\n
" + } + }, + "IngestedFilesSummary": { + "target": "com.amazonaws.lookoutequipment#IngestedFilesSummary", + "traits": { + "smithy.api#documentation": "\nIngestedFilesSummary associated with the given dataset for the latest successful associated ingestion job id.\n
" + } + }, + "RoleArn": { + "target": "com.amazonaws.lookoutequipment#IamRoleArn", + "traits": { + "smithy.api#documentation": "\n The Amazon Resource Name (ARN) of the IAM role that you are using for this the data ingestion job. \n
" + } + }, + "DataStartTime": { + "target": "com.amazonaws.lookoutequipment#Timestamp", + "traits": { + "smithy.api#documentation": "\n Indicates the earliest timestamp corresponding to data that was successfully ingested during the most recent ingestion of this particular dataset.\n
" + } + }, + "DataEndTime": { + "target": "com.amazonaws.lookoutequipment#Timestamp", + "traits": { + "smithy.api#documentation": "\n Indicates the latest timestamp corresponding to data that was successfully ingested during the most recent ingestion of this particular dataset.\n
" + } } } }, @@ -1338,12 +1511,30 @@ } } }, + "com.amazonaws.lookoutequipment#DuplicateTimestamps": { + "type": "structure", + "members": { + "TotalNumberOfDuplicateTimestamps": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\nIndicates the total number of duplicate timestamps.\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nEntity that comprises information abount duplicate timestamps in the dataset.\n\n
" + } + }, "com.amazonaws.lookoutequipment#FileNameTimestampFormat": { "type": "string", "traits": { "smithy.api#pattern": "^EPOCH|yyyy-MM-dd-HH-mm-ss|yyyyMMddHHmmss$" } }, + "com.amazonaws.lookoutequipment#Float": { + "type": "float" + }, "com.amazonaws.lookoutequipment#IamRoleArn": { "type": "string", "traits": { @@ -1443,7 +1634,7 @@ "DataOutputConfiguration": { "target": "com.amazonaws.lookoutequipment#InferenceOutputConfiguration", "traits": { - "smithy.api#documentation": "Specifies configuration information for the output results from for the inference\n execution, including the output S3 location.
" + "smithy.api#documentation": "Specifies configuration information for the output results from for the inference\n execution, including the output Amazon S3 location.
" } }, "CustomerResultObject": { @@ -1475,13 +1666,13 @@ "S3InputConfiguration": { "target": "com.amazonaws.lookoutequipment#InferenceS3InputConfiguration", "traits": { - "smithy.api#documentation": "Specifies configuration information for the input data for the inference, including S3\n location of input data..
" + "smithy.api#documentation": "Specifies configuration information for the input data for the inference, including Amazon S3\n location of input data.
" } }, "InputTimeZoneOffset": { "target": "com.amazonaws.lookoutequipment#TimeZoneOffset", "traits": { - "smithy.api#documentation": "Indicates the difference between your time zone and Greenwich Mean Time (GMT).
" + "smithy.api#documentation": "Indicates the difference between your time zone and Coordinated Universal Time (UTC).
" } }, "InferenceInputNameConfiguration": { @@ -1492,7 +1683,7 @@ } }, "traits": { - "smithy.api#documentation": "Specifies configuration information for the input data for the inference, including S3\n location of input data..
" + "smithy.api#documentation": "Specifies configuration information for the input data for the inference, including Amazon S3\n location of input data..
" } }, "com.amazonaws.lookoutequipment#InferenceInputNameConfiguration": { @@ -1687,6 +1878,34 @@ "smithy.api#documentation": "Contains information about the specific inference scheduler, including data delay\n offset, model name and ARN, status, and so on.
" } }, + "com.amazonaws.lookoutequipment#IngestedFilesSummary": { + "type": "structure", + "members": { + "TotalNumberOfFiles": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "Indicates the total number of files that were submitted for ingestion.
", + "smithy.api#required": {} + } + }, + "IngestedNumberOfFiles": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "Indicates the number of files that were successfully ingested.
", + "smithy.api#required": {} + } + }, + "DiscardedFiles": { + "target": "com.amazonaws.lookoutequipment#ListOfDiscardedFiles", + "traits": { + "smithy.api#documentation": "Indicates the number of files that were discarded. A file could be discarded because its format is invalid (for example, a jpg or pdf) or not readable.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Gives statistics about how many files have been ingested, and which files have not been ingested, for a particular ingestion job.
" + } + }, "com.amazonaws.lookoutequipment#IngestionInputConfiguration": { "type": "structure", "members": { @@ -1746,12 +1965,46 @@ "traits": { "smithy.api#documentation": "The prefix for the S3 location being used for the input data for the data ingestion.\n
" } + }, + "KeyPattern": { + "target": "com.amazonaws.lookoutequipment#KeyPattern", + "traits": { + "smithy.api#documentation": "\nPattern for matching the Amazon S3 files which will be used for ingestion.\nIf no KeyPattern is provided, we will use the default hierarchy file structure, which is same as KeyPattern {prefix}/{component_name}/*\n
" + } } }, "traits": { "smithy.api#documentation": "Specifies S3 configuration information for the input data for the data ingestion job.\n
" } }, + "com.amazonaws.lookoutequipment#InsufficientSensorData": { + "type": "structure", + "members": { + "MissingCompleteSensorData": { + "target": "com.amazonaws.lookoutequipment#MissingCompleteSensorData", + "traits": { + "smithy.api#documentation": "\n\nParameter that describes the total number of sensors that have data completely missing for it.\n\n
", + "smithy.api#required": {} + } + }, + "SensorsWithShortDateRange": { + "target": "com.amazonaws.lookoutequipment#SensorsWithShortDateRange", + "traits": { + "smithy.api#documentation": "\n\nParameter that describes the total number of sensors that have a short date range of less than 90 days of data overall.\n\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nEntity that comprises aggregated information on sensors having insufficient data.\n\n
" + } + }, + "com.amazonaws.lookoutequipment#Integer": { + "type": "integer", + "traits": { + "smithy.api#box": {} + } + }, "com.amazonaws.lookoutequipment#InternalServerException": { "type": "structure", "members": { @@ -1768,6 +2021,37 @@ "smithy.api#httpError": 500 } }, + "com.amazonaws.lookoutequipment#InvalidSensorData": { + "type": "structure", + "members": { + "AffectedSensorCount": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\n\nIndicates the number of sensors that have at least some invalid values.\n\n
", + "smithy.api#required": {} + } + }, + "TotalNumberOfInvalidValues": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\n\nIndicates the total number of invalid values across all the sensors.\n\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nEntity that comprises aggregated information on sensors having insufficient data.\n\n
" + } + }, + "com.amazonaws.lookoutequipment#KeyPattern": { + "type": "string", + "traits": { + "smithy.api#length": { + "min": 1, + "max": 2048 + } + } + }, "com.amazonaws.lookoutequipment#KmsKeyArn": { "type": "string", "traits": { @@ -1814,6 +2098,33 @@ "smithy.api#documentation": "The location information (prefix and bucket name) for the s3 location being used for\n label data.
" } }, + "com.amazonaws.lookoutequipment#LargeTimestampGaps": { + "type": "structure", + "members": { + "Status": { + "target": "com.amazonaws.lookoutequipment#StatisticalIssueStatus", + "traits": { + "smithy.api#documentation": "\nIndicates whether there is a potential data issue related to large gaps in timestamps.\n
", + "smithy.api#required": {} + } + }, + "NumberOfLargeTimestampGaps": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\nIndicates the number of large timestamp gaps, if there are any.\n
" + } + }, + "MaxTimestampGapInDays": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\nIndicates the size of the largest timestamp gap, in days.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\nEntity that comprises information on large gaps between consecutive timestamps in data.\n
" + } + }, "com.amazonaws.lookoutequipment#ListDataIngestionJobs": { "type": "operation", "input": { @@ -2215,6 +2526,98 @@ } } }, + "com.amazonaws.lookoutequipment#ListOfDiscardedFiles": { + "type": "list", + "member": { + "target": "com.amazonaws.lookoutequipment#S3Object" + }, + "traits": { + "smithy.api#length": { + "min": 0 + } + } + }, + "com.amazonaws.lookoutequipment#ListSensorStatistics": { + "type": "operation", + "input": { + "target": "com.amazonaws.lookoutequipment#ListSensorStatisticsRequest" + }, + "output": { + "target": "com.amazonaws.lookoutequipment#ListSensorStatisticsResponse" + }, + "errors": [ + { + "target": "com.amazonaws.lookoutequipment#AccessDeniedException" + }, + { + "target": "com.amazonaws.lookoutequipment#InternalServerException" + }, + { + "target": "com.amazonaws.lookoutequipment#ResourceNotFoundException" + }, + { + "target": "com.amazonaws.lookoutequipment#ThrottlingException" + }, + { + "target": "com.amazonaws.lookoutequipment#ValidationException" + } + ], + "traits": { + "smithy.api#documentation": "\nLists statistics about the data collected for each of the sensors that have been successfully ingested in the particular dataset. Can also be used to retreive Sensor Statistics for a previous ingestion job.\n
", + "smithy.api#paginated": { + "inputToken": "NextToken", + "outputToken": "NextToken", + "pageSize": "MaxResults" + } + } + }, + "com.amazonaws.lookoutequipment#ListSensorStatisticsRequest": { + "type": "structure", + "members": { + "DatasetName": { + "target": "com.amazonaws.lookoutequipment#DatasetName", + "traits": { + "smithy.api#documentation": "\nThe name of the dataset associated with the list of Sensor Statistics.\n
", + "smithy.api#required": {} + } + }, + "IngestionJobId": { + "target": "com.amazonaws.lookoutequipment#IngestionJobId", + "traits": { + "smithy.api#documentation": "\nThe ingestion job id associated with the list of Sensor Statistics. To get sensor statistics for a particular ingestion job id, both dataset name and ingestion job id must be submitted as inputs.\n
" + } + }, + "MaxResults": { + "target": "com.amazonaws.lookoutequipment#MaxResults", + "traits": { + "smithy.api#documentation": "\nSpecifies the maximum number of sensors for which to retrieve statistics.\n
" + } + }, + "NextToken": { + "target": "com.amazonaws.lookoutequipment#NextToken", + "traits": { + "smithy.api#documentation": "\nAn opaque pagination token indicating where to continue the listing of sensor statistics.\n
" + } + } + } + }, + "com.amazonaws.lookoutequipment#ListSensorStatisticsResponse": { + "type": "structure", + "members": { + "SensorStatisticsSummaries": { + "target": "com.amazonaws.lookoutequipment#SensorStatisticsSummaries", + "traits": { + "smithy.api#documentation": "\nProvides ingestion-based statistics regarding the specified sensor with respect to various validation types, such as whether data exists, the number and percentage of missing values, and the number and percentage of duplicate timestamps.\n
" + } + }, + "NextToken": { + "target": "com.amazonaws.lookoutequipment#NextToken", + "traits": { + "smithy.api#documentation": "\nAn opaque pagination token indicating where to continue the listing of sensor statistics.\n
" + } + } + } + }, "com.amazonaws.lookoutequipment#ListTagsForResource": { "type": "operation", "input": { @@ -2277,6 +2680,43 @@ } } }, + "com.amazonaws.lookoutequipment#MissingCompleteSensorData": { + "type": "structure", + "members": { + "AffectedSensorCount": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\n\nIndicates the number of sensors that have data missing completely.\n\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nEntity that comprises information on sensors that have sensor data completely missing.\n\n
" + } + }, + "com.amazonaws.lookoutequipment#MissingSensorData": { + "type": "structure", + "members": { + "AffectedSensorCount": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\n\nIndicates the number of sensors that have atleast some data missing.\n\n
", + "smithy.api#required": {} + } + }, + "TotalNumberOfMissingValues": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\n\nIndicates the total number of missing values across all the sensors.\n\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nEntity that comprises aggregated information on sensors having missing data.\n\n
" + } + }, "com.amazonaws.lookoutequipment#ModelArn": { "type": "string", "traits": { @@ -2366,6 +2806,61 @@ "smithy.api#documentation": "Provides information about the specified ML model, including dataset and model names and\n ARNs, as well as status.
" } }, + "com.amazonaws.lookoutequipment#MonotonicValues": { + "type": "structure", + "members": { + "Status": { + "target": "com.amazonaws.lookoutequipment#StatisticalIssueStatus", + "traits": { + "smithy.api#documentation": "\nIndicates whether there is a potential data issue related to having monotonic values.\n
", + "smithy.api#required": {} + } + }, + "Monotonicity": { + "target": "com.amazonaws.lookoutequipment#Monotonicity", + "traits": { + "smithy.api#documentation": "\nIndicates the monotonicity of values. Can be INCREASING, DECREASING, or STATIC.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\nEntity that comprises information on monotonic values in the data.\n
" + } + }, + "com.amazonaws.lookoutequipment#Monotonicity": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "DECREASING", + "name": "DECREASING" + }, + { + "value": "INCREASING", + "name": "INCREASING" + }, + { + "value": "STATIC", + "name": "STATIC" + } + ] + } + }, + "com.amazonaws.lookoutequipment#MultipleOperatingModes": { + "type": "structure", + "members": { + "Status": { + "target": "com.amazonaws.lookoutequipment#StatisticalIssueStatus", + "traits": { + "smithy.api#documentation": "\n Indicates whether there is a potential data issue related to having multiple operating modes.\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\nEntity that comprises information on operating modes in data.\n
" + } + }, "com.amazonaws.lookoutequipment#NameOrArn": { "type": "string", "traits": { @@ -2463,6 +2958,123 @@ "smithy.api#pattern": "^(^$)|([\\P{M}\\p{M}]{1,1023}/$)$" } }, + "com.amazonaws.lookoutequipment#SensorName": { + "type": "string", + "traits": { + "smithy.api#length": { + "min": 1, + "max": 200 + }, + "smithy.api#pattern": "^[0-9a-zA-Z:#$.\\-_]{1,200}$" + } + }, + "com.amazonaws.lookoutequipment#SensorStatisticsSummaries": { + "type": "list", + "member": { + "target": "com.amazonaws.lookoutequipment#SensorStatisticsSummary" + } + }, + "com.amazonaws.lookoutequipment#SensorStatisticsSummary": { + "type": "structure", + "members": { + "ComponentName": { + "target": "com.amazonaws.lookoutequipment#ComponentName", + "traits": { + "smithy.api#documentation": "\n\nName of the component to which the particular sensor belongs for which the statistics belong to.\n\n
" + } + }, + "SensorName": { + "target": "com.amazonaws.lookoutequipment#SensorName", + "traits": { + "smithy.api#documentation": "\n\nName of the sensor that the statistics belong to.\n\n
" + } + }, + "DataExists": { + "target": "com.amazonaws.lookoutequipment#Boolean", + "traits": { + "smithy.api#documentation": "\n\nParameter that indicates whether data exists for the sensor that the statistics belong to.\n\n
" + } + }, + "MissingValues": { + "target": "com.amazonaws.lookoutequipment#CountPercent", + "traits": { + "smithy.api#documentation": "\n\nParameter that describes the total number of, and percentage of, values that are missing for the sensor that the statistics belong to.\n\n
" + } + }, + "InvalidValues": { + "target": "com.amazonaws.lookoutequipment#CountPercent", + "traits": { + "smithy.api#documentation": "\n\nParameter that describes the total number of, and percentage of, values that are invalid for the sensor that the statistics belong to.\n\n
" + } + }, + "InvalidDateEntries": { + "target": "com.amazonaws.lookoutequipment#CountPercent", + "traits": { + "smithy.api#documentation": "\n\nParameter that describes the total number of invalid date entries associated with the sensor that the statistics belong to.\n\n
" + } + }, + "DuplicateTimestamps": { + "target": "com.amazonaws.lookoutequipment#CountPercent", + "traits": { + "smithy.api#documentation": "\nParameter that describes the total number of duplicate timestamp records associated with the sensor that the statistics belong to.\n
" + } + }, + "CategoricalValues": { + "target": "com.amazonaws.lookoutequipment#CategoricalValues", + "traits": { + "smithy.api#documentation": "\nParameter that describes potential risk about whether data associated with the sensor is categorical.\n
" + } + }, + "MultipleOperatingModes": { + "target": "com.amazonaws.lookoutequipment#MultipleOperatingModes", + "traits": { + "smithy.api#documentation": "\nParameter that describes potential risk about whether data associated with the sensor has more than one operating mode.\n
" + } + }, + "LargeTimestampGaps": { + "target": "com.amazonaws.lookoutequipment#LargeTimestampGaps", + "traits": { + "smithy.api#documentation": "\nParameter that describes potential risk about whether data associated with the sensor contains one or more large gaps between consecutive timestamps.\n
" + } + }, + "MonotonicValues": { + "target": "com.amazonaws.lookoutequipment#MonotonicValues", + "traits": { + "smithy.api#documentation": "\nParameter that describes potential risk about whether data associated with the sensor is mostly monotonic.\n
" + } + }, + "DataStartTime": { + "target": "com.amazonaws.lookoutequipment#Timestamp", + "traits": { + "smithy.api#documentation": "\nIndicates the time reference to indicate the beginning of valid data associated with the sensor that the statistics belong to.\n
" + } + }, + "DataEndTime": { + "target": "com.amazonaws.lookoutequipment#Timestamp", + "traits": { + "smithy.api#documentation": "\nIndicates the time reference to indicate the end of valid data associated with the sensor that the statistics belong to.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nSummary of ingestion statistics like whether data exists, number of missing values, number of invalid values and so on related to the particular sensor.\n\n
" + } + }, + "com.amazonaws.lookoutequipment#SensorsWithShortDateRange": { + "type": "structure", + "members": { + "AffectedSensorCount": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\n\nIndicates the number of sensors that have less than 90 days of data.\n\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nEntity that comprises information on sensors that have shorter date range.\n\n
" + } + }, "com.amazonaws.lookoutequipment#ServiceQuotaExceededException": { "type": "structure", "members": { @@ -2644,6 +3256,21 @@ } } }, + "com.amazonaws.lookoutequipment#StatisticalIssueStatus": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "POTENTIAL_ISSUE_DETECTED", + "name": "POTENTIAL_ISSUE_DETECTED" + }, + { + "value": "NO_ISSUE_DETECTED", + "name": "NO_ISSUE_DETECTED" + } + ] + } + }, "com.amazonaws.lookoutequipment#StopInferenceScheduler": { "type": "operation", "input": { @@ -2940,6 +3567,21 @@ "com.amazonaws.lookoutequipment#Timestamp": { "type": "timestamp" }, + "com.amazonaws.lookoutequipment#UnsupportedTimestamps": { + "type": "structure", + "members": { + "TotalNumberOfUnsupportedTimestamps": { + "target": "com.amazonaws.lookoutequipment#Integer", + "traits": { + "smithy.api#documentation": "\n\nIndicates the total number of unsupported timestamps across the ingested data.\n\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n\nEntity that comprises information abount unsupported timestamps in the dataset.\n\n
" + } + }, "com.amazonaws.lookoutequipment#UntagResource": { "type": "operation", "input": { diff --git a/aws/sdk/aws-models/lookoutmetrics.json b/aws/sdk/aws-models/lookoutmetrics.json index f16c167faf..adcc54b210 100644 --- a/aws/sdk/aws-models/lookoutmetrics.json +++ b/aws/sdk/aws-models/lookoutmetrics.json @@ -728,6 +728,84 @@ "smithy.api#pattern": "^arn:([a-z\\d-]+):.*:.*:.*:.+$" } }, + "com.amazonaws.lookoutmetrics#AttributeValue": { + "type": "structure", + "members": { + "S": { + "target": "com.amazonaws.lookoutmetrics#StringAttributeValue", + "traits": { + "smithy.api#documentation": "A string.
" + } + }, + "N": { + "target": "com.amazonaws.lookoutmetrics#NumberAttributeValue", + "traits": { + "smithy.api#documentation": "A number.
" + } + }, + "B": { + "target": "com.amazonaws.lookoutmetrics#BinaryAttributeValue", + "traits": { + "smithy.api#documentation": "A binary value.
" + } + }, + "SS": { + "target": "com.amazonaws.lookoutmetrics#StringListAttributeValue", + "traits": { + "smithy.api#documentation": "A list of strings.
" + } + }, + "NS": { + "target": "com.amazonaws.lookoutmetrics#NumberListAttributeValue", + "traits": { + "smithy.api#documentation": "A list of numbers.
" + } + }, + "BS": { + "target": "com.amazonaws.lookoutmetrics#BinaryListAttributeValue", + "traits": { + "smithy.api#documentation": "A list of binary values.
" + } + } + }, + "traits": { + "smithy.api#documentation": "An attribute value.
" + } + }, + "com.amazonaws.lookoutmetrics#AutoDetectionMetricSource": { + "type": "structure", + "members": { + "S3SourceConfig": { + "target": "com.amazonaws.lookoutmetrics#AutoDetectionS3SourceConfig", + "traits": { + "smithy.api#documentation": "The source's source config.
" + } + } + }, + "traits": { + "smithy.api#documentation": "An auto detection metric source.
" + } + }, + "com.amazonaws.lookoutmetrics#AutoDetectionS3SourceConfig": { + "type": "structure", + "members": { + "TemplatedPathList": { + "target": "com.amazonaws.lookoutmetrics#TemplatedPathList", + "traits": { + "smithy.api#documentation": "The config's templated path list.
" + } + }, + "HistoricalDataPathList": { + "target": "com.amazonaws.lookoutmetrics#HistoricalDataPathList", + "traits": { + "smithy.api#documentation": "The config's historical data path list.
" + } + } + }, + "traits": { + "smithy.api#documentation": "An auto detection source config.
" + } + }, "com.amazonaws.lookoutmetrics#BackTestAnomalyDetector": { "type": "operation", "input": { @@ -778,6 +856,15 @@ "type": "structure", "members": {} }, + "com.amazonaws.lookoutmetrics#BinaryAttributeValue": { + "type": "string" + }, + "com.amazonaws.lookoutmetrics#BinaryListAttributeValue": { + "type": "list", + "member": { + "target": "com.amazonaws.lookoutmetrics#BinaryAttributeValue" + } + }, "com.amazonaws.lookoutmetrics#Boolean": { "type": "boolean", "traits": { @@ -833,6 +920,25 @@ "smithy.api#pattern": "^[a-zA-Z0-9][a-zA-Z0-9\\-_]*$" } }, + "com.amazonaws.lookoutmetrics#Confidence": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "HIGH", + "name": "HIGH" + }, + { + "value": "LOW", + "name": "LOW" + }, + { + "value": "NONE", + "name": "NONE" + } + ] + } + }, "com.amazonaws.lookoutmetrics#ConflictException": { "type": "structure", "members": { @@ -1810,6 +1916,234 @@ } } }, + "com.amazonaws.lookoutmetrics#DetectMetricSetConfig": { + "type": "operation", + "input": { + "target": "com.amazonaws.lookoutmetrics#DetectMetricSetConfigRequest" + }, + "output": { + "target": "com.amazonaws.lookoutmetrics#DetectMetricSetConfigResponse" + }, + "errors": [ + { + "target": "com.amazonaws.lookoutmetrics#AccessDeniedException" + }, + { + "target": "com.amazonaws.lookoutmetrics#InternalServerException" + }, + { + "target": "com.amazonaws.lookoutmetrics#ResourceNotFoundException" + }, + { + "target": "com.amazonaws.lookoutmetrics#TooManyRequestsException" + }, + { + "target": "com.amazonaws.lookoutmetrics#ValidationException" + } + ], + "traits": { + "smithy.api#documentation": "Detects an Amazon S3 dataset's file format, interval, and offset.
", + "smithy.api#http": { + "method": "POST", + "uri": "/DetectMetricSetConfig", + "code": 200 + } + } + }, + "com.amazonaws.lookoutmetrics#DetectMetricSetConfigRequest": { + "type": "structure", + "members": { + "AnomalyDetectorArn": { + "target": "com.amazonaws.lookoutmetrics#Arn", + "traits": { + "smithy.api#documentation": "An anomaly detector ARN.
", + "smithy.api#required": {} + } + }, + "AutoDetectionMetricSource": { + "target": "com.amazonaws.lookoutmetrics#AutoDetectionMetricSource", + "traits": { + "smithy.api#documentation": "A data source.
", + "smithy.api#required": {} + } + } + } + }, + "com.amazonaws.lookoutmetrics#DetectMetricSetConfigResponse": { + "type": "structure", + "members": { + "DetectedMetricSetConfig": { + "target": "com.amazonaws.lookoutmetrics#DetectedMetricSetConfig", + "traits": { + "smithy.api#documentation": "The inferred dataset configuration for the datasource.
" + } + } + } + }, + "com.amazonaws.lookoutmetrics#DetectedCsvFormatDescriptor": { + "type": "structure", + "members": { + "FileCompression": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The format's file compression.
" + } + }, + "Charset": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The format's charset.
" + } + }, + "ContainsHeader": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "Whether the format includes a header.
" + } + }, + "Delimiter": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The format's delimiter.
" + } + }, + "HeaderList": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The format's header list.
" + } + }, + "QuoteSymbol": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The format's quote symbol.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Properties of an inferred CSV format.
" + } + }, + "com.amazonaws.lookoutmetrics#DetectedField": { + "type": "structure", + "members": { + "Value": { + "target": "com.amazonaws.lookoutmetrics#AttributeValue", + "traits": { + "smithy.api#documentation": "The field's value.
" + } + }, + "Confidence": { + "target": "com.amazonaws.lookoutmetrics#Confidence", + "traits": { + "smithy.api#documentation": "The field's confidence.
" + } + }, + "Message": { + "target": "com.amazonaws.lookoutmetrics#Message", + "traits": { + "smithy.api#documentation": "The field's message.
" + } + } + }, + "traits": { + "smithy.api#documentation": "An inferred field.
" + } + }, + "com.amazonaws.lookoutmetrics#DetectedFileFormatDescriptor": { + "type": "structure", + "members": { + "CsvFormatDescriptor": { + "target": "com.amazonaws.lookoutmetrics#DetectedCsvFormatDescriptor", + "traits": { + "smithy.api#documentation": "Details about a CSV format.
" + } + }, + "JsonFormatDescriptor": { + "target": "com.amazonaws.lookoutmetrics#DetectedJsonFormatDescriptor", + "traits": { + "smithy.api#documentation": "Details about a JSON format.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Properties of an inferred data format.
" + } + }, + "com.amazonaws.lookoutmetrics#DetectedJsonFormatDescriptor": { + "type": "structure", + "members": { + "FileCompression": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The format's file compression.
" + } + }, + "Charset": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The format's character set.
" + } + } + }, + "traits": { + "smithy.api#documentation": "A detected JSON format descriptor.
" + } + }, + "com.amazonaws.lookoutmetrics#DetectedMetricSetConfig": { + "type": "structure", + "members": { + "Offset": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The dataset's offset.
" + } + }, + "MetricSetFrequency": { + "target": "com.amazonaws.lookoutmetrics#DetectedField", + "traits": { + "smithy.api#documentation": "The dataset's interval.
" + } + }, + "MetricSource": { + "target": "com.amazonaws.lookoutmetrics#DetectedMetricSource", + "traits": { + "smithy.api#documentation": "The dataset's data source.
" + } + } + }, + "traits": { + "smithy.api#documentation": "An inferred dataset configuration.
" + } + }, + "com.amazonaws.lookoutmetrics#DetectedMetricSource": { + "type": "structure", + "members": { + "S3SourceConfig": { + "target": "com.amazonaws.lookoutmetrics#DetectedS3SourceConfig", + "traits": { + "smithy.api#documentation": "The data source's source configuration.
" + } + } + }, + "traits": { + "smithy.api#documentation": "An inferred data source.
" + } + }, + "com.amazonaws.lookoutmetrics#DetectedS3SourceConfig": { + "type": "structure", + "members": { + "FileFormatDescriptor": { + "target": "com.amazonaws.lookoutmetrics#DetectedFileFormatDescriptor", + "traits": { + "smithy.api#documentation": "The source's file format descriptor.
" + } + } + }, + "traits": { + "smithy.api#documentation": "An inferred source configuration.
" + } + }, "com.amazonaws.lookoutmetrics#DimensionContribution": { "type": "structure", "members": { @@ -3046,6 +3380,9 @@ { "target": "com.amazonaws.lookoutmetrics#DescribeMetricSet" }, + { + "target": "com.amazonaws.lookoutmetrics#DetectMetricSetConfig" + }, { "target": "com.amazonaws.lookoutmetrics#GetAnomalyGroup" }, @@ -3339,6 +3676,15 @@ "smithy.api#pattern": "\\S" } }, + "com.amazonaws.lookoutmetrics#NumberAttributeValue": { + "type": "string" + }, + "com.amazonaws.lookoutmetrics#NumberListAttributeValue": { + "type": "list", + "member": { + "target": "com.amazonaws.lookoutmetrics#NumberAttributeValue" + } + }, "com.amazonaws.lookoutmetrics#Offset": { "type": "integer", "traits": { @@ -3808,6 +4154,15 @@ "smithy.api#httpError": 402 } }, + "com.amazonaws.lookoutmetrics#StringAttributeValue": { + "type": "string" + }, + "com.amazonaws.lookoutmetrics#StringListAttributeValue": { + "type": "list", + "member": { + "target": "com.amazonaws.lookoutmetrics#StringAttributeValue" + } + }, "com.amazonaws.lookoutmetrics#SubnetId": { "type": "string", "traits": { diff --git a/aws/sdk/aws-models/macie2.json b/aws/sdk/aws-models/macie2.json index a5804c15f6..689b53c614 100644 --- a/aws/sdk/aws-models/macie2.json +++ b/aws/sdk/aws-models/macie2.json @@ -1110,6 +1110,13 @@ "smithy.api#jsonName": "jobId" } }, + "originType": { + "target": "com.amazonaws.macie2#OriginType", + "traits": { + "smithy.api#documentation": "Specifies how Amazon Macie found the sensitive data that produced the finding: SENSITIVE_DATA_DISCOVERY_JOB, for a classification job.
", + "smithy.api#jsonName": "originType" + } + }, "result": { "target": "com.amazonaws.macie2#ClassificationResult", "traits": { @@ -1119,7 +1126,7 @@ } }, "traits": { - "smithy.api#documentation": "Provides information about a sensitive data finding, including the classification job that produced the finding.
" + "smithy.api#documentation": "Provides information about a sensitive data finding and the details of the finding.
" } }, "com.amazonaws.macie2#ClassificationExportConfiguration": { @@ -4423,7 +4430,7 @@ "findingIds": { "target": "com.amazonaws.macie2#__listOf__string", "traits": { - "smithy.api#documentation": "An array of strings that lists the unique identifiers for the findings to retrieve.
", + "smithy.api#documentation": "An array of strings that lists the unique identifiers for the findings to retrieve. You can specify as many as 50 unique identifiers in this array.
", "smithy.api#jsonName": "findingIds", "smithy.api#required": {} } @@ -5262,7 +5269,7 @@ "com.amazonaws.macie2#JobComparator": { "type": "string", "traits": { - "smithy.api#documentation": "The operator to use in a condition. Valid values are:
", + "smithy.api#documentation": "The operator to use in a condition. Depending on the type of condition, possible values are:
", "smithy.api#enum": [ { "value": "EQ", @@ -7009,6 +7016,18 @@ ] } }, + "com.amazonaws.macie2#OriginType": { + "type": "string", + "traits": { + "smithy.api#documentation": "Specifies how Amazon Macie found the sensitive data that produced a finding. The only possible value is:
", + "smithy.api#enum": [ + { + "value": "SENSITIVE_DATA_DISCOVERY_JOB", + "name": "SENSITIVE_DATA_DISCOVERY_JOB" + } + ] + } + }, "com.amazonaws.macie2#Page": { "type": "structure", "members": { @@ -7231,6 +7250,9 @@ "type": "list", "member": { "target": "com.amazonaws.macie2#Range" + }, + "traits": { + "smithy.api#documentation": "Specifies the locations of occurrences of sensitive data in a non-binary text file.
" } }, "com.amazonaws.macie2#Record": { @@ -7259,6 +7281,9 @@ "type": "list", "member": { "target": "com.amazonaws.macie2#Record" + }, + "traits": { + "smithy.api#documentation": "Specifies the locations of occurrences of sensitive data in an Apache Avro object container or a structured data file.
" } }, "com.amazonaws.macie2#RelationshipStatus": { @@ -8909,7 +8934,7 @@ "tagKeys": { "target": "com.amazonaws.macie2#__listOf__string", "traits": { - "smithy.api#documentation": "The key of the tag to remove from the resource. To remove multiple tags, append the tagKeys parameter and argument for each additional tag to remove, separated by an ampersand (&).
", + "smithy.api#documentation": "One or more tags (keys) to remove from the resource. In an HTTP request to remove multiple tags, append the tagKeys parameter and argument for each tag to remove, and separate them with an ampersand (&).
", "smithy.api#httpQuery": "tagKeys", "smithy.api#required": {} } diff --git a/aws/sdk/aws-models/mediatailor.json b/aws/sdk/aws-models/mediatailor.json index 48783982f2..e6301ccd19 100644 --- a/aws/sdk/aws-models/mediatailor.json +++ b/aws/sdk/aws-models/mediatailor.json @@ -234,7 +234,7 @@ "AdSegmentUrlPrefix": { "target": "com.amazonaws.mediatailor#__string", "traits": { - "smithy.api#documentation": "A non-default content delivery network (CDN) to serve ad segments. By default, AWS Elemental MediaTailor uses Amazon CloudFront with default cache settings as its CDN for ad segments. To set up an alternate CDN, create a rule in your CDN for the origin ads.mediatailor.<region>.amazonaws.com. Then specify the rule's name in this AdSegmentUrlPrefix. When AWS Elemental MediaTailor serves a manifest, it reports your CDN as the source for ad segments.
" + "smithy.api#documentation": "A non-default content delivery network (CDN) to serve ad segments. By default, AWS Elemental MediaTailor uses Amazon CloudFront with default cache settings as its CDN for ad segments. To set up an alternate CDN, create a rule in your CDN for the origin ads.mediatailor.<region>.amazonaws.com. Then specify the rule's name in this AdSegmentUrlPrefix. When AWS Elemental MediaTailor serves a manifest, it reports your CDN as the source for ad segments.
" } }, "ContentSegmentUrlPrefix": { @@ -310,6 +310,13 @@ "smithy.api#documentation": "The tags to assign to the channel.
", "smithy.api#jsonName": "tags" } + }, + "Tier": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The tier for this channel. STANDARD tier channels can contain live programs.
", + "smithy.api#required": {} + } } }, "traits": { @@ -477,6 +484,12 @@ "smithy.api#documentation": "The tags to assign to the channel.
", "smithy.api#jsonName": "tags" } + }, + "Tier": { + "target": "com.amazonaws.mediatailor#Tier", + "traits": { + "smithy.api#documentation": "The tier of the channel.
" + } } } }, @@ -537,6 +550,112 @@ "smithy.api#documentation": "The tags assigned to the channel.
", "smithy.api#jsonName": "tags" } + }, + "Tier": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The channel's tier.
" + } + } + } + }, + "com.amazonaws.mediatailor#CreateLiveSource": { + "type": "operation", + "input": { + "target": "com.amazonaws.mediatailor#CreateLiveSourceRequest" + }, + "output": { + "target": "com.amazonaws.mediatailor#CreateLiveSourceResponse" + }, + "traits": { + "smithy.api#documentation": "Creates name for a specific live source in a source location.
", + "smithy.api#http": { + "method": "POST", + "uri": "/sourceLocation/{SourceLocationName}/liveSource/{LiveSourceName}", + "code": 200 + } + } + }, + "com.amazonaws.mediatailor#CreateLiveSourceRequest": { + "type": "structure", + "members": { + "HttpPackageConfigurations": { + "target": "com.amazonaws.mediatailor#HttpPackageConfigurations", + "traits": { + "smithy.api#documentation": "A list of HTTP package configuration parameters for this live source.
", + "smithy.api#required": {} + } + }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the live source you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + }, + "SourceLocationName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the source location you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + }, + "Tags": { + "target": "com.amazonaws.mediatailor#__mapOf__string", + "traits": { + "smithy.api#documentation": "The tags to assign to the live source.
", + "smithy.api#jsonName": "tags" + } + } + } + }, + "com.amazonaws.mediatailor#CreateLiveSourceResponse": { + "type": "structure", + "members": { + "Arn": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The ARN of the live source.
" + } + }, + "CreationTime": { + "target": "com.amazonaws.mediatailor#__timestampUnix", + "traits": { + "smithy.api#documentation": "The timestamp that indicates when the live source was created.
" + } + }, + "HttpPackageConfigurations": { + "target": "com.amazonaws.mediatailor#HttpPackageConfigurations", + "traits": { + "smithy.api#documentation": "The HTTP package configurations.
" + } + }, + "LastModifiedTime": { + "target": "com.amazonaws.mediatailor#__timestampUnix", + "traits": { + "smithy.api#documentation": "The timestamp that indicates when the live source was modified.
" + } + }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the live source.
" + } + }, + "SourceLocationName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the source location associated with the VOD source.
" + } + }, + "Tags": { + "target": "com.amazonaws.mediatailor#__mapOf__string", + "traits": { + "smithy.api#documentation": "The tags assigned to the live source.
", + "smithy.api#jsonName": "tags" + } } } }, @@ -673,6 +792,12 @@ "smithy.api#required": {} } }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the LiveSource for this Program.
" + } + }, "ProgramName": { "target": "com.amazonaws.mediatailor#__string", "traits": { @@ -698,8 +823,7 @@ "VodSourceName": { "target": "com.amazonaws.mediatailor#__string", "traits": { - "smithy.api#documentation": "The name that's used to refer to a VOD source.
", - "smithy.api#required": {} + "smithy.api#documentation": "The name that's used to refer to a VOD source.
" } } } @@ -731,6 +855,12 @@ "smithy.api#documentation": "The timestamp of when the program was created.
" } }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the LiveSource for this Program.
" + } + }, "ProgramName": { "target": "com.amazonaws.mediatailor#__string", "traits": { @@ -797,7 +927,10 @@ } }, "SegmentDeliveryConfigurations": { - "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration" + "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration", + "traits": { + "smithy.api#documentation": "A list of the segment delivery configurations associated with this resource.
" + } }, "SourceLocationName": { "target": "com.amazonaws.mediatailor#__string", @@ -856,7 +989,10 @@ } }, "SegmentDeliveryConfigurations": { - "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration" + "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration", + "traits": { + "smithy.api#documentation": "A list of the segment delivery configurations associated with this resource.
" + } }, "SourceLocationName": { "target": "com.amazonaws.mediatailor#__string", @@ -896,7 +1032,7 @@ "HttpPackageConfigurations": { "target": "com.amazonaws.mediatailor#HttpPackageConfigurations", "traits": { - "smithy.api#documentation": "An array of HTTP package configuration parameters for this VOD source.
", + "smithy.api#documentation": "A list of HTTP package configuration parameters for this VOD source.
", "smithy.api#required": {} } }, @@ -949,7 +1085,7 @@ "LastModifiedTime": { "target": "com.amazonaws.mediatailor#__timestampUnix", "traits": { - "smithy.api#documentation": "The ARN for the VOD source.
" + "smithy.api#documentation": "The last modified time of the VOD source.
" } }, "SourceLocationName": { @@ -1133,6 +1269,48 @@ "type": "structure", "members": {} }, + "com.amazonaws.mediatailor#DeleteLiveSource": { + "type": "operation", + "input": { + "target": "com.amazonaws.mediatailor#DeleteLiveSourceRequest" + }, + "output": { + "target": "com.amazonaws.mediatailor#DeleteLiveSourceResponse" + }, + "traits": { + "smithy.api#documentation": "Deletes a specific live source in a specific source location.
", + "smithy.api#http": { + "method": "DELETE", + "uri": "/sourceLocation/{SourceLocationName}/liveSource/{LiveSourceName}", + "code": 200 + } + } + }, + "com.amazonaws.mediatailor#DeleteLiveSourceRequest": { + "type": "structure", + "members": { + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the live source you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + }, + "SourceLocationName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the source location you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + } + } + }, + "com.amazonaws.mediatailor#DeleteLiveSourceResponse": { + "type": "structure", + "members": {} + }, "com.amazonaws.mediatailor#DeletePlaybackConfiguration": { "type": "operation", "input": { @@ -1414,6 +1592,98 @@ "smithy.api#documentation": "The tags assigned to the channel.
", "smithy.api#jsonName": "tags" } + }, + "Tier": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The channel's tier.
" + } + } + } + }, + "com.amazonaws.mediatailor#DescribeLiveSource": { + "type": "operation", + "input": { + "target": "com.amazonaws.mediatailor#DescribeLiveSourceRequest" + }, + "output": { + "target": "com.amazonaws.mediatailor#DescribeLiveSourceResponse" + }, + "traits": { + "smithy.api#documentation": "Provides details about a specific live source in a specific source location.
", + "smithy.api#http": { + "method": "GET", + "uri": "/sourceLocation/{SourceLocationName}/liveSource/{LiveSourceName}", + "code": 200 + } + } + }, + "com.amazonaws.mediatailor#DescribeLiveSourceRequest": { + "type": "structure", + "members": { + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the live source you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + }, + "SourceLocationName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the source location you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + } + } + }, + "com.amazonaws.mediatailor#DescribeLiveSourceResponse": { + "type": "structure", + "members": { + "Arn": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The ARN of the live source.
" + } + }, + "CreationTime": { + "target": "com.amazonaws.mediatailor#__timestampUnix", + "traits": { + "smithy.api#documentation": "The timestamp that indicates when the live source was created.
" + } + }, + "HttpPackageConfigurations": { + "target": "com.amazonaws.mediatailor#HttpPackageConfigurations", + "traits": { + "smithy.api#documentation": "The HTTP package configurations.
" + } + }, + "LastModifiedTime": { + "target": "com.amazonaws.mediatailor#__timestampUnix", + "traits": { + "smithy.api#documentation": "The timestamp that indicates when the live source was modified.
" + } + }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the live source.
" + } + }, + "SourceLocationName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the source location associated with the VOD source.
" + } + }, + "Tags": { + "target": "com.amazonaws.mediatailor#__mapOf__string", + "traits": { + "smithy.api#documentation": "The tags assigned to the live source.
", + "smithy.api#jsonName": "tags" + } } } }, @@ -1482,6 +1752,12 @@ "smithy.api#documentation": "The timestamp of when the program was created.
" } }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the LiveSource for this Program.
" + } + }, "ProgramName": { "target": "com.amazonaws.mediatailor#__string", "traits": { @@ -1578,7 +1854,10 @@ } }, "SegmentDeliveryConfigurations": { - "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration" + "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration", + "traits": { + "smithy.api#documentation": "A list of the segment delivery configurations associated with this resource.
" + } }, "SourceLocationName": { "target": "com.amazonaws.mediatailor#__string", @@ -1657,7 +1936,7 @@ "LastModifiedTime": { "target": "com.amazonaws.mediatailor#__timestampUnix", "traits": { - "smithy.api#documentation": "The ARN for the VOD source.
" + "smithy.api#documentation": "The last modified time of the VOD source.
" } }, "SourceLocationName": { @@ -1785,7 +2064,7 @@ "Items": { "target": "com.amazonaws.mediatailor#__listOfScheduleEntry", "traits": { - "smithy.api#documentation": "An array of schedule entries for the channel.
" + "smithy.api#documentation": "A list of schedule entries for the channel.
" } }, "NextToken": { @@ -2162,7 +2441,7 @@ "Items": { "target": "com.amazonaws.mediatailor#__listOfAlert", "traits": { - "smithy.api#documentation": "An array of alerts that are associated with this resource.
" + "smithy.api#documentation": "A list of alerts that are associated with this resource.
" } }, "NextToken": { @@ -2221,7 +2500,7 @@ "Items": { "target": "com.amazonaws.mediatailor#__listOfChannel", "traits": { - "smithy.api#documentation": "An array of channels that are associated with this account.
" + "smithy.api#documentation": "A list of channels that are associated with this account.
" } }, "NextToken": { @@ -2232,6 +2511,73 @@ } } }, + "com.amazonaws.mediatailor#ListLiveSources": { + "type": "operation", + "input": { + "target": "com.amazonaws.mediatailor#ListLiveSourcesRequest" + }, + "output": { + "target": "com.amazonaws.mediatailor#ListLiveSourcesResponse" + }, + "traits": { + "smithy.api#documentation": "lists all the live sources in a source location.
", + "smithy.api#http": { + "method": "GET", + "uri": "/sourceLocation/{SourceLocationName}/liveSources", + "code": 200 + }, + "smithy.api#paginated": { + "inputToken": "NextToken", + "outputToken": "NextToken", + "items": "Items", + "pageSize": "MaxResults" + } + } + }, + "com.amazonaws.mediatailor#ListLiveSourcesRequest": { + "type": "structure", + "members": { + "MaxResults": { + "target": "com.amazonaws.mediatailor#MaxResults", + "traits": { + "smithy.api#documentation": "Upper bound on number of records to return. The maximum number of results is 100.
", + "smithy.api#httpQuery": "maxResults" + } + }, + "NextToken": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "Pagination token from the GET list request. Use the token to fetch the next page of results.
", + "smithy.api#httpQuery": "nextToken" + } + }, + "SourceLocationName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the source location you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + } + } + }, + "com.amazonaws.mediatailor#ListLiveSourcesResponse": { + "type": "structure", + "members": { + "Items": { + "target": "com.amazonaws.mediatailor#__listOfLiveSource", + "traits": { + "smithy.api#documentation": "Lists the live sources.
" + } + }, + "NextToken": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "Pagination token from the list request. Use the token to fetch the next page of results.
" + } + } + } + }, "com.amazonaws.mediatailor#ListPlaybackConfigurations": { "type": "operation", "input": { @@ -2410,7 +2756,7 @@ "Items": { "target": "com.amazonaws.mediatailor#__listOfSourceLocation", "traits": { - "smithy.api#documentation": "An array of source locations.
" + "smithy.api#documentation": "A list of source locations.
" } }, "NextToken": { @@ -2555,6 +2901,61 @@ "smithy.api#documentation": "The configuration for pre-roll ad insertion.
" } }, + "com.amazonaws.mediatailor#LiveSource": { + "type": "structure", + "members": { + "Arn": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The ARN for the live source.
", + "smithy.api#required": {} + } + }, + "CreationTime": { + "target": "com.amazonaws.mediatailor#__timestampUnix", + "traits": { + "smithy.api#documentation": "The timestamp that indicates when the live source was created.
" + } + }, + "HttpPackageConfigurations": { + "target": "com.amazonaws.mediatailor#HttpPackageConfigurations", + "traits": { + "smithy.api#documentation": "The HTTP package configurations for the live source.
", + "smithy.api#required": {} + } + }, + "LastModifiedTime": { + "target": "com.amazonaws.mediatailor#__timestampUnix", + "traits": { + "smithy.api#documentation": "The timestamp that indicates when the live source was last modified.
" + } + }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name that's used to refer to a live source.
", + "smithy.api#required": {} + } + }, + "SourceLocationName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the source location.
", + "smithy.api#required": {} + } + }, + "Tags": { + "target": "com.amazonaws.mediatailor#__mapOf__string", + "traits": { + "smithy.api#documentation": "The tags assigned to the live source.
", + "smithy.api#jsonName": "tags" + } + } + }, + "traits": { + "smithy.api#documentation": "Live source configuration parameters.
" + } + }, "com.amazonaws.mediatailor#LogConfiguration": { "type": "structure", "members": { @@ -2618,6 +3019,9 @@ { "target": "com.amazonaws.mediatailor#CreateChannel" }, + { + "target": "com.amazonaws.mediatailor#CreateLiveSource" + }, { "target": "com.amazonaws.mediatailor#CreatePrefetchSchedule" }, @@ -2636,6 +3040,9 @@ { "target": "com.amazonaws.mediatailor#DeleteChannelPolicy" }, + { + "target": "com.amazonaws.mediatailor#DeleteLiveSource" + }, { "target": "com.amazonaws.mediatailor#DeletePlaybackConfiguration" }, @@ -2654,6 +3061,9 @@ { "target": "com.amazonaws.mediatailor#DescribeChannel" }, + { + "target": "com.amazonaws.mediatailor#DescribeLiveSource" + }, { "target": "com.amazonaws.mediatailor#DescribeProgram" }, @@ -2681,6 +3091,9 @@ { "target": "com.amazonaws.mediatailor#ListChannels" }, + { + "target": "com.amazonaws.mediatailor#ListLiveSources" + }, { "target": "com.amazonaws.mediatailor#ListPlaybackConfigurations" }, @@ -2717,6 +3130,9 @@ { "target": "com.amazonaws.mediatailor#UpdateChannel" }, + { + "target": "com.amazonaws.mediatailor#UpdateLiveSource" + }, { "target": "com.amazonaws.mediatailor#UpdateSourceLocation" }, @@ -3467,6 +3883,12 @@ "smithy.api#required": {} } }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the live source used for the program.
" + } + }, "ProgramName": { "target": "com.amazonaws.mediatailor#__string", "traits": { @@ -3496,8 +3918,7 @@ "VodSourceName": { "target": "com.amazonaws.mediatailor#__string", "traits": { - "smithy.api#documentation": "The name of the VOD source.
", - "smithy.api#required": {} + "smithy.api#documentation": "The name of the VOD source.
" } } }, @@ -3550,11 +3971,20 @@ "type": "structure", "members": { "BaseUrl": { - "target": "com.amazonaws.mediatailor#__string" + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The base URL of the host or path of the segment delivery server that you're using to serve segments. This is typically a content delivery network (CDN). The URL can be absolute or relative. To use an absolute URL include the protocol, such as https://example.com/some/path. To use a relative URL specify the relative path, such as /some/path*.
" + } }, "Name": { - "target": "com.amazonaws.mediatailor#__string" + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "A unique identifier used to distinguish between multiple segment delivery configurations in a source location.
" + } } + }, + "traits": { + "smithy.api#documentation": "The base URL of the host or path of the segment delivery server that you're using to serve segments. This is typically a content delivery network (CDN). The URL can be absolute or relative. To use an absolute URL include the protocol, such as https://example.com/some/path. To use a relative URL specify the relative path, such as /some/path*.
" } }, "com.amazonaws.mediatailor#SlateSource": { @@ -3619,7 +4049,10 @@ } }, "SegmentDeliveryConfigurations": { - "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration" + "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration", + "traits": { + "smithy.api#documentation": "The segment delivery configurations for the source location.
"
+          }
+        },
        "SourceLocationName": {
          "target": "com.amazonaws.mediatailor#__string",
          "traits": {
@@ -3780,9 +4213,30 @@
          }
        }
      },
+    "com.amazonaws.mediatailor#Tier": {
+      "type": "string",
+      "traits": {
+        "smithy.api#enum": [
+          {
+            "value": "BASIC",
+            "name": "BASIC"
+          },
+          {
+            "value": "STANDARD",
+            "name": "STANDARD"
+          }
+        ]
+      }
+    },
     "com.amazonaws.mediatailor#Transition": {
       "type": "structure",
       "members": {
+        "DurationMillis": {
+          "target": "com.amazonaws.mediatailor#__long",
+          "traits": {
+            "smithy.api#documentation": "The duration of the live program in milliseconds.
" + } + }, "RelativePosition": { "target": "com.amazonaws.mediatailor#RelativePosition", "traits": { @@ -3969,6 +4423,105 @@ "smithy.api#documentation": "The tags assigned to the channel.
", "smithy.api#jsonName": "tags" } + }, + "Tier": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The channel's tier.
" + } + } + } + }, + "com.amazonaws.mediatailor#UpdateLiveSource": { + "type": "operation", + "input": { + "target": "com.amazonaws.mediatailor#UpdateLiveSourceRequest" + }, + "output": { + "target": "com.amazonaws.mediatailor#UpdateLiveSourceResponse" + }, + "traits": { + "smithy.api#documentation": "Updates a specific live source in a specific source location.
", + "smithy.api#http": { + "method": "PUT", + "uri": "/sourceLocation/{SourceLocationName}/liveSource/{LiveSourceName}", + "code": 200 + } + } + }, + "com.amazonaws.mediatailor#UpdateLiveSourceRequest": { + "type": "structure", + "members": { + "HttpPackageConfigurations": { + "target": "com.amazonaws.mediatailor#HttpPackageConfigurations", + "traits": { + "smithy.api#documentation": "A list of HTTP package configurations for the live source on this account.
", + "smithy.api#required": {} + } + }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the live source you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + }, + "SourceLocationName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The identifier for the source location you are working on.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + } + } + }, + "com.amazonaws.mediatailor#UpdateLiveSourceResponse": { + "type": "structure", + "members": { + "Arn": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The ARN of the live source.
" + } + }, + "CreationTime": { + "target": "com.amazonaws.mediatailor#__timestampUnix", + "traits": { + "smithy.api#documentation": "The timestamp that indicates when the live source was created.
" + } + }, + "HttpPackageConfigurations": { + "target": "com.amazonaws.mediatailor#HttpPackageConfigurations", + "traits": { + "smithy.api#documentation": "The HTTP package configurations.
" + } + }, + "LastModifiedTime": { + "target": "com.amazonaws.mediatailor#__timestampUnix", + "traits": { + "smithy.api#documentation": "The timestamp that indicates when the live source was modified.
" + } + }, + "LiveSourceName": { + "target": "com.amazonaws.mediatailor#__string", + "traits": { + "smithy.api#documentation": "The name of the live source.
"
+          }
+        },
+        "SourceLocationName": {
+          "target": "com.amazonaws.mediatailor#__string",
+          "traits": {
+            "smithy.api#documentation": "The name of the source location associated with the live source.
" + } + }, + "Tags": { + "target": "com.amazonaws.mediatailor#__mapOf__string", + "traits": { + "smithy.api#documentation": "The tags assigned to the live source.
", + "smithy.api#jsonName": "tags" + } } } }, @@ -4012,7 +4565,10 @@ } }, "SegmentDeliveryConfigurations": { - "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration" + "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration", + "traits": { + "smithy.api#documentation": "A list of the segment delivery configurations associated with this resource.
" + } }, "SourceLocationName": { "target": "com.amazonaws.mediatailor#__string", @@ -4064,7 +4620,10 @@ } }, "SegmentDeliveryConfigurations": { - "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration" + "target": "com.amazonaws.mediatailor#__listOfSegmentDeliveryConfiguration", + "traits": { + "smithy.api#documentation": "A list of the segment delivery configurations associated with this resource.
" + } }, "SourceLocationName": { "target": "com.amazonaws.mediatailor#__string", @@ -4104,7 +4663,7 @@ "HttpPackageConfigurations": { "target": "com.amazonaws.mediatailor#HttpPackageConfigurations", "traits": { - "smithy.api#documentation": "An array of HTTP package configurations for the VOD source on this account.
", + "smithy.api#documentation": "A list of HTTP package configurations for the VOD source on this account.
", "smithy.api#required": {} } }, @@ -4150,7 +4709,7 @@ "LastModifiedTime": { "target": "com.amazonaws.mediatailor#__timestampUnix", "traits": { - "smithy.api#documentation": "The ARN for the VOD source.
" + "smithy.api#documentation": "The last modified time of the VOD source.
" } }, "SourceLocationName": { @@ -4276,6 +4835,12 @@ "target": "com.amazonaws.mediatailor#Channel" } }, + "com.amazonaws.mediatailor#__listOfLiveSource": { + "type": "list", + "member": { + "target": "com.amazonaws.mediatailor#LiveSource" + } + }, "com.amazonaws.mediatailor#__listOfPlaybackConfiguration": { "type": "list", "member": { diff --git a/aws/sdk/aws-models/mgn.json b/aws/sdk/aws-models/mgn.json index 8e91c899b4..1c761d1211 100644 --- a/aws/sdk/aws-models/mgn.json +++ b/aws/sdk/aws-models/mgn.json @@ -50,9 +50,8 @@ "aws.api#service": { "sdkId": "mgn", "arnNamespace": "mgn", - "cloudFormationName": "ApplicationMigrationService", - "cloudTrailEventSource": "mgn.amazonaws.com", - "endpointPrefix": "mgn" + "awsProductName": "mgn", + "cloudTrailEventSource": "mgn.amazonaws.com" }, "aws.auth#sigv4": { "name": "mgn" @@ -60,16 +59,16 @@ "aws.protocols#restJson1": {}, "smithy.api#cors": { "additionalAllowedHeaders": [ + "content-type", "x-amz-content-sha256", - "x-amzn-trace-id", "x-amz-user-agent", - "content-type" + "x-amzn-trace-id" ], "additionalExposedHeaders": [ - "x-amz-apigw-id", - "x-amzn-trace-id", "x-amzn-errortype", - "x-amzn-requestid" + "x-amzn-requestid", + "x-amzn-trace-id", + "x-amz-apigw-id" ] }, "smithy.api#documentation": "The Application Migration Service service.
", @@ -113,12 +112,12 @@ "traits": { "smithy.api#enum": [ { - "value": "LEGACY_BIOS", - "name": "LEGACY_BIOS" + "name": "LEGACY_BIOS", + "value": "LEGACY_BIOS" }, { - "value": "UEFI", - "name": "UEFI" + "name": "UEFI", + "value": "UEFI" } ] } @@ -177,8 +176,8 @@ "traits": { "smithy.api#documentation": "Allows the user to set the SourceServer.LifeCycle.state property for specific Source Server IDs to one of the following: READY_FOR_TEST or READY_FOR_CUTOVER. This command only works if the Source Server is already launchable (dataReplicationInfo.lagDuration is not null.)
", "smithy.api#http": { - "method": "POST", "uri": "/ChangeServerLifeCycleState", + "method": "POST", "code": 200 } } @@ -222,16 +221,16 @@ "traits": { "smithy.api#enum": [ { - "value": "READY_FOR_TEST", - "name": "READY_FOR_TEST" + "name": "READY_FOR_TEST", + "value": "READY_FOR_TEST" }, { - "value": "READY_FOR_CUTOVER", - "name": "READY_FOR_CUTOVER" + "name": "READY_FOR_CUTOVER", + "value": "READY_FOR_CUTOVER" }, { - "value": "CUTOVER", - "name": "CUTOVER" + "name": "CUTOVER", + "value": "CUTOVER" } ] } @@ -298,8 +297,8 @@ "traits": { "smithy.api#documentation": "Creates a new ReplicationConfigurationTemplate.
", "smithy.api#http": { - "method": "POST", "uri": "/CreateReplicationConfigurationTemplate", + "method": "POST", "code": 201 } } @@ -423,68 +422,68 @@ "traits": { "smithy.api#enum": [ { - "value": "AGENT_NOT_SEEN", - "name": "AGENT_NOT_SEEN" + "name": "AGENT_NOT_SEEN", + "value": "AGENT_NOT_SEEN" }, { - "value": "SNAPSHOTS_FAILURE", - "name": "SNAPSHOTS_FAILURE" + "name": "SNAPSHOTS_FAILURE", + "value": "SNAPSHOTS_FAILURE" }, { - "value": "NOT_CONVERGING", - "name": "NOT_CONVERGING" + "name": "NOT_CONVERGING", + "value": "NOT_CONVERGING" }, { - "value": "UNSTABLE_NETWORK", - "name": "UNSTABLE_NETWORK" + "name": "UNSTABLE_NETWORK", + "value": "UNSTABLE_NETWORK" }, { - "value": "FAILED_TO_CREATE_SECURITY_GROUP", - "name": "FAILED_TO_CREATE_SECURITY_GROUP" + "name": "FAILED_TO_CREATE_SECURITY_GROUP", + "value": "FAILED_TO_CREATE_SECURITY_GROUP" }, { - "value": "FAILED_TO_LAUNCH_REPLICATION_SERVER", - "name": "FAILED_TO_LAUNCH_REPLICATION_SERVER" + "name": "FAILED_TO_LAUNCH_REPLICATION_SERVER", + "value": "FAILED_TO_LAUNCH_REPLICATION_SERVER" }, { - "value": "FAILED_TO_BOOT_REPLICATION_SERVER", - "name": "FAILED_TO_BOOT_REPLICATION_SERVER" + "name": "FAILED_TO_BOOT_REPLICATION_SERVER", + "value": "FAILED_TO_BOOT_REPLICATION_SERVER" }, { - "value": "FAILED_TO_AUTHENTICATE_WITH_SERVICE", - "name": "FAILED_TO_AUTHENTICATE_WITH_SERVICE" + "name": "FAILED_TO_AUTHENTICATE_WITH_SERVICE", + "value": "FAILED_TO_AUTHENTICATE_WITH_SERVICE" }, { - "value": "FAILED_TO_DOWNLOAD_REPLICATION_SOFTWARE", - "name": "FAILED_TO_DOWNLOAD_REPLICATION_SOFTWARE" + "name": "FAILED_TO_DOWNLOAD_REPLICATION_SOFTWARE", + "value": "FAILED_TO_DOWNLOAD_REPLICATION_SOFTWARE" }, { - "value": "FAILED_TO_CREATE_STAGING_DISKS", - "name": "FAILED_TO_CREATE_STAGING_DISKS" + "name": "FAILED_TO_CREATE_STAGING_DISKS", + "value": "FAILED_TO_CREATE_STAGING_DISKS" }, { - "value": "FAILED_TO_ATTACH_STAGING_DISKS", - "name": "FAILED_TO_ATTACH_STAGING_DISKS" + "name": "FAILED_TO_ATTACH_STAGING_DISKS", + "value": 
"FAILED_TO_ATTACH_STAGING_DISKS" }, { - "value": "FAILED_TO_PAIR_REPLICATION_SERVER_WITH_AGENT", - "name": "FAILED_TO_PAIR_REPLICATION_SERVER_WITH_AGENT" + "name": "FAILED_TO_PAIR_REPLICATION_SERVER_WITH_AGENT", + "value": "FAILED_TO_PAIR_REPLICATION_SERVER_WITH_AGENT" }, { - "value": "FAILED_TO_CONNECT_AGENT_TO_REPLICATION_SERVER", - "name": "FAILED_TO_CONNECT_AGENT_TO_REPLICATION_SERVER" + "name": "FAILED_TO_CONNECT_AGENT_TO_REPLICATION_SERVER", + "value": "FAILED_TO_CONNECT_AGENT_TO_REPLICATION_SERVER" }, { - "value": "FAILED_TO_START_DATA_TRANSFER", - "name": "FAILED_TO_START_DATA_TRANSFER" + "name": "FAILED_TO_START_DATA_TRANSFER", + "value": "FAILED_TO_START_DATA_TRANSFER" }, { - "value": "UNSUPPORTED_VM_CONFIGURATION", - "name": "UNSUPPORTED_VM_CONFIGURATION" + "name": "UNSUPPORTED_VM_CONFIGURATION", + "value": "UNSUPPORTED_VM_CONFIGURATION" }, { - "value": "LAST_SNAPSHOT_JOB_FAILED", - "name": "LAST_SNAPSHOT_JOB_FAILED" + "name": "LAST_SNAPSHOT_JOB_FAILED", + "value": "LAST_SNAPSHOT_JOB_FAILED" } ] } @@ -691,24 +690,24 @@ "traits": { "smithy.api#enum": [ { - "value": "NOT_STARTED", - "name": "NOT_STARTED" + "name": "NOT_STARTED", + "value": "NOT_STARTED" }, { - "value": "IN_PROGRESS", - "name": "IN_PROGRESS" + "name": "IN_PROGRESS", + "value": "IN_PROGRESS" }, { - "value": "SUCCEEDED", - "name": "SUCCEEDED" + "name": "SUCCEEDED", + "value": "SUCCEEDED" }, { - "value": "FAILED", - "name": "FAILED" + "name": "FAILED", + "value": "FAILED" }, { - "value": "SKIPPED", - "name": "SKIPPED" + "name": "SKIPPED", + "value": "SKIPPED" } ] } @@ -724,52 +723,52 @@ "traits": { "smithy.api#enum": [ { - "value": "STOPPED", - "name": "STOPPED" + "name": "STOPPED", + "value": "STOPPED" }, { - "value": "INITIATING", - "name": "INITIATING" + "name": "INITIATING", + "value": "INITIATING" }, { - "value": "INITIAL_SYNC", - "name": "INITIAL_SYNC" + "name": "INITIAL_SYNC", + "value": "INITIAL_SYNC" }, { - "value": "BACKLOG", - "name": "BACKLOG" + "name": "BACKLOG", + "value": 
"BACKLOG" }, { - "value": "CREATING_SNAPSHOT", - "name": "CREATING_SNAPSHOT" + "name": "CREATING_SNAPSHOT", + "value": "CREATING_SNAPSHOT" }, { - "value": "CONTINUOUS", - "name": "CONTINUOUS" + "name": "CONTINUOUS", + "value": "CONTINUOUS" }, { - "value": "PAUSED", - "name": "PAUSED" + "name": "PAUSED", + "value": "PAUSED" }, { - "value": "RESCAN", - "name": "RESCAN" + "name": "RESCAN", + "value": "RESCAN" }, { - "value": "STALLED", - "name": "STALLED" + "name": "STALLED", + "value": "STALLED" }, { - "value": "DISCONNECTED", - "name": "DISCONNECTED" + "name": "DISCONNECTED", + "value": "DISCONNECTED" }, { - "value": "PENDING_SNAPSHOT_SHIPPING", - "name": "PENDING_SNAPSHOT_SHIPPING" + "name": "PENDING_SNAPSHOT_SHIPPING", + "value": "PENDING_SNAPSHOT_SHIPPING" }, { - "value": "SHIPPING_SNAPSHOT", - "name": "SHIPPING_SNAPSHOT" + "name": "SHIPPING_SNAPSHOT", + "value": "SHIPPING_SNAPSHOT" } ] } @@ -796,8 +795,8 @@ "traits": { "smithy.api#documentation": "Deletes a single Job by ID.
", "smithy.api#http": { - "method": "POST", "uri": "/DeleteJob", + "method": "POST", "code": 204 }, "smithy.api#idempotent": {} @@ -841,8 +840,8 @@ "traits": { "smithy.api#documentation": "Deletes a single Replication Configuration Template by ID
", "smithy.api#http": { - "method": "POST", "uri": "/DeleteReplicationConfigurationTemplate", + "method": "POST", "code": 204 }, "smithy.api#idempotent": {} @@ -886,8 +885,8 @@ "traits": { "smithy.api#documentation": "Deletes a single source server by ID.
", "smithy.api#http": { - "method": "POST", "uri": "/DeleteSourceServer", + "method": "POST", "code": 204 }, "smithy.api#idempotent": {} @@ -931,8 +930,8 @@ "traits": { "smithy.api#documentation": "Deletes a given vCenter client by ID.
", "smithy.api#http": { - "method": "POST", "uri": "/DeleteVcenterClient", + "method": "POST", "code": 204 }, "smithy.api#idempotent": {} @@ -969,15 +968,15 @@ "traits": { "smithy.api#documentation": "Retrieves detailed job log items with paging.
",
        "smithy.api#http": {
-          "method": "POST",
          "uri": "/DescribeJobLogItems",
+          "method": "POST",
          "code": 200
        },
        "smithy.api#paginated": {
          "inputToken": "nextToken",
          "outputToken": "nextToken",
-          "items": "items",
-          "pageSize": "maxResults"
+          "pageSize": "maxResults",
+          "items": "items"
        },
        "smithy.api#readonly": {}
      }
    },
@@ -1042,15 +1041,15 @@
      "traits": {
        "smithy.api#documentation": "Returns a list of Jobs. Use the JobsID and fromDate and toDate filters to limit which jobs are returned. The response is sorted by creationDateTime - latest date first. Jobs are normally created by the StartTest, StartCutover, and TerminateTargetInstances APIs. Jobs are also created by DiagnosticLaunch and TerminateDiagnosticInstances, which are APIs available only to *Support* and only used in response to relevant support tickets.
", "smithy.api#http": { - "method": "POST", "uri": "/DescribeJobs", + "method": "POST", "code": 200 }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "items", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "items" }, "smithy.api#readonly": {} } @@ -1061,8 +1060,7 @@ "filters": { "target": "com.amazonaws.mgn#DescribeJobsRequestFilters", "traits": { - "smithy.api#documentation": "Request to describe Job log filters.
", - "smithy.api#required": {} + "smithy.api#documentation": "Request to describe Job log filters.
" } }, "maxResults": { @@ -1156,15 +1154,15 @@ "traits": { "smithy.api#documentation": "Lists all ReplicationConfigurationTemplates, filtered by Source Server IDs.
", "smithy.api#http": { - "method": "POST", "uri": "/DescribeReplicationConfigurationTemplates", + "method": "POST", "code": 200 }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "items", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "items" }, "smithy.api#readonly": {} } @@ -1175,8 +1173,7 @@ "replicationConfigurationTemplateIDs": { "target": "com.amazonaws.mgn#ReplicationConfigurationTemplateIDs", "traits": { - "smithy.api#documentation": "Request to describe Replication Configuration template by template IDs.
", - "smithy.api#required": {} + "smithy.api#documentation": "Request to describe Replication Configuration template by template IDs.
" } }, "maxResults": { @@ -1229,15 +1226,15 @@ "traits": { "smithy.api#documentation": "Retrieves all SourceServers or multiple SourceServers by ID.
", "smithy.api#http": { - "method": "POST", "uri": "/DescribeSourceServers", + "method": "POST", "code": 200 }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "items", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "items" }, "smithy.api#readonly": {} } @@ -1248,8 +1245,7 @@ "filters": { "target": "com.amazonaws.mgn#DescribeSourceServersRequestFilters", "traits": { - "smithy.api#documentation": "Request to filter Source Servers list.
", - "smithy.api#required": {} + "smithy.api#documentation": "Request to filter Source Servers list.
" } }, "maxResults": { @@ -1349,15 +1345,15 @@ "traits": { "smithy.api#documentation": "Returns a list of the installed vCenter clients.
", "smithy.api#http": { - "method": "GET", "uri": "/DescribeVcenterClients", + "method": "GET", "code": 200 }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "items", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "items" }, "smithy.api#readonly": {} } @@ -1420,8 +1416,8 @@ "traits": { "smithy.api#documentation": "Disconnects specific Source Servers from Application Migration Service. Data replication is stopped immediately. All AWS resources created by Application Migration Service for enabling the replication of these source servers will be terminated / deleted within 90 minutes. Launched Test or Cutover instances will NOT be terminated. If the agent on the source server has not been prevented from communicating with the Application Migration Service service, then it will receive a command to uninstall itself (within approximately 10 minutes). The following properties of the SourceServer will be changed immediately: dataReplicationInfo.dataReplicationState will be set to DISCONNECTED; The totalStorageBytes property for each of dataReplicationInfo.replicatedDisks will be set to zero; dataReplicationInfo.lagDuration and dataReplicationInfo.lagDuration will be nullified.
",
        "smithy.api#http": {
-          "method": "POST",
          "uri": "/DisconnectFromService",
+          "method": "POST",
          "code": 200
        }
      }
    },
@@ -1514,8 +1510,8 @@
      "traits": {
        "smithy.api#documentation": "Finalizes the cutover immediately for specific Source Servers. All AWS resources created by Application Migration Service for enabling the replication of these source servers will be terminated / deleted within 90 minutes. Launched Test or Cutover instances will NOT be terminated. The AWS Replication Agent will receive a command to uninstall itself (within 10 minutes). The following properties of the SourceServer will be changed immediately: dataReplicationInfo.dataReplicationState will be changed to DISCONNECTED; The SourceServer.lifeCycle.state will be changed to CUTOVER; The totalStorageBytes property for each of dataReplicationInfo.replicatedDisks will be set to zero; dataReplicationInfo.lagDuration and dataReplicationInfo.lagDuration will be nullified.
", "smithy.api#http": { - "method": "POST", "uri": "/FinalizeCutover", + "method": "POST", "code": 200 } } @@ -1574,8 +1570,8 @@ "traits": { "smithy.api#documentation": "Lists all LaunchConfigurations available, filtered by Source Server IDs.
", "smithy.api#http": { - "method": "POST", "uri": "/GetLaunchConfiguration", + "method": "POST", "code": 200 }, "smithy.api#readonly": {} @@ -1612,8 +1608,8 @@ "traits": { "smithy.api#documentation": "Lists all ReplicationConfigurations, filtered by Source Server ID.
", "smithy.api#http": { - "method": "POST", "uri": "/GetReplicationConfiguration", + "method": "POST", "code": 200 }, "smithy.api#readonly": {} @@ -1704,8 +1700,8 @@ "traits": { "smithy.api#documentation": "Initialize Application Migration Service.
", "smithy.api#http": { - "method": "POST", "uri": "/InitializeService", + "method": "POST", "code": 204 } } @@ -1868,68 +1864,68 @@ "traits": { "smithy.api#enum": [ { - "value": "JOB_START", - "name": "JOB_START" + "name": "JOB_START", + "value": "JOB_START" }, { - "value": "SERVER_SKIPPED", - "name": "SERVER_SKIPPED" + "name": "SERVER_SKIPPED", + "value": "SERVER_SKIPPED" }, { - "value": "CLEANUP_START", - "name": "CLEANUP_START" + "name": "CLEANUP_START", + "value": "CLEANUP_START" }, { - "value": "CLEANUP_END", - "name": "CLEANUP_END" + "name": "CLEANUP_END", + "value": "CLEANUP_END" }, { - "value": "CLEANUP_FAIL", - "name": "CLEANUP_FAIL" + "name": "CLEANUP_FAIL", + "value": "CLEANUP_FAIL" }, { - "value": "SNAPSHOT_START", - "name": "SNAPSHOT_START" + "name": "SNAPSHOT_START", + "value": "SNAPSHOT_START" }, { - "value": "SNAPSHOT_END", - "name": "SNAPSHOT_END" + "name": "SNAPSHOT_END", + "value": "SNAPSHOT_END" }, { - "value": "SNAPSHOT_FAIL", - "name": "SNAPSHOT_FAIL" + "name": "SNAPSHOT_FAIL", + "value": "SNAPSHOT_FAIL" }, { - "value": "USING_PREVIOUS_SNAPSHOT", - "name": "USING_PREVIOUS_SNAPSHOT" + "name": "USING_PREVIOUS_SNAPSHOT", + "value": "USING_PREVIOUS_SNAPSHOT" }, { - "value": "CONVERSION_START", - "name": "CONVERSION_START" + "name": "CONVERSION_START", + "value": "CONVERSION_START" }, { - "value": "CONVERSION_END", - "name": "CONVERSION_END" + "name": "CONVERSION_END", + "value": "CONVERSION_END" }, { - "value": "CONVERSION_FAIL", - "name": "CONVERSION_FAIL" + "name": "CONVERSION_FAIL", + "value": "CONVERSION_FAIL" }, { - "value": "LAUNCH_START", - "name": "LAUNCH_START" + "name": "LAUNCH_START", + "value": "LAUNCH_START" }, { - "value": "LAUNCH_FAILED", - "name": "LAUNCH_FAILED" + "name": "LAUNCH_FAILED", + "value": "LAUNCH_FAILED" }, { - "value": "JOB_CANCEL", - "name": "JOB_CANCEL" + "name": "JOB_CANCEL", + "value": "JOB_CANCEL" }, { - "value": "JOB_END", - "name": "JOB_END" + "name": "JOB_END", + "value": "JOB_END" } ] } @@ -1992,10 +1988,7 @@ 
], "traits": { "aws.api#arn": { - "template": "job/{jobID}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "job/{jobID}" } } }, @@ -2112,12 +2105,12 @@ "traits": { "smithy.api#enum": [ { - "value": "STOPPED", - "name": "STOPPED" + "name": "STOPPED", + "value": "STOPPED" }, { - "value": "STARTED", - "name": "STARTED" + "name": "STARTED", + "value": "STARTED" } ] } @@ -2392,40 +2385,40 @@ "traits": { "smithy.api#enum": [ { - "value": "STOPPED", - "name": "STOPPED" + "name": "STOPPED", + "value": "STOPPED" }, { - "value": "NOT_READY", - "name": "NOT_READY" + "name": "NOT_READY", + "value": "NOT_READY" }, { - "value": "READY_FOR_TEST", - "name": "READY_FOR_TEST" + "name": "READY_FOR_TEST", + "value": "READY_FOR_TEST" }, { - "value": "TESTING", - "name": "TESTING" + "name": "TESTING", + "value": "TESTING" }, { - "value": "READY_FOR_CUTOVER", - "name": "READY_FOR_CUTOVER" + "name": "READY_FOR_CUTOVER", + "value": "READY_FOR_CUTOVER" }, { - "value": "CUTTING_OVER", - "name": "CUTTING_OVER" + "name": "CUTTING_OVER", + "value": "CUTTING_OVER" }, { - "value": "CUTOVER", - "name": "CUTOVER" + "name": "CUTOVER", + "value": "CUTOVER" }, { - "value": "DISCONNECTED", - "name": "DISCONNECTED" + "name": "DISCONNECTED", + "value": "DISCONNECTED" }, { - "value": "DISCOVERED", - "name": "DISCOVERED" + "name": "DISCOVERED", + "value": "DISCOVERED" } ] } @@ -2471,8 +2464,7 @@ "smithy.api#documentation": "List all tags for your Application Migration Service resources.
",
        "smithy.api#http": {
          "method": "GET",
-          "uri": "/tags/{resourceArn}",
-          "code": 200
+          "uri": "/tags/{resourceArn}"
        },
        "smithy.api#readonly": {}
      }
    },
@@ -2523,8 +2515,8 @@
      "traits": {
        "smithy.api#documentation": "Archives specific Source Servers by setting the SourceServer.isArchived property to true for specified SourceServers by ID. This command only works for SourceServers with a lifecycle.state which equals DISCONNECTED or CUTOVER.
", "smithy.api#http": { - "method": "POST", "uri": "/MarkAsArchived", + "method": "POST", "code": 200 } } @@ -3008,10 +3000,7 @@ }, "traits": { "aws.api#arn": { - "template": "replication-configuration-template/{replicationConfigurationTemplateID}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "replication-configuration-template/{replicationConfigurationTemplateID}" } } }, @@ -3038,12 +3027,12 @@ "traits": { "smithy.api#enum": [ { - "value": "AGENT_BASED", - "name": "AGENT_BASED" + "name": "AGENT_BASED", + "value": "AGENT_BASED" }, { - "value": "SNAPSHOT_SHIPPING", - "name": "SNAPSHOT_SHIPPING" + "name": "SNAPSHOT_SHIPPING", + "value": "SNAPSHOT_SHIPPING" } ] } @@ -3110,8 +3099,8 @@ "traits": { "smithy.api#documentation": "Causes the data replication initiation sequence to begin immediately upon next Handshake for specified SourceServer IDs, regardless of when the previous initiation started. This command will not work if the SourceServer is not stalled or is in a DISCONNECTED or STOPPED state.
", "smithy.api#http": { - "method": "POST", "uri": "/RetryDataReplication", + "method": "POST", "code": 200 } } @@ -3170,6 +3159,12 @@ "traits": { "smithy.api#documentation": "Exceeded the service quota code.
" } + }, + "quotaValue": { + "target": "com.amazonaws.mgn#StrictlyPositiveInteger", + "traits": { + "smithy.api#documentation": "Exceeded the service quota value.
" + } } }, "traits": { @@ -3386,10 +3381,7 @@ ], "traits": { "aws.api#arn": { - "template": "source-server/{sourceServerID}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "source-server/{sourceServerID}" } } }, @@ -3421,8 +3413,8 @@ "traits": { "smithy.api#documentation": "Launches a Cutover Instance for specific Source Servers. This command starts a LAUNCH job whose initiatedBy property is StartCutover and changes the SourceServer.lifeCycle.state property to CUTTING_OVER.
", "smithy.api#http": { - "method": "POST", "uri": "/StartCutover", + "method": "POST", "code": 202 } } @@ -3496,8 +3488,8 @@ "traits": { "smithy.api#documentation": "Starts replication for SNAPSHOT_SHIPPING agents.
", "smithy.api#http": { - "method": "POST", "uri": "/StartReplication", + "method": "POST", "code": 200 } } @@ -3536,8 +3528,8 @@ "traits": { "smithy.api#documentation": "Launches a Test Instance for specific Source Servers. This command starts a LAUNCH job whose initiatedBy property is StartTest and changes the SourceServer.lifeCycle.state property to TESTING.
", "smithy.api#http": { - "method": "POST", "uri": "/StartTest", + "method": "POST", "code": 202 } } @@ -3648,8 +3640,7 @@ "smithy.api#documentation": "Adds or overwrites only the specified tags for the specified Application Migration Service resource or resources. When you specify an existing tag key, the value is overwritten with the new value. Each resource can have a maximum of 50 tags. Each tag consists of a key and optional value.
", "smithy.api#http": { "method": "POST", - "uri": "/tags/{resourceArn}", - "code": 200 + "uri": "/tags/{resourceArn}" }, "smithy.api#idempotent": {} } @@ -3700,12 +3691,12 @@ "traits": { "smithy.api#enum": [ { - "value": "NONE", - "name": "NONE" + "name": "NONE", + "value": "NONE" }, { - "value": "BASIC", - "name": "BASIC" + "name": "BASIC", + "value": "BASIC" } ] } @@ -3732,8 +3723,8 @@ "traits": { "smithy.api#documentation": "Starts a job that terminates specific launched EC2 Test and Cutover instances. This command will not work for any Source Server with a lifecycle.state of TESTING, CUTTING_OVER, or CUTOVER.
", "smithy.api#http": { - "method": "POST", "uri": "/TerminateTargetInstances", + "method": "POST", "code": 202 } } @@ -3859,8 +3850,7 @@ "smithy.api#documentation": "Deletes the specified set of tags from the specified set of Application Migration Service resources.
", "smithy.api#http": { "method": "DELETE", - "uri": "/tags/{resourceArn}", - "code": 200 + "uri": "/tags/{resourceArn}" }, "smithy.api#idempotent": {} } @@ -3911,8 +3901,8 @@ "traits": { "smithy.api#documentation": "Updates multiple LaunchConfigurations by Source Server ID.
", "smithy.api#http": { - "method": "POST", "uri": "/UpdateLaunchConfiguration", + "method": "POST", "code": 200 }, "smithy.api#idempotent": {} @@ -4000,8 +3990,8 @@ "traits": { "smithy.api#documentation": "Allows you to update multiple ReplicationConfigurations by Source Server ID.
", "smithy.api#http": { - "method": "POST", "uri": "/UpdateReplicationConfiguration", + "method": "POST", "code": 200 }, "smithy.api#idempotent": {} @@ -4128,8 +4118,8 @@ "traits": { "smithy.api#documentation": "Updates multiple ReplicationConfigurationTemplates by ID.
", "smithy.api#http": { - "method": "POST", "uri": "/UpdateReplicationConfigurationTemplate", + "method": "POST", "code": 200 } } @@ -4249,8 +4239,8 @@ "traits": { "smithy.api#documentation": "Allows you to change between the AGENT_BASED replication type and the SNAPSHOT_SHIPPING replication type.
", "smithy.api#http": { - "method": "POST", "uri": "/UpdateSourceServerReplicationType", + "method": "POST", "code": 200 } } @@ -4438,10 +4428,7 @@ }, "traits": { "aws.api#arn": { - "template": "vcenter-client/{vcenterClientID}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "vcenter-client/{vcenterClientID}" } } } diff --git a/aws/sdk/aws-models/monitoring.json b/aws/sdk/aws-models/monitoring.json index 230a6a986e..51733b6f5c 100644 --- a/aws/sdk/aws-models/monitoring.json +++ b/aws/sdk/aws-models/monitoring.json @@ -737,6 +737,9 @@ "input": { "target": "com.amazonaws.cloudwatch#DeleteAlarmsInput" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.cloudwatch#ResourceNotFound" @@ -1412,7 +1415,7 @@ "Name": { "target": "com.amazonaws.cloudwatch#DimensionName", "traits": { - "smithy.api#documentation": "The name of the dimension. Dimension names must contain only ASCII characters and must include \n\t\t\tat least one non-whitespace character.
", + "smithy.api#documentation": "The name of the dimension. Dimension names must contain only ASCII characters, must include \n\t\t\tat least one non-whitespace character, and cannot start with a colon (:
).
A dimension is a name/value pair that is part of the identity of a metric. You \n\t\t\tcan assign up to 10 dimensions to a metric. Because dimensions are part of the unique \n\t\t\tidentifier for a metric, whenever you add a unique name/value pair to one of \n\t\t\tyour metrics, you are creating a new variation of that metric.
" + "smithy.api#documentation": "A dimension is a name/value pair that is part of the identity of a metric. Because dimensions are part of the unique \n\t\t\tidentifier for a metric, whenever you add a unique name/value pair to one of \n\t\t\tyour metrics, you are creating a new variation of that metric. For example, many Amazon EC2 metrics publish\n\t\tInstanceId
as a dimension name, and the actual instance ID as the value for that dimension.
You \n\t\tcan assign up to 10 dimensions to a metric.
" } }, "com.amazonaws.cloudwatch#DimensionFilter": { @@ -1496,6 +1499,9 @@ "input": { "target": "com.amazonaws.cloudwatch#DisableAlarmActionsInput" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { "smithy.api#documentation": "Disables the actions for the specified alarms. When an alarm's actions are disabled, the\n\t\t\talarm actions do not execute when the alarm state changes.
" } @@ -1560,6 +1566,9 @@ "input": { "target": "com.amazonaws.cloudwatch#EnableAlarmActionsInput" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { "smithy.api#documentation": "Enables the actions for the specified alarms.
" } @@ -1869,7 +1878,7 @@ } ], "traits": { - "smithy.api#documentation": "You can use the GetMetricData
API to retrieve as many as 500 different\n\t\t\tmetrics in a single request, with a total of as many as 100,800 data points. You can also\n\t\t\toptionally perform math expressions on the values of the returned statistics, to create\n\t\t\tnew time series that represent new insights into your data. For example, using Lambda\n\t\t\tmetrics, you could divide the Errors metric by the Invocations metric to get an error\n\t\t\trate time series. For more information about metric math expressions, see Metric Math Syntax and Functions in the Amazon CloudWatch User\n\t\t\t\tGuide.
Calls to the GetMetricData
API have a different pricing structure than \n\t\t\tcalls to GetMetricStatistics
. For more information about pricing, see \n\t\t\tAmazon CloudWatch Pricing.
Amazon CloudWatch retains metric data as follows:
\n\t\tData points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution\n\t\t\t\tmetrics and are available only for custom metrics that have been defined with a StorageResolution
of 1.
Data points with a period of 60 seconds (1-minute) are available for 15 days.
\nData points with a period of 300 seconds (5-minute) are available for 63 days.
\nData points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).
\nData points that are initially published with a shorter period are aggregated together for long-term storage. For example, if you collect \n\t\t\tdata using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days, this data is still available, \n\t\t\tbut is aggregated and retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with \n\t\t\ta resolution of 1 hour.
\n\t\t\n\t\tIf you omit Unit
in your request, all data that was collected with any unit is returned, along with the corresponding units that were specified\n\t\t\twhen the data was reported to CloudWatch. If you specify a unit, the operation returns only data that was collected with that unit specified.\n\t\t\tIf you specify a unit that does not match the data collected, the results of the operation are null. CloudWatch does not perform unit conversions.
You can use the GetMetricData
API to retrieve CloudWatch metric values. The operation \n\t\t\tcan also include a CloudWatch Metrics Insights query, and one or more metric math functions.
A GetMetricData
operation that does not include a query can retrieve as many as 500 different\n\t\t\tmetrics in a single request, with a total of as many as 100,800 data points. You can also\n\t\t\toptionally perform metric math expressions on the values of the returned statistics, to create\n\t\t\tnew time series that represent new insights into your data. For example, using Lambda\n\t\t\tmetrics, you could divide the Errors metric by the Invocations metric to get an error\n\t\t\trate time series. For more information about metric math expressions, see Metric Math Syntax and Functions in the Amazon CloudWatch User\n\t\t\t\t\tGuide.
If you include a Metrics Insights query, each GetMetricData
operation can include only one\n\t\t\tquery. But the same GetMetricData
operation can also retrieve other metrics. Metrics Insights queries\n\t\tcan query only the most recent three hours of metric data. For more information about Metrics Insights, \n\t\tsee Query your metrics with CloudWatch Metrics Insights.
Calls to the GetMetricData
API have a different pricing structure than \n\t\t\tcalls to GetMetricStatistics
. For more information about pricing, see \n\t\t\tAmazon CloudWatch Pricing.
Amazon CloudWatch retains metric data as follows:
\n\t\tData points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution\n\t\t\t\tmetrics and are available only for custom metrics that have been defined with a StorageResolution
of 1.
Data points with a period of 60 seconds (1-minute) are available for 15 days.
\nData points with a period of 300 seconds (5-minute) are available for 63 days.
\nData points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).
\nData points that are initially published with a shorter period are aggregated together for long-term storage. For example, if you collect \n\t\t\tdata using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days, this data is still available, \n\t\t\tbut is aggregated and retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with \n\t\t\ta resolution of 1 hour.
\n\t\t\n\t\tIf you omit Unit
in your request, all data that was collected with any unit is returned, along with the corresponding units that were specified\n\t\t\twhen the data was reported to CloudWatch. If you specify a unit, the operation returns only data that was collected with that unit specified.\n\t\t\tIf you specify a unit that does not match the data collected, the results of the operation are null. CloudWatch does not perform unit conversions.
\n Using Metrics Insights queries with metric math\n
\n\t\tYou can't mix a Metric Insights query and metric math syntax in the same expression, but \n\t\t\tyou can reference results from a Metrics Insights query within other Metric math expressions. A Metrics Insights \n\t\t\tquery without a GROUP BY clause returns a single time-series (TS), \n\t\t\tand can be used as input for a metric math expression that expects a single time series. A Metrics Insights \n\t\t\tquery with a GROUP BY clause returns an array of time-series (TS[]), \n\t\t\tand can be used as input for a metric math expression that expects an array of time series.
", "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", @@ -1883,7 +1892,7 @@ "MetricDataQueries": { "target": "com.amazonaws.cloudwatch#MetricDataQueries", "traits": { - "smithy.api#documentation": "The metric queries to be returned. A single GetMetricData
call can include as many as 500 MetricDataQuery
\n\t\tstructures. Each of these structures can specify either a metric to retrieve, or a math expression to perform on retrieved data.
The metric queries to be returned. A single GetMetricData
call can include as many as 500 MetricDataQuery
\n\t\tstructures. Each of these structures can specify either a metric to retrieve, a Metrics Insights query,\n\t\tor a math expression to perform on retrieved data.
The output format for the stream. Valid values are json
\n\t\t\tand opentelemetry0.7
. For more information about metric stream\n\t\t\toutput formats, see \n\t\t\t\n\t\t\t\tMetric streams output formats.
Each entry in this array displays information about one or more metrics that include additional statistics\n\t\t\tin the metric stream. For more information about the additional statistics, see \n\t\t\t\n\t\t\t\tCloudWatch statistics definitions.
" } } } @@ -3293,7 +3308,7 @@ "TreatMissingData": { "target": "com.amazonaws.cloudwatch#TreatMissingData", "traits": { - "smithy.api#documentation": "Sets how this alarm is to handle missing data points. If this parameter is omitted, the default behavior of missing
is used.
Sets how this alarm is to handle missing data points. The valid values\n \tare breaching
, notBreaching
, ignore
, and \n \tmissing
. For more information, see\n \tConfiguring how CloudWatch alarms treat missing data.
If this parameter is omitted, the default \n \tbehavior of missing
is used.
The math expression to be performed on the returned data, if this object is performing a math expression. This expression\n\t\t\tcan use the Id
of the other metrics to refer to those metrics, and can also use the Id
of other \n\t\t\texpressions to use the result of those expressions. For more information about metric math expressions, see \n\t\t\tMetric Math Syntax and Functions in the\n\t\t\tAmazon CloudWatch User Guide.
Within each MetricDataQuery object, you must specify either \n\t\t\tExpression
or MetricStat
but not both.
This field can contain either a Metrics Insights query, or a metric math expression to be performed on the \n\t\t\treturned data. For more information about Metrics Insights queries, see \n\t\t\tMetrics Insights query components and syntax in the\n\t\t\tAmazon CloudWatch User Guide.
\n\t\tA math expression\n\t\t\tcan use the Id
of the other metrics or queries to refer to those metrics, and can also use \n\t\t\tthe Id
of other \n\t\t\texpressions to use the result of those expressions. For more information about metric math expressions, see \n\t\t\tMetric Math Syntax and Functions in the\n\t\t\tAmazon CloudWatch User Guide.
Within each MetricDataQuery object, you must specify either \n\t\t\tExpression
or MetricStat
but not both.
This structure is used in both GetMetricData
and PutMetricAlarm
. The supported\n\t\t\tuse of this structure is different for those two operations.
When used in GetMetricData
, it indicates the metric data to return, and whether this call is just retrieving\n\t\t\ta batch set of data for one metric, or is performing a math expression on metric data. A\n\t\t\tsingle GetMetricData
call can include up to 500 MetricDataQuery
\n\t\t\tstructures.
When used in PutMetricAlarm
, it enables you to create an alarm based on a\n\t\t\tmetric math expression. Each MetricDataQuery
in the array specifies either\n\t\t\ta metric to retrieve, or a math expression to be performed on retrieved metrics. A\n\t\t\tsingle PutMetricAlarm
call can include up to 20\n\t\t\t\tMetricDataQuery
structures in the array. The 20 structures can include\n\t\t\tas many as 10 structures that contain a MetricStat
parameter to retrieve a\n\t\t\tmetric, and as many as 10 structures that contain the Expression
parameter\n\t\t\tto perform a math expression. Of those Expression
structures, one must have True
\n\t\tas the value for ReturnData
. The result of this expression is the value the alarm watches.
Any expression used in a PutMetricAlarm
\n\t\t\toperation must return a single time series. For more information, see Metric Math Syntax and Functions in the Amazon CloudWatch User\n\t\t\t\tGuide.
Some of the parameters of this structure also have different uses whether you are using this structure in a GetMetricData
\n\t\t\toperation or a PutMetricAlarm
operation. These differences are explained in the following parameter list.
This structure is used in both GetMetricData
and PutMetricAlarm
. The supported\n\t\t\tuse of this structure is different for those two operations.
When used in GetMetricData
, it indicates the metric data to return, and whether this call is just retrieving\n\t\t\ta batch set of data for one metric, or is performing a Metrics Insights query or a math expression. A\n\t\t\tsingle GetMetricData
call can include up to 500 MetricDataQuery
\n\t\t\tstructures.
When used in PutMetricAlarm
, it enables you to create an alarm based on a\n\t\t\tmetric math expression. Each MetricDataQuery
in the array specifies either\n\t\t\ta metric to retrieve, or a math expression to be performed on retrieved metrics. A\n\t\t\tsingle PutMetricAlarm
call can include up to 20\n\t\t\t\tMetricDataQuery
structures in the array. The 20 structures can include\n\t\t\tas many as 10 structures that contain a MetricStat
parameter to retrieve a\n\t\t\tmetric, and as many as 10 structures that contain the Expression
parameter\n\t\t\tto perform a math expression. Of those Expression
structures, one must have True
\n\t\tas the value for ReturnData
. The result of this expression is the value the alarm watches.
Any expression used in a PutMetricAlarm
\n\t\t\toperation must return a single time series. For more information, see Metric Math Syntax and Functions in the Amazon CloudWatch User\n\t\t\t\tGuide.
Some of the parameters of this structure also have different uses whether you are using this structure in a GetMetricData
\n\t\t\toperation or a PutMetricAlarm
operation. These differences are explained in the following parameter list.
An array of metric name and namespace pairs that stream the additional statistics listed\n\t\tin the value of the AdditionalStatistics
parameter. There can be as many as \n\t\t100 pairs in the array.
All metrics that match the combination of metric name and namespace will be streamed\n\t\twith the additional statistics, no matter their dimensions.
", + "smithy.api#required": {} + } + }, + "AdditionalStatistics": { + "target": "com.amazonaws.cloudwatch#MetricStreamStatisticsAdditionalStatistics", + "traits": { + "smithy.api#documentation": "The list of additional statistics that are to be streamed for the metrics listed\n\t\tin the IncludeMetrics
array in this structure. This list can include as many as 20 statistics.
If the OutputFormat
for the stream is opentelemetry0.7
, the only \n\t\t\tvalid values are p??\n
percentile statistics such as p90
, p99
and so on.
If the OutputFormat
for the stream is json
, \n\t\t\tthe valid values include the abbreviations for all of the statistics listed in \n\t\t\t\n\t\t\t\tCloudWatch statistics definitions. For example, this includes\n\t\ttm98,
\n wm90
, PR(:300)
, and so on.
By default, a metric stream always sends the MAX
, MIN
, SUM
, \n\t\t\tand SAMPLECOUNT
statistics for each metric that is streamed. This structure contains information for\n\t\t\tone metric that includes additional statistics in the stream. For more information about statistics, \n\t\t\tsee CloudWatch, listed in \n\t\t\t\n\t\t\t\tCloudWatch statistics definitions.
The namespace of the metric.
", + "smithy.api#required": {} + } + }, + "MetricName": { + "target": "com.amazonaws.cloudwatch#MetricName", + "traits": { + "smithy.api#documentation": "The name of the metric.
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "This object contains the information for one metric that is to be streamed with \n\t\tadditional statistics.
" + } + }, "com.amazonaws.cloudwatch#MetricWidget": { "type": "string" }, @@ -3885,13 +3965,16 @@ "input": { "target": "com.amazonaws.cloudwatch#PutCompositeAlarmInput" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.cloudwatch#LimitExceededFault" } ], "traits": { - "smithy.api#documentation": "Creates or updates a composite alarm. When you create a composite\n\t\t\talarm, you specify a rule expression for the alarm that takes into account the alarm\n\t\t\tstates of other alarms that you have created. The composite alarm goes into ALARM state\n\t\t\tonly if all conditions of the rule are met.
\n\t\tThe alarms specified in a composite alarm's rule expression can include metric alarms\n\t\t\tand other composite alarms.
\n\t\tUsing composite alarms can reduce\n\t\t\talarm noise. You can create multiple metric alarms,\n\t\t\tand also create a composite alarm and\n\t\t\tset up alerts only\n\t\t\tfor the composite alarm. For example, you could create a composite\n\t\t\talarm that goes into ALARM state only when more than one of the underlying metric alarms\n\t\t\tare in ALARM state.
\n\t\tCurrently, the only alarm actions that can be taken by composite alarms are notifying\n\t\t\tSNS topics.
\n\t\tIt is possible to create a loop or cycle of composite alarms, where composite alarm A depends on composite alarm B, and \n\t\t\tcomposite alarm B also depends on composite alarm A. In this scenario, you can't delete any composite alarm that is part of the cycle \n\t\t\tbecause there is always still a composite alarm that depends on that alarm that you want to delete.
\n\t\t\tTo get out of such a situation, you must\n\t\t\tbreak the cycle by changing the rule of one of the composite alarms in the cycle to remove a dependency that creates the cycle. The simplest\n\t\t\tchange to make to break a cycle is to change the AlarmRule
of one of the alarms to False
.
Additionally, the evaluation of composite alarms stops if CloudWatch detects a cycle in the evaluation path.\n\t\t
\nWhen this operation creates an alarm, the alarm state is immediately set to\n\t\t\t\tINSUFFICIENT_DATA
. The alarm is then evaluated and its state is set\n\t\t\tappropriately. Any actions associated with the new state are then executed. For a\n\t\t\tcomposite alarm, this initial time after creation is the only time that\n\t\t\tthe\n\t\t\talarm can be in INSUFFICIENT_DATA
state.
When you update an existing alarm, its state is left unchanged, but the update\n\t\t\tcompletely overwrites the previous configuration of the alarm.
\n\t\t\n\t\tTo use this operation, you must be signed on with \n\t\t\tthe cloudwatch:PutCompositeAlarm
permission that is scoped to *
. You can't create a\n\t\t\tcomposite alarms if your cloudwatch:PutCompositeAlarm
permission has a narrower scope.
If you are an IAM user, you must have iam:CreateServiceLinkedRole
to create\n\t\t\ta composite alarm that has Systems Manager OpsItem actions.
Creates or updates a composite alarm. When you create a composite\n\t\t\talarm, you specify a rule expression for the alarm that takes into account the alarm\n\t\t\tstates of other alarms that you have created. The composite alarm goes into ALARM state\n\t\t\tonly if all conditions of the rule are met.
\n\t\tThe alarms specified in a composite alarm's rule expression can include metric alarms\n\t\t\tand other composite alarms. The rule expression of a composite alarm can include as many as 100 underlying alarms. \n\t\t\tAny single alarm can be included in the rule expressions of as many as 150 composite alarms.
\n\t\tUsing composite alarms can reduce\n\t\t\talarm noise. You can create multiple metric alarms,\n\t\t\tand also create a composite alarm and\n\t\t\tset up alerts only\n\t\t\tfor the composite alarm. For example, you could create a composite\n\t\t\talarm that goes into ALARM state only when more than one of the underlying metric alarms\n\t\t\tare in ALARM state.
\n\t\tCurrently, the only alarm actions that can be taken by composite alarms are notifying\n\t\t\tSNS topics.
\n\t\tIt is possible to create a loop or cycle of composite alarms, where composite alarm A depends on composite alarm B, and \n\t\t\tcomposite alarm B also depends on composite alarm A. In this scenario, you can't delete any composite alarm that is part of the cycle \n\t\t\tbecause there is always still a composite alarm that depends on that alarm that you want to delete.
\n\t\t\tTo get out of such a situation, you must\n\t\t\tbreak the cycle by changing the rule of one of the composite alarms in the cycle to remove a dependency that creates the cycle. The simplest\n\t\t\tchange to make to break a cycle is to change the AlarmRule
of one of the alarms to False
.
Additionally, the evaluation of composite alarms stops if CloudWatch detects a cycle in the evaluation path.\n\t\t
\nWhen this operation creates an alarm, the alarm state is immediately set to\n\t\t\t\tINSUFFICIENT_DATA
. The alarm is then evaluated and its state is set\n\t\t\tappropriately. Any actions associated with the new state are then executed. For a\n\t\t\tcomposite alarm, this initial time after creation is the only time that\n\t\t\tthe\n\t\t\talarm can be in INSUFFICIENT_DATA
state.
When you update an existing alarm, its state is left unchanged, but the update\n\t\t\tcompletely overwrites the previous configuration of the alarm.
\n\t\t\n\t\tTo use this operation, you must be signed on with \n\t\t\tthe cloudwatch:PutCompositeAlarm
permission that is scoped to *
. You can't create a\n\t\t\tcomposite alarms if your cloudwatch:PutCompositeAlarm
permission has a narrower scope.
If you are an IAM user, you must have iam:CreateServiceLinkedRole
to create\n\t\t\ta composite alarm that has Systems Manager OpsItem actions.
Sets how this alarm is to handle missing data points. If TreatMissingData
is omitted, the default behavior of missing
is used. \n\t\t\tFor more information, see Configuring How CloudWatch \n\t\t\t\tAlarms Treats Missing Data.
Valid Values: breaching | notBreaching | ignore | missing
\n
Sets how this alarm is to handle missing data points. If TreatMissingData
is omitted, the default behavior of missing
is used. \n\t\t\tFor more information, see Configuring How CloudWatch \n\t\t\t\tAlarms Treats Missing Data.
Valid Values: breaching | notBreaching | ignore | missing
\n
Alarms that evaluate metrics in the AWS/DynamoDB
namespace always ignore
\n\t\t\tmissing data even if you choose a different option for TreatMissingData
. When an \n\t\t\tAWS/DynamoDB
metric has missing data, alarms that evaluate that metric remain in their current state.
Creates or updates a metric stream. Metric streams can automatically stream CloudWatch metrics \n\t\t\tto Amazon Web Services destinations including\n\t\t\tAmazon S3 and to many third-party solutions.
\n\t\tFor more information, see \n\t\tUsing Metric Streams.
\n\t\tTo create a metric stream, \n\t\t\tyou must be logged on to an account that has the iam:PassRole
permission\n\t\t\tand either the CloudWatchFullAccess
\n\t\tpolicy or the cloudwatch:PutMetricStream
\n\t\tpermission.
When you create or update a metric stream, you choose one of the following:
\n\t\tStream metrics from all metric namespaces in the account.
\nStream metrics from all metric namespaces in the account, except\n\t\t\t\tfor the namespaces that you list in ExcludeFilters
.
Stream metrics from only the metric namespaces that you list in \n\t\t\t\tIncludeFilters
.
When you use PutMetricStream
to create a new metric stream, the stream \n\t\tis created in the running
state. If you use it to update an existing stream, \n\t\tthe state of the stream is not changed.
Creates or updates a metric stream. Metric streams can automatically stream CloudWatch metrics \n\t\t\tto Amazon Web Services destinations including\n\t\t\tAmazon S3 and to many third-party solutions.
\n\t\tFor more information, see \n\t\tUsing Metric Streams.
\n\t\tTo create a metric stream, \n\t\t\tyou must be logged on to an account that has the iam:PassRole
permission\n\t\t\tand either the CloudWatchFullAccess
\n\t\tpolicy or the cloudwatch:PutMetricStream
\n\t\tpermission.
When you create or update a metric stream, you choose one of the following:
\n\t\tStream metrics from all metric namespaces in the account.
\nStream metrics from all metric namespaces in the account, except\n\t\t\t\tfor the namespaces that you list in ExcludeFilters
.
Stream metrics from only the metric namespaces that you list in \n\t\t\t\tIncludeFilters
.
By default, a metric stream always sends the MAX
, MIN
, SUM
, \n\t\t\tand SAMPLECOUNT
statistics for each metric that is streamed. You can use the\n\t\t\tStatisticsConfigurations
parameter to have \n\t\t\tthe metric stream also send additional statistics in the stream. Streaming additional statistics incurs\n\t\t\tadditional costs. For more information, see Amazon CloudWatch Pricing.
When you use PutMetricStream
to create a new metric stream, the stream \n\t\tis created in the running
state. If you use it to update an existing stream, \n\t\tthe state of the stream is not changed.
A list of key-value pairs to associate with the metric stream. You can associate as \n\t\t\tmany as 50 tags with a metric stream.
\n\t\tTags can help you organize and categorize your resources. You can also use them to scope user\n\t\t\tpermissions by granting a user\n\t\t\tpermission to access or change only resources with certain tag values.
\n\t\tYou can use this parameter only when you are creating a new metric stream. If you are using this operation to update an existing metric stream, any tags\n\t\t\tyou specify in this parameter are ignored. To change the tags of an existing metric stream, use\n\t\t\tTagResource\n\t\t\tor UntagResource.
" } + }, + "StatisticsConfigurations": { + "target": "com.amazonaws.cloudwatch#MetricStreamStatisticsConfigurations", + "traits": { + "smithy.api#documentation": "By default, a metric stream always sends the MAX
, MIN
, SUM
, \n\t\t\tand SAMPLECOUNT
statistics for each metric that is streamed. You can use this parameter to have \n\t\t\tthe metric stream also send additional statistics in the stream. This \n\t\t\tarray can have up to 100 members.
For each entry in this array, you specify one or more metrics and the list of additional statistics to stream\n\t\t\tfor those metrics. The additional statistics that you can stream depend on the stream's OutputFormat
.\n\t\t\tIf the OutputFormat
is json
, you can stream any additional statistic that is supported \n\t\t\tby CloudWatch, listed in \n\t\t\t\n\t\t\t\tCloudWatch statistics definitions. If the OutputFormat
is \n\t\t\topentelemetry0.7
, you can stream percentile statistics such as p95, p99.9 and so on.
The code you can use to resolve your broker issue when the broker is in a CRITICAL_ACTION_REQUIRED state. You can find instructions by choosing the link for your code from the list of action required codes in Amazon MQ action required codes. Each code references a topic with detailed information, instructions, and recommendations for how to resolve the issue and prevent future occurrences.
", + "smithy.api#jsonName": "actionRequiredCode" + } + }, + "ActionRequiredInfo": { + "target": "com.amazonaws.mq#__string", + "traits": { + "smithy.api#documentation": "Information about the action required to resolve your broker issue when the broker is in a CRITICAL_ACTION_REQUIRED state.
", + "smithy.api#jsonName": "actionRequiredInfo" + } + } + }, + "traits": { + "smithy.api#documentation": "The action required to resolve a broker issue when the broker is in a CRITICAL_ACTION_REQUIRED state.
" + } + }, "com.amazonaws.mq#AuthenticationStrategy": { "type": "string", "traits": { @@ -209,6 +231,10 @@ { "value": "REBOOT_IN_PROGRESS", "name": "REBOOT_IN_PROGRESS" + }, + { + "value": "CRITICAL_ACTION_REQUIRED", + "name": "CRITICAL_ACTION_REQUIRED" } ] } @@ -841,6 +867,9 @@ "input": { "target": "com.amazonaws.mq#CreateTagsRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.mq#BadRequestException" @@ -1067,6 +1096,9 @@ "input": { "target": "com.amazonaws.mq#DeleteTagsRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.mq#BadRequestException" @@ -1408,6 +1440,13 @@ "com.amazonaws.mq#DescribeBrokerResponse": { "type": "structure", "members": { + "ActionsRequired": { + "target": "com.amazonaws.mq#__listOfActionRequired", + "traits": { + "smithy.api#documentation": "A list of actions required for a broker.
", + "smithy.api#jsonName": "actionsRequired" + } + }, "AuthenticationStrategy": { "target": "com.amazonaws.mq#AuthenticationStrategy", "traits": { @@ -3327,6 +3366,12 @@ } } }, + "com.amazonaws.mq#__listOfActionRequired": { + "type": "list", + "member": { + "target": "com.amazonaws.mq#ActionRequired" + } + }, "com.amazonaws.mq#__listOfAvailabilityZone": { "type": "list", "member": { @@ -3431,6 +3476,21 @@ }, "com.amazonaws.mq#mq": { "type": "service", + "traits": { + "aws.api#service": { + "sdkId": "mq", + "arnNamespace": "mq", + "cloudFormationName": "AmazonMQ", + "cloudTrailEventSource": "mq.amazonaws.com", + "endpointPrefix": "mq" + }, + "aws.auth#sigv4": { + "name": "mq" + }, + "aws.protocols#restJson1": {}, + "smithy.api#documentation": "Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers in the cloud. A message broker allows software applications and components to communicate using various programming languages, operating systems, and formal messaging protocols.
", + "smithy.api#title": "AmazonMQ" + }, "version": "2017-11-27", "operations": [ { @@ -3499,22 +3559,7 @@ { "target": "com.amazonaws.mq#UpdateUser" } - ], - "traits": { - "aws.api#service": { - "sdkId": "mq", - "arnNamespace": "mq", - "cloudFormationName": "AmazonMQ", - "cloudTrailEventSource": "mq.amazonaws.com", - "endpointPrefix": "mq" - }, - "aws.auth#sigv4": { - "name": "mq" - }, - "aws.protocols#restJson1": {}, - "smithy.api#documentation": "Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers in the cloud. A message broker allows software applications and components to communicate using various programming languages, operating systems, and formal messaging protocols.
", - "smithy.api#title": "AmazonMQ" - } + ] } } } diff --git a/aws/sdk/aws-models/network-firewall.json b/aws/sdk/aws-models/network-firewall.json index 3fccf706bb..c9a228b398 100644 --- a/aws/sdk/aws-models/network-firewall.json +++ b/aws/sdk/aws-models/network-firewall.json @@ -207,7 +207,7 @@ } ], "traits": { - "smithy.api#documentation": "Associates the specified subnets in the Amazon VPC to the firewall. You can specify one\n subnet for each of the Availability Zones that the VPC spans.
\nThis request creates an AWS Network Firewall firewall endpoint in each of the subnets. To\n enable the firewall's protections, you must also modify the VPC's route tables for each\n subnet's Availability Zone, to redirect the traffic that's coming into and going out of the\n zone through the firewall endpoint.
" + "smithy.api#documentation": "Associates the specified subnets in the Amazon VPC to the firewall. You can specify one\n subnet for each of the Availability Zones that the VPC spans.
\nThis request creates a Network Firewall firewall endpoint in each of the subnets. To\n enable the firewall's protections, you must also modify the VPC's route tables for each\n subnet's Availability Zone, to redirect the traffic that's coming into and going out of the\n zone through the firewall endpoint.
" } }, "com.amazonaws.networkfirewall#AssociateSubnetsRequest": { @@ -292,7 +292,7 @@ } }, "traits": { - "smithy.api#documentation": "The configuration and status for a single subnet that you've specified for use by the\n AWS Network Firewall firewall. This is part of the FirewallStatus.
" + "smithy.api#documentation": "The configuration and status for a single subnet that you've specified for use by the\n Network Firewall firewall. This is part of the FirewallStatus.
" } }, "com.amazonaws.networkfirewall#AttachmentStatus": { @@ -387,7 +387,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates an AWS Network Firewall Firewall and accompanying FirewallStatus for a VPC.
\nThe firewall defines the configuration settings for an AWS Network Firewall firewall. The settings that you can define at creation include the firewall policy, the subnets in your VPC to use for the firewall endpoints, and any tags that are attached to the firewall AWS resource.
\nAfter you create a firewall, you can provide additional settings, like the logging configuration.
\nTo update the settings for a firewall, you use the operations that apply to the settings\n themselves, for example UpdateLoggingConfiguration, AssociateSubnets, and UpdateFirewallDeleteProtection.
\nTo manage a firewall's tags, use the standard AWS resource tagging operations, ListTagsForResource, TagResource, and UntagResource.
\nTo retrieve information about firewalls, use ListFirewalls and DescribeFirewall.
" + "smithy.api#documentation": "Creates a Network Firewall Firewall and accompanying FirewallStatus for a VPC.
\nThe firewall defines the configuration settings for a Network Firewall firewall. The settings that you can define at creation include the firewall policy, the subnets in your VPC to use for the firewall endpoints, and any tags that are attached to the firewall Amazon Web Services resource.
\nAfter you create a firewall, you can provide additional settings, like the logging configuration.
\nTo update the settings for a firewall, you use the operations that apply to the settings\n themselves, for example UpdateLoggingConfiguration, AssociateSubnets, and UpdateFirewallDeleteProtection.
\nTo manage a firewall's tags, use the standard Amazon Web Services resource tagging operations, ListTagsForResource, TagResource, and UntagResource.
\nTo retrieve information about firewalls, use ListFirewalls and DescribeFirewall.
" } }, "com.amazonaws.networkfirewall#CreateFirewallPolicy": { @@ -416,7 +416,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates the firewall policy for the firewall according to the specifications.
\nAn AWS Network Firewall firewall policy defines the behavior of a firewall, in a collection of\n stateless and stateful rule groups and other settings. You can use one firewall policy for\n multiple firewalls.
" + "smithy.api#documentation": "Creates the firewall policy for the firewall according to the specifications.
\nA Network Firewall firewall policy defines the behavior of a firewall, in a collection of\n stateless and stateful rule groups and other settings. You can use one firewall policy for\n multiple firewalls.
" } }, "com.amazonaws.networkfirewall#CreateFirewallPolicyRequest": { @@ -451,7 +451,13 @@ "DryRun": { "target": "com.amazonaws.networkfirewall#Boolean", "traits": { - "smithy.api#documentation": "Indicates whether you want Network Firewall to just check the validity of the request, rather than run the request.
\nIf set to TRUE
, Network Firewall checks whether the request can run successfully, \n but doesn't actually make the requested changes. The call returns the value that the request would return if you ran it with \n dry run set to FALSE
, but doesn't make additions or changes to your resources. This option allows you to make sure that you have \n the required permissions to run the request and that your request parameters are valid.
If set to FALSE
, Network Firewall makes the requested changes to your resources.
Indicates whether you want Network Firewall to just check the validity of the request, rather than run the request.
\nIf set to TRUE
, Network Firewall checks whether the request can run successfully,\n but doesn't actually make the requested changes. The call returns the value that the request would return if you ran it with\n dry run set to FALSE
, but doesn't make additions or changes to your resources. This option allows you to make sure that you have\n the required permissions to run the request and that your request parameters are valid.
If set to FALSE
, Network Firewall makes the requested changes to your resources.
A complex type that contains settings for encryption of your firewall policy resources.
" } } } @@ -515,13 +521,13 @@ "SubnetChangeProtection": { "target": "com.amazonaws.networkfirewall#Boolean", "traits": { - "smithy.api#documentation": "A setting indicating whether the firewall is protected against changes to the subnet associations. \n Use this setting to protect against\n accidentally modifying the subnet associations for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against changes to the subnet associations.\n Use this setting to protect against\n accidentally modifying the subnet associations for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against a change to the firewall policy association. \n Use this setting to protect against\n accidentally modifying the firewall policy for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against a change to the firewall policy association.\n Use this setting to protect against\n accidentally modifying the firewall policy for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
The key:value pairs to associate with the resource.
" } + }, + "EncryptionConfiguration": { + "target": "com.amazonaws.networkfirewall#EncryptionConfiguration", + "traits": { + "smithy.api#documentation": "A complex type that contains settings for encryption of your firewall resources.
" + } } } }, @@ -603,13 +615,13 @@ "Rules": { "target": "com.amazonaws.networkfirewall#RulesString", "traits": { - "smithy.api#documentation": "A string containing stateful rule group rules specifications in Suricata flat format, with one rule\nper line. Use this to import your existing Suricata compatible rule groups.
\nYou must provide either this rules setting or a populated RuleGroup
setting, but not both.
You can provide your rule group specification in Suricata flat format through this setting when you create or update your rule group. The call \nresponse returns a RuleGroup object that Network Firewall has populated from your string.
" + "smithy.api#documentation": "A string containing stateful rule group rules specifications in Suricata flat format, with one rule\nper line. Use this to import your existing Suricata compatible rule groups.
\nYou must provide either this rules setting or a populated RuleGroup
setting, but not both.
You can provide your rule group specification in Suricata flat format through this setting when you create or update your rule group. The call\nresponse returns a RuleGroup object that Network Firewall has populated from your string.
" } }, "Type": { "target": "com.amazonaws.networkfirewall#RuleGroupType", "traits": { - "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains \nstateless rules. If it is stateful, it contains stateful rules.
", + "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains\nstateless rules. If it is stateful, it contains stateful rules.
", "smithy.api#required": {} } }, @@ -622,7 +634,7 @@ "Capacity": { "target": "com.amazonaws.networkfirewall#RuleCapacity", "traits": { - "smithy.api#documentation": "The maximum operating resources that this rule group can use. Rule group capacity is fixed at creation. \n When you update a rule group, you are limited to this capacity. When you reference a rule group \n from a firewall policy, Network Firewall reserves this capacity for the rule group.
\nYou can retrieve the capacity that would be required for a rule group before you create the rule group by calling \n CreateRuleGroup with DryRun
set to TRUE
.
You can't change or exceed this capacity when you update the rule group, so leave\n room for your rule group to grow.
\n\n Capacity for a stateless rule group\n
\nFor a stateless rule group, the capacity required is the sum of the capacity\n requirements of the individual rules that you expect to have in the rule group.
\nTo calculate the capacity requirement of a single rule, multiply the capacity\n requirement values of each of the rule's match settings:
\nA match setting with no criteria specified has a value of 1.
\nA match setting with Any
specified has a value of 1.
All other match settings have a value equal to the number of elements provided in\n the setting. For example, a protocol setting [\"UDP\"] and a source setting\n [\"10.0.0.0/24\"] each have a value of 1. A protocol setting [\"UDP\",\"TCP\"] has a value\n of 2. A source setting [\"10.0.0.0/24\",\"10.0.0.1/24\",\"10.0.0.2/24\"] has a value of 3.\n
\nA rule with no criteria specified in any of its match settings has a capacity\n requirement of 1. A rule with protocol setting [\"UDP\",\"TCP\"], source setting\n [\"10.0.0.0/24\",\"10.0.0.1/24\",\"10.0.0.2/24\"], and a single specification or no specification\n for each of the other match settings has a capacity requirement of 6.
\n\n Capacity for a stateful rule group\n
\nFor\n a stateful rule group, the minimum capacity required is the number of individual rules that\n you expect to have in the rule group.
", + "smithy.api#documentation": "The maximum operating resources that this rule group can use. Rule group capacity is fixed at creation.\n When you update a rule group, you are limited to this capacity. When you reference a rule group\n from a firewall policy, Network Firewall reserves this capacity for the rule group.
\nYou can retrieve the capacity that would be required for a rule group before you create the rule group by calling\n CreateRuleGroup with DryRun
set to TRUE
.
You can't change or exceed this capacity when you update the rule group, so leave\n room for your rule group to grow.
\n\n Capacity for a stateless rule group\n
\nFor a stateless rule group, the capacity required is the sum of the capacity\n requirements of the individual rules that you expect to have in the rule group.
\nTo calculate the capacity requirement of a single rule, multiply the capacity\n requirement values of each of the rule's match settings:
\nA match setting with no criteria specified has a value of 1.
\nA match setting with Any
specified has a value of 1.
All other match settings have a value equal to the number of elements provided in\n the setting. For example, a protocol setting [\"UDP\"] and a source setting\n [\"10.0.0.0/24\"] each have a value of 1. A protocol setting [\"UDP\",\"TCP\"] has a value\n of 2. A source setting [\"10.0.0.0/24\",\"10.0.0.1/24\",\"10.0.0.2/24\"] has a value of 3.\n
\nA rule with no criteria specified in any of its match settings has a capacity\n requirement of 1. A rule with protocol setting [\"UDP\",\"TCP\"], source setting\n [\"10.0.0.0/24\",\"10.0.0.1/24\",\"10.0.0.2/24\"], and a single specification or no specification\n for each of the other match settings has a capacity requirement of 6.
\n\n Capacity for a stateful rule group\n
\nFor\n a stateful rule group, the minimum capacity required is the number of individual rules that\n you expect to have in the rule group.
", "smithy.api#required": {} } }, @@ -635,7 +647,19 @@ "DryRun": { "target": "com.amazonaws.networkfirewall#Boolean", "traits": { - "smithy.api#documentation": "Indicates whether you want Network Firewall to just check the validity of the request, rather than run the request.
\nIf set to TRUE
, Network Firewall checks whether the request can run successfully, \n but doesn't actually make the requested changes. The call returns the value that the request would return if you ran it with \n dry run set to FALSE
, but doesn't make additions or changes to your resources. This option allows you to make sure that you have \n the required permissions to run the request and that your request parameters are valid.
If set to FALSE
, Network Firewall makes the requested changes to your resources.
Indicates whether you want Network Firewall to just check the validity of the request, rather than run the request.
\nIf set to TRUE
, Network Firewall checks whether the request can run successfully,\n but doesn't actually make the requested changes. The call returns the value that the request would return if you ran it with\n dry run set to FALSE
, but doesn't make additions or changes to your resources. This option allows you to make sure that you have\n the required permissions to run the request and that your request parameters are valid.
If set to FALSE
, Network Firewall makes the requested changes to your resources.
A complex type that contains settings for encryption of your rule group resources.
" + } + }, + "SourceMetadata": { + "target": "com.amazonaws.networkfirewall#SourceMetadata", + "traits": { + "smithy.api#documentation": "A complex type that contains metadata about the rule group that your own rule group is copied from. You can use the metadata to keep track of updates made to the originating rule group.
" } } } @@ -716,7 +740,7 @@ } ], "traits": { - "smithy.api#documentation": "Deletes the specified Firewall and its FirewallStatus. \n This operation requires the firewall's DeleteProtection
flag to be\n FALSE
. You can't revert this operation.
You can check whether a firewall is\n in use by reviewing the route tables for the Availability Zones where you have \n firewall subnet mappings. Retrieve the subnet mappings by calling DescribeFirewall.\n You define and update the route tables through Amazon VPC. As needed, update the route tables for the \n zones to remove the firewall endpoints. When the route tables no longer use the firewall endpoints, \n you can remove the firewall safely.
\nTo delete a firewall, remove the delete protection if you need to using UpdateFirewallDeleteProtection,\n then delete the firewall by calling DeleteFirewall.
" + "smithy.api#documentation": "Deletes the specified Firewall and its FirewallStatus.\n This operation requires the firewall's DeleteProtection
flag to be\n FALSE
. You can't revert this operation.
You can check whether a firewall is\n in use by reviewing the route tables for the Availability Zones where you have\n firewall subnet mappings. Retrieve the subnet mappings by calling DescribeFirewall.\n You define and update the route tables through Amazon VPC. As needed, update the route tables for the\n zones to remove the firewall endpoints. When the route tables no longer use the firewall endpoints,\n you can remove the firewall safely.
\nTo delete a firewall, remove the delete protection if you need to using UpdateFirewallDeleteProtection,\n then delete the firewall by calling DeleteFirewall.
" } }, "com.amazonaws.networkfirewall#DeleteFirewallPolicy": { @@ -903,7 +927,7 @@ "Type": { "target": "com.amazonaws.networkfirewall#RuleGroupType", "traits": { - "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains \nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains\nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
The AWS Identity and Access Management policy for the resource.
" + "smithy.api#documentation": "The IAM policy for the resource.
" } } } @@ -1209,7 +1233,7 @@ } ], "traits": { - "smithy.api#documentation": "High-level information about a rule group, returned by operations like create and describe. \n You can use the information provided in the metadata to retrieve and manage a rule group. \n You can retrieve all objects for a rule group by calling DescribeRuleGroup.\n
" + "smithy.api#documentation": "High-level information about a rule group, returned by operations like create and describe.\n You can use the information provided in the metadata to retrieve and manage a rule group.\n You can retrieve all objects for a rule group by calling DescribeRuleGroup.\n
" } }, "com.amazonaws.networkfirewall#DescribeRuleGroupMetadataRequest": { @@ -1230,7 +1254,7 @@ "Type": { "target": "com.amazonaws.networkfirewall#RuleGroupType", "traits": { - "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains \nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains\nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains \nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains\nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
The maximum operating resources that this rule group can use. Rule group capacity is fixed at creation. \n When you update a rule group, you are limited to this capacity. When you reference a rule group \n from a firewall policy, Network Firewall reserves this capacity for the rule group.
\nYou can retrieve the capacity that would be required for a rule group before you create the rule group by calling \n CreateRuleGroup with DryRun
set to TRUE
.
The maximum operating resources that this rule group can use. Rule group capacity is fixed at creation.\n When you update a rule group, you are limited to this capacity. When you reference a rule group\n from a firewall policy, Network Firewall reserves this capacity for the rule group.
\nYou can retrieve the capacity that would be required for a rule group before you create the rule group by calling\n CreateRuleGroup with DryRun
set to TRUE
.
The last time that the rule group was changed.
" + } } } }, @@ -1293,7 +1323,7 @@ "Type": { "target": "com.amazonaws.networkfirewall#RuleGroupType", "traits": { - "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains \nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains\nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
The object that defines the rules in a rule group. This, along with RuleGroupResponse, define the rule group. You can retrieve all objects for a rule group by calling DescribeRuleGroup.
\nAWS Network Firewall uses a rule group to inspect and control network traffic. \n You define stateless rule groups to inspect individual packets and you define stateful rule groups to inspect packets in the context of their\n traffic flow.
\nTo use a rule group, you include it by reference in an Network Firewall firewall policy, then you use the policy in a firewall. You can reference a rule group from \n more than one firewall policy, and you can use a firewall policy in more than one firewall.
" + "smithy.api#documentation": "The object that defines the rules in a rule group. This, along with RuleGroupResponse, define the rule group. You can retrieve all objects for a rule group by calling DescribeRuleGroup.
\nNetwork Firewall uses a rule group to inspect and control network traffic.\n You define stateless rule groups to inspect individual packets and you define stateful rule groups to inspect packets in the context of their\n traffic flow.
\nTo use a rule group, you include it by reference in a Network Firewall firewall policy, then you use the policy in a firewall. You can reference a rule group from\n more than one firewall policy, and you can use a firewall policy in more than one firewall.
" } }, "RuleGroupResponse": { @@ -1355,7 +1385,7 @@ } }, "traits": { - "smithy.api#documentation": "The value to use in an Amazon CloudWatch custom metric dimension. This is used in the\n PublishMetrics
\n CustomAction. A CloudWatch custom metric dimension is a name/value pair that's\n part of the identity of a metric.
AWS Network Firewall sets the dimension name to CustomAction
and you provide the\n dimension value.
For more information about CloudWatch custom metric dimensions, see Publishing Custom Metrics in the Amazon CloudWatch User\n Guide.
" + "smithy.api#documentation": "The value to use in an Amazon CloudWatch custom metric dimension. This is used in the\n PublishMetrics
\n CustomAction. A CloudWatch custom metric dimension is a name/value pair that's\n part of the identity of a metric.
Network Firewall sets the dimension name to CustomAction
and you provide the\n dimension value.
For more information about CloudWatch custom metric dimensions, see Publishing Custom Metrics in the Amazon CloudWatch User\n Guide.
" } }, "com.amazonaws.networkfirewall#DimensionValue": { @@ -1471,6 +1501,42 @@ } } }, + "com.amazonaws.networkfirewall#EncryptionConfiguration": { + "type": "structure", + "members": { + "KeyId": { + "target": "com.amazonaws.networkfirewall#KeyId", + "traits": { + "smithy.api#documentation": "The ID of the Amazon Web Services Key Management Service (KMS) customer managed key. You can use any of the key identifiers that KMS supports, unless you're using a key that's managed by another account. If you're using a key managed by another account, then specify the key ARN. For more information, see Key ID in the Amazon Web Services KMS Developer Guide.
" + } + }, + "Type": { + "target": "com.amazonaws.networkfirewall#EncryptionType", + "traits": { + "smithy.api#documentation": "The type of Amazon Web Services KMS key to use for encryption of your Network Firewall resources.
", "smithy.api#required": {} } } }, "traits": { "smithy.api#documentation": "A complex type that contains optional Amazon Web Services Key Management Service (KMS) encryption settings for your Network Firewall resources. Your data is encrypted by default with an Amazon Web Services owned key that Amazon Web Services owns and manages for you. You can use either the Amazon Web Services owned key, or provide your own customer managed key. To learn more about KMS encryption of your Network Firewall resources, see Encryption at rest with Amazon Web Services Key Management Service in the Network Firewall Developer Guide.
" + } + }, + "com.amazonaws.networkfirewall#EncryptionType": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "CUSTOMER_KMS", + "name": "CUSTOMER_KMS" + }, + { + "value": "AWS_OWNED_KMS_KEY", + "name": "AWS_OWNED_KMS_KEY" + } + ] + } + }, "com.amazonaws.networkfirewall#EndpointId": { "type": "string" }, @@ -1522,13 +1588,13 @@ "SubnetChangeProtection": { "target": "com.amazonaws.networkfirewall#Boolean", "traits": { - "smithy.api#documentation": "A setting indicating whether the firewall is protected against changes to the subnet associations. \n Use this setting to protect against\n accidentally modifying the subnet associations for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against changes to the subnet associations.\n Use this setting to protect against\n accidentally modifying the subnet associations for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against a change to the firewall policy association. \n Use this setting to protect against\n accidentally modifying the firewall policy for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against a change to the firewall policy association.\n Use this setting to protect against\n accidentally modifying the firewall policy for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A complex type that contains the Amazon Web Services KMS encryption configuration settings for your firewall.
" + } } }, "traits": { - "smithy.api#documentation": "The firewall defines the configuration settings for an AWS Network Firewall firewall. These settings include the firewall policy, the subnets in your VPC to use for the firewall endpoints, and any tags that are attached to the firewall AWS resource.
\nThe status of the firewall, for example whether it's ready to filter network traffic,\n is provided in the corresponding FirewallStatus. You can retrieve both\n objects by calling DescribeFirewall.
" + "smithy.api#documentation": "The firewall defines the configuration settings for a Network Firewall firewall. These settings include the firewall policy, the subnets in your VPC to use for the firewall endpoints, and any tags that are attached to the firewall Amazon Web Services resource.
\nThe status of the firewall, for example whether it's ready to filter network traffic,\n is provided in the corresponding FirewallStatus. You can retrieve both\n objects by calling DescribeFirewall.
" } }, "com.amazonaws.networkfirewall#FirewallMetadata": { @@ -1600,7 +1672,7 @@ "StatelessFragmentDefaultActions": { "target": "com.amazonaws.networkfirewall#StatelessActions", "traits": { - "smithy.api#documentation": "The actions to take on a fragmented UDP packet if it doesn't match any of the stateless\n rules in the policy. Network Firewall only manages UDP packet fragments and silently drops packet fragments for other protocols. \n If you want non-matching fragmented UDP packets to be forwarded for\n stateful inspection, specify aws:forward_to_sfe
.
You must specify one of the standard actions: aws:pass
,\n aws:drop
, or aws:forward_to_sfe
. In addition, you can specify\n custom actions that are compatible with your standard section choice.
For example, you could specify [\"aws:pass\"]
or you could specify\n [\"aws:pass\", “customActionName”]
. For information about compatibility, see\n the custom action descriptions under CustomAction.
The actions to take on a fragmented UDP packet if it doesn't match any of the stateless\n rules in the policy. Network Firewall only manages UDP packet fragments and silently drops packet fragments for other protocols.\n If you want non-matching fragmented UDP packets to be forwarded for\n stateful inspection, specify aws:forward_to_sfe
.
You must specify one of the standard actions: aws:pass
,\n aws:drop
, or aws:forward_to_sfe
. In addition, you can specify\n custom actions that are compatible with your standard section choice.
For example, you could specify [\"aws:pass\"]
or you could specify\n [\"aws:pass\", “customActionName”]
. For information about compatibility, see\n the custom action descriptions under CustomAction.
The default actions to take on a packet that doesn't match any stateful rules. The stateful default action is optional, \n and is only valid when using the strict rule order.
\nValid values of the stateful default action:
\naws:drop_strict
\naws:drop_established
\naws:alert_strict
\naws:alert_established
\nFor more information, see \n Strict evaluation order in the AWS Network Firewall Developer Guide.\n
" + "smithy.api#documentation": "The default actions to take on a packet that doesn't match any stateful rules. The stateful default action is optional,\n and is only valid when using the strict rule order.
\nValid values of the stateful default action:
\naws:drop_strict
\naws:drop_established
\naws:alert_strict
\naws:alert_established
\nFor more information, see\n Strict evaluation order in the Network Firewall Developer Guide.\n
" } }, "StatefulEngineOptions": { "target": "com.amazonaws.networkfirewall#StatefulEngineOptions", "traits": { - "smithy.api#documentation": "Additional options governing how Network Firewall handles stateful rules. The stateful \n rule groups that you use in your policy must have stateful rule options settings that are compatible with these settings.
" + "smithy.api#documentation": "Additional options governing how Network Firewall handles stateful rules. The stateful\n rule groups that you use in your policy must have stateful rule options settings that are compatible with these settings.
" } } }, @@ -1712,6 +1784,18 @@ "traits": { "smithy.api#documentation": "The number of firewalls that are associated with this firewall policy.
" } + }, + "EncryptionConfiguration": { + "target": "com.amazonaws.networkfirewall#EncryptionConfiguration", + "traits": { + "smithy.api#documentation": "A complex type that contains the Amazon Web Services KMS encryption configuration settings for your firewall policy.
" + } + }, + "LastModifiedTime": { + "target": "com.amazonaws.networkfirewall#LastUpdateTime", + "traits": { + "smithy.api#documentation": "The last time that the firewall policy was changed.
" + } } }, "traits": { @@ -1818,21 +1902,21 @@ "Protocol": { "target": "com.amazonaws.networkfirewall#StatefulRuleProtocol", "traits": { - "smithy.api#documentation": "The protocol to inspect for. To specify all, you can use IP
, because all traffic on AWS and on the internet is IP.
The protocol to inspect for. To specify all, you can use IP
, because all traffic on Amazon Web Services and on the internet is IP.
The source IP address or address range to inspect for, in CIDR notation. \n To match with any address, specify ANY
.
Specify an IP address or a block of IP addresses in Classless Inter-Domain Routing (CIDR) notation. Network Firewall supports all address ranges for IPv4.
\nExamples:
\nTo configure Network Firewall to inspect for the IP address 192.0.2.44, specify 192.0.2.44/32
.
To configure Network Firewall to inspect for IP addresses from 192.0.2.0 to 192.0.2.255, specify 192.0.2.0/24
.
For more information about CIDR notation, see the Wikipedia entry Classless\n Inter-Domain Routing.
", + "smithy.api#documentation": "The source IP address or address range to inspect for, in CIDR notation.\n To match with any address, specify ANY
.
Specify an IP address or a block of IP addresses in Classless Inter-Domain Routing (CIDR) notation. Network Firewall supports all address ranges for IPv4.
\nExamples:
\nTo configure Network Firewall to inspect for the IP address 192.0.2.44, specify 192.0.2.44/32
.
To configure Network Firewall to inspect for IP addresses from 192.0.2.0 to 192.0.2.255, specify 192.0.2.0/24
.
For more information about CIDR notation, see the Wikipedia entry Classless\n Inter-Domain Routing.
", "smithy.api#required": {} } }, "SourcePort": { "target": "com.amazonaws.networkfirewall#Port", "traits": { - "smithy.api#documentation": "The source port to inspect for. You can specify an individual port, for \n example 1994
and you can specify a port\n range, for example 1990:1994
.\n To match with any port, specify ANY
.
The source port to inspect for. You can specify an individual port, for\n example 1994
and you can specify a port\n range, for example 1990:1994
.\n To match with any port, specify ANY
.
The destination IP address or address range to inspect for, in CIDR notation. \n To match with any address, specify ANY
.
Specify an IP address or a block of IP addresses in Classless Inter-Domain Routing (CIDR) notation. Network Firewall supports all address ranges for IPv4.
\nExamples:
\nTo configure Network Firewall to inspect for the IP address 192.0.2.44, specify 192.0.2.44/32
.
To configure Network Firewall to inspect for IP addresses from 192.0.2.0 to 192.0.2.255, specify 192.0.2.0/24
.
For more information about CIDR notation, see the Wikipedia entry Classless\n Inter-Domain Routing.
", + "smithy.api#documentation": "The destination IP address or address range to inspect for, in CIDR notation.\n To match with any address, specify ANY
.
Specify an IP address or a block of IP addresses in Classless Inter-Domain Routing (CIDR) notation. Network Firewall supports all address ranges for IPv4.
\nExamples:
\nTo configure Network Firewall to inspect for the IP address 192.0.2.44, specify 192.0.2.44/32
.
To configure Network Firewall to inspect for IP addresses from 192.0.2.0 to 192.0.2.255, specify 192.0.2.0/24
.
For more information about CIDR notation, see the Wikipedia entry Classless\n Inter-Domain Routing.
", "smithy.api#required": {} } }, "DestinationPort": { "target": "com.amazonaws.networkfirewall#Port", "traits": { - "smithy.api#documentation": "The destination port to inspect for. You can specify an individual port, for \n example 1994
and you can specify\n a port range, for example 1990:1994
.\n To match with any port, specify ANY
.
The destination port to inspect for. You can specify an individual port, for\n example 1994
and you can specify\n a port range, for example 1990:1994
.\n To match with any port, specify ANY
.
The basic rule criteria for AWS Network Firewall to use to inspect packet headers in stateful\n traffic flow inspection. Traffic flows that match the criteria are a match for the\n corresponding StatefulRule.
" + "smithy.api#documentation": "The basic rule criteria for Network Firewall to use to inspect packet headers in stateful\n traffic flow inspection. Traffic flows that match the criteria are a match for the\n corresponding StatefulRule.
" } }, "com.amazonaws.networkfirewall#IPSet": { @@ -1894,7 +1978,7 @@ } }, "traits": { - "smithy.api#documentation": "AWS doesn't currently have enough available capacity to fulfill your request. Try your\n request later.
", + "smithy.api#documentation": "Amazon Web Services doesn't currently have enough available capacity to fulfill your request. Try your\n request later.
", "smithy.api#error": "server" } }, @@ -1958,6 +2042,16 @@ "smithy.api#error": "client" } }, + "com.amazonaws.networkfirewall#KeyId": { + "type": "string", + "traits": { + "smithy.api#length": { + "min": 1, + "max": 2048 + }, + "smithy.api#pattern": "\\S" + } + }, "com.amazonaws.networkfirewall#Keyword": { "type": "string", "traits": { @@ -1968,6 +2062,9 @@ "smithy.api#pattern": ".*" } }, + "com.amazonaws.networkfirewall#LastUpdateTime": { + "type": "timestamp" + }, "com.amazonaws.networkfirewall#LimitExceededException": { "type": "structure", "members": { @@ -2004,6 +2101,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "FirewallPolicies", "pageSize": "MaxResults" } } @@ -2014,13 +2112,13 @@ "NextToken": { "target": "com.amazonaws.networkfirewall#PaginationToken", "traits": { - "smithy.api#documentation": "When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
The maximum number of objects that you want Network Firewall to return for this request. If more \n objects are available, in the response, Network Firewall provides a \n NextToken
value that you can use in a subsequent call to get the next batch of objects.
The maximum number of objects that you want Network Firewall to return for this request. If more\n objects are available, in the response, Network Firewall provides a\n NextToken
value that you can use in a subsequent call to get the next batch of objects.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
The maximum number of objects that you want Network Firewall to return for this request. If more \n objects are available, in the response, Network Firewall provides a \n NextToken
value that you can use in a subsequent call to get the next batch of objects.
The maximum number of objects that you want Network Firewall to return for this request. If more\n objects are available, in the response, Network Firewall provides a\n NextToken
value that you can use in a subsequent call to get the next batch of objects.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
The maximum number of objects that you want Network Firewall to return for this request. If more \n objects are available, in the response, Network Firewall provides a \n NextToken
value that you can use in a subsequent call to get the next batch of objects.
The maximum number of objects that you want Network Firewall to return for this request. If more\n objects are available, in the response, Network Firewall provides a\n NextToken
value that you can use in a subsequent call to get the next batch of objects.
The scope of the request. The default setting of ACCOUNT
or a setting of \n NULL
returns all of the rule groups in your account. A setting of \n MANAGED
returns all available managed rule groups.
The scope of the request. The default setting of ACCOUNT
or a setting of\n NULL
returns all of the rule groups in your account. A setting of\n MANAGED
returns all available managed rule groups.
Indicates the general category of the Amazon Web Services managed rule group.
" + } + }, + "Type": { + "target": "com.amazonaws.networkfirewall#RuleGroupType", + "traits": { + "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains stateless rules. If it is stateful, it contains stateful rules.
" } } } @@ -2167,7 +2279,7 @@ "NextToken": { "target": "com.amazonaws.networkfirewall#PaginationToken", "traits": { - "smithy.api#documentation": "When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
Retrieves the tags associated with the specified resource. Tags are key:value pairs that\n you can use to categorize and manage your resources, for purposes like billing. For\n example, you might set the tag key to \"customer\" and the value to the customer name or ID.\n You can specify one or more tags to add to each AWS resource, up to 50 tags for a\n resource.
\nYou can tag the AWS resources that you manage through AWS Network Firewall: firewalls, firewall\n policies, and rule groups.
", + "smithy.api#documentation": "Retrieves the tags associated with the specified resource. Tags are key:value pairs that\n you can use to categorize and manage your resources, for purposes like billing. For\n example, you might set the tag key to \"customer\" and the value to the customer name or ID.\n You can specify one or more tags to add to each Amazon Web Services resource, up to 50 tags for a\n resource.
\nYou can tag the Amazon Web Services resources that you manage through Network Firewall: firewalls, firewall\n policies, and rule groups.
", "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Tags", "pageSize": "MaxResults" } } @@ -2215,13 +2328,13 @@ "NextToken": { "target": "com.amazonaws.networkfirewall#PaginationToken", "traits": { - "smithy.api#documentation": "When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
The maximum number of objects that you want Network Firewall to return for this request. If more \n objects are available, in the response, Network Firewall provides a \n NextToken
value that you can use in a subsequent call to get the next batch of objects.
The maximum number of objects that you want Network Firewall to return for this request. If more\n objects are available, in the response, Network Firewall provides a\n NextToken
value that you can use in a subsequent call to get the next batch of objects.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
When you request a list of objects with a MaxResults
setting, if the number of objects that are still available\n for retrieval exceeds the maximum you requested, Network Firewall returns a NextToken
\n value in the response. To retrieve the next batch of objects, use the token returned from the prior request in your next request.
Defines where AWS Network Firewall sends logs for the firewall for one log type. This is used\n in LoggingConfiguration. You can send each type of log to an Amazon S3 bucket, a CloudWatch log group, or a Kinesis Data Firehose delivery stream.
\nNetwork Firewall generates logs for stateful rule groups. You can save alert and flow log\n types. The stateful rules engine records flow logs for all network traffic that it receives. \n It records alert logs for traffic that matches stateful rules that have the rule\n action set to DROP
or ALERT
.
Defines where Network Firewall sends logs for the firewall for one log type. This is used\n in LoggingConfiguration. You can send each type of log to an Amazon S3 bucket, a CloudWatch log group, or a Kinesis Data Firehose delivery stream.
\nNetwork Firewall generates logs for stateful rule groups. You can save alert and flow log\n types. The stateful rules engine records flow logs for all network traffic that it receives.\n It records alert logs for traffic that matches stateful rules that have the rule\n action set to DROP
or ALERT
.
Defines how AWS Network Firewall performs logging for a Firewall.
" + "smithy.api#documentation": "Defines how Network Firewall performs logging for a Firewall.
" } }, "com.amazonaws.networkfirewall#MatchAttributes": { @@ -2418,7 +2531,7 @@ "name": "network-firewall" }, "aws.protocols#awsJson1_0": {}, - "smithy.api#documentation": "This is the API Reference for AWS Network Firewall. This guide is for developers who need\n detailed information about the Network Firewall API actions, data types, and errors.
\nThe REST API requires you to handle connection details, such as calculating\n signatures, handling request retries, and error handling. For general information\n about using the AWS REST APIs, see AWS APIs.
\nTo access Network Firewall using the REST API endpoint:\n https://network-firewall.
\n
Alternatively, you can use one of the AWS SDKs to access an API that's tailored to\n the programming language or platform that you're using. For more information, see\n AWS SDKs.
\nFor descriptions of Network Firewall features, including and step-by-step\n instructions on how to use them through the Network Firewall console, see the Network Firewall Developer\n Guide.
\nNetwork Firewall is a stateful, managed, network firewall and intrusion detection and\n prevention service for Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at the\n perimeter of your VPC. This includes filtering traffic going to and coming from an internet\n gateway, NAT gateway, or over VPN or AWS Direct Connect. Network Firewall uses rules that are compatible\n with Suricata, a free, open source intrusion detection system (IDS) engine. \n AWS Network Firewall supports Suricata version 5.0.2. For information about Suricata, \n see the Suricata website.
\nYou can use Network Firewall to monitor and protect your VPC traffic in a number of ways.\n The following are just a few examples:
\nAllow domains or IP addresses for known AWS service endpoints, such as Amazon S3, and\n block all other forms of traffic.
\nUse custom lists of known bad domains to limit the types of domain names that your\n applications can access.
\nPerform deep packet inspection on traffic entering or leaving your VPC.
\nUse stateful protocol detection to filter protocols like HTTPS, regardless of the\n port used.
\nTo enable Network Firewall for your VPCs, you perform steps in both Amazon VPC and in\n Network Firewall. For information about using Amazon VPC, see Amazon VPC User Guide.
\nTo start using Network Firewall, do the following:
\n(Optional) If you don't already have a VPC that you want to protect, create it in\n Amazon VPC.
\nIn Amazon VPC, in each Availability Zone where you want to have a firewall endpoint, create a\n subnet for the sole use of Network Firewall.
\nIn Network Firewall, create stateless and stateful rule groups, \n to define the components of the network traffic filtering behavior that you want your firewall to have.
\nIn Network Firewall, create a firewall policy that uses your rule groups and\n specifies additional default traffic filtering behavior.
\nIn Network Firewall, create a firewall and specify your new firewall policy and \n VPC subnets. Network Firewall creates a firewall endpoint in each subnet that you\n specify, with the behavior that's defined in the firewall policy.
\nIn Amazon VPC, use ingress routing enhancements to route traffic through the new firewall\n endpoints.
\nThis is the API Reference for Network Firewall. This guide is for developers who need\n detailed information about the Network Firewall API actions, data types, and errors.
\nThe REST API requires you to handle connection details, such as calculating\n signatures, handling request retries, and error handling. For general information\n about using the Amazon Web Services REST APIs, see Amazon Web Services APIs.
\nTo access Network Firewall using the REST API endpoint:\n https://network-firewall.
\n
Alternatively, you can use one of the Amazon Web Services SDKs to access an API that's tailored to\n the programming language or platform that you're using. For more information, see\n Amazon Web Services SDKs.
\nFor descriptions of Network Firewall features, including and step-by-step\n instructions on how to use them through the Network Firewall console, see the Network Firewall Developer\n Guide.
\nNetwork Firewall is a stateful, managed, network firewall and intrusion detection and\n prevention service for Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at the\n perimeter of your VPC. This includes filtering traffic going to and coming from an internet\n gateway, NAT gateway, or over VPN or Direct Connect. Network Firewall uses rules that are compatible\n with Suricata, a free, open source intrusion detection system (IDS) engine.\n Network Firewall supports Suricata version 5.0.2. For information about Suricata,\n see the Suricata website.
\nYou can use Network Firewall to monitor and protect your VPC traffic in a number of ways.\n The following are just a few examples:
\nAllow domains or IP addresses for known Amazon Web Services service endpoints, such as Amazon S3, and\n block all other forms of traffic.
\nUse custom lists of known bad domains to limit the types of domain names that your\n applications can access.
\nPerform deep packet inspection on traffic entering or leaving your VPC.
\nUse stateful protocol detection to filter protocols like HTTPS, regardless of the\n port used.
\nTo enable Network Firewall for your VPCs, you perform steps in both Amazon VPC and in\n Network Firewall. For information about using Amazon VPC, see Amazon VPC User Guide.
\nTo start using Network Firewall, do the following:
\n(Optional) If you don't already have a VPC that you want to protect, create it in\n Amazon VPC.
\nIn Amazon VPC, in each Availability Zone where you want to have a firewall endpoint, create a\n subnet for the sole use of Network Firewall.
\nIn Network Firewall, create stateless and stateful rule groups,\n to define the components of the network traffic filtering behavior that you want your firewall to have.
\nIn Network Firewall, create a firewall policy that uses your rule groups and\n specifies additional default traffic filtering behavior.
\nIn Network Firewall, create a firewall and specify your new firewall policy and\n VPC subnets. Network Firewall creates a firewall endpoint in each subnet that you\n specify, with the behavior that's defined in the firewall policy.
\nIn Amazon VPC, use ingress routing enhancements to route traffic through the new firewall\n endpoints.
\nCreates or updates an AWS Identity and Access Management policy for your rule group or firewall policy. Use this to share rule groups and firewall policies between accounts. This operation works in conjunction with the AWS Resource Access Manager (RAM) service\n to manage resource sharing for Network Firewall.
\nUse this operation to create or update a resource policy for your rule group or firewall policy. In the policy, you specify the accounts that you want to share the resource with and the operations that you want the accounts to be able to perform.
\nWhen you add an account in the resource policy, you then run the following Resource Access Manager (RAM) operations to access and accept the shared rule group or firewall policy.
\n\n GetResourceShareInvitations - Returns the Amazon Resource Names (ARNs) of the resource share invitations.
\n\n AcceptResourceShareInvitation - Accepts the share invitation for a specified resource share.
\nFor additional information about resource sharing using RAM, see AWS Resource Access Manager User Guide.
" + "smithy.api#documentation": "Creates or updates an IAM policy for your rule group or firewall policy. Use this to share rule groups and firewall policies between accounts. This operation works in conjunction with the Amazon Web Services Resource Access Manager (RAM) service\n to manage resource sharing for Network Firewall.
\nUse this operation to create or update a resource policy for your rule group or firewall policy. In the policy, you specify the accounts that you want to share the resource with and the operations that you want the accounts to be able to perform.
\nWhen you add an account in the resource policy, you then run the following Resource Access Manager (RAM) operations to access and accept the shared rule group or firewall policy.
\n\n GetResourceShareInvitations - Returns the Amazon Resource Names (ARNs) of the resource share invitations.
\n\n AcceptResourceShareInvitation - Accepts the share invitation for a specified resource share.
\nFor additional information about resource sharing using RAM, see Resource Access Manager User Guide.
" } }, "com.amazonaws.networkfirewall#PutResourcePolicyRequest": { @@ -2748,7 +2864,7 @@ "Policy": { "target": "com.amazonaws.networkfirewall#PolicyString", "traits": { - "smithy.api#documentation": "The AWS Identity and Access Management policy statement that lists the accounts that you want to share your rule group or firewall policy with \n and the operations that you want the accounts to be able to perform.
\nFor a rule group resource, you can specify the following operations in the Actions section of the statement:
\nnetwork-firewall:CreateFirewallPolicy
\nnetwork-firewall:UpdateFirewallPolicy
\nnetwork-firewall:ListRuleGroups
\nFor a firewall policy resource, you can specify the following operations in the Actions section of the statement:
\nnetwork-firewall:CreateFirewall
\nnetwork-firewall:UpdateFirewall
\nnetwork-firewall:AssociateFirewallPolicy
\nnetwork-firewall:ListFirewallPolicies
\nIn the Resource section of the statement, you specify the ARNs for the rule groups and firewall policies that you want to share with the account that you specified in Arn
.
The IAM policy statement that lists the accounts that you want to share your rule group or firewall policy with\n and the operations that you want the accounts to be able to perform.
\nFor a rule group resource, you can specify the following operations in the Actions section of the statement:
\nnetwork-firewall:CreateFirewallPolicy
\nnetwork-firewall:UpdateFirewallPolicy
\nnetwork-firewall:ListRuleGroups
\nFor a firewall policy resource, you can specify the following operations in the Actions section of the statement:
\nnetwork-firewall:CreateFirewall
\nnetwork-firewall:UpdateFirewall
\nnetwork-firewall:AssociateFirewallPolicy
\nnetwork-firewall:ListFirewallPolicies
\nIn the Resource section of the statement, you specify the ARNs for the rule groups and firewall policies that you want to share with the account that you specified in Arn
.
The inspection criteria and action for a single stateless rule. AWS Network Firewall inspects each packet for the specified matching\n criteria. When a packet matches the criteria, Network Firewall performs the rule's actions on\n the packet.
" + "smithy.api#documentation": "The inspection criteria and action for a single stateless rule. Network Firewall inspects each packet for the specified matching\n criteria. When a packet matches the criteria, Network Firewall performs the rule's actions on\n the packet.
" } }, "com.amazonaws.networkfirewall#RuleGroup": { @@ -2889,12 +3020,12 @@ "StatefulRuleOptions": { "target": "com.amazonaws.networkfirewall#StatefulRuleOptions", "traits": { - "smithy.api#documentation": "Additional options governing how Network Firewall handles stateful rules. The policies where you use your stateful \n rule group must have stateful rule options settings that are compatible with these settings.
" + "smithy.api#documentation": "Additional options governing how Network Firewall handles stateful rules. The policies where you use your stateful\n rule group must have stateful rule options settings that are compatible with these settings.
" } } }, "traits": { - "smithy.api#documentation": "The object that defines the rules in a rule group. This, along with RuleGroupResponse, define the rule group. You can retrieve all objects for a rule group by calling DescribeRuleGroup.
\nAWS Network Firewall uses a rule group to inspect and control network traffic. \n You define stateless rule groups to inspect individual packets and you define stateful rule groups to inspect packets in the context of their\n traffic flow.
\nTo use a rule group, you include it by reference in an Network Firewall firewall policy, then you use the policy in a firewall. You can reference a rule group from \n more than one firewall policy, and you can use a firewall policy in more than one firewall.
" + "smithy.api#documentation": "The object that defines the rules in a rule group. This, along with RuleGroupResponse, define the rule group. You can retrieve all objects for a rule group by calling DescribeRuleGroup.
\nNetwork Firewall uses a rule group to inspect and control network traffic.\n You define stateless rule groups to inspect individual packets and you define stateful rule groups to inspect packets in the context of their\n traffic flow.
\nTo use a rule group, you include it by reference in an Network Firewall firewall policy, then you use the policy in a firewall. You can reference a rule group from\n more than one firewall policy, and you can use a firewall policy in more than one firewall.
" } }, "com.amazonaws.networkfirewall#RuleGroupMetadata": { @@ -2950,13 +3081,13 @@ "Type": { "target": "com.amazonaws.networkfirewall#RuleGroupType", "traits": { - "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains \nstateless rules. If it is stateful, it contains stateful rules.
" + "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains\nstateless rules. If it is stateful, it contains stateful rules.
" } }, "Capacity": { "target": "com.amazonaws.networkfirewall#RuleCapacity", "traits": { - "smithy.api#documentation": "The maximum operating resources that this rule group can use. Rule group capacity is fixed at creation. \n When you update a rule group, you are limited to this capacity. When you reference a rule group \n from a firewall policy, Network Firewall reserves this capacity for the rule group.
\nYou can retrieve the capacity that would be required for a rule group before you create the rule group by calling \n CreateRuleGroup with DryRun
set to TRUE
.
The maximum operating resources that this rule group can use. Rule group capacity is fixed at creation.\n When you update a rule group, you are limited to this capacity. When you reference a rule group\n from a firewall policy, Network Firewall reserves this capacity for the rule group.
\nYou can retrieve the capacity that would be required for a rule group before you create the rule group by calling\n CreateRuleGroup with DryRun
set to TRUE
.
The number of firewall policies that use this rule group.
" } + }, + "EncryptionConfiguration": { + "target": "com.amazonaws.networkfirewall#EncryptionConfiguration", + "traits": { + "smithy.api#documentation": "A complex type that contains the Amazon Web Services KMS encryption configuration settings for your rule group.
" + } + }, + "SourceMetadata": { + "target": "com.amazonaws.networkfirewall#SourceMetadata", + "traits": { + "smithy.api#documentation": "A complex type that contains metadata about the rule group that your own rule group is copied from. You can use the metadata to track the version updates made to the originating rule group.
" + } + }, + "SnsTopic": { + "target": "com.amazonaws.networkfirewall#ResourceArn", + "traits": { + "smithy.api#documentation": "The Amazon resource name (ARN) of the Amazon Simple Notification Service SNS topic that's\nused to record changes to the managed rule group. You can subscribe to the SNS topic to receive\nnotifications when the managed rule group is modified, such as for new versions and for version\nexpiration. For more information, see the Amazon Simple Notification Service Developer Guide..
" + } + }, + "LastModifiedTime": { + "target": "com.amazonaws.networkfirewall#LastUpdateTime", + "traits": { + "smithy.api#documentation": "The last time that the rule group was changed.
" + } } }, "traits": { @@ -3105,7 +3260,7 @@ "StatefulRules": { "target": "com.amazonaws.networkfirewall#StatefulRules", "traits": { - "smithy.api#documentation": "An array of individual stateful rules inspection criteria to be used together in a stateful rule group. \n Use this option to specify simple Suricata rules with protocol, source and destination, ports, direction, and rule options. \n For information about the Suricata Rules
format, see\n Rules Format.
An array of individual stateful rules inspection criteria to be used together in a stateful rule group.\n Use this option to specify simple Suricata rules with protocol, source and destination, ports, direction, and rule options.\n For information about the Suricata Rules
format, see\n Rules Format.
Stateful inspection criteria for a domain list rule group.
\nFor HTTPS traffic, domain filtering is SNI-based. It uses the server name indicator extension of the TLS handshake.
\nBy default, Network Firewall domain list inspection only includes traffic coming from the VPC where you deploy the firewall. To inspect traffic from IP addresses outside of the deployment VPC, you set the HOME_NET
rule variable to include the CIDR range of the deployment VPC plus the other CIDR ranges. For more information, see RuleVariables in this guide and Stateful domain list rule groups in AWS Network Firewall in the Network Firewall Developer Guide.
Stateful inspection criteria for a domain list rule group.
\nFor HTTPS traffic, domain filtering is SNI-based. It uses the server name indicator extension of the TLS handshake.
\nBy default, Network Firewall domain list inspection only includes traffic coming from the VPC where you deploy the firewall. To inspect traffic from IP addresses outside of the deployment VPC, you set the HOME_NET
rule variable to include the CIDR range of the deployment VPC plus the other CIDR ranges. For more information, see RuleVariables in this guide and Stateful domain list rule groups in Network Firewall in the Network Firewall Developer Guide.
The Amazon Resource Name (ARN) of the rule group that your own rule group is copied from.
" + } + }, + "SourceUpdateToken": { + "target": "com.amazonaws.networkfirewall#UpdateToken", + "traits": { + "smithy.api#documentation": "The update token of the Amazon Web Services managed rule group that your own rule group is copied from. To determine the update token for the managed rule group, call DescribeRuleGroup.
" + } + } + }, + "traits": { + "smithy.api#documentation": "High-level information about the managed rule group that your own rule group is copied from. You can use the the metadata to track version updates made to the originating rule group. You can retrieve all objects for a rule group by calling DescribeRuleGroup.
" + } + }, "com.amazonaws.networkfirewall#StatefulAction": { "type": "string", "traits": { @@ -3214,7 +3389,7 @@ "RuleOrder": { "target": "com.amazonaws.networkfirewall#RuleOrder", "traits": { - "smithy.api#documentation": "Indicates how to manage the order of stateful rule evaluation for the policy. DEFAULT_ACTION_ORDER
is\n the default behavior. Stateful rules are provided to the rule engine as Suricata compatible strings, and Suricata evaluates them\n based on certain settings. For more information, see \n Evaluation order for stateful rules in the AWS Network Firewall Developer Guide.\n
Indicates how to manage the order of stateful rule evaluation for the policy. DEFAULT_ACTION_ORDER
is\n the default behavior. Stateful rules are provided to the rule engine as Suricata compatible strings, and Suricata evaluates them\n based on certain settings. For more information, see\n Evaluation order for stateful rules in the Network Firewall Developer Guide.\n
A single Suricata rules specification, for use in a stateful rule group. \n Use this option to specify a simple Suricata rule with protocol, source and destination, ports, direction, and rule options. \n For information about the Suricata Rules
format, see\n Rules Format.
A single Suricata rules specification, for use in a stateful rule group.\n Use this option to specify a simple Suricata rule with protocol, source and destination, ports, direction, and rule options.\n For information about the Suricata Rules
format, see\n Rules Format.
Indicates how to manage the order of the rule evaluation for the rule group. DEFAULT_ACTION_ORDER
is\n the default behavior. Stateful rules are provided to the rule engine as Suricata compatible strings, and Suricata evaluates them\n based on certain settings. For more information, see \n Evaluation order for stateful rules in the AWS Network Firewall Developer Guide.\n
Indicates how to manage the order of the rule evaluation for the rule group. DEFAULT_ACTION_ORDER
is\n the default behavior. Stateful rules are provided to the rule engine as Suricata compatible strings, and Suricata evaluates them\n based on certain settings. For more information, see\n Evaluation order for stateful rules in the Network Firewall Developer Guide.\n
The ID for a subnet that you want to associate with the firewall. This is used with\n CreateFirewall and AssociateSubnets. AWS Network Firewall\n creates an instance of the associated firewall in each subnet that you specify, to filter\n traffic in the subnet's Availability Zone.
" + "smithy.api#documentation": "The ID for a subnet that you want to associate with the firewall. This is used with\n CreateFirewall and AssociateSubnets. Network Firewall\n creates an instance of the associated firewall in each subnet that you specify, to filter\n traffic in the subnet's Availability Zone.
" } }, "com.amazonaws.networkfirewall#SubnetMappings": { @@ -3538,7 +3713,7 @@ } }, "traits": { - "smithy.api#documentation": "The status of the firewall endpoint and firewall policy configuration for a single VPC\n subnet.
\nFor each VPC subnet that you associate with a firewall, AWS Network Firewall does the\n following:
\nInstantiates a firewall endpoint in the subnet, ready to take traffic.
\nConfigures the endpoint with the current firewall policy settings, to provide the\n filtering behavior for the endpoint.
\nWhen you update a firewall, for example to add a subnet association or change a rule\n group in the firewall policy, the affected sync states reflect out-of-sync or not ready\n status until the changes are complete.
" + "smithy.api#documentation": "The status of the firewall endpoint and firewall policy configuration for a single VPC\n subnet.
\nFor each VPC subnet that you associate with a firewall, Network Firewall does the\n following:
\nInstantiates a firewall endpoint in the subnet, ready to take traffic.
\nConfigures the endpoint with the current firewall policy settings, to provide the\n filtering behavior for the endpoint.
\nWhen you update a firewall, for example to add a subnet association or change a rule\n group in the firewall policy, the affected sync states reflect out-of-sync or not ready\n status until the changes are complete.
" } }, "com.amazonaws.networkfirewall#SyncStateConfig": { @@ -3644,7 +3819,7 @@ } }, "traits": { - "smithy.api#documentation": "A key:value pair associated with an AWS resource. The key:value pair can be anything you\n define. Typically, the tag key represents a category (such as \"environment\") and the tag\n value represents a specific value within that category (such as \"test,\" \"development,\" or\n \"production\"). You can add up to 50 tags to each AWS resource.
" + "smithy.api#documentation": "A key:value pair associated with an Amazon Web Services resource. The key:value pair can be anything you\n define. Typically, the tag key represents a category (such as \"environment\") and the tag\n value represents a specific value within that category (such as \"test,\" \"development,\" or\n \"production\"). You can add up to 50 tags to each Amazon Web Services resource.
" } }, "com.amazonaws.networkfirewall#TagKey": { @@ -3704,7 +3879,7 @@ } ], "traits": { - "smithy.api#documentation": "Adds the specified tags to the specified resource. Tags are key:value pairs that you can\n use to categorize and manage your resources, for purposes like billing. For example, you\n might set the tag key to \"customer\" and the value to the customer name or ID. You can\n specify one or more tags to add to each AWS resource, up to 50 tags for a resource.
\nYou can tag the AWS resources that you manage through AWS Network Firewall: firewalls, firewall\n policies, and rule groups.
" + "smithy.api#documentation": "Adds the specified tags to the specified resource. Tags are key:value pairs that you can\n use to categorize and manage your resources, for purposes like billing. For example, you\n might set the tag key to \"customer\" and the value to the customer name or ID. You can\n specify one or more tags to add to each Amazon Web Services resource, up to 50 tags for a resource.
\nYou can tag the Amazon Web Services resources that you manage through Network Firewall: firewalls, firewall\n policies, and rule groups.
" } }, "com.amazonaws.networkfirewall#TagResourceRequest": { @@ -3818,7 +3993,7 @@ } ], "traits": { - "smithy.api#documentation": "Removes the tags with the specified keys from the specified resource. Tags are key:value\n pairs that you can use to categorize and manage your resources, for purposes like billing.\n For example, you might set the tag key to \"customer\" and the value to the customer name or\n ID. You can specify one or more tags to add to each AWS resource, up to 50 tags for a\n resource.
\nYou can manage tags for the AWS resources that you manage through AWS Network Firewall:\n firewalls, firewall policies, and rule groups.
" + "smithy.api#documentation": "Removes the tags with the specified keys from the specified resource. Tags are key:value\n pairs that you can use to categorize and manage your resources, for purposes like billing.\n For example, you might set the tag key to \"customer\" and the value to the customer name or\n ID. You can specify one or more tags to add to each Amazon Web Services resource, up to 50 tags for a\n resource.
\nYou can manage tags for the Amazon Web Services resources that you manage through Network Firewall:\n firewalls, firewall policies, and rule groups.
" } }, "com.amazonaws.networkfirewall#UntagResourceRequest": { @@ -4022,6 +4197,90 @@ } } }, + "com.amazonaws.networkfirewall#UpdateFirewallEncryptionConfiguration": { + "type": "operation", + "input": { + "target": "com.amazonaws.networkfirewall#UpdateFirewallEncryptionConfigurationRequest" + }, + "output": { + "target": "com.amazonaws.networkfirewall#UpdateFirewallEncryptionConfigurationResponse" + }, + "errors": [ + { + "target": "com.amazonaws.networkfirewall#InternalServerError" + }, + { + "target": "com.amazonaws.networkfirewall#InvalidRequestException" + }, + { + "target": "com.amazonaws.networkfirewall#InvalidTokenException" + }, + { + "target": "com.amazonaws.networkfirewall#ResourceNotFoundException" + }, + { + "target": "com.amazonaws.networkfirewall#ResourceOwnerCheckException" + }, + { + "target": "com.amazonaws.networkfirewall#ThrottlingException" + } + ], + "traits": { + "smithy.api#documentation": "A complex type that contains settings for encryption of your firewall resources.
" + } + }, + "com.amazonaws.networkfirewall#UpdateFirewallEncryptionConfigurationRequest": { + "type": "structure", + "members": { + "UpdateToken": { + "target": "com.amazonaws.networkfirewall#UpdateToken", + "traits": { + "smithy.api#documentation": "An optional token that you can use for optimistic locking. Network Firewall returns a token to your requests that access the firewall. The token marks the state of the firewall resource at the time of the request.
\nTo make an unconditional change to the firewall, omit the token in your update request. Without the token, Network Firewall performs your updates regardless of whether the firewall has changed since you last retrieved it.
\nTo make a conditional change to the firewall, provide the token in your update request. Network Firewall uses the token to ensure that the firewall hasn't changed since you last retrieved it. If it has changed, the operation fails with an InvalidTokenException
. If this happens, retrieve the firewall again to get a current copy of it with a new token. Reapply your changes as needed, then try the operation again using the new token.
The Amazon Resource Name (ARN) of the firewall.
" + } + }, + "FirewallName": { + "target": "com.amazonaws.networkfirewall#ResourceName", + "traits": { + "smithy.api#documentation": "The descriptive name of the firewall. You can't change the name of a firewall after you create it.
" + } + }, + "EncryptionConfiguration": { + "target": "com.amazonaws.networkfirewall#EncryptionConfiguration" + } + } + }, + "com.amazonaws.networkfirewall#UpdateFirewallEncryptionConfigurationResponse": { + "type": "structure", + "members": { + "FirewallArn": { + "target": "com.amazonaws.networkfirewall#ResourceArn", + "traits": { + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the firewall.
" + } + }, + "FirewallName": { + "target": "com.amazonaws.networkfirewall#ResourceName", + "traits": { + "smithy.api#documentation": "The descriptive name of the firewall. You can't change the name of a firewall after you create it.
" + } + }, + "UpdateToken": { + "target": "com.amazonaws.networkfirewall#UpdateToken", + "traits": { + "smithy.api#documentation": "An optional token that you can use for optimistic locking. Network Firewall returns a token to your requests that access the firewall. The token marks the state of the firewall resource at the time of the request.
\nTo make an unconditional change to the firewall, omit the token in your update request. Without the token, Network Firewall performs your updates regardless of whether the firewall has changed since you last retrieved it.
\nTo make a conditional change to the firewall, provide the token in your update request. Network Firewall uses the token to ensure that the firewall hasn't changed since you last retrieved it. If it has changed, the operation fails with an InvalidTokenException
. If this happens, retrieve the firewall again to get a current copy of it with a new token. Reapply your changes as needed, then try the operation again using the new token.
Modifies the flag, ChangeProtection
, which indicates whether it \n is possible to change the firewall. If the flag is set to TRUE
, the firewall is protected \n from changes. This setting helps protect against accidentally changing a firewall that's in use.
Modifies the flag, ChangeProtection
, which indicates whether it\n is possible to change the firewall. If the flag is set to TRUE
, the firewall is protected\n from changes. This setting helps protect against accidentally changing a firewall that's in use.
A setting indicating whether the firewall is protected against a change to the firewall policy association. \n Use this setting to protect against\n accidentally modifying the firewall policy for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against a change to the firewall policy association.\n Use this setting to protect against\n accidentally modifying the firewall policy for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against a change to the firewall policy association. \n Use this setting to protect against\n accidentally modifying the firewall policy for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against a change to the firewall policy association.\n Use this setting to protect against\n accidentally modifying the firewall policy for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
Indicates whether you want Network Firewall to just check the validity of the request, rather than run the request.
\nIf set to TRUE
, Network Firewall checks whether the request can run successfully, \n but doesn't actually make the requested changes. The call returns the value that the request would return if you ran it with \n dry run set to FALSE
, but doesn't make additions or changes to your resources. This option allows you to make sure that you have \n the required permissions to run the request and that your request parameters are valid.
If set to FALSE
, Network Firewall makes the requested changes to your resources.
Indicates whether you want Network Firewall to just check the validity of the request, rather than run the request.
\nIf set to TRUE
, Network Firewall checks whether the request can run successfully,\n but doesn't actually make the requested changes. The call returns the value that the request would return if you ran it with\n dry run set to FALSE
, but doesn't make additions or changes to your resources. This option allows you to make sure that you have\n the required permissions to run the request and that your request parameters are valid.
If set to FALSE
, Network Firewall makes the requested changes to your resources.
A complex type that contains settings for encryption of your firewall policy resources.
" } } } @@ -4339,13 +4604,13 @@ "Rules": { "target": "com.amazonaws.networkfirewall#RulesString", "traits": { - "smithy.api#documentation": "A string containing stateful rule group rules specifications in Suricata flat format, with one rule\nper line. Use this to import your existing Suricata compatible rule groups.
\nYou must provide either this rules setting or a populated RuleGroup
setting, but not both.
You can provide your rule group specification in Suricata flat format through this setting when you create or update your rule group. The call \nresponse returns a RuleGroup object that Network Firewall has populated from your string.
" + "smithy.api#documentation": "A string containing stateful rule group rules specifications in Suricata flat format, with one rule\nper line. Use this to import your existing Suricata compatible rule groups.
\nYou must provide either this rules setting or a populated RuleGroup
setting, but not both.
You can provide your rule group specification in Suricata flat format through this setting when you create or update your rule group. The call\nresponse returns a RuleGroup object that Network Firewall has populated from your string.
" } }, "Type": { "target": "com.amazonaws.networkfirewall#RuleGroupType", "traits": { - "smithy.api#documentation": "Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains \nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
Indicates whether the rule group is stateless or stateful. If the rule group is stateless, it contains\nstateless rules. If it is stateful, it contains stateful rules.
\nThis setting is required for requests that do not include the RuleGroupARN
.
Indicates whether you want Network Firewall to just check the validity of the request, rather than run the request.
\nIf set to TRUE
, Network Firewall checks whether the request can run successfully, \n but doesn't actually make the requested changes. The call returns the value that the request would return if you ran it with \n dry run set to FALSE
, but doesn't make additions or changes to your resources. This option allows you to make sure that you have \n the required permissions to run the request and that your request parameters are valid.
If set to FALSE
, Network Firewall makes the requested changes to your resources.
Indicates whether you want Network Firewall to just check the validity of the request, rather than run the request.
\nIf set to TRUE
, Network Firewall checks whether the request can run successfully,\n but doesn't actually make the requested changes. The call returns the value that the request would return if you ran it with\n dry run set to FALSE
, but doesn't make additions or changes to your resources. This option allows you to make sure that you have\n the required permissions to run the request and that your request parameters are valid.
If set to FALSE
, Network Firewall makes the requested changes to your resources.
A complex type that contains settings for encryption of your rule group resources.
" + } + }, + "SourceMetadata": { + "target": "com.amazonaws.networkfirewall#SourceMetadata", + "traits": { + "smithy.api#documentation": "A complex type that contains metadata about the rule group that your own rule group is copied from. You can use the metadata to keep track of updates made to the originating rule group.
" } } } @@ -4437,7 +4714,7 @@ "SubnetChangeProtection": { "target": "com.amazonaws.networkfirewall#Boolean", "traits": { - "smithy.api#documentation": "A setting indicating whether the firewall is protected against changes to the subnet associations. \n Use this setting to protect against\n accidentally modifying the subnet associations for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against changes to the subnet associations.\n Use this setting to protect against\n accidentally modifying the subnet associations for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against changes to the subnet associations. \n Use this setting to protect against\n accidentally modifying the subnet associations for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
A setting indicating whether the firewall is protected against changes to the subnet associations.\n Use this setting to protect against\n accidentally modifying the subnet associations for a firewall that is in use. When you create a firewall, the operation initializes this setting to TRUE
.
Starts a recommender that is INACTIVE. Starting a recommender does not\n create any new models, but resumes billing and automatic retraining for the recommender.
", + "smithy.api#idempotent": {} + } + }, + "com.amazonaws.personalize#StartRecommenderRequest": { + "type": "structure", + "members": { + "recommenderArn": { + "target": "com.amazonaws.personalize#Arn", + "traits": { + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the recommender to start.
", + "smithy.api#required": {} + } + } + } + }, + "com.amazonaws.personalize#StartRecommenderResponse": { + "type": "structure", + "members": { + "recommenderArn": { + "target": "com.amazonaws.personalize#Arn", + "traits": { + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the recommender you started.
" + } + } + } + }, "com.amazonaws.personalize#Status": { "type": "string", "traits": { @@ -6395,11 +6472,61 @@ } } }, + "com.amazonaws.personalize#StopRecommender": { + "type": "operation", + "input": { + "target": "com.amazonaws.personalize#StopRecommenderRequest" + }, + "output": { + "target": "com.amazonaws.personalize#StopRecommenderResponse" + }, + "errors": [ + { + "target": "com.amazonaws.personalize#InvalidInputException" + }, + { + "target": "com.amazonaws.personalize#ResourceInUseException" + }, + { + "target": "com.amazonaws.personalize#ResourceNotFoundException" + } + ], + "traits": { + "smithy.api#documentation": "Stops a recommender that is ACTIVE. Stopping a recommender halts billing and automatic retraining for the recommender.
", + "smithy.api#idempotent": {} + } + }, + "com.amazonaws.personalize#StopRecommenderRequest": { + "type": "structure", + "members": { + "recommenderArn": { + "target": "com.amazonaws.personalize#Arn", + "traits": { + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the recommender to stop.
", + "smithy.api#required": {} + } + } + } + }, + "com.amazonaws.personalize#StopRecommenderResponse": { + "type": "structure", + "members": { + "recommenderArn": { + "target": "com.amazonaws.personalize#Arn", + "traits": { + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the recommender you stopped.
" + } + } + } + }, "com.amazonaws.personalize#StopSolutionVersionCreation": { "type": "operation", "input": { "target": "com.amazonaws.personalize#StopSolutionVersionCreationRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.personalize#InvalidInputException" @@ -6709,7 +6836,7 @@ } ], "traits": { - "smithy.api#documentation": "Updates a campaign by either deploying a new solution or changing the value of the\n campaign's minProvisionedTPS
parameter.
To update a campaign, the campaign status must be ACTIVE or CREATE FAILED.\n Check the campaign status using the DescribeCampaign operation.
\nYou must wait until the status
of the\n updated campaign is ACTIVE
before asking the campaign for recommendations.
For more information on campaigns, see CreateCampaign.
", + "smithy.api#documentation": "Updates a campaign by either deploying a new solution or changing the value of the\n campaign's minProvisionedTPS
parameter.
To update a campaign, the campaign status must be ACTIVE or CREATE FAILED.\n Check the campaign status using the DescribeCampaign operation.
\n\nYou can still get recommendations from a campaign while an update is in progress.\n The campaign will use the previous solution version and campaign configuration to generate recommendations until the latest campaign update status is Active
.\n
For more information on campaigns, see CreateCampaign.
", "smithy.api#idempotent": {} } }, diff --git a/aws/sdk/aws-models/polly.json b/aws/sdk/aws-models/polly.json index 4112e4878d..75f282b23f 100644 --- a/aws/sdk/aws-models/polly.json +++ b/aws/sdk/aws-models/polly.json @@ -558,6 +558,10 @@ { "value": "ca-ES", "name": "ca_ES" + }, + { + "value": "de-AT", + "name": "de_AT" } ] } @@ -1951,6 +1955,10 @@ { "value": "Arlet", "name": "Arlet" + }, + { + "value": "Hannah", + "name": "Hannah" } ] } diff --git a/aws/sdk/aws-models/pricing.json b/aws/sdk/aws-models/pricing.json index 5582f43c42..268e5ceee5 100644 --- a/aws/sdk/aws-models/pricing.json +++ b/aws/sdk/aws-models/pricing.json @@ -31,18 +31,6 @@ "shapes": { "com.amazonaws.pricing#AWSPriceListService": { "type": "service", - "version": "2017-10-15", - "operations": [ - { - "target": "com.amazonaws.pricing#DescribeServices" - }, - { - "target": "com.amazonaws.pricing#GetAttributeValues" - }, - { - "target": "com.amazonaws.pricing#GetProducts" - } - ], "traits": { "aws.api#service": { "sdkId": "Pricing", @@ -55,9 +43,21 @@ "name": "pricing" }, "aws.protocols#awsJson1_1": {}, - "smithy.api#documentation": "Amazon Web Services Price List Service API (Amazon Web Services Price List Service) is a centralized and convenient way to\n programmatically query Amazon Web Services for services, products, and pricing information. The Amazon Web Services Price List Service\n uses standardized product attributes such as Location
, Storage\n Class
, and Operating System
, and provides prices at the SKU\n level. You can use the Amazon Web Services Price List Service to build cost control and scenario planning tools, reconcile\n billing data, forecast future spend for budgeting purposes, and provide cost benefit\n analysis that compare your internal workloads with Amazon Web Services.
Use GetServices
without a service code to retrieve the service codes for all AWS services, then \n GetServices
with a service code to retreive the attribute names for \n that service. After you have the service code and attribute names, you can use GetAttributeValues
\n to see what values are available for an attribute. With the service code and an attribute name and value, \n you can use GetProducts
to find specific products that you're interested in, such as \n an AmazonEC2
instance, with a Provisioned IOPS
\n volumeType
.
Service Endpoint
\nAmazon Web Services Price List Service API provides the following two endpoints:
\nhttps://api.pricing.us-east-1.amazonaws.com
\nhttps://api.pricing.ap-south-1.amazonaws.com
\nAmazon Web Services Price List Service API (Amazon Web Services Price List Service) is a centralized and convenient way to\n programmatically query Amazon Web Services for services, products, and pricing information. The Amazon Web Services Price List Service\n uses standardized product attributes such as Location
, Storage\n Class
, and Operating System
, and provides prices at the SKU\n level. You can use the Amazon Web Services Price List Service to build cost control and scenario planning tools, reconcile\n billing data, forecast future spend for budgeting purposes, and provide cost benefit\n analysis that compare your internal workloads with Amazon Web Services.
Use GetServices
without a service code to retrieve the service codes for all AWS services, then \n GetServices
with a service code to retrieve the attribute names for \n that service. After you have the service code and attribute names, you can use GetAttributeValues
\n to see what values are available for an attribute. With the service code and an attribute name and value, \n you can use GetProducts
to find specific products that you're interested in, such as \n an AmazonEC2
instance, with a Provisioned IOPS
\n volumeType
.
Service Endpoint
\nAmazon Web Services Price List Service API provides the following two endpoints:
\nhttps://api.pricing.us-east-1.amazonaws.com
\nhttps://api.pricing.ap-south-1.amazonaws.com
\nThe pagination token for the next set of retreivable results.
" + "smithy.api#documentation": "The pagination token for the next set of retrievable results.
" } } } @@ -266,7 +266,7 @@ } ], "traits": { - "smithy.api#documentation": "Returns a list of attribute values. Attibutes are similar to the details \n in a Price List API offer file. For a list of available attributes, see \n Offer File Definitions\n in the Amazon Web Services Billing and Cost Management User Guide.
", + "smithy.api#documentation": "Returns a list of attribute values. Attributes are similar to the details \n in a Price List API offer file. For a list of available attributes, see \n Offer File Definitions\n in the Amazon Web Services Billing and Cost Management User Guide.
", "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", diff --git a/aws/sdk/aws-models/rds-data.json b/aws/sdk/aws-models/rds-data.json index 2e3e75b220..1a1d0a768a 100644 --- a/aws/sdk/aws-models/rds-data.json +++ b/aws/sdk/aws-models/rds-data.json @@ -1,6 +1,19 @@ { "smithy": "1.0", "shapes": { + "com.amazonaws.rdsdata#AccessDeniedException": { + "type": "structure", + "members": { + "message": { + "target": "com.amazonaws.rdsdata#ErrorMessage" + } + }, + "traits": { + "smithy.api#documentation": "You do not have sufficient access to perform this action.
", + "smithy.api#error": "client", + "smithy.api#httpError": 403 + } + }, "com.amazonaws.rdsdata#Arn": { "type": "string", "traits": { @@ -31,13 +44,13 @@ "longValues": { "target": "com.amazonaws.rdsdata#LongArray", "traits": { - "smithy.api#documentation": "An array of floating point numbers.
" + "smithy.api#documentation": "An array of integers.
" } }, "doubleValues": { "target": "com.amazonaws.rdsdata#DoubleArray", "traits": { - "smithy.api#documentation": "An array of integers.
" + "smithy.api#documentation": "An array of floating-point numbers.
" } }, "stringValues": { @@ -88,6 +101,9 @@ "target": "com.amazonaws.rdsdata#BatchExecuteStatementResponse" }, "errors": [ + { + "target": "com.amazonaws.rdsdata#AccessDeniedException" + }, { "target": "com.amazonaws.rdsdata#BadRequestException" }, @@ -107,9 +123,9 @@ "traits": { "smithy.api#documentation": "Runs a batch SQL statement over an array of data.
\nYou can run bulk update and insert operations for multiple records using a DML \n statement with different parameter sets. Bulk operations can provide a significant \n performance improvement over individual insert and update operations.
\nIf a call isn't part of a transaction because it doesn't include the\n transactionID
parameter, changes that result from the call are\n committed automatically.
Starts a SQL transaction.
\n \nA transaction can run for a maximum of 24 hours. A transaction is terminated and \n rolled back automatically after 24 hours.
\nA transaction times out if no calls use its transaction ID in three minutes. \n If a transaction times out before it's committed, it's rolled back\n automatically.
\nDDL statements inside a transaction cause an implicit commit. We recommend \n that you run each DDL statement in a separate ExecuteStatement
call with \n continueAfterTimeout
enabled.
Ends a SQL transaction started with the BeginTransaction
operation and\n commits the changes.
An array of floating point numbers.
\nSome array entries can be null.
\nAn array of floating-point numbers.
\nSome array entries can be null.
\nRuns one or more SQL statements.
\nThis operation is deprecated. Use the BatchExecuteStatement
or\n ExecuteStatement
operation.
Runs a SQL statement against a database.
\nIf a call isn't part of a transaction because it doesn't include the\n transactionID
parameter, changes that result from the call are\n committed automatically.
The response size limit is 1 MB. If the call returns more than 1 MB of response data, the call is terminated.
", + "smithy.api#documentation": "Runs a SQL statement against a database.
\nIf a call isn't part of a transaction because it doesn't include the\n transactionID
parameter, changes that result from the call are\n committed automatically.
If the binary response data from the database is more than 1 MB, the call is terminated.
", "smithy.api#http": { + "code": 200, "method": "POST", - "uri": "/Execute", - "code": 200 + "uri": "/Execute" } } }, @@ -703,6 +731,12 @@ "traits": { "smithy.api#documentation": "Options that control how the result set is returned.
" } + }, + "formatRecordsAs": { + "target": "com.amazonaws.rdsdata#RecordsFormatType", + "traits": { + "smithy.api#documentation": "A value that indicates whether to format the result set as a single JSON string.\n This parameter only applies to SELECT
statements and is ignored for\n other types of statements. Allowed values are NONE
and JSON
.\n The default value is NONE
. The result is returned in the formattedRecords
field.
For usage information about the JSON format for result sets, see\n Using the Data API\n in the Amazon Aurora User Guide.
" + } } }, "traits": { @@ -715,13 +749,13 @@ "records": { "target": "com.amazonaws.rdsdata#SqlRecords", "traits": { - "smithy.api#documentation": "The records returned by the SQL statement.
" + "smithy.api#documentation": "The records returned by the SQL statement. This field is blank if the\n formatRecordsAs
parameter is set to JSON
.
Metadata for the columns included in the results.
" + "smithy.api#documentation": "Metadata for the columns included in the results. This field is blank if the\n formatRecordsAs
parameter is set to JSON
.
Values for fields generated during the request.
\n \nThe generatedFields
data isn't supported by Aurora PostgreSQL.\n To get the values of generated fields, use the RETURNING
clause. For\n more information, see Returning Data From\n Modified Rows in the PostgreSQL documentation.
Values for fields generated during a DML request.
\n \nThe generatedFields
data isn't supported by Aurora PostgreSQL.\n To get the values of generated fields, use the RETURNING
clause. For\n more information, see Returning Data From\n Modified Rows in the PostgreSQL documentation.
A string value that represents the result set of a SELECT
statement\n in JSON format. This value is only present when the formatRecordsAs
\n parameter is set to JSON
.
The size limit for this field is currently 10 MB. If the JSON-formatted string representing the\n result set requires more than 10 MB, the call returns an error.
" } } }, @@ -813,6 +853,9 @@ "smithy.api#httpError": 403 } }, + "com.amazonaws.rdsdata#FormattedSqlRecords": { + "type": "string" + }, "com.amazonaws.rdsdata#Id": { "type": "string", "traits": { @@ -845,6 +888,21 @@ "smithy.api#documentation": "An array of integers.
\nSome array entries can be null.
\nAmazon RDS provides an HTTP endpoint to run SQL statements on an Amazon Aurora\n Serverless DB cluster. To run these statements, you work with the Data Service\n API.
\nFor more information about the Data Service API, see\n Using the Data API\n in the Amazon Aurora User Guide.
", + "smithy.api#title": "AWS RDS DataService" + }, "version": "2018-08-01", "operations": [ { @@ -892,23 +963,7 @@ { "target": "com.amazonaws.rdsdata#RollbackTransaction" } - ], - "traits": { - "aws.api#service": { - "sdkId": "RDS Data", - "arnNamespace": "rds-data", - "cloudFormationName": "RdsDataService", - "cloudTrailEventSource": "rds-data.amazonaws.com", - "endpointPrefix": "rds-data" - }, - "aws.auth#sigv4": { - "name": "rds-data" - }, - "aws.protocols#restJson1": {}, - "smithy.api#cors": {}, - "smithy.api#documentation": "Amazon RDS provides an HTTP endpoint to run SQL statements on an Amazon Aurora\n Serverless DB cluster. To run these statements, you work with the Data Service\n API.
\nFor more information about the Data Service API, see Using the Data API for Aurora\n Serverless in the Amazon Aurora User Guide.
", - "smithy.api#title": "AWS RDS DataService" - } + ] }, "com.amazonaws.rdsdata#Record": { "type": "structure", @@ -921,7 +976,7 @@ } }, "traits": { - "smithy.api#documentation": "A record returned by a call.
" + "smithy.api#documentation": "A record returned by a call.
\nThis data structure is only used with the deprecated ExecuteSql
operation.\n Use the BatchExecuteStatement
or ExecuteStatement
operation instead.
The result set returned by a SQL statement.
" + "smithy.api#documentation": "The result set returned by a SQL statement.
\nThis data structure is only used with the deprecated ExecuteSql
operation.\n Use the BatchExecuteStatement
or ExecuteStatement
operation instead.
A value that indicates how a field of DECIMAL
type is represented\n in the response. The value of STRING
, the default, specifies that\n it is converted to a String value. The value of DOUBLE_OR_LONG
\n specifies that it is converted to a Long value if its scale is 0, or to a Double\n value otherwise.
Conversion to Double or Long can result in roundoff errors due to precision loss.\n We recommend converting to String, especially when working with currency values.
\nA value that indicates how a field of LONG
type is represented.\n Allowed values are LONG
and STRING
. The default\n is LONG
. Specify STRING
if the length or\n precision of numeric values might cause truncation or rounding errors.\n
Performs a rollback of a transaction. Rolling back a transaction cancels its changes.
", "smithy.api#http": { + "code": 200, "method": "POST", - "uri": "/RollbackTransaction", - "code": 200 + "uri": "/RollbackTransaction" } } }, @@ -1151,7 +1230,7 @@ } }, "traits": { - "smithy.api#documentation": "The result of a SQL statement.
\n \nThis data type is deprecated.
\nThe result of a SQL statement.
\n \nThis data structure is only used with the deprecated ExecuteSql
operation.\n Use the BatchExecuteStatement
or ExecuteStatement
operation instead.
A structure value returned by a call.
" + "smithy.api#documentation": "A structure value returned by a call.
\nThis data structure is only used with the deprecated ExecuteSql
operation.\n Use the BatchExecuteStatement
or ExecuteStatement
operation instead.
Contains the value of a column.
\n \nThis data type is deprecated.
\nContains the value of a column.
\n \nThis data structure is only used with the deprecated ExecuteSql
operation.\n Use the BatchExecuteStatement
or ExecuteStatement
operation instead.
Creates a custom Availability Zone (AZ).
\nA custom AZ is an on-premises AZ that is integrated with a VMware vSphere cluster.
\nFor more information about RDS on VMware, see the \n \n RDS on VMware User Guide.\n
" - } - }, - "com.amazonaws.rds#CreateCustomAvailabilityZoneMessage": { - "type": "structure", - "members": { - "CustomAvailabilityZoneName": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The name of the custom Availability Zone (AZ).
", - "smithy.api#required": {} - } - }, - "ExistingVpnId": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The ID of an existing virtual private network (VPN) between the Amazon RDS website and\n the VMware vSphere cluster.
" - } - }, - "NewVpnTunnelName": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The name of a new VPN tunnel between the Amazon RDS website and the VMware vSphere cluster.
\nSpecify this parameter only if ExistingVpnId
isn't specified.
The IP address of network traffic from your on-premises data center. A custom AZ receives the network traffic.
\nSpecify this parameter only if ExistingVpnId
isn't specified.
The amount of time, in days, to retain Performance Insights data. Valid values are 7 or 731 (2 years).
\nValid for: Multi-AZ DB clusters only
" } + }, + "ServerlessV2ScalingConfiguration": { + "target": "com.amazonaws.rds#ServerlessV2ScalingConfiguration" } }, "traits": { @@ -2646,7 +2576,7 @@ "AvailabilityZone": { "target": "com.amazonaws.rds#String", "traits": { - "smithy.api#documentation": "The Availability Zone (AZ) where the database will be created. For information on\n Amazon Web Services Regions and Availability Zones, see \n Regions\n and Availability Zones.
\n\n Amazon Aurora\n
\nNot applicable. Availability Zones are managed by the DB cluster.
\nDefault: A random, system-chosen Availability Zone in the endpoint's Amazon Web Services Region.
\nExample: us-east-1d
\n
Constraint: The AvailabilityZone
parameter can't be specified if the DB instance is a Multi-AZ deployment. \n The specified Availability Zone must be in the same Amazon Web Services Region as the current endpoint.
If you're creating a DB instance in an RDS on VMware environment,\n specify the identifier of the custom Availability Zone to create the DB instance\n in.
\nFor more information about RDS on VMware, see the \n \n RDS on VMware User Guide.\n
\nThe Availability Zone (AZ) where the database will be created. For information on\n Amazon Web Services Regions and Availability Zones, see \n Regions\n and Availability Zones.
\n\n Amazon Aurora\n
\nEach Aurora DB cluster hosts copies of its storage in three separate Availability Zones. Specify one of these \n Availability Zones. Aurora automatically chooses an appropriate Availability Zone if you don't specify one.
\nDefault: A random, system-chosen Availability Zone in the endpoint's Amazon Web Services Region.
\nExample: us-east-1d
\n
Constraint: The AvailabilityZone
parameter can't be specified if the DB instance is a Multi-AZ deployment. \n The specified Availability Zone must be in the same Amazon Web Services Region as the current endpoint.
If you're creating a DB instance in an RDS on VMware environment,\n specify the identifier of the custom Availability Zone to create the DB instance\n in.
\nFor more information about RDS on VMware, see the \n \n RDS on VMware User Guide.\n
\nThe identifier of the custom AZ.
\nAmazon RDS generates a unique identifier when a custom AZ is created.
" - } - }, - "CustomAvailabilityZoneName": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The name of the custom AZ.
" - } - }, - "CustomAvailabilityZoneStatus": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The status of the custom AZ.
" - } - }, - "VpnDetails": { - "target": "com.amazonaws.rds#VpnDetails", - "traits": { - "smithy.api#documentation": "Information about the virtual private network (VPN) between the VMware vSphere cluster\n and the Amazon Web Services website.
" - } - } - }, - "traits": { - "smithy.api#documentation": "A custom Availability Zone (AZ) is an on-premises AZ that is integrated with a VMware vSphere cluster.
\nFor more information about RDS on VMware, see the \n \n RDS on VMware User Guide.\n
" - } - }, - "com.amazonaws.rds#CustomAvailabilityZoneAlreadyExistsFault": { - "type": "structure", - "members": { - "message": { - "target": "com.amazonaws.rds#ExceptionMessage" - } - }, - "traits": { - "aws.protocols#awsQueryError": { - "code": "CustomAvailabilityZoneAlreadyExists", - "httpResponseCode": 400 - }, - "smithy.api#documentation": "\n CustomAvailabilityZoneName
is already used by an existing custom\n Availability Zone.
An optional pagination token provided by a previous\n DescribeCustomAvailabilityZones
request.\n If this parameter is specified, the response includes\n only records beyond the marker,\n up to the value specified by MaxRecords
.
The list of CustomAvailabilityZone objects for the Amazon Web Services account.
" - } - } - } - }, "com.amazonaws.rds#CustomAvailabilityZoneNotFoundFault": { "type": "structure", "members": { @@ -3956,23 +3811,6 @@ "smithy.api#httpError": 404 } }, - "com.amazonaws.rds#CustomAvailabilityZoneQuotaExceededFault": { - "type": "structure", - "members": { - "message": { - "target": "com.amazonaws.rds#ExceptionMessage" - } - }, - "traits": { - "aws.protocols#awsQueryError": { - "code": "CustomAvailabilityZoneQuotaExceeded", - "httpResponseCode": 400 - }, - "smithy.api#documentation": "You have exceeded the maximum number of custom Availability Zones.
", - "smithy.api#error": "client", - "smithy.api#httpError": 400 - } - }, "com.amazonaws.rds#CustomDBEngineVersionAlreadyExistsFault": { "type": "structure", "members": { @@ -4471,6 +4309,9 @@ "traits": { "smithy.api#documentation": "The amount of time, in days, to retain Performance Insights data. Valid values are 7 or 731 (2 years).
\nThis setting is only for non-Aurora Multi-AZ DB clusters.
" } + }, + "ServerlessV2ScalingConfiguration": { + "target": "com.amazonaws.rds#ServerlessV2ScalingConfigurationInfo" } }, "traits": { @@ -7869,46 +7710,6 @@ "smithy.api#httpError": 400 } }, - "com.amazonaws.rds#DeleteCustomAvailabilityZone": { - "type": "operation", - "input": { - "target": "com.amazonaws.rds#DeleteCustomAvailabilityZoneMessage" - }, - "output": { - "target": "com.amazonaws.rds#DeleteCustomAvailabilityZoneResult" - }, - "errors": [ - { - "target": "com.amazonaws.rds#CustomAvailabilityZoneNotFoundFault" - }, - { - "target": "com.amazonaws.rds#KMSKeyNotAccessibleFault" - } - ], - "traits": { - "smithy.api#documentation": "Deletes a custom Availability Zone (AZ).
\nA custom AZ is an on-premises AZ that is integrated with a VMware vSphere cluster.
\nFor more information about RDS on VMware, see the \n \n RDS on VMware User Guide.\n
" - } - }, - "com.amazonaws.rds#DeleteCustomAvailabilityZoneMessage": { - "type": "structure", - "members": { - "CustomAvailabilityZoneId": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The custom AZ identifier.
", - "smithy.api#required": {} - } - } - } - }, - "com.amazonaws.rds#DeleteCustomAvailabilityZoneResult": { - "type": "structure", - "members": { - "CustomAvailabilityZone": { - "target": "com.amazonaws.rds#CustomAvailabilityZone" - } - } - }, "com.amazonaws.rds#DeleteCustomDBEngineVersion": { "type": "operation", "input": { @@ -8044,6 +7845,9 @@ "input": { "target": "com.amazonaws.rds#DeleteDBClusterParameterGroupMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.rds#DBParameterGroupNotFoundFault" @@ -8248,6 +8052,9 @@ "input": { "target": "com.amazonaws.rds#DeleteDBParameterGroupMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.rds#DBParameterGroupNotFoundFault" @@ -8366,6 +8173,9 @@ "input": { "target": "com.amazonaws.rds#DeleteDBSecurityGroupMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.rds#DBSecurityGroupNotFoundFault" @@ -8441,6 +8251,9 @@ "input": { "target": "com.amazonaws.rds#DeleteDBSubnetGroupMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.rds#DBSubnetGroupNotFoundFault" @@ -8554,40 +8367,14 @@ } } }, - "com.amazonaws.rds#DeleteInstallationMedia": { - "type": "operation", - "input": { - "target": "com.amazonaws.rds#DeleteInstallationMediaMessage" - }, - "output": { - "target": "com.amazonaws.rds#InstallationMedia" - }, - "errors": [ - { - "target": "com.amazonaws.rds#InstallationMediaNotFoundFault" - } - ], - "traits": { - "smithy.api#documentation": "Deletes the installation medium for a DB engine that requires an on-premises customer provided license,\n such as Microsoft SQL Server.
" - } - }, - "com.amazonaws.rds#DeleteInstallationMediaMessage": { - "type": "structure", - "members": { - "InstallationMediaId": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The installation medium ID.
", - "smithy.api#required": {} - } - } - } - }, "com.amazonaws.rds#DeleteOptionGroup": { "type": "operation", "input": { "target": "com.amazonaws.rds#DeleteOptionGroupMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.rds#InvalidOptionGroupStateFault" @@ -8749,58 +8536,6 @@ "smithy.api#documentation": "" } }, - "com.amazonaws.rds#DescribeCustomAvailabilityZones": { - "type": "operation", - "input": { - "target": "com.amazonaws.rds#DescribeCustomAvailabilityZonesMessage" - }, - "output": { - "target": "com.amazonaws.rds#CustomAvailabilityZoneMessage" - }, - "errors": [ - { - "target": "com.amazonaws.rds#CustomAvailabilityZoneNotFoundFault" - } - ], - "traits": { - "smithy.api#documentation": "Returns information about custom Availability Zones (AZs).
\nA custom AZ is an on-premises AZ that is integrated with a VMware vSphere cluster.
\nFor more information about RDS on VMware, see the \n \n RDS on VMware User Guide.\n
", - "smithy.api#paginated": { - "inputToken": "Marker", - "outputToken": "Marker", - "items": "CustomAvailabilityZones", - "pageSize": "MaxRecords" - } - } - }, - "com.amazonaws.rds#DescribeCustomAvailabilityZonesMessage": { - "type": "structure", - "members": { - "CustomAvailabilityZoneId": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The custom AZ identifier. If this parameter is specified, information from only the specific custom AZ is returned.
" - } - }, - "Filters": { - "target": "com.amazonaws.rds#FilterList", - "traits": { - "smithy.api#documentation": "A filter that specifies one or more custom AZs to describe.
" - } - }, - "MaxRecords": { - "target": "com.amazonaws.rds#IntegerOptional", - "traits": { - "smithy.api#documentation": "The maximum number of records to include in the response.\n If more records exist than the specified MaxRecords
value,\n a pagination token called a marker is included in the response so you can retrieve the remaining results.
Default: 100
\nConstraints: Minimum 20, maximum 100.
" - } - }, - "Marker": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "An optional pagination token provided by a previous\n DescribeCustomAvailabilityZones
request.\n If this parameter is specified, the response includes\n only records beyond the marker,\n up to the value specified by MaxRecords
.
The name of the DB parameter group family.
", + "smithy.api#documentation": "The name of the DB parameter group family.
\nValid Values:
\n\n aurora5.6
\n
\n aurora-mysql5.7
\n
\n aurora-mysql8.0
\n
\n aurora-postgresql10
\n
\n aurora-postgresql11
\n
\n aurora-postgresql12
\n
\n aurora-postgresql13
\n
\n mariadb10.2
\n
\n mariadb10.3
\n
\n mariadb10.4
\n
\n mariadb10.5
\n
\n mariadb10.6
\n
\n mysql5.7
\n
\n mysql8.0
\n
\n postgres10
\n
\n postgres11
\n
\n postgres12
\n
\n postgres13
\n
\n postgres14
\n
\n sqlserver-ee-11.0
\n
\n sqlserver-ee-12.0
\n
\n sqlserver-ee-13.0
\n
\n sqlserver-ee-14.0
\n
\n sqlserver-ee-15.0
\n
\n sqlserver-ex-11.0
\n
\n sqlserver-ex-12.0
\n
\n sqlserver-ex-13.0
\n
\n sqlserver-ex-14.0
\n
\n sqlserver-ex-15.0
\n
\n sqlserver-se-11.0
\n
\n sqlserver-se-12.0
\n
\n sqlserver-se-13.0
\n
\n sqlserver-se-14.0
\n
\n sqlserver-se-15.0
\n
\n sqlserver-web-11.0
\n
\n sqlserver-web-12.0
\n
\n sqlserver-web-13.0
\n
\n sqlserver-web-14.0
\n
\n sqlserver-web-15.0
\n
Describes the available installation media for a DB engine that requires an \n on-premises customer provided license, such as Microsoft SQL Server.
", - "smithy.api#paginated": { - "inputToken": "Marker", - "outputToken": "Marker", - "items": "InstallationMedia", - "pageSize": "MaxRecords" - } - } - }, - "com.amazonaws.rds#DescribeInstallationMediaMessage": { - "type": "structure", - "members": { - "InstallationMediaId": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The installation medium ID.
" - } - }, - "Filters": { - "target": "com.amazonaws.rds#FilterList", - "traits": { - "smithy.api#documentation": "A filter that specifies one or more installation media to describe. Supported filters\n include the following:
\n\n custom-availability-zone-id
- Accepts custom Availability Zone (AZ)\n identifiers. The results list includes information about only the custom AZs\n identified by these identifiers.
\n engine
- Accepts database engines. The results list includes information about \n only the database engines identified by these identifiers.
For more information about the valid engines for installation media, see ImportInstallationMedia.
\nAn optional pagination token provided by a previous DescribeInstallationMedia request.\n If this parameter is specified, the response includes\n only records beyond the marker, up to the value specified by MaxRecords
.
An optional pagination token provided by a previous request.\n If this parameter is specified, the response includes\n only records beyond the marker,\n up to the value specified by MaxRecords
.
Imports the installation media for a DB engine that requires an on-premises \n customer provided license, such as SQL Server.
" - } - }, - "com.amazonaws.rds#ImportInstallationMediaMessage": { - "type": "structure", - "members": { - "CustomAvailabilityZoneId": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The identifier of the custom Availability Zone (AZ) to import the installation media to.
", - "smithy.api#required": {} - } - }, - "Engine": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The name of the database engine to be used for this instance.
\nThe list only includes supported DB engines that require an on-premises \n customer provided license.
\nValid Values:
\n\n sqlserver-ee
\n
\n sqlserver-se
\n
\n sqlserver-ex
\n
\n sqlserver-web
\n
The version number of the database engine to use.
\nFor a list of valid engine versions, call DescribeDBEngineVersions.
\nThe following are the database engines and links to information about the major and minor \n versions. The list only includes DB engines that require an on-premises \n customer provided license.
\n\n Microsoft SQL Server\n
\nSee \n Microsoft SQL Server Versions on Amazon RDS in the Amazon RDS User Guide.
", - "smithy.api#required": {} - } - }, - "EngineInstallationMediaPath": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The path to the installation medium for the specified DB engine.
\nExample: SQLServerISO/en_sql_server_2016_enterprise_x64_dvd_8701793.iso
\n
The path to the installation medium for the operating system associated with the specified DB engine.
\nExample: WindowsISO/en_windows_server_2016_x64_dvd_9327751.iso
\n
The installation medium ID.
" - } - }, - "CustomAvailabilityZoneId": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The custom Availability Zone (AZ) that contains the installation media.
" - } - }, - "Engine": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The DB engine.
" - } - }, - "EngineVersion": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The engine version of the DB engine.
" - } - }, - "EngineInstallationMediaPath": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The path to the installation medium for the DB engine.
" - } - }, - "OSInstallationMediaPath": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The path to the installation medium for the operating system associated with the DB engine.
" - } - }, - "Status": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The status of the installation medium.
" - } - }, - "FailureCause": { - "target": "com.amazonaws.rds#InstallationMediaFailureCause", - "traits": { - "smithy.api#documentation": "If an installation media failure occurred, the cause of the failure.
" - } - } - }, - "traits": { - "smithy.api#documentation": "Contains the installation media for a DB engine that requires an on-premises \n customer provided license, such as Microsoft SQL Server.
" - } - }, - "com.amazonaws.rds#InstallationMediaAlreadyExistsFault": { - "type": "structure", - "members": { - "message": { - "target": "com.amazonaws.rds#ExceptionMessage" - } - }, - "traits": { - "aws.protocols#awsQueryError": { - "code": "InstallationMediaAlreadyExists", - "httpResponseCode": 400 - }, - "smithy.api#documentation": "The specified installation medium has already been imported.
", - "smithy.api#error": "client", - "smithy.api#httpError": 400 - } - }, - "com.amazonaws.rds#InstallationMediaFailureCause": { - "type": "structure", - "members": { - "Message": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The reason that an installation media import failed.
" - } - } - }, - "traits": { - "smithy.api#documentation": "Contains the cause of an installation media failure. Installation media is used \n for a DB engine that requires an on-premises \n customer provided license, such as Microsoft SQL Server.
" - } - }, - "com.amazonaws.rds#InstallationMediaList": { - "type": "list", - "member": { - "target": "com.amazonaws.rds#InstallationMedia", - "traits": { - "smithy.api#xmlName": "InstallationMedia" - } - } - }, - "com.amazonaws.rds#InstallationMediaMessage": { - "type": "structure", - "members": { - "Marker": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "An optional pagination token provided by a previous\n DescribeInstallationMedia request.\n If this parameter is specified, the response includes\n only records beyond the marker,\n up to the value specified by MaxRecords
.
The list of InstallationMedia objects for the Amazon Web Services account.
" - } - } - } - }, - "com.amazonaws.rds#InstallationMediaNotFoundFault": { - "type": "structure", - "members": { - "message": { - "target": "com.amazonaws.rds#ExceptionMessage" - } - }, - "traits": { - "aws.protocols#awsQueryError": { - "code": "InstallationMediaNotFound", - "httpResponseCode": 404 - }, - "smithy.api#documentation": "\n InstallationMediaID
doesn't refer to an existing installation medium.
Override the system-default Secure Sockets Layer/Transport Layer Security (SSL/TLS)\n certificate for Amazon RDS for new DB instances temporarily, or remove the override.
\nBy using this operation, you can specify an RDS-approved SSL/TLS certificate for new DB\n instances that is different from the default certificate provided by RDS. You can also\n use this operation to remove the override, so that new DB instances use the default\n certificate provided by RDS.
\nYou might need to override the default certificate in the following situations:
\nYou already migrated your applications to support the latest certificate authority (CA) certificate, but the new CA certificate is not yet \n the RDS default CA certificate for the specified Amazon Web Services Region.
\nRDS has already moved to a new default CA certificate for the specified Amazon Web Services\n Region, but you are still in the process of supporting the new CA certificate.\n In this case, you temporarily need additional time to finish your application\n changes.
\nFor more information about rotating your SSL/TLS certificate for RDS DB engines, see \n \n Rotating Your SSL/TLS Certificate in the Amazon RDS User Guide.
\nFor more information about rotating your SSL/TLS certificate for Aurora DB engines, see \n \n Rotating Your SSL/TLS Certificate in the Amazon Aurora User Guide.
" + "smithy.api#documentation": "Override the system-default Secure Sockets Layer/Transport Layer Security (SSL/TLS)\n certificate for Amazon RDS for new DB instances, or remove the override.
\nBy using this operation, you can specify an RDS-approved SSL/TLS certificate for new DB\n instances that is different from the default certificate provided by RDS. You can also\n use this operation to remove the override, so that new DB instances use the default\n certificate provided by RDS.
\nYou might need to override the default certificate in the following situations:
\nYou already migrated your applications to support the latest certificate authority (CA) certificate, but the new CA certificate is not yet \n the RDS default CA certificate for the specified Amazon Web Services Region.
\nRDS has already moved to a new default CA certificate for the specified Amazon Web Services\n Region, but you are still in the process of supporting the new CA certificate.\n In this case, you temporarily need additional time to finish your application\n changes.
\nFor more information about rotating your SSL/TLS certificate for RDS DB engines, see \n \n Rotating Your SSL/TLS Certificate in the Amazon RDS User Guide.
\nFor more information about rotating your SSL/TLS certificate for Aurora DB engines, see \n \n Rotating Your SSL/TLS Certificate in the Amazon Aurora User Guide.
" } }, "com.amazonaws.rds#ModifyCertificatesMessage": { @@ -14073,6 +13566,9 @@ "traits": { "smithy.api#documentation": "The amount of time, in days, to retain Performance Insights data. Valid values are 7 or 731 (2 years).
\nValid for: Multi-AZ DB clusters only
" } + }, + "ServerlessV2ScalingConfiguration": { + "target": "com.amazonaws.rds#ServerlessV2ScalingConfiguration" } }, "traits": { @@ -16898,6 +16394,9 @@ "input": { "target": "com.amazonaws.rds#RemoveRoleFromDBClusterMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.rds#DBClusterNotFoundFault" @@ -16943,6 +16442,9 @@ "input": { "target": "com.amazonaws.rds#RemoveRoleFromDBInstanceMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.rds#DBInstanceNotFoundFault" @@ -17039,6 +16541,9 @@ "input": { "target": "com.amazonaws.rds#RemoveTagsFromResourceMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.rds#DBClusterNotFoundFault" @@ -17775,6 +17280,9 @@ "traits": { "smithy.api#documentation": "Specify the name of the IAM role to be used when making API calls to the Directory Service.
" } + }, + "ServerlessV2ScalingConfiguration": { + "target": "com.amazonaws.rds#ServerlessV2ScalingConfiguration" } } }, @@ -18012,6 +17520,9 @@ "traits": { "smithy.api#documentation": "A value that indicates whether the DB cluster is publicly accessible.
\nWhen the DB cluster is publicly accessible, its Domain Name System (DNS) endpoint resolves to the private IP address \n from within the DB cluster's virtual private cloud (VPC). It resolves to the public IP address from outside of the DB cluster's VPC. \n Access to the DB cluster is ultimately controlled by the security group it uses. \n That public access is not permitted if the security group assigned to the DB cluster doesn't permit it.
\nWhen the DB cluster isn't publicly accessible, it is an internal DB cluster with a DNS name that resolves to a private IP address.
\nDefault: The default behavior varies depending on whether DBSubnetGroupName
is specified.
If DBSubnetGroupName
isn't specified, and PubliclyAccessible
isn't specified, the following applies:
If the default VPC in the target Region doesn’t have an internet gateway attached to it, the DB cluster is private.
\nIf the default VPC in the target Region has an internet gateway attached to it, the DB cluster is public.
\nIf DBSubnetGroupName
is specified, and PubliclyAccessible
isn't specified, the following applies:
If the subnets are part of a VPC that doesn’t have an internet gateway attached to it, the DB cluster is private.
\nIf the subnets are part of a VPC that has an internet gateway attached to it, the DB cluster is public.
\nValid for: Aurora DB clusters and Multi-AZ DB clusters
" } + }, + "ServerlessV2ScalingConfiguration": { + "target": "com.amazonaws.rds#ServerlessV2ScalingConfiguration" } }, "traits": { @@ -18245,6 +17756,9 @@ "traits": { "smithy.api#documentation": "The amount of Provisioned IOPS (input/output operations per second) to be initially allocated for \n each DB instance in the Multi-AZ DB cluster.
\nFor information about valid Iops
values, see Amazon RDS Provisioned IOPS storage to improve performance in the Amazon RDS User Guide.
Constraints: Must be a multiple between .5 and 50 of the storage amount for the DB instance.
\nValid for: Multi-AZ DB clusters only
" } + }, + "ServerlessV2ScalingConfiguration": { + "target": "com.amazonaws.rds#ServerlessV2ScalingConfiguration" } }, "traits": { @@ -19422,6 +18936,46 @@ "smithy.api#documentation": "Shows the scaling configuration for an Aurora DB cluster in serverless
DB engine mode.
For more information, see Using Amazon Aurora Serverless v1 in the\n Amazon Aurora User Guide.
" } }, + "com.amazonaws.rds#ServerlessV2ScalingConfiguration": { + "type": "structure", + "members": { + "MinCapacity": { + "target": "com.amazonaws.rds#DoubleOptional", + "traits": { + "smithy.api#documentation": "The minimum number of Aurora capacity units (ACUs) for a DB instance in an Aurora Serverless v2 cluster.\n You can specify ACU values in half-step increments, such as 8, 8.5, 9, and so on. The smallest value\n that you can use is 0.5.
" + } + }, + "MaxCapacity": { + "target": "com.amazonaws.rds#DoubleOptional", + "traits": { + "smithy.api#documentation": "The maximum number of Aurora capacity units (ACUs) for a DB instance in an Aurora Serverless v2 cluster.\n You can specify ACU values in half-step increments, such as 40, 40.5, 41, and so on. The largest value\n that you can use is 128.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Contains the scaling configuration of an Aurora Serverless v2 DB cluster.
\nFor more information, see Using Amazon Aurora Serverless v2 in the\n Amazon Aurora User Guide.
" + } + }, + "com.amazonaws.rds#ServerlessV2ScalingConfigurationInfo": { + "type": "structure", + "members": { + "MinCapacity": { + "target": "com.amazonaws.rds#DoubleOptional", + "traits": { + "smithy.api#documentation": "The minimum number of Aurora capacity units (ACUs) for a DB instance in an Aurora Serverless v2 cluster.\n You can specify ACU values in half-step increments, such as 8, 8.5, 9, and so on. The smallest value\n that you can use is 0.5.
" + } + }, + "MaxCapacity": { + "target": "com.amazonaws.rds#DoubleOptional", + "traits": { + "smithy.api#documentation": "The maximum number of Aurora capacity units (ACUs) for a DB instance in an Aurora Serverless v2 cluster.\n You can specify ACU values in half-step increments, such as 40, 40.5, 41, and so on. The largest value\n that you can use is 128.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Shows the scaling configuration for an Aurora Serverless v2 DB cluster.
\nFor more information, see Using Amazon Aurora Serverless v2 in the\n Amazon Aurora User Guide.
" + } + }, "com.amazonaws.rds#SharedSnapshotQuotaExceededFault": { "type": "structure", "members": { @@ -20224,12 +19778,6 @@ "target": "com.amazonaws.rds#String" } }, - "com.amazonaws.rds#StringSensitive": { - "type": "string", - "traits": { - "smithy.api#sensitive": {} - } - }, "com.amazonaws.rds#Subnet": { "type": "structure", "members": { @@ -20812,50 +20360,6 @@ } } }, - "com.amazonaws.rds#VpnDetails": { - "type": "structure", - "members": { - "VpnId": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The ID of the VPN.
" - } - }, - "VpnTunnelOriginatorIP": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The IP address of network traffic from your on-premises data center. A custom AZ receives the network traffic.
" - } - }, - "VpnGatewayIp": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The IP address of network traffic from Amazon Web Services to your on-premises data center.
" - } - }, - "VpnPSK": { - "target": "com.amazonaws.rds#StringSensitive", - "traits": { - "smithy.api#documentation": "The preshared key (PSK) for the VPN.
" - } - }, - "VpnName": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The name of the VPN.
" - } - }, - "VpnState": { - "target": "com.amazonaws.rds#String", - "traits": { - "smithy.api#documentation": "The state of the VPN.
" - } - } - }, - "traits": { - "smithy.api#documentation": "Information about the virtual private network (VPN) between the VMware vSphere cluster and the Amazon Web Services website.
\nFor more information about RDS on VMware, see the \n \n RDS on VMware User Guide.\n
" - } - }, "com.amazonaws.rds#WriteForwardingStatus": { "type": "string", "traits": { diff --git a/aws/sdk/aws-models/redshift.json b/aws/sdk/aws-models/redshift.json index c2781befee..c9d6a84f6f 100644 --- a/aws/sdk/aws-models/redshift.json +++ b/aws/sdk/aws-models/redshift.json @@ -3554,6 +3554,9 @@ "input": { "target": "com.amazonaws.redshift#CreateTagsMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#InvalidClusterStateFault" @@ -4115,6 +4118,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteClusterParameterGroupMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#ClusterParameterGroupNotFoundFault" @@ -4155,6 +4161,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteClusterSecurityGroupMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#ClusterSecurityGroupNotFoundFault" @@ -4245,6 +4254,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteClusterSubnetGroupMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#ClusterSubnetGroupNotFoundFault" @@ -4321,6 +4333,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteEventSubscriptionMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#InvalidSubscriptionStateFault" @@ -4353,6 +4368,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteHsmClientCertificateMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#HsmClientCertificateNotFoundFault" @@ -4385,6 +4403,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteHsmConfigurationMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#HsmConfigurationNotFoundFault" @@ -4440,6 +4461,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteScheduledActionMessage" }, + 
"output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#ScheduledActionNotFoundFault" @@ -4469,6 +4493,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteSnapshotCopyGrantMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#InvalidSnapshotCopyGrantStateFault" @@ -4501,6 +4528,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteSnapshotScheduleMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#InvalidClusterSnapshotScheduleStateFault" @@ -4530,6 +4560,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteTagsMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#InvalidTagFault" @@ -4569,6 +4602,9 @@ "input": { "target": "com.amazonaws.redshift#DeleteUsageLimitMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#UnsupportedOperationFault" @@ -6724,6 +6760,9 @@ }, "com.amazonaws.redshift#DescribeStorage": { "type": "operation", + "input": { + "target": "smithy.api#Unit" + }, "output": { "target": "com.amazonaws.redshift#CustomerStorageMessage" }, @@ -7190,8 +7229,7 @@ "BucketName": { "target": "com.amazonaws.redshift#String", "traits": { - "smithy.api#documentation": "The name of an existing S3 bucket where the log files are to be stored.
\nConstraints:
\nMust be in the same region as the cluster
\nThe cluster must have read bucket and put object permissions
\nThe name of an existing S3 bucket where the log files are to be stored.
\nConstraints:
\nMust be in the same region as the cluster
\nThe cluster must have read bucket and put object permissions
\nThe prefix applied to the log file names.
\nConstraints:
\nCannot exceed 512 characters
\nCannot contain spaces( ), double quotes (\"), single quotes ('), a backslash\n (\\), or control characters. The hexadecimal codes for invalid characters are:
\nx00 to x20
\nx22
\nx27
\nx5c
\nx7f or larger
\nThe log destination type. An enum with possible values of s3
and cloudwatch
.
The collection of exported log types. Log types include the connection log, user log and user activity log.
" + } } }, "traits": { @@ -9074,6 +9124,27 @@ "smithy.api#httpError": 400 } }, + "com.amazonaws.redshift#LogDestinationType": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "s3", + "name": "S3" + }, + { + "value": "cloudwatch", + "name": "CLOUDWATCH" + } + ] + } + }, + "com.amazonaws.redshift#LogTypeList": { + "type": "list", + "member": { + "target": "com.amazonaws.redshift#String" + } + }, "com.amazonaws.redshift#LoggingStatus": { "type": "structure", "members": { @@ -9112,6 +9183,18 @@ "traits": { "smithy.api#documentation": "The message indicating that logs failed to be delivered.
" } + }, + "LogDestinationType": { + "target": "com.amazonaws.redshift#LogDestinationType", + "traits": { + "smithy.api#documentation": "The log destination type. An enum with possible values of s3
and cloudwatch
.
The collection of exported log types. Log types include the connection log, user log and user activity log.
" + } } }, "traits": { @@ -9800,6 +9883,9 @@ "input": { "target": "com.amazonaws.redshift#ModifyClusterSnapshotScheduleMessage" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.redshift#ClusterNotFoundFault" @@ -12554,7 +12640,7 @@ "KmsKeyId": { "target": "com.amazonaws.redshift#String", "traits": { - "smithy.api#documentation": "The Key Management Service (KMS) key ID of the encryption key to encrypt data in the cluster \n restored from a shared snapshot. You can also provide \n the key ID when you restore from an unencrypted snapshot to an encrypted cluster in \n the same account. Additionally, you can specify a new KMS key ID when you restore from an encrypted \n snapshot in the same account in order to change it. In that case, the restored cluster is encrypted \n with the new KMS key ID.
" + "smithy.api#documentation": "The Key Management Service (KMS) key ID of the encryption key that encrypts data in the cluster \n restored from a shared snapshot. You can also provide \n the key ID when you restore from an unencrypted snapshot to an encrypted cluster in \n the same account. Additionally, you can specify a new KMS key ID when you restore from an encrypted \n snapshot in the same account in order to change it. In that case, the restored cluster is encrypted \n with the new KMS key ID.
" } }, "NodeType": { @@ -12632,7 +12718,7 @@ "Encrypted": { "target": "com.amazonaws.redshift#BooleanOptional", "traits": { - "smithy.api#documentation": "Enables support for restoring an unencrypted snapshot to a cluster encrypted \n with Key Management Service (KMS) and a CMK.
" + "smithy.api#documentation": "Enables support for restoring an unencrypted snapshot to a cluster encrypted \n with Key Management Service (KMS) and a customer managed key.
" } } }, diff --git a/aws/sdk/aws-models/rekognition.json b/aws/sdk/aws-models/rekognition.json index 6882826d79..5ce8b08206 100644 --- a/aws/sdk/aws-models/rekognition.json +++ b/aws/sdk/aws-models/rekognition.json @@ -247,7 +247,7 @@ } }, "traits": { - "smithy.api#documentation": "Identifies the bounding box around the label, face, text or personal protective equipment.\n The left
(x-coordinate) and top
(y-coordinate) are coordinates representing the top and\n left sides of the bounding box. Note that the upper-left corner of the image is the origin\n (0,0).
The top
and left
values returned are ratios of the overall\n image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of\n the bounding box is 350x50 pixels, the API returns a left
value of 0.5 (350/700)\n and a top
value of 0.25 (50/200).
The width
and height
values represent the dimensions of the\n bounding box as a ratio of the overall image dimension. For example, if the input image is\n 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is\n able to detect a face that is at the image edge and is only partially visible, the service\n can return coordinates that are outside the image bounds and, depending on the image edge,\n you might get negative values or values greater than 1 for the left
or\n top
values.
Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment.\n The left
(x-coordinate) and top
(y-coordinate) are coordinates representing the top and\n left sides of the bounding box. Note that the upper-left corner of the image is the origin\n (0,0).
The top
and left
values returned are ratios of the overall\n image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of\n the bounding box is 350x50 pixels, the API returns a left
value of 0.5 (350/700)\n and a top
value of 0.25 (50/200).
The width
and height
values represent the dimensions of the\n bounding box as a ratio of the overall image dimension. For example, if the input image is\n 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is\n able to detect a face that is at the image edge and is only partially visible, the service\n can return coordinates that are outside the image bounds and, depending on the image edge,\n you might get negative values or values greater than 1 for the left
or\n top
values.
Type that describes the face Amazon Rekognition chose to compare with the faces in the target.\n This contains a bounding box for the selected face and confidence level that the bounding box\n contains a face. Note that Amazon Rekognition selects the largest face in the source image for this\n comparison.
" } }, + "com.amazonaws.rekognition#ConnectedHomeLabel": { + "type": "string" + }, + "com.amazonaws.rekognition#ConnectedHomeLabels": { + "type": "list", + "member": { + "target": "com.amazonaws.rekognition#ConnectedHomeLabel" + }, + "traits": { + "smithy.api#length": { + "min": 1, + "max": 128 + } + } + }, + "com.amazonaws.rekognition#ConnectedHomeSettings": { + "type": "structure", + "members": { + "Labels": { + "target": "com.amazonaws.rekognition#ConnectedHomeLabels", + "traits": { + "smithy.api#documentation": "\n Specifies what you want to detect in the video, such as people, packages, or pets. The current valid labels you can include in this list are: \"PERSON\", \"PET\", \"PACKAGE\", and \"ALL\".\n
", + "smithy.api#required": {} + } + }, + "MinConfidence": { + "target": "com.amazonaws.rekognition#Percent", + "traits": { + "smithy.api#documentation": "\n The minimum confidence required to label an object in the video. \n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\n Label detection settings to use on a streaming video. Defining the settings is required in the request parameter for CreateStreamProcessor.\n Including this setting in the CreateStreamProcessor
request enables you to use the stream processor for label detection. \n You can then select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification\n is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected \n and one SNS notification is published the first time a pet is detected, as well as an end-of-session summary. \n
\n Specifies what you want to detect in the video, such as people, packages, or pets. The current valid labels you can include in this list are: \"PERSON\", \"PET\", \"PACKAGE\", and \"ALL\".\n
" + } + }, + "MinConfidence": { + "target": "com.amazonaws.rekognition#Percent", + "traits": { + "smithy.api#documentation": "\n The minimum confidence required to label an object in the video. \n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\n The label detection settings you want to use in your stream processor. This includes the labels you want the stream processor to detect and the minimum confidence level allowed to label objects. \n
" + } + }, "com.amazonaws.rekognition#ContentClassifier": { "type": "string", "traits": { @@ -805,7 +861,7 @@ "FaceModelVersion": { "target": "com.amazonaws.rekognition#String", "traits": { - "smithy.api#documentation": "Latest face model being used with the collection. For more information, see Model versioning.
" + "smithy.api#documentation": "Version number of the face detection model associated with the collection you are creating.
" } } } @@ -1082,7 +1138,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates an Amazon Rekognition stream processor that you can use to detect and recognize faces in a streaming video.
\nAmazon Rekognition Video is a consumer of live video from Amazon Kinesis Video Streams. Amazon Rekognition Video sends analysis results to Amazon Kinesis Data Streams.
\nYou provide as input a Kinesis video stream (Input
) and a Kinesis data stream (Output
) stream. You also specify the\n face recognition criteria in Settings
. For example, the collection containing faces that you want to recognize.\n Use Name
to assign an identifier for the stream processor. You use Name
\n to manage the stream processor. For example, you can start processing the source video by calling StartStreamProcessor with\n the Name
field.
After you have finished analyzing a streaming video, use StopStreamProcessor to\n stop processing. You can delete the stream processor by calling DeleteStreamProcessor.
\nThis operation requires permissions to perform the\n rekognition:CreateStreamProcessor
action. If you want to tag your stream processor, you also require permission to perform the rekognition:TagResource
operation.
Creates an Amazon Rekognition stream processor that you can use to detect and recognize faces or to detect labels in a streaming video.
\nAmazon Rekognition Video is a consumer of live video from Amazon Kinesis Video Streams. There are two different settings for stream processors in Amazon Rekognition: detecting faces and detecting labels.
\nIf you are creating a stream processor for detecting faces, you provide as input a Kinesis video stream (Input
) and a Kinesis data stream (Output
). You also specify the\n face recognition criteria in <code>Settings</code>
. For example, the collection containing faces that you want to recognize. After you have finished analyzing a streaming video, use StopStreamProcessor to\n stop processing.
If you are creating a stream processor to detect labels, you provide as input a Kinesis video stream (Input
), Amazon S3 bucket information (Output
), and an\n Amazon SNS topic ARN (NotificationChannel
). You can also provide a KMS key ID to encrypt the data sent to your Amazon S3 bucket.\n You specify what you want to detect in ConnectedHomeSettings
, such as people, packages and people, or pets, people, and packages. You can also specify where in the frame you want Amazon Rekognition to monitor with RegionsOfInterest
. \n When you run the StartStreamProcessor operation on a label detection stream processor, you input start and stop information to determine the length of the processing time.
\n Use Name
to assign an identifier for the stream processor. You use Name
\n to manage the stream processor. For example, you can start processing the source video by calling StartStreamProcessor with\n the Name
field.
This operation requires permissions to perform the\n rekognition:CreateStreamProcessor
action. If you want to tag your stream processor, you also require permission to perform the rekognition:TagResource
operation.
Kinesis video stream stream that provides the source streaming video. If you are using the AWS CLI, the parameter name is StreamProcessorInput
.
Kinesis video stream stream that provides the source streaming video. If you are using the AWS CLI, the parameter name is StreamProcessorInput
. This is required for both face search and label detection stream processors.
Kinesis data stream stream to which Amazon Rekognition Video puts the analysis results. If you are using the AWS CLI, the parameter name is StreamProcessorOutput
.
Kinesis data stream stream or Amazon S3 bucket location to which Amazon Rekognition Video puts the analysis results. If you are using the AWS CLI, the parameter name is StreamProcessorOutput
. \n This must be an S3Destination of an Amazon S3 bucket that you own for a label detection stream processor or a Kinesis data stream ARN for a face search stream processor.</p>
An identifier you assign to the stream processor. You can use Name
to\n manage the stream processor. For example, you can get the current status of the stream processor by calling DescribeStreamProcessor.\n Name
is idempotent.\n
An identifier you assign to the stream processor. You can use Name
to\n manage the stream processor. For example, you can get the current status of the stream processor by calling DescribeStreamProcessor.\n Name
is idempotent. This is required for both face search and label detection stream processors.\n
Face recognition input parameters to be used by the stream processor. Includes the collection to use for face recognition and the face\n attributes to detect.
", + "smithy.api#documentation": "Input parameters used in a streaming video analyzed by a stream processor. You can use FaceSearch
to recognize faces in a streaming video, or you can use ConnectedHome
to detect labels.
ARN of the IAM role that allows access to the stream processor.
", + "smithy.api#documentation": "The Amazon Resource Number (ARN) of the IAM role that allows access to the stream processor. \n The IAM role provides Rekognition read permissions for a Kinesis stream. \n It also provides write permissions to an Amazon S3 bucket and Amazon Simple Notification Service topic for a label detection stream processor. This is required for both face search and label detection stream processors.
", "smithy.api#required": {} } }, @@ -1128,6 +1184,27 @@ "traits": { "smithy.api#documentation": "\n A set of tags (key-value pairs) that you want to attach to the stream processor.\n
" } + }, + "NotificationChannel": { + "target": "com.amazonaws.rekognition#StreamProcessorNotificationChannel" + }, + "KmsKeyId": { + "target": "com.amazonaws.rekognition#KmsKeyId", + "traits": { + "smithy.api#documentation": "\n The identifier for your AWS Key Management Service key (AWS KMS key). This is an optional parameter for label detection stream processors and should not be used to create a face search stream processor.\n You can supply the Amazon Resource Name (ARN) of your KMS key, the ID of your KMS key, an alias for your KMS key, or an alias ARN. \n The key is used to encrypt results and data published to your Amazon S3 bucket, which includes image frames and hero images. Your source images are unaffected. \n
\n\n
" + } + }, + "RegionsOfInterest": { + "target": "com.amazonaws.rekognition#RegionsOfInterest", + "traits": { + "smithy.api#documentation": "\n Specifies locations in the frames where Amazon Rekognition checks for objects or people. You can specify up to 10 regions of interest. This is an optional parameter for label detection stream processors and should not be used to create a face search stream processor.\n
" + } + }, + "DataSharingPreference": { + "target": "com.amazonaws.rekognition#StreamProcessorDataSharingPreference", + "traits": { + "smithy.api#documentation": "\n Shows whether you are sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis.\n Note that if you opt out at the account level this setting is ignored on individual streams.\n
" + } } } }, @@ -1137,7 +1214,7 @@ "StreamProcessorArn": { "target": "com.amazonaws.rekognition#StreamProcessorArn", "traits": { - "smithy.api#documentation": "ARN for the newly create stream processor.
" + "smithy.api#documentation": "Amazon Resource Number for the newly created stream processor.
" } } } @@ -1537,7 +1614,7 @@ } ], "traits": { - "smithy.api#documentation": "Deletes the specified collection. Note that this operation\n removes all faces in the collection. For an example, see delete-collection-procedure.
\n\nThis operation requires permissions to perform the\n rekognition:DeleteCollection
action.
Deletes the specified collection. Note that this operation\n removes all faces in the collection. For an example, see Deleting a collection.
\n\nThis operation requires permissions to perform the\n rekognition:DeleteCollection
action.
The version of the face model that's used by the collection for face detection.
\n \nFor more information, see Model Versioning in the \n Amazon Rekognition Developer Guide.
" + "smithy.api#documentation": "The version of the face model that's used by the collection for face detection.
\n \nFor more information, see Model versioning in the \n Amazon Rekognition Developer Guide.
" } }, "CollectionARN": { @@ -2298,7 +2375,28 @@ "Settings": { "target": "com.amazonaws.rekognition#StreamProcessorSettings", "traits": { - "smithy.api#documentation": "Face recognition input parameters that are being used by the stream processor.\n Includes the collection to use for face recognition and the face\n attributes to detect.
" + "smithy.api#documentation": "Input parameters used in a streaming video analyzed by a stream processor. You can use FaceSearch
to recognize faces\n in a streaming video, or you can use ConnectedHome
to detect labels.
\n The identifier for your AWS Key Management Service key (AWS KMS key). This is an optional parameter for label detection stream processors.\n
" + } + }, + "RegionsOfInterest": { + "target": "com.amazonaws.rekognition#RegionsOfInterest", + "traits": { + "smithy.api#documentation": "\n Specifies locations in the frames where Amazon Rekognition checks for objects or people. This is an optional parameter for label detection stream processors.\n
" + } + }, + "DataSharingPreference": { + "target": "com.amazonaws.rekognition#StreamProcessorDataSharingPreference", + "traits": { + "smithy.api#documentation": "\n Shows whether you are sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis.\n Note that if you opt out at the account level this setting is ignored on individual streams.\n
" } } } @@ -2499,7 +2597,7 @@ } ], "traits": { - "smithy.api#documentation": "Detects instances of real-world entities within an image (JPEG or PNG)\n provided as input. This includes objects like flower, tree, and table; events like\n wedding, graduation, and birthday party; and concepts like landscape, evening, and nature.\n
\n \nFor an example, see Analyzing Images Stored in an Amazon S3 Bucket in the Amazon Rekognition Developer Guide.
\n\n DetectLabels
does not support the detection of activities. However, activity detection\n is supported for label detection in videos. For more information, see StartLabelDetection in the Amazon Rekognition Developer Guide.
You pass the input image as base64-encoded image bytes or as a reference to an image in\n an Amazon S3 bucket. If you use the\n AWS\n CLI to call Amazon Rekognition operations, passing image bytes is not\n supported. The image must be either a PNG or JPEG formatted file.
\nFor each object, scene, and concept the API returns one or more labels. Each label\n provides the object name, and the level of confidence that the image contains the object. For\n example, suppose the input image has a lighthouse, the sea, and a rock. The response includes\n all three labels, one for each object.
\n \n\n {Name: lighthouse, Confidence: 98.4629}
\n
\n {Name: rock,Confidence: 79.2097}
\n
\n {Name: sea,Confidence: 75.061}
\n
In the preceding example, the operation returns one label for each of the three\n objects. The operation can also return multiple labels for the same object in the image. For\n example, if the input image shows a flower (for example, a tulip), the operation might return\n the following three labels.
\n\n {Name: flower,Confidence: 99.0562}
\n
\n {Name: plant,Confidence: 99.0562}
\n
\n {Name: tulip,Confidence: 99.0562}
\n
In this example, the detection algorithm more precisely identifies the flower as a\n tulip.
\nIn response, the API returns an array of labels. In addition, the response also\n includes the orientation correction. Optionally, you can specify MinConfidence
to\n control the confidence threshold for the labels returned. The default is 55%. You can also add\n the MaxLabels
parameter to limit the number of labels returned.
If the object detected is a person, the operation doesn't provide the same facial\n details that the DetectFaces operation provides.
\n\n DetectLabels
returns bounding boxes for instances of common object labels in an array of\n Instance objects. An Instance
object contains a \n BoundingBox object, for the location of the label on the image. It also includes \n the confidence by which the bounding box was detected.
\n DetectLabels
also returns a hierarchical taxonomy of detected labels. For example,\n a detected car might be assigned the label car. The label car\n has two parent labels: Vehicle (its parent) and Transportation (its\n grandparent). \n The response returns the entire list of ancestors for a label. Each ancestor is a unique label in the response.\n In the previous example, Car, Vehicle, and Transportation\n are returned as unique labels in the response.\n
This is a stateless API operation. That is, the operation does not persist any\n data.
\nThis operation requires permissions to perform the\n rekognition:DetectLabels
action.
Detects instances of real-world entities within an image (JPEG or PNG)\n provided as input. This includes objects like flower, tree, and table; events like\n wedding, graduation, and birthday party; and concepts like landscape, evening, and nature.\n
\n \nFor an example, see Analyzing images stored in an Amazon S3 bucket in the Amazon Rekognition Developer Guide.
\n\n DetectLabels
does not support the detection of activities. However, activity detection\n is supported for label detection in videos. For more information, see StartLabelDetection in the Amazon Rekognition Developer Guide.
You pass the input image as base64-encoded image bytes or as a reference to an image in\n an Amazon S3 bucket. If you use the\n AWS\n CLI to call Amazon Rekognition operations, passing image bytes is not\n supported. The image must be either a PNG or JPEG formatted file.
\nFor each object, scene, and concept the API returns one or more labels. Each label\n provides the object name, and the level of confidence that the image contains the object. For\n example, suppose the input image has a lighthouse, the sea, and a rock. The response includes\n all three labels, one for each object.
\n \n\n {Name: lighthouse, Confidence: 98.4629}
\n
\n {Name: rock,Confidence: 79.2097}
\n
\n {Name: sea,Confidence: 75.061}
\n
In the preceding example, the operation returns one label for each of the three\n objects. The operation can also return multiple labels for the same object in the image. For\n example, if the input image shows a flower (for example, a tulip), the operation might return\n the following three labels.
\n\n {Name: flower,Confidence: 99.0562}
\n
\n {Name: plant,Confidence: 99.0562}
\n
\n {Name: tulip,Confidence: 99.0562}
\n
In this example, the detection algorithm more precisely identifies the flower as a\n tulip.
\nIn response, the API returns an array of labels. In addition, the response also\n includes the orientation correction. Optionally, you can specify MinConfidence
to\n control the confidence threshold for the labels returned. The default is 55%. You can also add\n the MaxLabels
parameter to limit the number of labels returned.
If the object detected is a person, the operation doesn't provide the same facial\n details that the DetectFaces operation provides.
\n\n DetectLabels
returns bounding boxes for instances of common object labels in an array of\n Instance objects. An Instance
object contains a \n BoundingBox object, for the location of the label on the image. It also includes \n the confidence by which the bounding box was detected.
\n DetectLabels
also returns a hierarchical taxonomy of detected labels. For example,\n a detected car might be assigned the label car. The label car\n has two parent labels: Vehicle (its parent) and Transportation (its\n grandparent). \n The response returns the entire list of ancestors for a label. Each ancestor is a unique label in the response.\n In the previous example, Car, Vehicle, and Transportation\n are returned as unique labels in the response.\n
This is a stateless API operation. That is, the operation does not persist any\n data.
\nThis operation requires permissions to perform the\n rekognition:DetectLabels
action.
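The MinConfidence/MaxLabels behavior documented above can also be mimicked client-side. This is a minimal sketch outside the model itself: the sample response and label names are fabricated, and in a real call you would pass MinConfidence and MaxLabels as request parameters to DetectLabels (e.g. boto3's detect_labels) rather than filtering locally.

```python
# Sketch: post-filtering a DetectLabels-style response.
# The Labels -> Name/Confidence shape follows the documentation above;
# the sample data below is made up for illustration.

def top_labels(response, min_confidence=55.0, max_labels=None):
    """Mimic the MinConfidence (default 55%) and MaxLabels parameters locally."""
    labels = [l for l in response["Labels"] if l["Confidence"] >= min_confidence]
    labels.sort(key=lambda l: l["Confidence"], reverse=True)
    return labels[:max_labels] if max_labels is not None else labels

sample = {"Labels": [
    {"Name": "Lighthouse", "Confidence": 98.46},
    {"Name": "Rock", "Confidence": 79.21},
    {"Name": "Sea", "Confidence": 75.06},
    {"Name": "Pebble", "Confidence": 40.0},  # below the default threshold
]}

kept = top_labels(sample, min_confidence=55.0, max_labels=2)
```

With the real service, the same filtering happens server-side, which avoids transferring labels you will discard.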
Detects text in the input image and converts it into machine-readable text.
\nPass the input image as base64-encoded image bytes or as a reference to an image in an\n Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, you must pass it as a\n reference to an image in an Amazon S3 bucket. For the AWS CLI, passing image bytes is not\n supported. The image must be either a .png or .jpeg formatted file.
\nThe DetectText
operation returns text in an array of TextDetection elements, TextDetections
. Each\n TextDetection
element provides information about a single word or line of text\n that was detected in the image.
A word is one or more script characters that are not separated by spaces.\n DetectText
can detect up to 100 words in an image.
A line is a string of equally spaced words. A line isn't necessarily a complete\n sentence. For example, a driver's license number is detected as a line. A line ends when there\n is no aligned text after it. Also, a line ends when there is a large gap between words,\n relative to the length of the words. This means, depending on the gap between words, Amazon Rekognition\n may detect multiple lines in text aligned in the same direction. Periods don't represent the\n end of a line. If a sentence spans multiple lines, the DetectText
operation\n returns multiple lines.
To determine whether a TextDetection
element is a line of text or a word,\n use the TextDetection
object Type
field.
To be detected, text must be within +/- 90 degrees orientation of the horizontal axis.
\n \nFor more information, see DetectText in the Amazon Rekognition Developer Guide.
" + "smithy.api#documentation": "Detects text in the input image and converts it into machine-readable text.
\nPass the input image as base64-encoded image bytes or as a reference to an image in an\n Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, you must pass it as a\n reference to an image in an Amazon S3 bucket. For the AWS CLI, passing image bytes is not\n supported. The image must be either a .png or .jpeg formatted file.
\nThe DetectText
operation returns text in an array of TextDetection elements, TextDetections
. Each\n TextDetection
element provides information about a single word or line of text\n that was detected in the image.
A word is one or more script characters that are not separated by spaces.\n DetectText
can detect up to 100 words in an image.
A line is a string of equally spaced words. A line isn't necessarily a complete\n sentence. For example, a driver's license number is detected as a line. A line ends when there\n is no aligned text after it. Also, a line ends when there is a large gap between words,\n relative to the length of the words. This means, depending on the gap between words, Amazon Rekognition\n may detect multiple lines in text aligned in the same direction. Periods don't represent the\n end of a line. If a sentence spans multiple lines, the DetectText
operation\n returns multiple lines.
To determine whether a TextDetection
element is a line of text or a word,\n use the TextDetection
object Type
field.
To be detected, text must be within +/- 90 degrees orientation of the horizontal axis.
\n \nFor more information, see Detecting text in the Amazon Rekognition Developer Guide.
" } }, "com.amazonaws.rekognition#DetectTextFilters": { @@ -2812,7 +2910,7 @@ "MinConfidence": { "target": "com.amazonaws.rekognition#Percent", "traits": { - "smithy.api#documentation": "Sets the confidence of word detection. Words with detection confidence below this will be excluded \n from the result. Values should be between 50 and 100 as Text in Video will not return any result below \n 50.
" + "smithy.api#documentation": "Sets the confidence of word detection. Words with detection confidence below this will be\n excluded from the result. Values should be between 0 and 100. The default MinConfidence is\n 80.
" } }, "MinBoundingBoxHeight": { @@ -3384,7 +3482,7 @@ } }, "traits": { - "smithy.api#documentation": "Input face recognition parameters for an Amazon Rekognition stream processor. FaceRecognitionSettings
is a request\n parameter for CreateStreamProcessor.
Input face recognition parameters for an Amazon Rekognition stream processor. \n Includes the collection to use for face recognition and the face attributes to detect. \n Defining the settings is required in the request parameter for CreateStreamProcessor.
" } }, "com.amazonaws.rekognition#FaceSearchSortBy": { @@ -3434,7 +3532,7 @@ } }, "traits": { - "smithy.api#documentation": "The predicted gender of a detected face. \n
\n \n \nAmazon Rekognition makes gender binary (male/female) predictions based on the physical appearance\n of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender\n identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor\n wearing a long-haired wig and earrings for a role might be predicted as female.
\n \nUsing Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be \n analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.
\n \nWe don't recommend using gender binary predictions to make decisions that impact\u2028 an individual's rights, privacy, or access to services.
" + "smithy.api#documentation": "The predicted gender of a detected face. \n
\n \n \nAmazon Rekognition makes gender binary (male/female) predictions based on the physical appearance\n of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender\n identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor\n wearing a long-haired wig and earrings for a role might be predicted as female.
\n \nUsing Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be \n analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.
\n \nWe don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services.
" } }, "com.amazonaws.rekognition#GenderType": { @@ -3501,7 +3599,7 @@ } ], "traits": { - "smithy.api#documentation": "Gets the name and additional information about a celebrity based on their Amazon Rekognition ID.\n The additional information is returned as an array of URLs. If there is no additional\n information about the celebrity, this list is empty.
\n \nFor more information, see Recognizing Celebrities in an Image in\n the Amazon Rekognition Developer Guide.
\nThis operation requires permissions to perform the\n rekognition:GetCelebrityInfo
action.
Gets the name and additional information about a celebrity based on their Amazon Rekognition ID.\n The additional information is returned as an array of URLs. If there is no additional\n information about the celebrity, this list is empty.
\n \nFor more information, see Getting information about a celebrity in\n the Amazon Rekognition Developer Guide.
\nThis operation requires permissions to perform the\n rekognition:GetCelebrityInfo
action.
Gets the inappropriate, unwanted, or offensive content analysis results for a Amazon Rekognition Video analysis started by\n StartContentModeration. For a list of moderation labels in Amazon Rekognition, see\n Using the image and video moderation APIs.
\n\nAmazon Rekognition Video inappropriate or offensive content detection in a stored video is an asynchronous operation. You start analysis by calling\n StartContentModeration which returns a job identifier (JobId
).\n When analysis finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service\n topic registered in the initial call to StartContentModeration
.\n To get the results of the content analysis, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetContentModeration
and pass the job identifier\n (JobId
) from the initial call to StartContentModeration
.
For more information, see Working with Stored Videos in the\n Amazon Rekognition Devlopers Guide.
\n\n GetContentModeration
returns detected inappropriate, unwanted, or offensive content moderation labels,\n and the time they are detected, in an array, ModerationLabels
, of\n ContentModerationDetection objects.\n
By default, the moderated labels are returned sorted by time, in milliseconds from the start of the\n video. You can also sort them by moderated label by specifying NAME
for the SortBy
\n input parameter.
Since video analysis can return a large number of results, use the MaxResults
parameter to limit\n the number of labels returned in a single call to GetContentModeration
. If there are more results than\n specified in MaxResults
, the value of NextToken
in the operation response contains a\n pagination token for getting the next set of results. To get the next page of results, call GetContentModeration
\n and populate the NextToken
request parameter with the value of NextToken
\n returned from the previous call to GetContentModeration
.
For more information, see Content moderation in the Amazon Rekognition Developer Guide.
", + "smithy.api#documentation": "Gets the inappropriate, unwanted, or offensive content analysis results for a Amazon Rekognition Video analysis started by\n StartContentModeration. For a list of moderation labels in Amazon Rekognition, see\n Using the image and video moderation APIs.
\n\nAmazon Rekognition Video inappropriate or offensive content detection in a stored video is an asynchronous operation. You start analysis by calling\n StartContentModeration which returns a job identifier (JobId
).\n When analysis finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service\n topic registered in the initial call to StartContentModeration
.\n To get the results of the content analysis, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetContentModeration
and pass the job identifier\n (JobId
) from the initial call to StartContentModeration
.
For more information, see Working with Stored Videos in the\n Amazon Rekognition Developer Guide.
\n\n GetContentModeration
returns detected inappropriate, unwanted, or offensive content moderation labels,\n and the time they are detected, in an array, ModerationLabels
, of\n ContentModerationDetection objects.\n
By default, the moderated labels are returned sorted by time, in milliseconds from the start of the\n video. You can also sort them by moderated label by specifying NAME
for the SortBy
\n input parameter.
Since video analysis can return a large number of results, use the MaxResults
parameter to limit\n the number of labels returned in a single call to GetContentModeration
. If there are more results than\n specified in MaxResults
, the value of NextToken
in the operation response contains a\n pagination token for getting the next set of results. To get the next page of results, call GetContentModeration
\n and populate the NextToken
request parameter with the value of NextToken
\n returned from the previous call to GetContentModeration
.
For more information, see Moderating content in the Amazon Rekognition Developer Guide.
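The NextToken pagination loop described above is a common pattern for all the Get* video operations. A minimal sketch, with a stub standing in for the real Rekognition client (the stub's two pages of data are fabricated):

```python
# Sketch: GetContentModeration pagination - keep calling with the
# returned NextToken until the response no longer includes one.

class StubRekognition:
    """Stand-in for a boto3 rekognition client; returns two canned pages."""
    def __init__(self):
        self._pages = [
            {"JobStatus": "SUCCEEDED",
             "ModerationLabels": [{"Timestamp": 0,
                                   "ModerationLabel": {"Name": "A", "Confidence": 90.0}}],
             "NextToken": "page-2"},
            {"JobStatus": "SUCCEEDED",
             "ModerationLabels": [{"Timestamp": 1000,
                                   "ModerationLabel": {"Name": "B", "Confidence": 80.0}}]},
        ]
        self._i = 0

    def get_content_moderation(self, **kwargs):
        page = self._pages[self._i]
        self._i += 1
        return page

def all_moderation_labels(client, job_id, max_results=1000):
    labels, token = [], None
    while True:
        kwargs = {"JobId": job_id, "MaxResults": max_results}
        if token:
            kwargs["NextToken"] = token
        resp = client.get_content_moderation(**kwargs)
        labels.extend(resp["ModerationLabels"])
        token = resp.get("NextToken")
        if not token:
            return labels

labels = all_moderation_labels(StubRekognition(), "fake-job-id")
```

In production you would first confirm the SUCCEEDED status from the Amazon SNS notification (or by polling JobStatus) before paginating.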
", "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", @@ -4207,7 +4305,7 @@ } ], "traits": { - "smithy.api#documentation": "Gets the segment detection results of a Amazon Rekognition Video analysis started by StartSegmentDetection.
\nSegment detection with Amazon Rekognition Video is an asynchronous operation. You start segment detection by \n calling StartSegmentDetection which returns a job identifier (JobId
).\n When the segment detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service\n topic registered in the initial call to StartSegmentDetection
. To get the results\n of the segment detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED
. \n if so, call GetSegmentDetection
and pass the job identifier (JobId
) from the initial call\n of StartSegmentDetection
.
\n GetSegmentDetection
returns detected segments in an array (Segments
)\n of SegmentDetection objects. Segments
is sorted by the segment types \n specified in the SegmentTypes
input parameter of StartSegmentDetection
. \n Each element of the array includes the detected segment, the precentage confidence in the acuracy \n of the detected segment, the type of the segment, and the frame in which the segment was detected.
Use SelectedSegmentTypes
to find out the type of segment detection requested in the \n call to StartSegmentDetection
.
Use the MaxResults
parameter to limit the number of segment detections returned. If there are more results than \n specified in MaxResults
, the value of NextToken
in the operation response contains\n a pagination token for getting the next set of results. To get the next page of results, call GetSegmentDetection
\n and populate the NextToken
request parameter with the token value returned from the previous \n call to GetSegmentDetection
.
For more information, see Detecting Video Segments in Stored Video in the Amazon Rekognition Developer Guide.
", + "smithy.api#documentation": "Gets the segment detection results of a Amazon Rekognition Video analysis started by StartSegmentDetection.
\nSegment detection with Amazon Rekognition Video is an asynchronous operation. You start segment detection by \n calling StartSegmentDetection which returns a job identifier (JobId
).\n When the segment detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service\n topic registered in the initial call to StartSegmentDetection
. To get the results\n of the segment detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED
. \n if so, call GetSegmentDetection
and pass the job identifier (JobId
) from the initial call\n of StartSegmentDetection
.
\n GetSegmentDetection
returns detected segments in an array (Segments
)\n of SegmentDetection objects. Segments
is sorted by the segment types \n specified in the SegmentTypes
input parameter of StartSegmentDetection
. \n Each element of the array includes the detected segment, the precentage confidence in the acuracy \n of the detected segment, the type of the segment, and the frame in which the segment was detected.
Use SelectedSegmentTypes
to find out the type of segment detection requested in the \n call to StartSegmentDetection
.
Use the MaxResults
parameter to limit the number of segment detections returned. If there are more results than \n specified in MaxResults
, the value of NextToken
in the operation response contains\n a pagination token for getting the next set of results. To get the next page of results, call GetSegmentDetection
\n and populate the NextToken
request parameter with the token value returned from the previous \n call to GetSegmentDetection
.
For more information, see Detecting video segments in stored video in the Amazon Rekognition Developer Guide.
", "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", @@ -4593,7 +4691,7 @@ } }, "traits": { - "smithy.api#documentation": "Provides the input image either as bytes or an S3 object.
\nYou pass image bytes to an Amazon Rekognition API operation by using the Bytes
\n property. For example, you would use the Bytes
property to pass an image loaded\n from a local file system. Image bytes passed by using the Bytes
property must be\n base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to\n call Amazon Rekognition API operations.
For more information, see Analyzing an Image Loaded from a Local File System \n in the Amazon Rekognition Developer Guide.
\n You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the\n S3Object
property. Images stored in an S3 bucket do not need to be\n base64-encoded.
The region for the S3 bucket containing the S3 object must match the region you use for\n Amazon Rekognition operations.
\nIf you use the\n AWS\n CLI to call Amazon Rekognition operations, passing image bytes using the Bytes\n property is not supported. You must first upload the image to an Amazon S3 bucket and then\n call the operation using the S3Object property.
\n \nFor Amazon Rekognition to process an S3 object, the user must have permission to access the S3\n object. For more information, see Resource Based Policies in the Amazon Rekognition Developer Guide.\n
" + "smithy.api#documentation": "Provides the input image either as bytes or an S3 object.
\nYou pass image bytes to an Amazon Rekognition API operation by using the Bytes
\n property. For example, you would use the Bytes
property to pass an image loaded\n from a local file system. Image bytes passed by using the Bytes
property must be\n base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to\n call Amazon Rekognition API operations.
For more information, see Analyzing an Image Loaded from a Local File System \n in the Amazon Rekognition Developer Guide.
\n You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the\n S3Object
property. Images stored in an S3 bucket do not need to be\n base64-encoded.
The region for the S3 bucket containing the S3 object must match the region you use for\n Amazon Rekognition operations.
\nIf you use the\n AWS\n CLI to call Amazon Rekognition operations, passing image bytes using the Bytes\n property is not supported. You must first upload the image to an Amazon S3 bucket and then\n call the operation using the S3Object property.
\n \nFor Amazon Rekognition to process an S3 object, the user must have permission to access the S3\n object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.\n
" } }, "com.amazonaws.rekognition#ImageBlob": { @@ -4648,7 +4746,7 @@ } }, "traits": { - "smithy.api#documentation": "The input image size exceeds the allowed limit. If you are calling\n DetectProtectiveEquipment, the image size or resolution exceeds the allowed limit. For more information, see \n Limits in Amazon Rekognition in the Amazon Rekognition Developer Guide.
", + "smithy.api#documentation": "The input image size exceeds the allowed limit. If you are calling\n DetectProtectiveEquipment, the image size or resolution exceeds the allowed limit. For more information, see \n Guidelines and quotas in Amazon Rekognition in the Amazon Rekognition Developer Guide.
", "smithy.api#error": "client" } }, @@ -4693,7 +4791,7 @@ } ], "traits": { - "smithy.api#documentation": "Detects faces in the input image and adds them to the specified collection.
\nAmazon Rekognition doesn't save the actual faces that are detected. Instead, the underlying\n detection algorithm first detects the faces in the input image. For each face, the algorithm\n extracts facial features into a feature vector, and stores it in the backend database.\n Amazon Rekognition uses feature vectors when it performs face match and search operations using the\n SearchFaces and SearchFacesByImage\n operations.
\n \nFor more information, see Adding Faces to a Collection in the Amazon Rekognition\n Developer Guide.
\nTo get the number of faces in a collection, call DescribeCollection.
\n\nIf you're using version 1.0 of the face detection model, IndexFaces
\n indexes the 15 largest faces in the input image. Later versions of the face detection model\n index the 100 largest faces in the input image.
If you're using version 4 or later of the face model, image orientation information\n is not returned in the OrientationCorrection
field.
To determine which version of the model you're using, call DescribeCollection\n and supply the collection ID. You can also get the model version from the value of FaceModelVersion
in the response\n from IndexFaces
\n
For more information, see Model Versioning in the Amazon Rekognition Developer\n Guide.
\nIf you provide the optional ExternalImageId
for the input image you\n provided, Amazon Rekognition associates this ID with all faces that it detects. When you call the ListFaces operation, the response returns the external ID. You can use this\n external image ID to create a client-side index to associate the faces with each image. You\n can then use the index to find all faces in an image.
You can specify the maximum number of faces to index with the MaxFaces
input\n parameter. This is useful when you want to index the largest faces in an image and don't want to index\n smaller faces, such as those belonging to people standing in the background.
The QualityFilter
input parameter allows you to filter out detected faces\n that don’t meet a required quality bar. The quality bar is based on a\n variety of common use cases. By default, IndexFaces
chooses the quality bar that's \n used to filter faces. You can also explicitly choose\n the quality bar. Use QualityFilter
, to set the quality bar\n by specifying LOW
, MEDIUM
, or HIGH
.\n If you do not want to filter detected faces, specify NONE
.
To use quality filtering, you need a collection associated with version 3 of the \n face model or higher. To get the version of the face model associated with a collection, call \n DescribeCollection.
\nInformation about faces detected in an image, but not indexed, is returned in an array of\n UnindexedFace objects, UnindexedFaces
. Faces aren't\n indexed for reasons such as:
The number of faces detected exceeds the value of the MaxFaces
request\n parameter.
The face is too small compared to the image dimensions.
\nThe face is too blurry.
\nThe image is too dark.
\nThe face has an extreme pose.
\nThe face doesn’t have enough detail to be suitable for face search.
\nIn response, the IndexFaces
operation returns an array of metadata for \n all detected faces, FaceRecords
. This includes:
The bounding box, BoundingBox
, of the detected face.
A confidence value, Confidence
, which indicates the confidence that the\n bounding box contains a face.
A face ID, FaceId
, assigned by the service for each face that's detected\n and stored.
An image ID, ImageId
, assigned by the service for the input image.
If you request all facial attributes (by using the detectionAttributes
\n parameter), Amazon Rekognition returns detailed facial attributes, such as facial landmarks (for\n example, location of eye and mouth) and other facial attributes. If you provide\n the same image, specify the same collection, use the same external ID, and use the same model version in the\n IndexFaces
operation, Amazon Rekognition doesn't save duplicate face metadata.
The input image is passed either as base64-encoded image bytes, or as a reference to an\n image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations,\n passing image bytes isn't supported. The image must be formatted as a PNG or JPEG file.
\nThis operation requires permissions to perform the rekognition:IndexFaces
\n action.
Detects faces in the input image and adds them to the specified collection.
\nAmazon Rekognition doesn't save the actual faces that are detected. Instead, the underlying\n detection algorithm first detects the faces in the input image. For each face, the algorithm\n extracts facial features into a feature vector, and stores it in the backend database.\n Amazon Rekognition uses feature vectors when it performs face match and search operations using the\n SearchFaces and SearchFacesByImage\n operations.
\n \nFor more information, see Adding faces to a collection in the Amazon Rekognition\n Developer Guide.
\nTo get the number of faces in a collection, call DescribeCollection.
\n\nIf you're using version 1.0 of the face detection model, IndexFaces
\n indexes the 15 largest faces in the input image. Later versions of the face detection model\n index the 100 largest faces in the input image.
If you're using version 4 or later of the face model, image orientation information\n is not returned in the OrientationCorrection
field.
To determine which version of the model you're using, call DescribeCollection\n and supply the collection ID. You can also get the model version from the value of FaceModelVersion
in the response\n from IndexFaces
\n
For more information, see Model Versioning in the Amazon Rekognition Developer\n Guide.
\nIf you provide the optional ExternalImageId
for the input image you\n provided, Amazon Rekognition associates this ID with all faces that it detects. When you call the ListFaces operation, the response returns the external ID. You can use this\n external image ID to create a client-side index to associate the faces with each image. You\n can then use the index to find all faces in an image.
You can specify the maximum number of faces to index with the MaxFaces
input\n parameter. This is useful when you want to index the largest faces in an image and don't want to index\n smaller faces, such as those belonging to people standing in the background.
The QualityFilter
input parameter allows you to filter out detected faces\n that don’t meet a required quality bar. The quality bar is based on a\n variety of common use cases. By default, IndexFaces
chooses the quality bar that's \n used to filter faces. You can also explicitly choose\n the quality bar. Use QualityFilter
, to set the quality bar\n by specifying LOW
, MEDIUM
, or HIGH
.\n If you do not want to filter detected faces, specify NONE
.
To use quality filtering, you need a collection associated with version 3 of the \n face model or higher. To get the version of the face model associated with a collection, call \n DescribeCollection.
\nInformation about faces detected in an image, but not indexed, is returned in an array of\n UnindexedFace objects, UnindexedFaces
. Faces aren't\n indexed for reasons such as:
The number of faces detected exceeds the value of the MaxFaces
request\n parameter.
The face is too small compared to the image dimensions.
\nThe face is too blurry.
\nThe image is too dark.
\nThe face has an extreme pose.
\nThe face doesn’t have enough detail to be suitable for face search.
\nIn response, the IndexFaces
operation returns an array of metadata for \n all detected faces, FaceRecords
. This includes:
The bounding box, BoundingBox
, of the detected face.
A confidence value, Confidence
, which indicates the confidence that the\n bounding box contains a face.
A face ID, FaceId
, assigned by the service for each face that's detected\n and stored.
An image ID, ImageId
, assigned by the service for the input image.
If you request all facial attributes (by using the detectionAttributes
\n parameter), Amazon Rekognition returns detailed facial attributes, such as facial landmarks (for\n example, location of eye and mouth) and other facial attributes. If you provide\n the same image, specify the same collection, and use the same external ID in the\n IndexFaces
operation, Amazon Rekognition doesn't save duplicate face metadata.
The input image is passed either as base64-encoded image bytes, or as a reference to an\n image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations,\n passing image bytes isn't supported. The image must be formatted as a PNG or JPEG file.
\nThis operation requires permissions to perform the rekognition:IndexFaces
\n action.
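The UnindexedFaces array described above explains why particular faces were skipped; tallying those reasons is a useful diagnostic. A sketch with a fabricated response fragment (the reason strings shown are illustrative of the criteria listed above):

```python
# Sketch: counting why faces were not indexed by an IndexFaces call,
# based on the UnindexedFaces portion of the response.
from collections import Counter

def unindexed_reasons(response):
    counts = Counter()
    for face in response.get("UnindexedFaces", []):
        counts.update(face.get("Reasons", []))
    return counts

sample = {
    "FaceRecords": [{"Face": {}}],  # one face was indexed
    "UnindexedFaces": [
        {"Reasons": ["EXCEEDS_MAX_FACES"]},
        {"Reasons": ["LOW_SHARPNESS", "LOW_BRIGHTNESS"]},
    ],
}

counts = unindexed_reasons(sample)
```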
Latest face model being used with the collection. For more information, see Model versioning.
" + "smithy.api#documentation": "The version number of the face detection model that's associated with the input\n collection (CollectionId
).
Kinesis video stream stream that provides the source streaming video for a Amazon Rekognition Video stream processor. For more information, see\n CreateStreamProcessor in the Amazon Rekognition Developer Guide.
" } }, + "com.amazonaws.rekognition#KinesisVideoStreamFragmentNumber": { + "type": "string", + "traits": { + "smithy.api#length": { + "min": 1, + "max": 128 + }, + "smithy.api#pattern": "^[0-9]+$" + } + }, + "com.amazonaws.rekognition#KinesisVideoStreamStartSelector": { + "type": "structure", + "members": { + "ProducerTimestamp": { + "target": "com.amazonaws.rekognition#ULong", + "traits": { + "smithy.api#documentation": "\n The timestamp from the producer corresponding to the fragment.\n
" + } + }, + "FragmentNumber": { + "target": "com.amazonaws.rekognition#KinesisVideoStreamFragmentNumber", + "traits": { + "smithy.api#documentation": "\n The unique identifier of the fragment. This value monotonically increases based on the ingestion order.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\n Specifies the starting point in a Kinesis stream to start processing. \n You can use the producer timestamp or the fragment number.\n For more information, see Fragment. \n
" + } + }, "com.amazonaws.rekognition#KmsKeyId": { "type": "string", "traits": { @@ -5319,7 +5447,7 @@ } ], "traits": { - "smithy.api#documentation": "Returns list of collection IDs in your account.\n If the result is truncated, the response also provides a NextToken
\n that you can use in the subsequent request to fetch the next set of collection IDs.
For an example, see Listing Collections in the Amazon Rekognition Developer Guide.
\nThis operation requires permissions to perform the rekognition:ListCollections
action.
Returns a list of collection IDs in your account.\n If the result is truncated, the response also provides a NextToken
\n that you can use in the subsequent request to fetch the next set of collection IDs.
For an example, see Listing collections in the Amazon Rekognition Developer Guide.
\nThis operation requires permissions to perform the rekognition:ListCollections
action.
Latest face models being used with the corresponding collections in the array. For more information, see Model versioning. \n For example, the value of FaceModelVersions[2]
is the version number for the face detection model used\n by the collection in CollectionId[2]
.
Version numbers of the face detection models associated with the collections in the array CollectionIds
.\n For example, the value of FaceModelVersions[2]
is the version number for the face detection model used\n by the collection in CollectionId[2]
.
Latest face model being used with the collection. For more information, see Model versioning.
" + "smithy.api#documentation": "Version number of the face detection model associated with the input collection (CollectionId
).
The Amazon SNS topic to which Amazon Rekognition to posts the completion status.
", + "smithy.api#documentation": "The Amazon SNS topic to which Amazon Rekognition posts the completion status.
", "smithy.api#required": {} } }, @@ -5941,7 +6079,7 @@ } }, "traits": { - "smithy.api#documentation": "The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see\n api-video. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.\n For more information, see Giving access to multiple Amazon SNS topics.
" + "smithy.api#documentation": "The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see\n Calling Amazon Rekognition Video operations. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.\n For more information, see Giving access to multiple Amazon SNS topics.
" } }, "com.amazonaws.rekognition#OrientationCorrection": { @@ -6155,7 +6293,7 @@ } }, "traits": { - "smithy.api#documentation": "The X and Y coordinates of a point on an image. The X and Y values returned are ratios\n of the overall image size. For example, if the input image is 700x200 and the \n operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
\n \nAn array of Point
objects,\n Polygon
, is returned by DetectText and by DetectCustomLabels. Polygon
\n represents a fine-grained polygon around a detected item. For more information, see Geometry in the\n Amazon Rekognition Developer Guide.
The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios\n of the overall image size or video resolution. For example, if an input image is 700x200 and the \n values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
\n \nAn array of Point
objects makes up a Polygon
.\n A Polygon
 is returned by DetectText and by DetectCustomLabels.\n Polygon
\n represents a fine-grained polygon around a detected item. For more information, see Geometry in the\n Amazon Rekognition Developer Guide.
Returns an array of celebrities recognized in the input image. For more information, see Recognizing Celebrities\n in the Amazon Rekognition Developer Guide.
\n\n RecognizeCelebrities
returns the 64 largest faces in the image. It lists the\n recognized celebrities in the CelebrityFaces
array and any unrecognized faces in\n the UnrecognizedFaces
array. RecognizeCelebrities
doesn't return\n celebrities whose faces aren't among the largest 64 faces in the image.
For each celebrity recognized, RecognizeCelebrities
returns a\n Celebrity
object. The Celebrity
object contains the celebrity\n name, ID, URL links to additional information, match confidence, and a\n ComparedFace
object that you can use to locate the celebrity's face on the\n image.
Amazon Rekognition doesn't retain information about which images a celebrity has been recognized\n in. Your application must store this information and use the Celebrity
ID\n property as a unique identifier for the celebrity. If you don't store the celebrity name or\n additional information URLs returned by RecognizeCelebrities
, you will need the\n ID to identify the celebrity in a call to the GetCelebrityInfo\n operation.
You pass the input image either as base64-encoded image bytes or as a reference to an\n image in an Amazon S3 bucket. If you use the\n AWS\n CLI to call Amazon Rekognition operations, passing image bytes is not\n supported. The image must be either a PNG or JPEG formatted file.
\n\n\n\n \nFor an example, see Recognizing Celebrities in an Image in the Amazon Rekognition Developer Guide.
\nThis operation requires permissions to perform the\n rekognition:RecognizeCelebrities
operation.
Returns an array of celebrities recognized in the input image. For more information, see Recognizing celebrities\n in the Amazon Rekognition Developer Guide.
\n\n RecognizeCelebrities
returns the 64 largest faces in the image. It lists the\n recognized celebrities in the CelebrityFaces
array and any unrecognized faces in\n the UnrecognizedFaces
array. RecognizeCelebrities
doesn't return\n celebrities whose faces aren't among the largest 64 faces in the image.
For each celebrity recognized, RecognizeCelebrities
returns a\n Celebrity
object. The Celebrity
object contains the celebrity\n name, ID, URL links to additional information, match confidence, and a\n ComparedFace
object that you can use to locate the celebrity's face on the\n image.
Amazon Rekognition doesn't retain information about which images a celebrity has been recognized\n in. Your application must store this information and use the Celebrity
ID\n property as a unique identifier for the celebrity. If you don't store the celebrity name or\n additional information URLs returned by RecognizeCelebrities
, you will need the\n ID to identify the celebrity in a call to the GetCelebrityInfo\n operation.
You pass the input image either as base64-encoded image bytes or as a reference to an\n image in an Amazon S3 bucket. If you use the\n AWS\n CLI to call Amazon Rekognition operations, passing image bytes is not\n supported. The image must be either a PNG or JPEG formatted file.
\n\n\n\n \nFor an example, see Recognizing celebrities in an image in the Amazon Rekognition Developer Guide.
\nThis operation requires permissions to perform the\n rekognition:RecognizeCelebrities
operation.
The box representing a region of interest on screen.
" } + }, + "Polygon": { + "target": "com.amazonaws.rekognition#Polygon", + "traits": { + "smithy.api#documentation": "\n Specifies a shape made up of up to 10 Point
objects to define a region of interest.\n
Specifies a location within the frame that Rekognition checks for text. Uses a BoundingBox
\n object to set a region of the screen.
A word is included in the region if the word is more than half in that region. If there is more than\n one region, the word will be compared with all regions of the screen. Any word more than half in a region\n is kept in the results.
" + "smithy.api#documentation": "Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a BoundingBox
\n or Polygon
to set a region of the screen.
A word, face, or label is included in the region if it is more than half in that region. If there is more than\n one region, the word, face, or label is compared with all regions of the screen. Any object of interest that is more than half in a region\n is kept in the results.
" } }, "com.amazonaws.rekognition#RegionsOfInterest": { @@ -6799,7 +6943,7 @@ "name": "rekognition" }, "aws.protocols#awsJson1_1": {}, - "smithy.api#documentation": "This is the Amazon Rekognition API reference.
", + "smithy.api#documentation": "This is the API Reference for Amazon Rekognition Image, \n Amazon Rekognition Custom Labels,\n Amazon Rekognition Stored Video, \n Amazon Rekognition Streaming Video.\n It provides descriptions of actions, data types, common parameters,\n and common errors.
\n\n\n Amazon Rekognition Image \n
\n\n\n Amazon Rekognition Custom Labels \n
\n\n Amazon Rekognition Video Stored Video \n
\n\n\n Amazon Rekognition Video Streaming Video \n
\n\n\n The name of the Amazon S3 bucket you want to associate with the streaming video project. You must be the owner of the Amazon S3 bucket.\n
" + } + }, + "KeyPrefix": { + "target": "com.amazonaws.rekognition#S3KeyPrefix", + "traits": { + "smithy.api#documentation": "\n The prefix value of the location within the bucket that you want the information to be published to. \n For more information, see Using prefixes.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\n The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation.\n These results include the name of the stream processor resource, the session ID of the stream processing session, \n and labeled timestamps and bounding boxes for detected labels. \n
" + } + }, "com.amazonaws.rekognition#S3KeyPrefix": { "type": "string", "traits": { @@ -7127,7 +7294,7 @@ } }, "traits": { - "smithy.api#documentation": "Provides the S3 bucket name and object name.
\nThe region for the S3 bucket containing the S3 object must match the region you use for\n Amazon Rekognition operations.
\n \nFor Amazon Rekognition to process an S3 object, the user must have permission to\n access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition\n Developer Guide.
" + "smithy.api#documentation": "Provides the S3 bucket name and object name.
\nThe region for the S3 bucket containing the S3 object must match the region you use for\n Amazon Rekognition operations.
\n \nFor Amazon Rekognition to process an S3 object, the user must have permission to\n access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition\n Developer Guide.
" } }, "com.amazonaws.rekognition#S3ObjectName": { @@ -7183,7 +7350,7 @@ } ], "traits": { - "smithy.api#documentation": "For a given input face ID, searches for matching faces in the collection the face\n belongs to. You get a face ID when you add a face to the collection using the IndexFaces operation. The operation compares the features of the input face with\n faces in the specified collection.
\nYou can also search faces without indexing faces by using the\n SearchFacesByImage
operation.
\n The operation response returns\n an array of faces that match, ordered by similarity score with the highest\n similarity first. More specifically, it is an\n array of metadata for each face match that is found. Along with the metadata, the response also\n includes a confidence
value for each face match, indicating the confidence\n that the specific face matches the input face.\n
For an example, see Searching for a Face Using Its Face ID in the Amazon Rekognition Developer Guide.
\n\nThis operation requires permissions to perform the rekognition:SearchFaces
\n action.
For a given input face ID, searches for matching faces in the collection the face\n belongs to. You get a face ID when you add a face to the collection using the IndexFaces operation. The operation compares the features of the input face with\n faces in the specified collection.
\nYou can also search faces without indexing faces by using the\n SearchFacesByImage
operation.
\n The operation response returns\n an array of faces that match, ordered by similarity score with the highest\n similarity first. More specifically, it is an\n array of metadata for each face match that is found. Along with the metadata, the response also\n includes a confidence
value for each face match, indicating the confidence\n that the specific face matches the input face.\n
For an example, see Searching for a face using its face ID in the Amazon Rekognition Developer Guide.
\n\nThis operation requires permissions to perform the rekognition:SearchFaces
\n action.
Latest face model being used with the collection. For more information, see Model versioning.
" + "smithy.api#documentation": "Version number of the face detection model associated with the input collection (CollectionId
).
Latest face model being used with the collection. For more information, see Model versioning.
" + "smithy.api#documentation": "Version number of the face detection model associated with the input collection (CollectionId
).
The size of the collection exceeds the allowed limit. For more information, see \n Limits in Amazon Rekognition in the Amazon Rekognition Developer Guide.
", + "smithy.api#documentation": "\n \n \nThe size of the collection exceeds the allowed limit. For more information, see \n Guidelines and quotas in Amazon Rekognition in the Amazon Rekognition Developer Guide.
", "smithy.api#error": "client" } }, @@ -7594,7 +7761,7 @@ } ], "traits": { - "smithy.api#documentation": "Starts asynchronous recognition of celebrities in a stored video.
\nAmazon Rekognition Video can detect celebrities in a video must be stored in an Amazon S3 bucket. Use Video to specify the bucket name\n and the filename of the video.\n StartCelebrityRecognition
\n returns a job identifier (JobId
) which you use to get the results of the analysis.\n When celebrity recognition analysis is finished, Amazon Rekognition Video publishes a completion status\n to the Amazon Simple Notification Service topic that you specify in NotificationChannel
.\n To get the results of the celebrity recognition analysis, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetCelebrityRecognition and pass the job identifier\n (JobId
) from the initial call to StartCelebrityRecognition
.
For more information, see Recognizing Celebrities in the Amazon Rekognition Developer Guide.
", + "smithy.api#documentation": "Starts asynchronous recognition of celebrities in a stored video.
\nAmazon Rekognition Video can detect celebrities in a video must be stored in an Amazon S3 bucket. Use Video to specify the bucket name\n and the filename of the video.\n StartCelebrityRecognition
\n returns a job identifier (JobId
) which you use to get the results of the analysis.\n When celebrity recognition analysis is finished, Amazon Rekognition Video publishes a completion status\n to the Amazon Simple Notification Service topic that you specify in NotificationChannel
.\n To get the results of the celebrity recognition analysis, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetCelebrityRecognition and pass the job identifier\n (JobId
) from the initial call to StartCelebrityRecognition
.
For more information, see Recognizing celebrities in the Amazon Rekognition Developer Guide.
", "smithy.api#idempotent": {} } }, @@ -7677,7 +7844,7 @@ } ], "traits": { - "smithy.api#documentation": "Starts asynchronous detection of inappropriate, unwanted, or offensive content in a stored video. For a list of moderation labels in Amazon Rekognition, see\n Using the image and video moderation APIs.
\nAmazon Rekognition Video can moderate content in a video stored in an Amazon S3 bucket. Use Video to specify the bucket name\n and the filename of the video. StartContentModeration
\n returns a job identifier (JobId
) which you use to get the results of the analysis.\n When content analysis is finished, Amazon Rekognition Video publishes a completion status\n to the Amazon Simple Notification Service topic that you specify in NotificationChannel
.
To get the results of the content analysis, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetContentModeration and pass the job identifier\n (JobId
) from the initial call to StartContentModeration
.
For more information, see Content moderation in the Amazon Rekognition Developer Guide.
", + "smithy.api#documentation": "Starts asynchronous detection of inappropriate, unwanted, or offensive content in a stored video. For a list of moderation labels in Amazon Rekognition, see\n Using the image and video moderation APIs.
\nAmazon Rekognition Video can moderate content in a video stored in an Amazon S3 bucket. Use Video to specify the bucket name\n and the filename of the video. StartContentModeration
\n returns a job identifier (JobId
) which you use to get the results of the analysis.\n When content analysis is finished, Amazon Rekognition Video publishes a completion status\n to the Amazon Simple Notification Service topic that you specify in NotificationChannel
.
To get the results of the content analysis, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetContentModeration and pass the job identifier\n (JobId
) from the initial call to StartContentModeration
.
For more information, see Moderating content in the Amazon Rekognition Developer Guide.
", "smithy.api#idempotent": {} } }, @@ -7766,7 +7933,7 @@ } ], "traits": { - "smithy.api#documentation": "Starts asynchronous detection of faces in a stored video.
\nAmazon Rekognition Video can detect faces in a video stored in an Amazon S3 bucket.\n Use Video to specify the bucket name and the filename of the video.\n StartFaceDetection
returns a job identifier (JobId
) that you\n use to get the results of the operation.\n When face detection is finished, Amazon Rekognition Video publishes a completion status\n to the Amazon Simple Notification Service topic that you specify in NotificationChannel
.\n To get the results of the face detection operation, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetFaceDetection and pass the job identifier\n (JobId
) from the initial call to StartFaceDetection
.
For more information, see Detecting Faces in a Stored Video in the \n Amazon Rekognition Developer Guide.
", + "smithy.api#documentation": "Starts asynchronous detection of faces in a stored video.
\nAmazon Rekognition Video can detect faces in a video stored in an Amazon S3 bucket.\n Use Video to specify the bucket name and the filename of the video.\n StartFaceDetection
returns a job identifier (JobId
) that you\n use to get the results of the operation.\n When face detection is finished, Amazon Rekognition Video publishes a completion status\n to the Amazon Simple Notification Service topic that you specify in NotificationChannel
.\n To get the results of the face detection operation, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetFaceDetection and pass the job identifier\n (JobId
) from the initial call to StartFaceDetection
.
For more information, see Detecting faces in a stored video in the \n Amazon Rekognition Developer Guide.
", "smithy.api#idempotent": {} } }, @@ -7858,7 +8025,7 @@ } ], "traits": { - "smithy.api#documentation": "Starts the asynchronous search for faces in a collection that match the faces of persons detected in a stored video.
\nThe video must be stored in an Amazon S3 bucket. Use Video to specify the bucket name\n and the filename of the video. StartFaceSearch
\n returns a job identifier (JobId
) which you use to get the search results once the search has completed.\n When searching is finished, Amazon Rekognition Video publishes a completion status\n to the Amazon Simple Notification Service topic that you specify in NotificationChannel
.\n To get the search results, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetFaceSearch and pass the job identifier\n (JobId
) from the initial call to StartFaceSearch
. For more information, see\n procedure-person-search-videos.
Starts the asynchronous search for faces in a collection that match the faces of persons detected in a stored video.
\nThe video must be stored in an Amazon S3 bucket. Use Video to specify the bucket name\n and the filename of the video. StartFaceSearch
\n returns a job identifier (JobId
) which you use to get the search results once the search has completed.\n When searching is finished, Amazon Rekognition Video publishes a completion status\n to the Amazon Simple Notification Service topic that you specify in NotificationChannel
.\n To get the search results, first check that the status value published to the Amazon SNS\n topic is SUCCEEDED
. If so, call GetFaceSearch and pass the job identifier\n (JobId
) from the initial call to StartFaceSearch
. For more information, see\n Searching stored videos for faces.\n
Starts asynchronous detection of segment detection in a stored video.
\nAmazon Rekognition Video can detect segments in a video stored in an Amazon S3 bucket. Use Video to specify the bucket name and \n the filename of the video. StartSegmentDetection
returns a job identifier (JobId
) which you use to get \n the results of the operation. When segment detection is finished, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic\n that you specify in NotificationChannel
.
You can use the Filters
(StartSegmentDetectionFilters) \n input parameter to specify the minimum detection confidence returned in the response. \n Within Filters
, use ShotFilter
(StartShotDetectionFilter)\n to filter detected shots. Use TechnicalCueFilter
(StartTechnicalCueDetectionFilter)\n to filter technical cues.
To get the results of the segment detection operation, first check that the status value published to the Amazon SNS \n topic is SUCCEEDED
. if so, call GetSegmentDetection and pass the job identifier (JobId
) \n from the initial call to StartSegmentDetection
.
For more information, see Detecting Video Segments in Stored Video in the Amazon Rekognition Developer Guide.
", + "smithy.api#documentation": "Starts asynchronous detection of segment detection in a stored video.
\nAmazon Rekognition Video can detect segments in a video stored in an Amazon S3 bucket. Use Video to specify the bucket name and \n the filename of the video. StartSegmentDetection
returns a job identifier (JobId
) which you use to get \n the results of the operation. When segment detection is finished, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic\n that you specify in NotificationChannel
.
You can use the Filters
(StartSegmentDetectionFilters) \n input parameter to specify the minimum detection confidence returned in the response. \n Within Filters
, use ShotFilter
(StartShotDetectionFilter)\n to filter detected shots. Use TechnicalCueFilter
(StartTechnicalCueDetectionFilter)\n to filter technical cues.
To get the results of the segment detection operation, first check that the status value published to the Amazon SNS \n topic is SUCCEEDED
. If so, call GetSegmentDetection and pass the job identifier (JobId
) \n from the initial call to StartSegmentDetection
.
For more information, see Detecting video segments in stored video in the Amazon Rekognition Developer Guide.
", "smithy.api#idempotent": {} } }, @@ -8317,7 +8484,7 @@ } ], "traits": { - "smithy.api#documentation": "Starts processing a stream processor. You create a stream processor by calling CreateStreamProcessor.\n To tell StartStreamProcessor
which stream processor to start, use the value of the Name
field specified in the call to\n CreateStreamProcessor
.
Starts processing a stream processor. You create a stream processor by calling CreateStreamProcessor.\n To tell StartStreamProcessor
which stream processor to start, use the value of the Name
field specified in the call to\n CreateStreamProcessor
.
If you are using a label detection stream processor to detect labels, you need to provide a Start selector
and a Stop selector
to determine the length of the stream processing time.
The name of the stream processor to start processing.
", "smithy.api#required": {} } + }, + "StartSelector": { + "target": "com.amazonaws.rekognition#StreamProcessingStartSelector", + "traits": { + "smithy.api#documentation": "\n Specifies the starting point in the Kinesis stream to start processing. \n You can use the producer timestamp or the fragment number. \n For more information, see Fragment. \n
\nThis is a required parameter for label detection stream processors and should not be used to start a face search stream processor.
" + } + }, + "StopSelector": { + "target": "com.amazonaws.rekognition#StreamProcessingStopSelector", + "traits": { + "smithy.api#documentation": "\n Specifies when to stop processing the stream. You can specify a \n maximum amount of time to process the video. \n
\nThis is a required parameter for label detection stream processors and should not be used to start a face search stream processor.
" + } } } }, "com.amazonaws.rekognition#StartStreamProcessorResponse": { "type": "structure", - "members": {} + "members": { + "SessionId": { + "target": "com.amazonaws.rekognition#StartStreamProcessorSessionId", + "traits": { + "smithy.api#documentation": "\n A unique identifier for the stream processing session. \n
" + } + } + } + }, + "com.amazonaws.rekognition#StartStreamProcessorSessionId": { + "type": "string" }, "com.amazonaws.rekognition#StartTechnicalCueDetectionFilter": { "type": "structure", @@ -8573,6 +8762,34 @@ "type": "structure", "members": {} }, + "com.amazonaws.rekognition#StreamProcessingStartSelector": { + "type": "structure", + "members": { + "KVSStreamStartSelector": { + "target": "com.amazonaws.rekognition#KinesisVideoStreamStartSelector", + "traits": { + "smithy.api#documentation": "\n Specifies the starting point in the stream to start processing. This can be done with a timestamp or a fragment number in a Kinesis stream.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "" + } + }, + "com.amazonaws.rekognition#StreamProcessingStopSelector": { + "type": "structure", + "members": { + "MaxDurationInSeconds": { + "target": "com.amazonaws.rekognition#MaxDurationInSecondsULong", + "traits": { + "smithy.api#documentation": "\n Specifies the maximum amount of time in seconds that you want the stream to be processed. The largest amount of time is 2 minutes. The default is 10 seconds.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\n Specifies when to stop processing the stream. You can specify a maximum amount\n of time to process the video. \n
" + } + }, "com.amazonaws.rekognition#StreamProcessor": { "type": "structure", "members": { @@ -8590,7 +8807,7 @@ } }, "traits": { - "smithy.api#documentation": "An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request\n parameters for CreateStreamProcessor
describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis resullts.\n\n
An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request\n parameters for CreateStreamProcessor
describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis resullts.\n\n
\n If this option is set to true, you choose to share data with Rekognition to improve model performance.\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "\n Allows you to opt in or opt out to share data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis.\n Note that if you opt out at the account level this setting is ignored on individual streams.\n \n
" + } + }, "com.amazonaws.rekognition#StreamProcessorInput": { "type": "structure", "members": { @@ -8629,6 +8861,21 @@ "smithy.api#pattern": "^[a-zA-Z0-9_.\\-]+$" } }, + "com.amazonaws.rekognition#StreamProcessorNotificationChannel": { + "type": "structure", + "members": { + "SNSTopicArn": { + "target": "com.amazonaws.rekognition#SNSTopicArn", + "traits": { + "smithy.api#documentation": "\n The Amazon Resource Number (ARN) of the Amazon Amazon Simple Notification Service topic to which Amazon Rekognition posts the completion status.\n
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.
\nAmazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. For example, if Amazon Rekognition\n detects a person at second 2, a pet at second 4, and a person again at second 5, Amazon Rekognition sends 2 object class detected notifications,\n one for a person at second 2 and one for a pet at second 4.
\nAmazon Rekognition also publishes an an end-of-session notification with a summary when the stream processing session is complete.
" + } + }, "com.amazonaws.rekognition#StreamProcessorOutput": { "type": "structure", "members": { @@ -8637,12 +8884,39 @@ "traits": { "smithy.api#documentation": "The Amazon Kinesis Data Streams stream to which the Amazon Rekognition stream processor streams the analysis results.
" } + }, + "S3Destination": { + "target": "com.amazonaws.rekognition#S3Destination", + "traits": { + "smithy.api#documentation": "\n The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation.\n
" + } } }, "traits": { "smithy.api#documentation": "Information about the Amazon Kinesis Data Streams stream to which a Amazon Rekognition Video stream processor streams the results of a video analysis. For more\n information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
" } }, + "com.amazonaws.rekognition#StreamProcessorParameterToDelete": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "ConnectedHomeMinConfidence", + "name": "ConnectedHomeMinConfidence" + }, + { + "value": "RegionsOfInterest", + "name": "RegionsOfInterest" + } + ] + } + }, + "com.amazonaws.rekognition#StreamProcessorParametersToDelete": { + "type": "list", + "member": { + "target": "com.amazonaws.rekognition#StreamProcessorParameterToDelete" + } + }, "com.amazonaws.rekognition#StreamProcessorSettings": { "type": "structure", "members": { @@ -8651,10 +8925,27 @@ "traits": { "smithy.api#documentation": "Face search settings to use on a streaming video.
" } + }, + "ConnectedHome": { + "target": "com.amazonaws.rekognition#ConnectedHomeSettings" } }, "traits": { - "smithy.api#documentation": "Input parameters used to recognize faces in a streaming video analyzed by a Amazon Rekognition stream processor.
" + "smithy.api#documentation": "Input parameters used in a streaming video analyzed by a Amazon Rekognition stream processor. \n You can use FaceSearch
to recognize faces in a streaming video, or you can use ConnectedHome
to detect labels.
\n The label detection settings you want to use for your stream processor.\n
" + } + } + }, + "traits": { + "smithy.api#documentation": "\n The stream processor settings that you want to update. ConnectedHome
settings can be updated to detect different labels with a different minimum confidence.\n
Information about a word or line of text detected by DetectText.
\nThe DetectedText
field contains the text that Amazon Rekognition detected in the\n image.
Every word and line has an identifier (Id
). Each word belongs to a line\n and has a parent identifier (ParentId
) that identifies the line of text in which\n the word appears. The word Id
is also an index for the word within a line of\n words.
For more information, see Detecting Text in the Amazon Rekognition Developer Guide.
" + "smithy.api#documentation": "Information about a word or line of text detected by DetectText.
\nThe DetectedText
field contains the text that Amazon Rekognition detected in the\n image.
Every word and line has an identifier (Id
). Each word belongs to a line\n and has a parent identifier (ParentId
) that identifies the line of text in which\n the word appears. The word Id
is also an index for the word within a line of\n words.
For more information, see Detecting text in the Amazon Rekognition Developer Guide.
" } }, "com.amazonaws.rekognition#TextDetectionList": { @@ -9252,6 +9547,78 @@ "type": "structure", "members": {} }, + "com.amazonaws.rekognition#UpdateStreamProcessor": { + "type": "operation", + "input": { + "target": "com.amazonaws.rekognition#UpdateStreamProcessorRequest" + }, + "output": { + "target": "com.amazonaws.rekognition#UpdateStreamProcessorResponse" + }, + "errors": [ + { + "target": "com.amazonaws.rekognition#AccessDeniedException" + }, + { + "target": "com.amazonaws.rekognition#InternalServerError" + }, + { + "target": "com.amazonaws.rekognition#InvalidParameterException" + }, + { + "target": "com.amazonaws.rekognition#ProvisionedThroughputExceededException" + }, + { + "target": "com.amazonaws.rekognition#ResourceNotFoundException" + }, + { + "target": "com.amazonaws.rekognition#ThrottlingException" + } + ], + "traits": { + "smithy.api#documentation": "\n Allows you to update a stream processor. You can change some settings and regions of interest and delete certain parameters.\n
" + } + }, + "com.amazonaws.rekognition#UpdateStreamProcessorRequest": { + "type": "structure", + "members": { + "Name": { + "target": "com.amazonaws.rekognition#StreamProcessorName", + "traits": { + "smithy.api#documentation": "\n Name of the stream processor that you want to update.\n
", + "smithy.api#required": {} + } + }, + "SettingsForUpdate": { + "target": "com.amazonaws.rekognition#StreamProcessorSettingsForUpdate", + "traits": { + "smithy.api#documentation": "\n The stream processor settings that you want to update. Label detection settings can be updated to detect different labels with a different minimum confidence.\n
" + } + }, + "RegionsOfInterestForUpdate": { + "target": "com.amazonaws.rekognition#RegionsOfInterest", + "traits": { + "smithy.api#documentation": "\n Specifies locations in the frames where Amazon Rekognition checks for objects or people. This is an optional parameter for label detection stream processors.\n
" + } + }, + "DataSharingPreferenceForUpdate": { + "target": "com.amazonaws.rekognition#StreamProcessorDataSharingPreference", + "traits": { + "smithy.api#documentation": "\n Shows whether you are sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis.\n Note that if you opt out at the account level this setting is ignored on individual streams.\n
" + } + }, + "ParametersToDelete": { + "target": "com.amazonaws.rekognition#StreamProcessorParametersToDelete", + "traits": { + "smithy.api#documentation": "\n A list of parameters you want to delete from the stream processor.\n
" + } + } + } + }, + "com.amazonaws.rekognition#UpdateStreamProcessorResponse": { + "type": "structure", + "members": {} + }, "com.amazonaws.rekognition#Url": { "type": "string" }, diff --git a/aws/sdk/aws-models/sagemaker.json b/aws/sdk/aws-models/sagemaker.json index fae06a2008..9c07492f05 100644 --- a/aws/sdk/aws-models/sagemaker.json +++ b/aws/sdk/aws-models/sagemaker.json @@ -240,7 +240,7 @@ "target": "com.amazonaws.sagemaker#AddTagsOutput" }, "traits": { - "smithy.api#documentation": "Adds or overwrites one or more tags for the specified Amazon SageMaker resource. You can add\n tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform\n jobs, models, labeling jobs, work teams, endpoint configurations, and\n endpoints.
\nEach tag consists of a key and an optional value. Tag keys must be unique per\n resource. For more information about tags, see For more information, see Amazon Web Services\n Tagging Strategies.
\nTags that you add to a hyperparameter tuning job by calling this API are also\n added to any training jobs that the hyperparameter tuning job launches after you\n call this API, but not to training jobs that the hyperparameter tuning job launched\n before you called this API. To make sure that the tags associated with a\n hyperparameter tuning job are also added to all training jobs that the\n hyperparameter tuning job launches, add the tags when you first create the tuning\n job by specifying them in the Tags
parameter of CreateHyperParameterTuningJob\n
Tags that you add to a SageMaker Studio Domain or User Profile by calling this API\n are also added to any Apps that the Domain or User Profile launches after you call\n this API, but not to Apps that the Domain or User Profile launched before you called\n this API. To make sure that the tags associated with a Domain or User Profile are\n also added to all Apps that the Domain or User Profile launches, add the tags when\n you first create the Domain or User Profile by specifying them in the\n Tags
parameter of CreateDomain or CreateUserProfile.
Adds or overwrites one or more tags for the specified SageMaker resource. You can add\n tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform\n jobs, models, labeling jobs, work teams, endpoint configurations, and\n endpoints.
\nEach tag consists of a key and an optional value. Tag keys must be unique per\n resource. For more information about tags, see For more information, see Amazon Web Services\n Tagging Strategies.
\nTags that you add to a hyperparameter tuning job by calling this API are also\n added to any training jobs that the hyperparameter tuning job launches after you\n call this API, but not to training jobs that the hyperparameter tuning job launched\n before you called this API. To make sure that the tags associated with a\n hyperparameter tuning job are also added to all training jobs that the\n hyperparameter tuning job launches, add the tags when you first create the tuning\n job by specifying them in the Tags
parameter of CreateHyperParameterTuningJob\n
Tags that you add to a SageMaker Studio Domain or User Profile by calling this API\n are also added to any Apps that the Domain or User Profile launches after you call\n this API, but not to Apps that the Domain or User Profile launched before you called\n this API. To make sure that the tags associated with a Domain or User Profile are\n also added to all Apps that the Domain or User Profile launches, add the tags when\n you first create the Domain or User Profile by specifying them in the\n Tags
parameter of CreateDomain or CreateUserProfile.
A list of tags associated with the Amazon SageMaker resource.
" + "smithy.api#documentation": "A list of tags associated with the SageMaker resource.
" } } } @@ -454,7 +454,7 @@ "TrainingImage": { "target": "com.amazonaws.sagemaker#AlgorithmImage", "traits": { - "smithy.api#documentation": "The registry path of the Docker image\n that contains the training algorithm.\n For information about docker registry paths for built-in algorithms, see Algorithms\n Provided by Amazon SageMaker: Common Parameters. Amazon SageMaker supports both\n registry/repository[:tag]
and registry/repository[@digest]
\n image path formats. For more information, see Using Your Own Algorithms with Amazon\n SageMaker.
The registry path of the Docker image\n that contains the training algorithm.\n For information about docker registry paths for built-in algorithms, see Algorithms\n Provided by Amazon SageMaker: Common Parameters. SageMaker supports both\n registry/repository[:tag]
and registry/repository[@digest]
\n image path formats. For more information, see Using Your Own Algorithms with Amazon\n SageMaker.
A list of metric definition objects. Each object specifies the metric name and regular\n expressions used to parse algorithm logs. Amazon SageMaker publishes each metric to Amazon CloudWatch.
" + "smithy.api#documentation": "A list of metric definition objects. Each object specifies the metric name and regular\n expressions used to parse algorithm logs. SageMaker publishes each metric to Amazon CloudWatch.
" } }, "EnableSageMakerMetricsTimeSeries": { "target": "com.amazonaws.sagemaker#Boolean", "traits": { - "smithy.api#documentation": "To generate and save time-series metrics during training, set to true
.\n The default is false
and time-series metrics aren't generated except in the\n following cases:
You use one of the Amazon SageMaker built-in algorithms
\nYou use one of the following Prebuilt Amazon SageMaker Docker Images:
\nTensorflow (version >= 1.15)
\nMXNet (version >= 1.6)
\nPyTorch (version >= 1.3)
\nYou specify at least one MetricDefinition\n
\nTo generate and save time-series metrics during training, set to true
.\n The default is false
and time-series metrics aren't generated except in the\n following cases:
You use one of the SageMaker built-in algorithms
\nYou use one of the following Prebuilt SageMaker Docker Images:
\nTensorflow (version >= 1.15)
\nMXNet (version >= 1.6)
\nPyTorch (version >= 1.3)
\nYou specify at least one MetricDefinition\n
\nSpecifies the training algorithm to use in a CreateTrainingJob\n request.
\nFor more information about algorithms provided by Amazon SageMaker, see Algorithms. For\n information about using your own algorithms, see Using Your Own Algorithms with Amazon\n SageMaker.
" + "smithy.api#documentation": "Specifies the training algorithm to use in a CreateTrainingJob\n request.
\nFor more information about algorithms provided by SageMaker, see Algorithms. For\n information about using your own algorithms, see Using Your Own Algorithms with Amazon\n SageMaker.
" } }, "com.amazonaws.sagemaker#AlgorithmStatus": { @@ -628,19 +628,19 @@ "TrainingJobDefinition": { "target": "com.amazonaws.sagemaker#TrainingJobDefinition", "traits": { - "smithy.api#documentation": "The TrainingJobDefinition
object that describes the training job that\n Amazon SageMaker runs to validate your algorithm.
The TrainingJobDefinition
object that describes the training job that\n SageMaker runs to validate your algorithm.
The TransformJobDefinition
object that describes the transform job that\n Amazon SageMaker runs to validate your algorithm.
The TransformJobDefinition
object that describes the transform job that\n SageMaker runs to validate your algorithm.
Defines a training job and a batch transform job that Amazon SageMaker runs to validate your\n algorithm.
\nThe data provided in the validation profile is made available to your buyers on Amazon Web Services\n Marketplace.
" + "smithy.api#documentation": "Defines a training job and a batch transform job that SageMaker runs to validate your\n algorithm.
\nThe data provided in the validation profile is made available to your buyers on Amazon Web Services\n Marketplace.
" } }, "com.amazonaws.sagemaker#AlgorithmValidationProfiles": { @@ -661,20 +661,20 @@ "ValidationRole": { "target": "com.amazonaws.sagemaker#RoleArn", "traits": { - "smithy.api#documentation": "The IAM roles that Amazon SageMaker uses to run the training jobs.
", + "smithy.api#documentation": "The IAM roles that SageMaker uses to run the training jobs.
", "smithy.api#required": {} } }, "ValidationProfiles": { "target": "com.amazonaws.sagemaker#AlgorithmValidationProfiles", "traits": { - "smithy.api#documentation": "An array of AlgorithmValidationProfile
objects, each of which specifies a\n training job and batch transform job that Amazon SageMaker runs to validate your algorithm.
An array of AlgorithmValidationProfile
objects, each of which specifies a\n training job and batch transform job that SageMaker runs to validate your algorithm.
Specifies configurations for one or more training jobs that Amazon SageMaker runs to test the\n algorithm.
" + "smithy.api#documentation": "Specifies configurations for one or more training jobs that SageMaker runs to test the\n algorithm.
" } }, "com.amazonaws.sagemaker#AnnotationConsolidationConfig": { @@ -1506,12 +1506,12 @@ "MaxConcurrentInvocationsPerInstance": { "target": "com.amazonaws.sagemaker#MaxConcurrentInvocationsPerInstance", "traits": { - "smithy.api#documentation": "The maximum number of concurrent requests sent by the SageMaker client to the \n model container. If no value is provided, Amazon SageMaker will choose an optimal value for you.
" + "smithy.api#documentation": "The maximum number of concurrent requests sent by the SageMaker client to the \n model container. If no value is provided, SageMaker chooses an optimal value.
" } } }, "traits": { - "smithy.api#documentation": "Configures the behavior of the client used by Amazon SageMaker to interact with the \n model container during asynchronous inference.
" + "smithy.api#documentation": "Configures the behavior of the client used by SageMaker to interact with the \n model container during asynchronous inference.
" } }, "com.amazonaws.sagemaker#AsyncInferenceConfig": { @@ -1520,7 +1520,7 @@ "ClientConfig": { "target": "com.amazonaws.sagemaker#AsyncInferenceClientConfig", "traits": { - "smithy.api#documentation": "Configures the behavior of the client used by Amazon SageMaker to interact \n with the model container during asynchronous inference.
" + "smithy.api#documentation": "Configures the behavior of the client used by SageMaker to interact \n with the model container during asynchronous inference.
" } }, "OutputConfig": { @@ -1561,7 +1561,7 @@ "KmsKeyId": { "target": "com.amazonaws.sagemaker#KmsKeyId", "traits": { - "smithy.api#documentation": "The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that\n Amazon SageMaker uses to encrypt the asynchronous inference output in Amazon S3.
\n " + "smithy.api#documentation": "The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that\n SageMaker uses to encrypt the asynchronous inference output in Amazon S3.
\n " } }, "S3OutputPath": { @@ -1903,12 +1903,33 @@ "ContentType": { "target": "com.amazonaws.sagemaker#ContentType", "traits": { - "smithy.api#documentation": "The content type of the data from the input source. You can use\n text/csv;header=present
or x-application/vnd.amazon+parquet
.\n The default value is text/csv;header=present
.
The content type of the data from the input source. You can use\n text/csv;header=present
or x-application/vnd.amazon+parquet
.\n The default value is text/csv;header=present
.
The channel type (optional) is an enum string. The default value is\n training
. Channels for training and validation must share the same\n ContentType
and TargetAttributeName
.
A channel is a named input source that training algorithms can consume. For more\n information, see .
" + "smithy.api#documentation": "A channel is a named input source that training algorithms can consume. The\n validation dataset size is limited to less than 2 GB. The training dataset size must be\n less than 100 GB. For more information, see .
\nA validation dataset must contain the same headers as the training dataset.
\nThe data source for the Autopilot job.
" } }, + "com.amazonaws.sagemaker#AutoMLDataSplitConfig": { + "type": "structure", + "members": { + "ValidationFraction": { + "target": "com.amazonaws.sagemaker#ValidationFraction", + "traits": { + "smithy.api#documentation": "The validation fraction (optional) is a float that specifies the portion of the training\n dataset to be used for validation. The default value is 0.2, and values can range from 0 to\n 1. We recommend setting this value to be less than 0.5.
" + } + } + }, + "traits": { + "smithy.api#documentation": "This structure specifies how to split the data into train and test datasets. The\n validation and training datasets must contain the same headers. The validation dataset must\n be less than 2 GB in size.
" + } + }, "com.amazonaws.sagemaker#AutoMLFailureReason": { "type": "string", "traits": { @@ -2057,6 +2092,12 @@ "traits": { "smithy.api#documentation": "The security configuration for traffic encryption or Amazon VPC settings.
" } + }, + "DataSplitConfig": { + "target": "com.amazonaws.sagemaker#AutoMLDataSplitConfig", + "traits": { + "smithy.api#documentation": "The configuration for splitting the input training dataset.
\nType: AutoMLDataSplitConfig
" + } } }, "traits": { @@ -2702,7 +2743,7 @@ "MaximumExecutionTimeoutInSeconds": { "target": "com.amazonaws.sagemaker#MaximumExecutionTimeoutInSeconds", "traits": { - "smithy.api#documentation": "Maximum execution timeout for the deployment. Note that the timeout value should be larger\n than the total waiting time specified in TerminationWaitInSeconds
and WaitIntervalInSeconds
.
Maximum execution timeout for the deployment. Note that the timeout value should be larger\n than the total waiting time specified in TerminationWaitInSeconds
and WaitIntervalInSeconds
.
The Amazon S3 prefix to the model insight artifacts generated for the AutoML\n candidate.
" + "smithy.api#documentation": "The Amazon S3 prefix to the model insight artifacts generated for the AutoML candidate.
" } } }, @@ -2952,7 +2993,7 @@ "Type": { "target": "com.amazonaws.sagemaker#CapacitySizeType", "traits": { - "smithy.api#documentation": "Specifies the endpoint capacity type.
\n\n INSTANCE_COUNT
: The endpoint activates based on\n the number of instances.
\n CAPACITY_PERCENT
: The endpoint activates based on\n the specified percentage of capacity.
Specifies the endpoint capacity type.
\n\n INSTANCE_COUNT
: The endpoint activates based on\n the number of instances.
\n CAPACITY_PERCENT
: The endpoint activates based on\n the specified percentage of capacity.
Specify RecordIO as the value when input data is in raw format but the training\n algorithm requires the RecordIO format. In this case, Amazon SageMaker wraps each individual S3\n object in a RecordIO record. If the input data is already in RecordIO format, you don't\n need to set this attribute. For more information, see Create\n a Dataset Using RecordIO.
\nIn File mode, leave this field unset or set it to None.
" + "smithy.api#documentation": "\nSpecify RecordIO as the value when input data is in raw format but the training\n algorithm requires the RecordIO format. In this case, SageMaker wraps each individual S3\n object in a RecordIO record. If the input data is already in RecordIO format, you don't\n need to set this attribute. For more information, see Create\n a Dataset Using RecordIO.
\nIn File mode, leave this field unset or set it to None.
" } }, "InputMode": { "target": "com.amazonaws.sagemaker#TrainingInputMode", "traits": { - "smithy.api#documentation": "(Optional) The input mode to use for the data channel in a training job. If you don't\n set a value for InputMode
, Amazon SageMaker uses the value set for\n TrainingInputMode
. Use this parameter to override the\n TrainingInputMode
setting in a AlgorithmSpecification\n request when you have a channel that needs a different input mode from the training\n job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML\n storage volume, and mount the directory to a Docker volume, use File
input\n mode. To stream data directly from Amazon S3 to the container, choose Pipe
input\n mode.
To use a model for incremental training, choose File
input model.
(Optional) The input mode to use for the data channel in a training job. If you don't\n set a value for InputMode
, SageMaker uses the value set for\n TrainingInputMode
. Use this parameter to override the\n TrainingInputMode
setting in a AlgorithmSpecification\n request when you have a channel that needs a different input mode from the training\n job's general setting. To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML\n storage volume, and mount the directory to a Docker volume, use File
input\n mode. To stream data directly from Amazon S3 to the container, choose Pipe
input\n mode.
To use a model for incremental training, choose File
input model.
Identifies the S3 path where you want Amazon SageMaker to store checkpoints. For example,\n s3://bucket-name/key-name-prefix
.
Identifies the S3 path where you want SageMaker to store checkpoints. For example,\n s3://bucket-name/key-name-prefix
.
This flag indicates if the drift check against the previous baseline will be skipped or not. \n If it is set to False
, the previous baseline of the configured check type must be available.
This flag indicates if the drift check against the previous baseline will be skipped or not. \n If it is set to False
, the previous baseline of the configured check type must be available.
This flag indicates if a newly calculated baseline can be accessed through step properties \n BaselineUsedForDriftCheckConstraints
and BaselineUsedForDriftCheckStatistics
. \n If it is set to False
, the previous baseline of the configured check type must also be available. \n These can be accessed through the BaselineUsedForDriftCheckConstraints
property.
This flag indicates if a newly calculated baseline can be accessed through step properties \n BaselineUsedForDriftCheckConstraints
and BaselineUsedForDriftCheckStatistics
. \n If it is set to False
, the previous baseline of the configured check type must also be available. \n These can be accessed through the BaselineUsedForDriftCheckConstraints
property.
The container for the metadata for the ClarifyCheck step. For more information, \n see the topic on ClarifyCheck step in the Amazon SageMaker Developer Guide.\n
" + "smithy.api#documentation": "The container for the metadata for the ClarifyCheck step. For more information, \n see the topic on ClarifyCheck step in the Amazon SageMaker Developer Guide.\n
" } }, "com.amazonaws.sagemaker#ClientId": { @@ -3932,7 +3973,7 @@ "Image": { "target": "com.amazonaws.sagemaker#ContainerImage", "traits": { - "smithy.api#documentation": "The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a\n Docker registry that is accessible from the same VPC that you configure for your\n endpoint. If you are using your own custom algorithm instead of an algorithm provided by\n Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both\n registry/repository[:tag]
and registry/repository[@digest]
\n image path formats. For more information, see Using Your Own Algorithms with Amazon\n SageMaker\n
The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a\n Docker registry that is accessible from the same VPC that you configure for your\n endpoint. If you are using your own custom algorithm instead of an algorithm provided by\n SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both\n registry/repository[:tag]
and registry/repository[@digest]
\n image path formats. For more information, see Using Your Own Algorithms with Amazon\n SageMaker\n
The S3 path where the model artifacts, which result from model training, are stored.\n This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3\n path is required for Amazon SageMaker built-in algorithms, but not if you use your own algorithms.\n For more information on built-in algorithms, see Common\n Parameters.
\nThe model artifacts must be in an S3 bucket that is in the same region as the\n model or endpoint you are creating.
\nIf you provide a value for this parameter, Amazon SageMaker uses Amazon Web Services Security Token Service to\n download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your\n IAM user account by default. If you previously deactivated Amazon Web Services STS for a region, you\n need to reactivate Amazon Web Services STS for that region. For more information, see Activating and\n Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User\n Guide.
\nIf you use a built-in algorithm to create a model, Amazon SageMaker requires that you provide\n a S3 path to the model artifacts in ModelDataUrl
.
The S3 path where the model artifacts, which result from model training, are stored.\n This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3\n path is required for SageMaker built-in algorithms, but not if you use your own algorithms.\n For more information on built-in algorithms, see Common\n Parameters.
\nThe model artifacts must be in an S3 bucket that is in the same region as the\n model or endpoint you are creating.
\nIf you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to\n download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your\n IAM user account by default. If you previously deactivated Amazon Web Services STS for a region, you\n need to reactivate Amazon Web Services STS for that region. For more information, see Activating and\n Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User\n Guide.
\nIf you use a built-in algorithm to create a model, SageMaker requires that you provide\n a S3 path to the model artifacts in ModelDataUrl
.
The scale that hyperparameter tuning uses to search the hyperparameter range. For\n information about choosing a hyperparameter scale, see Hyperparameter Scaling. One of the following values:
\nAmazon SageMaker hyperparameter tuning chooses the best scale for the\n hyperparameter.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a linear scale.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a logarithmic scale.
\nLogarithmic scaling works only for ranges that have only values greater\n than 0.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a reverse logarithmic scale.
\nReverse logarithmic scaling works only for ranges that are entirely within\n the range 0<=x<1.0.
\nThe scale that hyperparameter tuning uses to search the hyperparameter range. For\n information about choosing a hyperparameter scale, see Hyperparameter Scaling. One of the following values:
\nSageMaker hyperparameter tuning chooses the best scale for the\n hyperparameter.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a linear scale.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a logarithmic scale.
\nLogarithmic scaling works only for ranges that have only values greater\n than 0.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a reverse logarithmic scale.
\nReverse logarithmic scaling works only for ranges that are entirely within\n the range 0<=x<1.0.
\nCreate a machine learning algorithm that you can use in Amazon SageMaker and list in the Amazon Web Services\n Marketplace.
" + "smithy.api#documentation": "Create a machine learning algorithm that you can use in SageMaker and list in the Amazon Web Services\n Marketplace.
" } }, "com.amazonaws.sagemaker#CreateAlgorithmInput": { @@ -4385,7 +4426,7 @@ "ValidationSpecification": { "target": "com.amazonaws.sagemaker#AlgorithmValidationSpecification", "traits": { - "smithy.api#documentation": "Specifies configurations for one or more training jobs and that Amazon SageMaker runs to test the\n algorithm's training code and, optionally, one or more batch transform jobs that Amazon SageMaker\n runs to test the algorithm's inference code.
" + "smithy.api#documentation": "Specifies configurations for one or more training jobs and that SageMaker runs to test the\n algorithm's training code and, optionally, one or more batch transform jobs that SageMaker\n runs to test the algorithm's inference code.
" } }, "CertifyForMarketplace": { @@ -4720,7 +4761,7 @@ "target": "com.amazonaws.sagemaker#CreateCodeRepositoryOutput" }, "traits": { - "smithy.api#documentation": "Creates a Git repository as a resource in your Amazon SageMaker account. You can associate the\n repository with notebook instances so that you can use Git source control for the\n notebooks you create. The Git repository is a resource in your Amazon SageMaker account, so it can\n be associated with more than one notebook instance, and it persists independently from\n the lifecycle of any notebook instances it is associated with.
\nThe repository can be hosted either in Amazon Web Services CodeCommit or in any\n other Git repository.
" + "smithy.api#documentation": "Creates a Git repository as a resource in your SageMaker account. You can associate the\n repository with notebook instances so that you can use Git source control for the\n notebooks you create. The Git repository is a resource in your SageMaker account, so it can\n be associated with more than one notebook instance, and it persists independently from\n the lifecycle of any notebook instances it is associated with.
\nThe repository can be hosted either in Amazon Web Services CodeCommit or in any\n other Git repository.
" } }, "com.amazonaws.sagemaker#CreateCodeRepositoryInput": { @@ -5024,6 +5065,9 @@ "input": { "target": "com.amazonaws.sagemaker#CreateDeviceFleetRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceInUse" @@ -5200,6 +5244,9 @@ "input": { "target": "com.amazonaws.sagemaker#CreateEdgePackagingJobRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceLimitExceeded" @@ -5282,7 +5329,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates an endpoint using the endpoint configuration specified in the request. Amazon SageMaker\n uses the endpoint to provision resources and deploy models. You create the endpoint\n configuration with the CreateEndpointConfig API.
\nUse this API to deploy models using Amazon SageMaker hosting services.
\nFor an example that calls this method when deploying a model to Amazon SageMaker hosting services,\n see the Create Endpoint example notebook.\n
\n You must not delete an EndpointConfig
that is in use by an endpoint\n that is live or while the UpdateEndpoint
or CreateEndpoint
\n operations are being performed on the endpoint. To update an endpoint, you must\n create a new EndpointConfig
.
The endpoint name must be unique within an Amazon Web Services Region in your Amazon Web Services account.
\nWhen it receives the request, Amazon SageMaker creates the endpoint, launches the resources (ML\n compute instances), and deploys the model(s) on them.
\n \nWhen you call CreateEndpoint, a load call is made to DynamoDB to\n verify that your endpoint configuration exists. When you read data from a DynamoDB\n table supporting \n Eventually Consistent Reads
\n , the response might not\n reflect the results of a recently completed write operation. The response might\n include some stale data. If the dependent entities are not yet in DynamoDB, this\n causes a validation error. If you repeat your read request after a short time, the\n response should return the latest data. So retry logic is recommended to handle\n these possible issues. We also recommend that customers call DescribeEndpointConfig before calling CreateEndpoint to minimize the potential impact of a DynamoDB eventually consistent read.
When Amazon SageMaker receives the request, it sets the endpoint status to\n Creating
. After it creates the endpoint, it sets the status to\n InService
. Amazon SageMaker can then process incoming requests for inferences. To\n check the status of an endpoint, use the DescribeEndpoint\n API.
If any of the models hosted at this endpoint get model data from an Amazon S3 location,\n Amazon SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you\n provided. Amazon Web Services STS is activated in your IAM user account by default. If you previously\n deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For\n more information, see Activating and\n Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User\n Guide.
\nTo add the IAM role policies for using this API operation, go to the IAM console, and choose\n Roles in the left navigation pane. Search the IAM role that you want to grant\n access to use the CreateEndpoint and CreateEndpointConfig API operations, add the following policies to\n the role.
\nOption 1: For a full SageMaker access, search and attach the\n AmazonSageMakerFullAccess
policy.
Option 2: For granting a limited access to an IAM role, paste the\n following Action elements manually into the JSON file of the IAM role:
\n\n \"Action\": [\"sagemaker:CreateEndpoint\",\n \"sagemaker:CreateEndpointConfig\"]
\n
\n \"Resource\": [
\n
\n \"arn:aws:sagemaker:region:account-id:endpoint/endpointName\"
\n
\n \"arn:aws:sagemaker:region:account-id:endpoint-config/endpointConfigName\"
\n
\n ]
\n
For more information, see SageMaker API\n Permissions: Actions, Permissions, and Resources\n Reference.
\nCreates an endpoint using the endpoint configuration specified in the request. SageMaker\n uses the endpoint to provision resources and deploy models. You create the endpoint\n configuration with the CreateEndpointConfig API.
\nUse this API to deploy models using SageMaker hosting services.
\nFor an example that calls this method when deploying a model to SageMaker hosting services,\n see the Create Endpoint example notebook.\n
\n You must not delete an EndpointConfig
that is in use by an endpoint\n that is live or while the UpdateEndpoint
or CreateEndpoint
\n operations are being performed on the endpoint. To update an endpoint, you must\n create a new EndpointConfig
.
The endpoint name must be unique within an Amazon Web Services Region in your Amazon Web Services account.
\nWhen it receives the request, SageMaker creates the endpoint, launches the resources (ML\n compute instances), and deploys the model(s) on them.
\n \nWhen you call CreateEndpoint, a load call is made to DynamoDB to\n verify that your endpoint configuration exists. When you read data from a DynamoDB\n table supporting \n Eventually Consistent Reads
\n , the response might not\n reflect the results of a recently completed write operation. The response might\n include some stale data. If the dependent entities are not yet in DynamoDB, this\n causes a validation error. If you repeat your read request after a short time, the\n response should return the latest data. So retry logic is recommended to handle\n these possible issues. We also recommend that customers call DescribeEndpointConfig before calling CreateEndpoint to minimize the potential impact of a DynamoDB eventually consistent read.
When SageMaker receives the request, it sets the endpoint status to\n Creating
. After it creates the endpoint, it sets the status to\n InService
. SageMaker can then process incoming requests for inferences. To\n check the status of an endpoint, use the DescribeEndpoint\n API.
If any of the models hosted at this endpoint get model data from an Amazon S3 location,\n SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you\n provided. Amazon Web Services STS is activated in your IAM user account by default. If you previously\n deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For\n more information, see Activating and\n Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User\n Guide.
\nTo add the IAM role policies for using this API operation, go to the IAM console and choose\n Roles in the left navigation pane. Search for the IAM role that you want to grant\n access to use the CreateEndpoint and CreateEndpointConfig API operations, and add the following policies to\n the role.
\nOption 1: For full SageMaker access, search for and attach the\n AmazonSageMakerFullAccess
policy.
Option 2: To grant limited access to an IAM role, paste the\n following Action elements manually into the JSON file of the IAM role:
\n\n \"Action\": [\"sagemaker:CreateEndpoint\",\n \"sagemaker:CreateEndpointConfig\"]
\n
\n \"Resource\": [
\n
\n \"arn:aws:sagemaker:region:account-id:endpoint/endpointName\",
\n
\n \"arn:aws:sagemaker:region:account-id:endpoint-config/endpointConfigName\"
\n
\n ]
\n
For more information, see SageMaker API\n Permissions: Actions, Permissions, and Resources\n Reference.
\nCreates an endpoint configuration that Amazon SageMaker hosting services uses to deploy models. In\n the configuration, you identify one or more models, created using the\n CreateModel
API, to deploy and the resources that you want Amazon SageMaker to\n provision. Then you call the CreateEndpoint API.
Use this API if you want to use Amazon SageMaker hosting services to deploy models into\n production.
\nIn the request, you define a ProductionVariant
, for each model that you\n want to deploy. Each ProductionVariant
parameter also describes the\n resources that you want Amazon SageMaker to provision. This includes the number and type of ML\n compute instances to deploy.
If you are hosting multiple models, you also assign a VariantWeight
to\n specify how much traffic you want to allocate to each model. For example, suppose that\n you want to host two models, A and B, and you assign traffic weight 2 for model A and 1\n for model B. Amazon SageMaker distributes two-thirds of the traffic to Model A, and one-third to\n model B.
When you call CreateEndpoint, a load call is made to DynamoDB to\n verify that your endpoint configuration exists. When you read data from a DynamoDB\n table supporting \n Eventually Consistent Reads
\n , the response might not\n reflect the results of a recently completed write operation. The response might\n include some stale data. If the dependent entities are not yet in DynamoDB, this\n causes a validation error. If you repeat your read request after a short time, the\n response should return the latest data. So retry logic is recommended to handle\n these possible issues. We also recommend that customers call DescribeEndpointConfig before calling CreateEndpoint to minimize the potential impact of a DynamoDB eventually consistent read.
Creates an endpoint configuration that SageMaker hosting services uses to deploy models. In\n the configuration, you identify one or more models, created using the\n CreateModel
API, to deploy and the resources that you want SageMaker to\n provision. Then you call the CreateEndpoint API.
Use this API if you want to use SageMaker hosting services to deploy models into\n production.
\nIn the request, you define a ProductionVariant
for each model that you\n want to deploy. Each ProductionVariant
parameter also describes the\n resources that you want SageMaker to provision. This includes the number and type of ML\n compute instances to deploy.
If you are hosting multiple models, you also assign a VariantWeight
to\n specify how much traffic you want to allocate to each model. For example, suppose that\n you want to host two models, A and B, and you assign traffic weight 2 for model A and 1\n for model B. SageMaker distributes two-thirds of the traffic to Model A, and one-third to\n model B.
When you call CreateEndpoint, a load call is made to DynamoDB to\n verify that your endpoint configuration exists. When you read data from a DynamoDB\n table supporting \n Eventually Consistent Reads
\n , the response might not\n reflect the results of a recently completed write operation. The response might\n include some stale data. If the dependent entities are not yet in DynamoDB, this\n causes a validation error. If you repeat your read request after a short time, the\n response should return the latest data. So retry logic is recommended to handle\n these possible issues. We also recommend that customers call DescribeEndpointConfig before calling CreateEndpoint to minimize the potential impact of a DynamoDB eventually consistent read.
The Amazon Resource Name (ARN) of a Amazon Web Services Key Management Service key that Amazon SageMaker uses to encrypt data on\n the storage volume attached to the ML compute instance that hosts the endpoint.
\nThe KmsKeyId can be any of the following formats:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN:\n arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias name ARN:\n arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
\n
The KMS key policy must grant permission to the IAM role that you specify in your\n CreateEndpoint
, UpdateEndpoint
requests. For more\n information, refer to the Amazon Web Services Key Management Service section Using Key\n Policies in Amazon Web Services KMS \n
Certain Nitro-based instances include local storage, dependent on the instance\n type. Local storage volumes are encrypted using a hardware module on the instance.\n You can't request a KmsKeyId
when using an instance type with local\n storage. If any of the models that you specify in the\n ProductionVariants
parameter use nitro-based instances with local\n storage, do not specify a value for the KmsKeyId
parameter. If you\n specify a value for KmsKeyId
when using any nitro-based instances with\n local storage, the call to CreateEndpointConfig
fails.
For a list of instance types that support local instance storage, see Instance Store Volumes.
\nFor more information about local instance storage encryption, see SSD\n Instance Store Volumes.
\nThe Amazon Resource Name (ARN) of an Amazon Web Services Key Management Service key that SageMaker uses to encrypt data on\n the storage volume attached to the ML compute instance that hosts the endpoint.
\nThe KmsKeyId can be any of the following formats:
\nKey ID: 1234abcd-12ab-34cd-56ef-1234567890ab
\n
Key ARN:\n arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
\n
Alias name: alias/ExampleAlias
\n
Alias name ARN:\n arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
\n
The KMS key policy must grant permission to the IAM role that you specify in your\n CreateEndpoint
and UpdateEndpoint
requests. For more\n information, refer to the Amazon Web Services Key Management Service section Using Key\n Policies in Amazon Web Services KMS \n
Certain Nitro-based instances include local storage, dependent on the instance\n type. Local storage volumes are encrypted using a hardware module on the instance.\n You can't request a KmsKeyId
when using an instance type with local\n storage. If any of the models that you specify in the\n ProductionVariants
parameter use Nitro-based instances with local\n storage, do not specify a value for the KmsKeyId
parameter. If you\n specify a value for KmsKeyId
when using any Nitro-based instances with\n local storage, the call to CreateEndpointConfig
fails.
For a list of instance types that support local instance storage, see Instance Store Volumes.
\nFor more information about local instance storage encryption, see SSD\n Instance Store Volumes.
\nThe HyperParameterTrainingJobDefinition object that describes the\n training jobs that this tuning job launches,\n including\n static hyperparameters, input data configuration, output data configuration, resource\n configuration, and stopping condition.
" + "smithy.api#documentation": "The HyperParameterTrainingJobDefinition object that describes the\n training jobs that this tuning job launches, including static hyperparameters, input\n data configuration, output data configuration, resource configuration, and stopping\n condition.
" } }, "TrainingJobDefinitions": { @@ -5755,7 +5802,7 @@ "HyperParameterTuningJobArn": { "target": "com.amazonaws.sagemaker#HyperParameterTuningJobArn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the tuning job. Amazon SageMaker assigns an ARN to a\n hyperparameter tuning job when you create it.
", + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the tuning job. SageMaker assigns an ARN to a\n hyperparameter tuning job when you create it.
", "smithy.api#required": {} } } @@ -5778,7 +5825,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a custom SageMaker image. A SageMaker image is a set of image versions. Each image\n version represents a container image stored in Amazon Container Registry (ECR). For more information, see\n Bring your own SageMaker image.
" + "smithy.api#documentation": "Creates a custom SageMaker image. A SageMaker image is a set of image versions. Each image\n version represents a container image stored in Amazon Elastic Container Registry (ECR). For more information, see\n Bring your own SageMaker image.
" } }, "com.amazonaws.sagemaker#CreateImageRequest": { @@ -5849,7 +5896,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a version of the SageMaker image specified by ImageName
. The version\n represents the Amazon Container Registry (ECR) container image specified by BaseImage
.
Creates a version of the SageMaker image specified by ImageName
. The version\n represents the Amazon Elastic Container Registry (ECR) container image specified by BaseImage
.
The registry path of the container image to use as the starting point for this\n version. The path is an Amazon Container Registry (ECR) URI in the following format:
\n\n
\n
The registry path of the container image to use as the starting point for this\n version. The path is an Amazon Elastic Container Registry (ECR) URI in the following format:
\n\n
\n
A set of conditions for stopping a recommendation job. If any of \n the conditions are met, the job is automatically stopped.
" } }, + "OutputConfig": { + "target": "com.amazonaws.sagemaker#RecommendationJobOutputConfig", + "traits": { + "smithy.api#documentation": "Provides information about the output artifacts and the KMS key \n to use for Amazon S3 server-side encryption.
" + } + }, "Tags": { "target": "com.amazonaws.sagemaker#TagList", "traits": { @@ -6090,7 +6143,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a model in Amazon SageMaker. In the request, you name the model and describe a primary\n container. For the primary container, you specify the Docker image that\n contains inference code, artifacts (from prior training), and a custom environment map\n that the inference code uses when you deploy the model for predictions.
\nUse this API to create a model if you want to use Amazon SageMaker hosting services or run a batch\n transform job.
\nTo host your model, you create an endpoint configuration with the\n CreateEndpointConfig
API, and then create an endpoint with the\n CreateEndpoint
API. Amazon SageMaker then deploys all of the containers that you\n defined for the model in the hosting environment.
For an example that calls this method when deploying a model to Amazon SageMaker hosting services,\n see Deploy the\n Model to Amazon SageMaker Hosting Services (Amazon Web Services SDK for Python (Boto\n 3)).\n
\nTo run a batch transform using your model, you start a job with the\n CreateTransformJob
API. Amazon SageMaker uses your model and your dataset to get\n inferences which are then saved to a specified S3 location.
In the CreateModel
request, you must define a container with the\n PrimaryContainer
parameter.
In the request, you also provide an IAM role that Amazon SageMaker can assume to access model\n artifacts and docker image for deployment on ML compute hosting instances or for batch\n transform jobs. In addition, you also use the IAM role to manage permissions the\n inference code needs. For example, if the inference code access any other Amazon Web Services resources,\n you grant necessary permissions via this role.
" + "smithy.api#documentation": "Creates a model in SageMaker. In the request, you name the model and describe a primary\n container. For the primary container, you specify the Docker image that\n contains inference code, artifacts (from prior training), and a custom environment map\n that the inference code uses when you deploy the model for predictions.
\nUse this API to create a model if you want to use SageMaker hosting services or run a batch\n transform job.
\nTo host your model, you create an endpoint configuration with the\n CreateEndpointConfig
API, and then create an endpoint with the\n CreateEndpoint
API. SageMaker then deploys all of the containers that you\n defined for the model in the hosting environment.
For an example that calls this method when deploying a model to SageMaker hosting services,\n see Deploy the\n Model to Amazon SageMaker Hosting Services (Amazon Web Services SDK for Python (Boto\n 3)).\n
\nTo run a batch transform using your model, you start a job with the\n CreateTransformJob
API. SageMaker uses your model and your dataset to get\n inferences which are then saved to a specified S3 location.
In the request, you also provide an IAM role that SageMaker can assume to access model\n artifacts and docker image for deployment on ML compute hosting instances or for batch\n transform jobs. In addition, you also use the IAM role to manage permissions the\n inference code needs. For example, if the inference code access any other Amazon Web Services resources,\n you grant necessary permissions via this role.
" } }, "com.amazonaws.sagemaker#CreateModelBiasJobDefinition": { @@ -6320,7 +6373,7 @@ "ExecutionRoleArn": { "target": "com.amazonaws.sagemaker#RoleArn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model\n artifacts and docker image for deployment on ML compute instances or for batch transform\n jobs. Deploying on ML compute instances is part of model hosting. For more information,\n see Amazon SageMaker\n Roles.
\nTo be able to pass this role to Amazon SageMaker, the caller of this API must have the\n iam:PassRole
permission.
The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access model\n artifacts and docker image for deployment on ML compute instances or for batch transform\n jobs. Deploying on ML compute instances is part of model hosting. For more information,\n see SageMaker\n Roles.
\nTo be able to pass this role to SageMaker, the caller of this API must have the\n iam:PassRole
permission.
The ARN of the model created in Amazon SageMaker.
", + "smithy.api#documentation": "The ARN of the model created in SageMaker.
", "smithy.api#required": {} } } @@ -6373,7 +6426,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a model package that you can use to create Amazon SageMaker models or list on Amazon Web Services\n Marketplace, or a versioned model that is part of a model group. Buyers can subscribe to\n model packages listed on Amazon Web Services Marketplace to create models in Amazon SageMaker.
\nTo create a model package by specifying a Docker container that contains your\n inference code and the Amazon S3 location of your model artifacts, provide values for\n InferenceSpecification
. To create a model from an algorithm resource\n that you created or subscribed to in Amazon Web Services Marketplace, provide a value for\n SourceAlgorithmSpecification
.
There are two types of model packages:
\nVersioned - a model that is part of a model group in the model\n registry.
\nUnversioned - a model package that is not part of a model group.
\nCreates a model package that you can use to create SageMaker models or list on Amazon Web Services\n Marketplace, or a versioned model that is part of a model group. Buyers can subscribe to\n model packages listed on Amazon Web Services Marketplace to create models in SageMaker.
\nTo create a model package by specifying a Docker container that contains your\n inference code and the Amazon S3 location of your model artifacts, provide values for\n InferenceSpecification
. To create a model from an algorithm resource\n that you created or subscribed to in Amazon Web Services Marketplace, provide a value for\n SourceAlgorithmSpecification
.
There are two types of model packages:
\nVersioned - a model that is part of a model group in the model\n registry.
\nUnversioned - a model package that is not part of a model group.
\nSpecifies configurations for one or more transform jobs that Amazon SageMaker runs to test the\n model package.
" + "smithy.api#documentation": "Specifies configurations for one or more transform jobs that SageMaker runs to test the\n model package.
" } }, "SourceAlgorithmSpecification": { @@ -6511,7 +6564,7 @@ "DriftCheckBaselines": { "target": "com.amazonaws.sagemaker#DriftCheckBaselines", "traits": { - "smithy.api#documentation": "Represents the drift check baselines that can be used when the model monitor is set using the model package.\n For more information, see the topic on Drift Detection against Previous Baselines in SageMaker Pipelines in the Amazon SageMaker Developer Guide.\n
" + "smithy.api#documentation": "Represents the drift check baselines that can be used when the model monitor is set using the model package.\n For more information, see the topic on Drift Detection against Previous Baselines in SageMaker Pipelines in the Amazon SageMaker Developer Guide.\n
" } }, "Domain": { @@ -6721,7 +6774,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates an Amazon SageMaker notebook instance. A notebook instance is a machine learning (ML)\n compute instance running on a Jupyter notebook.
\nIn a CreateNotebookInstance
request, specify the type of ML compute\n instance that you want to run. Amazon SageMaker launches the instance, installs common libraries\n that you can use to explore datasets for model training, and attaches an ML storage\n volume to the notebook instance.
Amazon SageMaker also provides a set of example notebooks. Each notebook demonstrates how to\n use Amazon SageMaker with a specific algorithm or with a machine learning framework.
\nAfter receiving the request, Amazon SageMaker does the following:
\nCreates a network interface in the Amazon SageMaker VPC.
\n(Option) If you specified SubnetId
, Amazon SageMaker creates a network\n interface in your own VPC, which is inferred from the subnet ID that you provide\n in the input. When creating this network interface, Amazon SageMaker attaches the security\n group that you specified in the request to the network interface that it creates\n in your VPC.
Launches an EC2 instance of the type specified in the request in the Amazon SageMaker\n VPC. If you specified SubnetId
of your VPC, Amazon SageMaker specifies both\n network interfaces when launching this instance. This enables inbound traffic\n from your own VPC to the notebook instance, assuming that the security groups\n allow it.
After creating the notebook instance, Amazon SageMaker returns its Amazon Resource Name (ARN).\n You can't change the name of a notebook instance after you create it.
\nAfter Amazon SageMaker creates the notebook instance, you can connect to the Jupyter server and\n work in Jupyter notebooks. For example, you can write code to explore a dataset that you\n can use for model training, train a model, host models by creating Amazon SageMaker endpoints, and\n validate hosted models.
\nFor more information, see How It Works.
" + "smithy.api#documentation": "Creates an SageMaker notebook instance. A notebook instance is a machine learning (ML)\n compute instance running on a Jupyter notebook.
\nIn a CreateNotebookInstance
request, specify the type of ML compute\n instance that you want to run. SageMaker launches the instance, installs common libraries\n that you can use to explore datasets for model training, and attaches an ML storage\n volume to the notebook instance.
SageMaker also provides a set of example notebooks. Each notebook demonstrates how to\n use SageMaker with a specific algorithm or with a machine learning framework.
\nAfter receiving the request, SageMaker does the following:
\nCreates a network interface in the SageMaker VPC.
\n(Option) If you specified SubnetId
, SageMaker creates a network\n interface in your own VPC, which is inferred from the subnet ID that you provide\n in the input. When creating this network interface, SageMaker attaches the security\n group that you specified in the request to the network interface that it creates\n in your VPC.
Launches an EC2 instance of the type specified in the request in the SageMaker\n VPC. If you specified SubnetId
of your VPC, SageMaker specifies both\n network interfaces when launching this instance. This enables inbound traffic\n from your own VPC to the notebook instance, assuming that the security groups\n allow it.
After creating the notebook instance, SageMaker returns its Amazon Resource Name (ARN).\n You can't change the name of a notebook instance after you create it.
\nAfter SageMaker creates the notebook instance, you can connect to the Jupyter server and\n work in Jupyter notebooks. For example, you can write code to explore a dataset that you\n can use for model training, train a model, host models by creating SageMaker endpoints, and\n validate hosted models.
\nFor more information, see How It Works.
" } }, "com.amazonaws.sagemaker#CreateNotebookInstanceInput": { @@ -6756,14 +6809,14 @@ "RoleArn": { "target": "com.amazonaws.sagemaker#RoleArn", "traits": { - "smithy.api#documentation": "When you send any requests to Amazon Web Services resources from the notebook instance, Amazon SageMaker\n assumes this role to perform tasks on your behalf. You must grant this role necessary\n permissions so Amazon SageMaker can perform these tasks. The policy must allow the Amazon SageMaker service\n principal (sagemaker.amazonaws.com) permissions to assume this role. For more\n information, see Amazon SageMaker Roles.
\nTo be able to pass this role to Amazon SageMaker, the caller of this API must have the\n iam:PassRole
permission.
When you send any requests to Amazon Web Services resources from the notebook instance, SageMaker\n assumes this role to perform tasks on your behalf. You must grant this role necessary\n permissions so SageMaker can perform these tasks. The policy must allow the SageMaker service\n principal (sagemaker.amazonaws.com) permissions to assume this role. For more\n information, see SageMaker Roles.
\nTo be able to pass this role to SageMaker, the caller of this API must have the\n iam:PassRole
permission.
The Amazon Resource Name (ARN) of a Amazon Web Services Key Management Service key that Amazon SageMaker uses to encrypt data on\n the storage volume attached to your notebook instance. The KMS key you provide must be\n enabled. For information, see Enabling and Disabling\n Keys in the Amazon Web Services Key Management Service Developer Guide.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of a Amazon Web Services Key Management Service key that SageMaker uses to encrypt data on\n the storage volume attached to your notebook instance. The KMS key you provide must be\n enabled. For information, see Enabling and Disabling\n Keys in the Amazon Web Services Key Management Service Developer Guide.
" } }, "Tags": { @@ -6781,7 +6834,7 @@ "DirectInternetAccess": { "target": "com.amazonaws.sagemaker#DirectInternetAccess", "traits": { - "smithy.api#documentation": "Sets whether Amazon SageMaker provides internet access to the notebook instance. If you set this\n to Disabled
this notebook instance is able to access resources only in your\n VPC, and is not be able to connect to Amazon SageMaker training and endpoint services unless you\n configure a NAT Gateway in your VPC.
For more information, see Notebook Instances Are Internet-Enabled by Default. You can set the value\n of this parameter to Disabled
only if you set a value for the\n SubnetId
parameter.
Sets whether SageMaker provides internet access to the notebook instance. If you set this\n to Disabled
this notebook instance is able to access resources only in your\n VPC, and is not be able to connect to SageMaker training and endpoint services unless you\n configure a NAT Gateway in your VPC.
For more information, see Notebook Instances Are Internet-Enabled by Default. You can set the value\n of this parameter to Disabled
only if you set a value for the\n SubnetId
parameter.
A Git repository to associate with the notebook instance as its default code\n repository. This can be either the name of a Git repository stored as a resource in your\n account, or the URL of a Git repository in Amazon Web Services CodeCommit or in any\n other Git repository. When you open a notebook instance, it opens in the directory that\n contains this repository. For more information, see Associating Git Repositories with Amazon SageMaker\n Notebook Instances.
" + "smithy.api#documentation": "A Git repository to associate with the notebook instance as its default code\n repository. This can be either the name of a Git repository stored as a resource in your\n account, or the URL of a Git repository in Amazon Web Services CodeCommit or in any\n other Git repository. When you open a notebook instance, it opens in the directory that\n contains this repository. For more information, see Associating Git Repositories with SageMaker\n Notebook Instances.
" } }, "AdditionalCodeRepositories": { "target": "com.amazonaws.sagemaker#AdditionalCodeRepositoryNamesOrUrls", "traits": { - "smithy.api#documentation": "An array of up to three Git repositories to associate with the notebook instance.\n These can be either the names of Git repositories stored as resources in your account,\n or the URL of Git repositories in Amazon Web Services CodeCommit or in any\n other Git repository. These repositories are cloned at the same level as the default\n repository of your notebook instance. For more information, see Associating Git\n Repositories with Amazon SageMaker Notebook Instances.
" + "smithy.api#documentation": "An array of up to three Git repositories to associate with the notebook instance.\n These can be either the names of Git repositories stored as resources in your account,\n or the URL of Git repositories in Amazon Web Services CodeCommit or in any\n other Git repository. These repositories are cloned at the same level as the default\n repository of your notebook instance. For more information, see Associating Git\n Repositories with SageMaker Notebook Instances.
" } }, "RootAccess": { @@ -7047,7 +7100,7 @@ "target": "com.amazonaws.sagemaker#CreatePresignedNotebookInstanceUrlOutput" }, "traits": { - "smithy.api#documentation": "Returns a URL that you can use to connect to the Jupyter server from a notebook\n instance. In the Amazon SageMaker console, when you choose Open
next to a notebook\n instance, Amazon SageMaker opens a new tab showing the Jupyter server home page from the notebook\n instance. The console uses this API to get the URL and show the page.
The IAM role or user used to call this API defines the permissions to access the\n notebook instance. Once the presigned URL is created, no additional permission is\n required to access this URL. IAM authorization policies for this API are also enforced\n for every HTTP request and WebSocket frame that attempts to connect to the notebook\n instance.
\nYou can restrict access to this API and to the URL that it returns to a list of IP\n addresses that you specify. Use the NotIpAddress
condition operator and the\n aws:SourceIP
condition context key to specify the list of IP addresses\n that you want to have access to the notebook instance. For more information, see Limit Access to a Notebook Instance by IP Address.
The URL that you get from a call to CreatePresignedNotebookInstanceUrl is valid only for 5 minutes. If\n you try to use the URL after the 5-minute limit expires, you are directed to the\n Amazon Web Services console sign-in page.
\nReturns a URL that you can use to connect to the Jupyter server from a notebook\n instance. In the SageMaker console, when you choose Open
next to a notebook\n instance, SageMaker opens a new tab showing the Jupyter server home page from the notebook\n instance. The console uses this API to get the URL and show the page.
The IAM role or user used to call this API defines the permissions to access the\n notebook instance. Once the presigned URL is created, no additional permission is\n required to access this URL. IAM authorization policies for this API are also enforced\n for every HTTP request and WebSocket frame that attempts to connect to the notebook\n instance.
\nYou can restrict access to this API and to the URL that it returns to a list of IP\n addresses that you specify. Use the NotIpAddress
condition operator and the\n aws:SourceIP
condition context key to specify the list of IP addresses\n that you want to have access to the notebook instance. For more information, see Limit Access to a Notebook Instance by IP Address.
The URL that you get from a call to CreatePresignedNotebookInstanceUrl is valid only for 5 minutes. If\n you try to use the URL after the 5-minute limit expires, you are directed to the\n Amazon Web Services console sign-in page.
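The IP-restriction note above (the NotIpAddress operator with the aws:SourceIp condition key) can be sketched as an IAM policy. Expressed here as a Python dict for checkability; the CIDR ranges and the Deny-outside-allow-list shape are illustrative assumptions, not the only way to write such a policy:

```python
# Hypothetical policy: deny CreatePresignedNotebookInstanceUrl from any source
# IP outside an allow-list. The CIDR ranges below are placeholders.
deny_outside_ips = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "sagemaker:CreatePresignedNotebookInstanceUrl",
            "Resource": "*",
            "Condition": {
                # NotIpAddress matches when the caller is NOT in these ranges
                "NotIpAddress": {"aws:SourceIp": ["192.0.2.0/24", "203.0.113.0/24"]}
            },
        }
    ],
}
```

Because the same policy governs both the API call and the returned URL, callers outside the listed ranges cannot reach the notebook even with a freshly minted link.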
\nStarts a model training job. After training completes, Amazon SageMaker saves the resulting\n model artifacts to an Amazon S3 location that you specify.
\nIf you choose to host your model using Amazon SageMaker hosting services, you can use the\n resulting model artifacts as part of the model. You can also use the artifacts in a\n machine learning service other than Amazon SageMaker, provided that you know how to use them for\n inference. \n
\nIn the request body, you provide the following:
\n\n AlgorithmSpecification
- Identifies the training algorithm to\n use.\n
\n HyperParameters
- Specify these algorithm-specific parameters to\n enable the estimation of model parameters during training. Hyperparameters can\n be tuned to optimize this learning process. For a list of hyperparameters for\n each training algorithm provided by Amazon SageMaker, see Algorithms.
\n InputDataConfig
- Describes the training dataset and the Amazon S3,\n EFS, or FSx location where it is stored.
\n OutputDataConfig
- Identifies the Amazon S3 bucket where you want\n Amazon SageMaker to save the results of model training.
\n ResourceConfig
- Identifies the resources, ML compute\n instances, and ML storage volumes to deploy for model training. In distributed\n training, you specify more than one instance.
\n EnableManagedSpotTraining
- Optimize the cost of training machine\n learning models by up to 80% by using Amazon EC2 Spot instances. For more\n information, see Managed Spot\n Training.
\n RoleArn
- The Amazon Resource Name (ARN) that Amazon SageMaker assumes to perform tasks on\n your behalf during model training.\n \n You must grant this role the necessary permissions so that Amazon SageMaker can successfully\n complete model training.
\n StoppingCondition
- To help cap training costs, use\n MaxRuntimeInSeconds
to set a time limit for training. Use\n MaxWaitTimeInSeconds
to specify how long a managed spot\n training job has to complete.
\n Environment
- The environment variables to set in the Docker\n container.
\n RetryStrategy
- The number of times to retry the job when the job\n fails due to an InternalServerError
.
For more information about Amazon SageMaker, see How It Works.
" + "smithy.api#documentation": "Starts a model training job. After training completes, SageMaker saves the resulting\n model artifacts to an Amazon S3 location that you specify.
\nIf you choose to host your model using SageMaker hosting services, you can use the\n resulting model artifacts as part of the model. You can also use the artifacts in a\n machine learning service other than SageMaker, provided that you know how to use them for\n inference. \n
\nIn the request body, you provide the following:
\n\n AlgorithmSpecification
- Identifies the training algorithm to\n use.\n
\n HyperParameters
- Specify these algorithm-specific parameters to\n enable the estimation of model parameters during training. Hyperparameters can\n be tuned to optimize this learning process. For a list of hyperparameters for\n each training algorithm provided by SageMaker, see Algorithms.
\n InputDataConfig
- Describes the training dataset and the Amazon S3,\n EFS, or FSx location where it is stored.
\n OutputDataConfig
- Identifies the Amazon S3 bucket where you want\n SageMaker to save the results of model training.
\n ResourceConfig
- Identifies the resources, ML compute\n instances, and ML storage volumes to deploy for model training. In distributed\n training, you specify more than one instance.
\n EnableManagedSpotTraining
- Optimize the cost of training machine\n learning models by up to 80% by using Amazon EC2 Spot instances. For more\n information, see Managed Spot\n Training.
\n RoleArn
- The Amazon Resource Name (ARN) that SageMaker assumes to perform tasks on\n your behalf during model training.\n \n You must grant this role the necessary permissions so that SageMaker can successfully\n complete model training.
\n StoppingCondition
- To help cap training costs, use\n MaxRuntimeInSeconds
to set a time limit for training. Use\n MaxWaitTimeInSeconds
to specify how long a managed spot\n training job has to complete.
\n Environment
- The environment variables to set in the Docker\n container.
\n RetryStrategy
- The number of times to retry the job when the job\n fails due to an InternalServerError
.
For more information about SageMaker, see How It Works.
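The request-body items listed above can be sketched as a plain dict; a client such as boto3 would accept these fields for `create_training_job`. All names, ARNs, bucket URIs, and the image URI below are placeholders, not real resources:

```python
# Hypothetical CreateTrainingJob request body; every identifier is a placeholder.
request = {
    "TrainingJobName": "example-training-job",
    "AlgorithmSpecification": {  # which algorithm to run, and how input is delivered
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/example:latest",
        "TrainingInputMode": "File",
    },
    "HyperParameters": {"epochs": "10", "learning_rate": "0.01"},
    "InputDataConfig": [  # named channels describing the training dataset
        {
            "ChannelName": "training_data",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://example-bucket/train/",
                }
            },
        }
    ],
    "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/output/"},
    "ResourceConfig": {  # ML compute instances and storage for the job
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    "RoleArn": "arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},  # cap training cost
}
```

For distributed training, `InstanceCount` would simply be raised above 1, as the `ResourceConfig` item above notes.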
" } }, "com.amazonaws.sagemaker#CreateTrainingJobRequest": { @@ -7349,40 +7402,40 @@ "HyperParameters": { "target": "com.amazonaws.sagemaker#HyperParameters", "traits": { - "smithy.api#documentation": "Algorithm-specific parameters that influence the quality of the model. You set\n hyperparameters before you start the learning process. For a list of hyperparameters for\n each training algorithm provided by Amazon SageMaker, see Algorithms.
\nYou can specify a maximum of 100 hyperparameters. Each hyperparameter is a\n key-value pair. Each key and value is limited to 256 characters, as specified by the\n Length Constraint
.
Algorithm-specific parameters that influence the quality of the model. You set\n hyperparameters before you start the learning process. For a list of hyperparameters for\n each training algorithm provided by SageMaker, see Algorithms.
\nYou can specify a maximum of 100 hyperparameters. Each hyperparameter is a\n key-value pair. Each key and value is limited to 256 characters, as specified by the\n Length Constraint
.
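The documented limits (at most 100 hyperparameters, each key and value at most 256 characters) can be checked client-side before submitting a job. A minimal validator sketch:

```python
def validate_hyperparameters(params: dict) -> None:
    """Check the documented limits: at most 100 entries, and each key and
    each value limited to 256 characters."""
    if len(params) > 100:
        raise ValueError("at most 100 hyperparameters are allowed")
    for key, value in params.items():
        if len(key) > 256 or len(value) > 256:
            raise ValueError(f"key or value for {key!r} exceeds 256 characters")
```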
The registry path of the Docker image that contains the training algorithm and\n algorithm-specific metadata, including the input mode. For more information about\n algorithms provided by Amazon SageMaker, see Algorithms. For information about\n providing your own algorithms, see Using Your Own Algorithms with Amazon\n SageMaker.
", + "smithy.api#documentation": "The registry path of the Docker image that contains the training algorithm and\n algorithm-specific metadata, including the input mode. For more information about\n algorithms provided by SageMaker, see Algorithms. For information about\n providing your own algorithms, see Using Your Own Algorithms with Amazon\n SageMaker.
", "smithy.api#required": {} } }, "RoleArn": { "target": "com.amazonaws.sagemaker#RoleArn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform\n tasks on your behalf.
\nDuring model training, Amazon SageMaker needs your permission to read input data from an S3\n bucket, download a Docker image that contains training code, write model artifacts to an\n S3 bucket, write logs to Amazon CloudWatch Logs, and publish metrics to Amazon CloudWatch. You grant\n permissions for all of these tasks to an IAM role. For more information, see Amazon SageMaker\n Roles.
\nTo be able to pass this role to Amazon SageMaker, the caller of this API must have the\n iam:PassRole
permission.
The Amazon Resource Name (ARN) of an IAM role that SageMaker can assume to perform\n tasks on your behalf.
\nDuring model training, SageMaker needs your permission to read input data from an S3\n bucket, download a Docker image that contains training code, write model artifacts to an\n S3 bucket, write logs to Amazon CloudWatch Logs, and publish metrics to Amazon CloudWatch. You grant\n permissions for all of these tasks to an IAM role. For more information, see SageMaker\n Roles.
\nTo be able to pass this role to SageMaker, the caller of this API must have the\n iam:PassRole
permission.
An array of Channel
objects. Each channel is a named input source.\n InputDataConfig
\n describes the input data and its location.
Algorithms can accept input data from one or more channels. For example, an\n algorithm might have two channels of input data, training_data
and\n validation_data
. The configuration for each channel provides the S3,\n EFS, or FSx location where the input data is stored. It also provides information about\n the stored data: the MIME type, compression method, and whether the data is wrapped in\n RecordIO format.
Depending on the input mode that the algorithm supports, Amazon SageMaker either copies input\n data files from an S3 bucket to a local directory in the Docker container, or makes it\n available as input streams. For example, if you specify an EFS location, input data\n files will be made available as input streams. They do not need to be\n downloaded.
" + "smithy.api#documentation": "An array of Channel
objects. Each channel is a named input source.\n InputDataConfig
describes the input data and its location.
Algorithms can accept input data from one or more channels. For example, an\n algorithm might have two channels of input data, training_data
and\n validation_data
. The configuration for each channel provides the S3,\n EFS, or FSx location where the input data is stored. It also provides information about\n the stored data: the MIME type, compression method, and whether the data is wrapped in\n RecordIO format.
Depending on the input mode that the algorithm supports, SageMaker either copies input\n data files from an S3 bucket to a local directory in the Docker container, or makes it\n available as input streams. For example, if you specify an EFS location, input data\n files are available as input streams. They do not need to be\n downloaded.
" } }, "OutputDataConfig": { "target": "com.amazonaws.sagemaker#OutputDataConfig", "traits": { - "smithy.api#documentation": "Specifies the path to the S3 location where you want to store model artifacts. Amazon SageMaker\n creates subfolders for the artifacts.
", + "smithy.api#documentation": "Specifies the path to the S3 location where you want to store model artifacts. SageMaker\n creates subfolders for the artifacts.
", "smithy.api#required": {} } }, "ResourceConfig": { "target": "com.amazonaws.sagemaker#ResourceConfig", "traits": { - "smithy.api#documentation": "The resources, including the ML compute instances and ML storage volumes, to use\n for model training.
\nML storage volumes store model artifacts and incremental states. Training\n algorithms might also use ML storage volumes for scratch space. If you want Amazon SageMaker to use\n the ML storage volume to store the training data, choose File
as the\n TrainingInputMode
in the algorithm specification. For distributed\n training algorithms, specify an instance count greater than 1.
The resources, including the ML compute instances and ML storage volumes, to use\n for model training.
\nML storage volumes store model artifacts and incremental states. Training\n algorithms might also use ML storage volumes for scratch space. If you want SageMaker to use\n the ML storage volume to store the training data, choose File
as the\n TrainingInputMode
in the algorithm specification. For distributed\n training algorithms, specify an instance count greater than 1.
Specifies a limit to how long a model training job can run. It also specifies how long\n a managed Spot training job has to complete. When the job reaches the time limit, Amazon SageMaker\n ends the training job. Use this API to cap model training costs.
\nTo stop a job, Amazon SageMaker sends the algorithm the SIGTERM
signal, which delays\n job termination for 120 seconds. Algorithms can use this 120-second window to save the\n model artifacts, so the results of training are not lost.
Specifies a limit to how long a model training job can run. It also specifies how long\n a managed Spot training job has to complete. When the job reaches the time limit, SageMaker\n ends the training job. Use this API to cap model training costs.
\nTo stop a job, SageMaker sends the algorithm the SIGTERM
signal, which delays\n job termination for 120 seconds. Algorithms can use this 120-second window to save the\n model artifacts, so the results of training are not lost.
Isolates the training container. No inbound or outbound network calls can be made,\n except for calls between peers within a training cluster for distributed training. If\n you enable network isolation for training jobs that are configured to use a VPC, Amazon SageMaker\n downloads and uploads customer data and model artifacts through the specified VPC, but\n the training container does not have network access.
" + "smithy.api#documentation": "Isolates the training container. No inbound or outbound network calls can be made,\n except for calls between peers within a training cluster for distributed training. If\n you enable network isolation for training jobs that are configured to use a VPC, SageMaker\n downloads and uploads customer data and model artifacts through the specified VPC, but\n the training container does not have network access.
" } }, "EnableInterContainerTrafficEncryption": { @@ -7534,7 +7587,7 @@ "MaxPayloadInMB": { "target": "com.amazonaws.sagemaker#MaxPayloadInMB", "traits": { - "smithy.api#documentation": "The maximum allowed size of the payload, in MB. A payload is the\n data portion of a record (without metadata). The value in MaxPayloadInMB
\n must be greater than, or equal to, the size of a single record. To estimate the size of\n a record in MB, divide the size of your dataset by the number of records. To ensure that\n the records fit within the maximum payload size, we recommend using a slightly larger\n value. The default value is 6
MB.\n
For cases where the payload might be arbitrarily large and is transmitted using HTTP\n chunked encoding, set the value to 0
.\n This\n feature works only in supported algorithms. Currently, Amazon SageMaker built-in\n algorithms do not support HTTP chunked encoding.
The maximum allowed size of the payload, in MB. A payload is the\n data portion of a record (without metadata). The value in MaxPayloadInMB
\n must be greater than, or equal to, the size of a single record. To estimate the size of\n a record in MB, divide the size of your dataset by the number of records. To ensure that\n the records fit within the maximum payload size, we recommend using a slightly larger\n value. The default value is 6
MB.\n
The value of MaxPayloadInMB
cannot be greater than 100 MB. If you specify\n the MaxConcurrentTransforms
parameter, the value of\n (MaxConcurrentTransforms * MaxPayloadInMB)
also cannot exceed 100\n MB.
For cases where the payload might be arbitrarily large and is transmitted using HTTP\n chunked encoding, set the value to 0
.\n This\n feature works only in supported algorithms. Currently, Amazon SageMaker built-in\n algorithms do not support HTTP chunked encoding.
A JSONPath expression used to select a portion of the input data to pass to\n the algorithm. Use the InputFilter
parameter to exclude fields, such as an\n ID column, from the input. If you want Amazon SageMaker to pass the entire input dataset to the\n algorithm, accept the default value $
.
Examples: \"$\"
, \"$[1:]\"
, \"$.features\"
\n
A JSONPath expression used to select a portion of the input data to pass to\n the algorithm. Use the InputFilter
parameter to exclude fields, such as an\n ID column, from the input. If you want SageMaker to pass the entire input dataset to the\n algorithm, accept the default value $
.
Examples: \"$\"
, \"$[1:]\"
, \"$.features\"
\n
A JSONPath expression used to select a portion of the joined dataset to save\n in the output file for a batch transform job. If you want Amazon SageMaker to store the entire input\n dataset in the output file, leave the default value, $
. If you specify\n indexes that aren't within the dimension size of the joined dataset, you get an\n error.
Examples: \"$\"
, \"$[0,5:]\"
,\n \"$['id','SageMakerOutput']\"
\n
A JSONPath expression used to select a portion of the joined dataset to save\n in the output file for a batch transform job. If you want SageMaker to store the entire input\n dataset in the output file, leave the default value, $
. If you specify\n indexes that aren't within the dimension size of the joined dataset, you get an\n error.
Examples: \"$\"
, \"$[0,5:]\"
,\n \"$['id','SageMakerOutput']\"
\n
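The filter examples above can be illustrated with a toy evaluator covering just the documented forms `"$"`, `"$[1:]"`, and `"$.features"`; real batch transform supports a much richer JSONPath subset than this sketch:

```python
def apply_filter(expr: str, record):
    """Toy stand-in for the documented filter examples. Supports only
    "$" (whole record), simple slices like "$[1:]", and "$.field" access."""
    if expr == "$":
        return record                                  # pass everything through
    if expr.startswith("$[") and expr.endswith("]") and ":" in expr:
        start, _, stop = expr[2:-1].partition(":")     # "$[1:]" -> slice(1, None)
        lo = int(start) if start else None
        hi = int(stop) if stop else None
        return record[lo:hi]
    if expr.startswith("$."):                          # "$.features"
        return record[expr[2:]]
    raise ValueError(f"unsupported expression: {expr!r}")
```

For example, `"$[1:]"` drops a leading ID column from a list-shaped record, which is the use case the `InputFilter` description calls out.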
Removes the specified algorithm from your account.
" } @@ -8618,6 +8674,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteAppRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceInUse" @@ -8635,6 +8694,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteAppImageConfigRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -8792,6 +8854,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteCodeRepositoryInput" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { "smithy.api#documentation": "Deletes the specified Git repository from your account.
" } @@ -8853,6 +8918,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteDataQualityJobDefinitionRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -8879,6 +8947,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteDeviceFleetRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceInUse" @@ -8905,6 +8976,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteDomainRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceInUse" @@ -8940,8 +9014,11 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteEndpointInput" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { - "smithy.api#documentation": "Deletes an endpoint. Amazon SageMaker frees up all of the resources that were deployed when the\n endpoint was created.
\nAmazon SageMaker retires any custom KMS key grants associated with the endpoint, meaning you don't\n need to use the RevokeGrant API call.
" + "smithy.api#documentation": "Deletes an endpoint. SageMaker frees up all of the resources that were deployed when the\n endpoint was created.
\nSageMaker retires any custom KMS key grants associated with the endpoint, meaning you don't\n need to use the RevokeGrant API call.
\nWhen you delete your endpoint, SageMaker asynchronously deletes associated endpoint resources such as KMS key grants.\n You might still see these resources in your account for a few minutes after deleting your endpoint.\n Do not delete or revoke the permissions for your\n \n ExecutionRoleArn\n
,\n otherwise SageMaker cannot delete these resources.
Deletes an endpoint configuration. The DeleteEndpointConfig
API\n deletes only the specified configuration. It does not delete endpoints created using the\n configuration.
You must not delete an EndpointConfig
in use by an endpoint that is\n live or while the UpdateEndpoint
or CreateEndpoint
operations\n are being performed on the endpoint. If you delete the EndpointConfig
of an\n endpoint that is active or being created or updated you may lose visibility into the\n instance type the endpoint is using. The endpoint must be deleted in order to stop\n incurring charges.
Deletes a model. The DeleteModel
API deletes only the model entry that\n was created in Amazon SageMaker when you called the CreateModel
API. It does not\n delete model artifacts, inference code, or the IAM role that you specified when\n creating the model.
Deletes a model. The DeleteModel
API deletes only the model entry that\n was created in SageMaker when you called the CreateModel
API. It does not\n delete model artifacts, inference code, or the IAM role that you specified when\n creating the model.
Deletes a model package.
\nA model package is used to create Amazon SageMaker models or list on Amazon Web Services Marketplace. Buyers can\n subscribe to model packages listed on Amazon Web Services Marketplace to create models in Amazon SageMaker.
" + "smithy.api#documentation": "Deletes a model package.
\nA model package is used to create SageMaker models or list on Amazon Web Services Marketplace. Buyers can\n subscribe to model packages listed on Amazon Web Services Marketplace to create models in SageMaker.
" } }, "com.amazonaws.sagemaker#DeleteModelPackageGroup": { @@ -9283,6 +9378,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteModelPackageGroupInput" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ConflictException" @@ -9309,6 +9407,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteModelPackageGroupPolicyInput" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { "smithy.api#documentation": "Deletes a model group resource policy.
" } @@ -9331,7 +9432,7 @@ "ModelPackageName": { "target": "com.amazonaws.sagemaker#VersionedArnOrName", "traits": { - "smithy.api#documentation": "The name or Amazon Resource Name (ARN) of the model package to delete.
\nWhen you specify a name, the name must have 1 to 63 characters. Valid\n characters are a-z, A-Z, 0-9, and - (hyphen).
", + "smithy.api#documentation": "The name or Amazon Resource Name (ARN) of the model package to delete.
\nWhen you specify a name, the name must have 1 to 63 characters. Valid\n characters are a-z, A-Z, 0-9, and - (hyphen).
", "smithy.api#required": {} } } @@ -9342,6 +9443,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteModelQualityJobDefinitionRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -9368,6 +9472,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteMonitoringScheduleRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -9394,8 +9501,11 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteNotebookInstanceInput" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { - "smithy.api#documentation": " Deletes an Amazon SageMaker notebook instance. Before you can delete a notebook instance, you\n must call the StopNotebookInstance
API.
When you delete a notebook instance, you lose all of your data. Amazon SageMaker removes\n the ML compute instance, and deletes the ML storage volume and the network interface\n associated with the notebook instance.
\n Deletes an SageMaker notebook instance. Before you can delete a notebook instance, you\n must call the StopNotebookInstance
API.
When you delete a notebook instance, you lose all of your data. SageMaker removes\n the ML compute instance, and deletes the ML storage volume and the network interface\n associated with the notebook instance.
\nThe name of the Amazon SageMaker notebook instance to delete.
", + "smithy.api#documentation": "The name of the SageMaker notebook instance to delete.
", "smithy.api#required": {} } } @@ -9415,6 +9525,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteNotebookInstanceLifecycleConfigInput" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { "smithy.api#documentation": "Deletes a notebook instance lifecycle configuration.
" } @@ -9484,6 +9597,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteProjectInput" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ConflictException" @@ -9510,6 +9626,9 @@ "input": { "target": "com.amazonaws.sagemaker#DeleteStudioLifecycleConfigRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceInUse" @@ -9543,7 +9662,7 @@ "target": "com.amazonaws.sagemaker#DeleteTagsOutput" }, "traits": { - "smithy.api#documentation": "Deletes the specified tags from an Amazon SageMaker resource.
\nTo list a resource's tags, use the ListTags
API.
When you call this API to delete tags from a hyperparameter tuning job, the\n deleted tags are not removed from training jobs that the hyperparameter tuning job\n launched before you called this API.
\nWhen you call this API to delete tags from a SageMaker Studio Domain or User\n Profile, the deleted tags are not removed from Apps that the SageMaker Studio Domain\n or User Profile launched before you called this API.
\nDeletes the specified tags from an SageMaker resource.
\nTo list a resource's tags, use the ListTags
API.
When you call this API to delete tags from a hyperparameter tuning job, the\n deleted tags are not removed from training jobs that the hyperparameter tuning job\n launched before you called this API.
\nWhen you call this API to delete tags from a SageMaker Studio Domain or User\n Profile, the deleted tags are not removed from Apps that the SageMaker Studio Domain\n or User Profile launched before you called this API.
\nDeregisters the specified devices. After you deregister a device, you will need to re-register the devices.
" } @@ -10008,7 +10133,7 @@ "ValidationSpecification": { "target": "com.amazonaws.sagemaker#AlgorithmValidationSpecification", "traits": { - "smithy.api#documentation": "Details about configurations for one or more training jobs that Amazon SageMaker runs to test the\n algorithm.
" + "smithy.api#documentation": "Details about configurations for one or more training jobs that SageMaker runs to test the\n algorithm.
" } }, "AlgorithmStatus": { @@ -11477,7 +11602,7 @@ "EndpointConfigName": { "target": "com.amazonaws.sagemaker#EndpointConfigName", "traits": { - "smithy.api#documentation": "Name of the Amazon SageMaker endpoint configuration.
", + "smithy.api#documentation": "Name of the SageMaker endpoint configuration.
", "smithy.api#required": {} } }, @@ -12698,7 +12823,7 @@ "RoleArn": { "target": "com.amazonaws.sagemaker#RoleArn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) that Amazon SageMaker assumes to perform tasks on your behalf\n during data labeling.
", + "smithy.api#documentation": "The Amazon Resource Name (ARN) that SageMaker assumes to perform tasks on your behalf\n during data labeling.
", "smithy.api#required": {} } }, @@ -13053,7 +13178,7 @@ "ModelName": { "target": "com.amazonaws.sagemaker#ModelName", "traits": { - "smithy.api#documentation": "Name of the Amazon SageMaker model.
", + "smithy.api#documentation": "Name of the SageMaker model.
", "smithy.api#required": {} } }, @@ -13140,7 +13265,7 @@ "ModelPackageGroupName": { "target": "com.amazonaws.sagemaker#ArnOrName", "traits": { - "smithy.api#documentation": "The name of the model group to describe.
", + "smithy.api#documentation": "The name of gthe model group to describe.
", "smithy.api#required": {} } } @@ -13304,7 +13429,7 @@ "LastModifiedTime": { "target": "com.amazonaws.sagemaker#Timestamp", "traits": { - "smithy.api#documentation": "The last time the model package was modified.
" + "smithy.api#documentation": "The last time that the model package was modified.
" } }, "LastModifiedBy": { @@ -13325,7 +13450,7 @@ "DriftCheckBaselines": { "target": "com.amazonaws.sagemaker#DriftCheckBaselines", "traits": { - "smithy.api#documentation": "Represents the drift check baselines that can be used when the model monitor is set using the model package. \n For more information, see the topic on Drift Detection against Previous Baselines in SageMaker Pipelines in the Amazon SageMaker Developer Guide.\n
" + "smithy.api#documentation": "Represents the drift check baselines that can be used when the model monitor is set using the model package. \n For more information, see the topic on Drift Detection against Previous Baselines in SageMaker Pipelines in the Amazon SageMaker Developer Guide.\n
" } }, "Domain": { @@ -13734,7 +13859,7 @@ "NotebookInstanceName": { "target": "com.amazonaws.sagemaker#NotebookInstanceName", "traits": { - "smithy.api#documentation": "The name of the Amazon SageMaker notebook instance.
" + "smithy.api#documentation": "The name of the SageMaker notebook instance.
" } }, "NotebookInstanceStatus": { @@ -13782,13 +13907,13 @@ "KmsKeyId": { "target": "com.amazonaws.sagemaker#KmsKeyId", "traits": { - "smithy.api#documentation": "The Amazon Web Services KMS key ID Amazon SageMaker uses to encrypt data when storing it on the ML storage\n volume attached to the instance.
" + "smithy.api#documentation": "The Amazon Web Services KMS key ID SageMaker uses to encrypt data when storing it on the ML storage\n volume attached to the instance.
" } }, "NetworkInterfaceId": { "target": "com.amazonaws.sagemaker#NetworkInterfaceId", "traits": { - "smithy.api#documentation": "The network interface IDs that Amazon SageMaker created at the time of creating the instance.\n
" + "smithy.api#documentation": "The network interface IDs that SageMaker created at the time of creating the instance.\n
" } }, "LastModifiedTime": { @@ -13812,7 +13937,7 @@ "DirectInternetAccess": { "target": "com.amazonaws.sagemaker#DirectInternetAccess", "traits": { - "smithy.api#documentation": "Describes whether Amazon SageMaker provides internet access to the notebook instance. If this\n value is set to Disabled, the notebook instance does not have\n internet access, and cannot connect to Amazon SageMaker training and endpoint services.
\nFor more information, see Notebook Instances Are Internet-Enabled by Default.
" + "smithy.api#documentation": "Describes whether SageMaker provides internet access to the notebook instance. If this\n value is set to Disabled, the notebook instance does not have\n internet access, and cannot connect to SageMaker training and endpoint services.
\nFor more information, see Notebook Instances Are Internet-Enabled by Default.
" } }, "VolumeSizeInGB": { @@ -13830,13 +13955,13 @@ "DefaultCodeRepository": { "target": "com.amazonaws.sagemaker#CodeRepositoryNameOrUrl", "traits": { - "smithy.api#documentation": "The Git repository associated with the notebook instance as its default code\n repository. This can be either the name of a Git repository stored as a resource in your\n account, or the URL of a Git repository in Amazon Web Services CodeCommit or in any\n other Git repository. When you open a notebook instance, it opens in the directory that\n contains this repository. For more information, see Associating Git Repositories with Amazon SageMaker\n Notebook Instances.
" + "smithy.api#documentation": "The Git repository associated with the notebook instance as its default code\n repository. This can be either the name of a Git repository stored as a resource in your\n account, or the URL of a Git repository in Amazon Web Services CodeCommit or in any\n other Git repository. When you open a notebook instance, it opens in the directory that\n contains this repository. For more information, see Associating Git Repositories with SageMaker\n Notebook Instances.
" } }, "AdditionalCodeRepositories": { "target": "com.amazonaws.sagemaker#AdditionalCodeRepositoryNamesOrUrls", "traits": { - "smithy.api#documentation": "An array of up to three Git repositories associated with the notebook instance. These\n can be either the names of Git repositories stored as resources in your account, or the\n URL of Git repositories in Amazon Web Services CodeCommit or in any\n other Git repository. These repositories are cloned at the same level as the default\n repository of your notebook instance. For more information, see Associating Git\n Repositories with Amazon SageMaker Notebook Instances.
" + "smithy.api#documentation": "An array of up to three Git repositories associated with the notebook instance. These\n can be either the names of Git repositories stored as resources in your account, or the\n URL of Git repositories in Amazon Web Services CodeCommit or in any\n other Git repository. These repositories are cloned at the same level as the default\n repository of your notebook instance. For more information, see Associating Git\n Repositories with SageMaker Notebook Instances.
" } }, "RootAccess": { @@ -14616,7 +14741,7 @@ "LabelingJobArn": { "target": "com.amazonaws.sagemaker#LabelingJobArn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Amazon SageMaker Ground Truth labeling job that created the\n transform or training job.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the SageMaker Ground Truth labeling job that created the\n transform or training job.
" } }, "AutoMLJobArn": { @@ -14635,14 +14760,14 @@ "TrainingJobStatus": { "target": "com.amazonaws.sagemaker#TrainingJobStatus", "traits": { - "smithy.api#documentation": "The status of the\n training\n job.
\nAmazon SageMaker provides the following training job statuses:
\n\n InProgress
- The training is in progress.
\n Completed
- The training job has completed.
\n Failed
- The training job has failed. To see the reason for the\n failure, see the FailureReason
field in the response to a\n DescribeTrainingJobResponse
call.
\n Stopping
- The training job is stopping.
\n Stopped
- The training job has stopped.
For\n more detailed information, see SecondaryStatus
.
The status of the training job.
\nSageMaker provides the following training job statuses:
\n\n InProgress
- The training is in progress.
\n Completed
- The training job has completed.
\n Failed
- The training job has failed. To see the reason for the\n failure, see the FailureReason
field in the response to a\n DescribeTrainingJobResponse
call.
\n Stopping
- The training job is stopping.
\n Stopped
- The training job has stopped.
For more detailed information, see SecondaryStatus
.
Provides detailed information about the state of the training job. For detailed\n information on the secondary status of the training job, see StatusMessage
\n under SecondaryStatusTransition.
Amazon SageMaker provides primary statuses and secondary statuses that apply to each of\n them:
\n\n Starting
\n - Starting the training job.
\n Downloading
- An optional stage for algorithms that\n support File
training input mode. It indicates that\n data is being downloaded to the ML storage volumes.
\n Training
- Training is in progress.
\n Interrupted
- The job stopped because the managed\n spot training instances were interrupted.
\n Uploading
- Training is complete and the model\n artifacts are being uploaded to the S3 location.
\n Completed
- The training job has completed.
\n Failed
- The training job has failed. The reason for\n the failure is returned in the FailureReason
field of\n DescribeTrainingJobResponse
.
\n MaxRuntimeExceeded
- The job stopped because it\n exceeded the maximum allowed runtime.
\n MaxWaitTimeExceeded
- The job stopped because it\n exceeded the maximum allowed wait time.
\n Stopped
- The training job has stopped.
\n Stopping
- Stopping the training job.
Valid values for SecondaryStatus
are subject to change.
We no longer support the following secondary statuses:
\n\n LaunchingMLInstances
\n
\n PreparingTraining
\n
\n DownloadingTrainingImage
\n
Provides detailed information about the state of the training job. For detailed\n information on the secondary status of the training job, see StatusMessage
\n under SecondaryStatusTransition.
SageMaker provides primary statuses and secondary statuses that apply to each of\n them:
\n\n Starting
\n - Starting the training job.
\n Downloading
- An optional stage for algorithms that\n support File
training input mode. It indicates that\n data is being downloaded to the ML storage volumes.
\n Training
- Training is in progress.
\n Interrupted
- The job stopped because the managed\n spot training instances were interrupted.
\n Uploading
- Training is complete and the model\n artifacts are being uploaded to the S3 location.
\n Completed
- The training job has completed.
\n Failed
- The training job has failed. The reason for\n the failure is returned in the FailureReason
field of\n DescribeTrainingJobResponse
.
\n MaxRuntimeExceeded
- The job stopped because it\n exceeded the maximum allowed runtime.
\n MaxWaitTimeExceeded
- The job stopped because it\n exceeded the maximum allowed wait time.
\n Stopped
- The training job has stopped.
\n Stopping
- Stopping the training job.
Valid values for SecondaryStatus
are subject to change.
We no longer support the following secondary statuses:
\n\n LaunchingMLInstances
\n
\n PreparingTraining
\n
\n DownloadingTrainingImage
\n
The S3 path where model artifacts that you configured when creating the job are\n stored. Amazon SageMaker creates subfolders for model artifacts.
" + "smithy.api#documentation": "The S3 path where model artifacts that you configured when creating the job are\n stored. SageMaker creates subfolders for model artifacts.
" } }, "ResourceConfig": { @@ -14699,7 +14824,7 @@ "StoppingCondition": { "target": "com.amazonaws.sagemaker#StoppingCondition", "traits": { - "smithy.api#documentation": "Specifies a limit to how long a model training job can run. It also specifies how long\n a managed Spot training job has to complete. When the job reaches the time limit, Amazon SageMaker\n ends the training job. Use this API to cap model training costs.
\nTo stop a job, Amazon SageMaker sends the algorithm the SIGTERM
signal, which delays\n job termination for 120 seconds. Algorithms can use this 120-second window to save the\n model artifacts, so the results of training are not lost.
Specifies a limit to how long a model training job can run. It also specifies how long\n a managed Spot training job has to complete. When the job reaches the time limit, SageMaker\n ends the training job. Use this API to cap model training costs.
\nTo stop a job, SageMaker sends the algorithm the SIGTERM
signal, which delays\n job termination for 120 seconds. Algorithms can use this 120-second window to save the\n model artifacts, so the results of training are not lost.
Indicates the time when the training job ends on training instances. You are billed\n for the time interval between the value of TrainingStartTime
and this time.\n For successful jobs and stopped jobs, this is the time after model artifacts are\n uploaded. For failed jobs, this is the time when Amazon SageMaker detects a job failure.
Indicates the time when the training job ends on training instances. You are billed\n for the time interval between the value of TrainingStartTime
and this time.\n For successful jobs and stopped jobs, this is the time after model artifacts are\n uploaded. For failed jobs, this is the time when SageMaker detects a job failure.
If you want to allow inbound or outbound network calls, except for calls between peers\n within a training cluster for distributed training, choose True
. If you\n enable network isolation for training jobs that are configured to use a VPC, Amazon SageMaker\n downloads and uploads customer data and model artifacts through the specified VPC, but\n the training container does not have network access.
If you want to allow inbound or outbound network calls, except for calls between peers\n within a training cluster for distributed training, choose True
. If you\n enable network isolation for training jobs that are configured to use a VPC, SageMaker\n downloads and uploads customer data and model artifacts through the specified VPC, but\n the training container does not have network access.
The billable time in seconds. Billable time refers to the absolute wall-clock\n time.
\nMultiply BillableTimeInSeconds
by the number of instances\n (InstanceCount
) in your training cluster to get the total compute time\n SageMaker will bill you if you run distributed training. The formula is as follows:\n BillableTimeInSeconds * InstanceCount
.
You can calculate the savings from using managed spot training using the formula\n (1 - BillableTimeInSeconds / TrainingTimeInSeconds) * 100
. For example,\n if BillableTimeInSeconds
is 100 and TrainingTimeInSeconds
is\n 500, the savings is 80%.
The billable time in seconds. Billable time refers to the absolute wall-clock\n time.
\nMultiply BillableTimeInSeconds
by the number of instances\n (InstanceCount
) in your training cluster to get the total compute time\n SageMaker bills you if you run distributed training. The formula is as follows:\n BillableTimeInSeconds * InstanceCount
.
You can calculate the savings from using managed spot training using the formula\n (1 - BillableTimeInSeconds / TrainingTimeInSeconds) * 100
. For example,\n if BillableTimeInSeconds
is 100 and TrainingTimeInSeconds
is\n 500, the savings is 80%.
The name of the\n variant\n to update.
", + "smithy.api#documentation": "The name of the variant to update.
", "smithy.api#required": {} } }, @@ -16072,30 +16197,30 @@ "Bias": { "target": "com.amazonaws.sagemaker#DriftCheckBias", "traits": { - "smithy.api#documentation": "Represents the drift check bias baselines that can be used when the model monitor is set using the model \n package.
" + "smithy.api#documentation": "Represents the drift check bias baselines that can be used when the model monitor is set using the model \n package.
" } }, "Explainability": { "target": "com.amazonaws.sagemaker#DriftCheckExplainability", "traits": { - "smithy.api#documentation": "Represents the drift check explainability baselines that can be used when the model monitor is set using \n the model package.
" + "smithy.api#documentation": "Represents the drift check explainability baselines that can be used when the model monitor is set using \n the model package.
" } }, "ModelQuality": { "target": "com.amazonaws.sagemaker#DriftCheckModelQuality", "traits": { - "smithy.api#documentation": "Represents the drift check model quality baselines that can be used when the model monitor is set using \n the model package.
" + "smithy.api#documentation": "Represents the drift check model quality baselines that can be used when the model monitor is set using \n the model package.
" } }, "ModelDataQuality": { "target": "com.amazonaws.sagemaker#DriftCheckModelDataQuality", "traits": { - "smithy.api#documentation": "Represents the drift check model data quality baselines that can be used when the model monitor is set \n using the model package.
" + "smithy.api#documentation": "Represents the drift check model data quality baselines that can be used when the model monitor is set \n using the model package.
" } } }, "traits": { - "smithy.api#documentation": "Represents the drift check baselines that can be used when the model monitor is set using the model \n package.
" + "smithy.api#documentation": "Represents the drift check baselines that can be used when the model monitor is set using the model \n package.
" } }, "com.amazonaws.sagemaker#DriftCheckBias": { @@ -16115,7 +16240,7 @@ } }, "traits": { - "smithy.api#documentation": "Represents the drift check bias baselines that can be used when the model monitor is set using the \n model package.
" + "smithy.api#documentation": "Represents the drift check bias baselines that can be used when the model monitor is set using the \n model package.
" } }, "com.amazonaws.sagemaker#DriftCheckExplainability": { @@ -16132,7 +16257,7 @@ } }, "traits": { - "smithy.api#documentation": "Represents the drift check explainability baselines that can be used when the model monitor is set \n using the model package.
" + "smithy.api#documentation": "Represents the drift check explainability baselines that can be used when the model monitor is set \n using the model package.
" } }, "com.amazonaws.sagemaker#DriftCheckModelDataQuality": { @@ -16146,7 +16271,7 @@ } }, "traits": { - "smithy.api#documentation": "Represents the drift check data quality baselines that can be used when the model monitor is set using \n the model package.
" + "smithy.api#documentation": "Represents the drift check data quality baselines that can be used when the model monitor is set using \n the model package.
" } }, "com.amazonaws.sagemaker#DriftCheckModelQuality": { @@ -16160,7 +16285,7 @@ } }, "traits": { - "smithy.api#documentation": "Represents the drift check model quality baselines that can be used when the model monitor is set using \n the model package.
" + "smithy.api#documentation": "Represents the drift check model quality baselines that can be used when the model monitor is set using \n the model package.
" } }, "com.amazonaws.sagemaker#EMRStepMetadata": { @@ -18833,7 +18958,7 @@ "TrainingImage": { "target": "com.amazonaws.sagemaker#AlgorithmImage", "traits": { - "smithy.api#documentation": " The registry path of the Docker image that contains the training algorithm. For\n information about Docker registry paths for built-in algorithms, see Algorithms\n Provided by Amazon SageMaker: Common Parameters. Amazon SageMaker supports both\n registry/repository[:tag]
and registry/repository[@digest]
\n image path formats. For more information, see Using Your Own Algorithms with Amazon\n SageMaker.
The registry path of the Docker image that contains the training algorithm. For\n information about Docker registry paths for built-in algorithms, see Algorithms\n Provided by Amazon SageMaker: Common Parameters. SageMaker supports both\n registry/repository[:tag]
and registry/repository[@digest]
\n image path formats. For more information, see Using Your Own Algorithms with Amazon\n SageMaker.
The resources,\n including\n the compute instances and storage volumes, to use for the training\n jobs that the tuning job launches.
\nStorage\n volumes store model artifacts and\n incremental\n states. Training algorithms might also use storage volumes for\n scratch\n space. If you want Amazon SageMaker to use the storage volume\n to store the training data, choose File
as the\n TrainingInputMode
in the algorithm specification. For distributed\n training algorithms, specify an instance count greater than 1.
The resources,\n including\n the compute instances and storage volumes, to use for the training\n jobs that the tuning job launches.
\nStorage volumes store model artifacts and\n incremental\n states. Training algorithms might also use storage volumes for\n scratch\n space. If you want SageMaker to use the storage volume to store the\n training data, choose File
as the TrainingInputMode
in the\n algorithm specification. For distributed training algorithms, specify an instance count\n greater than 1.
Specifies a limit to how long a model hyperparameter training job can run. It also\n specifies how long a managed spot training job has to complete. When the job reaches the\n time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs.
", + "smithy.api#documentation": "Specifies a limit to how long a model hyperparameter training job can run. It also\n specifies how long a managed spot training job has to complete. When the job reaches the\n time limit, SageMaker ends the training job. Use this API to cap model training costs.
", "smithy.api#required": {} } }, "EnableNetworkIsolation": { "target": "com.amazonaws.sagemaker#Boolean", "traits": { - "smithy.api#documentation": "Isolates the training container. No inbound or outbound network calls can be made,\n except for calls between peers within a training cluster for distributed training. If\n network isolation is used for training jobs that are configured to use a VPC, Amazon SageMaker\n downloads and uploads customer data and model artifacts through the specified VPC, but\n the training container does not have network access.
" + "smithy.api#documentation": "Isolates the training container. No inbound or outbound network calls can be made,\n except for calls between peers within a training cluster for distributed training. If\n network isolation is used for training jobs that are configured to use a VPC, SageMaker\n downloads and uploads customer data and model artifacts through the specified VPC, but\n the training container does not have network access.
" } }, "EnableInterContainerTrafficEncryption": { @@ -19103,7 +19228,7 @@ "TrainingJobArn": { "target": "com.amazonaws.sagemaker#TrainingJobArn", "traits": { - "smithy.api#documentation": "The\n Amazon\n Resource Name (ARN) of the training job.
", + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the training job.
", "smithy.api#required": {} } }, @@ -19129,7 +19254,7 @@ "TrainingEndTime": { "target": "com.amazonaws.sagemaker#Timestamp", "traits": { - "smithy.api#documentation": "Specifies the time when the training job ends on training instances. You are billed\n for the time interval between the value of TrainingStartTime
and this time.\n For successful jobs and stopped jobs, this is the time after model artifacts are\n uploaded. For failed jobs, this is the time when Amazon SageMaker detects a job failure.
Specifies the time when the training job ends on training instances. You are billed\n for the time interval between the value of TrainingStartTime
and this time.\n For successful jobs and stopped jobs, this is the time after model artifacts are\n uploaded. For failed jobs, this is the time when SageMaker detects a job failure.
Specifies\n summary information about a training job.
" + "smithy.api#documentation": "The container for the summary information about a training job.
" } }, "com.amazonaws.sagemaker#HyperParameterTuningJobArn": { @@ -19211,7 +19336,7 @@ "TrainingJobEarlyStoppingType": { "target": "com.amazonaws.sagemaker#TrainingJobEarlyStoppingType", "traits": { - "smithy.api#documentation": "Specifies whether to use early stopping for training jobs launched by the\n hyperparameter tuning job. This can be one of the following values (the default value is\n OFF
):
Training jobs launched by the hyperparameter tuning job do not use early\n stopping.
\nAmazon SageMaker stops training jobs launched by the hyperparameter tuning job when\n they are unlikely to perform better than previously completed training jobs.\n For more information, see Stop Training Jobs Early.
\nSpecifies whether to use early stopping for training jobs launched by the\n hyperparameter tuning job. This can be one of the following values (the default value is\n OFF
):
Training jobs launched by the hyperparameter tuning job do not use early\n stopping.
\nSageMaker stops training jobs launched by the hyperparameter tuning job when\n they are unlikely to perform better than previously completed training jobs.\n For more information, see Stop Training Jobs Early.
\nThe scale that hyperparameter tuning uses to search the hyperparameter range. For\n information about choosing a hyperparameter scale, see Hyperparameter Scaling. One of the following values:
\nAmazon SageMaker hyperparameter tuning chooses the best scale for the\n hyperparameter.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a linear scale.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a logarithmic scale.
\nLogarithmic scaling works only for ranges that have only values greater\n than 0.
\nThe scale that hyperparameter tuning uses to search the hyperparameter range. For\n information about choosing a hyperparameter scale, see Hyperparameter Scaling. One of the following values:
\nSageMaker hyperparameter tuning chooses the best scale for the\n hyperparameter.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a linear scale.
\nHyperparameter tuning searches the values in the hyperparameter range by\n using a logarithmic scale.
\nLogarithmic scaling works only for ranges that have only values greater\n than 0.
\nThe default instance type and the Amazon Resource Name (ARN) of the default SageMaker image used by the JupyterServer app.
" + "smithy.api#documentation": "The default instance type and the Amazon Resource Name (ARN) of the default SageMaker image used by the JupyterServer app. If you use the LifecycleConfigArns
parameter, then this parameter is also required.
The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the JupyterServerApp.
" + "smithy.api#documentation": " The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the JupyterServerApp. If you use this parameter, the DefaultResourceSpec
parameter is also required.
To remove a Lifecycle Config, you must set LifecycleConfigArns
to an empty list.
The default instance type and the Amazon Resource Name (ARN) of the default SageMaker image used by the KernelGateway app.
" + "smithy.api#documentation": "The default instance type and the Amazon Resource Name (ARN) of the default SageMaker image used by the KernelGateway app.
\nThe Amazon SageMaker Studio UI does not use the default instance type value set here. The default\n instance type set here is used when Apps are created using the Amazon Web Services Command Line Interface or Amazon Web Services CloudFormation\n and the instance type parameter value is not passed.
\nThe Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the the user profile or domain.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the the user profile or domain.
\nTo remove a Lifecycle Config, you must set LifecycleConfigArns
to an empty list.
Declares that your content is free of personally identifiable information or adult\n content. Amazon SageMaker may restrict the Amazon Mechanical Turk workers that can view your task\n based on this information.
" + "smithy.api#documentation": "Declares that your content is free of personally identifiable information or adult\n content. SageMaker may restrict the Amazon Mechanical Turk workers that can view your task\n based on this information.
" } } }, @@ -21062,7 +21187,7 @@ "FinalActiveLearningModelArn": { "target": "com.amazonaws.sagemaker#ModelArn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) for the most recent Amazon SageMaker model trained as part of\n automated data labeling.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) for the most recent SageMaker model trained as part of\n automated data labeling.
" } } }, @@ -21445,6 +21570,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ActionSummaries", "pageSize": "MaxResults" } } @@ -21532,6 +21658,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "AlgorithmSummaryList", "pageSize": "MaxResults" } } @@ -21596,7 +21723,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#NextToken", "traits": { - "smithy.api#documentation": "If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of\n algorithms, use it in the subsequent request.
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To retrieve the next set of\n algorithms, use it in the subsequent request.
" } } } @@ -21614,6 +21741,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "AppImageConfigs", "pageSize": "MaxResults" } } @@ -21707,6 +21835,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Apps", "pageSize": "MaxResults" } } @@ -21787,6 +21916,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ArtifactSummaries", "pageSize": "MaxResults" } } @@ -21879,6 +22009,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "AssociationSummaries", "pageSize": "MaxResults" } } @@ -21984,6 +22115,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "AutoMLJobSummaries", "pageSize": "MaxResults" } } @@ -22090,6 +22222,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Candidates", "pageSize": "MaxResults" } } @@ -22174,6 +22307,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "CodeRepositorySummaryList", "pageSize": "MaxResults" } } @@ -22268,6 +22402,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "CompilationJobSummaries", "pageSize": "MaxResults" } } @@ -22393,6 +22528,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ContextSummaries", "pageSize": "MaxResults" } } @@ -22480,6 +22616,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "JobDefinitionSummaries", "pageSize": "MaxResults" } } @@ -22568,6 +22705,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "DeviceFleetSummaries", "pageSize": "MaxResults" } } @@ -22682,6 +22820,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "DeviceSummaries", "pageSize": "MaxResults" } 
} @@ -22753,6 +22892,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Domains", "pageSize": "MaxResults" } } @@ -22804,6 +22944,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "EdgePackagingJobSummaries", "pageSize": "MaxResults" } } @@ -22938,6 +23079,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "EndpointConfigs", "pageSize": "MaxResults" } } @@ -23002,7 +23144,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#PaginationToken", "traits": { - "smithy.api#documentation": "If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of\n endpoint configurations, use it in the subsequent request
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To retrieve the next set of\n endpoint configurations, use it in the subsequent request
" } } } @@ -23020,6 +23162,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Endpoints", "pageSize": "MaxResults" } } @@ -23102,7 +23245,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#PaginationToken", "traits": { - "smithy.api#documentation": "If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of\n training jobs, use it in the subsequent request.
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To retrieve the next set of\n training jobs, use it in the subsequent request.
" } } } @@ -23120,6 +23263,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ExperimentSummaries", "pageSize": "MaxResults" } } @@ -23285,6 +23429,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "FlowDefinitionSummaries", "pageSize": "MaxResults" } } @@ -23356,6 +23501,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "HumanTaskUiSummaries", "pageSize": "MaxResults" } } @@ -23427,6 +23573,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "HyperParameterTuningJobSummaries", "pageSize": "MaxResults" } } @@ -23450,13 +23597,13 @@ "SortBy": { "target": "com.amazonaws.sagemaker#HyperParameterTuningJobSortByOptions", "traits": { - "smithy.api#documentation": "The\n field\n to sort results by. The default is Name
.
The field to sort results by. The default is Name
.
The sort\n order\n for results. The default is Ascending
.
The sort order for results. The default is Ascending
.
A filter that returns only tuning jobs that were created after the\n specified\n time.
" + "smithy.api#documentation": "A filter that returns only tuning jobs that were created after the specified\n time.
" } }, "CreationTimeBefore": { "target": "com.amazonaws.sagemaker#Timestamp", "traits": { - "smithy.api#documentation": "A filter that returns only tuning jobs that were created before the\n specified\n time.
" + "smithy.api#documentation": "A filter that returns only tuning jobs that were created before the specified\n time.
" } }, "LastModifiedTimeAfter": { @@ -23492,7 +23639,7 @@ "StatusEquals": { "target": "com.amazonaws.sagemaker#HyperParameterTuningJobStatus", "traits": { - "smithy.api#documentation": "A filter that returns only tuning jobs with the\n specified\n status.
" + "smithy.api#documentation": "A filter that returns only tuning jobs with the specified status.
" } } } @@ -23533,6 +23680,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ImageVersions", "pageSize": "MaxResults" } } @@ -23627,6 +23775,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Images", "pageSize": "MaxResults" } } @@ -23720,6 +23869,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "InferenceRecommendationsJobs", "pageSize": "MaxResults" } } @@ -23839,6 +23989,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "LabelingJobSummaryList", "pageSize": "MaxResults" } } @@ -23861,6 +24012,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "LabelingJobSummaryList", "pageSize": "MaxResults" } } @@ -23932,7 +24084,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#NextToken", "traits": { - "smithy.api#documentation": "If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of\n labeling jobs, use it in the subsequent request.
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To retrieve the next set of\n labeling jobs, use it in the subsequent request.
" } } } @@ -24025,7 +24177,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#NextToken", "traits": { - "smithy.api#documentation": "If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of\n labeling jobs, use it in the subsequent request.
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To retrieve the next set of\n labeling jobs, use it in the subsequent request.
" } } } @@ -24049,6 +24201,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "LineageGroupSummaries", "pageSize": "MaxResults" } } @@ -24132,6 +24285,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "JobDefinitionSummaries", "pageSize": "MaxResults" } } @@ -24220,6 +24374,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "JobDefinitionSummaries", "pageSize": "MaxResults" } } @@ -24308,6 +24463,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ModelMetadataSummaries", "pageSize": "MaxResults" } } @@ -24366,6 +24522,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ModelPackageGroupSummaryList", "pageSize": "MaxResults" } } @@ -24448,6 +24605,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ModelPackageSummaryList", "pageSize": "MaxResults" } } @@ -24530,7 +24688,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#NextToken", "traits": { - "smithy.api#documentation": "If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of\n model packages, use it in the subsequent request.
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To retrieve the next set of\n model packages, use it in the subsequent request.
" } } } @@ -24548,6 +24706,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "JobDefinitionSummaries", "pageSize": "MaxResults" } } @@ -24636,6 +24795,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Models", "pageSize": "MaxResults" } } @@ -24700,7 +24860,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#PaginationToken", "traits": { - "smithy.api#documentation": "If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of\n models, use it in the subsequent request.
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To retrieve the next set of\n models, use it in the subsequent request.
" } } } @@ -24718,6 +24878,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "MonitoringExecutionSummaries", "pageSize": "MaxResults" } } @@ -24848,6 +25009,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "MonitoringScheduleSummaries", "pageSize": "MaxResults" } } @@ -24966,6 +25128,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "NotebookInstanceLifecycleConfigs", "pageSize": "MaxResults" } } @@ -25035,7 +25198,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#NextToken", "traits": { - "smithy.api#documentation": "If the response is truncated, Amazon SageMaker returns this token. To get the next set of\n lifecycle configurations, use it in the next request.
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To get the next set of\n lifecycle configurations, use it in the next request.
" } }, "NotebookInstanceLifecycleConfigs": { @@ -25055,10 +25218,11 @@ "target": "com.amazonaws.sagemaker#ListNotebookInstancesOutput" }, "traits": { - "smithy.api#documentation": "Returns a list of the Amazon SageMaker notebook instances in the requester's account in an Amazon Web Services\n Region.
", + "smithy.api#documentation": "Returns a list of the SageMaker notebook instances in the requester's account in an Amazon Web Services\n Region.
", "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "NotebookInstances", "pageSize": "MaxResults" } } @@ -25152,7 +25316,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#NextToken", "traits": { - "smithy.api#documentation": "If the response to the previous ListNotebookInstances
request was\n truncated, Amazon SageMaker returns this token. To retrieve the next set of notebook instances, use\n the token in the next request.
If the response to the previous ListNotebookInstances
request was\n truncated, SageMaker returns this token. To retrieve the next set of notebook instances, use\n the token in the next request.
Returns the tags for the specified Amazon SageMaker resource.
", + "smithy.api#documentation": "Returns the tags for the specified SageMaker resource.
", "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Tags", "pageSize": "MaxResults" } } @@ -25838,7 +26010,7 @@ "NextToken": { "target": "com.amazonaws.sagemaker#NextToken", "traits": { - "smithy.api#documentation": " If the response to the previous ListTags
request is truncated, Amazon SageMaker\n returns this token. To retrieve the next set of tags, use it in the subsequent request.\n
If the response to the previous ListTags
request is truncated, SageMaker\n returns this token. To retrieve the next set of tags, use it in the subsequent request.\n
If response is truncated, Amazon SageMaker includes a token in the response. You can use this\n token in your subsequent request to fetch next set of tokens.
" + "smithy.api#documentation": "If response is truncated, SageMaker includes a token in the response. You can use this\n token in your subsequent request to fetch next set of tokens.
" } } } @@ -25888,6 +26060,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "TrainingJobSummaries", "pageSize": "MaxResults" } } @@ -25910,6 +26083,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "TrainingJobSummaries", "pageSize": "MaxResults" } } @@ -25939,19 +26113,19 @@ "StatusEquals": { "target": "com.amazonaws.sagemaker#TrainingJobStatus", "traits": { - "smithy.api#documentation": "A filter that returns only training jobs with the\n specified\n status.
" + "smithy.api#documentation": "A filter that returns only training jobs with the specified status.
" } }, "SortBy": { "target": "com.amazonaws.sagemaker#TrainingJobSortByOptions", "traits": { - "smithy.api#documentation": "The field to sort\n results\n by. The default is Name
.
If the value of this field is FinalObjectiveMetricValue
, any training\n jobs that did not return an objective metric are not listed.
The field to sort results by. The default is Name
.
If the value of this field is FinalObjectiveMetricValue
, any training\n jobs that did not return an objective metric are not listed.
The sort order\n for\n results. The default is Ascending
.
The sort order for results. The default is Ascending
.
If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of\n training jobs, use it in the subsequent request.
" + "smithy.api#documentation": "If the response is truncated, SageMaker returns this token. To retrieve the next set of\n training jobs, use it in the subsequent request.
" } } } @@ -26071,6 +26245,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "TransformJobSummaries", "pageSize": "MaxResults" } } @@ -26183,6 +26358,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "TrialComponentSummaries", "pageSize": "MaxResults" } } @@ -26281,6 +26457,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "TrialSummaries", "pageSize": "MaxResults" } } @@ -26368,6 +26545,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "UserProfiles", "pageSize": "MaxResults" } } @@ -26443,6 +26621,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Workforces", "pageSize": "MaxResults" } } @@ -26529,6 +26708,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "Workteams", "pageSize": "MaxResults" } } @@ -26944,7 +27124,7 @@ } }, "traits": { - "smithy.api#documentation": "Specifies a metric that the training algorithm\n writes\n to stderr
or stdout
. Amazon SageMakerhyperparameter\n tuning captures\n all\n defined metrics.\n You\n specify one metric that a hyperparameter tuning job uses as its\n objective metric to choose the best training job.
Specifies a metric that the training algorithm writes to stderr
or stdout
. SageMaker hyperparameter tuning captures all defined metrics. You specify one metric that a hyperparameter tuning job uses as its objective metric to choose the best training job.
The timeout value in seconds for an invocation request.
" + "smithy.api#documentation": "The timeout value in seconds for an invocation request. The default value is 600.
" } }, "InvocationsMaxRetries": { "target": "com.amazonaws.sagemaker#InvocationsMaxRetries", "traits": { - "smithy.api#documentation": "The maximum number of retries when invocation requests are failing.
" + "smithy.api#documentation": "The maximum number of retries when invocation requests are failing. The default value is 3.
" } } }, @@ -27725,7 +27905,7 @@ "Image": { "target": "com.amazonaws.sagemaker#ContainerImage", "traits": { - "smithy.api#documentation": "The Amazon EC2 Container Registry (Amazon ECR) path where inference code is stored.
\nIf you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker,\n the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both\n registry/repository[:tag]
and registry/repository[@digest]
\n image path formats. For more information, see Using Your Own Algorithms with Amazon\n SageMaker.
The Amazon EC2 Container Registry (Amazon ECR) path where inference code is stored.
\nIf you are using your own custom algorithm instead of an algorithm provided by SageMaker,\n the inference code must meet SageMaker requirements. SageMaker supports both\n registry/repository[:tag]
and registry/repository[@digest]
\n image path formats. For more information, see Using Your Own Algorithms with Amazon\n SageMaker.
The Amazon S3 path where the model artifacts, which result from model training, are stored.\n This path must point to a single gzip
compressed tar archive\n (.tar.gz
suffix).
The model artifacts must be in an S3 bucket that is in the same region as the\n model package.
\nThe Amazon S3 path where the model artifacts, which result from model training, are stored.\n This path must point to a single gzip
compressed tar archive\n (.tar.gz
suffix).
The model artifacts must be in an S3 bucket that is in the same region as the\n model package.
\nThe approval status of the model. This can be one of the following values.
\n\n APPROVED
- The model is approved
\n REJECTED
- The model is rejected.
\n PENDING_MANUAL_APPROVAL
- The model is waiting for manual\n approval.
The approval status of the model. This can be one of the following values.
\n\n APPROVED
- The model is approved.
\n REJECTED
- The model is rejected.
\n PENDING_MANUAL_APPROVAL
- The model is waiting for manual\n approval.
An array of ModelPackageValidationProfile
objects, each of which\n specifies a batch transform job that Amazon SageMaker runs to validate your model package.
An array of ModelPackageValidationProfile
objects, each of which\n specifies a batch transform job that SageMaker runs to validate your model package.
Specifies batch transform jobs that Amazon SageMaker runs to validate your model package.
" + "smithy.api#documentation": "Specifies batch transform jobs that SageMaker runs to validate your model package.
" } }, "com.amazonaws.sagemaker#ModelPackageVersion": { @@ -29694,7 +29874,7 @@ "Url": { "target": "com.amazonaws.sagemaker#NotebookInstanceUrl", "traits": { - "smithy.api#documentation": "The\n URL that you use to connect to the Jupyter instance running in your notebook instance.\n
" + "smithy.api#documentation": "The URL that you use to connect to the Jupyter notebook running in your notebook\n instance.
" } }, "InstanceType": { @@ -29724,18 +29904,18 @@ "DefaultCodeRepository": { "target": "com.amazonaws.sagemaker#CodeRepositoryNameOrUrl", "traits": { - "smithy.api#documentation": "The Git repository associated with the notebook instance as its default code\n repository. This can be either the name of a Git repository stored as a resource in your\n account, or the URL of a Git repository in Amazon Web Services CodeCommit or in any\n other Git repository. When you open a notebook instance, it opens in the directory that\n contains this repository. For more information, see Associating Git Repositories with Amazon SageMaker\n Notebook Instances.
" + "smithy.api#documentation": "The Git repository associated with the notebook instance as its default code\n repository. This can be either the name of a Git repository stored as a resource in your\n account, or the URL of a Git repository in Amazon Web Services CodeCommit or in any\n other Git repository. When you open a notebook instance, it opens in the directory that\n contains this repository. For more information, see Associating Git Repositories with SageMaker\n Notebook Instances.
" } }, "AdditionalCodeRepositories": { "target": "com.amazonaws.sagemaker#AdditionalCodeRepositoryNamesOrUrls", "traits": { - "smithy.api#documentation": "An array of up to three Git repositories associated with the notebook instance. These\n can be either the names of Git repositories stored as resources in your account, or the\n URL of Git repositories in Amazon Web Services CodeCommit or in any\n other Git repository. These repositories are cloned at the same level as the default\n repository of your notebook instance. For more information, see Associating Git\n Repositories with Amazon SageMaker Notebook Instances.
" + "smithy.api#documentation": "An array of up to three Git repositories associated with the notebook instance. These\n can be either the names of Git repositories stored as resources in your account, or the\n URL of Git repositories in Amazon Web Services CodeCommit or in any\n other Git repository. These repositories are cloned at the same level as the default\n repository of your notebook instance. For more information, see Associating Git\n Repositories with SageMaker Notebook Instances.
" } } }, "traits": { - "smithy.api#documentation": "Provides summary information for an Amazon SageMaker notebook instance.
" + "smithy.api#documentation": "Provides summary information for an SageMaker notebook instance.
" } }, "com.amazonaws.sagemaker#NotebookInstanceSummaryList": { @@ -30222,13 +30402,13 @@ "KmsKeyId": { "target": "com.amazonaws.sagemaker#KmsKeyId", "traits": { - "smithy.api#documentation": "The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using\n Amazon S3 server-side encryption. The KmsKeyId
can be any of the following\n formats:
// KMS Key ID
\n\n \"1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// Amazon Resource Name (ARN) of a KMS Key
\n\n \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// KMS Key Alias
\n\n \"alias/ExampleAlias\"
\n
// Amazon Resource Name (ARN) of a KMS Key Alias
\n\n \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"
\n
If you use a KMS key ID or an alias of your KMS key, the Amazon SageMaker execution role must\n include permissions to call kms:Encrypt
. If you don't provide a KMS key ID,\n Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. Amazon SageMaker uses server-side\n encryption with KMS-managed keys for OutputDataConfig
. If you use a bucket\n policy with an s3:PutObject
permission that only allows objects with\n server-side encryption, set the condition key of\n s3:x-amz-server-side-encryption
to \"aws:kms\"
. For more\n information, see KMS-Managed Encryption\n Keys in the Amazon Simple Storage Service Developer Guide.\n
The KMS key policy must grant permission to the IAM role that you specify in your\n CreateTrainingJob
, CreateTransformJob
, or\n CreateHyperParameterTuningJob
requests. For more information, see\n Using\n Key Policies in Amazon Web Services KMS in the Amazon Web Services Key Management Service Developer\n Guide.
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that SageMaker uses to encrypt the model artifacts at rest using\n Amazon S3 server-side encryption. The KmsKeyId
can be any of the following\n formats:
// KMS Key ID
\n\n \"1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// Amazon Resource Name (ARN) of a KMS Key
\n\n \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// KMS Key Alias
\n\n \"alias/ExampleAlias\"
\n
// Amazon Resource Name (ARN) of a KMS Key Alias
\n\n \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"
\n
If you use a KMS key ID or an alias of your KMS key, the SageMaker execution role must\n include permissions to call kms:Encrypt
. If you don't provide a KMS key ID,\n SageMaker uses the default KMS key for Amazon S3 for your role's account. SageMaker uses server-side\n encryption with KMS-managed keys for OutputDataConfig
. If you use a bucket\n policy with an s3:PutObject
permission that only allows objects with\n server-side encryption, set the condition key of\n s3:x-amz-server-side-encryption
to \"aws:kms\"
. For more\n information, see KMS-Managed Encryption\n Keys in the Amazon Simple Storage Service Developer Guide.\n
The KMS key policy must grant permission to the IAM role that you specify in your\n CreateTrainingJob
, CreateTransformJob
, or\n CreateHyperParameterTuningJob
requests. For more information, see\n Using\n Key Policies in Amazon Web Services KMS in the Amazon Web Services Key Management Service Developer\n Guide.
Identifies the S3 path where you want Amazon SageMaker to store the model artifacts. For\n example, s3://bucket-name/key-name-prefix
.
Identifies the S3 path where you want SageMaker to store the model artifacts. For\n example, s3://bucket-name/key-name-prefix
.
The serverless configuration for the endpoint.
\nServerless Inference is in preview release for Amazon SageMaker and is subject to change. We do not recommend using this feature in production environments.
\nThe serverless configuration for the endpoint.
" } }, "DesiredServerlessConfig": { "target": "com.amazonaws.sagemaker#ProductionVariantServerlessConfig", "traits": { - "smithy.api#documentation": "The serverless configuration requested for this deployment, as specified in the endpoint configuration for the endpoint.
\nServerless Inference is in preview release for Amazon SageMaker and is subject to change. We do not recommend using this feature in production environments.
\nThe serverless configuration requested for this deployment, as specified in the endpoint configuration for the endpoint.
" } } }, @@ -31077,7 +31257,7 @@ "ClarifyCheck": { "target": "com.amazonaws.sagemaker#ClarifyCheckStepMetadata", "traits": { - "smithy.api#documentation": "Container for the metadata for a Clarify check step. The configurations \n and outcomes of the check step execution. This includes:
\nThe type of the check conducted,
\nThe Amazon S3 URIs of baseline constraints and statistics files to be used for the drift check.
\nThe Amazon S3 URIs of newly calculated baseline constraints and statistics.
\nThe model package group name provided.
\nThe Amazon S3 URI of the violation report if violations detected.
\nThe Amazon Resource Name (ARN) of check processing job initiated by the step execution.
\nThe boolean flags indicating if the drift check is skipped.
\nIf step property BaselineUsedForDriftCheck
is set the same as \n CalculatedBaseline
.
Container for the metadata for a Clarify check step. The configurations \n and outcomes of the check step execution. This includes:
\nThe type of the check conducted,
\nThe Amazon S3 URIs of baseline constraints and statistics files to be used for the drift check.
\nThe Amazon S3 URIs of newly calculated baseline constraints and statistics.
\nThe model package group name provided.
\nThe Amazon S3 URI of the violation report if violations detected.
\nThe Amazon Resource Name (ARN) of check processing job initiated by the step execution.
\nThe boolean flags indicating if the drift check is skipped.
\nIf step property BaselineUsedForDriftCheck
is set the same as \n CalculatedBaseline
.
The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
\nServerless Inference is in preview release for Amazon SageMaker and is subject to change. We do not recommend using this feature in production environments.
\nThe serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
" } } }, "traits": { - "smithy.api#documentation": "Identifies a model that you want to host and the resources chosen to deploy for\n hosting it. If you are deploying multiple models, tell Amazon SageMaker how to distribute traffic\n among the models by specifying variant weights.
" + "smithy.api#documentation": "Identifies a model that you want to host and the resources chosen to deploy for\n hosting it. If you are deploying multiple models, tell SageMaker how to distribute traffic\n among the models by specifying variant weights.
" } }, "com.amazonaws.sagemaker#ProductionVariantAcceleratorType": { @@ -32285,7 +32465,7 @@ "KmsKeyId": { "target": "com.amazonaws.sagemaker#KmsKeyId", "traits": { - "smithy.api#documentation": "The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the core dump data at rest using\n Amazon S3 server-side encryption. The KmsKeyId
can be any of the following\n formats:
// KMS Key ID
\n\n \"1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// Amazon Resource Name (ARN) of a KMS Key
\n\n \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// KMS Key Alias
\n\n \"alias/ExampleAlias\"
\n
// Amazon Resource Name (ARN) of a KMS Key Alias
\n\n \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"
\n
If you use a KMS key ID or an alias of your KMS key, the Amazon SageMaker execution role must\n include permissions to call kms:Encrypt
. If you don't provide a KMS key ID,\n Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. Amazon SageMaker uses server-side\n encryption with KMS-managed keys for OutputDataConfig
. If you use a bucket\n policy with an s3:PutObject
permission that only allows objects with\n server-side encryption, set the condition key of\n s3:x-amz-server-side-encryption
to \"aws:kms\"
. For more\n information, see KMS-Managed Encryption\n Keys in the Amazon Simple Storage Service Developer Guide.\n
The KMS key policy must grant permission to the IAM role that you specify in your\n CreateEndpoint
and UpdateEndpoint
requests. For more\n information, see Using Key Policies in Amazon Web Services\n KMS in the Amazon Web Services Key Management Service Developer Guide.
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that SageMaker uses to encrypt the core dump data at rest using\n Amazon S3 server-side encryption. The KmsKeyId
can be any of the following\n formats:
// KMS Key ID
\n\n \"1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// Amazon Resource Name (ARN) of a KMS Key
\n\n \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// KMS Key Alias
\n\n \"alias/ExampleAlias\"
\n
// Amazon Resource Name (ARN) of a KMS Key Alias
\n\n \"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"
\n
If you use a KMS key ID or an alias of your KMS key, the SageMaker execution role must\n include permissions to call kms:Encrypt
. If you don't provide a KMS key ID,\n SageMaker uses the default KMS key for Amazon S3 for your role's account. SageMaker uses server-side\n encryption with KMS-managed keys for OutputDataConfig
. If you use a bucket\n policy with an s3:PutObject
permission that only allows objects with\n server-side encryption, set the condition key of\n s3:x-amz-server-side-encryption
to \"aws:kms\"
. For more\n information, see KMS-Managed Encryption\n Keys in the Amazon Simple Storage Service Developer Guide.\n
The KMS key policy must grant permission to the IAM role that you specify in your\n CreateEndpoint
and UpdateEndpoint
requests. For more\n information, see Using Key Policies in Amazon Web Services\n KMS in the Amazon Web Services Key Management Service Developer Guide.
Serverless Inference is in preview release for Amazon SageMaker and is subject to change. We do not recommend using this feature in production environments.
\nSpecifies the serverless configuration for an endpoint variant.
" + "smithy.api#documentation": "Specifies the serverless configuration for an endpoint variant.
" } }, "com.amazonaws.sagemaker#ProductionVariantStatus": { @@ -32686,13 +32866,13 @@ "CurrentServerlessConfig": { "target": "com.amazonaws.sagemaker#ProductionVariantServerlessConfig", "traits": { - "smithy.api#documentation": "The serverless configuration for the endpoint.
\nServerless Inference is in preview release for Amazon SageMaker and is subject to change. We do not recommend using this feature in production environments.
\nThe serverless configuration for the endpoint.
" } }, "DesiredServerlessConfig": { "target": "com.amazonaws.sagemaker#ProductionVariantServerlessConfig", "traits": { - "smithy.api#documentation": "The serverless configuration requested for the endpoint update.
\nServerless Inference is in preview release for Amazon SageMaker and is subject to change. We do not recommend using this feature in production environments.
\nThe serverless configuration requested for the endpoint update.
" } } }, @@ -33417,12 +33597,12 @@ "Properties": { "target": "com.amazonaws.sagemaker#QueryProperties", "traits": { - "smithy.api#documentation": "Filter the lineage entities connected to the StartArn
(s) by a set if property key value pairs. \n If multiple pairs are provided, an entity will be included in the results if it matches any of the provided pairs.
Filter the lineage entities connected to the StartArn
(s) by a set of property key-value pairs. \n If multiple pairs are provided, an entity is included in the results if it matches any of the provided pairs.
A set of filters to narrow the set of lineage entities connected to the StartArn
(s) returned by the \n QueryLineage
API action.
A set of filters to narrow the set of lineage entities connected to the StartArn
(s) returned by the \n QueryLineage
API action.
Associations between lineage entities are directed. This parameter determines the direction from the \n StartArn(s) the query will look.
" + "smithy.api#documentation": "Associations between lineage entities have a direction. This parameter determines the direction from the \n StartArn(s) that the query traverses.
" } }, "IncludeEdges": { "target": "com.amazonaws.sagemaker#Boolean", "traits": { - "smithy.api#documentation": " Setting this value to True
will retrieve not only the entities of interest but also the \n Associations and \n lineage entities on the path. Set to False
to only return lineage entities that match your query.
Setting this value to True
retrieves not only the entities of interest but also the \n Associations and \n lineage entities on the path. Set to False
to only return lineage entities that match your query.
The maximum depth in lineage relationships from the StartArns
that will be traversed. Depth is a measure of the number \n of Associations
from the StartArn
entity to the matched results.
The maximum depth in lineage relationships from the StartArns
that are traversed. Depth is a measure of the number \n of Associations
from the StartArn
entity to the matched results.
Identifies the Amazon S3 bucket where you want SageMaker to store the \n compiled model artifacts.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Provides information about the output configuration for the compiled \n model.
" + } + }, "com.amazonaws.sagemaker#RecommendationJobDescription": { "type": "string", "traits": { @@ -33750,6 +33944,12 @@ "traits": { "smithy.api#documentation": "Specifies the endpoint configuration to use for a job.
" } + }, + "VolumeKmsKeyId": { + "target": "com.amazonaws.sagemaker#KmsKeyId", + "traits": { + "smithy.api#documentation": "The Amazon Resource Name (ARN) of a Amazon Web Services Key Management Service (Amazon Web Services KMS) key \n that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint. \n This key will be passed to SageMaker Hosting for endpoint creation.
\n \nThe SageMaker execution role must have kms:CreateGrant
permission in order to encrypt data on the storage \n volume of the endpoints created for inference recommendation. The inference recommendation job will fail \n asynchronously during endpoint configuration creation if the role passed does not have \n kms:CreateGrant
permission.
The KmsKeyId
can be any of the following formats:
// KMS Key ID
\n\n \"1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// Amazon Resource Name (ARN) of a KMS Key
\n\n \"arn:aws:kms:
\n
// KMS Key Alias
\n\n \"alias/ExampleAlias\"
\n
// Amazon Resource Name (ARN) of a KMS Key Alias
\n\n \"arn:aws:kms:
\n
For more information about key identifiers, see \n Key identifiers (KeyID) in the \n Amazon Web Services Key Management Service (Amazon Web Services KMS) documentation.
" + } } }, "traits": { @@ -33766,6 +33966,26 @@ "smithy.api#pattern": "^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,63}$" } }, + "com.amazonaws.sagemaker#RecommendationJobOutputConfig": { + "type": "structure", + "members": { + "KmsKeyId": { + "target": "com.amazonaws.sagemaker#KmsKeyId", + "traits": { + "smithy.api#documentation": "The Amazon Resource Name (ARN) of a Amazon Web Services Key Management Service (Amazon Web Services KMS) key \n that Amazon SageMaker uses to encrypt your output artifacts with Amazon S3 server-side encryption. \n The SageMaker execution role must have kms:GenerateDataKey
permission.
The KmsKeyId
can be any of the following formats:
// KMS Key ID
\n\n \"1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// Amazon Resource Name (ARN) of a KMS Key
\n\n \"arn:aws:kms:
\n
// KMS Key Alias
\n\n \"alias/ExampleAlias\"
\n
// Amazon Resource Name (ARN) of a KMS Key Alias
\n\n \"arn:aws:kms:
\n
For more information about key identifiers, see \n Key identifiers (KeyID) in the \n Amazon Web Services Key Management Service (Amazon Web Services KMS) documentation.
" + } + }, + "CompiledOutputConfig": { + "target": "com.amazonaws.sagemaker#RecommendationJobCompiledOutputConfig", + "traits": { + "smithy.api#documentation": "Provides information about the output configuration for the compiled \n model.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Provides information about the output configuration for the compiled model.
" + } + }, "com.amazonaws.sagemaker#RecommendationJobResourceLimit": { "type": "structure", "members": { @@ -34057,6 +34277,9 @@ "input": { "target": "com.amazonaws.sagemaker#RegisterDevicesRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceLimitExceeded" @@ -34305,14 +34528,14 @@ "VolumeSizeInGB": { "target": "com.amazonaws.sagemaker#VolumeSizeInGB", "traits": { - "smithy.api#documentation": "The size of the ML storage volume that you want to provision.
\nML storage volumes store model artifacts and incremental states. Training\n algorithms might also use the ML storage volume for scratch space. If you want to store\n the training data in the ML storage volume, choose File
as the\n TrainingInputMode
in the algorithm specification.
You must specify sufficient ML storage for your scenario.
\nAmazon SageMaker supports only the General Purpose SSD (gp2) ML storage volume type.\n
\nCertain Nitro-based instances include local storage with a fixed total size,\n dependent on the instance type. When using these instances for training, Amazon SageMaker mounts\n the local instance storage instead of Amazon EBS gp2 storage. You can't request a\n VolumeSizeInGB
greater than the total size of the local instance\n storage.
For a list of instance types that support local instance storage, including the\n total size per instance type, see Instance Store Volumes.
\nThe size of the ML storage volume that you want to provision.
\nML storage volumes store model artifacts and incremental states. Training\n algorithms might also use the ML storage volume for scratch space. If you want to store\n the training data in the ML storage volume, choose File
as the\n TrainingInputMode
in the algorithm specification.
You must specify sufficient ML storage for your scenario.
\nSageMaker supports only the General Purpose SSD (gp2) ML storage volume type.\n
\nCertain Nitro-based instances include local storage with a fixed total size,\n dependent on the instance type. When using these instances for training, SageMaker mounts\n the local instance storage instead of Amazon EBS gp2 storage. You can't request a\n VolumeSizeInGB
greater than the total size of the local instance\n storage.
For a list of instance types that support local instance storage, including the\n total size per instance type, see Instance Store Volumes.
\nThe Amazon Web Services KMS key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML\n compute instance(s) that run the training job.
\nCertain Nitro-based instances include local storage, dependent on the instance\n type. Local storage volumes are encrypted using a hardware module on the instance.\n You can't request a VolumeKmsKeyId
when using an instance type with\n local storage.
For a list of instance types that support local instance storage, see Instance Store Volumes.
\nFor more information about local instance storage encryption, see SSD\n Instance Store Volumes.
\nThe VolumeKmsKeyId
can be in any of the following formats:
// KMS Key ID
\n\n \"1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// Amazon Resource Name (ARN) of a KMS Key
\n\n \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
The Amazon Web Services KMS key that SageMaker uses to encrypt data on the storage volume attached to the ML\n compute instance(s) that run the training job.
\nCertain Nitro-based instances include local storage, dependent on the instance\n type. Local storage volumes are encrypted using a hardware module on the instance.\n You can't request a VolumeKmsKeyId
when using an instance type with\n local storage.
For a list of instance types that support local instance storage, see Instance Store Volumes.
\nFor more information about local instance storage encryption, see SSD\n Instance Store Volumes.
\nThe VolumeKmsKeyId
can be in any of the following formats:
// KMS Key ID
\n\n \"1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
// Amazon Resource Name (ARN) of a KMS Key
\n\n \"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
\n
You have exceeded an Amazon SageMaker resource limit. For example, you might have too many\n training jobs created.
",
+        "smithy.api#documentation": "You have exceeded a SageMaker resource limit. For example, you might have too many\n            training jobs created.
", "smithy.api#error": "client" } }, @@ -34425,7 +34648,7 @@ "InstanceType": { "target": "com.amazonaws.sagemaker#AppInstanceType", "traits": { - "smithy.api#documentation": "The instance type that the image version runs on.
" + "smithy.api#documentation": "The instance type that the image version runs on.
\nJupyterServer Apps only support the system
value. KernelGateway Apps do not support the system
value, but support all other values for available instance types.
If you choose S3Prefix
, S3Uri
identifies a key name prefix.\n Amazon SageMaker uses all objects that match the specified key name prefix for model training.
If you choose ManifestFile
, S3Uri
identifies an object that\n is a manifest file containing a list of object keys that you want Amazon SageMaker to use for model\n training.
If you choose AugmentedManifestFile
, S3Uri identifies an object that is\n an augmented manifest file in JSON lines format. This file contains the data you want to\n use for model training. AugmentedManifestFile
can only be used if the\n Channel's input mode is Pipe
.
If you choose S3Prefix
, S3Uri
identifies a key name prefix.\n SageMaker uses all objects that match the specified key name prefix for model training.
If you choose ManifestFile
, S3Uri
identifies an object that\n is a manifest file containing a list of object keys that you want SageMaker to use for model\n training.
If you choose AugmentedManifestFile
, S3Uri identifies an object that is\n an augmented manifest file in JSON lines format. This file contains the data you want to\n use for model training. AugmentedManifestFile
can only be used if the\n Channel's input mode is Pipe
.
Depending on the value specified for the S3DataType
, identifies either\n a key name prefix or a manifest. For example:
A key name prefix might look like this:\n s3://bucketname/exampleprefix
\n
A manifest might look like this:\n s3://bucketname/example.manifest
\n
A manifest is an S3 object which is a JSON file consisting of an array of\n elements. The first element is a prefix which is followed by one or more\n suffixes. SageMaker appends the suffix elements to the prefix to get a full set\n of S3Uri
. Note that the prefix must be a valid non-empty\n S3Uri
that precludes users from specifying a manifest whose\n individual S3Uri
is sourced from different S3 buckets.
The following code example shows a valid manifest format:
\n\n [ {\"prefix\": \"s3://customer_bucket/some/prefix/\"},
\n
\n \"relative/path/to/custdata-1\",
\n
\n \"relative/path/custdata-2\",
\n
\n ...
\n
\n \"relative/path/custdata-N\"
\n
\n ]
\n
This JSON is equivalent to the following S3Uri
\n list:
\n s3://customer_bucket/some/prefix/relative/path/to/custdata-1
\n
\n s3://customer_bucket/some/prefix/relative/path/custdata-2
\n
\n ...
\n
\n s3://customer_bucket/some/prefix/relative/path/custdata-N
\n
The complete set of S3Uri
in this manifest is the input data\n for the channel for this data source. The object that each S3Uri
\n points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on\n your behalf.
Depending on the value specified for the S3DataType
, identifies either\n a key name prefix or a manifest. For example:
A key name prefix might look like this:\n s3://bucketname/exampleprefix
\n
A manifest might look like this:\n s3://bucketname/example.manifest
\n
A manifest is an S3 object which is a JSON file consisting of an array of\n elements. The first element is a prefix which is followed by one or more\n suffixes. SageMaker appends the suffix elements to the prefix to get a full set\n of S3Uri
. Note that the prefix must be a valid non-empty\n S3Uri
that precludes users from specifying a manifest whose\n individual S3Uri
is sourced from different S3 buckets.
The following code example shows a valid manifest format:
\n\n [ {\"prefix\": \"s3://customer_bucket/some/prefix/\"},
\n
\n \"relative/path/to/custdata-1\",
\n
\n \"relative/path/custdata-2\",
\n
\n ...
\n
\n \"relative/path/custdata-N\"
\n
\n ]
\n
This JSON is equivalent to the following S3Uri
\n list:
\n s3://customer_bucket/some/prefix/relative/path/to/custdata-1
\n
\n s3://customer_bucket/some/prefix/relative/path/custdata-2
\n
\n ...
\n
\n s3://customer_bucket/some/prefix/relative/path/custdata-N
\n
The complete set of S3Uri
in this manifest is the input data\n for the channel for this data source. The object that each S3Uri
\n points to must be readable by the IAM role that SageMaker uses to perform tasks on\n your behalf.
If you want Amazon SageMaker to replicate the entire dataset on each ML compute instance that\n is launched for model training, specify FullyReplicated
.
If you want Amazon SageMaker to replicate a subset of data on each ML compute instance that is\n launched for model training, specify ShardedByS3Key
. If there are\n n ML compute instances launched for a training job, each\n instance gets approximately 1/n of the number of S3 objects. In\n this case, model training on each machine uses only the subset of training data.
Don't choose more ML compute instances for training than available S3 objects. If\n you do, some nodes won't get any data and you will pay for nodes that aren't getting any\n training data. This applies in both File and Pipe modes. Keep this in mind when\n developing algorithms.
\nIn distributed training, where you use multiple ML compute EC2 instances, you might\n choose ShardedByS3Key
. If the algorithm requires copying training data to\n the ML storage volume (when TrainingInputMode
is set to File
),\n this copies 1/n of the number of objects.
If you want SageMaker to replicate the entire dataset on each ML compute instance that\n is launched for model training, specify FullyReplicated
.
If you want SageMaker to replicate a subset of data on each ML compute instance that is\n launched for model training, specify ShardedByS3Key
. If there are\n n ML compute instances launched for a training job, each\n instance gets approximately 1/n of the number of S3 objects. In\n this case, model training on each machine uses only the subset of training data.
Don't choose more ML compute instances for training than available S3 objects. If\n you do, some nodes won't get any data and you will pay for nodes that aren't getting any\n training data. This applies in both File and Pipe modes. Keep this in mind when\n developing algorithms.
\nIn distributed training, where you use multiple ML compute EC2 instances, you might\n choose ShardedByS3Key
. If the algorithm requires copying training data to\n the ML storage volume (when TrainingInputMode
is set to File
),\n this copies 1/n of the number of objects.
Provides APIs for creating and managing Amazon SageMaker resources.
\nOther Resources:
\nProvides APIs for creating and managing SageMaker resources.
\nOther Resources:
\nA detailed description of the progress within a secondary status.\n
\nAmazon SageMaker provides secondary statuses and status messages that apply to each of\n them:
\nStarting the training job.
\nLaunching requested ML\n instances.
\nInsufficient\n capacity error from EC2 while launching instances,\n retrying!
\nLaunched\n instance was unhealthy, replacing it!
\nPreparing the instances for training.
\nDownloading the training image.
\nTraining\n image download completed. Training in\n progress.
\nStatus messages are subject to change. Therefore, we recommend not including them\n in code that programmatically initiates actions. For examples, don't use status\n messages in if statements.
\nTo have an overview of your training job's progress, view\n TrainingJobStatus
and SecondaryStatus
in DescribeTrainingJob, and StatusMessage
together. For\n example, at the start of a training job, you might see the following:
\n TrainingJobStatus
- InProgress
\n SecondaryStatus
- Training
\n StatusMessage
- Downloading the training image
A detailed description of the progress within a secondary status.\n
\nSageMaker provides secondary statuses and status messages that apply to each of\n them:
\nStarting the training job.
\nLaunching requested ML\n instances.
\nInsufficient\n capacity error from EC2 while launching instances,\n retrying!
\nLaunched\n instance was unhealthy, replacing it!
\nPreparing the instances for training.
\nDownloading the training image.
\nTraining\n image download completed. Training in\n progress.
\nStatus messages are subject to change. Therefore, we recommend not including them\n            in code that programmatically initiates actions. For example, don't use status\n            messages in if statements.
\nTo have an overview of your training job's progress, view\n TrainingJobStatus
and SecondaryStatus
in DescribeTrainingJob, and StatusMessage
together. For\n example, at the start of a training job, you might see the following:
\n TrainingJobStatus
- InProgress
\n SecondaryStatus
- Training
\n StatusMessage
- Downloading the training image
An array element of DescribeTrainingJobResponse$SecondaryStatusTransitions. It provides\n additional details about a status that the training job has transitioned through. A\n training job can be in one of several states, for example, starting, downloading,\n training, or uploading. Within each state, there are a number of intermediate states.\n For example, within the starting state, Amazon SageMaker could be starting the training job or\n launching the ML instances. These transitional states are referred to as the job's\n secondary\n status.\n
\n " + "smithy.api#documentation": "An array element of DescribeTrainingJobResponse$SecondaryStatusTransitions. It provides\n additional details about a status that the training job has transitioned through. A\n training job can be in one of several states, for example, starting, downloading,\n training, or uploading. Within each state, there are a number of intermediate states.\n For example, within the starting state, SageMaker could be starting the training job or\n launching the ML instances. These transitional states are referred to as the job's\n secondary\n status.\n
\n " } }, "com.amazonaws.sagemaker#SecondaryStatusTransitions": { @@ -36107,7 +36331,7 @@ "smithy.api#box": {}, "smithy.api#range": { "min": 1, - "max": 50 + "max": 200 } } }, @@ -36475,13 +36699,13 @@ "AlgorithmName": { "target": "com.amazonaws.sagemaker#ArnOrName", "traits": { - "smithy.api#documentation": "The name of an algorithm that was used to create the model package. The algorithm must\n be either an algorithm resource in your Amazon SageMaker account or an algorithm in Amazon Web Services Marketplace that you\n are subscribed to.
", + "smithy.api#documentation": "The name of an algorithm that was used to create the model package. The algorithm must\n be either an algorithm resource in your SageMaker account or an algorithm in Amazon Web Services Marketplace that you\n are subscribed to.
", "smithy.api#required": {} } } }, "traits": { - "smithy.api#documentation": "Specifies an algorithm that was used to create the model package. The algorithm must\n be either an algorithm resource in your Amazon SageMaker account or an algorithm in Amazon Web Services Marketplace that you\n are subscribed to.
" + "smithy.api#documentation": "Specifies an algorithm that was used to create the model package. The algorithm must\n be either an algorithm resource in your SageMaker account or an algorithm in Amazon Web Services Marketplace that you\n are subscribed to.
" } }, "com.amazonaws.sagemaker#SourceAlgorithmList": { @@ -36582,6 +36806,9 @@ "input": { "target": "com.amazonaws.sagemaker#StartMonitoringScheduleRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -36608,13 +36835,16 @@ "input": { "target": "com.amazonaws.sagemaker#StartNotebookInstanceInput" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceLimitExceeded" } ], "traits": { - "smithy.api#documentation": "Launches an ML compute instance with the latest version of the libraries and\n attaches your ML storage volume. After configuring the notebook instance, Amazon SageMaker sets the\n notebook instance status to InService
. A notebook instance's status must be\n InService
before you can connect to your Jupyter notebook.
Launches an ML compute instance with the latest version of the libraries and\n attaches your ML storage volume. After configuring the notebook instance, SageMaker sets the\n notebook instance status to InService
. A notebook instance's status must be\n InService
before you can connect to your Jupyter notebook.
Request to stop an edge packaging job.
" } @@ -36856,6 +37095,9 @@ "input": { "target": "com.amazonaws.sagemaker#StopHyperParameterTuningJobRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -36882,6 +37124,9 @@ "input": { "target": "com.amazonaws.sagemaker#StopInferenceRecommendationsJobRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -36908,6 +37153,9 @@ "input": { "target": "com.amazonaws.sagemaker#StopLabelingJobRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -36934,6 +37182,9 @@ "input": { "target": "com.amazonaws.sagemaker#StopMonitoringScheduleRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceNotFound" @@ -36960,8 +37211,11 @@ "input": { "target": "com.amazonaws.sagemaker#StopNotebookInstanceInput" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { - "smithy.api#documentation": "Terminates the ML compute instance. Before terminating the instance, Amazon SageMaker\n disconnects the ML storage volume from it. Amazon SageMaker preserves the ML storage volume. Amazon SageMaker\n stops charging you for the ML compute instance when you call\n StopNotebookInstance
.
To access data on the ML storage volume for a notebook instance that has been\n terminated, call the StartNotebookInstance
API.\n StartNotebookInstance
launches another ML compute instance, configures\n it, and attaches the preserved ML storage volume so you can continue your work.\n
Terminates the ML compute instance. Before terminating the instance, SageMaker\n disconnects the ML storage volume from it. SageMaker preserves the ML storage volume. SageMaker\n stops charging you for the ML compute instance when you call\n StopNotebookInstance
.
To access data on the ML storage volume for a notebook instance that has been\n terminated, call the StartNotebookInstance
API.\n StartNotebookInstance
launches another ML compute instance, configures\n it, and attaches the preserved ML storage volume so you can continue your work.\n
Stops a training job. To stop a job, Amazon SageMaker sends the algorithm the\n SIGTERM
signal, which delays job termination for 120 seconds.\n Algorithms might use this 120-second window to save the model artifacts, so the results\n of the training is not lost.
When it receives a StopTrainingJob
request, Amazon SageMaker changes the status of\n the job to Stopping
. After Amazon SageMaker stops the job, it sets the status to\n Stopped
.
Stops a training job. To stop a job, SageMaker sends the algorithm the\n SIGTERM
signal, which delays job termination for 120 seconds.\n            Algorithms might use this 120-second window to save the model artifacts, so the results\n            of the training are not lost.
When it receives a StopTrainingJob
request, SageMaker changes the status of\n the job to Stopping
. After SageMaker stops the job, it sets the status to\n Stopped
.
The maximum length of time, in seconds, that a training or compilation job can run.
\nFor compilation jobs, if the job does not complete during this time, you will \n receive a TimeOut
error. We recommend starting with 900 seconds and increase as \n necessary based on your model.
For all other jobs, if the job does not complete during this time, Amazon SageMaker ends the job. When \n RetryStrategy
is specified in the job request,\n MaxRuntimeInSeconds
specifies the maximum time for all of the attempts\n in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days.
The maximum length of time, in seconds, that a training or compilation job can run.
\nFor compilation jobs, if the job does not complete during this time, a TimeOut
error\n is generated. We recommend starting with 900 seconds and increasing as \n necessary based on your model.
For all other jobs, if the job does not complete during this time, SageMaker ends the job. When \n RetryStrategy
is specified in the job request,\n MaxRuntimeInSeconds
specifies the maximum time for all of the attempts\n in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days.
The maximum length of time, in seconds, that a managed Spot training job has to\n complete. It is the amount of time spent waiting for Spot capacity plus the amount of\n time the job can run. It must be equal to or greater than\n MaxRuntimeInSeconds
. If the job does not complete during this time,\n Amazon SageMaker ends the job.
When RetryStrategy
is specified in the job request,\n MaxWaitTimeInSeconds
specifies the maximum time for all of the attempts\n in total, not each individual attempt.
The maximum length of time, in seconds, that a managed Spot training job has to\n complete. It is the amount of time spent waiting for Spot capacity plus the amount of\n time the job can run. It must be equal to or greater than\n MaxRuntimeInSeconds
. If the job does not complete during this time,\n SageMaker ends the job.
When RetryStrategy
is specified in the job request,\n MaxWaitTimeInSeconds
specifies the maximum time for all of the attempts\n in total, not each individual attempt.
Specifies a limit to how long a model training job or model compilation job \n can run. It also specifies how long a managed spot training\n job has to complete. When the job reaches the time limit, Amazon SageMaker ends the training or\n compilation job. Use this API to cap model training costs.
\nTo stop a training job, Amazon SageMaker sends the algorithm the SIGTERM
signal, which delays\n job termination for 120 seconds. Algorithms can use this 120-second window to save the\n model artifacts, so the results of training are not lost.
The training algorithms provided by Amazon SageMaker automatically save the intermediate results\n of a model training job when possible. This attempt to save artifacts is only a best\n effort case as model might not be in a state from which it can be saved. For example, if\n training has just started, the model might not be ready to save. When saved, this\n intermediate data is a valid model artifact. You can use it to create a model with\n CreateModel
.
The Neural Topic Model (NTM) currently does not support saving intermediate model\n artifacts. When training NTMs, make sure that the maximum runtime is sufficient for\n the training job to complete.
\nSpecifies a limit to how long a model training job or model compilation job \n can run. It also specifies how long a managed spot training\n job has to complete. When the job reaches the time limit, SageMaker ends the training or\n compilation job. Use this API to cap model training costs.
\nTo stop a training job, SageMaker sends the algorithm the SIGTERM
signal, which delays\n job termination for 120 seconds. Algorithms can use this 120-second window to save the\n model artifacts, so the results of training are not lost.
The training algorithms provided by SageMaker automatically save the intermediate results\n of a model training job when possible. This attempt to save artifacts is only a best\n effort case as model might not be in a state from which it can be saved. For example, if\n training has just started, the model might not be ready to save. When saved, this\n intermediate data is a valid model artifact. You can use it to create a model with\n CreateModel
.
The Neural Topic Model (NTM) currently does not support saving intermediate model\n artifacts. When training NTMs, make sure that the maximum runtime is sufficient for\n the training job to complete.
\nBatch size for the first step to turn on traffic on the new endpoint fleet. Value
must be less than\n or equal to 50% of the variant's total instance count.
Batch size for the first step to turn on traffic on the new endpoint fleet. Value
must be less than\n or equal to 50% of the variant's total instance count.
Provides detailed information about the state of the training job. For detailed\n information about the secondary status of the training job, see\n StatusMessage
under SecondaryStatusTransition.
Amazon SageMaker provides primary statuses and secondary statuses that apply to each of\n them:
\n\n Starting
\n - Starting the training job.
\n Downloading
- An optional stage for algorithms that\n support File
training input mode. It indicates that\n data is being downloaded to the ML storage volumes.
\n Training
- Training is in progress.
\n Uploading
- Training is complete and the model\n artifacts are being uploaded to the S3 location.
\n Completed
- The training job has completed.
\n Failed
- The training job has failed. The reason for\n the failure is returned in the FailureReason
field of\n DescribeTrainingJobResponse
.
\n MaxRuntimeExceeded
- The job stopped because it\n exceeded the maximum allowed runtime.
\n Stopped
- The training job has stopped.
\n Stopping
- Stopping the training job.
Valid values for SecondaryStatus
are subject to change.
We no longer support the following secondary statuses:
\n\n LaunchingMLInstances
\n
\n PreparingTrainingStack
\n
\n DownloadingTrainingImage
\n
Provides detailed information about the state of the training job. For detailed\n information about the secondary status of the training job, see\n StatusMessage
under SecondaryStatusTransition.
SageMaker provides primary statuses and secondary statuses that apply to each of\n them:
\n\n Starting
\n - Starting the training job.
\n Downloading
- An optional stage for algorithms that\n support File
training input mode. It indicates that\n data is being downloaded to the ML storage volumes.
\n Training
- Training is in progress.
\n Uploading
- Training is complete and the model\n artifacts are being uploaded to the S3 location.
\n Completed
- The training job has completed.
\n Failed
- The training job has failed. The reason for\n the failure is returned in the FailureReason
field of\n DescribeTrainingJobResponse
.
\n MaxRuntimeExceeded
- The job stopped because it\n exceeded the maximum allowed runtime.
\n Stopped
- The training job has stopped.
\n Stopping
- Stopping the training job.
Valid values for SecondaryStatus
are subject to change.
We no longer support the following secondary statuses:
\n\n LaunchingMLInstances
\n
\n PreparingTrainingStack
\n
\n DownloadingTrainingImage
\n
The S3 path where model artifacts that you configured when creating the job are\n stored. Amazon SageMaker creates subfolders for model artifacts.
" + "smithy.api#documentation": "The S3 path where model artifacts that you configured when creating the job are\n stored. SageMaker creates subfolders for model artifacts.
" } }, "ResourceConfig": { @@ -38370,7 +38633,7 @@ "StoppingCondition": { "target": "com.amazonaws.sagemaker#StoppingCondition", "traits": { - "smithy.api#documentation": "Specifies a limit to how long a model training job can run. It also specifies how long\n a managed Spot training job has to complete. When the job reaches the time limit, Amazon SageMaker\n ends the training job. Use this API to cap model training costs.
\nTo stop a job, Amazon SageMaker sends the algorithm the SIGTERM
signal, which delays\n job termination for 120 seconds. Algorithms can use this 120-second window to save the\n model artifacts, so the results of training are not lost.
Specifies a limit to how long a model training job can run. It also specifies how long\n a managed Spot training job has to complete. When the job reaches the time limit, SageMaker\n ends the training job. Use this API to cap model training costs.
\nTo stop a job, SageMaker sends the algorithm the SIGTERM
signal, which delays\n job termination for 120 seconds. Algorithms can use this 120-second window to save the\n model artifacts, so the results of training are not lost.
Indicates the time when the training job ends on training instances. You are billed\n for the time interval between the value of TrainingStartTime
and this time.\n For successful jobs and stopped jobs, this is the time after model artifacts are\n uploaded. For failed jobs, this is the time when Amazon SageMaker detects a job failure.
Indicates the time when the training job ends on training instances. You are billed\n for the time interval between the value of TrainingStartTime
and this time.\n For successful jobs and stopped jobs, this is the time after model artifacts are\n uploaded. For failed jobs, this is the time when SageMaker detects a job failure.
the path to the S3 bucket where you want to store model artifacts. Amazon SageMaker creates\n subfolders for the artifacts.
",
+          "smithy.api#documentation": "The path to the S3 bucket where you want to store model artifacts. SageMaker creates\n            subfolders for the artifacts.
", "smithy.api#required": {} } }, @@ -38535,7 +38798,7 @@ "StoppingCondition": { "target": "com.amazonaws.sagemaker#StoppingCondition", "traits": { - "smithy.api#documentation": "Specifies a limit to how long a model training job can run. It also specifies how long\n a managed Spot training job has to complete. When the job reaches the time limit, Amazon SageMaker\n ends the training job. Use this API to cap model training costs.
\nTo stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job\n termination for 120 seconds. Algorithms can use this 120-second window to save the model\n artifacts.
", + "smithy.api#documentation": "Specifies a limit to how long a model training job can run. It also specifies how long\n a managed Spot training job has to complete. When the job reaches the time limit, SageMaker\n ends the training job. Use this API to cap model training costs.
\nTo stop a job, SageMaker sends the algorithm the SIGTERM signal, which delays job\n termination for 120 seconds. Algorithms can use this 120-second window to save the model\n artifacts.
", "smithy.api#required": {} } } @@ -40496,6 +40759,9 @@ "input": { "target": "com.amazonaws.sagemaker#UpdateDeviceFleetRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.sagemaker#ResourceInUse" @@ -40547,6 +40813,9 @@ "input": { "target": "com.amazonaws.sagemaker#UpdateDevicesRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "traits": { "smithy.api#documentation": "Updates one or more devices in a fleet.
" } @@ -40642,7 +40911,7 @@ } ], "traits": { - "smithy.api#documentation": "Deploys the new EndpointConfig
specified in the request, switches to\n using newly created endpoint, and then deletes resources provisioned for the endpoint\n using the previous EndpointConfig
(there is no availability loss).
When Amazon SageMaker receives the request, it sets the endpoint status to\n Updating
. After updating the endpoint, it sets the status to\n InService
. To check the status of an endpoint, use the DescribeEndpoint API.\n \n
You must not delete an EndpointConfig
in use by an endpoint that is\n live or while the UpdateEndpoint
or CreateEndpoint
\n operations are being performed on the endpoint. To update an endpoint, you must\n create a new EndpointConfig
.
If you delete the EndpointConfig
of an endpoint that is active or\n being created or updated you may lose visibility into the instance type the endpoint\n is using. The endpoint must be deleted in order to stop incurring charges.
Deploys the new EndpointConfig
specified in the request, switches to\n            using the newly created endpoint, and then deletes resources provisioned for the endpoint\n            using the previous EndpointConfig
(there is no availability loss).
When SageMaker receives the request, it sets the endpoint status to\n Updating
. After updating the endpoint, it sets the status to\n InService
. To check the status of an endpoint, use the DescribeEndpoint API.\n \n
You must not delete an EndpointConfig
in use by an endpoint that is\n live or while the UpdateEndpoint
or CreateEndpoint
\n operations are being performed on the endpoint. To update an endpoint, you must\n create a new EndpointConfig
.
If you delete the EndpointConfig
of an endpoint that is active or\n            being created or updated, you may lose visibility into the instance type the endpoint\n            is using. The endpoint must be deleted in order to stop incurring charges.
Updates variant weight of one or more variants associated with an existing\n endpoint, or capacity of one variant associated with an existing endpoint. When it\n receives the request, Amazon SageMaker sets the endpoint status to Updating
. After\n updating the endpoint, it sets the status to InService
. To check the status\n of an endpoint, use the DescribeEndpoint API.
Updates variant weight of one or more variants associated with an existing\n endpoint, or capacity of one variant associated with an existing endpoint. When it\n receives the request, SageMaker sets the endpoint status to Updating
. After\n updating the endpoint, it sets the status to InService
. To check the status\n of an endpoint, use the DescribeEndpoint API.
The name of an existing Amazon SageMaker endpoint.
", + "smithy.api#documentation": "The name of an existing SageMaker endpoint.
", "smithy.api#required": {} } }, @@ -41023,7 +41292,7 @@ "RoleArn": { "target": "com.amazonaws.sagemaker#RoleArn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access the\n notebook instance. For more information, see Amazon SageMaker Roles.
\nTo be able to pass this role to Amazon SageMaker, the caller of this API must have the\n iam:PassRole
permission.
The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access the\n notebook instance. For more information, see SageMaker Roles.
\nTo be able to pass this role to SageMaker, the caller of this API must have the\n iam:PassRole
permission.
The size, in GB, of the ML storage volume to attach to the notebook instance. The\n default value is 5 GB. ML storage volumes are encrypted, so Amazon SageMaker can't determine the\n amount of available free space on the volume. Because of this, you can increase the\n volume size when you update a notebook instance, but you can't decrease the volume size.\n If you want to decrease the size of the ML storage volume in use, create a new notebook\n instance with the desired size.
" + "smithy.api#documentation": "The size, in GB, of the ML storage volume to attach to the notebook instance. The\n default value is 5 GB. ML storage volumes are encrypted, so SageMaker can't determine the\n amount of available free space on the volume. Because of this, you can increase the\n volume size when you update a notebook instance, but you can't decrease the volume size.\n If you want to decrease the size of the ML storage volume in use, create a new notebook\n instance with the desired size.
" } }, "DefaultCodeRepository": { "target": "com.amazonaws.sagemaker#CodeRepositoryNameOrUrl", "traits": { - "smithy.api#documentation": "The Git repository to associate with the notebook instance as its default code\n repository. This can be either the name of a Git repository stored as a resource in your\n account, or the URL of a Git repository in Amazon Web Services CodeCommit or in any\n other Git repository. When you open a notebook instance, it opens in the directory that\n contains this repository. For more information, see Associating Git Repositories with Amazon SageMaker\n Notebook Instances.
" + "smithy.api#documentation": "The Git repository to associate with the notebook instance as its default code\n repository. This can be either the name of a Git repository stored as a resource in your\n account, or the URL of a Git repository in Amazon Web Services CodeCommit or in any\n other Git repository. When you open a notebook instance, it opens in the directory that\n contains this repository. For more information, see Associating Git Repositories with SageMaker\n Notebook Instances.
" } }, "AdditionalCodeRepositories": { "target": "com.amazonaws.sagemaker#AdditionalCodeRepositoryNamesOrUrls", "traits": { - "smithy.api#documentation": "An array of up to three Git repositories to associate with the notebook instance.\n These can be either the names of Git repositories stored as resources in your account,\n or the URL of Git repositories in Amazon Web Services CodeCommit or in any\n other Git repository. These repositories are cloned at the same level as the default\n repository of your notebook instance. For more information, see Associating Git\n Repositories with Amazon SageMaker Notebook Instances.
" + "smithy.api#documentation": "An array of up to three Git repositories to associate with the notebook instance.\n These can be either the names of Git repositories stored as resources in your account,\n or the URL of Git repositories in Amazon Web Services CodeCommit or in any\n other Git repository. These repositories are cloned at the same level as the default\n repository of your notebook instance. For more information, see Associating Git\n Repositories with SageMaker Notebook Instances.
" } }, "AcceleratorTypes": { @@ -41902,6 +42171,16 @@ "smithy.api#documentation": "A collection of settings that apply to users of Amazon SageMaker Studio. These settings are\n specified when the CreateUserProfile
API is called, and as DefaultUserSettings
\n when the CreateDomain
API is called.
\n SecurityGroups
is aggregated when specified in both calls. For all other\n settings in UserSettings
, the values specified in CreateUserProfile
\n take precedence over those specified in CreateDomain
.
Turns off automatic rotation, and if a rotation is currently in\n progress, cancels the rotation.
\nTo turn on automatic rotation again, call RotateSecret.
\nIf you cancel a rotation in progress, it can leave the VersionStage
\n labels in an unexpected state. Depending on the step of the rotation in progress, you might\n need to remove the staging label AWSPENDING
from the partially created version, specified\n by the VersionId
response value. We recommend you also evaluate the partially rotated\n new version to see if it should be deleted. You can delete a version by removing all staging labels\n from it.
\n Required permissions: \n secretsmanager:CancelRotateSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Turns off automatic rotation, and if a rotation is currently in\n progress, cancels the rotation.
\nTo turn on automatic rotation again, call RotateSecret.
\nIf you cancel a rotation in progress, it can leave the VersionStage
\n labels in an unexpected state. Depending on the step of the rotation in progress, you might\n need to remove the staging label AWSPENDING
from the partially created version, specified\n by the VersionId
response value. We recommend you also evaluate the partially rotated\n new version to see if it should be deleted. You can delete a version by removing all staging labels\n from it.
\n Required permissions: \n secretsmanager:CancelRotateSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } } @@ -163,7 +163,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a new secret. A secret is a set of credentials, such as a \n user name and password, that you store in an encrypted form in Secrets Manager. The secret also \n includes the connection information to access a database or other service, which Secrets Manager \n doesn't encrypt. A secret in Secrets Manager consists of both the protected secret data and the\n important information needed to manage the secret.
\nFor information about creating a secret in the console, see Create a secret.
\nTo create a secret, you can provide the secret value to be encrypted in either the\n SecretString
parameter or the SecretBinary
parameter, but not both. \n If you include SecretString
or SecretBinary
\n then Secrets Manager creates an initial secret version and automatically attaches the staging\n label AWSCURRENT
to it.
If you don't specify an KMS encryption key, Secrets Manager uses the Amazon Web Services managed key \n aws/secretsmanager
. If this key \n doesn't already exist in your account, then Secrets Manager creates it for you automatically. All\n users and roles in the Amazon Web Services account automatically have access to use aws/secretsmanager
. \n Creating aws/secretsmanager
can result in a one-time significant delay in returning the \n result.
If the secret is in a different Amazon Web Services account from the credentials calling the API, then \n you can't use aws/secretsmanager
to encrypt the secret, and you must create \n and use a customer managed KMS key.
\n Required permissions: \n secretsmanager:CreateSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Creates a new secret. A secret can be a password, a set of \n credentials such as a user name and password, an OAuth token, or other secret information \n that you store in an encrypted form in Secrets Manager. The secret also \n includes the connection information to access a database or other service, which Secrets Manager \n doesn't encrypt. A secret in Secrets Manager consists of both the protected secret data and the\n important information needed to manage the secret.
\nFor information about creating a secret in the console, see Create a secret.
\nTo create a secret, you can provide the secret value to be encrypted in either the\n SecretString
parameter or the SecretBinary
parameter, but not both. \n If you include SecretString
or SecretBinary
\n then Secrets Manager creates an initial secret version and automatically attaches the staging\n label AWSCURRENT
to it.
For database credentials you want to rotate, for Secrets Manager to be able to rotate the secret,\n you must make sure the JSON you store in the SecretString
matches the JSON structure of\n a database secret.
If you don't specify a KMS encryption key, Secrets Manager uses the Amazon Web Services managed key \n aws/secretsmanager
. If this key \n doesn't already exist in your account, then Secrets Manager creates it for you automatically. All\n users and roles in the Amazon Web Services account automatically have access to use aws/secretsmanager
. \n Creating aws/secretsmanager
can result in a one-time significant delay in returning the \n result.
If the secret is in a different Amazon Web Services account from the credentials calling the API, then \n you can't use aws/secretsmanager
to encrypt the secret, and you must create \n and use a customer managed KMS key.
\n Required permissions: \n secretsmanager:CreateSecret
. If you \n include tags in the secret, you also need secretsmanager:TagResource
.\n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
To encrypt the secret with a KMS key other than aws/secretsmanager
, you need kms:GenerateDataKey
and kms:Decrypt
permission to the key.
Deletes the resource-based permission policy attached to the secret. To attach a policy to \n a secret, use PutResourcePolicy.
\n\n Required permissions: \n secretsmanager:DeleteResourcePolicy
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Deletes the resource-based permission policy attached to the secret. To attach a policy to \n a secret, use PutResourcePolicy.
\n\n Required permissions: \n secretsmanager:DeleteResourcePolicy
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret to delete the attached resource-based policy for.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret to delete the attached resource-based policy for.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } } @@ -349,7 +349,7 @@ } ], "traits": { - "smithy.api#documentation": "Deletes a secret and all of its versions. You can specify a recovery\n window during which you can restore the secret. The minimum recovery window is 7 days. \n The default recovery window is 30 days. Secrets Manager attaches a DeletionDate
stamp to\n the secret that specifies the end of the recovery window. At the end of the recovery window,\n Secrets Manager deletes the secret permanently.
For information about deleting a secret in the console, see https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_delete-secret.html.
\nSecrets Manager performs the permanent secret deletion at the end of the waiting period as a\n background task with low priority. There is no guarantee of a specific time after the\n recovery window for the permanent delete to occur.
\nAt any time before recovery window ends, you can use RestoreSecret to\n remove the DeletionDate
and cancel the deletion of the secret.
In a secret scheduled for deletion, you cannot access the encrypted secret value.\n To access that information, first cancel the deletion with RestoreSecret and then retrieve the information.
\n\n Required permissions: \n secretsmanager:DeleteSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Deletes a secret and all of its versions. You can specify a recovery\n window during which you can restore the secret. The minimum recovery window is 7 days. \n The default recovery window is 30 days. Secrets Manager attaches a DeletionDate
stamp to\n the secret that specifies the end of the recovery window. At the end of the recovery window,\n Secrets Manager deletes the secret permanently.
For information about deleting a secret in the console, see https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_delete-secret.html.
\nSecrets Manager performs the permanent secret deletion at the end of the waiting period as a\n background task with low priority. There is no guarantee of a specific time after the\n recovery window for the permanent delete to occur.
\nAt any time before the recovery window ends, you can use RestoreSecret to\n remove the DeletionDate
and cancel the deletion of the secret.
In a secret scheduled for deletion, you cannot access the encrypted secret value.\n To access that information, first cancel the deletion with RestoreSecret and then retrieve the information.
\n\n Required permissions: \n secretsmanager:DeleteSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret to delete.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret to delete.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -427,7 +427,7 @@ } ], "traits": { - "smithy.api#documentation": "Retrieves the details of a secret. It does not include the encrypted secret value. Secrets Manager\n only returns fields that have a value in the response.
\n\n Required permissions: \n secretsmanager:DescribeSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Retrieves the details of a secret. It does not include the encrypted secret value. Secrets Manager\n only returns fields that have a value in the response.
\n\n Required permissions: \n secretsmanager:DescribeSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } } @@ -710,7 +710,7 @@ } ], "traits": { - "smithy.api#documentation": "Generates a random password. We recommend that you specify the\n maximum length and include every character type that the system you are generating a password\n for can support.
\n\n Required permissions: \n secretsmanager:GetRandomPassword
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Generates a random password. We recommend that you specify the\n maximum length and include every character type that the system you are generating a password\n for can support.
\n\n Required permissions: \n secretsmanager:GetRandomPassword
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Retrieves the JSON text of the resource-based policy document attached to the\n secret. For more information about permissions policies attached to a secret, see \n Permissions \n policies attached to a secret.
\n\n Required permissions: \n secretsmanager:GetResourcePolicy
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Retrieves the JSON text of the resource-based policy document attached to the\n secret. For more information about permissions policies attached to a secret, see \n Permissions \n policies attached to a secret.
\n\n Required permissions: \n secretsmanager:GetResourcePolicy
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret to retrieve the attached resource-based policy for.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret to retrieve the attached resource-based policy for.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } } @@ -871,7 +871,7 @@ } ], "traits": { - "smithy.api#documentation": "Retrieves the contents of the encrypted fields SecretString
or\n SecretBinary
from the specified version of a secret, whichever contains\n content.
We recommend that you cache your secret values by using client-side caching. \n Caching secrets improves speed and reduces your costs. For more information, see Cache secrets for \n your applications.
\n \n\n Required permissions: \n secretsmanager:GetSecretValue
. \n If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key \n aws/secretsmanager
, then you also need kms:Decrypt
permissions for that key.\n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Retrieves the contents of the encrypted fields SecretString
or\n SecretBinary
from the specified version of a secret, whichever contains\n content.
We recommend that you cache your secret values by using client-side caching. \n Caching secrets improves speed and reduces your costs. For more information, see Cache secrets for \n your applications.
\n \n\n Required permissions: \n secretsmanager:GetSecretValue
. \n If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key \n aws/secretsmanager
, then you also need kms:Decrypt
permissions for that key.\n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret to retrieve.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret to retrieve.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -1055,7 +1055,7 @@ } ], "traits": { - "smithy.api#documentation": "Lists the versions for a secret.
\nTo list the secrets in the account, use ListSecrets.
\nTo get the secret value from SecretString
or SecretBinary
, \n call GetSecretValue.
\n Required permissions: \n secretsmanager:ListSecretVersionIds
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Lists the versions for a secret.
\nTo list the secrets in the account, use ListSecrets.
\nTo get the secret value from SecretString
or SecretBinary
, \n call GetSecretValue.
\n Required permissions: \n secretsmanager:ListSecretVersionIds
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret whose versions you want to list.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret whose versions you want to list.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -1144,7 +1144,7 @@ } ], "traits": { - "smithy.api#documentation": "Lists the secrets that are stored by Secrets Manager in the Amazon Web Services account, not including secrets \n that are marked for deletion. To see secrets marked for deletion, use the Secrets Manager console.
\nTo list the versions of a secret, use ListSecretVersionIds.
\nTo get the secret value from SecretString
or SecretBinary
, \n call GetSecretValue.
For information about finding secrets in the console, see Enhanced search capabilities \n for secrets in Secrets Manager.
\n\n Required permissions: \n secretsmanager:ListSecrets
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Lists the secrets that are stored by Secrets Manager in the Amazon Web Services account, not including secrets \n that are marked for deletion. To see secrets marked for deletion, use the Secrets Manager console.
\nTo list the versions of a secret, use ListSecretVersionIds.
\nTo get the secret value from SecretString
or SecretBinary
, \n call GetSecretValue.
For information about finding secrets in the console, see Enhanced search capabilities \n for secrets in Secrets Manager.
\n\n Required permissions: \n secretsmanager:ListSecrets
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Attaches a resource-based permission policy to a secret. A resource-based policy is \n optional. For more information, see Authentication and access control for Secrets Manager\n
\nFor information about attaching a policy in the console, see Attach a \n permissions policy to a secret.
\n\n Required permissions: \n secretsmanager:PutResourcePolicy
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Attaches a resource-based permission policy to a secret. A resource-based policy is \n optional. For more information, see Authentication and access control for Secrets Manager\n
\nFor information about attaching a policy in the console, see Attach a \n permissions policy to a secret.
\n\n Required permissions: \n secretsmanager:PutResourcePolicy
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret to attach the resource-based policy.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret to attach the resource-based policy.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -1399,7 +1399,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a new version with a new encrypted secret value and attaches it to the secret. The \n version can contain a new SecretString
value or a new SecretBinary
value.
We recommend you avoid calling PutSecretValue
at a sustained rate of more than \n once every 10 minutes. When you update the secret value, Secrets Manager creates a new version \n of the secret. Secrets Manager removes outdated versions when there are more than 100, but it does not \n remove versions created less than 24 hours ago. If you call PutSecretValue
more \n than once every 10 minutes, you create more versions than Secrets Manager removes, and you will reach \n the quota for secret versions.
You can specify the staging labels to attach to the new version in VersionStages
. \n If you don't include VersionStages
, then Secrets Manager automatically\n moves the staging label AWSCURRENT
to this version. If this operation creates \n the first version for the secret, then Secrets Manager\n automatically attaches the staging label AWSCURRENT
to it .
If this operation moves the staging label AWSCURRENT
from another version to this\n version, then Secrets Manager also automatically moves the staging label AWSPREVIOUS
to\n the version that AWSCURRENT
was removed from.
This operation is idempotent. If a version with a VersionId
with the same\n value as the ClientRequestToken
parameter already exists, and you specify the\n same secret data, the operation succeeds but does nothing. However, if the secret data is\n different, then the operation fails because you can't modify an existing version; you can\n only create new ones.
\n Required permissions: \n secretsmanager:PutSecretValue
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Creates a new version with a new encrypted secret value and attaches it to the secret. The \n version can contain a new SecretString
value or a new SecretBinary
value.
We recommend you avoid calling PutSecretValue
at a sustained rate of more than \n once every 10 minutes. When you update the secret value, Secrets Manager creates a new version \n of the secret. Secrets Manager removes outdated versions when there are more than 100, but it does not \n remove versions created less than 24 hours ago. If you call PutSecretValue
more \n than once every 10 minutes, you create more versions than Secrets Manager removes, and you will reach \n the quota for secret versions.
You can specify the staging labels to attach to the new version in VersionStages
. \n If you don't include VersionStages
, then Secrets Manager automatically\n moves the staging label AWSCURRENT
to this version. If this operation creates \n the first version for the secret, then Secrets Manager\n automatically attaches the staging label AWSCURRENT
to it.
If this operation moves the staging label AWSCURRENT
from another version to this\n version, then Secrets Manager also automatically moves the staging label AWSPREVIOUS
to\n the version that AWSCURRENT
was removed from.
This operation is idempotent. If a version with a VersionId
with the same\n value as the ClientRequestToken
parameter already exists, and you specify the\n same secret data, the operation succeeds but does nothing. However, if the secret data is\n different, then the operation fails because you can't modify an existing version; you can\n only create new ones.
\n Required permissions: \n secretsmanager:PutSecretValue
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret to add a new version to.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
\nIf the secret doesn't already exist, use CreateSecret
instead.
The ARN or name of the secret to add a new version to.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
\nIf the secret doesn't already exist, use CreateSecret
instead.
For a secret that is replicated to other Regions, deletes the secret replicas from the Regions you specify.
\n\n Required permissions: \n secretsmanager:RemoveRegionsFromReplication
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
For a secret that is replicated to other Regions, deletes the secret replicas from the Regions you specify.
\n\n Required permissions: \n secretsmanager:RemoveRegionsFromReplication
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Replicates the secret to a new Regions. See Multi-Region secrets.
\n\n Required permissions: \n secretsmanager:ReplicateSecretToRegions
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Replicates the secret to new Regions. See Multi-Region secrets.
\n\n Required permissions: \n secretsmanager:ReplicateSecretToRegions
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Cancels the scheduled deletion of a secret by removing the DeletedDate
time\n stamp. You can access a secret again after it has been restored.
\n Required permissions: \n secretsmanager:RestoreSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Cancels the scheduled deletion of a secret by removing the DeletedDate
time\n stamp. You can access a secret again after it has been restored.
\n Required permissions: \n secretsmanager:RestoreSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret to restore.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret to restore.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } } @@ -1801,7 +1801,7 @@ } ], "traits": { - "smithy.api#documentation": "Configures and starts the asynchronous process of rotating the secret.
\nIf you include the\n configuration parameters, the operation sets the values for the secret and then immediately\n starts a rotation. If you don't include the configuration parameters, the operation starts a\n rotation with the values already stored in the secret. For more information about rotation, \n see Rotate secrets.
\nTo configure rotation, you include the ARN of an Amazon Web Services Lambda function and the schedule \n for the rotation. The Lambda rotation function creates a new\n version of the secret and creates or updates the credentials on the database or service to\n match. After testing the new credentials, the function marks the new secret version with the staging\n label AWSCURRENT
. Then anyone who retrieves the secret gets the new version. For more\n information, see How rotation works.
When rotation is successful, the AWSPENDING
staging label might be attached to the same \n version as the AWSCURRENT
version, or it might not be attached to any version.
If the AWSPENDING
staging label is present but not attached to the same version as\n AWSCURRENT
, then any later invocation of RotateSecret
assumes that a previous\n rotation request is still in progress and returns an error.
\n Required permissions: \n secretsmanager:RotateSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager. You also need lambda:InvokeFunction
permissions on the rotation function. \n For more information, see \n Permissions for rotation.
Configures and starts the asynchronous process of rotating the secret. For more information about rotation, \n see Rotate secrets.
\nIf you include the\n configuration parameters, the operation sets the values for the secret and then immediately\n starts a rotation. If you don't include the configuration parameters, the operation starts a\n rotation with the values already stored in the secret.
\nFor database credentials you want to rotate, for Secrets Manager to be able to rotate the secret, you must \n make sure the secret value is in the\n JSON structure\n of a database secret. In particular, if you want to use the alternating users strategy, your secret must contain the ARN of a superuser\n secret.
\n \nTo configure rotation, you also need the ARN of an Amazon Web Services Lambda function and the schedule \n for the rotation. The Lambda rotation function creates a new\n version of the secret and creates or updates the credentials on the database or service to\n match. After testing the new credentials, the function marks the new secret version with the staging\n label AWSCURRENT
. Then anyone who retrieves the secret gets the new version. For more\n information, see How rotation works.
You can create the Lambda rotation function based on the rotation function templates that Secrets Manager provides. Choose \n a template that matches your Rotation strategy.
\nWhen rotation is successful, the AWSPENDING
staging label might be attached\n to the same version as the AWSCURRENT
version, or it might not be attached to any\n version. If the AWSPENDING
staging label is present but not attached to the same\n version as AWSCURRENT
, then any later invocation of RotateSecret
\n assumes that a previous rotation request is still in progress and returns an error.
\n Required permissions: \n secretsmanager:RotateSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager. You also need lambda:InvokeFunction
permissions on the rotation function. \n For more information, see \n Permissions for rotation.
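The AWSPENDING/AWSCURRENT precondition described above can be sketched as a small pure-Python check (a hypothetical helper for illustration, not part of any AWS SDK): RotateSecret treats an AWSPENDING label attached to a different version than AWSCURRENT as a rotation still in progress.

```python
def rotation_in_progress(label_to_version):
    """Mirror the RotateSecret precondition: a rotation is considered
    still in progress when AWSPENDING exists but is attached to a
    different version than AWSCURRENT."""
    pending = label_to_version.get("AWSPENDING")
    current = label_to_version.get("AWSCURRENT")
    # AWSPENDING absent, or attached to the same version as AWSCURRENT,
    # means a new rotation may start.
    return pending is not None and pending != current

# A dangling AWSPENDING blocks a new rotation:
print(rotation_in_progress({"AWSCURRENT": "v1", "AWSPENDING": "v2"}))  # True
# AWSPENDING on the same version (or absent) does not:
print(rotation_in_progress({"AWSCURRENT": "v1", "AWSPENDING": "v1"}))  # False
```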
The ARN or name of the secret to rotate.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret to rotate.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -2221,7 +2221,7 @@ } ], "traits": { - "smithy.api#documentation": "Removes the link between the replica secret and the primary secret and promotes the replica to a primary secret in the replica Region.
\nYou must call this operation from the Region in which you want to promote the replica to a primary secret.
\n\n Required permissions: \n secretsmanager:StopReplicationToReplica
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Removes the link between the replica secret and the primary secret and promotes the replica to a primary secret in the replica Region.
\nYou must call this operation from the Region in which you want to promote the replica to a primary secret.
\n\n Required permissions: \n secretsmanager:StopReplicationToReplica
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Attaches tags to a secret. Tags consist of a key name and a value. Tags are part of the \n secret's metadata. They are not associated with specific versions of the secret. This operation appends tags to the existing list of tags.
\nThe following restrictions apply to tags:
\nMaximum number of tags per secret: 50
\nMaximum key length: 127 Unicode characters in UTF-8
\nMaximum value length: 255 Unicode characters in UTF-8
\nTag keys and values are case sensitive.
\nDo not use the aws:
prefix in your tag names or values because Amazon Web Services reserves it\n for Amazon Web Services use. You can't edit or delete tag names or values with this \n prefix. Tags with this prefix do not count against your tags per secret limit.
If you use your tagging schema across multiple services and resources,\n other services might have restrictions on allowed characters. Generally\n allowed characters: letters, spaces, and numbers representable in UTF-8, plus the\n following special characters: + - = . _ : / @.
\nIf you use tags as part of your security strategy, then adding or removing a tag can\n change permissions. If successfully completing this operation would result in you losing\n your permissions for this secret, then the operation is blocked and returns an Access Denied\n error.
\n\n Required permissions: \n secretsmanager:TagResource
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Attaches tags to a secret. Tags consist of a key name and a value. Tags are part of the \n secret's metadata. They are not associated with specific versions of the secret. This operation appends tags to the existing list of tags.
\nThe following restrictions apply to tags:
\nMaximum number of tags per secret: 50
\nMaximum key length: 127 Unicode characters in UTF-8
\nMaximum value length: 255 Unicode characters in UTF-8
\nTag keys and values are case sensitive.
\nDo not use the aws:
prefix in your tag names or values because Amazon Web Services reserves it\n for Amazon Web Services use. You can't edit or delete tag names or values with this \n prefix. Tags with this prefix do not count against your tags per secret limit.
If you use your tagging schema across multiple services and resources,\n other services might have restrictions on allowed characters. Generally\n allowed characters: letters, spaces, and numbers representable in UTF-8, plus the\n following special characters: + - = . _ : / @.
\nIf you use tags as part of your security strategy, then adding or removing a tag can\n change permissions. If successfully completing this operation would result in you losing\n your permissions for this secret, then the operation is blocked and returns an Access Denied\n error.
\n\n Required permissions: \n secretsmanager:TagResource
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
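The tag restrictions listed above are easy to check client-side before calling TagResource. A minimal sketch (the helper and its return values are illustrative, not part of any SDK):

```python
def validate_secret_tags(tags):
    """Check the documented tag limits: at most 50 tags per secret,
    keys up to 127 characters, values up to 255 characters, and no
    reserved 'aws:' prefix in keys or values."""
    if len(tags) > 50:
        return "maximum number of tags per secret is 50"
    for tag in tags:
        key, value = tag["Key"], tag.get("Value", "")
        if len(key) > 127 or len(value) > 255:
            return "tag key or value too long"
        if key.startswith("aws:") or value.startswith("aws:"):
            return "the 'aws:' prefix is reserved"
    return None  # tags satisfy the documented limits

print(validate_secret_tags([{"Key": "Owner", "Value": "DbAdmin"}]))      # None
print(validate_secret_tags([{"Key": "aws:stage", "Value": "Test"}]))
```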
The identifier for the secret to attach tags to. You can specify either the\n Amazon Resource Name (ARN) or the friendly name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The identifier for the secret to attach tags to. You can specify either the\n Amazon Resource Name (ARN) or the friendly name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -2347,6 +2350,9 @@ "input": { "target": "com.amazonaws.secretsmanager#UntagResourceRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.secretsmanager#InternalServiceError" @@ -2362,7 +2368,7 @@ } ], "traits": { - "smithy.api#documentation": "Removes specific tags from a secret.
\nThis operation is idempotent. If a requested tag is not attached to the secret, no error\n is returned and the secret metadata is unchanged.
\nIf you use tags as part of your security strategy, then removing a tag can change\n permissions. If successfully completing this operation would result in you losing your\n permissions for this secret, then the operation is blocked and returns an Access Denied\n error.
\n\n Required permissions: \n secretsmanager:UntagResource
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Removes specific tags from a secret.
\nThis operation is idempotent. If a requested tag is not attached to the secret, no error\n is returned and the secret metadata is unchanged.
\nIf you use tags as part of your security strategy, then removing a tag can change\n permissions. If successfully completing this operation would result in you losing your\n permissions for this secret, then the operation is blocked and returns an Access Denied\n error.
\n\n Required permissions: \n secretsmanager:UntagResource
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The ARN or name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -2425,7 +2431,7 @@ } ], "traits": { - "smithy.api#documentation": "Modifies the details of a secret, including metadata and the secret value. To change the secret value, you can also use PutSecretValue.
\nTo change the rotation configuration of a secret, use RotateSecret instead.
\n \nWe recommend you avoid calling UpdateSecret
at a sustained rate of more than \n once every 10 minutes. When you call UpdateSecret
to update the secret value, Secrets Manager creates a new version \n of the secret. Secrets Manager removes outdated versions when there are more than 100, but it does not \n remove versions created less than 24 hours ago. If you update the secret value more \n than once every 10 minutes, you create more versions than Secrets Manager removes, and you will reach \n the quota for secret versions.
If you include SecretString
or SecretBinary
to create a new\n secret version, Secrets Manager automatically attaches the staging label AWSCURRENT
to the new\n version.
If you call this operation with a VersionId
that matches an existing version's \n ClientRequestToken
, the operation results in an error. You can't modify an existing \n version, you can only create a new version. To remove a version, remove all staging labels from it. See \n UpdateSecretVersionStage.
If you don't specify an KMS encryption key, Secrets Manager uses the Amazon Web Services managed key \n aws/secretsmanager
. If this key doesn't already exist in your account, then Secrets Manager \n creates it for you automatically. All users and roles in the Amazon Web Services account automatically have access \n to use aws/secretsmanager
. Creating aws/secretsmanager
can result in a one-time \n significant delay in returning the result.
If the secret is in a different Amazon Web Services account from the credentials calling the API, then you can't \n use aws/secretsmanager
to encrypt the secret, and you must create and use a customer managed key.
\n Required permissions: \n secretsmanager:UpdateSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager. \n If you use a customer managed key, you must also have kms:GenerateDataKey
and \n kms:Decrypt
permissions on the key. For more information, see \n Secret encryption and decryption.
Modifies the details of a secret, including metadata and the secret value. To change the secret value, you can also use PutSecretValue.
\nTo change the rotation configuration of a secret, use RotateSecret instead.
\n \nWe recommend you avoid calling UpdateSecret
at a sustained rate of more than \n once every 10 minutes. When you call UpdateSecret
to update the secret value, Secrets Manager creates a new version \n of the secret. Secrets Manager removes outdated versions when there are more than 100, but it does not \n remove versions created less than 24 hours ago. If you update the secret value more \n than once every 10 minutes, you create more versions than Secrets Manager removes, and you will reach \n the quota for secret versions.
If you include SecretString
or SecretBinary
to create a new\n secret version, Secrets Manager automatically attaches the staging label AWSCURRENT
to the new\n version.
If you call this operation with a VersionId
that matches an existing version's \n ClientRequestToken
, the operation results in an error. You can't modify an existing \n version, you can only create a new version. To remove a version, remove all staging labels from it. See \n UpdateSecretVersionStage.
If you don't specify an KMS encryption key, Secrets Manager uses the Amazon Web Services managed key \n aws/secretsmanager
. If this key doesn't already exist in your account, then Secrets Manager \n creates it for you automatically. All users and roles in the Amazon Web Services account automatically have access \n to use aws/secretsmanager
. Creating aws/secretsmanager
can result in a one-time \n significant delay in returning the result.
If the secret is in a different Amazon Web Services account from the credentials calling the API, then you can't \n use aws/secretsmanager
to encrypt the secret, and you must create and use a customer managed key.
\n Required permissions: \n secretsmanager:UpdateSecret
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager. \n If you use a customer managed key, you must also have kms:GenerateDataKey
and \n kms:Decrypt
permissions on the key. For more information, see \n Secret encryption and decryption.
The ARN or name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or name of the secret.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -2520,7 +2526,7 @@ } ], "traits": { - "smithy.api#documentation": "Modifies the staging labels attached to a version of a secret. Secrets Manager uses staging labels to\n track a version as it progresses through the secret rotation process. Each staging label can be \n attached to only one version at a time. To add a staging label to a version when it is already \n attached to another version, Secrets Manager first removes it from the other version first and\n then attaches it to this one. For more information about versions and staging labels, see Concepts: Version.
\nThe staging labels that you specify in the VersionStage
parameter are added\n to the existing list of staging labels for the version.
You can move the AWSCURRENT
staging label to this version by including it in this\n call.
Whenever you move AWSCURRENT
, Secrets Manager automatically moves the label AWSPREVIOUS
\n to the version that AWSCURRENT
was removed from.
If this action results in the last label being removed from a version, then the version is\n considered to be 'deprecated' and can be deleted by Secrets Manager.
\n\n Required permissions: \n secretsmanager:UpdateSecretVersionStage
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Modifies the staging labels attached to a version of a secret. Secrets Manager uses staging labels to\n track a version as it progresses through the secret rotation process. Each staging label can be \n attached to only one version at a time. To add a staging label to a version when it is already \n attached to another version, Secrets Manager first removes it from the other version first and\n then attaches it to this one. For more information about versions and staging labels, see Concepts: Version.
\nThe staging labels that you specify in the VersionStage
parameter are added\n to the existing list of staging labels for the version.
You can move the AWSCURRENT
staging label to this version by including it in this\n call.
Whenever you move AWSCURRENT
, Secrets Manager automatically moves the label AWSPREVIOUS
\n to the version that AWSCURRENT
was removed from.
If this action results in the last label being removed from a version, then the version is\n considered to be 'deprecated' and can be deleted by Secrets Manager.
\n\n Required permissions: \n secretsmanager:UpdateSecretVersionStage
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
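The AWSCURRENT/AWSPREVIOUS bookkeeping described above can be modeled with a tiny pure-Python sketch (illustrative only; Secrets Manager performs this server-side when you call UpdateSecretVersionStage):

```python
def move_awscurrent(labels, new_version):
    """Move AWSCURRENT to new_version; per the documented behavior,
    AWSPREVIOUS then points at the version AWSCURRENT was removed from."""
    updated = dict(labels)
    old_current = updated.get("AWSCURRENT")
    if old_current is not None and old_current != new_version:
        updated["AWSPREVIOUS"] = old_current
    updated["AWSCURRENT"] = new_version
    return updated

print(move_awscurrent({"AWSCURRENT": "v1"}, "v2"))
# {'AWSCURRENT': 'v2', 'AWSPREVIOUS': 'v1'}
```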
The ARN or the name of the secret with the version and staging labelsto modify.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN.
", + "smithy.api#documentation": "The ARN or the name of the secret with the version and staging labelsto modify.
\nFor an ARN, we recommend that you specify a complete ARN rather \n than a partial ARN. See Finding a secret from a partial ARN.
", "smithy.api#required": {} } }, @@ -2597,7 +2603,7 @@ } ], "traits": { - "smithy.api#documentation": "Validates that a resource policy does not grant a wide range of principals access to\n your secret. A resource-based policy is optional for secrets.
\nThe API performs three checks when validating the policy:
\nSends a call to Zelkova, an automated reasoning engine, to ensure your resource policy does not\n allow broad access to your secret, for example policies that use a wildcard for the principal.
\nChecks for correct syntax in a policy.
\nVerifies the policy does not lock out a caller.
\n\n Required permissions: \n secretsmanager:ValidateResourcePolicy
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
Validates that a resource policy does not grant a wide range of principals access to\n your secret. A resource-based policy is optional for secrets.
\nThe API performs three checks when validating the policy:
\nSends a call to Zelkova, an automated reasoning engine, to ensure your resource policy does not\n allow broad access to your secret, for example policies that use a wildcard for the principal.
\nChecks for correct syntax in a policy.
\nVerifies the policy does not lock out a caller.
\n\n Required permissions: \n secretsmanager:ValidateResourcePolicy
. \n For more information, see \n IAM policy actions for Secrets Manager and Authentication \n and access control in Secrets Manager.
The name of the product that generated the finding.
\nSecurity Hub populates this attribute automatically for each finding. You cannot update it using BatchImportFindings
or BatchUpdateFindings
. The exception to this is when you use a custom integration.
When you use the Security Hub console to filter findings by product name, you use this attribute.
\nWhen you use the Security Hub API to filter findings by product name, you use the aws/securityhub/ProductName
attribute under ProductFields
.
Security Hub does not synchronize those two attributes.
" + "smithy.api#documentation": "The name of the product that generated the finding.
\nSecurity Hub populates this attribute automatically for each finding. You cannot update this attribute with BatchImportFindings
or BatchUpdateFindings
. The exception to this is a custom integration.
When you use the Security Hub console or API to filter findings by product name, you use this attribute.
" } }, "CompanyName": { "target": "com.amazonaws.securityhub#NonEmptyString", "traits": { - "smithy.api#documentation": "The name of the company for the product that generated the finding.
\nSecurity Hub populates this attribute automatically for each finding. You cannot be updated using BatchImportFindings
or BatchUpdateFindings
. The exception to this is when you use a custom integration.
When you use the Security Hub console to filter findings by company name, you use this attribute.
\nWhen you use the Security Hub API to filter findings by company name, you use the aws/securityhub/CompanyName
attribute under ProductFields
.
Security Hub does not synchronize those two attributes.
" + "smithy.api#documentation": "The name of the company for the product that generated the finding.
\nSecurity Hub populates this attribute automatically for each finding. You cannot update this attribute with BatchImportFindings
or BatchUpdateFindings
. The exception to this is a custom integration.
When you use the Security Hub console or API to filter findings by company name, you use this attribute.
" } }, "Region": { @@ -12722,13 +12737,13 @@ "ProductName": { "target": "com.amazonaws.securityhub#StringFilterList", "traits": { - "smithy.api#documentation": "The name of the solution (product) that generates findings.
\nNote that this is a filter against the aws/securityhub/ProductName
field in ProductFields
. It is not a filter for the top-level ProductName
field.
The name of the solution (product) that generates findings.
" } }, "CompanyName": { "target": "com.amazonaws.securityhub#StringFilterList", "traits": { - "smithy.api#documentation": "The name of the findings provider (company) that owns the solution (product) that\n generates findings.
\nNote that this is a filter against the aws/securityhub/CompanyName
field in ProductFields
. It is not a filter for the top-level CompanyName
field.
The name of the findings provider (company) that owns the solution (product) that\n generates findings.
" } }, "UserDefinedFields": { @@ -14435,7 +14450,7 @@ } ], "traits": { - "smithy.api#documentation": "Used to enable finding aggregation. Must be called from the aggregation Region.
\nFor more details about cross-Region replication, see Configuring finding aggregation in the Security Hub User Guide.\n
", + "smithy.api#documentation": "Used to enable finding aggregation. Must be called from the aggregation Region.
\nFor more details about cross-Region replication, see Configuring finding aggregation in the Security Hub User Guide.\n
", "smithy.api#http": { "method": "POST", "uri": "/findingAggregator/create", @@ -15182,6 +15197,7 @@ "smithy.api#paginated": { "inputToken": "NextToken", "outputToken": "NextToken", + "items": "ActionTargets", "pageSize": "MaxResults" } } @@ -15345,6 +15361,12 @@ "traits": { "smithy.api#documentation": "Whether the maximum number of allowed member accounts are already associated with the\n Security Hub administrator account.
" } + }, + "AutoEnableStandards": { + "target": "com.amazonaws.securityhub#AutoEnableStandards", + "traits": { + "smithy.api#documentation": "Whether to automatically enable Security Hub default standards \n for new member accounts in the organization.
\nThe default value of this parameter is equal to DEFAULT
.
If equal to DEFAULT
, then Security Hub default standards are automatically enabled for new member \n accounts. If equal to NONE
, then default standards are not automatically enabled for new member \n accounts.
Security Hub provides you with a comprehensive view of the security state of your Amazon Web Services environment and resources. It also provides you with the readiness status\n of your environment based on controls from supported security standards. Security Hub collects\n security data from Amazon Web Services accounts, services, and integrated third-party products and helps\n you analyze security trends in your environment to identify the highest priority security\n issues. For more information about Security Hub, see the Security HubUser\n Guide\n .
\nWhen you use operations in the Security Hub API, the requests are executed only in the Amazon Web Services\n Region that is currently active or in the specific Amazon Web Services Region that you specify in your\n request. Any configuration or settings change that results from the operation is applied\n only to that Region. To make the same change in other Regions, execute the same command for\n each Region to apply the change to.
\nFor example, if your Region is set to us-west-2
, when you use CreateMembers
to add a member account to Security Hub, the association of\n the member account with the administrator account is created only in the us-west-2
\n Region. Security Hub must be enabled for the member account in the same Region that the invitation\n was sent from.
The following throttling limits apply to using Security Hub API operations.
\n\n BatchEnableStandards
- RateLimit
of 1\n request per second, BurstLimit
of 1 request per second.
\n GetFindings
- RateLimit
of 3 requests per second.\n BurstLimit
of 6 requests per second.
\n UpdateFindings
- RateLimit
of 1 request per\n second. BurstLimit
of 5 requests per second.
\n UpdateStandardsControl
- RateLimit
of\n 1 request per second, BurstLimit
of 5 requests per second.
All other operations - RateLimit
of 10 requests per second.\n BurstLimit
of 30 requests per second.
Security Hub provides you with a comprehensive view of the security state of your Amazon Web Services environment and resources. It also provides you with the readiness status\n of your environment based on controls from supported security standards. Security Hub collects\n security data from Amazon Web Services accounts, services, and integrated third-party products and helps\n you analyze security trends in your environment to identify the highest priority security\n issues. For more information about Security Hub, see the \n Security HubUser\n Guide\n .
\nWhen you use operations in the Security Hub API, the requests are executed only in the Amazon Web Services\n Region that is currently active or in the specific Amazon Web Services Region that you specify in your\n request. Any configuration or settings change that results from the operation is applied\n only to that Region. To make the same change in other Regions, execute the same command for\n each Region to apply the change to.
\nFor example, if your Region is set to us-west-2
, when you use CreateMembers
to add a member account to Security Hub, the association of\n the member account with the administrator account is created only in the us-west-2
\n Region. Security Hub must be enabled for the member account in the same Region that the invitation\n was sent from.
The following throttling limits apply to using Security Hub API operations.
\n\n BatchEnableStandards
- RateLimit
of 1\n request per second, BurstLimit
of 1 request per second.
\n GetFindings
- RateLimit
of 3 requests per second.\n BurstLimit
of 6 requests per second.
\n UpdateFindings
- RateLimit
of 1 request per\n second. BurstLimit
of 5 requests per second.
\n UpdateStandardsControl
- RateLimit
of\n 1 request per second, BurstLimit
of 5 requests per second.
All other operations - RateLimit
of 10 requests per second.\n BurstLimit
of 30 requests per second.
Whether to automatically enable Security Hub for new accounts in the organization.
\nBy default, this is false
, and new accounts are not added\n automatically.
To automatically enable Security Hub for new accounts, set this to true
.
Whether to automatically enable Security Hub default standards \n for new member accounts in the organization.
\nBy default, this parameter is equal to DEFAULT
, and new member accounts are automatically enabled with default Security Hub standards.
To opt out of enabling default standards for new member accounts, set this parameter equal to NONE
.
Adds or overwrites one or more tags for the specified resource. Tags are metadata that you\n can assign to your documents, managed nodes, maintenance windows, Parameter Store parameters, and\n patch baselines. Tags enable you to categorize your resources in different ways, for example, by\n purpose, owner, or environment. Each tag consists of a key and an optional value, both of which\n you define. For example, you could define a set of tags for your account's managed nodes that\n helps you track each node's owner and stack level. For example:
\n\n Key=Owner,Value=DbAdmin
\n
\n Key=Owner,Value=SysAdmin
\n
\n Key=Owner,Value=Dev
\n
\n Key=Stack,Value=Production
\n
\n Key=Stack,Value=Pre-Production
\n
\n Key=Stack,Value=Test
\n
Each resource can have a maximum of 50 tags.
\nWe recommend that you devise a set of tag keys that meets your needs for each resource type.\n Using a consistent set of tag keys makes it easier for you to manage your resources. You can\n search and filter the resources based on the tags you add. Tags don't have any semantic meaning\n to and are interpreted strictly as a string of characters.
\nFor more information about using tags with Amazon Elastic Compute Cloud (Amazon EC2) instances, see Tagging your Amazon EC2\n resources in the Amazon EC2 User Guide.
" + "smithy.api#documentation": "Adds or overwrites one or more tags for the specified resource. Tags are metadata that you\n can assign to your automations, documents, managed nodes, maintenance windows, Parameter Store parameters, and\n patch baselines. Tags enable you to categorize your resources in different ways, for example, by\n purpose, owner, or environment. Each tag consists of a key and an optional value, both of which\n you define. For example, you could define a set of tags for your account's managed nodes that\n helps you track each node's owner and stack level. For example:
\n\n Key=Owner,Value=DbAdmin
\n
\n Key=Owner,Value=SysAdmin
\n
\n Key=Owner,Value=Dev
\n
\n Key=Stack,Value=Production
\n
\n Key=Stack,Value=Pre-Production
\n
\n Key=Stack,Value=Test
\n
Most resources can have a maximum of 50 tags. Automations can have a maximum of 5 tags.
\nWe recommend that you devise a set of tag keys that meets your needs for each resource type.\n Using a consistent set of tag keys makes it easier for you to manage your resources. You can\n search and filter the resources based on the tags you add. Tags don't have any semantic meaning\n to and are interpreted strictly as a string of characters.
\nFor more information about using tags with Amazon Elastic Compute Cloud (Amazon EC2) instances, see Tagging your Amazon EC2\n resources in the Amazon EC2 User Guide.
" } }, "com.amazonaws.ssm#AddTagsToResourceRequest": { @@ -237,7 +237,7 @@ "ResourceId": { "target": "com.amazonaws.ssm#ResourceId", "traits": { - "smithy.api#documentation": "The resource ID you want to tag.
\nUse the ID of the resource. Here are some examples:
\n\n MaintenanceWindow
: mw-012345abcde
\n
\n PatchBaseline
: pb-012345abcde
\n
\n OpsMetadata
object: ResourceID
for tagging is created from the\n Amazon Resource Name (ARN) for the object. Specifically, ResourceID
is created from\n the strings that come after the word opsmetadata
in the ARN. For example, an\n OpsMetadata object with an ARN of\n arn:aws:ssm:us-east-2:1234567890:opsmetadata/aws/ssm/MyGroup/appmanager
has a\n ResourceID
of either aws/ssm/MyGroup/appmanager
or\n /aws/ssm/MyGroup/appmanager
.
For the Document
and Parameter
values, use the name of the\n resource.
\n ManagedInstance
: mi-012345abcde
\n
The ManagedInstance
type for this API operation is only for on-premises\n managed nodes. You must specify the name of the managed node in the following format:\n mi-ID_number\n
. For example,\n mi-1a2b3c4d5e6f
.
The resource ID you want to tag.
\nUse the ID of the resource. Here are some examples:
\n\n MaintenanceWindow
: mw-012345abcde
\n
\n PatchBaseline
: pb-012345abcde
\n
\n Automation
: example-c160-4567-8519-012345abcde
\n
\n OpsMetadata
object: ResourceID
for tagging is created from the\n Amazon Resource Name (ARN) for the object. Specifically, ResourceID
is created from\n the strings that come after the word opsmetadata
in the ARN. For example, an\n OpsMetadata object with an ARN of\n arn:aws:ssm:us-east-2:1234567890:opsmetadata/aws/ssm/MyGroup/appmanager
has a\n ResourceID
of either aws/ssm/MyGroup/appmanager
or\n /aws/ssm/MyGroup/appmanager
.
For the Document
and Parameter
values, use the name of the\n resource.
\n ManagedInstance
: mi-012345abcde
\n
The ManagedInstance
type for this API operation is only for on-premises\n managed nodes. You must specify the name of the managed node in the following format:\n mi-ID_number\n
. For example,\n mi-1a2b3c4d5e6f
.
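The OpsMetadata ResourceID derivation described above can be sketched in a few lines (a hypothetical helper; the ResourceID is simply the ARN suffix after the opsmetadata segment):

```python
def opsmetadata_resource_id(arn):
    """Return the taggable ResourceID for an OpsMetadata ARN: the
    strings that come after the word 'opsmetadata' (a leading '/'
    is also accepted by the service, per the documentation above)."""
    marker = ":opsmetadata/"
    if marker not in arn:
        raise ValueError("not an OpsMetadata ARN")
    return arn.split(marker, 1)[1]

arn = "arn:aws:ssm:us-east-2:1234567890:opsmetadata/aws/ssm/MyGroup/appmanager"
print(opsmetadata_resource_id(arn))  # aws/ssm/MyGroup/appmanager
```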
The association name.
" } + }, + "ScheduleOffset": { + "target": "com.amazonaws.ssm#ScheduleOffset", + "traits": { + "smithy.api#box": {}, + "smithy.api#documentation": "Number of days to wait after the scheduled day to run an association.
" + } } }, "traits": { @@ -1070,6 +1077,13 @@ "traits": { "smithy.api#documentation": "The combination of Amazon Web Services Regions and Amazon Web Services accounts where you want to run the\n association.
" } + }, + "ScheduleOffset": { + "target": "com.amazonaws.ssm#ScheduleOffset", + "traits": { + "smithy.api#box": {}, + "smithy.api#documentation": "Number of days to wait after the scheduled day to run an association.
" + } } }, "traits": { @@ -1771,6 +1785,13 @@ "traits": { "smithy.api#documentation": "The combination of Amazon Web Services Regions and Amazon Web Services accounts where you wanted to run the association\n when this association version was created.
" } + }, + "ScheduleOffset": { + "target": "com.amazonaws.ssm#ScheduleOffset", + "traits": { + "smithy.api#box": {}, + "smithy.api#documentation": "Number of days to wait after the scheduled day to run an association.
" + } } }, "traits": { @@ -4445,6 +4466,13 @@ "traits": { "smithy.api#documentation": "Use this action to create an association in multiple Regions and multiple accounts.
" } + }, + "ScheduleOffset": { + "target": "com.amazonaws.ssm#ScheduleOffset", + "traits": { + "smithy.api#box": {}, + "smithy.api#documentation": "Number of days to wait after the scheduled day to run an association.
" + } } }, "traits": { @@ -4567,6 +4595,13 @@ "traits": { "smithy.api#documentation": "A location is a combination of Amazon Web Services Regions and Amazon Web Services accounts where you want to run the\n association. Use this action to create an association in multiple Regions and multiple\n accounts.
" } + }, + "ScheduleOffset": { + "target": "com.amazonaws.ssm#ScheduleOffset", + "traits": { + "smithy.api#box": {}, + "smithy.api#documentation": "Number of days to wait after the scheduled day to run an association. For example, if you\n specified a cron schedule of cron(0 0 ? * THU#2 *)
, you could specify an offset of 3\n to run the association each Sunday after the second Thursday of the month. For more information about cron schedules for associations, see Reference: Cron and rate expressions for Systems Manager in the Amazon Web Services Systems Manager User Guide.
To use offsets, you must specify the ApplyOnlyAtCronInterval
parameter. This\n option tells the system not to run an association immediately after you create it.
The ID of the resource from which you want to remove tags. For example:
\nManagedInstance: mi-012345abcde
\nMaintenanceWindow: mw-012345abcde
\nPatchBaseline: pb-012345abcde
\nOpsMetadata object: ResourceID
for tagging is created from the Amazon Resource\n Name (ARN) for the object. Specifically, ResourceID
is created from the strings that\n come after the word opsmetadata
in the ARN. For example, an OpsMetadata object with\n an ARN of arn:aws:ssm:us-east-2:1234567890:opsmetadata/aws/ssm/MyGroup/appmanager
\n has a ResourceID
of either aws/ssm/MyGroup/appmanager
or\n /aws/ssm/MyGroup/appmanager
.
For the Document and Parameter values, use the name of the resource.
\nThe ManagedInstance
type for this API operation is only for on-premises\n managed nodes. Specify the name of the managed node in the following format: mi-ID_number. For\n example, mi-1a2b3c4d5e6f.
The ID of the resource from which you want to remove tags. For example:
\nManagedInstance: mi-012345abcde
\nMaintenanceWindow: mw-012345abcde
\n\n Automation
: example-c160-4567-8519-012345abcde
\n
PatchBaseline: pb-012345abcde
\nOpsMetadata object: ResourceID
for tagging is created from the Amazon Resource\n Name (ARN) for the object. Specifically, ResourceID
is created from the strings that\n come after the word opsmetadata
in the ARN. For example, an OpsMetadata object with\n an ARN of arn:aws:ssm:us-east-2:1234567890:opsmetadata/aws/ssm/MyGroup/appmanager
\n has a ResourceID
of either aws/ssm/MyGroup/appmanager
or\n /aws/ssm/MyGroup/appmanager
.
For the Document and Parameter values, use the name of the resource.
\nThe ManagedInstance
type for this API operation is only for on-premises\n managed nodes. Specify the name of the managed node in the following format: mi-ID_number. For\n example, mi-1a2b3c4d5e6f.
Optional metadata that you assign to a resource. You can specify a maximum of five tags for\n an automation. Tags enable you to categorize a resource in different ways, such as by purpose,\n owner, or environment. For example, you might want to tag an automation to identify an\n environment or operating system. In this case, you could specify the following key-value\n pairs:
\n\n Key=environment,Value=test
\n
\n Key=OS,Value=Windows
\n
To add tags to an existing patch baseline, use the AddTagsToResource\n operation.
\nOptional metadata that you assign to a resource. You can specify a maximum of five tags for\n an automation. Tags enable you to categorize a resource in different ways, such as by purpose,\n owner, or environment. For example, you might want to tag an automation to identify an\n environment or operating system. In this case, you could specify the following key-value\n pairs:
\n\n Key=environment,Value=test
\n
\n Key=OS,Value=Windows
\n
To add tags to an existing automation, use the AddTagsToResource\n operation.
\nA location is a combination of Amazon Web Services Regions and Amazon Web Services accounts where you want to run the\n association. Use this action to update an association in multiple Regions and multiple\n accounts.
" } + }, + "ScheduleOffset": { + "target": "com.amazonaws.ssm#ScheduleOffset", + "traits": { + "smithy.api#box": {}, + "smithy.api#documentation": "Number of days to wait after the scheduled day to run an association. For example, if you\n specified a cron schedule of cron(0 0 ? * THU#2 *)
, you could specify an offset of 3\n to run the association each Sunday after the second Thursday of the month. For more information about cron schedules for associations, see Reference: Cron and rate expressions for Systems Manager in the Amazon Web Services Systems Manager User Guide.
To use offsets, you must specify the ApplyOnlyAtCronInterval
parameter. This\n option tells the system not to run an association immediately after you create it.
Assigns a tape to a tape pool for archiving. The tape assigned to a pool is archived in\n the S3 storage class that is associated with the pool. When you use your backup application\n to eject the tape, the tape is archived directly into the S3 storage class (S3 Glacier or\n S3 Glacier Deep Archive) that corresponds to the pool.
\n\nValid Values: GLACIER
| DEEP_ARCHIVE
\n
Assigns a tape to a tape pool for archiving. The tape assigned to a pool is archived in\n the S3 storage class that is associated with the pool. When you use your backup application\n to eject the tape, the tape is archived directly into the S3 storage class (S3 Glacier or\n S3 Glacier Deep Archive) that corresponds to the pool.
" } }, "com.amazonaws.storagegateway#AssignTapePoolInput": { @@ -397,7 +397,7 @@ "PoolId": { "target": "com.amazonaws.storagegateway#PoolId", "traits": { - "smithy.api#documentation": "The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the S3 storage class that is associated with the pool. When you use\n your backup application to eject the tape, the tape is archived directly into the storage\n class (S3 Glacier or S3 Glacier Deep Archive) that corresponds to the pool.
\n\nValid Values: GLACIER
| DEEP_ARCHIVE
\n
The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the S3 storage class that is associated with the pool. When you use\n your backup application to eject the tape, the tape is archived directly into the storage\n class (S3 Glacier or S3 Glacier Deep Archive) that corresponds to the pool.
", "smithy.api#required": {} } }, @@ -647,7 +647,7 @@ "PoolId": { "target": "com.amazonaws.storagegateway#PoolId", "traits": { - "smithy.api#documentation": "The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the Amazon S3 storage class that is associated with the pool.\n When you use your backup application to eject the tape, the tape is archived directly into\n the storage class (S3 Glacier or S3 Glacier Deep Archive) that corresponds to the\n pool.
\n\nValid Values: GLACIER
| DEEP_ARCHIVE
\n
The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the Amazon S3 storage class that is associated with the pool.\n When you use your backup application to eject the tape, the tape is archived directly into\n the storage class (S3 Glacier or S3 Glacier Deep Archive) that corresponds to the\n pool.
", "smithy.api#required": {} } }, @@ -896,7 +896,7 @@ "VolumeUsedInBytes": { "target": "com.amazonaws.storagegateway#VolumeUsedInBytes", "traits": { - "smithy.api#documentation": "The size of the data stored on the volume in bytes. This value is calculated based on\n the number of blocks that are touched, instead of the actual amount of data written. This\n value can be useful for sequential write patterns but less accurate for random write\n patterns. VolumeUsedInBytes
is different from the compressed size of the\n volume, which is the value that is used to calculate your bill.
This value is not available for volumes created prior to May 13, 2015, until you\n store data on the volume.
\nThe size of the data stored on the volume in bytes. This value is calculated based on\n the number of blocks that are touched, instead of the actual amount of data written. This\n value can be useful for sequential write patterns but less accurate for random write\n patterns. VolumeUsedInBytes
is different from the compressed size of the\n volume, which is the value that is used to calculate your bill.
This value is not available for volumes created prior to May 13, 2015, until you\n store data on the volume.
\n\nIf you use a delete tool that overwrites the data on your volume with random data,\n your usage will not be reduced. This is because the random data is not compressible. If\n you want to reduce the amount of billed storage on your volume, we recommend overwriting\n your files with zeros to compress the data to a negligible amount of actual\n storage.
\nThe default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_INTELLIGENT_TIERING
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_STANDARD
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_INTELLIGENT_TIERING
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_STANDARD
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the S3 storage class that is associated with the pool. When you use\n your backup application to eject the tape, the tape is archived directly into the storage\n class (S3 Glacier or S3 Deep Archive) that corresponds to the pool.
\n\nValid Values: GLACIER
| DEEP_ARCHIVE
\n
The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the S3 storage class that is associated with the pool. When you use\n your backup application to eject the tape, the tape is archived directly into the storage\n class (S3 Glacier or S3 Deep Archive) that corresponds to the pool.
" } }, "Worm": { @@ -2078,7 +2078,7 @@ "PoolId": { "target": "com.amazonaws.storagegateway#PoolId", "traits": { - "smithy.api#documentation": "The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the S3 storage class that is associated with the pool. When you use\n your backup application to eject the tape, the tape is archived directly into the storage\n class (S3 Glacier or S3 Glacier Deep Archive) that corresponds to the pool.
\n\nValid Values: GLACIER
| DEEP_ARCHIVE
\n
The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the S3 storage class that is associated with the pool. When you use\n your backup application to eject the tape, the tape is archived directly into the storage\n class (S3 Glacier or S3 Glacier Deep Archive) that corresponds to the pool.
" } }, "Worm": { @@ -2427,7 +2427,7 @@ } ], "traits": { - "smithy.api#documentation": "Deletes a snapshot of a volume.
\n\nYou can take snapshots of your gateway volumes on a scheduled or ad hoc basis. This API\n action enables you to delete a snapshot schedule for a volume. For more information, see\n Backing up your\n volumes. In the DeleteSnapshotSchedule
request, you identify the\n volume by providing its Amazon Resource Name (ARN). This operation is only supported in\n stored and cached volume gateway types.
To list or delete a snapshot, you must use the Amazon EC2 API. For more information,\n go to DescribeSnapshots\n in the Amazon Elastic Compute Cloud API Reference.
\nDeletes a snapshot of a volume.
\n\nYou can take snapshots of your gateway volumes on a scheduled or ad hoc basis. This API\n action enables you to delete a snapshot schedule for a volume. For more information, see\n Backing up your\n volumes. In the DeleteSnapshotSchedule
request, you identify the\n volume by providing its Amazon Resource Name (ARN). This operation is only supported for\n cached volume gateway types.
To list or delete a snapshot, you must use the Amazon EC2 API. For more information,\n go to DescribeSnapshots\n in the Amazon Elastic Compute Cloud API Reference.
\nThe date on which the last software update was applied to the gateway. If the gateway\n has never been updated, this field does not return a value in the response.
" + "smithy.api#documentation": "The date on which the last software update was applied to the gateway. If the gateway\n has never been updated, this field does not return a value in the response. This only only\n exist and returns once it have been chosen and set by the SGW service, based on the OS\n version of the gateway VM
" } }, "Ec2InstanceId": { @@ -3152,7 +3152,7 @@ "CloudWatchLogGroupARN": { "target": "com.amazonaws.storagegateway#CloudWatchLogGroupARN", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Amazon CloudWatch log group that is used to\n monitor events in the gateway.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Amazon CloudWatch log group that is used to\n monitor events in the gateway. This field is only returned once it has been\n chosen and set by the Storage Gateway service, based on the OS version of the gateway VM.
" } }, "HostEnvironment": { @@ -6162,7 +6162,7 @@ "DefaultStorageClass": { "target": "com.amazonaws.storagegateway#StorageClass", "traits": { - "smithy.api#documentation": "The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_INTELLIGENT_TIERING
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_STANDARD
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
Sends you notification through CloudWatch Events when all files written to your file\n share have been uploaded to Amazon S3.
\n\nStorage Gateway can send a notification through Amazon CloudWatch Events when all\n files written to your file share up to that point in time have been uploaded to Amazon S3. These files include files written to the file share up to the time that you\n make a request for notification. When the upload is done, Storage Gateway sends you\n notification through an Amazon CloudWatch Event. You can configure CloudWatch Events to\n send the notification through event targets such as Amazon SNS or Lambda function. This operation is only supported for S3 File Gateways.
\n\n \n\nFor more information, see Getting file upload notification in the Storage Gateway User\n Guide.
" + "smithy.api#documentation": "Sends you notification through CloudWatch Events when all files written to your file\n share have been uploaded to S3. Amazon S3.
\n\nStorage Gateway can send a notification through Amazon CloudWatch Events when all\n files written to your file share up to that point in time have been uploaded to Amazon S3. These files include files written to the file share up to the time that you\n make a request for notification. When the upload is done, Storage Gateway sends you\n notification through an Amazon CloudWatch Event. You can configure CloudWatch Events to\n send the notification through event targets such as Amazon SNS or Lambda function. This operation is only supported for S3 File Gateways.
\n\n \n\nFor more information, see Getting file upload notification in the Storage Gateway User\n Guide.
" } }, "com.amazonaws.storagegateway#NotifyWhenUploadedInput": { @@ -6564,7 +6564,7 @@ } ], "traits": { - "smithy.api#documentation": "Refreshes the cached inventory of objects for the specified file share. This operation\n finds objects in the Amazon S3 bucket that were added, removed, or replaced since\n the gateway last listed the bucket's contents and cached the results. This operation\n does not import files into the S3 File Gateway cache storage. It only updates the cached\n inventory to reflect changes in the inventory of the objects in the S3 bucket. This\n operation is only supported in the S3 File Gateway types.
\nYou can subscribe to be notified through an Amazon CloudWatch event when your\n RefreshCache
operation completes. For more information, see Getting notified about file operations in the Storage Gateway\n User Guide. This operation is Only supported for S3 File Gateways.
When this API is called, it only initiates the refresh operation. When the API call\n completes and returns a success code, it doesn't necessarily mean that the file\n refresh has completed. You should use the refresh-complete notification to determine that\n the operation has completed before you check for new files on the gateway file share. You\n can subscribe to be notified through a CloudWatch event when your RefreshCache
\n operation completes.
Throttle limit: This API is asynchronous, so the gateway will accept no more than two\n refreshes at any time. We recommend using the refresh-complete CloudWatch event\n notification before issuing additional requests. For more information, see Getting notified about file operations in the Storage Gateway\n User Guide.
\n\nIf you invoke the RefreshCache API when two requests are already being processed, any\n new request will cause an InvalidGatewayRequestException
error because too\n many requests were sent to the server.
For more information, see Getting notified about file operations in the Storage Gateway\n User Guide.
" + "smithy.api#documentation": "Refreshes the cached inventory of objects for the specified file share. This operation\n finds objects in the Amazon S3 bucket that were added, removed, or replaced since\n the gateway last listed the bucket's contents and cached the results. This operation\n does not import files into the S3 File Gateway cache storage. It only updates the cached\n inventory to reflect changes in the inventory of the objects in the S3 bucket. This\n operation is only supported in the S3 File Gateway types.
\n\nYou can subscribe to be notified through an Amazon CloudWatch event when your\n RefreshCache
operation completes. For more information, see Getting notified about file operations in the Storage Gateway\n User Guide. This operation is only supported for S3 File Gateways.
When this API is called, it only initiates the refresh operation. When the API call\n completes and returns a success code, it doesn't necessarily mean that the file\n refresh has completed. You should use the refresh-complete notification to determine that\n the operation has completed before you check for new files on the gateway file share. You\n can subscribe to be notified through a CloudWatch event when your RefreshCache
\n operation completes.
Throttle limit: This API is asynchronous, so the gateway will accept no more than two\n refreshes at any time. We recommend using the refresh-complete CloudWatch event\n notification before issuing additional requests. For more information, see Getting notified about file operations in the Storage Gateway\n User Guide.
\n\nWait at least 60 seconds between consecutive RefreshCache API requests.
\nRefreshCache does not evict cache entries if invoked consecutively within 60\n seconds of a previous RefreshCache request.
\nIf you invoke the RefreshCache API when two requests are already being\n processed, any new request will cause an\n InvalidGatewayRequestException
error because too many requests\n were sent to the server.
The S3 bucket name does not need to be included when entering the list of folders in\n the FolderList parameter.
\nFor more information, see Getting notified about file operations in the Storage Gateway\n User Guide.
" } }, "com.amazonaws.storagegateway#RefreshCacheInput": { @@ -6911,7 +6911,7 @@ "DefaultStorageClass": { "target": "com.amazonaws.storagegateway#StorageClass", "traits": { - "smithy.api#documentation": "The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_INTELLIGENT_TIERING
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_STANDARD
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The ID of the pool that contains tapes that will be archived. The tapes in this pool are\n archived in the S3 storage class that is associated with the pool. When you use your backup\n application to eject the tape, the tape is archived directly into the storage class (S3\n Glacier or S3 Glacier Deep Archive) that corresponds to the pool.
\n\nValid Values: GLACIER
| DEEP_ARCHIVE
\n
The ID of the pool that contains tapes that will be archived. The tapes in this pool are\n archived in the S3 storage class that is associated with the pool. When you use your backup\n application to eject the tape, the tape is archived directly into the storage class (S3\n Glacier or S3 Glacier Deep Archive) that corresponds to the pool.
" } }, "Worm": { @@ -7935,7 +7935,7 @@ "min": 50, "max": 500 }, - "smithy.api#pattern": "^arn:(aws|aws-cn|aws-us-gov):storagegateway:[a-z\\-0-9]+:[0-9]+:tape\\/[0-9A-Z]{7,16}$" + "smithy.api#pattern": "^arn:(aws|aws-cn|aws-us-gov):storagegateway:[a-z\\-0-9]+:[0-9]+:tape\\/[0-9A-Z]{5,16}$" } }, "com.amazonaws.storagegateway#TapeARNs": { @@ -8004,7 +8004,7 @@ "PoolId": { "target": "com.amazonaws.storagegateway#PoolId", "traits": { - "smithy.api#documentation": "The ID of the pool that was used to archive the tape. The tapes in this pool are\n archived in the S3 storage class that is associated with the pool.
\n\nValid Values: GLACIER
| DEEP_ARCHIVE
\n
The ID of the pool that was used to archive the tape. The tapes in this pool are\n archived in the S3 storage class that is associated with the pool.
" } }, "Worm": { @@ -8043,7 +8043,7 @@ "type": "string", "traits": { "smithy.api#length": { - "min": 7, + "min": 5, "max": 16 }, "smithy.api#pattern": "^[A-Z0-9]*$" @@ -8104,7 +8104,7 @@ "PoolId": { "target": "com.amazonaws.storagegateway#PoolId", "traits": { - "smithy.api#documentation": "The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the S3 storage class that is associated with the pool. When you use\n your backup application to eject the tape, the tape is archived directly into the storage\n class (S3 Glacier or S3 Glacier Deep Archive) that corresponds to the pool.
\n\nValid Values: GLACIER
| DEEP_ARCHIVE
\n
The ID of the pool that you want to add your tape to for archiving. The tape in this\n pool is archived in the S3 storage class that is associated with the pool. When you use\n your backup application to eject the tape, the tape is archived directly into the storage\n class (S3 Glacier or S3 Glacier Deep Archive) that corresponds to the pool.
" } }, "RetentionStartDate": { @@ -8766,7 +8766,7 @@ "DefaultStorageClass": { "target": "com.amazonaws.storagegateway#StorageClass", "traits": { - "smithy.api#documentation": "The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_INTELLIGENT_TIERING
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_STANDARD
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_INTELLIGENT_TIERING
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
The default storage class for objects put into an Amazon S3 bucket by the S3\n File Gateway. The default value is S3_STANDARD
. Optional.
Valid Values: S3_STANDARD
| S3_INTELLIGENT_TIERING
|\n S3_STANDARD_IA
| S3_ONEZONE_IA
\n
Analyzes an input document for relationships between detected items.
\nThe types of information returned are as follows:
\nForm data (key-value pairs). The related information is returned in two Block objects, each of type KEY_VALUE_SET
: a KEY\n Block
object and a VALUE Block
object. For example,\n Name: Ana Silva Carolina contains a key and value.\n Name: is the key. Ana Silva Carolina is\n the value.
Table and table cell data. A TABLE Block
object contains information about a detected table. A CELL\n Block
object is returned for each cell in a table.
Lines and words of text. A LINE Block
object contains one or more WORD Block
objects.\n All lines and words that are detected in the document are returned (including text that doesn't have a\n relationship with the value of FeatureTypes
).
Selection elements such as check boxes and option buttons (radio buttons) can be detected in form data and in tables.\n A SELECTION_ELEMENT Block
object contains information about a selection element,\n including the selection status.
You can choose which type of analysis to perform by specifying the FeatureTypes
list. \n
The output is returned in a list of Block
objects.
\n AnalyzeDocument
is a synchronous operation. To analyze documents \n asynchronously, use StartDocumentAnalysis.
For more information, see Document Text Analysis.
" + "smithy.api#documentation": "Analyzes an input document for relationships between detected items.
\nThe types of information returned are as follows:
\nForm data (key-value pairs). The related information is returned in two Block objects, each of type KEY_VALUE_SET
: a KEY\n Block
object and a VALUE Block
object. For example,\n Name: Ana Silva Carolina contains a key and value.\n Name: is the key. Ana Silva Carolina is\n the value.
Table and table cell data. A TABLE Block
object contains information about a detected table. A CELL\n Block
object is returned for each cell in a table.
Lines and words of text. A LINE Block
object contains one or more WORD Block
objects.\n All lines and words that are detected in the document are returned (including text that doesn't have a\n relationship with the value of FeatureTypes
).
Queries.A QUERIES_RESULT Block object contains the answer to the query, the alias associated and an ID that \n connect it to the query asked. This Block also contains a location and attached confidence score.
\nSelection elements such as check boxes and option buttons (radio buttons) can be detected in form data and in tables.\n A SELECTION_ELEMENT Block
object contains information about a selection element,\n including the selection status.
You can choose which type of analysis to perform by specifying the FeatureTypes
list. \n
The output is returned in a list of Block
objects.
\n AnalyzeDocument
is a synchronous operation. To analyze documents \n asynchronously, use StartDocumentAnalysis.
For more information, see Document Text Analysis.
" } }, "com.amazonaws.textract#AnalyzeDocumentRequest": { @@ -94,7 +94,7 @@ "Document": { "target": "com.amazonaws.textract#Document", "traits": { - "smithy.api#documentation": "The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI\n to call Amazon Textract operations, you can't pass image bytes. The document must be an image \n in JPEG or PNG format.
\nIf you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode\n image bytes that are passed using the Bytes
field.
The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI\n to call Amazon Textract operations, you can't pass image bytes. The document must be an image \n in JPEG, PNG, PDF, or TIFF format.
\nIf you're using an AWS SDK to call Amazon Textract, you might not need to base64-encode\n image bytes that are passed using the Bytes
field.
Sets the configuration for the human in the loop workflow for analyzing documents.
" } + }, + "QueriesConfig": { + "target": "com.amazonaws.textract#QueriesConfig", + "traits": { + "smithy.api#documentation": "Contains Queries and the alias for those Queries, as determined by the input.
" + } } } }, @@ -246,7 +252,7 @@ } ], "traits": { - "smithy.api#documentation": "Analyzes identity documents for relevant information. This information is extracted\n and returned as IdentityDocumentFields
, which records both the normalized\n field and value of the extracted text.
Analyzes identity documents for relevant information. This information is extracted\n and returned as IdentityDocumentFields
, which records both the normalized\n field and value of the extracted text. Unlike other Amazon Textract operations, AnalyzeID
\n doesn't return any Geometry data.
The type of text item that's recognized. In operations for text detection, the following\n types are returned:
\n\n PAGE - Contains a list of the LINE Block
objects\n that are detected on a document page.
\n WORD - A word detected on a document page. A word is one or\n more ISO basic Latin script characters that aren't separated by spaces.
\n\n LINE - A string of tab-delimited, contiguous words that are\n detected on a document page.
\nIn text analysis operations, the following types are returned:
\n\n PAGE - Contains a list of child Block
objects\n that are detected on a document page.
\n KEY_VALUE_SET - Stores the KEY and VALUE Block
\n objects for linked text that's detected on a document page. Use the\n EntityType
field to determine if a KEY_VALUE_SET object is a KEY\n Block
object or a VALUE Block
object.
\n WORD - A word that's detected on a document page. A word is\n one or more ISO basic Latin script characters that aren't separated by spaces.
\n\n LINE - A string of tab-delimited, contiguous words that are\n detected on a document page.
\n\n TABLE - A table that's detected on a document page. A table\n is grid-based information with two or more rows or columns, with a cell span of one\n row and one column each.
\n\n CELL - A cell within a detected table. The cell is the parent\n of the block that contains the text in the cell.
\n\n SELECTION_ELEMENT - A selection element such as an option\n button (radio button) or a check box that's detected on a document page. Use the\n value of SelectionStatus
to determine the status of the selection\n element.
The type of text item that's recognized. In operations for text detection, the following\n types are returned:
\n\n PAGE - Contains a list of the LINE Block
objects\n that are detected on a document page.
\n WORD - A word detected on a document page. A word is one or\n more ISO basic Latin script characters that aren't separated by spaces.
\n\n LINE - A string of tab-delimited, contiguous words that are\n detected on a document page.
\nIn text analysis operations, the following types are returned:
\n\n PAGE - Contains a list of child Block
objects\n that are detected on a document page.
\n KEY_VALUE_SET - Stores the KEY and VALUE Block
\n objects for linked text that's detected on a document page. Use the\n EntityType
field to determine if a KEY_VALUE_SET object is a KEY\n Block
object or a VALUE Block
object.
\n WORD - A word that's detected on a document page. A word is\n one or more ISO basic Latin script characters that aren't separated by spaces.
\n\n LINE - A string of tab-delimited, contiguous words that are\n detected on a document page.
\n\n TABLE - A table that's detected on a document page. A table\n is grid-based information with two or more rows or columns, with a cell span of one\n row and one column each.
\n\n CELL - A cell within a detected table. The cell is the parent\n of the block that contains the text in the cell.
\n\n SELECTION_ELEMENT - A selection element such as an option\n button (radio button) or a check box that's detected on a document page. Use the\n value of SelectionStatus
to determine the status of the selection\n element.
\n QUERY - A question asked during the call of AnalyzeDocument. Contains an\n alias and an ID that attaches it to its answer.
\n\n QUERY_RESULT - A response to a question asked during the call\n of AnalyzeDocument. Comes with an alias and an ID for ease of locating it in the\n response. Also contains the location and confidence score of the answer.
\nThe page on which a block was detected. Page
is returned by asynchronous\n operations. Page values greater than 1 are only returned for multipage documents that are\n in PDF or TIFF format. A scanned image (JPEG/PNG), even if it contains multiple document pages, is\n considered to be a single-page document. The value of Page
is always 1.\n Synchronous operations don't return Page
because every input document is\n considered to be a single-page document.
Detects text in the input document. Amazon Textract can detect lines of text and the\n words that make up a line of text. The input document must be an image in JPEG or PNG\n format. DetectDocumentText
returns the detected text in an array of Block objects.
Each document page has as an associated Block
of type PAGE. Each PAGE Block
object\n is the parent of LINE Block
objects that represent the lines of detected text on a page. A LINE Block
object is\n a parent for each word that makes up the line. Words are represented by Block
objects of type WORD.
\n DetectDocumentText
is a synchronous operation. To analyze documents \n asynchronously, use StartDocumentTextDetection.
For more information, see Document Text Detection.
" + "smithy.api#documentation": "Detects text in the input document. Amazon Textract can detect lines of text and the\n words that make up a line of text. The input document must be an image in JPEG, PNG, PDF, or TIFF\n format. DetectDocumentText
returns the detected text in an array of Block objects.
Each document page has an associated Block
of type PAGE. Each PAGE Block
object\n is the parent of LINE Block
objects that represent the lines of detected text on a page. A LINE Block
object is\n a parent for each word that makes up the line. Words are represented by Block
objects of type WORD.
\n DetectDocumentText
is a synchronous operation. To analyze documents \n asynchronously, use StartDocumentTextDetection.
For more information, see Document Text Detection.
" } }, "com.amazonaws.textract#DetectDocumentTextRequest": { @@ -836,6 +856,10 @@ { "value": "FORMS", "name": "FORMS" + }, + { + "value": "QUERIES", + "name": "QUERIES" } ] } @@ -913,7 +937,7 @@ } ], "traits": { - "smithy.api#documentation": "Gets the results for an Amazon Textract asynchronous operation that analyzes text in a document.
\nYou start asynchronous text analysis by calling StartDocumentAnalysis, which returns a job identifier\n (JobId
). When the text analysis operation finishes, Amazon Textract publishes a\n completion status to the Amazon Simple Notification Service (Amazon SNS) topic that's registered in the initial call to\n StartDocumentAnalysis
. To get the results of the text-detection operation,\n first check that the status value published to the Amazon SNS topic is SUCCEEDED
.\n If so, call GetDocumentAnalysis
, and pass the job identifier\n (JobId
) from the initial call to StartDocumentAnalysis
.
\n GetDocumentAnalysis
returns an array of Block objects. The following\n types of information are returned:
Form data (key-value pairs). The related information is returned in two Block objects, each of type KEY_VALUE_SET
: a KEY\n Block
object and a VALUE Block
object. For example,\n Name: Ana Silva Carolina contains a key and value.\n Name: is the key. Ana Silva Carolina is\n the value.
Table and table cell data. A TABLE Block
object contains information about a detected table. A CELL\n Block
object is returned for each cell in a table.
Lines and words of text. A LINE Block
object contains one or more WORD Block
objects.\n All lines and words that are detected in the document are returned (including text that doesn't have a\n relationship with the value of the StartDocumentAnalysis
\n FeatureTypes
input parameter).
Selection elements such as check boxes and option buttons (radio buttons) can be detected in form data and in tables.\n A SELECTION_ELEMENT Block
object contains information about a selection element,\n including the selection status.
Use the MaxResults
parameter to limit the number of blocks that are\n returned. If there are more results than specified in MaxResults
, the value of\n NextToken
in the operation response contains a pagination token for getting\n the next set of results. To get the next page of results, call\n GetDocumentAnalysis
, and populate the NextToken
request\n parameter with the token value that's returned from the previous call to\n GetDocumentAnalysis
.
For more information, see Document Text Analysis.
" + "smithy.api#documentation": "Gets the results for an Amazon Textract asynchronous operation that analyzes text in a document.
\nYou start asynchronous text analysis by calling StartDocumentAnalysis, which returns a job identifier\n (JobId
). When the text analysis operation finishes, Amazon Textract publishes a\n completion status to the Amazon Simple Notification Service (Amazon SNS) topic that's registered in the initial call to\n StartDocumentAnalysis
. To get the results of the text-detection operation,\n first check that the status value published to the Amazon SNS topic is SUCCEEDED
.\n If so, call GetDocumentAnalysis
, and pass the job identifier\n (JobId
) from the initial call to StartDocumentAnalysis
.
\n GetDocumentAnalysis
returns an array of Block objects. The following\n types of information are returned:
Form data (key-value pairs). The related information is returned in two Block objects, each of type KEY_VALUE_SET
: a KEY\n Block
object and a VALUE Block
object. For example,\n Name: Ana Silva Carolina contains a key and value.\n Name: is the key. Ana Silva Carolina is\n the value.
Table and table cell data. A TABLE Block
object contains information about a detected table. A CELL\n Block
object is returned for each cell in a table.
Lines and words of text. A LINE Block
object contains one or more WORD Block
objects.\n All lines and words that are detected in the document are returned (including text that doesn't have a\n relationship with the value of the StartDocumentAnalysis
\n FeatureTypes
input parameter).
Queries. A QUERIES_RESULT Block object contains the answer to the query, the alias associated and an ID that \n connect it to the query asked. This Block also contains a location and attached confidence score
\nSelection elements such as check boxes and option buttons (radio buttons) can be detected in form data and in tables.\n A SELECTION_ELEMENT Block
object contains information about a selection element,\n including the selection status.
Use the MaxResults
parameter to limit the number of blocks that are\n returned. If there are more results than specified in MaxResults
, the value of\n NextToken
in the operation response contains a pagination token for getting\n the next set of results. To get the next page of results, call\n GetDocumentAnalysis
, and populate the NextToken
request\n parameter with the token value that's returned from the previous call to\n GetDocumentAnalysis
.
For more information, see Document Text Analysis.
" } }, "com.amazonaws.textract#GetDocumentAnalysisRequest": { @@ -1750,6 +1774,90 @@ "smithy.api#error": "client" } }, + "com.amazonaws.textract#Queries": { + "type": "list", + "member": { + "target": "com.amazonaws.textract#Query" + }, + "traits": { + "smithy.api#length": { + "min": 1 + } + } + }, + "com.amazonaws.textract#QueriesConfig": { + "type": "structure", + "members": { + "Queries": { + "target": "com.amazonaws.textract#Queries", + "traits": { + "smithy.api#documentation": "", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "" + } + }, + "com.amazonaws.textract#Query": { + "type": "structure", + "members": { + "Text": { + "target": "com.amazonaws.textract#QueryInput", + "traits": { + "smithy.api#documentation": "Question that Amazon Textract will apply to the document. An example would be \"What is the customer's SSN?\"
", + "smithy.api#required": {} + } + }, + "Alias": { + "target": "com.amazonaws.textract#QueryInput", + "traits": { + "smithy.api#documentation": "Alias attached to the query, for ease of location.
" + } + }, + "Pages": { + "target": "com.amazonaws.textract#QueryPages", + "traits": { + "smithy.api#documentation": "List of pages associated with the query. The following is a list of rules for using this parameter.
\nIf a page is not specified, it is set to [\"1\"]
by default.
The following characters are allowed in the parameter's string: \n 0 1 2 3 4 5 6 7 8 9 - *
. No whitespace is allowed.
When using *
to indicate all pages, it must be the only element\n in the string.
You can use page intervals, such as [“1-3”, “1-1”, “4-*”]
. Where *
indicates the last page of the\n document.
Specified pages must be greater than 0 and less than or equal to the number of pages in the document.
\nEach query contains the question you want to ask in the Text and the alias you want to associate.
" + } + }, + "com.amazonaws.textract#QueryInput": { + "type": "string", + "traits": { + "smithy.api#length": { + "min": 1, + "max": 200 + }, + "smithy.api#pattern": "^[a-zA-Z0-9\\s!\"\\#\\$%'&\\(\\)\\*\\+\\,\\-\\./:;=\\?@\\[\\\\\\]\\^_`\\{\\|\\}~><]+$" + } + }, + "com.amazonaws.textract#QueryPage": { + "type": "string", + "traits": { + "smithy.api#length": { + "min": 1, + "max": 9 + }, + "smithy.api#pattern": "^[0-9\\*\\-]+$" + } + }, + "com.amazonaws.textract#QueryPages": { + "type": "list", + "member": { + "target": "com.amazonaws.textract#QueryPage" + }, + "traits": { + "smithy.api#length": { + "min": 1 + } + } + }, "com.amazonaws.textract#Relationship": { "type": "structure", "members": { @@ -1799,6 +1907,10 @@ { "value": "TITLE", "name": "TITLE" + }, + { + "value": "ANSWER", + "name": "ANSWER" } ] } @@ -1990,6 +2102,9 @@ "traits": { "smithy.api#documentation": "The KMS key used to encrypt the inference results. This can be \n in either Key ID or Key Alias format. When a KMS key is provided, the \n KMS key will be used for server-side encryption of the objects in the \n customer bucket. When this parameter is not enabled, the result will \n be encrypted server side,using SSE-S3.
" } + }, + "QueriesConfig": { + "target": "com.amazonaws.textract#QueriesConfig" } } }, @@ -2327,7 +2442,7 @@ } }, "traits": { - "smithy.api#documentation": "The format of the input document isn't supported. Documents for synchronous operations can be in\n PNG or JPEG format only. Documents for asynchronous operations can be in PDF format.
", + "smithy.api#documentation": "The format of the input document isn't supported. Documents for operations can be in\n PNG, JPEG, PDF, or TIFF format.
", "smithy.api#error": "client" } }, diff --git a/aws/sdk/aws-models/transfer.json b/aws/sdk/aws-models/transfer.json index f132f531e4..227dddd613 100644 --- a/aws/sdk/aws-models/transfer.json +++ b/aws/sdk/aws-models/transfer.json @@ -179,7 +179,7 @@ "HomeDirectoryMappings": { "target": "com.amazonaws.transfer#HomeDirectoryMappings", "traits": { - "smithy.api#documentation": "Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should\n be visible to your user and how you want to make them visible. You must specify the\n Entry
and Target
pair, where Entry
shows how the path\n is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you\n only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity\n and Access Management (IAM) role provides access to paths in Target
. This value\n can only be set when HomeDirectoryType
is set to\n LOGICAL.
The following is an Entry
and Target
pair example.
\n [ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
In most cases, you can use this value instead of the session policy to lock down your\n user to the designated home directory (\"chroot
\"). To do this, you can set\n Entry
to /
and set Target
to the\n HomeDirectory
parameter value.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry:\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should\n be visible to your user and how you want to make them visible. You must specify the\n Entry
and Target
pair, where Entry
shows how the path\n is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you\n only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity\n and Access Management (IAM) role provides access to paths in Target
. This value\n can only be set when HomeDirectoryType
is set to\n LOGICAL.
The following is an Entry
and Target
pair example.
\n [ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
In most cases, you can use this value instead of the session policy to lock down your\n user to the designated home directory (\"chroot
\"). To do this, you can set\n Entry
to /
and set Target
to the\n HomeDirectory
parameter value.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should\n be visible to your user and how you want to make them visible. You must specify the\n Entry
and Target
pair, where Entry
shows how the path\n is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you\n only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity\n and Access Management (IAM) role provides access to paths in Target
. This value\n can only be set when HomeDirectoryType
is set to\n LOGICAL.
The following is an Entry
and Target
pair example.
\n [ { \"Entry\": \"/directory1\", \"Target\":\n \"/bucket_name/home/mydirectory\" } ]
\n
In most cases, you can use this value instead of the session policy to lock your user\n down to the designated home directory (\"chroot
\"). To do this, you can set\n Entry
to /
and set Target
to the HomeDirectory\n parameter value.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry:\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should\n be visible to your user and how you want to make them visible. You must specify the\n Entry
and Target
pair, where Entry
shows how the path\n is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you\n only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity\n and Access Management (IAM) role provides access to paths in Target
. This value\n can only be set when HomeDirectoryType
is set to\n LOGICAL.
The following is an Entry
and Target
pair example.
\n [ { \"Entry\": \"/directory1\", \"Target\":\n \"/bucket_name/home/mydirectory\" } ]
\n
In most cases, you can use this value instead of the session policy to lock your user\n down to the designated home directory (\"chroot
\"). To do this, you can set\n Entry
to /
and set Target
to the HomeDirectory\n parameter value.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Specifies the details for the steps that are in the specified workflow.
\n\n The TYPE
specifies which of the following actions is being taken for this step.\n
\n Copy: copy the file to another location
\n\n Custom: custom step with a lambda target
\n\n Delete: delete the file
\n\n Tag: add a tag to the file
\n\n Currently, copying and tagging are supported only on S3.\n
\n\n For file location, you specify either the S3 bucket and key, or the EFS filesystem ID and path.\n
", + "smithy.api#documentation": "Specifies the details for the steps that are in the specified workflow.
\n\n The TYPE
specifies which of the following actions is being taken for this step.\n
\n COPY: copy the file to another location
\n\n CUSTOM: custom step with a lambda target
\n\n DELETE: delete the file
\n\n TAG: add a tag to the file
\n\n Currently, copying and tagging are supported only on S3.\n
\n\n For file location, you specify either the S3 bucket and key, or the EFS filesystem ID and path.\n
", "smithy.api#required": {} } }, @@ -640,6 +640,9 @@ "input": { "target": "com.amazonaws.transfer#DeleteAccessRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.transfer#InternalServiceError" @@ -682,6 +685,9 @@ "input": { "target": "com.amazonaws.transfer#DeleteServerRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.transfer#AccessDeniedException" @@ -720,6 +726,9 @@ "input": { "target": "com.amazonaws.transfer#DeleteSshPublicKeyRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.transfer#InternalServiceError" @@ -792,6 +801,9 @@ "input": { "target": "com.amazonaws.transfer#DeleteUserRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.transfer#InternalServiceError" @@ -834,6 +846,9 @@ "input": { "target": "com.amazonaws.transfer#DeleteWorkflowRequest" }, + "output": { + "target": "smithy.api#Unit" + }, "errors": [ { "target": "com.amazonaws.transfer#AccessDeniedException" @@ -1888,7 +1903,7 @@ "StepType": { "target": "com.amazonaws.transfer#WorkflowStepType", "traits": { - "smithy.api#documentation": "One of the available step types.
\n\n Copy: copy the file to another location
\n\n Custom: custom step with a lambda target
\n\n Delete: delete the file
\n\n Tag: add a tag to the file
\nOne of the available step types.
\n\n COPY: copy the file to another location
\n\n CUSTOM: custom step with a lambda target
\n\n DELETE: delete the file
\n\n TAG: add a tag to the file
\nRepresents an object that contains entries and targets for\n HomeDirectoryMappings
.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry:\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Represents an object that contains entries and targets for\n HomeDirectoryMappings
.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should\n be visible to your user and how you want to make them visible. You must specify the\n Entry
and Target
pair, where Entry
shows how the path\n is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you\n only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity\n and Access Management (IAM) role provides access to paths in Target
. This value\n can only be set when HomeDirectoryType
is set to\n LOGICAL.
The following is an Entry
and Target
pair example.
\n [ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
In most cases, you can use this value instead of the session policy to lock down your\n user to the designated home directory (\"chroot
\"). To do this, you can set\n Entry
to /
and set Target
to the\n HomeDirectory
parameter value.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry:\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should\n be visible to your user and how you want to make them visible. You must specify the\n Entry
and Target
pair, where Entry
shows how the path\n is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you\n only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity\n and Access Management (IAM) role provides access to paths in Target
. This value\n can only be set when HomeDirectoryType
is set to\n LOGICAL.
The following is an Entry
and Target
pair example.
\n [ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
In most cases, you can use this value instead of the session policy to lock down your\n user to the designated home directory (\"chroot
\"). To do this, you can set\n Entry
to /
and set Target
to the\n HomeDirectory
parameter value.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should\n be visible to your user and how you want to make them visible. You must specify the\n Entry
and Target
pair, where Entry
shows how the path\n is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you\n only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity\n and Access Management (IAM) role provides access to paths in Target
. This value\n can only be set when HomeDirectoryType
is set to\n LOGICAL.
The following is an Entry
and Target
pair example.
\n [ { \"Entry\": \"/directory1\", \"Target\":\n \"/bucket_name/home/mydirectory\" } ]
\n
In most cases, you can use this value instead of the session policy to lock down your\n user to the designated home directory (\"chroot
\"). To do this, you can set\n Entry
to '/' and set Target
to the HomeDirectory\n parameter value.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry:\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should\n be visible to your user and how you want to make them visible. You must specify the\n Entry
and Target
pair, where Entry
shows how the path\n is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you\n only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity\n and Access Management (IAM) role provides access to paths in Target
. This value\n can only be set when HomeDirectoryType
is set to\n LOGICAL.
The following is an Entry
and Target
pair example.
\n [ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
In most cases, you can use this value instead of the session policy to lock down your\n user to the designated home directory (\"chroot
\"). To do this, you can set\n Entry
to '/' and set Target
to the HomeDirectory\n parameter value.
The following is an Entry
and Target
pair example for chroot
.
\n [ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]
\n
\n Currently, the following step types are supported.\n
\n\n Copy: copy the file to another location
\n\n Custom: custom step with a lambda target
\n\n Delete: delete the file
\n\n Tag: add a tag to the file
\n\n Currently, the following step types are supported.\n
\n\n COPY: copy the file to another location
\n\n CUSTOM: custom step with a lambda target
\n\n DELETE: delete the file
\n\n TAG: add a tag to the file
\nThe fields from the source that are made available to your agents in Wisdom.
\n For Salesforce, you must include at least Id
,\n ArticleNumber
, VersionNumber
, Title
,\n PublishStatus
, and IsDeleted
.
For ServiceNow, you must include at least number
,\n short_description
, sys_mod_count
, workflow_state
,\n and active
.
Make sure to include additional field(s); these are indexed and used to source\n recommendations.
", + "smithy.api#documentation": "The fields from the source that are made available to your agents in Wisdom.
\n For Salesforce, you must include at least Id
,\n ArticleNumber
, VersionNumber
, Title
,\n PublishStatus
, and IsDeleted
.
For ServiceNow, you must include at least number
,\n short_description
, sys_mod_count
, workflow_state
,\n and active
.
Make sure to include additional fields. These fields are indexed and used to source\n recommendations.
", "smithy.api#required": {} } } @@ -85,10 +85,7 @@ ], "traits": { "aws.api#arn": { - "template": "assistant/{assistantId}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "assistant/{assistantId}" }, "aws.cloudformation#cfnResource": {}, "aws.iam#disableConditionKeyInference": {} @@ -118,10 +115,7 @@ }, "traits": { "aws.api#arn": { - "template": "association/{assistantId}/{assistantAssociationId}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "association/{assistantId}/{assistantAssociationId}" }, "aws.cloudformation#cfnResource": {}, "aws.iam#disableConditionKeyInference": {} @@ -154,7 +148,7 @@ "assistantArn": { "target": "com.amazonaws.wisdom#Arn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant
", + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant.
", "smithy.api#required": {} } }, @@ -194,7 +188,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#Uuid", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base.
" + "smithy.api#documentation": "The identifier of the knowledge base.
" } } }, @@ -243,7 +237,7 @@ "assistantArn": { "target": "com.amazonaws.wisdom#Arn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant
", + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant.
", "smithy.api#required": {} } }, @@ -296,7 +290,7 @@ "assistantArn": { "target": "com.amazonaws.wisdom#Arn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant
", + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant.
", "smithy.api#required": {} } }, @@ -394,7 +388,7 @@ "assistantArn": { "target": "com.amazonaws.wisdom#Arn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant
", + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant.
", "smithy.api#required": {} } }, @@ -518,10 +512,7 @@ ], "traits": { "aws.api#arn": { - "template": "content/{knowledgeBaseId}/{contentId}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "content/{knowledgeBaseId}/{contentId}" }, "aws.iam#disableConditionKeyInference": {} } @@ -553,7 +544,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#Uuid", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base.
", + "smithy.api#documentation": "The identifier of the knowledge base.
", "smithy.api#required": {} } }, @@ -615,8 +606,7 @@ "target": "com.amazonaws.wisdom#Url", "traits": { "smithy.api#documentation": "The URL of the content.
", - "smithy.api#required": {}, - "smithy.api#sensitive": {} + "smithy.api#required": {} } }, "urlExpiry": { @@ -647,6 +637,12 @@ }, "value": { "target": "com.amazonaws.wisdom#NonEmptyString" + }, + "traits": { + "smithy.api#length": { + "min": 0, + "max": 10 + } } }, "com.amazonaws.wisdom#ContentReference": { @@ -661,7 +657,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#Uuid", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base.
" + "smithy.api#documentation": "The identifier of the knowledge base.
" } }, "contentArn": { @@ -743,7 +739,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#Uuid", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base.
", + "smithy.api#documentation": "The identifier of the knowledge base.
", "smithy.api#required": {} } }, @@ -826,7 +822,7 @@ "com.amazonaws.wisdom#ContentType": { "type": "string", "traits": { - "smithy.api#pattern": "^(text/(plain|html))|(application/x\\.wisdom-json;source=(salesforce|servicenow))$" + "smithy.api#pattern": "^(text/(plain|html))|(application/x\\.wisdom-json;source=(salesforce|servicenow|zendesk))$" } }, "com.amazonaws.wisdom#CreateAssistant": { @@ -854,9 +850,8 @@ "traits": { "smithy.api#documentation": "Creates an Amazon Connect Wisdom assistant.
", "smithy.api#http": { - "method": "POST", "uri": "/assistants", - "code": 200 + "method": "POST" }, "smithy.api#idempotent": {} } @@ -889,9 +884,8 @@ "traits": { "smithy.api#documentation": "Creates an association between an Amazon Connect Wisdom assistant and another resource. Currently, the\n only supported association is with a knowledge base. An assistant can have only a single\n association.
", "smithy.api#http": { - "method": "POST", "uri": "/assistants/{assistantId}/associations", - "code": 200 + "method": "POST" }, "smithy.api#idempotent": {} } @@ -1030,9 +1024,8 @@ "traits": { "smithy.api#documentation": "Creates Wisdom content. Before to calling this API, use StartContentUpload to\n upload an asset.
", "smithy.api#http": { - "method": "POST", "uri": "/knowledgeBases/{knowledgeBaseId}/contents", - "code": 200 + "method": "POST" }, "smithy.api#idempotent": {} } @@ -1043,7 +1036,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -1131,9 +1124,8 @@ "traits": { "smithy.api#documentation": "Creates a knowledge base.
\nWhen using this API, you cannot reuse Amazon AppIntegrations\n DataIntegrations with external knowledge bases such as Salesforce and ServiceNow. If you do,\n you'll get an InvalidRequestException
error.
For example, you're programmatically managing your external knowledge base, and you want\n to add or remove one of the fields that is being ingested from Salesforce. Do the\n following:
\nCall DeleteKnowledgeBase.
\nCall DeleteDataIntegration.
\nCall CreateDataIntegration to recreate the DataIntegration or a create different\n one.
\nCall CreateKnowledgeBase.
\nCreates a session. A session is a contextual container used for generating\n recommendations. Amazon Connect creates a new Wisdom session for each contact on which Wisdom is\n enabled.
", "smithy.api#http": { - "method": "POST", "uri": "/assistants/{assistantId}/sessions", - "code": 200 + "method": "POST" }, "smithy.api#idempotent": {} } @@ -1306,8 +1297,8 @@ "traits": { "smithy.api#documentation": "Deletes an assistant.
", "smithy.api#http": { - "method": "DELETE", "uri": "/assistants/{assistantId}", + "method": "DELETE", "code": 204 }, "smithy.api#idempotent": {} @@ -1335,8 +1326,8 @@ "traits": { "smithy.api#documentation": "Deletes an assistant association.
", "smithy.api#http": { - "method": "DELETE", "uri": "/assistants/{assistantId}/associations/{assistantAssociationId}", + "method": "DELETE", "code": 204 }, "smithy.api#idempotent": {} @@ -1406,8 +1397,8 @@ "traits": { "smithy.api#documentation": "Deletes the content.
", "smithy.api#http": { - "method": "DELETE", "uri": "/knowledgeBases/{knowledgeBaseId}/contents/{contentId}", + "method": "DELETE", "code": 204 }, "smithy.api#idempotent": {} @@ -1419,7 +1410,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -1455,13 +1446,16 @@ }, { "target": "com.amazonaws.wisdom#ResourceNotFoundException" + }, + { + "target": "com.amazonaws.wisdom#ValidationException" } ], "traits": { "smithy.api#documentation": "Deletes the knowledge base.
\nWhen you use this API to delete an external knowledge base such as Salesforce or\n ServiceNow, you must also delete the Amazon AppIntegrations DataIntegration.\n This is because you can't reuse the DataIntegration after it's been associated with an\n external knowledge base. However, you can delete and recreate it. See DeleteDataIntegration and CreateDataIntegration in the Amazon AppIntegrations API\n Reference.
\nText in the document.
", - "smithy.api#sensitive": {} + "smithy.api#documentation": "Text in the document.
" } }, "highlights": { @@ -1631,9 +1624,8 @@ "traits": { "smithy.api#documentation": "Retrieves information about an assistant.
", "smithy.api#http": { - "method": "GET", "uri": "/assistants/{assistantId}", - "code": 200 + "method": "GET" }, "smithy.api#readonly": {} } @@ -1660,9 +1652,8 @@ "traits": { "smithy.api#documentation": "Retrieves information about an assistant association.
", "smithy.api#http": { - "method": "GET", "uri": "/assistants/{assistantId}/associations/{assistantAssociationId}", - "code": 200 + "method": "GET" }, "smithy.api#readonly": {} } @@ -1745,9 +1736,8 @@ "traits": { "smithy.api#documentation": "Retrieves content, including a pre-signed URL to download the content.
", "smithy.api#http": { - "method": "GET", "uri": "/knowledgeBases/{knowledgeBaseId}/contents/{contentId}", - "code": 200 + "method": "GET" }, "smithy.api#readonly": {} } @@ -1766,7 +1756,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -1806,9 +1796,8 @@ "traits": { "smithy.api#documentation": "Retrieves summary information about the content.
", "smithy.api#http": { - "method": "GET", "uri": "/knowledgeBases/{knowledgeBaseId}/contents/{contentId}/summary", - "code": 200 + "method": "GET" }, "smithy.api#readonly": {} } @@ -1827,7 +1816,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -1867,9 +1856,8 @@ "traits": { "smithy.api#documentation": "Retrieves information about the knowledge base.
", "smithy.api#http": { - "method": "GET", "uri": "/knowledgeBases/{knowledgeBaseId}", - "code": 200 + "method": "GET" }, "smithy.api#readonly": {} } @@ -1880,7 +1868,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -1920,9 +1908,8 @@ "traits": { "smithy.api#documentation": "Retrieves recommendations for the specified session. To avoid retrieving the same\n recommendations in subsequent calls, use NotifyRecommendationsReceived. This API supports long-polling behavior with the\n waitTimeSeconds
parameter. Short poll is the default behavior and only returns\n recommendations already available. To perform a manual query against an assistant, use QueryAssistant.
The recommendations.
", "smithy.api#required": {} } + }, + "triggers": { + "target": "com.amazonaws.wisdom#RecommendationTriggerList", + "traits": { + "smithy.api#documentation": "The triggers corresponding to recommendations.
" + } } } }, @@ -1996,9 +1989,8 @@ "traits": { "smithy.api#documentation": "Retrieves information for a specified session.
", "smithy.api#http": { - "method": "GET", "uri": "/assistants/{assistantId}/sessions/{sessionId}", - "code": 200 + "method": "GET" }, "smithy.api#readonly": {} } @@ -2113,10 +2105,7 @@ ], "traits": { "aws.api#arn": { - "template": "knowledge-base/{knowledgeBaseId}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "knowledge-base/{knowledgeBaseId}" }, "aws.cloudformation#cfnResource": {}, "aws.iam#disableConditionKeyInference": {} @@ -2128,7 +2117,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#Uuid", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base.
" + "smithy.api#documentation": "The identifier of the knowledge base.
" } }, "knowledgeBaseArn": { @@ -2148,7 +2137,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#Uuid", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base.
", + "smithy.api#documentation": "The identifier of the knowledge base.
", "smithy.api#required": {} } }, @@ -2270,7 +2259,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#Uuid", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base.
", + "smithy.api#documentation": "The identifier of the knowledge base.
", "smithy.api#required": {} } }, @@ -2305,7 +2294,7 @@ "sourceConfiguration": { "target": "com.amazonaws.wisdom#SourceConfiguration", "traits": { - "smithy.api#documentation": "[KEVIN]
" + "smithy.api#documentation": "Configuration information about the external data source.
" } }, "renderingConfiguration": { @@ -2379,15 +2368,14 @@ "traits": { "smithy.api#documentation": "Lists information about assistant associations.
", "smithy.api#http": { - "method": "GET", "uri": "/assistants/{assistantId}/associations", - "code": 200 + "method": "GET" }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "assistantAssociationSummaries", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "assistantAssociationSummaries" }, "smithy.api#readonly": {} } @@ -2456,15 +2444,14 @@ "traits": { "smithy.api#documentation": "Lists information about assistants.
", "smithy.api#http": { - "method": "GET", "uri": "/assistants", - "code": 200 + "method": "GET" }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "assistantSummaries", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "assistantSummaries" }, "smithy.api#readonly": {} } @@ -2528,15 +2515,14 @@ "traits": { "smithy.api#documentation": "Lists the content.
", "smithy.api#http": { - "method": "GET", "uri": "/knowledgeBases/{knowledgeBaseId}/contents", - "code": 200 + "method": "GET" }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "contentSummaries", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "contentSummaries" }, "smithy.api#readonly": {} } @@ -2561,7 +2547,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -2605,15 +2591,14 @@ "traits": { "smithy.api#documentation": "Lists the knowledge bases.
", "smithy.api#http": { - "method": "GET", "uri": "/knowledgeBases", - "code": 200 + "method": "GET" }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "knowledgeBaseSummaries", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "knowledgeBaseSummaries" }, "smithy.api#readonly": {} } @@ -2671,9 +2656,8 @@ "traits": { "smithy.api#documentation": "Lists the tags for the specified resource.
", "smithy.api#http": { - "method": "GET", "uri": "/tags/{resourceArn}", - "code": 200 + "method": "GET" }, "smithy.api#readonly": {} } @@ -2762,9 +2746,8 @@ "traits": { "smithy.api#documentation": "Removes the specified recommendations from the specified assistant's queue of newly\n available recommendations. You can use this API in conjunction with GetRecommendations and a waitTimeSeconds
input for long-polling\n behavior and to avoid duplicate recommendations.
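The GetRecommendations/NotifyRecommendationsReceived pairing described above can be sketched as a single poll step. The stub client below replaces the real SDK client so nothing touches the network; the operation and field names follow this model, but the stub's behavior is an assumption for illustration.

```python
# Minimal long-polling sketch: fetch recommendations with waitTimeSeconds,
# then acknowledge them so subsequent polls don't return duplicates.

class StubAssistantClient:
    def __init__(self, batches):
        self._batches = list(batches)
        self.notified = []

    def get_recommendations(self, assistantId, sessionId, waitTimeSeconds=0):
        # The real call blocks up to waitTimeSeconds for new recommendations;
        # the stub just pops the next prepared batch.
        batch = self._batches.pop(0) if self._batches else []
        return {"recommendations": batch}

    def notify_recommendations_received(self, assistantId, sessionId, recommendationIds):
        self.notified.extend(recommendationIds)
        return {"recommendationIds": recommendationIds}

def poll_once(client, assistant_id, session_id):
    resp = client.get_recommendations(
        assistantId=assistant_id, sessionId=session_id, waitTimeSeconds=20
    )
    ids = [r["recommendationId"] for r in resp["recommendations"]]
    if ids:
        # Acknowledge received recommendations to avoid duplicates next poll.
        client.notify_recommendations_received(
            assistantId=assistant_id, sessionId=session_id, recommendationIds=ids
        )
    return ids

client = StubAssistantClient([[{"recommendationId": "rec-1"},
                               {"recommendationId": "rec-2"}]])
seen = poll_once(client, "assistant-id", "session-id")
```

In a real integration, `poll_once` would run in a loop for the lifetime of the session.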
Performs a manual search against the specified assistant. To retrieve recommendations for\n an assistant, use GetRecommendations.\n
", "smithy.api#http": { - "method": "POST", "uri": "/assistants/{assistantId}/query", - "code": 200 + "method": "POST" }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "results", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "results" }, - "smithy.api#readonly": {} + "smithy.api#readonly": {}, + "smithy.api#suppress": [ + "HttpMethodSemantics" + ] } }, "com.amazonaws.wisdom#QueryAssistantRequest": { @@ -2953,6 +2938,20 @@ } } }, + "com.amazonaws.wisdom#QueryRecommendationTriggerData": { + "type": "structure", + "members": { + "text": { + "target": "com.amazonaws.wisdom#QueryText", + "traits": { + "smithy.api#documentation": "The text associated with the recommendation trigger.
" + } + } + }, + "traits": { + "smithy.api#documentation": "Data associated with the QUERY RecommendationTriggerType.
" + } + }, "com.amazonaws.wisdom#QueryResultsList": { "type": "list", "member": { @@ -2993,6 +2992,12 @@ "traits": { "smithy.api#documentation": "The relevance level of the recommendation.
" } + }, + "type": { + "target": "com.amazonaws.wisdom#RecommendationType", + "traits": { + "smithy.api#documentation": "The type of recommendation.
" + } } }, "traits": { @@ -3011,6 +3016,110 @@ "target": "com.amazonaws.wisdom#RecommendationData" } }, + "com.amazonaws.wisdom#RecommendationSourceType": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "ISSUE_DETECTION", + "name": "ISSUE_DETECTION" + }, + { + "value": "RULE_EVALUATION", + "name": "RULE_EVALUATION" + }, + { + "value": "OTHER", + "name": "OTHER" + } + ] + } + }, + "com.amazonaws.wisdom#RecommendationTrigger": { + "type": "structure", + "members": { + "id": { + "target": "com.amazonaws.wisdom#Uuid", + "traits": { + "smithy.api#documentation": "The identifier of the recommendation trigger.
", + "smithy.api#required": {} + } + }, + "type": { + "target": "com.amazonaws.wisdom#RecommendationTriggerType", + "traits": { + "smithy.api#documentation": "The type of recommendation trigger.
", + "smithy.api#required": {} + } + }, + "source": { + "target": "com.amazonaws.wisdom#RecommendationSourceType", + "traits": { + "smithy.api#documentation": "The source of the recommendation trigger.
\nISSUE_DETECTION: The corresponding recommendations were triggered\n by a Contact Lens issue.
\nRULE_EVALUATION: The corresponding recommendations were triggered\n by a Contact Lens rule.
\nA union type containing information related to the trigger.
", + "smithy.api#required": {} + } + }, + "recommendationIds": { + "target": "com.amazonaws.wisdom#RecommendationIdList", + "traits": { + "smithy.api#documentation": "The identifiers of the recommendations.
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "A recommendation trigger provides context on the event that produced the referenced recommendations.\n Recommendations are only referenced in recommendationIds
by a single RecommendationTrigger.
Data associated with the QUERY RecommendationTriggerType.
" + } + } + }, + "traits": { + "smithy.api#documentation": "A union type containing information related to the trigger.
" + } + }, + "com.amazonaws.wisdom#RecommendationTriggerList": { + "type": "list", + "member": { + "target": "com.amazonaws.wisdom#RecommendationTrigger" + } + }, + "com.amazonaws.wisdom#RecommendationTriggerType": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "QUERY", + "name": "QUERY" + } + ] + } + }, + "com.amazonaws.wisdom#RecommendationType": { + "type": "string", + "traits": { + "smithy.api#enum": [ + { + "value": "KNOWLEDGE_CONTENT", + "name": "KNOWLEDGE_CONTENT" + } + ] + } + }, "com.amazonaws.wisdom#RelevanceLevel": { "type": "string", "traits": { @@ -3060,8 +3169,8 @@ "traits": { "smithy.api#documentation": "Removes a URI template from a knowledge base.
", "smithy.api#http": { - "method": "DELETE", "uri": "/knowledgeBases/{knowledgeBaseId}/templateUri", + "method": "DELETE", "code": 204 } } @@ -3072,7 +3181,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -3166,15 +3275,14 @@ "traits": { "smithy.api#documentation": "Searches for content in a specified knowledge base. Can be used to get a specific content\n resource by its name.
", "smithy.api#http": { - "method": "POST", "uri": "/knowledgeBases/{knowledgeBaseId}/search", - "code": 200 + "method": "POST" }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "contentSummaries", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "contentSummaries" }, "smithy.api#readonly": {} } @@ -3199,7 +3307,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -3268,15 +3376,14 @@ "traits": { "smithy.api#documentation": "Searches for sessions.
", "smithy.api#http": { - "method": "POST", "uri": "/assistants/{assistantId}/searchSessions", - "code": 200 + "method": "POST" }, "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", - "items": "sessionSummaries", - "pageSize": "maxResults" + "pageSize": "maxResults", + "items": "sessionSummaries" }, "smithy.api#readonly": {} } @@ -3333,13 +3440,19 @@ } } }, + "com.amazonaws.wisdom#SensitiveString": { + "type": "string", + "traits": { + "smithy.api#sensitive": {} + } + }, "com.amazonaws.wisdom#ServerSideEncryptionConfiguration": { "type": "structure", "members": { "kmsKeyId": { "target": "com.amazonaws.wisdom#NonEmptyString", "traits": { - "smithy.api#documentation": "The KMS key. For information about valid ID values, see Key identifiers (KeyId) in the\n AWS Key Management Service Developer Guide.
" + "smithy.api#documentation": "The KMS key. For information about valid ID values, see Key identifiers (KeyId).
" } } }, @@ -3378,10 +3491,7 @@ }, "traits": { "aws.api#arn": { - "template": "session/{assistantId}/{sessionId}", - "absolute": false, - "noAccount": false, - "noRegion": false + "template": "session/{assistantId}/{sessionId}" }, "aws.iam#disableConditionKeyInference": {} } @@ -3460,7 +3570,7 @@ "assistantArn": { "target": "com.amazonaws.wisdom#Arn", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant
", + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Wisdom assistant.
", "smithy.api#required": {} } } @@ -3505,9 +3615,8 @@ "traits": { "smithy.api#documentation": "Get a URL to upload content to a knowledge base. To upload content, first make a PUT\n request to the returned URL with your file, making sure to include the required headers. Then\n use CreateContent to finalize the content creation process or UpdateContent to modify an existing resource. You can only upload content to a\n knowledge base of type CUSTOM.
", "smithy.api#http": { - "method": "POST", "uri": "/knowledgeBases/{knowledgeBaseId}/upload", - "code": 200 + "method": "POST" } } }, @@ -3517,7 +3626,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -3545,8 +3654,7 @@ "target": "com.amazonaws.wisdom#Url", "traits": { "smithy.api#documentation": "The URL of the upload.
", - "smithy.api#required": {}, - "smithy.api#sensitive": {} + "smithy.api#required": {} } }, "urlExpiry": { @@ -3607,9 +3715,8 @@ "traits": { "smithy.api#documentation": "Adds the specified tags to the specified resource.
", "smithy.api#http": { - "method": "POST", "uri": "/tags/{resourceArn}", - "code": 200 + "method": "POST" }, "smithy.api#idempotent": {} } @@ -3691,9 +3798,8 @@ "traits": { "smithy.api#documentation": "Removes the specified tags from the specified resource.
", "smithy.api#http": { - "method": "DELETE", "uri": "/tags/{resourceArn}", - "code": 200 + "method": "DELETE" }, "smithy.api#idempotent": {} } @@ -3748,9 +3854,8 @@ "traits": { "smithy.api#documentation": "Updates information about the content.
", "smithy.api#http": { - "method": "POST", "uri": "/knowledgeBases/{knowledgeBaseId}/contents/{contentId}", - "code": 200 + "method": "POST" } } }, @@ -3760,7 +3865,7 @@ "knowledgeBaseId": { "target": "com.amazonaws.wisdom#UuidOrArn", "traits": { - "smithy.api#documentation": "The the identifier of the knowledge base. Can be either the ID or the ARN
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -3844,9 +3949,8 @@ "traits": { "smithy.api#documentation": "Updates the template URI of a knowledge base. This is only supported for knowledge bases\n of type EXTERNAL. Include a single variable in ${variable}
format; this is\n interpolated by Wisdom using ingested content. For example, if you ingest a Salesforce\n article, it has an Id
value, and you can set the template URI to\n https://myInstanceName.lightning.force.com/lightning/r/Knowledge__kav/*${Id}*/view
.\n
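The `${variable}` interpolation described above happens to match Python's `string.Template` placeholder syntax, so the rendering can be sketched directly. The Salesforce `Id` value below is made up for illustration.

```python
# Render a knowledge-base template URI by substituting the ingested Id field.
from string import Template

template_uri = Template(
    "https://myInstanceName.lightning.force.com/lightning/r/Knowledge__kav/${Id}/view"
)
url = template_uri.substitute(Id="ka03k000000oQzZAAU")
print(url)
```

The actual substitution is performed server-side by Wisdom; this sketch only illustrates the placeholder format.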
The the identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", + "smithy.api#documentation": "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN.
", "smithy.api#httpLabel": {}, "smithy.api#required": {} } @@ -3896,7 +4000,8 @@ "smithy.api#length": { "min": 1, "max": 4096 - } + }, + "smithy.api#sensitive": {} } }, "com.amazonaws.wisdom#Uuid": { @@ -3919,7 +4024,7 @@ } }, "traits": { - "smithy.api#documentation": "The input fails to satisfy the constraints specified by an AWS service.
", + "smithy.api#documentation": "The input fails to satisfy the constraints specified by a service.
", "smithy.api#error": "client", "smithy.api#httpError": 400 } @@ -3935,6 +4040,20 @@ }, "com.amazonaws.wisdom#WisdomService": { "type": "service", + "traits": { + "aws.api#service": { + "sdkId": "Wisdom", + "arnNamespace": "wisdom", + "cloudFormationName": "Wisdom" + }, + "aws.auth#sigv4": { + "name": "wisdom" + }, + "aws.protocols#restJson1": {}, + "smithy.api#cors": {}, + "smithy.api#documentation": "Amazon Connect Wisdom delivers agents the information they need to solve customer issues as they're actively\n speaking with customers. Agents can search across connected repositories from within their agent desktop\n to find answers quickly. Use the Amazon Connect Wisdom APIs to create an assistant and a knowledge base, for example, or manage content by uploading custom files.
", + "smithy.api#title": "Amazon Connect Wisdom Service" + }, "version": "2020-10-19", "operations": [ { @@ -3954,23 +4073,7 @@ { "target": "com.amazonaws.wisdom#KnowledgeBase" } - ], - "traits": { - "aws.api#service": { - "sdkId": "Wisdom", - "arnNamespace": "wisdom", - "cloudFormationName": "Wisdom", - "cloudTrailEventSource": "wisdom.amazonaws.com", - "endpointPrefix": "wisdom" - }, - "aws.auth#sigv4": { - "name": "wisdom" - }, - "aws.protocols#restJson1": {}, - "smithy.api#cors": {}, - "smithy.api#documentation": "All Amazon Connect Wisdom functionality is accessible using the API. For example, you can create an\n assistant and a knowledge base.
\n\nSome more advanced features are only accessible using the Wisdom API. For example, you\n can manually manage content by uploading custom files and control their lifecycle.
", - "smithy.api#title": "Amazon Connect Wisdom Service" - } + ] } } } diff --git a/aws/sdk/aws-models/worklink.json b/aws/sdk/aws-models/worklink.json index 1ee3df9773..7b8ca00780 100644 --- a/aws/sdk/aws-models/worklink.json +++ b/aws/sdk/aws-models/worklink.json @@ -32,7 +32,7 @@ "com.amazonaws.worklink#AcmCertificateArn": { "type": "string", "traits": { - "smithy.api#pattern": "arn:[\\w+=/,.@-]+:[\\w+=/,.@-]+:[\\w+=/,.@-]*:[0-9]+:[\\w+=,.@-]+(/[\\w+=/,.@-]+)*" + "smithy.api#pattern": "^arn:[\\w+=/,.@-]+:[\\w+=/,.@-]+:[\\w+=/,.@-]*:[0-9]+:[\\w+=,.@-]+(/[\\w+=/,.@-]+)*$" } }, "com.amazonaws.worklink#AssociateDomain": { @@ -64,6 +64,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Specifies a domain to be associated to Amazon WorkLink.
", "smithy.api#http": { "method": "POST", @@ -137,6 +140,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Associates a website authorization provider with a specified fleet. This is used to authorize users against associated websites in the company network.
", "smithy.api#http": { "method": "POST", @@ -210,6 +216,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Imports the root certificate of a certificate authority (CA) used to obtain TLS\n certificates used by associated websites within the company network.
", "smithy.api#http": { "method": "POST", @@ -284,7 +293,7 @@ "min": 1, "max": 8192 }, - "smithy.api#pattern": "-{5}BEGIN CERTIFICATE-{5}\\u000D?\\u000A([A-Za-z0-9/+]{64}\\u000D?\\u000A)*[A-Za-z0-9/+]{1,64}={0,2}\\u000D?\\u000A-{5}END CERTIFICATE-{5}(\\u000D?\\u000A)?" + "smithy.api#pattern": "^-{5}BEGIN CERTIFICATE-{5}\\u000D?\\u000A([A-Za-z0-9/+]{64}\\u000D?\\u000A)*[A-Za-z0-9/+]{1,64}={0,2}\\u000D?\\u000A-{5}END CERTIFICATE-{5}(\\u000D?\\u000A)?$" } }, "com.amazonaws.worklink#CertificateChain": { @@ -294,7 +303,7 @@ "min": 1, "max": 32768 }, - "smithy.api#pattern": "(-{5}BEGIN CERTIFICATE-{5}\\u000D?\\u000A([A-Za-z0-9/+]{64}\\u000D?\\u000A)*[A-Za-z0-9/+]{1,64}={0,2}\\u000D?\\u000A-{5}END CERTIFICATE-{5}\\u000D?\\u000A)*-{5}BEGIN CERTIFICATE-{5}\\u000D?\\u000A([A-Za-z0-9/+]{64}\\u000D?\\u000A)*[A-Za-z0-9/+]{1,64}={0,2}\\u000D?\\u000A-{5}END CERTIFICATE-{5}(\\u000D?\\u000A)?" + "smithy.api#pattern": "^(-{5}BEGIN CERTIFICATE-{5}\\u000D?\\u000A([A-Za-z0-9/+]{64}\\u000D?\\u000A)*[A-Za-z0-9/+]{1,64}={0,2}\\u000D?\\u000A-{5}END CERTIFICATE-{5}\\u000D?\\u000A)*-{5}BEGIN CERTIFICATE-{5}\\u000D?\\u000A([A-Za-z0-9/+]{64}\\u000D?\\u000A)*[A-Za-z0-9/+]{1,64}={0,2}\\u000D?\\u000A-{5}END CERTIFICATE-{5}(\\u000D?\\u000A)?$" } }, "com.amazonaws.worklink#CompanyCode": { @@ -335,6 +344,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Creates a fleet. A fleet consists of resources and the configuration that delivers\n associated websites to authorized users who download and set up the Amazon WorkLink app.
", "smithy.api#http": { "method": "POST", @@ -413,6 +425,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Deletes a fleet. Prevents users from accessing previously associated websites.
", "smithy.api#http": { "method": "POST", @@ -463,6 +478,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Describes the configuration for delivering audit streams to the customer account.
", "smithy.api#http": { "method": "POST", @@ -520,6 +538,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Describes the networking configuration to access the internal websites associated with\n the specified fleet.
", "smithy.api#http": { "method": "POST", @@ -589,6 +610,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Provides information about a user's device.
", "smithy.api#http": { "method": "POST", @@ -623,6 +647,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Describes the device policy configuration for the specified fleet.
", "smithy.api#http": { "method": "POST", @@ -758,6 +785,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Provides information about the domain.
", "smithy.api#http": { "method": "POST", @@ -846,6 +876,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Provides basic information for the specified fleet, excluding identity provider,\n networking, and device configuration details.
", "smithy.api#http": { "method": "POST", @@ -945,6 +978,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Describes the identity provider configuration of the specified fleet.
", "smithy.api#http": { "method": "POST", @@ -1014,6 +1050,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Provides information about the certificate authority.
", "smithy.api#http": { "method": "POST", @@ -1176,6 +1215,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Disassociates a domain from Amazon WorkLink. End users lose the ability to access the domain with Amazon WorkLink.
", "smithy.api#http": { "method": "POST", @@ -1236,6 +1278,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Disassociates a website authorization provider from a specified fleet. After the\n disassociation, users can't load any associated websites that require this authorization\n provider.
", "smithy.api#http": { "method": "POST", @@ -1293,6 +1338,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Removes a certificate authority (CA).
", "smithy.api#http": { "method": "POST", @@ -1610,6 +1658,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Retrieves a list of devices registered with the specified fleet.
", "smithy.api#http": { "method": "POST", @@ -1690,6 +1741,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Retrieves a list of domains associated to a specified fleet.
", "smithy.api#http": { "method": "POST", @@ -1767,6 +1821,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Retrieves a list of fleets for the current account and Region.
", "smithy.api#http": { "method": "POST", @@ -1828,6 +1885,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Retrieves a list of tags for the specified resource.
", "smithy.api#http": { "method": "GET", @@ -1886,6 +1946,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Retrieves a list of website authorization providers associated with a specified fleet.
", "smithy.api#http": { "method": "POST", @@ -1963,6 +2026,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Retrieves a list of certificate authorities added for the current account and\n Region.
", "smithy.api#http": { "method": "POST", @@ -2033,7 +2099,7 @@ "min": 1, "max": 4096 }, - "smithy.api#pattern": "[\\w\\-]+" + "smithy.api#pattern": "^[\\w\\-]+$" } }, "com.amazonaws.worklink#ResourceAlreadyExistsException": { @@ -2088,6 +2154,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Moves a domain to ACTIVE status if it was in the INACTIVE status.
", "smithy.api#http": { "method": "POST", @@ -2145,6 +2214,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Moves a domain to INACTIVE status if it was in the ACTIVE status.
", "smithy.api#http": { "method": "POST", @@ -2229,6 +2301,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Signs the user out from all of their devices. The user can sign in again if they have\n valid credentials.
", "smithy.api#http": { "method": "POST", @@ -2323,6 +2398,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Adds or overwrites one or more tags for the specified resource, such as a fleet. Each tag consists of a key and an optional value. If a resource already has a tag with the same key, this operation updates its value.
", "smithy.api#http": { "method": "POST", @@ -2404,6 +2482,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Removes one or more tags from the specified resource.
", "smithy.api#http": { "method": "DELETE", @@ -2463,6 +2544,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Updates the audit stream configuration for the fleet.
", "smithy.api#http": { "method": "POST", @@ -2519,6 +2603,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Updates the company network configuration for the fleet.
", "smithy.api#http": { "method": "POST", @@ -2590,6 +2677,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Updates the device policy configuration for the fleet.
", "smithy.api#http": { "method": "POST", @@ -2646,6 +2736,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Updates domain metadata, such as DisplayName.
", "smithy.api#http": { "method": "POST", @@ -2709,6 +2802,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Updates fleet metadata, such as DisplayName.
", "smithy.api#http": { "method": "POST", @@ -2771,6 +2867,9 @@ } ], "traits": { + "smithy.api#deprecated": { + "message": "Amazon WorkLink is no longer supported. This will be removed in a future version of the SDK." + }, "smithy.api#documentation": "Updates the identity provider configuration for the fleet.
", "smithy.api#http": { "method": "POST", @@ -2896,6 +2995,21 @@ }, "com.amazonaws.worklink#WorkLink": { "type": "service", + "traits": { + "aws.api#service": { + "sdkId": "WorkLink", + "arnNamespace": "worklink", + "cloudFormationName": "WorkLink", + "cloudTrailEventSource": "worklink.amazonaws.com", + "endpointPrefix": "worklink" + }, + "aws.auth#sigv4": { + "name": "worklink" + }, + "aws.protocols#restJson1": {}, + "smithy.api#documentation": "Amazon WorkLink is a cloud-based service that provides secure access\n to internal websites and web apps from iOS and Android phones. In a single step, your users, such as\n employees, can access internal websites as efficiently as they access any other public website.\n They enter a URL in their web browser, or choose a link to an internal website in an email. Amazon WorkLink\n authenticates the user's access and securely renders authorized internal web content in a secure\n rendering service in the AWS cloud. Amazon WorkLink doesn't download or store any internal web content on\n mobile devices.
", + "smithy.api#title": "Amazon WorkLink" + }, "version": "2018-09-25", "operations": [ { @@ -2997,22 +3111,7 @@ { "target": "com.amazonaws.worklink#UpdateIdentityProviderConfiguration" } - ], - "traits": { - "aws.api#service": { - "sdkId": "WorkLink", - "arnNamespace": "worklink", - "cloudFormationName": "WorkLink", - "cloudTrailEventSource": "worklink.amazonaws.com", - "endpointPrefix": "worklink" - }, - "aws.auth#sigv4": { - "name": "worklink" - }, - "aws.protocols#restJson1": {}, - "smithy.api#documentation": "Amazon WorkLink is a cloud-based service that provides secure access\n to internal websites and web apps from iOS and Android phones. In a single step, your users, such as\n employees, can access internal websites as efficiently as they access any other public website.\n They enter a URL in their web browser, or choose a link to an internal website in an email. Amazon WorkLink\n authenticates the user's access and securely renders authorized internal web content in a secure\n rendering service in the AWS cloud. Amazon WorkLink doesn't download or store any internal web content on\n mobile devices.
", - "smithy.api#title": "Amazon WorkLink" - } + ] } } } diff --git a/gradle.properties b/gradle.properties index ecd78ea30f..bc5815c2dd 100644 --- a/gradle.properties +++ b/gradle.properties @@ -8,10 +8,10 @@ rust.msrv=1.58.1 # Version number to use for the generated SDK # Note: these must always be full 3-segment semver versions -aws.sdk.version=0.10.1 +aws.sdk.version=0.11.0 # Version number to use for the generated runtime crates -smithy.rs.runtime.crate.version=0.40.2 +smithy.rs.runtime.crate.version=0.41.0 kotlin.code.style=official diff --git a/tools/sdk-lints/Cargo.lock b/tools/sdk-lints/Cargo.lock index 7b69a845b6..c07c0af3c1 100644 --- a/tools/sdk-lints/Cargo.lock +++ b/tools/sdk-lints/Cargo.lock @@ -121,6 +121,43 @@ version = "2.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "308cc39be01b73d0d18f82a0e7b2a3df85245f84af96fdddc5d202d27e47b86a" +[[package]] +name = "num-integer" +version = "0.1.44" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d2cc698a63b549a70bc047073d2949cce27cd1c7b0a4a862d08a8031bc2801db" +dependencies = [ + "autocfg", + "num-traits", +] + +[[package]] +name = "num-traits" +version = "0.2.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9a64b1ec5cda2586e284722486d802acf1f7dbdc623e2bfc57e65ca1cd099290" +dependencies = [ + "autocfg", +] + +[[package]] +name = "num_threads" +version = "0.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "aba1801fb138d8e85e11d0fc70baf4fe1cdfffda7c6cd34a854905df588e5ed0" +dependencies = [ + "libc", +] + +[[package]] +name = "ordinal" +version = "0.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c80c1530f46e9d8985706d7deb80b83172b250538902f607dea6cd6028851083" +dependencies = [ + "num-integer", +] + [[package]] name = "os_str_bytes" version = "6.0.0" @@ -180,7 +217,9 @@ dependencies = [ "cargo_toml", "clap", "lazy_static", + "ordinal", "serde", 
+ "time", "toml", ] @@ -236,6 +275,16 @@ version = "0.15.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b1141d4d61095b28419e22cb0bbf02755f5e54e0526f97f1e3d1d160e60885fb" +[[package]] +name = "time" +version = "0.3.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c2702e08a7a860f005826c6815dcac101b19b5eb330c27fe4a5928fec1d20ddd" +dependencies = [ + "libc", + "num_threads", +] + [[package]] name = "toml" version = "0.5.8" diff --git a/tools/sdk-lints/Cargo.toml b/tools/sdk-lints/Cargo.toml index d178154b79..6185ee41b0 100644 --- a/tools/sdk-lints/Cargo.toml +++ b/tools/sdk-lints/Cargo.toml @@ -17,3 +17,5 @@ clap = { version = "3.1.7", features = ["derive"]} toml = "0.5.8" serde = { version = "1", features = ["derive"]} lazy_static = "1.4.0" +time = { version = "0.3.9", features = ["local-offset"]} +ordinal = "0.3.2" diff --git a/tools/sdk-lints/src/changelog.rs b/tools/sdk-lints/src/changelog.rs index 7a2eaf8f81..8f2f0e4abf 100644 --- a/tools/sdk-lints/src/changelog.rs +++ b/tools/sdk-lints/src/changelog.rs @@ -220,7 +220,7 @@ pub(crate) fn update_changelogs( std::fs::write(path, update)?; } std::fs::write(changelog_next.as_ref(), EXAMPLE_ENTRY.trim())?; - eprintln!("Changelogs updated!"); + eprintln!("Changelogs updated:\n SDK: {aws_sdk_rust_version}\n Smithy: {smithy_rs_version}\n Date: {date}"); Ok(()) } diff --git a/tools/sdk-lints/src/main.rs b/tools/sdk-lints/src/main.rs index 2bf635e5f4..d468de0161 100644 --- a/tools/sdk-lints/src/main.rs +++ b/tools/sdk-lints/src/main.rs @@ -12,10 +12,12 @@ use crate::todos::TodosHaveContext; use anyhow::{bail, Context, Result}; use clap::Parser; use lazy_static::lazy_static; +use ordinal::Ordinal; use std::env::set_current_dir; use std::path::{Path, PathBuf}; use std::process::Command; use std::{fs, io}; +use time::OffsetDateTime; mod anchor; mod changelog; @@ -64,11 +66,11 @@ enum Args { }, UpdateChangelog { #[clap(long)] - smithy_version: String, + 
smithy_version: Option
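
The dependency additions above (`time` with the `local-offset` feature, plus `ordinal`) suggest the lint tool now stamps updated changelogs with a human-readable date such as "May 2nd, 2022", using `Ordinal` for the day suffix. As a rough sketch of the suffix rule the `ordinal` crate encapsulates (this is a hypothetical standalone helper, not the crate's actual implementation):

```rust
// Sketch of English ordinal-suffix logic, as the `ordinal` crate provides.
// 11th, 12th, 13th are the exceptions to the 1st/2nd/3rd pattern.
fn ordinal(n: u32) -> String {
    let suffix = if (11..=13).contains(&(n % 100)) {
        "th"
    } else {
        match n % 10 {
            1 => "st",
            2 => "nd",
            3 => "rd",
            _ => "th",
        }
    };
    format!("{n}{suffix}")
}

fn main() {
    // e.g. a changelog date header like "May 2nd, 2022"
    println!("May {}, 2022", ordinal(2));
}
```

In the patch itself the formatting is done with `Ordinal(date.day())` alongside `time::OffsetDateTime`; the sketch above only shows the suffix rule in isolation.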