From 1f88422051125aee6a38362ffe14a843a9e095c1 Mon Sep 17 00:00:00 2001
From: awssdkgo

With Application Auto Scaling, you can configure automatic scaling for the following resources:

- Amazon ECS services
- Amazon EC2 Spot Fleet requests
- Amazon EMR clusters
- Amazon AppStream 2.0 fleets
- Amazon DynamoDB tables and global secondary indexes throughput capacity
- Amazon Aurora Replicas
- Amazon SageMaker endpoint variants
- Custom resources provided by your own applications or services
- Amazon Comprehend document classification and entity recognizer endpoints
- AWS Lambda function provisioned concurrency
- Amazon Keyspaces (for Apache Cassandra) tables
- Amazon Managed Streaming for Apache Kafka broker storage

API Summary

The Application Auto Scaling service API includes three key sets of actions:

- Register and manage scalable targets - Register AWS or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.
- Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.
- Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling.

To learn more about Application Auto Scaling, including information about granting IAM users required permissions for Application Auto Scaling actions, see the Application Auto Scaling User Guide.
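As a concrete starting point, here is a minimal sketch of registering a scalable target with the Go SDK this changelog belongs to. The region and the ECS cluster/service names are placeholder assumptions, not values from this changelog:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/applicationautoscaling"
)

func main() {
	// Credentials and region come from the environment or shared config;
	// us-west-2 is an arbitrary example.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := applicationautoscaling.New(sess)

	// Register an ECS service (hypothetical cluster/service pair) as a
	// scalable target with a minimum and maximum capacity.
	_, err := svc.RegisterScalableTarget(&applicationautoscaling.RegisterScalableTargetInput{
		ServiceNamespace:  aws.String("ecs"),
		ResourceId:        aws.String("service/default/sample-webapp"),
		ScalableDimension: aws.String("ecs:service:DesiredCount"),
		MinCapacity:       aws.Int64(1),
		MaxCapacity:       aws.Int64(10),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("scalable target registered")
}
```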
Deletes the specified scaling policy for an Application Auto Scaling scalable target. Deleting a step scaling policy deletes the underlying alarm action, but does not delete the CloudWatch alarm associated with the scaling policy, even if it no longer has an associated action. For more information, see Delete a step scaling policy and Delete a target tracking scaling policy in the Application Auto Scaling User Guide.

Deletes the specified scheduled action for an Application Auto Scaling scalable target. For more information, see Delete a scheduled action in the Application Auto Scaling User Guide.

Deregisters an Application Auto Scaling scalable target when you have finished using it. To see which resources have been registered, use DescribeScalableTargets. Deregistering a scalable target deletes the scaling policies and the scheduled actions that are associated with it.

Gets information about the scalable targets in the specified namespace. You can filter the results using ResourceIds and ScalableDimension.

Provides descriptive information about the scaling activities in the specified namespace from the previous six weeks. You can filter the results using ResourceId and ScalableDimension.

Describes the Application Auto Scaling scaling policies for the specified service namespace. You can filter the results using ResourceId, ScalableDimension, and PolicyNames. For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide.

Describes the Application Auto Scaling scheduled actions for the specified service namespace. You can filter the results using the ResourceId, ScalableDimension, and ScheduledActionNames parameters. For more information, see Scheduled scaling and Managing scheduled scaling in the Application Auto Scaling User Guide.

Creates or updates a scaling policy for an Application Auto Scaling scalable target. Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scaling policy applies to the scalable target identified by those three attributes. You cannot create a scaling policy until you have registered the resource as a scalable target. Multiple scaling policies can be in force at the same time for the same scalable target. You can have one or more target tracking scaling policies, one or more step scaling policies, or both. However, there is a chance that multiple policies could conflict, instructing the scalable target to scale out or in at the same time. Application Auto Scaling gives precedence to the policy that provides the largest capacity for both scale out and scale in. For example, if one policy increases capacity by 3, another policy increases capacity by 200 percent, and the current capacity is 10, Application Auto Scaling uses the policy with the highest calculated capacity (200% of 10 = 20) and scales out to 30. We recommend caution, however, when using target tracking scaling policies with step scaling policies because conflicts between these policies can cause undesirable behavior. For example, if the step scaling policy initiates a scale-in activity before the target tracking policy is ready to scale in, the scale-in activity will not be blocked. After the scale-in activity completes, the target tracking policy could instruct the scalable target to scale out again. For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide. If a scalable target is deregistered, the scalable target is no longer available to execute scaling policies. Any scaling policies that were specified for the scalable target are deleted.

Creates or updates a scheduled action for an Application Auto Scaling scalable target. Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scheduled action applies to the scalable target identified by those three attributes. You cannot create a scheduled action until you have registered the resource as a scalable target. When start and end times are specified with a recurring schedule using a cron expression or rates, they form the boundaries for when the recurring action starts and stops. To update a scheduled action, specify the parameters that you want to change. If you don't specify start and end times, the old values are deleted. For more information, see Scheduled scaling in the Application Auto Scaling User Guide. If a scalable target is deregistered, the scalable target is no longer available to run scheduled actions. Any scheduled actions that were specified for the scalable target are deleted.

Registers or updates a scalable target. A scalable target is a resource that Application Auto Scaling can scale out and scale in. Scalable targets are uniquely identified by the combination of resource ID, scalable dimension, and namespace. When you register a new scalable target, you must specify values for minimum and maximum capacity. Current capacity will be adjusted within the specified range when scaling starts. Application Auto Scaling scaling policies will not scale capacity to values that are outside of this range. After you register a scalable target, you do not need to register it again to use other Application Auto Scaling operations. To see which resources have been registered, use DescribeScalableTargets. You can also view the scaling policies for a service namespace by using DescribeScalableTargets. If you no longer need a scalable target, you can deregister it by using DeregisterScalableTarget. To update a scalable target, specify the parameters that you want to change. Include the parameters that identify the scalable target: resource ID, scalable dimension, and namespace. Any parameters that you don't specify are not changed by this update request.
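For instance, here is a hedged sketch of attaching a target tracking scaling policy to the target registered in the first example, reusing that sketch's imports and client; the policy name and target value are illustrative assumptions:

```go
// putCPUTargetTracking attaches a target tracking policy that keeps average
// ECS service CPU utilization near 70 percent. svc is the client created in
// the first sketch.
func putCPUTargetTracking(svc *applicationautoscaling.ApplicationAutoScaling) error {
	out, err := svc.PutScalingPolicy(&applicationautoscaling.PutScalingPolicyInput{
		PolicyName:        aws.String("cpu70-target-tracking"), // hypothetical name
		PolicyType:        aws.String("TargetTrackingScaling"),
		ServiceNamespace:  aws.String("ecs"),
		ResourceId:        aws.String("service/default/sample-webapp"),
		ScalableDimension: aws.String("ecs:service:DesiredCount"),
		TargetTrackingScalingPolicyConfiguration: &applicationautoscaling.TargetTrackingScalingPolicyConfiguration{
			TargetValue: aws.Float64(70.0),
			PredefinedMetricSpecification: &applicationautoscaling.PredefinedMetricSpecification{
				PredefinedMetricType: aws.String("ECSServiceAverageCPUUtilization"),
			},
			// Cooldowns are optional; the service defaults described below
			// apply when they are omitted.
			ScaleOutCooldown: aws.Int64(60),
			ScaleInCooldown:  aws.Int64(300),
		},
	})
	if err != nil {
		return err
	}
	fmt.Println("policy ARN:", aws.StringValue(out.PolicyARN))
	return nil
}
```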
The amount of time, in seconds, to wait for a previous scaling activity to take effect. With scale-out policies, the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a step scaling policy, it starts to calculate the cooldown time. The scaling policy won't increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. While the cooldown period is in effect, capacity added by the initiating scale-out activity is calculated as part of the desired capacity for the next scale-out activity. For example, when an alarm triggers a step scaling policy to increase the capacity by 2, the scaling activity completes successfully, and a cooldown period starts. If the alarm triggers again during the cooldown period but at a more aggressive step adjustment of 3, the previous increase of 2 is considered part of the current capacity. Therefore, only 1 is added to the capacity. With scale-in policies, the intention is to scale in conservatively to protect your application’s availability, so scale-in activities are blocked until the cooldown period has expired. However, if another alarm triggers a scale-out activity during the cooldown period after a scale-in activity, Application Auto Scaling scales out the target immediately. In this case, the cooldown period for the scale-in activity stops and doesn't complete. Application Auto Scaling provides a default value of 300 for the following scalable targets: ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, and custom resources. For all other scalable targets, the default value is 0: DynamoDB tables, DynamoDB global secondary indexes, Amazon Comprehend document classification and entity recognizer endpoints, Lambda provisioned concurrency, Amazon Keyspaces tables, and Amazon MSK broker storage.

The amount of time, in seconds, to wait for a previous scale-out activity to take effect. With the scale-out cooldown period, the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a target tracking scaling policy, it starts to calculate the cooldown time. The scaling policy won't increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. While the cooldown period is in effect, the capacity added by the initiating scale-out activity is calculated as part of the desired capacity for the next scale-out activity. The same defaults apply: 300 for ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, and custom resources; 0 for all other scalable targets.

The amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start. With the scale-in cooldown period, the intention is to scale in conservatively to protect your application’s availability, so scale-in activities are blocked until the cooldown period has expired. However, if another alarm triggers a scale-out activity during the scale-in cooldown period, Application Auto Scaling scales out the target immediately. In this case, the scale-in cooldown period stops and doesn't complete. The same defaults apply: 300 for ECS services, Spot Fleet requests, EMR clusters, AppStream 2.0 fleets, Aurora DB clusters, Amazon SageMaker endpoint variants, and custom resources; 0 for all other scalable targets.

A per-account resource limit is exceeded. For more information, see Application Auto Scaling service quotas.

The policy type. This parameter is required if you are creating a scaling policy. The following policy types are supported: TargetTrackingScaling and StepScaling. For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide.

The scaling policy type.

Represents a predefined metric for a target tracking scaling policy to use with Application Auto Scaling. Only the AWS services that you're using send metrics to Amazon CloudWatch. To determine whether a desired metric already exists by looking up its namespace and dimension using the CloudWatch metrics dashboard in the console, follow the procedure in Building dashboards with CloudWatch in the Application Auto Scaling User Guide.

A predefined metric. You can specify either a predefined metric or a customized metric.

The name of the scheduled action.

The identifier of the resource associated with the scheduled action. This string consists of the resource type and unique identifier.

- ECS service - The resource type is service and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp.
- Spot Fleet request - The resource type is spot-fleet-request and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE.
- EMR cluster - The resource type is instancegroup and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0.
- AppStream 2.0 fleet - The resource type is fleet and the unique identifier is the fleet name. Example: fleet/sample-fleet.
- DynamoDB table - The resource type is table and the unique identifier is the table name. Example: table/my-table.
- DynamoDB global secondary index - The resource type is index and the unique identifier is the index name. Example: table/my-table/index/my-table-index.
- Aurora DB cluster - The resource type is cluster and the unique identifier is the cluster name. Example: cluster:my-db-cluster.
- Amazon SageMaker endpoint variant - The resource type is variant and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering.
- Custom resources are not supported with a resource type. This parameter must specify the OutputValue from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider.
- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE.
- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE.
- Lambda provisioned concurrency - The resource type is function and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST. Example: function:my-function:prod or function:my-function:1.
- Amazon Keyspaces table - The resource type is table and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable.
- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5.
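To show one of these identifiers in use, here is a small sketch that lists registered targets in the ecs namespace, reusing the client and imports from the first example; the resource ID is the same placeholder as above:

```go
// describeTargets prints each scalable target registered for the
// placeholder ECS service.
func describeTargets(svc *applicationautoscaling.ApplicationAutoScaling) error {
	resp, err := svc.DescribeScalableTargets(&applicationautoscaling.DescribeScalableTargetsInput{
		ServiceNamespace: aws.String("ecs"),
		ResourceIds:      []*string{aws.String("service/default/sample-webapp")},
	})
	if err != nil {
		return err
	}
	for _, t := range resp.ScalableTargets {
		fmt.Printf("%s: min=%d max=%d\n",
			aws.StringValue(t.ResourceId),
			aws.Int64Value(t.MinCapacity),
			aws.Int64Value(t.MaxCapacity))
	}
	return nil
}
```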
The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The identifier of the resource associated with the scaling activity. This string consists of the resource type and unique identifier, in the formats shown in the list above. If you specify a scalable dimension, you must also specify a resource ID.

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier, in the formats shown in the list above. If you specify a scalable dimension, you must also specify a resource ID.

The identifier of the resource associated with the scheduled action. This string consists of the resource type and unique identifier, in the formats shown in the list above. If you specify a scalable dimension, you must also specify a resource ID.

The identifier of the resource associated with the scaling activity. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The identifier of the resource associated with the scheduled action. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The Amazon Resource Name (ARN) of the resulting scaling policy.

The schedule for this action. The following formats are supported:

- At expressions - "at(yyyy-mm-ddThh:mm:ss)"
- Rate expressions - "rate(value unit)"
- Cron expressions - "cron(fields)"

At expressions are useful for one-time schedules. Cron expressions are useful for scheduled actions that run periodically at a specified date and time, and rate expressions are useful for scheduled actions that run at a regular interval. At and cron expressions use Universal Coordinated Time (UTC) by default. The cron format consists of six fields separated by white spaces: [Minutes] [Hours] [Day_of_Month] [Month] [Day_of_Week] [Year]. For rate expressions, value is a positive integer and unit is minute | minutes | hour | hours | day | days. For more information and examples, see Example scheduled actions for Application Auto Scaling in the Application Auto Scaling User Guide.

Specifies the time zone used when setting a scheduled action by using an at or cron expression. If a time zone is not provided, UTC is used by default. Valid values are the canonical names of the IANA time zones supported by Joda-Time (such as Etc/GMT+9 or Pacific/Tahiti).
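As an illustration of these schedule formats, here is a hedged Go sketch of a recurring scheduled action, again reusing the first example's client; the action name, cron window, and capacity range are assumptions:

```go
// putWeekdayAction schedules a recurring capacity change at 08:00 on
// weekdays, evaluated in the America/New_York time zone.
func putWeekdayAction(svc *applicationautoscaling.ApplicationAutoScaling) error {
	_, err := svc.PutScheduledAction(&applicationautoscaling.PutScheduledActionInput{
		ScheduledActionName: aws.String("weekday-scale-out"), // hypothetical name
		ServiceNamespace:    aws.String("ecs"),
		ResourceId:          aws.String("service/default/sample-webapp"),
		ScalableDimension:   aws.String("ecs:service:DesiredCount"),
		Schedule:            aws.String("cron(0 8 ? * MON-FRI *)"),
		Timezone:            aws.String("America/New_York"),
		ScalableTargetAction: &applicationautoscaling.ScalableTargetAction{
			MinCapacity: aws.Int64(4),
			MaxCapacity: aws.Int64(10),
		},
	})
	return err
}
```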
The identifier of the resource associated with the scheduled action. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The identifier of the resource that is associated with the scalable target. This string consists of the resource type and unique identifier, in the formats shown in the list above.

This parameter is required for services that do not support service-linked roles (such as Amazon EMR), and it must specify the ARN of an IAM role that allows Application Auto Scaling to modify the scalable target on your behalf. If the service supports service-linked roles, Application Auto Scaling uses a service-linked role, which it creates if it does not yet exist. For more information, see Application Auto Scaling IAM roles.

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The ARN of an IAM role that allows Application Auto Scaling to modify the scalable target on your behalf.

The Amazon Resource Name (ARN) of the scaling policy.

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The Amazon Resource Name (ARN) of the scheduled action.

The schedule for this action, in one of the at, rate, or cron expression formats described above.

The time zone used when referring to the date and time of a scheduled action, when the scheduled action uses an at or cron expression.

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier, in the formats shown in the list above. If you specify a scalable dimension, you must also specify a resource ID.

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier, in the formats shown in the list above.

The names of the scaling policies to describe.

The names of the scheduled actions to describe.

Specifies whether the scaling activities for a scalable target are in a suspended state.

An embedded object that contains attributes and attribute values that are used to suspend and resume automatic scaling. Setting the value of an attribute to true suspends the specified scaling activities. Setting it to false (default) resumes the specified scaling activities. Suspension outcomes: For DynamicScalingInSuspended, while a suspension is in effect, all scale-in activities that are triggered by a scaling policy are suspended. For DynamicScalingOutSuspended, while a suspension is in effect, all scale-out activities that are triggered by a scaling policy are suspended. For ScheduledScalingSuspended, while a suspension is in effect, all scaling activities that involve scheduled actions are suspended. For more information, see Suspending and resuming scaling in the Application Auto Scaling User Guide.
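These suspend and resume switches are set through RegisterScalableTarget; here is a hedged sketch that pauses dynamic scale-in and scheduled scaling for the placeholder target while leaving scale-out active, reusing the first example's client:

```go
// suspendScaling updates only the suspended state of the existing scalable
// target; other registered settings are left unchanged.
func suspendScaling(svc *applicationautoscaling.ApplicationAutoScaling) error {
	_, err := svc.RegisterScalableTarget(&applicationautoscaling.RegisterScalableTargetInput{
		ServiceNamespace:  aws.String("ecs"),
		ResourceId:        aws.String("service/default/sample-webapp"),
		ScalableDimension: aws.String("ecs:service:DesiredCount"),
		SuspendedState: &applicationautoscaling.SuspendedState{
			DynamicScalingInSuspended:  aws.Bool(true),
			DynamicScalingOutSuspended: aws.Bool(false),
			ScheduledScalingSuspended:  aws.Bool(true),
		},
	})
	return err
}
```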
The date and time for this scheduled action to start, in UTC.
The date and time for the recurring schedule to end, in UTC.
The Unix timestamp for when the scalable target was created.
The Unix timestamp for when the scaling activity began.
The Unix timestamp for when the scaling activity ended.
The Unix timestamp for when the scaling policy was created.
The date and time that the action is scheduled to begin, in UTC.
The date and time that the action is scheduled to end, in UTC.
The date and time that the scheduled action was created.
A reference to an object that represents a Transport Layer Security (TLS) client policy.
An object that represents the client's certificate.
A reference to an object that represents a client's TLS certificate.
The request contains a client token that was used for a previous update resource call with different specifications. Try the request again with a new client token.
An object that represents a listener's Transport Layer Security (TLS) certificate.
A reference to an object that represents a listener's Transport Layer Security (TLS) certificate.
An object that represents a local file certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
A reference to an object that represents a local file certificate.
Specify one of the following modes. STRICT – Listener only accepts connections with TLS enabled. PERMISSIVE – Listener accepts connections with or without TLS enabled. DISABLED – Listener only accepts connections without TLS.
An object that represents the listener's Secret Discovery Service certificate. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
A reference to an object that represents a client's TLS Secret Discovery Service certificate.
A reference to an object that represents a listener's Secret Discovery Service certificate.
An object that represents a listener's Transport Layer Security (TLS) validation context.
A reference to an object that represents a listener's Transport Layer Security (TLS) validation context.
An object that represents a listener's Transport Layer Security (TLS) validation context trust.
A reference to where to retrieve the trust chain when validating a peer’s Transport Layer Security (TLS) certificate.
The current status for the route.
A reference to an object that represents the name of the secret requested from the Secret Discovery Service provider representing Transport Layer Security (TLS) materials like a certificate or certificate chain.
A reference to an object that represents the name of the secret for a Transport Layer Security (TLS) Secret Discovery Service validation context trust.
An object that represents the service discovery information for a virtual node.
The destination path for the health check request. This value is only used if the specified protocol is HTTP or HTTP/2. For any other protocol, this value is ignored.
The values sent must match the specified values exactly.
An object that represents the methods by which a subject alternative name on a peer Transport Layer Security (TLS) certificate can be matched.
An object that represents the criteria for determining a SANs match.
An object that represents the subject alternative names secured by the certificate.
A reference to an object that represents the SANs for a listener's Transport Layer Security (TLS) validation context.
A reference to an object that represents the SANs for a Transport Layer Security (TLS) validation context.
A reference to an object that represents the SANs for a virtual gateway listener's Transport Layer Security (TLS) validation context.
A reference to an object that represents the SANs for a virtual gateway's listener's Transport Layer Security (TLS) validation context.
An object that represents a Transport Layer Security (TLS) validation context.
An object that represents how the proxy will validate its peer during Transport Layer Security (TLS) negotiation.
A reference to an object that represents a Transport Layer Security (TLS) validation context.
An object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager (ACM) certificate.
A reference to an object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager (ACM) certificate.
An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
An object that represents a Transport Layer Security (TLS) Secret Discovery Service validation context trust. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
A reference to an object that represents a listener's Transport Layer Security (TLS) Secret Discovery Service validation context trust.
A reference to an object that represents a Transport Layer Security (TLS) Secret Discovery Service validation context trust.
An object that represents a Transport Layer Security (TLS) validation context trust.
A reference to an object that represents a Transport Layer Security (TLS) validation context trust.
A reference to where to retrieve the trust chain when validating a peer’s Transport Layer Security (TLS) certificate.
A reference to an object that represents a Transport Layer Security (TLS) client policy.
An object that represents the virtual gateway's client's Transport Layer Security (TLS) certificate.
A reference to an object that represents a virtual gateway's client's Transport Layer Security (TLS) certificate.
An object that represents the type of virtual gateway connection pool. Only one protocol is used at a time and should be the same protocol as the one chosen under port mapping. If not present the default value for
An object that represents a local file certificate. The certificate must meet specific requirements and you must have proxy authorization enabled. For more information, see Transport Layer Security (TLS).
A reference to an object that represents a local file certificate.
Specify one of the following modes. STRICT – Listener only accepts connections with TLS enabled. PERMISSIVE – Listener accepts connections with or without TLS enabled. DISABLED – Listener only accepts connections without TLS.
An object that represents the virtual gateway's listener's Secret Discovery Service certificate. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
A reference to an object that represents a virtual gateway's client's Secret Discovery Service certificate.
A reference to an object that represents a virtual gateway's listener's Secret Discovery Service certificate.
An object that represents a virtual gateway's listener's Transport Layer Security (TLS) validation context.
A reference to an object that represents a virtual gateway's listener's Transport Layer Security (TLS) validation context.
An object that represents a virtual gateway's listener's Transport Layer Security (TLS) validation context trust.
A reference to where to retrieve the trust chain when validating a peer’s Transport Layer Security (TLS) certificate.
A reference to an object that represents the name of the secret requested from the Secret Discovery Service provider representing Transport Layer Security (TLS) materials like a certificate or certificate chain.
A reference to an object that represents the name of the secret for a virtual gateway's Transport Layer Security (TLS) Secret Discovery Service validation context trust.
An object that represents the specification of a service mesh resource.
An object that represents a Transport Layer Security (TLS) validation context.
A reference to an object that represents a Transport Layer Security (TLS) validation context.
An object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager (ACM) certificate.
A reference to an object that represents a Transport Layer Security (TLS) validation context trust for an AWS Certificate Manager (ACM) certificate.
An object that represents a Transport Layer Security (TLS) validation context trust for a local file.
An object that represents a virtual gateway's listener's Transport Layer Security (TLS) Secret Discovery Service validation context trust. The proxy must be configured with a local SDS provider via a Unix Domain Socket. See App Mesh TLS documentation for more info.
A reference to an object that represents a virtual gateway's listener's Transport Layer Security (TLS) Secret Discovery Service validation context trust.
A reference to an object that represents a virtual gateway's Transport Layer Security (TLS) Secret Discovery Service validation context trust.
An object that represents a Transport Layer Security (TLS) validation context trust.
A reference to an object that represents a Transport Layer Security (TLS) validation context trust.
A reference to where to retrieve the trust chain when validating a peer’s Transport Layer Security (TLS) certificate.
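To make the listener TLS modes and file certificate concrete, here is a hedged sketch using the aws-sdk-go appmesh types; the field names reflect the v1 SDK, and the file paths are placeholder assumptions that must exist on the Envoy proxy's file system:

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/appmesh"
)

// strictFileTLS builds a listener TLS block that only accepts TLS
// connections, serving a certificate from the proxy's local file system.
func strictFileTLS() *appmesh.ListenerTls {
	return &appmesh.ListenerTls{
		Mode: aws.String("STRICT"), // STRICT | PERMISSIVE | DISABLED
		Certificate: &appmesh.ListenerTlsCertificate{
			File: &appmesh.ListenerTlsFileCertificate{
				CertificateChain: aws.String("/etc/tls/cert_chain.pem"),  // placeholder path
				PrivateKey:       aws.String("/etc/tls/private_key.pem"), // placeholder path
			},
		},
	}
}
```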
The fingerprint of the Sidewalk application server private key.
The ID of the certificate to associate with the wireless gateway.
The ID of the certificate associated with the wireless gateway.
The ID of the certificate associated with the wireless gateway and used for the LoRaWANNetworkServer endpoint.
The transmit mode to use to send data to the wireless device. Can be: 0 for UM (unacknowledge mode) or 1 for AM (acknowledge mode).
Specifies the map style selected from an available data provider. Valid styles: When using HERE as your data provider, and selecting the Style

This is the Amazon Lookout for Vision API Reference. It provides descriptions of actions, data types, common parameters, and common errors. Amazon Lookout for Vision enables you to find visual defects in industrial products, accurately and at scale. It uses computer vision to identify missing components in an industrial product, damage to vehicles or structures, irregularities in production lines, and even minuscule defects in silicon wafers — or any other physical item where quality is important such as a missing capacitor on printed circuit boards.
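Before the operation summaries below, here is a hedged Go sketch of the basic project setup they describe; the lookoutforvision client shape follows the v1 SDK, and the project name is a placeholder:

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lookoutforvision"
)

// createProjectWithDataset creates a project and a single "train" dataset.
// With a single dataset, Lookout for Vision splits it internally into
// training and test sets.
func createProjectWithDataset() error {
	sess := session.Must(session.NewSession())
	svc := lookoutforvision.New(sess)

	if _, err := svc.CreateProject(&lookoutforvision.CreateProjectInput{
		ProjectName: aws.String("circuit-board-inspection"), // hypothetical name
	}); err != nil {
		return err
	}
	_, err := svc.CreateDataset(&lookoutforvision.CreateDatasetInput{
		ProjectName: aws.String("circuit-board-inspection"),
		DatasetType: aws.String("train"),
	})
	return err
}
```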
Describes an Amazon Lookout for Vision project. Detects anomalies in an image that you supply. The response from Before calling Lists the JSON Lines within a dataset. An Amazon Lookout for Vision JSON Line contains the anomaly information for a single image, including the image location and the assigned label. Lists the versions of a model in an Amazon Lookout for Vision project. Lists the Amazon Lookout for Vision projects in your AWS account. Starts the running of the version of an Amazon Lookout for Vision model. Starting a model takes a while to complete. To check the current state of the model, use DescribeModel. Once the model is running, you can detect custom labels in new images by calling DetectAnomalies. You are charged for the amount of time that the model is running. To stop a running model, call StopModel. Stops a running model. The operation might take a while to complete. To check the current status, call DescribeModel. Adds one or more JSON Line entries to a dataset. A JSON Line includes information about an image used for training or testing an Amazon Lookout for Vision model. The following is an example JSON Line. Updating a dataset might take a while to complete. To check the current status, call DescribeDataset and check the Creates a new dataset in an Amazon Lookout for Vision project. If you want a single dataset project, specify To have a project with separate training and test datasets, call This operation requires permissions to perform the Creates a new version of a model within an Amazon Lookout for Vision project. To get the current status, check the If the project has a single dataset, Amazon Lookout for Vision internally splits the dataset to create a training and a test dataset. If the project has a training and a test dataset, Lookout for Vision uses the respective datasets to train and test the model. After training completes, the evaluation metrics are stored at the location specified in This operation requires permissions to perform the Creates an empty Amazon Lookout for Vision project. After you create the project, add a dataset by calling CreateDataset. This operation requires permissions to perform the Deletes an existing Amazon Lookout for Vision dataset. If your project has a single dataset, you must create a new dataset before you can create a model. If your project has a training dataset and a test dataset, consider the following. If you delete the test dataset, your project reverts to a single dataset project. If you then train the model, Amazon Lookout for Vision internally splits the remaining dataset into a training and test dataset. If you delete the training dataset, you must create a training dataset before you can create a model. It might take a while to delete the dataset. To check the current status, check the This operation requires permissions to perform the Deletes an Amazon Lookout for Vision model. You can't delete a running model. To stop a running model, use the StopModel operation. This operation requires permissions to perform the Deletes an Amazon Lookout for Vision project. To delete a project, you must first delete each version of the model associated with the project. To delete a model, use the DeleteModel operation. You also have to delete the dataset(s) associated with the model. For more information, see DeleteDataset. The images referenced by the training and test datasets aren't deleted. This operation requires permissions to perform the Describe an Amazon Lookout for Vision dataset.
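The start/detect/stop lifecycle described above translates into a short client flow. A minimal sketch, assuming the lookoutforvision package and input field names (ProjectName, ModelVersion, MinInferenceUnits, ContentType) match the descriptions above; the project name, version, and image path are placeholders.

// Minimal sketch: start a model version, detect anomalies in one
// image once the model is running, then stop the model to stop
// incurring charges.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lookoutforvision"
)

func main() {
	svc := lookoutforvision.New(session.Must(session.NewSession()))

	// Starting a model takes a while; poll DescribeModel for the status.
	if _, err := svc.StartModel(&lookoutforvision.StartModelInput{
		ProjectName:       aws.String("my-project"),
		ModelVersion:      aws.String("1"),
		MinInferenceUnits: aws.Int64(1),
	}); err != nil {
		log.Fatal(err)
	}

	img, err := os.ReadFile("part.jpg") // unencrypted image bytes to analyze
	if err != nil {
		log.Fatal(err)
	}
	out, err := svc.DetectAnomalies(&lookoutforvision.DetectAnomaliesInput{
		ProjectName:  aws.String("my-project"),
		ModelVersion: aws.String("1"),
		Body:         bytes.NewReader(img),
		ContentType:  aws.String("image/jpeg"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)

	// You are charged while the model runs; stop it when done.
	if _, err := svc.StopModel(&lookoutforvision.StopModelInput{
		ProjectName:  aws.String("my-project"),
		ModelVersion: aws.String("1"),
	}); err != nil {
		log.Fatal(err)
	}
}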
This operation requires permissions to perform the Describes a version of an Amazon Lookout for Vision model. This operation requires permissions to perform the Describes an Amazon Lookout for Vision project. This operation requires permissions to perform the Detects anomalies in an image that you supply. The response from Before calling This operation requires permissions to perform the Lists the JSON Lines within a dataset. An Amazon Lookout for Vision JSON Line contains the anomaly information for a single image, including the image location and the assigned label. This operation requires permissions to perform the Lists the versions of a model in an Amazon Lookout for Vision project. This operation requires permissions to perform the Lists the Amazon Lookout for Vision projects in your AWS account. This operation requires permissions to perform the Returns a list of tags attached to the specified Amazon Lookout for Vision model. This operation requires permissions to perform the Starts the running of the version of an Amazon Lookout for Vision model. Starting a model takes a while to complete. To check the current state of the model, use DescribeModel. Once the model is running, you can detect custom labels in new images by calling DetectAnomalies. You are charged for the amount of time that the model is running. To stop a running model, call StopModel. This operation requires permissions to perform the Stops a running model. The operation might take a while to complete. To check the current status, call DescribeModel. This operation requires permissions to perform the Adds one or more key-value tags to an Amazon Lookout for Vision model. For more information, see Tagging a model in the Amazon Lookout for Vision Developer Guide. This operation requires permissions to perform the Removes one or more tags from an Amazon Lookout for Vision model. For more information, see Tagging a model in the Amazon Lookout for Vision Developer Guide. This operation requires permissions to perform the Adds one or more JSON Line entries to a dataset. A JSON Line includes information about an image used for training or testing an Amazon Lookout for Vision model. The following is an example JSON Line. Updating a dataset might take a while to complete. To check the current status, call DescribeDataset and check the This operation requires permissions to perform the Describes an Amazon Lookout for Vision model. A description for the version of the model. Contains the description of the model. A description for the version of the model. The description for the model. The description for the model. Information about the evaluation performance of a trained model. Performance metrics for the model. Created during training. Performance metrics for the model. Created during training. Performance metrics for the model. Not available until training has successfully completed. The name of the project in which you want to create a dataset. The name of the project in which you want to create a model version. A name for the project. The name for the project. The name of the project that contains the dataset. The name of the project that contains the dataset that you want to delete. The name of the project that contains the model that you want to delete. The unencrypted image bytes that you want to analyze. A key and value pair that is attached to the specified Amazon Lookout for Vision model. The Amazon Resource Name (ARN) of the model for which you want to list tags.
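The model tagging operations described above (TagResource, UntagResource, ListTagsForResource) follow the usual key-value pattern. An illustrative sketch, assuming a TagResourceInput shape of ResourceArn plus a list of Key/Value tags; the model ARN and tag are placeholders.

// Minimal sketch: attach a tag to a Lookout for Vision model.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lookoutforvision"
)

func main() {
	svc := lookoutforvision.New(session.Must(session.NewSession()))
	_, err := svc.TagResource(&lookoutforvision.TagResourceInput{
		ResourceArn: aws.String("arn:aws:lookoutvision:us-east-1:123456789012:model/my-project/1"),
		Tags: []*lookoutforvision.Tag{
			{Key: aws.String("Stage"), Value: aws.String("production")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}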
The Amazon Resource Name (ARN) of the model to assign the tags. The Amazon Resource Name (ARN) of the model from which you want to remove tags. The key of the tag that is attached to the specified model. A list of the keys of the tags that you want to remove. A set of tags (key-value pairs) that you want to attach to the model. A map of tag keys and values attached to the specified model. The key-value tags to assign to the model. The value of the tag that is attached to the specified model. Amazon Lookout for Vision is temporarily unable to process the request. Try your call again. Deletes the organization. You can delete an organization only by using credentials from the management account. The organization must be empty of member accounts. Deletes an organizational unit (OU) from a root or another OU. You must first remove all accounts and child OUs from the OU that you want to delete. This operation can be called only from the organization's management account. Deletes the specified policy from your organization. Before you perform this operation, you must first detach the policy from all organizational units (OUs), roots, and accounts. This operation can be called only from the organization's management account. Removes the specified member AWS account as a delegated administrator for the specified AWS service. Deregistering a delegated administrator can have unintended impacts on the functionality of the enabled AWS service. See the documentation for the enabled service before you deregister a delegated administrator so that you understand any potential impacts. You can run this action only for AWS services that support this feature. For a current list of services that support it, see the column Supports Delegated Administrator in the table at AWS Services that you can use with AWS Organizations in the AWS Organizations User Guide. This operation can be called only from the organization's management account. Removes the specified member AWS account as a delegated administrator for the specified AWS service. Deregistering a delegated administrator can have unintended impacts on the functionality of the enabled AWS service. See the documentation for the enabled service before you deregister a delegated administrator so that you understand any potential impacts. You can run this action only for AWS services that support this feature. For a current list of services that support it, see the column Supports Delegated Administrator in the table at AWS Services that you can use with AWS Organizations in the AWS Organizations User Guide. This operation can be called only from the organization's management account. Retrieves AWS Organizations-related information about the specified account. This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Retrieves the current status of an asynchronous request to create an account. This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Returns the contents of the effective policy for specified policy type and account. The effective policy is the aggregation of any policies of the specified type that the account inherits, plus any policy of that type that is directly attached to the account. This operation applies only to policy types other than service control policies (SCPs). 
For more information about policy inheritance, see How Policy Inheritance Works in the AWS Organizations User Guide. This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Retrieves information about an organizational unit (OU). This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Retrieves information about a policy. This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Detaches a policy from a target root, organizational unit (OU), or account. If the policy being detached is a service control policy (SCP), the changes to permissions for AWS Identity and Access Management (IAM) users and roles in affected accounts are immediate. Every root, OU, and account must have at least one SCP attached. If you want to replace the default This operation can be called only from the organization's management account. Disables the integration of an AWS service (the service that is specified by We recommend that you disable integration between AWS Organizations and the specified AWS service by using the console or commands that are provided by the specified service. Doing so ensures that the other service is aware that it can clean up any resources that are required only for the integration. How the service cleans up its resources in the organization's accounts depends on that service. For more information, see the documentation for the other AWS service. After you perform the For more information about integrating other services with AWS Organizations, including the list of services that work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide. This operation can be called only from the organization's management account. Disables the integration of an AWS service (the service that is specified by We strongly recommend that you don't use this command to disable integration between AWS Organizations and the specified AWS service. Instead, use the console or commands that are provided by the specified service. This lets the trusted service perform any required initialization when enabling trusted access, such as creating any required resources and any required clean up of resources when disabling trusted access. For information about how to disable trusted service access to your organization using the trusted service, see the Learn more link under the Supports Trusted Access column at AWS services that you can use with AWS Organizations on this page. If you disable access by using this command, it causes the following actions to occur: The service can no longer create a service-linked role in the accounts in your organization. This means that the service can't perform operations on your behalf on any new accounts in your organization. The service can still perform operations in older accounts until the service completes its clean-up from AWS Organizations. The service can no longer perform tasks in the member accounts in the organization, unless those operations are explicitly permitted by the IAM policies that are attached to your roles. This includes any data aggregation from the member accounts to the management account, or to a delegated administrator account, where relevant.
Some services detect this and clean up any remaining data or resources related to the integration, while other services stop accessing the organization but leave any historical data and configuration in place to support a possible re-enabling of the integration. Using the other service's console or commands to disable the integration ensures that the other service is aware that it can clean up any resources that are required only for the integration. How the service cleans up its resources in the organization's accounts depends on that service. For more information, see the documentation for the other AWS service. After you perform the For more information about integrating other services with AWS Organizations, including the list of services that work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide. This operation can be called only from the organization's management account. Disables an organizational policy type in a root. A policy of a certain type can be attached to entities in a root only if that type is enabled in the root. After you perform this operation, you no longer can attach policies of the specified type to that root or to any organizational unit (OU) or account in that root. You can undo this by using the EnablePolicyType operation. This is an asynchronous request that AWS performs in the background. If you disable a policy type for a root, it still appears enabled for the organization if all features are enabled for the organization. AWS recommends that you first use ListRoots to see the status of policy types for a specified root, and then use this operation. This operation can be called only from the organization's management account. To view the status of available policy types in the organization, use DescribeOrganization. Enables the integration of an AWS service (the service that is specified by We recommend that you enable integration between AWS Organizations and the specified AWS service by using the console or commands that are provided by the specified service. Doing so ensures that the service is aware that it can create the resources that are required for the integration. How the service creates those resources in the organization's accounts depends on that service. For more information, see the documentation for the other AWS service. For more information about enabling services to integrate with AWS Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide. This operation can be called only from the organization's management account and only if the organization has enabled all features. Enables all features in an organization. This enables the use of organization policies that can restrict the services and actions that can be called in each account. Until you enable all features, you have access only to consolidated billing, and you can't use any of the advanced account administration features that AWS Organizations supports. For more information, see Enabling All Features in Your Organization in the AWS Organizations User Guide. This operation is required only for organizations that were created explicitly with only the consolidated billing features enabled. Calling this operation sends a handshake to every invited account in the organization. The feature set change can be finalized and the additional features enabled only after all administrators in the invited accounts approve the change by accepting the handshake. 
After you enable all features, you can separately enable or disable individual policy types in a root using EnablePolicyType and DisablePolicyType. To see the status of policy types in a root, use ListRoots. After all invited member accounts accept the handshake, you finalize the feature set change by accepting the handshake that contains After you enable all features in your organization, the management account in the organization can apply policies on all member accounts. These policies can restrict what users and even administrators in those accounts can do. The management account can apply policies that prevent accounts from leaving the organization. Ensure that your account administrators are aware of this. This operation can be called only from the organization's management account. Enables a policy type in a root. After you enable a policy type in a root, you can attach policies of that type to the root, any organizational unit (OU), or account in that root. You can undo this by using the DisablePolicyType operation. This is an asynchronous request that AWS performs in the background. AWS recommends that you first use ListRoots to see the status of policy types for a specified root, and then use this operation. This operation can be called only from the organization's management account. You can enable a policy type in a root only if that policy type is available in the organization. To view the status of available policy types in the organization, use DescribeOrganization. Sends an invitation to another account to join your organization as a member account. AWS Organizations sends email on your behalf to the email address that is associated with the other account's owner. The invitation is implemented as a Handshake whose details are in the response. You can invite AWS accounts only from the same seller as the management account. For example, if your organization's management account was created by Amazon Internet Services Pvt. Ltd (AISPL), an AWS seller in India, you can invite only other AISPL accounts to your organization. You can't combine accounts from AISPL and AWS or from any other AWS seller. For more information, see Consolidated Billing in India. If you receive an exception that indicates that you exceeded your account limits for the organization or that the operation failed because your organization is still initializing, wait one hour and then try again. If the error persists after an hour, contact AWS Support. If the request includes tags, then the requester must have the This operation can be called only from the organization's management account. Removes a member account from its parent organization. This version of the operation is performed by the account that wants to leave. To remove a member account as a user in the management account, use RemoveAccountFromOrganization instead. This operation can be called only from a member account in the organization. The management account in an organization with all features enabled can set service control policies (SCPs) that can restrict what administrators of member accounts can do. This includes preventing them from successfully calling You can leave an organization as a member account only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required of standalone accounts is not automatically collected. 
For each account that you want to make standalone, you must perform the following steps. If any of the steps are already completed for this account, that step doesn't appear. Choose a support plan Provide and verify the required contact information Provide a current payment method AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account isn't attached to an organization. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide. You can leave an organization only after you enable IAM user access to billing in your account. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide. After the account leaves the organization, all tags that were attached to the account object in the organization are deleted. AWS accounts outside of an organization do not support tags. Removes a member account from its parent organization. This version of the operation is performed by the account that wants to leave. To remove a member account as a user in the management account, use RemoveAccountFromOrganization instead. This operation can be called only from a member account in the organization. The management account in an organization with all features enabled can set service control policies (SCPs) that can restrict what administrators of member accounts can do. This includes preventing them from successfully calling You can leave an organization as a member account only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required of standalone accounts is not automatically collected. For each account that you want to make standalone, you must perform the following steps. If any of the steps are already completed for this account, that step doesn't appear. Choose a support plan Provide and verify the required contact information Provide a current payment method AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account isn't attached to an organization. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide. The account that you want to leave must not be a delegated administrator account for any AWS service enabled for your organization. If the account is a delegated administrator, you must first change the delegated administrator account to another account that is remaining in the organization. You can leave an organization only after you enable IAM user access to billing in your account. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide. After the account leaves the organization, all tags that were attached to the account object in the organization are deleted. AWS accounts outside of an organization do not support tags. Returns a list of the AWS services that you enabled to integrate with your organization. After a service on this list creates the resources that it requires for the integration, it can perform operations on your organization and its accounts. 
For more information about integrating other services with AWS Organizations, including the list of services that currently work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide. This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Lists all the accounts in the organization. To request only the accounts in a specified root or organizational unit (OU), use the ListAccountsForParent operation instead. Always check the This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Lists the accounts in an organization that are contained by the specified target root or organizational unit (OU). If you specify the root, you get a list of all the accounts that aren't in any OU. If you specify an OU, you get a list of all the accounts in only that OU and not in any child OUs. To get a list of all accounts in the organization, use the ListAccounts operation. Always check the This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Lists tags that are attached to the specified resource. You can attach tags to the following resources in AWS Organizations. AWS account Organization root Organizational unit (OU) Policy (any type) This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Lists all the roots, organizational units (OUs), and accounts that the specified policy is attached to. Always check the This operation can be called only from the organization's management account or by a member account that is a delegated administrator for an AWS service. Moves an account from its current source parent root or organizational unit (OU) to the specified destination parent root or OU. This operation can be called only from the organization's management account. Enables the specified member account to administer the Organizations features of the specified AWS service. It grants read-only access to AWS Organizations service data. The account still requires IAM permissions to access and administer the AWS service. You can run this action only for AWS services that support this feature. For a current list of services that support it, see the column Supports Delegated Administrator in the table at AWS Services that you can use with AWS Organizations in the AWS Organizations User Guide. This operation can be called only from the organization's management account. Removes the specified account from the organization. The removed account becomes a standalone account that isn't a member of any organization. It's no longer subject to any policies and is responsible for its own bill payments. The organization's management account is no longer charged for any expenses accrued by the member account after it's removed from the organization. This operation can be called only from the organization's management account. Member accounts can remove themselves with LeaveOrganization instead. You can remove an account from your organization only if the account is configured with the information required to operate as a standalone account. 
When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required of standalone accounts is not automatically collected. For an account that you want to make standalone, you must choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account isn't attached to an organization. To remove an account that doesn't yet have this information, you must sign in as the member account and follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide. After the account leaves the organization, all tags that were attached to the account object in the organization are deleted. AWS accounts outside of an organization do not support tags. Enables the specified member account to administer the Organizations features of the specified AWS service. It grants read-only access to AWS Organizations service data. The account still requires IAM permissions to access and administer the AWS service. You can run this action only for AWS services that support this feature. For a current list of services that support it, see the column Supports Delegated Administrator in the table at AWS Services that you can use with AWS Organizations in the AWS Organizations User Guide. This operation can be called only from the organization's management account. Removes the specified account from the organization. The removed account becomes a standalone account that isn't a member of any organization. It's no longer subject to any policies and is responsible for its own bill payments. The organization's management account is no longer charged for any expenses accrued by the member account after it's removed from the organization. This operation can be called only from the organization's management account. Member accounts can remove themselves with LeaveOrganization instead. You can remove an account from your organization only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required of standalone accounts is not automatically collected. For an account that you want to make standalone, you must choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account isn't attached to an organization. To remove an account that doesn't yet have this information, you must sign in as the member account and follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide. The account that you want to leave must not be a delegated administrator account for any AWS service enabled for your organization. If the account is a delegated administrator, you must first change the delegated administrator account to another account that is remaining in the organization. After the account leaves the organization, all tags that were attached to the account object in the organization are deleted. AWS accounts outside of an organization do not support tags. Adds one or more tags to the specified resource. 
Currently, you can attach tags to the following resources in AWS Organizations. AWS account Organization root Organizational unit (OU) Policy (any type) This operation can be called only from the organization's management account. Removes any tags with the specified keys from the specified resource. You can attach tags to the following resources in AWS Organizations. AWS account Organization root Organizational unit (OU) Policy (any type) This operation can be called only from the organization's management account. Renames the specified organizational unit (OU). The ID and ARN don't change. The child OUs and accounts remain in place, and any attached policies of the OU remain attached. This operation can be called only from the organization's management account. The Amazon Resource Name (ARN) of the account. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide. The Amazon Resource Name (ARN) of the account. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Service Authorization Reference. The Amazon Resource Name (ARN) of the delegated administrator's account. The Amazon Resource Name (ARN) of the account that is designated as the management account for the organization. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide. The Amazon Resource Name (ARN) of the account that is designated as the management account for the organization. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Service Authorization Reference. The Amazon Resource Name (ARN) of the policy target. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide. The Amazon Resource Name (ARN) of the policy target. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Service Authorization Reference. The Amazon Resource Name (ARN) of a handshake. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide. The Amazon Resource Name (ARN) of a handshake. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Service Authorization Reference. The Amazon Resource Name (ARN) of an organization. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide. The Amazon Resource Name (ARN) of an organization. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Service Authorization Reference. The Amazon Resource Name (ARN) of this OU. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide. The Amazon Resource Name (ARN) of this OU. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Service Authorization Reference. The Amazon Resource Name (ARN) of the policy. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide. The Amazon Resource Name (ARN) of the policy. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Service Authorization Reference. 
The Amazon Resource Name (ARN) of the root. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide. The Amazon Resource Name (ARN) of the root. For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Service Authorization Reference. Amazon RDS provides an HTTP endpoint to run SQL statements on an Amazon Aurora Serverless DB cluster. To run these statements, you work with the Data Service API. For more information about the Data Service API, see Using the Data API for Aurora Serverless in the Amazon Aurora User Guide. If you have questions or comments related to the Data API, send email to Rds-data-api-feedback@amazon.com.
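As a concrete illustration of the Data API paragraph above, a minimal Go sketch using the rdsdataservice client; the cluster ARN, secret ARN, and database name are placeholders.

// Minimal sketch: run one SQL statement on an Aurora Serverless
// cluster through the Data API's HTTP endpoint.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rdsdataservice"
)

func main() {
	svc := rdsdataservice.New(session.Must(session.NewSession()))
	out, err := svc.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
		ResourceArn: aws.String("arn:aws:rds:us-east-1:123456789012:cluster:my-db-cluster"),
		SecretArn:   aws.String("arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret"),
		Database:    aws.String("mydb"),
		Sql:         aws.String("SELECT 1"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}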
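Returning briefly to the Organizations tagging operations described earlier: tags attach to an AWS account, organization root, OU, or policy by resource ID, and the call must come from the management account. A minimal sketch; the account ID and tag are placeholders.

// Minimal sketch: tag a member account in the organization.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/organizations"
)

func main() {
	svc := organizations.New(session.Must(session.NewSession()))
	_, err := svc.TagResource(&organizations.TagResourceInput{
		ResourceId: aws.String("123456789012"), // an AWS account ID
		Tags: []*organizations.Tag{
			{Key: aws.String("CostCenter"), Value: aws.String("engineering")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}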
ResourceIds
and ScalableDimension
.ResourceId
and ScalableDimension
.ResourceId
, ScalableDimension
, and PolicyNames
.ResourceId
, ScalableDimension
, and ScheduledActionNames
parameters.ResourceId
, ScalableDimension
, and PolicyNames
.ResourceId
, ScalableDimension
, and ScheduledActionNames
parameters.
",
- "TargetTrackingScalingPolicyConfiguration$ScaleOutCooldown": "
",
- "TargetTrackingScalingPolicyConfiguration$ScaleInCooldown": "
"
+ "StepScalingPolicyConfiguration$Cooldown": "
",
+ "TargetTrackingScalingPolicyConfiguration$ScaleOutCooldown": "
",
+ "TargetTrackingScalingPolicyConfiguration$ScaleInCooldown": "
"
}
},
"CustomizedMetricSpecification": {
@@ -156,7 +156,7 @@
}
},
"LimitExceededException": {
- "base": "TargetTrackingScaling
—Not supported for Amazon EMR. StepScaling
—Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces (for Apache Cassandra), or Amazon MSK.TargetTrackingScaling
—Not supported for Amazon EMR. StepScaling
—Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces (for Apache Cassandra), or Amazon MSK.
",
"DeregisterScalableTargetRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
- "DescribeScalingActivitiesRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
- "DescribeScalingPoliciesRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
- "DescribeScheduledActionsRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
+ "DescribeScalingActivitiesRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
+ "DescribeScalingPoliciesRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
+ "DescribeScheduledActionsRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
"PutScalingPolicyRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
"PutScalingPolicyResponse$PolicyARN": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
at(yyyy-mm-ddThh:mm:ss)
\"rate(value unit)
\"cron(fields)
\"minute
| minutes
| hour
| hours
| day
| days
.
at(yyyy-mm-ddThh:mm:ss)
\"rate(value unit)
\"cron(fields)
\"minute
| minutes
| hour
| hours
| day
| days
.Etc/GMT+9
or Pacific/Tahiti
). For more information, see https://www.joda.org/joda-time/timezones.html.
",
"RegisterScalableTargetRequest$ResourceId": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
- "RegisterScalableTargetRequest$RoleARN": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
"ScalableTarget$RoleARN": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
"ScheduledAction$ScheduledActionARN": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
at(yyyy-mm-ddThh:mm:ss)
\"rate(value unit)
\"cron(fields)
\"minute
| minutes
| hour
| hours
| day
| days
.
at(yyyy-mm-ddThh:mm:ss)
\"rate(value unit)
\"cron(fields)
\"minute
| minutes
| hour
| hours
| day
| days
.
"
}
},
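The schedule expression formats above correspond to the PutScheduledAction call in the generated client. A rough, hypothetical sketch in Go (AWS SDK for Go v1); the resource ID is the sample from the list above, and the capacities and cron schedule are invented for illustration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/applicationautoscaling"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := applicationautoscaling.New(sess)

	// Scale the sample ECS service every day at 12:00 UTC using a cron
	// schedule expression; rate(...) and at(...) expressions also work here.
	_, err := svc.PutScheduledAction(&applicationautoscaling.PutScheduledActionInput{
		ServiceNamespace:    aws.String("ecs"),
		ScheduledActionName: aws.String("my-recurring-action"),
		ResourceId:          aws.String("service/default/sample-webapp"),
		ScalableDimension:   aws.String("ecs:service:DesiredCount"),
		Schedule:            aws.String("cron(0 12 * * ? *)"),
		ScalableTargetAction: &applicationautoscaling.ScalableTargetAction{
			MinCapacity: aws.Int64(2),
			MaxCapacity: aws.Int64(10),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("scheduled action created")
}
```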
"ResourceIdsMaxLen1600": {
"base": null,
"refs": {
- "DescribeScalableTargetsRequest$ResourceIds": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
+ "DescribeScalableTargetsRequest$ResourceIds": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.
",
"DescribeScalingPoliciesRequest$PolicyNames": "service
and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp
.spot-fleet-request
and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE
.instancegroup
and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0
.fleet
and the unique identifier is the fleet name. Example: fleet/sample-fleet
.table
and the unique identifier is the table name. Example: table/my-table
.index
and the unique identifier is the index name. Example: table/my-table/index/my-table-index
.cluster
and the unique identifier is the cluster name. Example: cluster:my-db-cluster
.variant
and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering
.OutputValue
from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE
.arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE
.function
and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST
. Example: function:my-function:prod
or function:my-function:1
.table
and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable
.arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5
.true
suspends the specified scaling activities. Setting it to false
(default) resumes the specified scaling activities.
DynamicScalingInSuspended
, while a suspension is in effect, all scale-in activities that are triggered by a scaling policy are suspended.DynamicScalingOutSuspended
, while a suspension is in effect, all scale-out activities that are triggered by a scaling policy are suspended.ScheduledScalingSuspended
, while a suspension is in effect, all scaling activities that involve scheduled actions are suspended. true
suspends the specified scaling activities. Setting it to false
(default) resumes the specified scaling activities.
DynamicScalingInSuspended
, while a suspension is in effect, all scale-in activities that are triggered by a scaling policy are suspended.DynamicScalingOutSuspended
, while a suspension is in effect, all scale-out activities that are triggered by a scaling policy are suspended.ScheduledScalingSuspended
, while a suspension is in effect, all scaling activities that involve scheduled actions are suspended.
"
}
},
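The suspension attributes above are applied through RegisterScalableTarget. A minimal Go sketch, assuming an already-registered ECS scalable target (the resource ID is the sample one from the list above):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/applicationautoscaling"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := applicationautoscaling.New(sess)

	// Suspend all three kinds of scaling activity; flip the booleans to
	// false (the default) to resume them later.
	_, err := svc.RegisterScalableTarget(&applicationautoscaling.RegisterScalableTargetInput{
		ServiceNamespace:  aws.String("ecs"),
		ResourceId:        aws.String("service/default/sample-webapp"),
		ScalableDimension: aws.String("ecs:service:DesiredCount"),
		SuspendedState: &applicationautoscaling.SuspendedState{
			DynamicScalingInSuspended:  aws.Bool(true),
			DynamicScalingOutSuspended: aws.Bool(true),
			ScheduledScalingSuspended:  aws.Bool(true),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```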
+ "ListenerTlsSdsCertificate": {
+ "base": "maxPendingRequests
is 2147483647
.
"
}
},
+ "VirtualGatewayListenerTlsSdsCertificate": {
+ "base": "0
for UM (unacknowledge mode), 1
for AM (acknowledge mode), or 2
for (TM) transparent mode.0
for UM (unacknowledge mode) or 1
for AM (acknowledge mode).VectorEsriLightGrayCanvas
, VectorEsriLight
, VectorEsriStreets
, VectorEsriNavigation
, VectorEsriDarkGrayCanvas
, VectorEsriLightGrayCanvas
, VectorHereBerlin
VectorHereBerlin
, you may not use HERE Maps for Asset Management. See the AWS Service Terms for Amazon Location Service. VectorEsriStreets
, VectorEsriTopographic
, VectorEsriNavigation
, VectorEsriDarkGrayCanvas
, VectorEsriLightGrayCanvas
, VectorHereBerlin
.VectorHereBerlin
, you may not use HERE Maps for Asset Management. See the AWS Service Terms for Amazon Location Service. CreateDataset
CreateDataset can create a training or a test dataset from a valid dataset source (DatasetSource). To create a training dataset for a project, specify train for the value of DatasetType. To create both a training and a test dataset, call CreateDataset twice: on the first call, specify train for the value of DatasetType, and on the second call, specify test for the value of DatasetType. This operation requires permissions to perform the lookoutvision:CreateDataset operation. CreateModel is an asynchronous operation in which Amazon Lookout for Vision trains, tests, and evaluates a new version of a model; to get the current status, check the Status field returned in the response from DescribeModel, and the training results are stored at the location specified in OutputConfig. CreateModel requires permissions to perform the lookoutvision:CreateModel operation, and tagging a model also requires permission to the lookoutvision:TagResource operation. Creating a project requires permissions to perform the lookoutvision:CreateProject operation. Deleting a dataset is asynchronous; to check the current status, check the Status field in the response from a call to DescribeDataset. Deleting requires permissions to perform the lookoutvision:DeleteDataset, lookoutvision:DeleteModel, and lookoutvision:DeleteProject operations, and describing requires permissions to perform the lookoutvision:DescribeDataset, lookoutvision:DescribeModel, and lookoutvision:DescribeProject operations. The response from DetectAnomalies includes a boolean prediction that the image contains one or more anomalies and a confidence value for the prediction. Before calling DetectAnomalies, you must first start your model with the StartModel operation. You are charged for the amount of time, in minutes, that a model runs and for the number of anomaly detection units that your model uses; if you are not using a model, use the StopModel operation to stop it. DetectAnomalies requires permissions to perform the lookoutvision:DetectAnomalies operation, and the list operations require permissions to perform the lookoutvision:ListDatasetEntries, lookoutvision:ListModels, lookoutvision:ListProjects, and lookoutvision:ListTagsForResource operations. Starting, stopping, tagging, and untagging require permissions to perform the lookoutvision:StartModel, lookoutvision:StopModel, lookoutvision:TagResource, and lookoutvision:UntagResource operations. Updating dataset entries is asynchronous; to check the current status, check the Status field in the response. It requires permissions to perform the lookoutvision:UpdateDatasetEntries operation.
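The train/test flow described above can be sketched with the generated Go client. The project name is hypothetical, and omitting DatasetSource (which we assume creates an empty dataset to populate later with UpdateDatasetEntries) is an assumption of this sketch:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lookoutforvision"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := lookoutforvision.New(sess)

	// Call CreateDataset twice: once with "train", once with "test".
	for _, dsType := range []string{"train", "test"} {
		_, err := svc.CreateDataset(&lookoutforvision.CreateDatasetInput{
			ProjectName: aws.String("my-project"), // hypothetical project
			DatasetType: aws.String(dsType),
			// DatasetSource omitted: assumed to create an empty dataset.
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("created", dsType, "dataset")
	}
}
```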
If you want to replace the default FullAWSAccess policy with an SCP that limits the permissions that can be delegated, you must attach the replacement SCP before you can remove the default SCP. This is the authorization strategy of an \"allow list\". If you instead attach a second SCP and leave the FullAWSAccess SCP still attached, and specify \"Effect\": \"Deny\" in the second SCP to override the \"Effect\": \"Allow\" in the FullAWSAccess policy (or any other attached SCP), you're using the authorization strategy of a \"deny list\". Disabling integration of an AWS service (the service that is specified by ServicePrincipal) with AWS Organizations means that the specified service can no longer create a service-linked role in new accounts in your organization, so the service can't perform operations on your behalf on any new accounts. The service can still perform operations in older accounts until it completes its clean-up from AWS Organizations. After you perform the DisableAWSServiceAccess operation, the specified service can no longer perform operations in your organization's accounts unless the operations are explicitly permitted by the IAM policies that are attached to your roles. Enabling integration of an AWS service (the service specified by ServicePrincipal) with AWS Organizations allows the specified service to create a service-linked role in all the accounts in your organization, which allows the service to perform operations on your behalf in your organization and its accounts. To finalize the switch to all features, specify \"Action\": \"ENABLE_ALL_FEATURES\"; this completes the change. Tagging resources requires the organizations:TagResource permission. A member account can remove itself from an organization by calling LeaveOrganization and leaving the organization. Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
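That NextToken contract implies a loop keyed off the token rather than off result counts. A minimal Go sketch using ListAccounts; any List* operation follows the same pattern:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/organizations"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := organizations.New(sess)

	var next *string
	for {
		page, err := svc.ListAccounts(&organizations.ListAccountsInput{NextToken: next})
		if err != nil {
			log.Fatal(err)
		}
		// A page may be empty even when more results exist, so key the loop
		// off NextToken, not off the number of accounts returned.
		for _, acct := range page.Accounts {
			fmt.Println(aws.StringValue(acct.Id), aws.StringValue(acct.Name))
		}
		if page.NextToken == nil {
			break // NextToken is null only when there are no more results
		}
		next = page.NextToken
	}
}
```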
Amazon RDS provides an HTTP endpoint to run SQL statements on an Amazon Aurora Serverless DB cluster. To run these statements, you work with the Data Service API.
For more information about the Data Service API, see Using the Data API for Aurora Serverless in the Amazon Aurora User Guide.
", "operations": { "BatchExecuteStatement": "Runs a batch SQL statement over an array of data.
You can run bulk update and insert operations for multiple records using a DML statement with different parameter sets. Bulk operations can provide a significant performance improvement over individual insert and update operations.
If a call isn't part of a transaction because it doesn't include the transactionID
parameter, changes that result from the call are committed automatically.
Starts a SQL transaction.
<important> <p>A transaction can run for a maximum of 24 hours. A transaction is terminated and rolled back automatically after 24 hours.</p> <p>A transaction times out if no calls use its transaction ID in three minutes. If a transaction times out before it's committed, it's rolled back automatically.</p> <p>DDL statements inside a transaction cause an implicit commit. We recommend that you run each DDL statement in a separate <code>ExecuteStatement</code> call with <code>continueAfterTimeout</code> enabled.</p> </important>
",
@@ -160,7 +160,7 @@
"ExecuteSqlRequest$database": "The name of the database.
", "ExecuteSqlRequest$schema": "The name of the database schema.
", "ExecuteStatementRequest$database": "The name of the database.
", - "ExecuteStatementRequest$schema": "The name of the database schema.
" + "ExecuteStatementRequest$schema": "The name of the database schema.
Currently, the schema parameter isn't supported.
A hint that specifies the correct object type for data type mapping.
Values:
DECIMAL - The corresponding String parameter value is sent as an object of DECIMAL type to the database.
TIMESTAMP - The corresponding String parameter value is sent as an object of TIMESTAMP type to the database. The accepted format is YYYY-MM-DD HH:MM:SS[.FFF].
TIME - The corresponding String parameter value is sent as an object of TIME type to the database. The accepted format is HH:MM:SS[.FFF].
DATE - The corresponding String parameter value is sent as an object of DATE type to the database. The accepted format is YYYY-MM-DD.
A hint that specifies the correct object type for data type mapping. Possible values are as follows:
DATE - The corresponding String parameter value is sent as an object of DATE type to the database. The accepted format is YYYY-MM-DD.
DECIMAL - The corresponding String parameter value is sent as an object of DECIMAL type to the database.
JSON - The corresponding String parameter value is sent as an object of JSON type to the database.
TIME - The corresponding String parameter value is sent as an object of TIME type to the database. The accepted format is HH:MM:SS[.FFF].
TIMESTAMP - The corresponding String parameter value is sent as an object of TIMESTAMP type to the database. The accepted format is YYYY-MM-DD HH:MM:SS[.FFF].
UUID - The corresponding String parameter value is sent as an object of UUID type to the database.
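A type hint travels on the SqlParameter itself. A small Go sketch (ARNs, database, and table are hypothetical) sending a string as a TIMESTAMP in the accepted YYYY-MM-DD HH:MM:SS[.FFF] format:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rdsdataservice"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := rdsdataservice.New(sess)

	// The TIMESTAMP hint tells the Data API to bind the string as a timestamp.
	out, err := svc.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
		ResourceArn: aws.String("arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster"),
		SecretArn:   aws.String("arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret"),
		Database:    aws.String("mydb"),
		Sql:         aws.String("INSERT INTO events (created_at) VALUES (:created_at)"),
		Parameters: []*rdsdataservice.SqlParameter{{
			Name:     aws.String("created_at"),
			TypeHint: aws.String("TIMESTAMP"),
			Value:    &rdsdataservice.Field{StringValue: aws.String("2021-01-15 08:30:00")},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("rows affected:", aws.Int64Value(out.NumberOfRecordsUpdated))
}
```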
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.
", "operations": { - "ActivateKeySigningKey": "Activates a key signing key (KSK) so that it can be used for signing by DNSSEC. This operation changes the KSK status to ACTIVE
.
Activates a key-signing key (KSK) so that it can be used for signing by DNSSEC. This operation changes the KSK status to ACTIVE
.
Associates an Amazon VPC with a private hosted zone.
To perform the association, the VPC and the private hosted zone must already exist. You can't convert a public hosted zone into a private hosted zone.
If you want to associate a VPC that was created by using one AWS account with a private hosted zone that was created by using a different account, the AWS account that created the private hosted zone must first submit a CreateVPCAssociationAuthorization
request. Then the account that created the VPC must submit an AssociateVPCWithHostedZone
request.
Creates, changes, or deletes a resource record set, which contains authoritative DNS information for a specified domain name or subdomain name. For example, you can use ChangeResourceRecordSets
to create a resource record set that routes traffic for test.example.com to a web server that has an IP address of 192.0.2.44.
Deleting Resource Record Sets
To delete a resource record set, you must specify all the same values that you specified when you created it.
Change Batches and Transactional Changes
The request body must include a document with a ChangeResourceRecordSetsRequest
element. The request body contains a list of change items, known as a change batch. Change batches are considered transactional changes. Route 53 validates the changes in the request and then either makes all or none of the changes in the change batch request. This ensures that DNS routing isn't adversely affected by partial changes to the resource record sets in a hosted zone.
For example, suppose a change batch request contains two changes: it deletes the CNAME
resource record set for www.example.com and creates an alias resource record set for www.example.com. If validation for both records succeeds, Route 53 deletes the first resource record set and creates the second resource record set in a single operation. If validation for either the DELETE
or the CREATE
action fails, then the request is canceled, and the original CNAME
record continues to exist.
If you try to delete the same resource record set more than once in a single change batch, Route 53 returns an InvalidChangeBatch
error.
Traffic Flow
To create resource record sets for complex routing configurations, use either the traffic flow visual editor in the Route 53 console or the API actions for traffic policies and traffic policy instances. Save the configuration as a traffic policy, then associate the traffic policy with one or more domain names (such as example.com) or subdomain names (such as www.example.com), in the same hosted zone or in multiple hosted zones. You can roll back the updates if the new configuration isn't performing as expected. For more information, see Using Traffic Flow to Route DNS Traffic in the Amazon Route 53 Developer Guide.
Create, Delete, and Upsert
Use ChangeResourceRecordsSetsRequest
to perform the following actions:
CREATE
: Creates a resource record set that has the specified values.
DELETE
: Deletes an existing resource record set that has the specified values.
UPSERT
: If a resource record set does not already exist, AWS creates it. If a resource set does exist, Route 53 updates it with the values in the request.
Syntaxes for Creating, Updating, and Deleting Resource Record Sets
The syntax for a request depends on the type of resource record set that you want to create, delete, or update, such as weighted, alias, or failover. The XML elements in your request must appear in the order listed in the syntax.
For an example for each type of resource record set, see \"Examples.\"
Don't refer to the syntax in the \"Parameter Syntax\" section, which includes all of the elements for every kind of resource record set that you can create, delete, or update by using ChangeResourceRecordSets
.
Change Propagation to Route 53 DNS Servers
When you submit a ChangeResourceRecordSets
request, Route 53 propagates your changes to all of the Route 53 authoritative DNS servers. While your changes are propagating, GetChange
returns a status of PENDING
. When propagation is complete, GetChange
returns a status of INSYNC
. Changes generally propagate to all Route 53 name servers within 60 seconds. For more information, see GetChange.
Limits on ChangeResourceRecordSets Requests
For information about the limits on a ChangeResourceRecordSets
request, see Limits in the Amazon Route 53 Developer Guide.
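As a sketch of the UPSERT action described above, using the generated Go client with a hypothetical hosted zone ID; the returned ChangeInfo.Status starts as PENDING and can be polled with GetChange until it reports INSYNC:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := route53.New(sess)

	// UPSERT creates the record if it doesn't exist and updates it otherwise.
	out, err := svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("Z3M3LMPEXAMPLE"), // hypothetical zone ID
		ChangeBatch: &route53.ChangeBatch{
			Comment: aws.String("Point test.example.com at a web server"),
			Changes: []*route53.Change{{
				Action: aws.String("UPSERT"),
				ResourceRecordSet: &route53.ResourceRecordSet{
					Name: aws.String("test.example.com"),
					Type: aws.String("A"),
					TTL:  aws.Int64(300),
					ResourceRecords: []*route53.ResourceRecord{
						{Value: aws.String("192.0.2.44")},
					},
				},
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("change status:", aws.StringValue(out.ChangeInfo.Status))
}
```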
Adds, edits, or deletes tags for a health check or a hosted zone.
For information about using tags for cost allocation, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateHealthCheck": "Creates a new health check.
For information about adding health checks to resource record sets, see HealthCheckId in ChangeResourceRecordSets.
ELB Load Balancers
If you're registering EC2 instances with an Elastic Load Balancing (ELB) load balancer, do not create Amazon Route 53 health checks for the EC2 instances. When you register an EC2 instance with a load balancer, you configure settings for an ELB health check, which performs a similar function to a Route 53 health check.
Private Hosted Zones
You can associate health checks with failover resource record sets in a private hosted zone. Note the following:
Route 53 health checkers are outside the VPC. To check the health of an endpoint within a VPC by IP address, you must assign a public IP address to the instance in the VPC.
You can configure a health checker to check the health of an external resource that the instance relies on, such as a database server.
You can create a CloudWatch metric, associate an alarm with the metric, and then create a health check that is based on the state of the alarm. For example, you might create a CloudWatch metric that checks the status of the Amazon EC2 StatusCheckFailed
metric, add an alarm to the metric, and then create a health check that is based on the state of the alarm. For information about creating CloudWatch metrics and alarms by using the CloudWatch console, see the Amazon CloudWatch User Guide.
Creates a new public or private hosted zone. You create records in a public hosted zone to define how you want to route traffic on the internet for a domain, such as example.com, and its subdomains (apex.example.com, acme.example.com). You create records in a private hosted zone to define how you want to route traffic for a domain and its subdomains within one or more Amazon Virtual Private Clouds (Amazon VPCs).
You can't convert a public hosted zone to a private hosted zone or vice versa. Instead, you must create a new hosted zone with the same name and create new resource record sets.
For more information about charges for hosted zones, see Amazon Route 53 Pricing.
Note the following:
You can't create a hosted zone for a top-level domain (TLD) such as .com.
For public hosted zones, Route 53 automatically creates a default SOA record and four NS records for the zone. For more information about SOA and NS records, see NS and SOA Records that Route 53 Creates for a Hosted Zone in the Amazon Route 53 Developer Guide.
If you want to use the same name servers for multiple public hosted zones, you can optionally associate a reusable delegation set with the hosted zone. See the DelegationSetId
element.
If your domain is registered with a registrar other than Route 53, you must update the name servers with your registrar to make Route 53 the DNS service for the domain. For more information, see Migrating DNS Service for an Existing Domain to Amazon Route 53 in the Amazon Route 53 Developer Guide.
When you submit a CreateHostedZone
request, the initial status of the hosted zone is PENDING
. For public hosted zones, this means that the NS and SOA records are not yet available on all Route 53 DNS servers. When the NS and SOA records are available, the status of the zone changes to INSYNC
.
Creates a new key signing key (KSK) associated with a hosted zone. You can only have two KSKs per hosted zone.
", + "CreateKeySigningKey": "Creates a new key-signing key (KSK) associated with a hosted zone. You can only have two KSKs per hosted zone.
", "CreateQueryLoggingConfig": "Creates a configuration for DNS query logging. After you create a query logging configuration, Amazon Route 53 begins to publish log data to an Amazon CloudWatch Logs log group.
DNS query logs contain information about the queries that Route 53 receives for a specified public hosted zone, such as the following:
Route 53 edge location that responded to the DNS query
Domain or subdomain that was requested
DNS record type, such as A or AAAA
DNS response code, such as NoError
or ServFail
Before you create a query logging configuration, perform the following operations.
If you create a query logging configuration using the Route 53 console, Route 53 performs these operations automatically.
Create a CloudWatch Logs log group, and make note of the ARN, which you specify when you create a query logging configuration. Note the following:
You must create the log group in the us-east-1 region.
You must use the same AWS account to create the log group and the hosted zone that you want to configure query logging for.
When you create log groups for query logging, we recommend that you use a consistent prefix, for example:
/aws/route53/hosted zone name
In the next step, you'll create a resource policy, which controls access to one or more log groups and the associated AWS resources, such as Route 53 hosted zones. There's a limit on the number of resource policies that you can create, so we recommend that you use a consistent prefix so you can use the same resource policy for all the log groups that you create for query logging.
Create a CloudWatch Logs resource policy, and give it the permissions that Route 53 needs to create log streams and to send query logs to log streams. For the value of Resource
, specify the ARN for the log group that you created in the previous step. To use the same resource policy for all the CloudWatch Logs log groups that you created for query logging configurations, replace the hosted zone name with *
, for example:
arn:aws:logs:us-east-1:123412341234:log-group:/aws/route53/*
You can't use the CloudWatch console to create or edit a resource policy. You must use the CloudWatch API, one of the AWS SDKs, or the AWS CLI.
When Route 53 finishes creating the configuration for DNS query logging, it does the following:
Creates a log stream for an edge location the first time that the edge location responds to DNS queries for the specified hosted zone. That log stream is used to log all queries that Route 53 responds to for that edge location.
Begins to send query logs to the applicable log stream.
The name of each log stream is in the following format:
hosted zone ID/edge location code
The edge location code is a three-letter code and an arbitrarily assigned number, for example, DFW3. The three-letter code typically corresponds with the International Air Transport Association airport code for an airport near the edge location. (These abbreviations might change in the future.) For a list of edge locations, see \"The Route 53 Global Network\" on the Route 53 Product Details page.
Query logs contain only the queries that DNS resolvers forward to Route 53. If a DNS resolver has already cached the response to a query (such as the IP address for a load balancer for example.com), the resolver will continue to return the cached response. It doesn't forward another query to Route 53 until the TTL for the corresponding resource record set expires. Depending on how many DNS queries are submitted for a resource record set, and depending on the TTL for that resource record set, query logs might contain information about only one query out of every several thousand queries that are submitted to DNS. For more information about how DNS works, see Routing Internet Traffic to Your Website or Web Application in the Amazon Route 53 Developer Guide.
For a list of the values in each query log and the format of each value, see Logging DNS Queries in the Amazon Route 53 Developer Guide.
For information about charges for query logs, see Amazon CloudWatch Pricing.
If you want Route 53 to stop sending query logs to CloudWatch Logs, delete the query logging configuration. For more information, see DeleteQueryLoggingConfig.
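The resource-policy step above can't be done from the CloudWatch console. A Go sketch of the equivalent API call, with a made-up account ID matching the example ARN; the policy document shape is an assumption based on the permissions Route 53 is said to need (logs:CreateLogStream and logs:PutLogEvents):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	// Log groups for Route 53 query logging must live in us-east-1.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := cloudwatchlogs.New(sess)

	policy := `{
	  "Version": "2012-10-17",
	  "Statement": [{
	    "Effect": "Allow",
	    "Principal": {"Service": "route53.amazonaws.com"},
	    "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
	    "Resource": "arn:aws:logs:us-east-1:123412341234:log-group:/aws/route53/*"
	  }]
	}`

	_, err := svc.PutResourcePolicy(&cloudwatchlogs.PutResourcePolicyInput{
		PolicyName:     aws.String("route53-query-logging"),
		PolicyDocument: aws.String(policy),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("resource policy attached")
}
```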
Creates a delegation set (a group of four name servers) that can be reused by multiple hosted zones that were created by the same AWS account.
You can also create a reusable delegation set that uses the four name servers that are associated with an existing hosted zone. Specify the hosted zone ID in the CreateReusableDelegationSet
request.
You can't associate a reusable delegation set with a private hosted zone.
For information about using a reusable delegation set to configure white label name servers, see Configuring White Label Name Servers.
The process for migrating existing hosted zones to use a reusable delegation set is comparable to the process for configuring white label name servers. You need to perform the following steps:
Create a reusable delegation set.
Recreate hosted zones, and reduce the TTL to 60 seconds or less.
Recreate resource record sets in the new hosted zones.
Change the registrar's name servers to use the name servers for the new hosted zones.
Monitor traffic for the website or application.
Change TTLs back to their original values.
If you want to migrate existing hosted zones to use a reusable delegation set, the existing hosted zones can't use any of the name servers that are assigned to the reusable delegation set. If one or more hosted zones do use one or more name servers that are assigned to the reusable delegation set, you can do one of the following:
For small numbers of hosted zones—up to a few hundred—it's relatively easy to create reusable delegation sets until you get one that has four name servers that don't overlap with any of the name servers in your hosted zones.
For larger numbers of hosted zones, the easiest solution is to use more than one reusable delegation set.
For larger numbers of hosted zones, you can also migrate hosted zones that have overlapping name servers to hosted zones that don't have overlapping name servers, then migrate the hosted zones again to use the reusable delegation set.
Creates a traffic policy, which you use to create multiple DNS resource record sets for one domain name (such as example.com) or one subdomain name (such as www.example.com).
", "CreateTrafficPolicyInstance": "Creates resource record sets in a specified hosted zone based on the settings in a specified traffic policy version. In addition, CreateTrafficPolicyInstance
associates the resource record sets with a specified domain name (such as example.com) or subdomain name (such as www.example.com). Amazon Route 53 responds to DNS queries for the domain or subdomain name by using the resource record sets that CreateTrafficPolicyInstance
created.
Creates a new version of an existing traffic policy. When you create a new version of a traffic policy, you specify the ID of the traffic policy that you want to update and a JSON-formatted document that describes the new version. You use traffic policies to create multiple DNS resource record sets for one domain name (such as example.com) or one subdomain name (such as www.example.com). You can create a maximum of 1000 versions of a traffic policy. If you reach the limit and need to create another version, you'll need to start a new traffic policy.
", "CreateVPCAssociationAuthorization": "Authorizes the AWS account that created a specified VPC to submit an AssociateVPCWithHostedZone
request to associate the VPC with a specified hosted zone that was created by a different account. To submit a CreateVPCAssociationAuthorization
request, you must use the account that created the hosted zone. After you authorize the association, use the account that created the VPC to submit an AssociateVPCWithHostedZone
request.
If you want to associate multiple VPCs that you created by using one account with a hosted zone that you created by using a different account, you must submit one authorization request for each VPC.
Deactivates a key signing key (KSK) so that it will not be used for signing by DNSSEC. This operation changes the KSK status to INACTIVE
.
Deactivates a key-signing key (KSK) so that it will not be used for signing by DNSSEC. This operation changes the KSK status to INACTIVE
.
Deletes a health check.
Amazon Route 53 does not prevent you from deleting a health check even if the health check is associated with one or more resource record sets. If you delete a health check and you don't update the associated resource record sets, the future status of the health check can't be predicted and may change. This will affect the routing of DNS queries for your DNS failover configuration. For more information, see Replacing and Deleting Health Checks in the Amazon Route 53 Developer Guide.
If you're using AWS Cloud Map and you configured Cloud Map to create a Route 53 health check when you register an instance, you can't use the Route 53 DeleteHealthCheck
command to delete the health check. The health check is deleted automatically when you deregister the instance; there can be a delay of several hours before the health check is deleted from Route 53.
Deletes a hosted zone.
If the hosted zone was created by another service, such as AWS Cloud Map, see Deleting Public Hosted Zones That Were Created by Another Service in the Amazon Route 53 Developer Guide for information about how to delete it. (The process is the same for public and private hosted zones that were created by another service.)
If you want to keep your domain registration but you want to stop routing internet traffic to your website or web application, we recommend that you delete resource record sets in the hosted zone instead of deleting the hosted zone.
If you delete a hosted zone, you can't undelete it. You must create a new hosted zone and update the name servers for your domain registration, which can require up to 48 hours to take effect. (If you delegated responsibility for a subdomain to a hosted zone and you delete the child hosted zone, you must update the name servers in the parent hosted zone.) In addition, if you delete a hosted zone, someone could hijack the domain and route traffic to their own resources using your domain name.
If you want to avoid the monthly charge for the hosted zone, you can transfer DNS service for the domain to a free DNS service. When you transfer DNS service, you have to update the name servers for the domain registration. If the domain is registered with Route 53, see UpdateDomainNameservers for information about how to replace Route 53 name servers with name servers for the new DNS service. If the domain is registered with another registrar, use the method provided by the registrar to update name servers for the domain registration. For more information, perform an internet search on \"free DNS service.\"
You can delete a hosted zone only if it contains only the default SOA record and NS resource record sets. If the hosted zone contains other resource record sets, you must delete them before you can delete the hosted zone. If you try to delete a hosted zone that contains other resource record sets, the request fails, and Route 53 returns a HostedZoneNotEmpty
error. For information about deleting records from your hosted zone, see ChangeResourceRecordSets.
To verify that the hosted zone has been deleted, do one of the following:
Use the GetHostedZone
action to request information about the hosted zone.
Use the ListHostedZones
action to get a list of the hosted zones associated with the current AWS account.
Deletes a key signing key (KSK). Before you can delete a KSK, you must deactivate it. The KSK must be deactived before you can delete it regardless of whether the hosted zone is enabled for DNSSEC signing.
", + "DeleteKeySigningKey": "Deletes a key-signing key (KSK). Before you can delete a KSK, you must deactivate it. The KSK must be deactived before you can delete it regardless of whether the hosted zone is enabled for DNSSEC signing.
", "DeleteQueryLoggingConfig": "Deletes a configuration for DNS query logging. If you delete a configuration, Amazon Route 53 stops sending query logs to CloudWatch Logs. Route 53 doesn't delete any logs that are already in CloudWatch Logs.
For more information about DNS query logs, see CreateQueryLoggingConfig.
", "DeleteReusableDelegationSet": "Deletes a reusable delegation set.
You can delete a reusable delegation set only if it isn't associated with any hosted zones.
To verify that the reusable delegation set is not associated with any hosted zones, submit a GetReusableDelegationSet request and specify the ID of the reusable delegation set that you want to delete.
", "DeleteTrafficPolicy": "Deletes a traffic policy.
When you delete a traffic policy, Route 53 sets a flag on the policy to indicate that it has been deleted. However, Route 53 never fully deletes the traffic policy. Note the following:
Deleted traffic policies aren't listed if you run ListTrafficPolicies.
There's no way to get a list of deleted policies.
If you retain the ID of the policy, you can get information about the policy, including the traffic policy document, by running GetTrafficPolicy.
Deletes a traffic policy instance and all of the resource record sets that Amazon Route 53 created when you created the instance.
In the Route 53 console, traffic policy instances are known as policy records.
Removes authorization to submit an AssociateVPCWithHostedZone
request to associate a specified VPC with a hosted zone that was created by a different account. You must use the account that created the hosted zone to submit a DeleteVPCAssociationAuthorization
request.
Sending this request only prevents the AWS account that created the VPC from associating the VPC with the Amazon Route 53 hosted zone in the future. If the VPC is already associated with the hosted zone, DeleteVPCAssociationAuthorization
won't disassociate the VPC from the hosted zone. If you want to delete an existing association, use DisassociateVPCFromHostedZone
.
Disables DNSSEC signing in a specific hosted zone. This action does not deactivate any key signing keys (KSKs) that are active in the hosted zone.
", + "DisableHostedZoneDNSSEC": "Disables DNSSEC signing in a specific hosted zone. This action does not deactivate any key-signing keys (KSKs) that are active in the hosted zone.
", "DisassociateVPCFromHostedZone": "Disassociates an Amazon Virtual Private Cloud (Amazon VPC) from an Amazon Route 53 private hosted zone. Note the following:
You can't disassociate the last Amazon VPC from a private hosted zone.
You can't convert a private hosted zone into a public hosted zone.
You can submit a DisassociateVPCFromHostedZone
request using either the account that created the hosted zone or the account that created the Amazon VPC.
Some services, such as AWS Cloud Map and Amazon Elastic File System (Amazon EFS) automatically create hosted zones and associate VPCs with the hosted zones. A service can create a hosted zone using your account or using its own account. You can disassociate a VPC from a hosted zone only if the service created the hosted zone using your account.
When you run DisassociateVPCFromHostedZone, if the hosted zone has a value for OwningAccount
, you can use DisassociateVPCFromHostedZone
. If the hosted zone has a value for OwningService
, you can't use DisassociateVPCFromHostedZone
.
Enables DNSSEC signing in a specific hosted zone.
", "GetAccountLimit": "Gets the specified limit for the current account, for example, the maximum number of health checks that you can create using the account.
For the default limit, see Limits in the Amazon Route 53 Developer Guide. To request a higher limit, open a case.
You can also view account limits in AWS Trusted Advisor. Sign in to the AWS Management Console and open the Trusted Advisor console at https://console.aws.amazon.com/trustedadvisor/. Then choose Service limits in the navigation pane.
Returns the current status of a change batch request. The status is one of the following values:
PENDING
indicates that the changes in this request have not propagated to all Amazon Route 53 DNS servers. This is the initial status of all change batch requests.
INSYNC
indicates that the changes have propagated to all Route 53 DNS servers.
GetCheckerIpRanges
still works, but we recommend that you download ip-ranges.json, which includes IP address ranges for all AWS services. For more information, see IP Address Ranges of Amazon Route 53 Servers in the Amazon Route 53 Developer Guide.
Returns information about DNSSEC for a specific hosted zone, including the key signing keys (KSKs) and zone signing keys (ZSKs) in the hosted zone.
", - "GetGeoLocation": "Gets information about whether a specified geographic location is supported for Amazon Route 53 geolocation resource record sets.
Use the following syntax to determine whether a continent is supported for geolocation:
GET /2013-04-01/geolocation?continentcode=two-letter abbreviation for a continent
Use the following syntax to determine whether a country is supported for geolocation:
GET /2013-04-01/geolocation?countrycode=two-character country code
Use the following syntax to determine whether a subdivision of a country is supported for geolocation:
GET /2013-04-01/geolocation?countrycode=two-character country code&subdivisioncode=subdivision code
Route 53 does not perform authorization for this API because it retrieves information that is already available to the public.
GetCheckerIpRanges
still works, but we recommend that you download ip-ranges.json, which includes IP address ranges for all AWS services. For more information, see IP Address Ranges of Amazon Route 53 Servers in the Amazon Route 53 Developer Guide.
Returns information about DNSSEC for a specific hosted zone, including the key-signing keys (KSKs) in the hosted zone.
", + "GetGeoLocation": "Gets information about whether a specified geographic location is supported for Amazon Route 53 geolocation resource record sets.
Route 53 does not perform authorization for this API because it retrieves information that is already available to the public.
Use the following syntax to determine whether a continent is supported for geolocation:
GET /2013-04-01/geolocation?continentcode=two-letter abbreviation for a continent
Use the following syntax to determine whether a country is supported for geolocation:
GET /2013-04-01/geolocation?countrycode=two-character country code
Use the following syntax to determine whether a subdivision of a country is supported for geolocation:
GET /2013-04-01/geolocation?countrycode=two-character country code&subdivisioncode=subdivision code
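The same checks are exposed through the generated client. A short Go sketch of the subdivision form (the state code is chosen arbitrarily):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := route53.New(sess)

	// Equivalent of GET /2013-04-01/geolocation?countrycode=US&subdivisioncode=WA.
	out, err := svc.GetGeoLocation(&route53.GetGeoLocationInput{
		CountryCode:     aws.String("US"),
		SubdivisionCode: aws.String("WA"),
	})
	if err != nil {
		log.Fatal(err) // an error here typically means the location isn't supported
	}
	fmt.Println("supported subdivision:", aws.StringValue(out.GeoLocationDetails.SubdivisionName))
}
```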
Gets information about a specified health check.
", "GetHealthCheckCount": "Retrieves the number of health checks that are associated with the current AWS account.
", "GetHealthCheckLastFailureReason": "Gets the reason that a specified health check failed most recently.
", @@ -45,7 +45,7 @@ "GetTrafficPolicy": "Gets information about a specific traffic policy version.
For information about how deleting a traffic policy affects the response from GetTrafficPolicy
, see DeleteTrafficPolicy.
Gets information about a specified traffic policy instance.
After you submit a CreateTrafficPolicyInstance
or an UpdateTrafficPolicyInstance
request, there's a brief delay while Amazon Route 53 creates the resource record sets that are specified in the traffic policy definition. For more information, see the State
response element.
In the Route 53 console, traffic policy instances are known as policy records.
Gets the number of traffic policy instances that are associated with the current AWS account.
", - "ListGeoLocations": "Retrieves a list of supported geographic locations.
Countries are listed first, and continents are listed last. If Amazon Route 53 supports subdivisions for a country (for example, states or provinces), the subdivisions for that country are listed in alphabetical order immediately after the corresponding country.
For a list of supported geolocation codes, see the GeoLocation data type.
", + "ListGeoLocations": "Retrieves a list of supported geographic locations.
Countries are listed first, and continents are listed last. If Amazon Route 53 supports subdivisions for a country (for example, states or provinces), the subdivisions for that country are listed in alphabetical order immediately after the corresponding country.
Route 53 does not perform authorization for this API because it retrieves information that is already available to the public.
For a list of supported geolocation codes, see the GeoLocation data type.
", "ListHealthChecks": "Retrieve a list of the health checks that are associated with the current AWS account.
", "ListHostedZones": "Retrieves a list of the public and private hosted zones that are associated with the current AWS account. The response includes a HostedZones
child element for each hosted zone.
Amazon Route 53 returns a maximum of 100 items in each response. If you have a lot of hosted zones, you can use the maxitems
parameter to list them in groups of up to 100.
Retrieves a list of your hosted zones in lexicographic order. The response includes a HostedZones
child element for each hosted zone created by the current AWS account.
ListHostedZonesByName
sorts hosted zones by name with the labels reversed. For example:
com.example.www.
Note the trailing dot, which can change the sort order in some circumstances.
If the domain name includes escape characters or Punycode, ListHostedZonesByName
alphabetizes the domain name using the escaped or Punycoded value, which is the format that Amazon Route 53 saves in its database. For example, to create a hosted zone for exämple.com, you specify ex\\344mple.com for the domain name. ListHostedZonesByName
alphabetizes it as:
com.ex\\344mple.
The labels are reversed and alphabetized using the escaped value. For more information about valid domain name formats, including internationalized domain names, see DNS Domain Name Format in the Amazon Route 53 Developer Guide.
Route 53 returns up to 100 items in each response. If you have a lot of hosted zones, use the MaxItems
parameter to list them in groups of up to 100. The response includes values that help navigate from one group of MaxItems
hosted zones to the next:
The DNSName
and HostedZoneId
elements in the response contain the values, if any, specified for the dnsname
and hostedzoneid
parameters in the request that produced the current response.
The MaxItems
element in the response contains the value, if any, that you specified for the maxitems
parameter in the request that produced the current response.
If the value of IsTruncated
in the response is true, there are more hosted zones associated with the current AWS account.
If IsTruncated
is false, this response includes the last hosted zone that is associated with the current account. The NextDNSName
element and NextHostedZoneId
elements are omitted from the response.
The NextDNSName
and NextHostedZoneId
elements in the response contain the domain name and the hosted zone ID of the next hosted zone that is associated with the current AWS account. If you want to list more hosted zones, make another call to ListHostedZonesByName
, and specify the value of NextDNSName
and NextHostedZoneId
in the dnsname
and hostedzoneid
parameters, respectively.
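A Go sketch of the pagination scheme just described, feeding NextDNSName and NextHostedZoneId back into the dnsname and hostedzoneid parameters:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := route53.New(sess)

	input := &route53.ListHostedZonesByNameInput{MaxItems: aws.String("100")}
	for {
		page, err := svc.ListHostedZonesByName(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, zone := range page.HostedZones {
			fmt.Println(aws.StringValue(zone.Name), aws.StringValue(zone.Id))
		}
		if !aws.BoolValue(page.IsTruncated) {
			break // last page: NextDNSName and NextHostedZoneId are omitted
		}
		// Feed the Next* values back in to fetch the next group of zones.
		input.DNSName = page.NextDNSName
		input.HostedZoneId = page.NextHostedZoneId
	}
}
```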
For the CloudWatch alarm that you want Route 53 health checkers to use to determine whether this health check is healthy, the region that the alarm was created in.
For the current list of CloudWatch regions, see Amazon CloudWatch in the AWS Service Endpoints chapter of the Amazon Web Services General Reference.
" + "AlarmIdentifier$Region": "For the CloudWatch alarm that you want Route 53 health checkers to use to determine whether this health check is healthy, the region that the alarm was created in.
For the current list of CloudWatch regions, see Amazon CloudWatch endpoints and quotas in the Amazon Web Services General Reference.
" } }, "ComparisonOperator": { @@ -747,8 +747,8 @@ "base": null, "refs": { "GeoLocation$SubdivisionCode": "For geolocation resource record sets, the two-letter code for a state of the United States. Route 53 doesn't support any other values for SubdivisionCode
. For a list of state abbreviations, see Appendix B: Two–Letter State and Possession Abbreviations on the United States Postal Service website.
If you specify subdivisioncode
, you must also specify US
for CountryCode
.
The code for the subdivision. Route 53 currently supports only states in the United States.
", - "GetGeoLocationRequest$SubdivisionCode": "For SubdivisionCode
, Amazon Route 53 supports only states of the United States. For a list of state abbreviations, see Appendix B: Two–Letter State and Possession Abbreviations on the United States Postal Service website.
If you specify subdivisioncode
, you must also specify US
for CountryCode
.
The code for the subdivision, such as a particular state within the United States. For a list of US state abbreviations, see Appendix B: Two–Letter State and Possession Abbreviations on the United States Postal Service website. For a list of all supported subdivision codes, use the ListGeoLocations API.
", + "GetGeoLocationRequest$SubdivisionCode": "The code for the subdivision, such as a particular state within the United States. For a list of US state abbreviations, see Appendix B: Two–Letter State and Possession Abbreviations on the United States Postal Service website. For a list of all supported subdivision codes, use the ListGeoLocations API.
", "ListGeoLocationsRequest$StartSubdivisionCode": "The code for the state of the United States with which you want to start listing locations that Amazon Route 53 supports for geolocation. If Route 53 has already returned a page or more of results, if IsTruncated
is true
, and if NextSubdivisionCode
from the previous response has a value, enter that value in startsubdivisioncode
to return the next page of results.
To list subdivisions (U.S. states), you must include both startcountrycode
and startsubdivisioncode
.
If IsTruncated
is true
, you can make a follow-up request to display more locations. Enter the value of NextSubdivisionCode
in the startsubdivisioncode
parameter in another ListGeoLocations
request.
The ID for the health check for which you want the last failure reason. When you created the health check, CreateHealthCheck
returned the ID in the response, in the HealthCheckId
element.
If you want to get the last failure reason for a calculated health check, you must use the Amazon Route 53 console or the CloudWatch console. You can't use GetHealthCheckLastFailureReason
for a calculated health check.
The identifier that Amazon Route 53 assigned to the health check when you created it. When you add or update a resource record set, you use this value to specify which health check to use. The value can be up to 64 characters long.
", "GetHealthCheckStatusRequest$HealthCheckId": "The ID for the health check that you want the current status for. When you created the health check, CreateHealthCheck
returned the ID in the response, in the HealthCheckId
element.
If you want to check the status of a calculated health check, you must use the Amazon Route 53 console or the CloudWatch console. You can't use GetHealthCheckStatus
to get the status of a calculated health check.
The identifier that Amazon Route 53assigned to the health check when you created it. When you add or update a resource record set, you use this value to specify which health check to use. The value can be up to 64 characters long.
", + "HealthCheck$Id": "The identifier that Amazon Route 53 assigned to the health check when you created it. When you add or update a resource record set, you use this value to specify which health check to use. The value can be up to 64 characters long.
", "ResourceRecordSet$HealthCheckId": "If you want Amazon Route 53 to return this resource record set in response to a DNS query only when the status of a health check is healthy, include the HealthCheckId
element and specify the ID of the applicable health check.
Route 53 determines whether a resource record set is healthy based on one of the following:
By periodically sending a request to the endpoint that is specified in the health check
By aggregating the status of a specified group of health checks (calculated health checks)
By determining the current state of a CloudWatch alarm (CloudWatch metric health checks)
Route 53 doesn't check the health of the endpoint that is specified in the resource record set, for example, the endpoint specified by the IP address in the Value
element. When you add a HealthCheckId
element to a resource record set, Route 53 checks the health of the endpoint that you specified in the health check.
For more information, see the following topics in the Amazon Route 53 Developer Guide:
When to Specify HealthCheckId
Specifying a value for HealthCheckId
is useful only when Route 53 is choosing between two or more resource record sets to respond to a DNS query, and you want Route 53 to base the choice in part on the status of a health check. Configuring health checks makes sense only in the following configurations:
Non-alias resource record sets: You're checking the health of a group of non-alias resource record sets that have the same routing policy, name, and type (such as multiple weighted records named www.example.com with a type of A) and you specify health check IDs for all the resource record sets.
If the health check status for a resource record set is healthy, Route 53 includes the record among the records that it responds to DNS queries with.
If the health check status for a resource record set is unhealthy, Route 53 stops responding to DNS queries using the value for that resource record set.
If the health check status for all resource record sets in the group is unhealthy, Route 53 considers all resource record sets in the group healthy and responds to DNS queries accordingly.
Alias resource record sets: You specify the following settings:
You set EvaluateTargetHealth
to true for an alias resource record set in a group of resource record sets that have the same routing policy, name, and type (such as multiple weighted records named www.example.com with a type of A).
You configure the alias resource record set to route traffic to a non-alias resource record set in the same hosted zone.
You specify a health check ID for the non-alias resource record set.
If the health check status is healthy, Route 53 considers the alias resource record set to be healthy and includes the alias record among the records that it responds to DNS queries with.
If the health check status is unhealthy, Route 53 stops responding to DNS queries using the alias resource record set.
The alias resource record set can also route traffic to a group of non-alias resource record sets that have the same routing policy, name, and type. In that configuration, associate health checks with all of the resource record sets in the group of non-alias resource record sets.
Geolocation Routing
For geolocation resource record sets, if an endpoint is unhealthy, Route 53 looks for a resource record set for the larger, associated geographic region. For example, suppose you have resource record sets for a state in the United States, for the entire United States, for North America, and a resource record set for which the value of CountryCode is *, which applies to all locations.
The United States
North America
The default resource record set
Specifying the Health Check Endpoint by Domain Name
If your health checks specify the endpoint only by domain name, we recommend that you create a separate health check for each endpoint. For example, create a health check for each HTTP server that is serving content for www.example.com. For the value of FullyQualifiedDomainName, specify the domain name of the server (such as us-east-2-www.example.com), not the name of the resource record sets (www.example.com).
Health check results will be unpredictable if you do the following:
Create a health check that has the same value for FullyQualifiedDomainName as the name of a resource record set.
Associate that health check with the resource record set.
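A hedged sketch of that recommendation in aws-sdk-go: one health check per backing server, with FullyQualifiedDomainName set to the server's own name rather than the record set name. The domain names, port, and resource path are assumptions for illustration.

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	out, err := svc.CreateHealthCheck(&route53.CreateHealthCheckInput{
		// CallerReference must be unique for each new health check.
		CallerReference: aws.String(fmt.Sprintf("us-east-2-www-%d", time.Now().Unix())),
		HealthCheckConfig: &route53.HealthCheckConfig{
			Type: aws.String("HTTP"),
			// The server's domain name, not the record set name www.example.com.
			FullyQualifiedDomainName: aws.String("us-east-2-www.example.com"),
			Port:                     aws.Int64(80),
			ResourcePath:             aws.String("/health"), // assumed path
		},
	})
	if err != nil {
		panic(err)
	}
	// The response carries the ID to reference from a record's HealthCheckId.
	fmt.Println(aws.StringValue(out.HealthCheck.Id))
}
```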
The ID for the health check for which you want detailed information. When you created the health check, CreateHealthCheck returned the ID in the response, in the HealthCheckId element.
The key signing key (KSK) name that you specified isn't a valid name.
", + "base": "The key-signing key (KSK) name that you specified isn't a valid name.
", "refs": { } }, "InvalidKeySigningKeyStatus": { - "base": "The key signing key (KSK) status isn't valid or another KSK has the status INTERNAL_FAILURE
.
The key-signing key (KSK) status isn't valid or another KSK has the status INTERNAL_FAILURE
.
A key signing key (KSK) is a complex type that represents a public/private key pair. The private key is used to generate a digital signature for the zone signing key (ZSK). The public key is stored in the DNS and is used to authenticate the ZSK. A KSK is always associated with a hosted zone; it cannot exist by itself.
", + "base": "A key-signing key (KSK) is a complex type that represents a public/private key pair. The private key is used to generate a digital signature for the zone signing key (ZSK). The public key is stored in the DNS and is used to authenticate the ZSK. A KSK is always associated with a hosted zone; it cannot exist by itself.
", "refs": { - "CreateKeySigningKeyResponse$KeySigningKey": "The key signing key (KSK) that the request creates.
", + "CreateKeySigningKeyResponse$KeySigningKey": "The key-signing key (KSK) that the request creates.
", "KeySigningKeys$member": null } }, "KeySigningKeyAlreadyExists": { - "base": "You've already created a key signing key (KSK) with this name or with the same customer managed key (CMK) ARN.
", + "base": "You've already created a key-signing key (KSK) with this name or with the same customer managed customer master key (CMK) ARN.
", "refs": { } }, "KeySigningKeyInParentDSRecord": { - "base": "The key signing key (KSK) is specified in a parent DS record.
", + "base": "The key-signing key (KSK) is specified in a parent DS record.
", "refs": { } }, "KeySigningKeyInUse": { - "base": "The key signing key (KSK) that you specified can't be deactivated because it's the only KSK for a currently-enabled DNSSEC. Disable DNSSEC signing, or add or enable another KSK.
", + "base": "The key-signing key (KSK) that you specified can't be deactivated because it's the only KSK for a currently-enabled DNSSEC. Disable DNSSEC signing, or add or enable another KSK.
", "refs": { } }, "KeySigningKeyWithActiveStatusNotFound": { - "base": "A key signing key (KSK) with ACTIVE
status wasn't found.
A key-signing key (KSK) with ACTIVE
status wasn't found.
The key signing keys (KSKs) in your account.
" + "GetDNSSECResponse$KeySigningKeys": "The key-signing keys (KSKs) in your account.
" } }, "LastVPCAssociation": { @@ -1533,7 +1533,7 @@ } }, "NoSuchKeySigningKey": { - "base": "The specified key signing key (KSK) doesn't exist.
", + "base": "The specified key-signing key (KSK) doesn't exist.
", "refs": { } }, @@ -1704,7 +1704,7 @@ "ListTrafficPolicyInstancesByPolicyResponse$TrafficPolicyInstanceTypeMarker": "If IsTruncated
is true
, TrafficPolicyInstanceTypeMarker
is the DNS type of the resource record sets that are associated with the first traffic policy instance in the next group of MaxItems
traffic policy instances.
If the value of IsTruncated
in the previous response was true
, you have more traffic policy instances. To get more traffic policy instances, submit another ListTrafficPolicyInstances
request. For the value of trafficpolicyinstancetype
, specify the value of TrafficPolicyInstanceTypeMarker
from the previous response, which is the type of the first traffic policy instance in the next group of traffic policy instances.
If the value of IsTruncated
in the previous response was false
, there are no more traffic policy instances to get.
If IsTruncated
is true
, TrafficPolicyInstanceTypeMarker
is the DNS type of the resource record sets that are associated with the first traffic policy instance that Amazon Route 53 will return if you submit another ListTrafficPolicyInstances
request.
The DNS record type. For information about different record types and how data is encoded for them, see Supported DNS Resource Record Types in the Amazon Route 53 Developer Guide.
Valid values for basic resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| NS
| PTR
| SOA
| SPF
| SRV
| TXT
Values for weighted, latency, geolocation, and failover resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
. When creating a group of weighted, latency, geolocation, or failover resource record sets, specify the same value for all of the resource record sets in the group.
Valid values for multivalue answer resource record sets: A
| AAAA
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
SPF records were formerly used to verify the identity of the sender of email messages. However, we no longer recommend that you create resource record sets for which the value of Type
is SPF
. RFC 7208, Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1, has been updated to say, \"...[I]ts existence and mechanism defined in [RFC4408] have led to some interoperability issues. Accordingly, its use is no longer appropriate for SPF version 1; implementations are not to use it.\" In RFC 7208, see section 14.1, The SPF DNS Record Type.
Values for alias resource record sets:
Amazon API Gateway custom regional APIs and edge-optimized APIs: A
CloudFront distributions: A
If IPv6 is enabled for the distribution, create two resource record sets to route traffic to your distribution, one with a value of A
and one with a value of AAAA
.
Amazon API Gateway environment that has a regionalized subdomain: A
ELB load balancers: A
| AAAA
Amazon S3 buckets: A
Amazon Virtual Private Cloud interface VPC endpoints A
Another resource record set in this hosted zone: Specify the type of the resource record set that you're creating the alias for. All values are supported except NS
and SOA
.
If you're creating an alias record that has the same name as the hosted zone (known as the zone apex), you can't route traffic to a record for which the value of Type
is CNAME
. This is because the alias record must have the same type as the record you're routing traffic to, and creating a CNAME record for the zone apex isn't supported even for an alias record.
The DNS record type. For information about different record types and how data is encoded for them, see Supported DNS Resource Record Types in the Amazon Route 53 Developer Guide.
Valid values for basic resource record sets: A
| AAAA
| CAA
| CNAME
| DS
|MX
| NAPTR
| NS
| PTR
| SOA
| SPF
| SRV
| TXT
Values for weighted, latency, geolocation, and failover resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
. When creating a group of weighted, latency, geolocation, or failover resource record sets, specify the same value for all of the resource record sets in the group.
Valid values for multivalue answer resource record sets: A
| AAAA
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
SPF records were formerly used to verify the identity of the sender of email messages. However, we no longer recommend that you create resource record sets for which the value of Type
is SPF
. RFC 7208, Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1, has been updated to say, \"...[I]ts existence and mechanism defined in [RFC4408] have led to some interoperability issues. Accordingly, its use is no longer appropriate for SPF version 1; implementations are not to use it.\" In RFC 7208, see section 14.1, The SPF DNS Record Type.
Values for alias resource record sets:
Amazon API Gateway custom regional APIs and edge-optimized APIs: A
CloudFront distributions: A
If IPv6 is enabled for the distribution, create two resource record sets to route traffic to your distribution, one with a value of A
and one with a value of AAAA
.
Amazon API Gateway environment that has a regionalized subdomain: A
ELB load balancers: A
| AAAA
Amazon S3 buckets: A
Amazon Virtual Private Cloud interface VPC endpoints A
Another resource record set in this hosted zone: Specify the type of the resource record set that you're creating the alias for. All values are supported except NS
and SOA
.
If you're creating an alias record that has the same name as the hosted zone (known as the zone apex), you can't route traffic to a record for which the value of Type
is CNAME
. This is because the alias record must have the same type as the record you're routing traffic to, and creating a CNAME record for the zone apex isn't supported even for an alias record.
The type of the resource record set.
", "TestDNSAnswerResponse$RecordType": "The type of the resource record set that you submitted a request for.
", "TrafficPolicy$Type": "The DNS type of the resource record sets that Amazon Route 53 creates when you use a traffic policy to create a traffic policy instance.
", @@ -1756,7 +1756,7 @@ "base": null, "refs": { "ActivateKeySigningKeyRequest$HostedZoneId": "A unique string used to identify a hosted zone.
", - "AliasTarget$HostedZoneId": "Alias resource records sets only: The value used depends on where you want to route traffic:
Specify the hosted zone ID for your API. You can get the applicable value using the AWS CLI command get-domain-names:
For regional APIs, specify the value of regionalHostedZoneId
.
For edge-optimized APIs, specify the value of distributionHostedZoneId
.
Specify the hosted zone ID for your interface endpoint. You can get the value of HostedZoneId
using the AWS CLI command describe-vpc-endpoints.
Specify Z2FDTNDATAQYW2
.
Alias resource record sets for CloudFront can't be created in a private zone.
Specify the hosted zone ID for the region that you created the environment in. The environment must have a regionalized subdomain. For a list of regions and the corresponding hosted zone IDs, see AWS Elastic Beanstalk in the \"AWS Service Endpoints\" chapter of the Amazon Web Services General Reference.
Specify the value of the hosted zone ID for the load balancer. Use the following methods to get the hosted zone ID:
Service Endpoints table in the \"Elastic Load Balancing Endpoints and Quotas\" topic in the Amazon Web Services General Reference: Use the value that corresponds with the region that you created your load balancer in. Note that there are separate columns for Application and Classic Load Balancers and for Network Load Balancers.
AWS Management Console: Go to the Amazon EC2 page, choose Load Balancers in the navigation pane, select the load balancer, and get the value of the Hosted zone field on the Description tab.
Elastic Load Balancing API: Use DescribeLoadBalancers
to get the applicable value. For more information, see the applicable guide:
Classic Load Balancers: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneNameId
.
Application and Network Load Balancers: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneId
.
AWS CLI: Use describe-load-balancers
to get the applicable value. For more information, see the applicable guide:
Classic Load Balancers: Use describe-load-balancers to get the value of CanonicalHostedZoneNameId
.
Application and Network Load Balancers: Use describe-load-balancers to get the value of CanonicalHostedZoneId
.
Specify Z2BJ6XQ5FK7U4H
.
Specify the hosted zone ID for the region that you created the bucket in. For more information about valid values, see the table Amazon S3 Website Endpoints in the Amazon Web Services General Reference.
Specify the hosted zone ID of your hosted zone. (An alias resource record set can't reference a resource record set in a different hosted zone.)
Alias resource records sets only: The value used depends on where you want to route traffic:
Specify the hosted zone ID for your API. You can get the applicable value using the AWS CLI command get-domain-names:
For regional APIs, specify the value of regionalHostedZoneId
.
For edge-optimized APIs, specify the value of distributionHostedZoneId
.
Specify the hosted zone ID for your interface endpoint. You can get the value of HostedZoneId
using the AWS CLI command describe-vpc-endpoints.
Specify Z2FDTNDATAQYW2
.
Alias resource record sets for CloudFront can't be created in a private zone.
Specify the hosted zone ID for the region that you created the environment in. The environment must have a regionalized subdomain. For a list of regions and the corresponding hosted zone IDs, see AWS Elastic Beanstalk endpoints and quotas in the the Amazon Web Services General Reference.
Specify the value of the hosted zone ID for the load balancer. Use the following methods to get the hosted zone ID:
Elastic Load Balancing endpoints and quotas topic in the Amazon Web Services General Reference: Use the value that corresponds with the region that you created your load balancer in. Note that there are separate columns for Application and Classic Load Balancers and for Network Load Balancers.
AWS Management Console: Go to the Amazon EC2 page, choose Load Balancers in the navigation pane, select the load balancer, and get the value of the Hosted zone field on the Description tab.
Elastic Load Balancing API: Use DescribeLoadBalancers
to get the applicable value. For more information, see the applicable guide:
Classic Load Balancers: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneNameId
.
Application and Network Load Balancers: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneId
.
AWS CLI: Use describe-load-balancers
to get the applicable value. For more information, see the applicable guide:
Classic Load Balancers: Use describe-load-balancers to get the value of CanonicalHostedZoneNameId
.
Application and Network Load Balancers: Use describe-load-balancers to get the value of CanonicalHostedZoneId
.
Specify Z2BJ6XQ5FK7U4H
.
Specify the hosted zone ID for the region that you created the bucket in. For more information about valid values, see the table Amazon S3 Website Endpoints in the Amazon Web Services General Reference.
Specify the hosted zone ID of your hosted zone. (An alias resource record set can't reference a resource record set in a different hosted zone.)
The ID of the private hosted zone that you want to associate an Amazon VPC with.
Note that you can't associate a VPC with a hosted zone that doesn't have an existing VPC association.
", "ChangeInfo$Id": "The ID of the request.
", "ChangeResourceRecordSetsRequest$HostedZoneId": "The ID of the hosted zone that contains the resource record sets that you want to change.
", @@ -1885,7 +1885,7 @@ "refs": { "CreateHealthCheckResponse$Location": "The unique URL representing the new health check.
", "CreateHostedZoneResponse$Location": "The unique URL representing the new hosted zone.
", - "CreateKeySigningKeyResponse$Location": "The unique URL representing the new key signing key (KSK).
", + "CreateKeySigningKeyResponse$Location": "The unique URL representing the new key-signing key (KSK).
", "CreateQueryLoggingConfigResponse$Location": "The unique URL representing the new query logging configuration.
", "CreateReusableDelegationSetResponse$Location": "The unique URL representing the new reusable delegation set.
", "CreateTrafficPolicyInstanceResponse$Location": "A unique URL that represents a new traffic policy instance.
", @@ -1916,7 +1916,7 @@ "ServeSignature": { "base": null, "refs": { - "DNSSECStatus$ServeSignature": "Indicates your hosted zone signging status: SIGNING
, NOT_SIGNING
, or INTERNAL_FAILURE
. If the status is INTERNAL_FAILURE
, see StatusMessage
for information about steps that you can take to correct the problem.
A status INTERNAL_FAILURE
means there was an error during a request. Before you can continue to work with DNSSEC signing, including working with key signing keys (KSKs), you must correct the problem by enabling or disabling DNSSEC signing for the hosted zone.
A string that represents the current hosted zone signing status.
Status can have one of the following values:
DNSSEC signing is enabled for the hosted zone.
DNSSEC signing is not enabled for the hosted zone.
DNSSEC signing is in the process of being removed for the hosted zone.
There is a problem with signing in the hosted zone that requires you to take action to resolve. For example, the customer managed customer master key (CMK) might have been deleted, or the permissions for the customer managed CMK might have been changed.
There was an error during a request. Before you can continue to work with DNSSEC signing, including with key-signing keys (KSKs), you must correct the problem by enabling or disabling DNSSEC signing for the hosted zone.
An integer that specifies how the key is used. For key signing key (KSK), this value is always 257.
", + "KeySigningKey$Flag": "An integer that specifies how the key is used. For key-signing key (KSK), this value is always 257.
", "KeySigningKey$SigningAlgorithmType": "An integer used to represent the signing algorithm. This value must follow the guidelines provided by RFC-8624 Section 3.1.
", "KeySigningKey$DigestAlgorithmType": "An integer used to represent the delegation signer digest algorithm. This value must follow the guidelines provided by RFC-8624 Section 3.3.
" } @@ -1936,32 +1936,32 @@ "SigningKeyName": { "base": null, "refs": { - "ActivateKeySigningKeyRequest$Name": "An alphanumeric string used to identify a key signing key (KSK).
", - "CreateKeySigningKeyRequest$Name": "An alphanumeric string used to identify a key signing key (KSK). Name
must be unique for each key signing key in the same hosted zone.
An alphanumeric string used to identify a key signing key (KSK).
", - "DeleteKeySigningKeyRequest$Name": "An alphanumeric string used to identify a key signing key (KSK).
", - "KeySigningKey$Name": "An alphanumeric string used to identify a key signing key (KSK). Name
must be unique for each key signing key in the same hosted zone.
A string used to identify a key-signing key (KSK). Name
can include numbers, letters, and underscores (_). Name
must be unique for each key-signing key in the same hosted zone.
A string used to identify a key-signing key (KSK). Name
can include numbers, letters, and underscores (_). Name
must be unique for each key-signing key in the same hosted zone.
A string used to identify a key-signing key (KSK).
", + "DeleteKeySigningKeyRequest$Name": "A string used to identify a key-signing key (KSK).
", + "KeySigningKey$Name": "A string used to identify a key-signing key (KSK). Name
can include numbers, letters, and underscores (_). Name
must be unique for each key-signing key in the same hosted zone.
A string specifying the initial status of the key signing key (KSK). You can set the value to ACTIVE
or INACTIVE
.
A string that represents the current key signing key (KSK) status.
Status can have one of the following values:
The KSK is being used for signing.
The KSK is not being used for signing.
There is an error in the KSK that requires you to take action to resolve.
There was an error during a request. Before you can continue to work with DNSSEC signing, including actions that involve this KSK, you must correct the problem. For example, you may need to activate or deactivate the KSK.
A string specifying the initial status of the key-signing key (KSK). You can set the value to ACTIVE
or INACTIVE
.
A string that represents the current key-signing key (KSK) status.
Status can have one of the following values:
The KSK is being used for signing.
The KSK is not being used for signing.
The KSK is in the process of being deleted.
There is a problem with the KSK that requires you to take action to resolve. For example, the customer managed customer master key (CMK) might have been deleted, or the permissions for the customer managed CMK might have been changed.
There was an error during a request. Before you can continue to work with DNSSEC signing, including actions that involve this KSK, you must correct the problem. For example, you may need to activate or deactivate the KSK.
The status message provided for the following DNSSEC signing status: INTERNAL_FAILURE
. The status message includes information about what the problem might be and steps that you can take to correct the issue.
The status message provided for the following key signing key (KSK) statuses: ACTION_NEEDED
or INTERNAL_FAILURE
. The status message includes information about what the problem might be and steps that you can take to correct the issue.
The status message provided for the following key-signing key (KSK) statuses: ACTION_NEEDED
or INTERNAL_FAILURE
. The status message includes information about what the problem might be and steps that you can take to correct the issue.
The Amazon resource name (ARN) for a customer managed key (CMK) in AWS Key Management Service (KMS). The KeyManagementServiceArn
must be unique for each key signing key (KSK) in a single hosted zone. To see an example of KeyManagementServiceArn
that grants the correct permissions for DNSSEC, scroll down to Example.
You must configure the CMK as follows:
Enabled
ECC_NIST_P256
Sign and verify
The key policy must give permission for the following actions:
DescribeKey
GetPublicKey
Sign
The key policy must also include the Amazon Route 53 service in the principal for your account. Specify the following:
\"Service\": \"api-service.dnssec.route53.aws.internal\"
For more information about working with CMK in KMS, see AWS Key Management Service concepts.
", - "KeySigningKey$KmsArn": "The Amazon resource name (ARN) used to identify the customer managed key (CMK) in AWS Key Management Service (KMS). The KmsArn
must be unique for each key signing key (KSK) in a single hosted zone.
You must configure the CMK as follows:
Enabled
ECC_NIST_P256
Sign and verify
The key policy must give permission for the following actions:
DescribeKey
GetPublicKey
Sign
The key policy must also include the Amazon Route 53 service in the principal for your account. Specify the following:
\"Service\": \"api-service.dnssec.route53.aws.internal\"
For more information about working with the customer managed key (CMK) in KMS, see AWS Key Management Service concepts.
", + "CreateKeySigningKeyRequest$KeyManagementServiceArn": "The Amazon resource name (ARN) for a customer managed customer master key (CMK) in AWS Key Management Service (AWS KMS). The KeyManagementServiceArn
must be unique for each key-signing key (KSK) in a single hosted zone. To see an example of KeyManagementServiceArn
that grants the correct permissions for DNSSEC, scroll down to Example.
You must configure the customer managed CMK as follows:
Enabled
ECC_NIST_P256
Sign and verify
The key policy must give permission for the following actions:
DescribeKey
GetPublicKey
Sign
The key policy must also include the Amazon Route 53 service in the principal for your account. Specify the following:
\"Service\": \"api-service.dnssec.route53.aws.internal\"
For more information about working with a customer managed CMK in AWS KMS, see AWS Key Management Service concepts.
", + "KeySigningKey$KmsArn": "The Amazon resource name (ARN) used to identify the customer managed customer master key (CMK) in AWS Key Management Service (AWS KMS). The KmsArn
must be unique for each key-signing key (KSK) in a single hosted zone.
You must configure the CMK as follows:
Enabled
ECC_NIST_P256
Sign and verify
The key policy must give permission for the following actions:
DescribeKey
GetPublicKey
Sign
The key policy must also include the Amazon Route 53 service in the principal for your account. Specify the following:
\"Service\": \"api-service.dnssec.route53.aws.internal\"
For more information about working with the customer managed CMK in AWS KMS, see AWS Key Management Service concepts.
", "KeySigningKey$SigningAlgorithmMnemonic": "A string used to represent the signing algorithm. This value must follow the guidelines provided by RFC-8624 Section 3.1.
", "KeySigningKey$DigestAlgorithmMnemonic": "A string used to represent the delegation signer digest algorithm. This value must follow the guidelines provided by RFC-8624 Section 3.3.
", "KeySigningKey$DigestValue": "A cryptographic digest of a DNSKEY resource record (RR). DNSKEY records are used to publish the public key that resolvers can use to verify DNSSEC signatures that are used to secure certain kinds of information provided by the DNS system.
", @@ -2090,8 +2090,8 @@ "base": null, "refs": { "ChangeInfo$SubmittedAt": "The date and time that the change request was submitted in ISO 8601 format and Coordinated Universal Time (UTC). For example, the value 2017-03-27T17:48:16.751Z
represents March 27, 2017 at 17:48:16.751 UTC.
The date when the key signing key (KSK) was created.
", - "KeySigningKey$LastModifiedDate": "The last time that the key signing key (KSK) was changed.
", + "KeySigningKey$CreatedDate": "The date when the key-signing key (KSK) was created.
", + "KeySigningKey$LastModifiedDate": "The last time that the key-signing key (KSK) was changed.
", "StatusReport$CheckedTime": "The date and time that the health checker performed the health check in ISO 8601 format and Coordinated Universal Time (UTC). For example, the value 2017-03-27T17:48:16.751Z
represents March 27, 2017 at 17:48:16.751 UTC.
You've reached the limit for the number of key signing keys (KSKs). Remove at least one KSK, and then try again.
", + "base": "You've reached the limit for the number of key-signing keys (KSKs). Remove at least one KSK, and then try again.
", "refs": { } }, diff --git a/models/apis/s3control/2018-08-20/api-2.json b/models/apis/s3control/2018-08-20/api-2.json index 480e9ba3715..e228fd3f651 100755 --- a/models/apis/s3control/2018-08-20/api-2.json +++ b/models/apis/s3control/2018-08-20/api-2.json @@ -1628,6 +1628,10 @@ "shape":"S3SetObjectTaggingOperation", "box":true }, + "S3DeleteObjectTagging":{ + "shape":"S3DeleteObjectTaggingOperation", + "box":true + }, "S3InitiateRestoreObject":{ "shape":"S3InitiateRestoreObjectOperation", "box":true @@ -2025,6 +2029,7 @@ "S3PutObjectCopy", "S3PutObjectAcl", "S3PutObjectTagging", + "S3DeleteObjectTagging", "S3InitiateRestoreObject", "S3PutObjectLegalHold", "S3PutObjectRetention" @@ -2420,6 +2425,11 @@ "ObjectLockRetainUntilDate":{"shape":"TimeStamp"} } }, + "S3DeleteObjectTaggingOperation":{ + "type":"structure", + "members":{ + } + }, "S3ExpirationInDays":{ "type":"integer", "min":0 diff --git a/models/apis/s3control/2018-08-20/docs-2.json b/models/apis/s3control/2018-08-20/docs-2.json index 6957c3ba39d..51755d3b1f0 100755 --- a/models/apis/s3control/2018-08-20/docs-2.json +++ b/models/apis/s3control/2018-08-20/docs-2.json @@ -4,7 +4,7 @@ "operations": { "CreateAccessPoint": "Creates an access point and associates it with the specified bucket. For more information, see Managing Data Access with Amazon S3 Access Points in the Amazon Simple Storage Service Developer Guide.
Using this action with Amazon S3 on Outposts
This action:
Requires a virtual private cloud (VPC) configuration as S3 on Outposts only supports VPC style access points.
Does not support ACL on S3 on Outposts buckets.
Does not support Public Access on S3 on Outposts buckets.
Does not support object lock for S3 on Outposts buckets.
For more information, see Using Amazon S3 on Outposts in the Amazon Simple Storage Service Developer Guide .
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to CreateAccessPoint
:
This API operation creates an Amazon S3 on Outposts bucket. To create an S3 bucket, see Create Bucket in the Amazon Simple Storage Service API.
Creates a new Outposts bucket. By creating the bucket, you become the bucket owner. To create an Outposts bucket, you must have S3 on Outposts. For more information, see Using Amazon S3 on Outposts in Amazon Simple Storage Service Developer Guide.
Not every string is an acceptable bucket name. For information on bucket naming restrictions, see Working with Amazon S3 Buckets.
S3 on Outposts buckets do not support
ACLs. Instead, configure access point policies to manage access to buckets.
Public access.
Object Lock
Bucket Location constraint
For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and x-amz-outpost-id
in your API request, see the Examples section.
The following actions are related to CreateBucket
for Amazon S3 on Outposts:
S3 Batch Operations performs large-scale Batch Operations on Amazon S3 objects. Batch Operations can run a single operation or action on lists of Amazon S3 objects that you specify. For more information, see S3 Batch Operations in the Amazon Simple Storage Service Developer Guide.
This operation creates an S3 Batch Operations job.
Related actions include:
", + "CreateJob": "You can use S3 Batch Operations to perform large-scale batch operations on Amazon S3 objects. Batch Operations can run a single operation on lists of Amazon S3 objects that you specify. For more information, see S3 Batch Operations in the Amazon Simple Storage Service Developer Guide.
This operation creates a S3 Batch Operations job.
Related actions include:
", "DeleteAccessPoint": "Deletes the specified access point.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to DeleteAccessPoint
:
Deletes the access point policy for the specified access point.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to DeleteAccessPointPolicy
:
This API operation deletes an Amazon S3 on Outposts bucket. To delete an S3 bucket, see DeleteBucket in the Amazon Simple Storage Service API.
Deletes the Amazon S3 on Outposts bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted. For more information, see Using Amazon S3 on Outposts in Amazon Simple Storage Service Developer Guide.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
Related Resources
", @@ -13,8 +13,8 @@ "DeleteBucketTagging": "This operation deletes an Amazon S3 on Outposts bucket's tags. To delete an S3 bucket tags, see DeleteBucketTagging in the Amazon Simple Storage Service API.
Deletes the tags from the Outposts bucket. For more information, see Using Amazon S3 on Outposts in Amazon Simple Storage Service Developer Guide.
To use this operation, you must have permission to perform the PutBucketTagging
action. By default, the bucket owner has this permission and can grant this permission to others.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to DeleteBucketTagging
:
Removes the entire tag set from the specified S3 Batch Operations job. To use this operation, you must have permission to perform the s3:DeleteJobTagging
action. For more information, see Controlling access and labeling jobs using tags in the Amazon Simple Storage Service Developer Guide.
Related actions include:
", "DeletePublicAccessBlock": "Removes the PublicAccessBlock
configuration for an AWS account. For more information, see Using Amazon S3 block public access.
Related actions include:
", - "DeleteStorageLensConfiguration": "Deletes the Amazon S3 Storage Lens configuration. For more information about S3 Storage Lens, see Working with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:DeleteStorageLensConfiguration
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Deletes the Amazon S3 Storage Lens configuration tags. For more information about S3 Storage Lens, see Working with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:DeleteStorageLensConfigurationTagging
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Deletes the Amazon S3 Storage Lens configuration. For more information about S3 Storage Lens, see Assessing your storage activity and usage with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:DeleteStorageLensConfiguration
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Deletes the Amazon S3 Storage Lens configuration tags. For more information about S3 Storage Lens, see Assessing your storage activity and usage with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:DeleteStorageLensConfigurationTagging
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Retrieves the configuration parameters and status for a Batch Operations job. For more information, see S3 Batch Operations in the Amazon Simple Storage Service Developer Guide.
Related actions include:
", "GetAccessPoint": "Returns configuration information about the specified access point.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to GetAccessPoint
:
Returns the access point policy associated with the specified access point.
The following actions are related to GetAccessPointPolicy
:
This operation gets an Amazon S3 on Outposts bucket's tags. To get an S3 bucket tags, see GetBucketTagging in the Amazon Simple Storage Service API.
Returns the tag set associated with the Outposts bucket. For more information, see Using Amazon S3 on Outposts in the Amazon Simple Storage Service Developer Guide.
To use this operation, you must have permission to perform the GetBucketTagging
action. By default, the bucket owner has this permission and can grant this permission to others.
GetBucketTagging
has the following special error:
Error code: NoSuchTagSetError
Description: There is no tag set associated with the bucket.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to GetBucketTagging
:
Returns the tags on an S3 Batch Operations job. To use this operation, you must have permission to perform the s3:GetJobTagging
action. For more information, see Controlling access and labeling jobs using tags in the Amazon Simple Storage Service Developer Guide.
Related actions include:
", "GetPublicAccessBlock": "Retrieves the PublicAccessBlock
configuration for an AWS account. For more information, see Using Amazon S3 block public access.
Related actions include:
", - "GetStorageLensConfiguration": "Gets the Amazon S3 Storage Lens configuration. For more information, see Working with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:GetStorageLensConfiguration
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Gets the tags of Amazon S3 Storage Lens configuration. For more information about S3 Storage Lens, see Working with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:GetStorageLensConfigurationTagging
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Gets the Amazon S3 Storage Lens configuration. For more information, see Assessing your storage activity and usage with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:GetStorageLensConfiguration
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Gets the tags of Amazon S3 Storage Lens configuration. For more information about S3 Storage Lens, see Assessing your storage activity and usage with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:GetStorageLensConfigurationTagging
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Returns a list of the access points currently associated with the specified bucket. You can retrieve up to 1000 access points per call. If the specified bucket has more than 1,000 access points (or the number specified in maxResults
, whichever is less), the response will include a continuation token that you can use to list the additional access points.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to ListAccessPoints
:
Lists current S3 Batch Operations jobs and jobs that have ended within the last 30 days for the AWS account making the request. For more information, see S3 Batch Operations in the Amazon Simple Storage Service Developer Guide.
Related actions include:
", "ListRegionalBuckets": "Returns a list of all Outposts buckets in an Outpost that are owned by the authenticated sender of the request. For more information, see Using Amazon S3 on Outposts in the Amazon Simple Storage Service Developer Guide.
For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and x-amz-outpost-id
in your request, see the Examples section.
Gets a list of Amazon S3 Storage Lens configurations. For more information about S3 Storage Lens, see Working with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:ListStorageLensConfigurations
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Gets a list of Amazon S3 Storage Lens configurations. For more information about S3 Storage Lens, see Assessing your storage activity and usage with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:ListStorageLensConfigurations
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Associates an access policy with the specified access point. Each access point can have only one policy, so a request made to this API replaces any existing policy associated with the specified access point.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to PutAccessPointPolicy
:
This action puts a lifecycle configuration to an Amazon S3 on Outposts bucket. To put a lifecycle configuration to an S3 bucket, see PutBucketLifecycleConfiguration in the Amazon Simple Storage Service API.
Creates a new lifecycle configuration for the Outposts bucket or replaces an existing lifecycle configuration. Outposts buckets only support lifecycle configurations that delete/expire objects after a certain period of time and abort incomplete multipart uploads. For more information, see Managing Lifecycle Permissions for Amazon S3 on Outposts.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to PutBucketLifecycleConfiguration
:
This action puts a bucket policy to an Amazon S3 on Outposts bucket. To put a policy on an S3 bucket, see PutBucketPolicy in the Amazon Simple Storage Service API.
Applies an Amazon S3 bucket policy to an Outposts bucket. For more information, see Using Amazon S3 on Outposts in the Amazon Simple Storage Service Developer Guide.
If you are using an identity other than the root user of the AWS account that owns the Outposts bucket, the calling identity must have the PutBucketPolicy
permissions on the specified Outposts bucket and belong to the bucket owner's account in order to use this operation.
If you don't have PutBucketPolicy
permissions, Amazon S3 returns a 403 Access Denied
error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed
error.
As a security precaution, the root user of the AWS account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.
For more information about bucket policies, see Using Bucket Policies and User Policies.
All Amazon S3 on Outposts REST API requests for this action require an additional parameter of x-amz-outpost-id
to be passed with the request and an S3 on Outposts endpoint hostname prefix instead of s3-control
. For an example of the request syntax for Amazon S3 on Outposts that uses the S3 on Outposts endpoint hostname prefix and the x-amz-outpost-id
derived using the access point ARN, see the Examples section.
The following actions are related to PutBucketPolicy
:
Sets the supplied tag-set on an S3 Batch Operations job.
A tag is a key-value pair. You can associate S3 Batch Operations tags with any job by sending a PUT request against the tagging subresource that is associated with the job. To modify the existing tag set, you can either replace the existing tag set entirely, or make changes within the existing tag set by retrieving the existing tag set using GetJobTagging, modify that tag set, and use this action to replace the tag set with the one you modified. For more information, see Controlling access and labeling jobs using tags in the Amazon Simple Storage Service Developer Guide.
If you send this request with an empty tag set, Amazon S3 deletes the existing tag set on the Batch Operations job. If you use this method, you are charged for a Tier 1 Request (PUT). For more information, see Amazon S3 pricing.
For deleting existing tags for your Batch Operations job, a DeleteJobTagging request is preferred because it achieves the same result without incurring charges.
A few things to consider about using tags:
Amazon S3 limits the maximum number of tags to 50 tags per job.
You can associate up to 50 tags with a job as long as they have unique tag keys.
A tag key can be up to 128 Unicode characters in length, and tag values can be up to 256 Unicode characters in length.
The key and values are case sensitive.
For tagging-related restrictions related to characters and encodings, see User-Defined Tag Restrictions in the AWS Billing and Cost Management User Guide.
To use this operation, you must have permission to perform the s3:PutJobTagging
action.
Related actions include:
", "PutPublicAccessBlock": "Creates or modifies the PublicAccessBlock
configuration for an AWS account. For more information, see Using Amazon S3 block public access.
Related actions include:
", "PutStorageLensConfiguration": "Puts an Amazon S3 Storage Lens configuration. For more information about S3 Storage Lens, see Working with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:PutStorageLensConfiguration
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Put or replace tags on an existing Amazon S3 Storage Lens configuration. For more information about S3 Storage Lens, see Working with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:PutStorageLensConfigurationTagging
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Put or replace tags on an existing Amazon S3 Storage Lens configuration. For more information about S3 Storage Lens, see Assessing your storage activity and usage with Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
To use this action, you must have permission to perform the s3:PutStorageLensConfigurationTagging
action. For more information, see Setting permissions to use Amazon S3 Storage Lens in the Amazon Simple Storage Service Developer Guide.
Updates an existing S3 Batch Operations job's priority. For more information, see S3 Batch Operations in the Amazon Simple Storage Service Developer Guide.
Related actions include:
", "UpdateJobStatus": "Updates the status for the specified job. Use this operation to confirm that you want to run a job or to cancel an existing job. For more information, see S3 Batch Operations in the Amazon Simple Storage Service Developer Guide.
Related actions include:
" }, @@ -1065,7 +1065,7 @@ "base": null, "refs": { "LifecycleRuleAndOperator$Prefix": "Prefix identifying one or more objects to which the rule applies.
", - "LifecycleRuleFilter$Prefix": "Prefix identifying one or more objects to which the rule applies.
", + "LifecycleRuleFilter$Prefix": "Prefix identifying one or more objects to which the rule applies.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
The prefix of the destination bucket where the metrics export will be delivered.
" } }, @@ -1216,7 +1216,7 @@ "S3BucketDestination": { "base": "A container for the bucket where the Amazon S3 Storage Lens metrics export files are located.
", "refs": { - "StorageLensDataExport$S3BucketDestination": "A container for the bucket where the S3 Storage Lens metrics export will be located.
" + "StorageLensDataExport$S3BucketDestination": "A container for the bucket where the S3 Storage Lens metrics export will be located.
This bucket must be located in the same Region as the storage lens configuration.
Directs the specified job to run a PUT Copy object call on every object in the manifest.
" } }, + "S3DeleteObjectTaggingOperation": { + "base": "Contains no configuration parameters because the DELETE Object tagging API only accepts the bucket name and key name as parameters, which are defined in the job's manifest.
", + "refs": { + "JobOperation$S3DeleteObjectTagging": "Directs the specified job to execute a DELETE Object tagging call on every object in the manifest.
" + } + }, "S3ExpirationInDays": { "base": null, "refs": { @@ -1284,7 +1290,7 @@ "S3KeyArnString": { "base": null, "refs": { - "JobManifestLocation$ObjectArn": "The Amazon Resource Name (ARN) for a manifest object.
" + "JobManifestLocation$ObjectArn": "The Amazon Resource Name (ARN) for a manifest object.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.