fix(cli): S3 asset uploads are rejected by commonly referenced encryption SCP (introduces bootstrap stack v9) #17668
Conversation
Title does not follow the guidelines of Conventional Commits. Please adjust title before merge.
Force-pushed from b4f01d3 to f41186e
      paramsEncryption = { ServerSideEncryption: 'AES256' };
      break;
    case BucketEncryption.SSEAlgorithm_aws_kms:
      paramsEncryption = { ServerSideEncryption: 'aws:kms' };
Don't we potentially need to include the correct KMS key ARN? If the user has configured a custom key ARN as the default encryption and we just pass `ServerSideEncryption: 'aws:kms'`, which key is going to be used? The user's key or `aws/s3`?
I tested the following setups; see details below for the configurations.
- encryption key type a) Amazon S3 key (SSE-S3): yields `{ ServerSideEncryption: 'AES256' }`; no key configurable
- encryption key type b) AWS Key Management Service key (SSE-KMS)
  - b1) AWS managed key (aws/s3)
  - b2) custom AWS KMS key

In summary:
We set the SSEAlgorithm, and the bucket automatically applies whatever encryption is currently set up as its default. The bucket stores which key is to be used for default encryption. In fact, the bucket would apply its default encryption anyway, but we need the header to get through the SCP rules.
I tried all the scenarios below and deploy works. It would not make sense for default encryption to use a different key than the one configured on the bucket.
a)

{
  "ServerSideEncryptionConfiguration": {
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "AES256"
        },
        "BucketKeyEnabled": false
      }
    ]
  }
}

b1)

{
  "ServerSideEncryptionConfiguration": {
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms",
          "KMSMasterKeyID": "arn:aws:kms:us-east-1:703956386855:alias/aws/s3"
        },
        "BucketKeyEnabled": true
      }
    ]
  }
}

b2)

{
  "ServerSideEncryptionConfiguration": {
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms",
          "KMSMasterKeyID": "arn:aws:kms:us-east-1:703956386855:key/833b292e-bc1f-47ec-af3f-1c1b6504815b"
        },
        "BucketKeyEnabled": true
      }
    ]
  }
}
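For anyone reproducing these scenarios, here is a small sketch that reads the active configuration back, assuming the AWS SDK v2 for JavaScript used elsewhere in this PR (the bucket name is a placeholder):

import * as AWS from 'aws-sdk';

// Print the bucket's ServerSideEncryptionConfiguration, i.e. the same JSON
// structures shown in scenarios a), b1) and b2) above.
async function showBucketEncryption(bucketName: string) {
  const s3 = new AWS.S3();
  const config = await s3.getBucketEncryption({ Bucket: bucketName }).promise();
  console.log(JSON.stringify(config, undefined, 2));
}

// Placeholder bucket name -- substitute your staging bucket.
showBucketEncryption('my-cdk-assets-bucket').catch(console.error);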
Hmm, that's odd -- the behaviour we are seeing is that when CDK only includes the `x-amz-server-side-encryption: aws:kms` header, the objects end up being encrypted with the `aws/s3` key, which currently is not permitted by our bucket policy.
@vludax Do you have default bucket encryption enabled for the bucket (with a specific KMS key)? Did you use `cdk bootstrap` to reinitialize the CDK resources? Can you also provide the bucket policy, so that I can reproduce it?
The following bucket policy statement blocks the uploads -- when we remove this statement, the files get uploaded successfully and encrypted with `aws/s3`:
{
  "Sid": "Enforce default bucket encryption by rejecting any overrides",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::<bucket-name>/*",
  "Condition": {
    "Null": {
      "s3:x-amz-server-side-encryption": "false"
    },
    "StringNotLikeIfExists": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:ap-southeast-2:XXX:key/YYY"
    }
  }
}
We did not run bootstrap, as we are using a custom synthesizer in our CDK -- that doesn't seem to specify encryption options anywhere. I'd also note that our uploads work as expected with v1.118.0 of the CDK -- the files are successfully uploaded and encrypted with the right KMS key with the above policy statement in place.
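For reference, a minimal sketch of a `putObject` call that would pass this Deny statement, assuming the AWS SDK v2 client used in this PR; the bucket name and key ARN are placeholders taken from the policy above:

import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

// Both condition keys in the Deny statement are satisfied:
// - s3:x-amz-server-side-encryption is non-null ('aws:kms')
// - s3:x-amz-server-side-encryption-aws-kms-key-id matches the allowed key ARN
async function uploadWithKmsKey(bucket: string, key: string, body: Buffer) {
  await s3.putObject({
    Bucket: bucket,
    Key: key,
    Body: body,
    ServerSideEncryption: 'aws:kms',
    SSEKMSKeyId: 'arn:aws:kms:ap-southeast-2:XXX:key/YYY', // placeholder from the policy above
  }).promise();
}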
  DOES_NOT_EXIST
}

async function bucketEncryption(s3: AWS.S3, bucket: string) {
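For context, a plausible sketch of what this lookup might do; the SDK client matches the signature above, but the `NO_ENCRYPTION` and `SSEAlgorithm_AES256` enum members and the exact control flow are assumptions, not the PR's verbatim code:

import * as AWS from 'aws-sdk';

// Enum members partly taken from the diff above (SSEAlgorithm_aws_kms,
// DOES_NOT_EXIST); the remaining names are assumptions for illustration.
enum BucketEncryption {
  NO_ENCRYPTION,
  SSEAlgorithm_AES256,
  SSEAlgorithm_aws_kms,
  DOES_NOT_EXIST,
}

async function bucketEncryption(s3: AWS.S3, bucket: string): Promise<BucketEncryption> {
  try {
    const encryption = await s3.getBucketEncryption({ Bucket: bucket }).promise();
    const algorithm = encryption.ServerSideEncryptionConfiguration?.Rules?.[0]
      ?.ApplyServerSideEncryptionByDefault?.SSEAlgorithm;
    switch (algorithm) {
      case 'AES256': return BucketEncryption.SSEAlgorithm_AES256;
      case 'aws:kms': return BucketEncryption.SSEAlgorithm_aws_kms;
      default: return BucketEncryption.NO_ENCRYPTION;
    }
  } catch (e: any) {
    // S3 returns these error codes for a missing bucket and for a bucket
    // without a default encryption configuration, respectively.
    if (e.code === 'NoSuchBucket') { return BucketEncryption.DOES_NOT_EXIST; }
    if (e.code === 'ServerSideEncryptionConfigurationNotFoundError') { return BucketEncryption.NO_ENCRYPTION; }
    throw e;
  }
}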
Can we add caching to this function so we're not going to do the same lookup more than once?
There are a lot of methods not being cached, like bucketOwner etc. If we add caching, it would make sense to add it at a level which ensures the cache actually has an effect. Right now I do not understand the system well enough to add caching.
Fair enough. Will do that myself then.
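A minimal sketch of such per-bucket memoization, assuming the `bucketEncryption` function sketched above (the module-level map is illustrative, not the PR's actual code):

// Hypothetical per-bucket cache, so repeated publishes to the same bucket
// only trigger one GetBucketEncryption call.
const encryptionCache = new Map<string, Promise<BucketEncryption>>();

function cachedBucketEncryption(s3: AWS.S3, bucket: string): Promise<BucketEncryption> {
  let lookup = encryptionCache.get(bucket);
  if (!lookup) {
    lookup = bucketEncryption(s3, bucket);
    encryptionCache.set(bucket, lookup);
  }
  return lookup;
}

Caching the promise rather than the resolved value also collapses concurrent lookups for the same bucket into a single call.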
      paramsEncryption = { ServerSideEncryption: 'aws:kms' };
      break;
    case BucketEncryption.DOES_NOT_EXIST:
      this.host.emitMessage(EventType.UPLOAD, `No bucket named '${destination.bucketName}'. Is account ${await account()} bootstrapped?`);
I'm not sure at what level `EventType.UPLOAD` shows, but I'd like this level to be one that is not displayed by default (but only if you add `-v`). Can you confirm that?
the mapping is as follows:
const EVENT_TO_LEVEL: Record<EventType, LogLevel> = {
build: 'verbose',
cached: 'verbose',
check: 'verbose',
debug: 'verbose',
fail: 'error',
found: 'verbose',
start: 'info',
success: 'info',
upload: 'verbose',
};
Nevertheless, I adjusted the logging to the debug type, since the message is not directly related to upload.
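To make the confirmation concrete, a self-contained sketch of how a level mapping like the one quoted above gates output; the `shouldPrint` helper is hypothetical, not the CLI's actual filter:

type LogLevel = 'verbose' | 'info' | 'error';

// Per the EVENT_TO_LEVEL mapping above, upload and debug events map to
// 'verbose', so they only print when the CLI runs with -v.
function shouldPrint(level: LogLevel, verboseFlag: boolean): boolean {
  return level !== 'verbose' || verboseFlag;
}

console.log(shouldPrint('verbose', false)); // false -- hidden without -v
console.log(shouldPrint('verbose', true));  // true  -- shown with -v
console.log(shouldPrint('error', false));   // true  -- errors always show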
Thank you for contributing! Your pull request will be updated from master and then merged automatically (do not update manually, and be sure to allow changes to be pushed to your fork).
In #17668, cross-account S3 asset publishing was broken. The reason is that the `account()` function was always broken, using the default account instead of the target account. However, previously this function was only called in an irrecoverable situation anyway, and its failure would be rare. The recent change also calls this function for logging purposes in a happy-case scenario, but then triggers an error during the logging.

Fix the invocation to use the right account.

Fixes #17988.
Even though a custom KMS key is specified as the default encryption key on the file assets bucket, all uploaded assets are encrypted using the default key. The reason is that in #17668 we added an explicit `encryption: kms` parameter to the `putObject` operation, so that an SCP that is commonly in use across large organizations to prevent files from ending up unencrypted can be used (the SCP can only validate call parameters, such as whether the `putObject` call includes the parameter that requests encryption, not the effective end result, such as whether a file would end up encrypted). However, we did not include the KMS key id in the `putObject` request, which caused S3 to fall back to the default key.

Solution: also look up the key id and pass that along as well.

Fixes #18262.
Many organizations around the world have started to use a specific Service Control Policy (SCP) from this blog post: https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/ in order to make sure all S3 bucket uploads are encrypted.

CDK configures the `DefaultEncryptionConfiguration` on the bucket so that objects are always encrypted by default, but this specific SCP can only check that individual upload actions include the "enable encryption" option. That means that even though the end result would still be satisfied (objects are encrypted in S3), the SCP would nevertheless reject the CDK upload. We would rather people use AWS Config to make sure all buckets have `DefaultEncryptionConfiguration` set, so that this SCP is not necessary... but there is no arguing with deployed reality.

Change the CDK upload code path to first read out the default encryption configuration from the bucket, only to then mirror those exact same settings in the `PutObject` call so that the SCP can see that they are present.

This requires adding a new permission to the `cdk-assets` role, namely `s3:GetEncryptionConfiguration`, so requires a new bootstrap stack version: version 9.

If you want this new behavior because your organization applies this specific SCP, you must upgrade to bootstrap stack version 9. If you do not care about this new behavior you don't have to do anything: if the call to `getEncryptionConfiguration` fails, the CDK will fall back to the old behavior (not specifying any header).

Fixes #11265.
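A condensed sketch of the lookup-and-mirror flow described above, again assuming the AWS SDK v2 client; the `encryptionParams` helper and its fallback branch are illustrative rather than the exact shipped code:

import * as AWS from 'aws-sdk';

// Mirror the bucket's default encryption settings in the PutObject call so the
// SCP sees explicit encryption headers. If the lookup fails (e.g. a pre-v9
// bootstrap role without s3:GetEncryptionConfiguration), fall back to the old
// behavior of sending no encryption header at all.
async function encryptionParams(s3: AWS.S3, bucket: string): Promise<Partial<AWS.S3.PutObjectRequest>> {
  try {
    const config = await s3.getBucketEncryption({ Bucket: bucket }).promise();
    const byDefault = config.ServerSideEncryptionConfiguration?.Rules?.[0]?.ApplyServerSideEncryptionByDefault;
    switch (byDefault?.SSEAlgorithm) {
      case 'AES256':
        return { ServerSideEncryption: 'AES256' };
      case 'aws:kms':
        // Mirroring the configured key as well keeps key-checking SCPs happy
        // (see the follow-up fix for #18262 above).
        return { ServerSideEncryption: 'aws:kms', SSEKMSKeyId: byDefault.KMSMasterKeyID };
      default:
        return {};
    }
  } catch {
    return {}; // lookup failed: old behavior, no header
  }
}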
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license