
fix(cli): S3 asset uploads are rejected by commonly referenced encryption SCP (introduces bootstrap stack v9) #17668

Merged
merged 9 commits into from
Nov 26, 2021

Conversation

ArlindNocaj
Contributor

@ArlindNocaj ArlindNocaj commented Nov 23, 2021

Many organizations around the world have started to use a specific Service Control Policy (SCP) from this blog post: https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/ in order to make sure all S3 bucket uploads are encrypted.

CDK configures the `DefaultEncryptionConfiguration` on the bucket so that objects are always encrypted by default, but this specific SCP can only check that individual upload actions include the "enable encryption" option. That means that even though the end result would still be satisfied (objects are encrypted in S3), the SCP would nevertheless reject the CDK upload. We would rather people use AWS Config to make sure all buckets have `DefaultEncryptionConfiguration` set, so that this SCP is not necessary... but there is no arguing with deployed reality.

Change the CDK upload code path to first read out the default encryption configuration from the bucket, only to then mirror those exact same settings in the `PutObject` call so that the SCP can see that they are present.

This requires adding a new permission to the `cdk-assets` role, namely `s3:GetEncryptionConfiguration`, so requires a new bootstrap stack version: version 9.

If you want this new behavior because your organization applies this specific SCP, you must upgrade to bootstrap stack version 9. If you do not care about this new behavior you don't have to do anything: if the call to `getEncryptionConfiguration` fails, the CDK will fall back to the old behavior (not specifying any header).
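Reduced to its essence, the code path described above maps the bucket's default SSEAlgorithm onto `PutObject` parameters. The following is an illustrative TypeScript sketch, not the actual cdk-assets source; `encryptionParamsFor` is a made-up helper name:

```typescript
// Illustrative sketch: mirror the bucket's default encryption in PutObject.
// If GetBucketEncryption fails (e.g. a pre-v9 bootstrap stack lacks the
// s3:GetEncryptionConfiguration permission), the algorithm is undefined and
// we fall back to the old behavior of sending no encryption header.
type PutEncryptionParams = { ServerSideEncryption?: 'AES256' | 'aws:kms' };

function encryptionParamsFor(sseAlgorithm: string | undefined): PutEncryptionParams {
  switch (sseAlgorithm) {
    case 'AES256':
      return { ServerSideEncryption: 'AES256' };
    case 'aws:kms':
      return { ServerSideEncryption: 'aws:kms' };
    default:
      return {}; // no default encryption configured, or the lookup failed
  }
}
```

The idea is that the returned object is merged into the `putObject` parameters, so the SCP sees the `x-amz-server-side-encryption` header whenever the bucket declares a default algorithm.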

Fixes #11265.


By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license

@mergify
Contributor

mergify bot commented Nov 23, 2021

Title does not follow the guidelines of Conventional Commits. Please adjust title before merge.

@github-actions github-actions bot added the @aws-cdk/aws-s3 Related to Amazon S3 label Nov 24, 2021
@gitpod-io

gitpod-io bot commented Nov 24, 2021

…ncryption. Added s3:GetEncryptionConfiguration to bootstrap-template to be able to read the s3 bucket encryption with file-publishing role. (aws#11265)
…nfluence the currently bootstrapped cdk projects. (aws#11265)
@ArlindNocaj ArlindNocaj force-pushed the feature-add-sse-header branch from b4f01d3 to f41186e Compare November 24, 2021 09:16
@github-actions github-actions bot added the package/tools Related to AWS CDK Tools or CLI label Nov 24, 2021
@ArlindNocaj ArlindNocaj changed the title Feature add sse header support fix(cli): adding server side encryption SSE header based on bucket encryption Nov 24, 2021
paramsEncryption = { ServerSideEncryption: 'AES256' };
break;
case BucketEncryption.SSEAlgorithm_aws_kms:
paramsEncryption = { ServerSideEncryption: 'aws:kms' };
Contributor

Don't we potentially need to include the correct KMS key ARN?

If the user has configured a custom key ARN as "DefaultEncryption" and we just pass ServerSideEncryption: 'aws:kms', which key is going to be used? The user's key or aws/s3 ?

Contributor Author

I tested the following setups:

  • encryption key type: a) Amazon S3 key (SSE-S3) -> yields `{ ServerSideEncryption: 'AES256' }`; no key configurable
  • encryption key type: b) AWS Key Management Service key (SSE-KMS)
    • b1) AWS managed key (aws/s3)
    • b2) custom AWS KMS key

see details below for the configurations.

In summary:
We set the SSEAlgorithm, and the bucket automatically uses whatever encryption is currently configured as its default. The bucket stores which key is the default for encryption. In fact the bucket would apply its default encryption anyway, but we need the header to get through the SCP rules.
I tried all the scenarios below and deploy works. It would not make sense for default encryption to use a different key than the one configured on the bucket.

a)

{
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "AES256"
                },
                "BucketKeyEnabled": false
            }
        ]
    }
}

b1)

{
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:703956386855:alias/aws/s3"
                },
                "BucketKeyEnabled": true
            }
        ]
    }
}

b2)

{
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:703956386855:key/833b292e-bc1f-47ec-af3f-1c1b6504815b"
                },
                "BucketKeyEnabled": true
            }
        ]
    }
}
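Extracting the default algorithm from responses like a), b1) and b2) above comes down to reading the first rule's `ApplyServerSideEncryptionByDefault`. A hedged sketch follows; `defaultAlgorithm` and the interface name are illustrative, while the field names follow the documented `GetBucketEncryption` response shape:

```typescript
// Hedged sketch: read the default algorithm out of a GetBucketEncryption
// response such as the a), b1) and b2) examples above. The interface mirrors
// the documented response shape; `defaultAlgorithm` is an illustrative name.
interface EncryptionResponse {
  ServerSideEncryptionConfiguration?: {
    Rules?: Array<{
      ApplyServerSideEncryptionByDefault?: {
        SSEAlgorithm?: string;
        KMSMasterKeyID?: string;
      };
    }>;
  };
}

function defaultAlgorithm(resp: EncryptionResponse): string | undefined {
  const rules = resp.ServerSideEncryptionConfiguration?.Rules ?? [];
  return rules[0]?.ApplyServerSideEncryptionByDefault?.SSEAlgorithm;
}
```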



Hmm that's odd -- the behaviour we are seeing is that when CDK only includes the x-amz-server-side-encryption: aws:kms header, the objects end up being encrypted with aws/s3 key, which currently is not permitted by our bucket policy.

Contributor Author
@ArlindNocaj ArlindNocaj Jan 5, 2022

@vludax
Do you have default bucket encryption enabled for the bucket (with specific KMS Key)?
Did you use cdk bootstrap to reinitialize the cdk resources?
Can you also provide the bucket policy, so that I can reproduce it?

The following bucket policy statement blocks the uploads -- when we remove this statement, the files get uploaded successfully and encrypted with aws/s3:

{
    "Sid": "Enforce default bucket encryption by rejecting any overrides",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::<bucket-name>/*",
    "Condition": {
        "Null": {
            "s3:x-amz-server-side-encryption": "false"
        },
        "StringNotLikeIfExists": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:ap-southeast-2:XXX:key/YYY"
        }
    }
}

We did not run the bootstrap as we are using a custom synthesizer in our CDK -- that doesn't seem to specify encryption options anywhere. I'd also note that our uploads are working as expected with v1.118.0 of the CDK -- the files are successfully uploaded and encrypted with the right KMS key with the above policy statement in place.

Contributor Author

@vludax please note that this is already being addressed and fixed here: #18262

DOES_NOT_EXIST
}

async function bucketEncryption(s3: AWS.S3, bucket: string) {
Contributor

Can we add caching to this function so we're not going to do the same lookup more than once?

Contributor Author

There are a lot of methods that are not cached, like bucketOwner etc. If we add caching, it would make sense to add it at a level where the cache actually has an effect. Right now I do not understand the system well enough to add caching.

Contributor

Fair enough. Will do that myself then.

paramsEncryption = { ServerSideEncryption: 'aws:kms' };
break;
case BucketEncryption.DOES_NOT_EXIST:
this.host.emitMessage(EventType.UPLOAD, `No bucket named '${destination.bucketName}'. Is account ${await account()} bootstrapped?`);
Contributor

I'm not sure at what level EventType.UPLOAD shows, but I'd like this level to be one that is not displayed by default (but only if you add -v). Can you confirm that?

Contributor Author

The mapping is as follows:

const EVENT_TO_LEVEL: Record<EventType, LogLevel> = {
  build: 'verbose',
  cached: 'verbose',
  check: 'verbose',
  debug: 'verbose',
  fail: 'error',
  found: 'verbose',
  start: 'info',
  success: 'info',
  upload: 'verbose',
};

Nevertheless, I adjusted the logging to the debug event type, since the message is not directly related to the upload.

@rix0rrr rix0rrr changed the title fix(cli): adding server side encryption SSE header based on bucket encryption fix(cli): S3 asset uploads are rejected by commonly used encryption SCP (bootstrap stack v9) Nov 24, 2021
@rix0rrr rix0rrr changed the title fix(cli): S3 asset uploads are rejected by commonly used encryption SCP (bootstrap stack v9) fix(cli): S3 asset uploads are rejected by commonly referenced encryption SCP (bootstrap stack v9) Nov 24, 2021
@rix0rrr rix0rrr changed the title fix(cli): S3 asset uploads are rejected by commonly referenced encryption SCP (bootstrap stack v9) fix(cli): S3 asset uploads are rejected by commonly referenced encryption SCP (introduces bootstrap stack v9) Nov 24, 2021
@mergify mergify bot dismissed rix0rrr’s stale review November 25, 2021 07:35

Pull request has been modified.

@mergify
Contributor

mergify bot commented Nov 26, 2021

Thank you for contributing! Your pull request will be updated from master and then merged automatically (do not update manually, and be sure to allow changes to be pushed to your fork).

@aws-cdk-automation
Collaborator

AWS CodeBuild CI Report

  • CodeBuild project: AutoBuildProject89A8053A-LhjRyN9kxr8o
  • Commit ID: f6dbef1
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

@mergify mergify bot merged commit 8191f1f into aws:master Nov 26, 2021

beezly pushed a commit to beezly/aws-cdk that referenced this pull request Nov 29, 2021
pedrosola pushed a commit to pedrosola/aws-cdk that referenced this pull request Dec 1, 2021
rix0rrr added a commit that referenced this pull request Dec 14, 2021
In #17668, cross-account S3 asset publishing was broken.

The reason is that the `account()` function was always broken, using the
default account instead of the target account. However, previously this
function was only called in an irrecoverable situation anyway, and its
failure would be rare.

The recent change also calls this function for logging purposes in
a happy-case scenario, but then triggers an error during the logging.

Fix the invocation to use the right account.

Fixes #17988.
mergify bot pushed a commit that referenced this pull request Dec 14, 2021
rix0rrr added a commit that referenced this pull request Jan 10, 2022
Even though a custom KMS key is specified as the default encryption key
on the file assets bucket, all uploaded assets are encrypted using the
default key.

The reason is that in #17668 we added an explicit `encryption: kms`
parameter to the `putObject` operation, so that an SCP that is
commonly in use across large organizations to prevent files from
ending up unencrypted, can be used (the SCP can only validate
call parameters, such as whether the `putObject` call includes
the parameter that requests encryption, not the effective end result,
such as whether a file would end up encrypted).

However, we did not include the KMS Key Id into the `putObject`
request, which caused S3 to fall back to the default key.

Solution: also look up the key id and pass that along as well.

Fixes #18262.
mergify bot pushed a commit that referenced this pull request Jan 10, 2022
TikiTDO pushed 3 commits to TikiTDO/aws-cdk that referenced this pull request Feb 21, 2022
Labels
@aws-cdk/aws-s3 Related to Amazon S3 package/tools Related to AWS CDK Tools or CLI
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[cli] deploy cannot specify S3 SSE for asset upload
5 participants