This repository assumes familiarity with the starter repository and its core concepts; basic setup should already be completed. It focuses solely on Terraform and AWS.
## Runtime Configuration
Runtime Configuration allows you to set up and manage configurations that define how your infrastructure is deployed and managed. It helps you control various aspects such as environment variables, command execution, and more.
More information: Runtime Configuration
## AWS Cloud Integration
AWS Cloud Integration enables you to connect your Spacelift account with your AWS environment, facilitating automated deployments and infrastructure management.
More information: AWS Cloud Integration
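As an illustration, an integration of this kind might be declared with the Spacelift Terraform provider roughly as follows. This is a minimal sketch: the resource and stack references are illustrative, and in this repository the administrative stack creates the integration for you from the `TF_VAR_role_name`/`TF_VAR_role_arn` variables set during setup.

```hcl
# Registers an AWS cloud integration in Spacelift; Spacelift assumes this
# IAM role to generate short-lived credentials for runs.
resource "spacelift_aws_integration" "this" {
  name     = var.role_name # e.g. the IAM role's name
  role_arn = var.role_arn  # ARN of the IAM role Spacelift should assume
}

# Attaches the integration to a stack (the stack reference is hypothetical),
# allowing reads (plans) and writes (applies).
resource "spacelift_aws_integration_attachment" "this" {
  integration_id = spacelift_aws_integration.this.id
  stack_id       = spacelift_stack.example.id
  read           = true
  write          = true
}
```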
## Private Workers
Private Workers allow you to run jobs on dedicated, isolated instances within your VPC, enhancing security and compliance.
More information: Private Workers
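For reference, a worker pool can be declared with the Spacelift Terraform provider along these lines. This is a sketch with illustrative names, assuming the provider's behavior of generating a key pair when no CSR is supplied; the resulting config token and private key are what the EC2 workers use to register with Spacelift.

```hcl
# A private worker pool; with no CSR supplied, the provider is assumed to
# generate the key pair itself.
resource "spacelift_worker_pool" "ec2" {
  name        = "ec2-private-workers" # illustrative name
  description = "Private workers running inside our VPC"
}

# The pool's config token and private key, needed by the workers to register.
output "worker_pool_config" {
  value     = spacelift_worker_pool.ec2.config
  sensitive = true
}

output "worker_pool_private_key" {
  value     = spacelift_worker_pool.ec2.private_key
  sensitive = true
}
```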
## Drift Detection
Drift Detection helps identify changes in your infrastructure that occur outside of your Spacelift configurations, ensuring that your deployed infrastructure remains consistent with your defined state.
More information: Drift Detection
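In Terraform terms, a drift detection schedule like the one this repository sets up might look roughly like this (a sketch; the stack reference is hypothetical):

```hcl
# Checks the stack for drift every 15 minutes and, when drift is found,
# starts a tracked run to reconcile it.
resource "spacelift_drift_detection" "example" {
  stack_id  = spacelift_stack.monitored.id # hypothetical stack reference
  schedule  = ["*/15 * * * *"]             # cron syntax, every 15 minutes
  reconcile = true
}
```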
## Stack Dependencies
Stack Dependencies manage the relationships between different stacks, ensuring that dependencies are respected and resources are provisioned or destroyed in the correct order.
More information: Stack Dependencies
## Contexts with Auto Attachment and Hooks
Contexts allow you to define reusable sets of environment variables and settings that can be automatically attached to stacks. Hooks enable you to run custom scripts or commands at various points in the stack lifecycle.
More information: Contexts with Auto Attachment and Hooks
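Sketched with the Spacelift Terraform provider, a context of this kind might look as follows, assuming the provider's hook arguments on contexts. The hook commands and output file name are illustrative, not this repository's exact ones; the `autoattach:` label is what makes the context attach itself to any stack carrying the matching label.

```hcl
resource "spacelift_context" "tflint" {
  name   = "Tflint"
  labels = ["autoattach:tflint"] # auto-attaches to stacks labeled "tflint"

  # Lifecycle hooks: run tflint before the plan phase (commands illustrative).
  before_plan = [
    "tflint --init",
    "tflint --format=json > tflint.custom.spacelift.json || true",
  ]
}
```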
## More Complex Policies and Integrating with Security Tools
This section covers advanced policy configurations and the integration of security tools like Checkov to enhance your infrastructure's security posture.
More information: Integrating Security Tools
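Policies themselves are Rego documents; with the Terraform provider they can be registered roughly like this (a sketch; the file path is hypothetical):

```hcl
# Registers a plan policy whose Rego body lives alongside the code;
# plan policies can warn about or deny resource changes.
resource "spacelift_policy" "tflint_checker" {
  name = "Tflintchecker"
  type = "PLAN"
  body = file("${path.module}/policies/tflint.rego") # hypothetical path
}
```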
- Fork this repository.
- Via the UI, create an administrative stack in the root space pointing to this repository.
  - Name this stack `intermediate-repo`.
  - Set the project root as `Getting-started`. The project root points to the directory within the repo where the project should start executing. This is especially useful for monorepos.
- Add two environment variables to this stack:
  - `TF_VAR_role_name`
  - `TF_VAR_role_arn`

  Follow the setup guide from AWS to retrieve these values.

  Note: Do not manually create the cloud integration; the stack will use these environment variables to do this for you.
- Trigger the `intermediate-repo` stack.
### Explanation of resources being created
- Creating a space for all our resources, isolating them from the rest of our account.
- Creating a stack that uses an AWS EC2 private worker module.
- Creating a stack with a drift detection schedule.
- Creating two stacks linked by a stack dependency.
- Creating two policies, which are discussed in more detail later.
- Mounting a file containing a JSON-encoded list of Spacelift's outgoing IPs.
- Creating a worker pool, along with its private key and worker pool config.
- Setting an environment variable with the worker pool ID so that other stacks can use the private worker pool.
- Setting environment variables for the private key and worker pool config.
Note: We are using a runtime config file with the stack default AWS region set to `eu-west-1`, which will apply to all stacks.
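The runtime configuration lives in `.spacelift/config.yml`; setting a default for every stack looks roughly like this (a sketch of the schema as we understand it):

```yaml
version: "1"

stack_defaults:
  environment:
    AWS_DEFAULT_REGION: eu-west-1 # applied to all stacks in this repo
```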
- Create an admin API key in the `intermediate-repo` space.
- Save these variables on the private worker stack:
  - `TF_VAR_spacelift_api_key_id`
  - `TF_VAR_spacelift_api_key_secret`
  - `TF_VAR_spacelift_api_endpoint` (`https://<youraccountname>.app.spacelift.io`)

  These variables are needed to allow for autoscaling.
### Explanation of the Private Worker Stack
- This stack uses the following module:
- The `intermediate-repo` stack has already added the variables relating to the worker pool, plus a mounted file with the required IP addresses.
- Triggering a run on this stack will:
  - Create your VPC, subnets, and a security group with unrestricted egress and ingress restricted to the required IP addresses.
  - Create your EC2 private worker instance.
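The IP allowlist mentioned above could be produced along these lines. This is a sketch assuming the provider's `spacelift_ips` data source and `spacelift_mounted_file` resource; the stack reference and path are hypothetical.

```hcl
# Spacelift's current outgoing IP addresses.
data "spacelift_ips" "this" {}

# Mounts a JSON-encoded list of those IPs into the worker stack's
# workspace, where the security group rules can read it.
resource "spacelift_mounted_file" "ips" {
  stack_id      = spacelift_stack.worker.id # hypothetical stack reference
  relative_path = "source/ips.json"         # hypothetical path
  content       = base64encode(jsonencode(data.spacelift_ips.this.ips))
}
```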
- Trigger a run on the drift detection stack.
  - Optionally add the `TF_VAR_drift_detection_schedule` environment variable (it defaults to every 15 minutes). Example value: `["*/15 * * * *"]` for every 15 minutes.
- Trigger the stack with drift detection enabled. It will create a context.
- Manually add a label to this context via the UI.
- After 15 minutes, check whether a reconcile run was started.
- Trigger the `Dependencies stack`.
- Once it has finished, trigger a run on the Infra stack to create the `DB_CONNECTION_STRING` output, which will then automatically start a run on the App stack and pass this output in as an input.

[Here is a walkthrough video](#10 (comment))
### Explanation of Stack Dependencies
- This stack will create two stacks and establish a stack dependency between them, with a shared output.
- The Infra stack will output `DB_CONNECTION_STRING` and save it as the input `TF_VAR_APP_DB_URL` on the App stack.
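Under the hood, a dependency with a shared output like this can be expressed with the provider roughly as follows. In this sketch the stack references are hypothetical, while the output and input names are the ones used here.

```hcl
# The App stack depends on the Infra stack.
resource "spacelift_stack_dependency" "app_on_infra" {
  stack_id            = spacelift_stack.app.id   # hypothetical reference
  depends_on_stack_id = spacelift_stack.infra.id # hypothetical reference
}

# Maps the Infra stack's output onto an environment variable of the App stack.
resource "spacelift_stack_dependency_reference" "db_url" {
  stack_dependency_id = spacelift_stack_dependency.app_on_infra.id
  output_name         = "DB_CONNECTION_STRING"
  input_name          = "TF_VAR_APP_DB_URL"
}
```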
## Activity 1: Contexts and Policies
- Our context `Tflint` and policy `Tflintchecker` were both created with the label `autoattach:tflint`.
- Add the label `tflint` to the `Dependencies stack` and watch both the context and the policy get attached to the stack.
- Trigger a run on this stack. The hooks will now install `tflint`, run the tool, and then save its findings in a third-party metadata section of our policy input, which we then use in our policy.
More information: Integrating Security Tools with Spacelift
## Activity 2: Pull Request Notification
- Open a pull request against any of the stacks.
- Wait for a comment from the PR notification policy that was created. It will add a comment based on the following conditions:
  - If the run failed at any stage for a reason other than a policy, it will post the relevant logs.
  - If the run failed due to a policy, it will post a summary of the policies involved and any relevant deny messages.
  - If the run finished successfully, it will post a summary of the run, the policies used, and the changes to be made.
More information: Notification Policy
- Run `terraform destroy -auto-approve` as a task in the `intermediate-repo` stack.
### Explanation of Resource Destruction
- Our stack has also created stack destructors, which destroy the resources managed by the created stacks before the stacks themselves are removed, ensuring that everything is cleaned up.
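For reference, a stack destructor is a single resource per stack (a sketch; the stack reference is hypothetical):

```hcl
# Ensures the stack's resources are destroyed before the stack itself
# is deleted from Spacelift.
resource "spacelift_stack_destructor" "app" {
  stack_id = spacelift_stack.app.id # hypothetical stack reference
}
```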
More reading: Ordered Stack Creation and Deletion