This repository contains stacks for various solutions in AWS. These stacks are used for Proof-of-Concept (POC) and demonstration.
- Review and change the configurations before using them in production: the current configurations should not be used in production without further review and adaptation.
- Be mindful of the costs incurred: while these solutions are designed to be cost-effective, the deployed resources will still incur charges.
This repository includes the following solutions:

- Initial Setup
- Multi-Architecture Pipeline
- Elastic Container Service (ECS)
- Elastic Kubernetes Service (EKS)
- API Gateway and Lambda
- Egress VPC
- Application Load Balancer (ALB) Rule Restriction
## Initial Setup

- Install npm packages with `npm install`.

- Configure AWS CLI in order to bootstrap your AWS account for the CDK.

  ```shell
  aws configure set aws_access_key_id {{ACCESS_KEY_ID}}
  aws configure set aws_secret_access_key {{SECRET_ACCESS_KEY}}
  aws configure set region {{REGION, e.g. ap-southeast-1}}
  aws configure set output json
  ```
- Bootstrap the AWS account for the CDK with `cdk bootstrap`.

- Create an EC2 Key Pair named "EC2DefaultKeyPair" (leave the other settings as default).

- Rename 'sample.env' to '.env' and fill in all the values.

- Create a connection in Developer Tools (ensure that you are creating it in the intended region). Copy the ARN of the connection to your `.env` file. This is required for solutions like the Multi-Architecture Pipeline.
## Multi-Architecture Pipeline

```shell
cdk deploy multi-arch-pipeline
```

The pipeline creates Docker images for the amd64 and arm64 architectures and stores them in an Elastic Container Registry (ECR) repository. A Docker manifest is also created and pushed to the registry so that the Docker image for the appropriate architecture can be retrieved automatically with the 'latest' tag.
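The manifest step described above can be sketched with the Docker CLI. This is an illustrative dry run: the repository URI and the per-architecture tags (`latest-amd64`, `latest-arm64`) are placeholders rather than the pipeline's actual names, and each command is prefixed with `echo` so nothing is executed against a real registry (remove the `echo` to run them).

```shell
# Placeholder ECR repository URI (substitute your account, region, and repo)
REPO="123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/demo"

# Combine the per-architecture images under one manifest list
echo docker manifest create "$REPO:latest" "$REPO:latest-amd64" "$REPO:latest-arm64"
# Tag each entry with its architecture
echo docker manifest annotate --arch amd64 "$REPO:latest" "$REPO:latest-amd64"
echo docker manifest annotate --arch arm64 "$REPO:latest" "$REPO:latest-arm64"
# Push the manifest list so 'latest' resolves to the right image per architecture
echo docker manifest push "$REPO:latest"
```

With the manifest list in place, a single `docker pull <repo>:latest` retrieves the amd64 or arm64 image depending on the pulling host's architecture.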
## Elastic Container Service (ECS)

Dependency: Multi-Architecture Pipeline

```shell
cdk deploy ecs
```

Creates a new ECS cluster. The cluster has an ECS service that uses Fargate for compute resources. An Application Load Balancer (ALB) is created to expose the ECS service. The cluster also has an EC2 Auto Scaling Group (ASG) as a capacity provider that scales at 70% CPU utilization. A CloudWatch dashboard is created to visualize the CPU utilization of both services.
```shell
cdk deploy ecs-cicd
```

Creates a new CodePipeline, an ECR repository, and an S3 bucket to build and deploy a container image to the ECS cluster.
## Elastic Kubernetes Service (EKS)

Dependency: Multi-Architecture Pipeline

```shell
# Deploy a cluster
cdk deploy eks

# Deploy a cluster with Cluster Autoscaler installed
cdk deploy eks-ca
```
These resources will be created:
- A VPC with public and private subnets and a NAT gateway
- An EKS cluster with 1 managed node group (or 2 if Cluster Autoscaler is installed)
- A bastion host to manage the EKS cluster
- The necessary IAM roles and policies
Access the bastion host with 'ec2-user' using SSH or EC2 Instance Connect.
❗ The commands listed in the EKS sections below should be executed on the bastion host. Some environment variables (e.g. `AWS_REGION`, `AWS_ACCOUNT_ID`, `AWS_EKS_CLUSTER`) are already populated on the bastion host.
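A quick way to confirm those variables are present before running the later commands (the variable names are taken from the note above; the loop itself is just a convenience sketch):

```shell
# Report which of the expected bastion-host variables are set or missing
for var in AWS_REGION AWS_ACCOUNT_ID AWS_EKS_CLUSTER; do
  # Indirect lookup of the variable named in $var
  val=$(eval "echo \"\$$var\"")
  if [ -z "$val" ]; then
    echo "Missing: $var"
  else
    echo "OK: $var=$val"
  fi
done
```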
```shell
aws configure set aws_access_key_id {{ACCESS_KEY_ID}}
aws configure set aws_secret_access_key {{SECRET_ACCESS_KEY}}

./setup-bastion-host.sh
```

The region is set by 'setup-bastion-host.sh' automatically on the bastion host.
Verify the connection to the cluster with `kubectl get svc`.
- Download the bash script to install / remove add-ons.

  ```shell
  curl -o eks-add-ons.sh https://raw.githubusercontent.com/tchangkiat/aws-cdk-stacks/main/scripts/EKS/eks-add-ons.sh
  chmod +x eks-add-ons.sh
  ```

- Install add-ons with the `-i` argument or remove add-ons with the `-r` argument. Both the ID and the alias of an add-on can be used.
Example #1: Install Karpenter

```shell
./eks-add-ons.sh -i karpenter
# OR
./eks-add-ons.sh -i 1
```

Example #2: Install multiple add-ons

```shell
./eks-add-ons.sh -i "karpenter load-balancer-controller"
# OR
./eks-add-ons.sh -i "1 2"
```

Example #3: Remove multiple add-ons

```shell
./eks-add-ons.sh -r "karpenter load-balancer-controller"
# OR
./eks-add-ons.sh -r "1 2"
```
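The ID-to-alias equivalence used in the examples above can be illustrated with a small helper. This is only a sketch: the mapping of IDs 1 and 2 is inferred from Examples #2 and #3, and the authoritative mapping lives in `eks-add-ons.sh` itself.

```shell
# Resolve an add-on ID to its alias; anything that is not a known ID
# is assumed to already be an alias and is passed through unchanged.
addon_alias() {
  case "$1" in
    1) echo "karpenter" ;;
    2) echo "load-balancer-controller" ;;
    *) echo "$1" ;;
  esac
}

addon_alias 1          # prints "karpenter"
addon_alias karpenter  # prints "karpenter"
```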
Available add-ons:

- Karpenter ("karpenter")
- AWS Load Balancer Controller ("load-balancer-controller")
- AWS EBS CSI Driver ("ebs-csi-driver")
- Amazon CloudWatch Container Insights ("container-insights")
- Prometheus and Grafana ("prometheus-grafana")
  - Prerequisite: AWS EBS CSI Driver
- Ingress NGINX Controller ("ingress-nginx-controller")
  - Also installs cert-manager
- AWS App Mesh Controller ("app-mesh-controller")
- AWS Gateway API Controller ("gateway-api-controller")
- Amazon EMR on EKS ("emr-on-eks")
- JupyterHub ("jupyterhub")
  - Prerequisites: Karpenter, AWS Load Balancer Controller, and AWS EBS CSI Driver
- Ray ("ray")
  - Prerequisite: Karpenter
- Argo CD ("argocd")
  - Prerequisites: Karpenter, AWS Load Balancer Controller
- Open Policy Agent Gatekeeper ("opa-gatekeeper")
  - Includes a constraint template and a constraint
❗ Prerequisite #1: Deploy the Multi-Architecture Pipeline. To use your own container image from a registry instead, replace `<URL>` and execute `export CONTAINER_IMAGE_URL=<URL>`.

❗ Prerequisite #2: Install the AWS Load Balancer Controller.

- Deploy the application.

  ```shell
  curl https://raw.githubusercontent.com/tchangkiat/sample-express-api/master/eks/deployment.yaml -o sample-deployment.yaml
  sed -i "s|\[URL\]|${CONTAINER_IMAGE_URL}|g" sample-deployment.yaml
  kubectl apply -f sample-deployment.yaml
  ```

- Remove the application.

  ```shell
  kubectl delete -f sample-deployment.yaml
  ```
- Deploy the Metrics Server:

  ```shell
  kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  ```

- The above deployment may take a few minutes to complete. Check the status with this command:

  ```shell
  kubectl get apiservice v1beta1.metrics.k8s.io -o json | jq '.status'
  ```

- Assuming that the sample application was deployed, execute the following command to configure HPA for the deployment:

  ```shell
  kubectl autoscale deployment sample-express-api -n sample \
    --cpu-percent=50 \
    --min=1 \
    --max=10
  ```
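The `kubectl autoscale` command above is equivalent to applying a declarative HorizontalPodAutoscaler manifest, sketched below under the assumption that the cluster serves the `autoscaling/v2` API (available on recent Kubernetes versions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-express-api
  namespace: sample
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-express-api
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

Applying a manifest like this with `kubectl apply -f` lets the autoscaling settings live under version control instead of an imperative command.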
- Check the details of the HPA.

  ```shell
  kubectl get hpa -n sample
  ```

- Remove the HPA and the Metrics Server.

  ```shell
  kubectl delete hpa sample-express-api -n sample
  kubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  ```

Credit: EKS Workshop
- Install the prerequisites if they are not installed yet.

  ```shell
  ./eks-add-ons.sh -i "karpenter load-balancer-controller"
  ```

- Set up Argo CD and install the Argo CD CLI.

  ```shell
  ./eks-add-ons.sh -i argocd
  ```

- Create an application in Argo CD and link it to the repository. Nginx is used as an example below.

  ```shell
  export EKS_CLUSTER_ARN=$(kubectl config view -o jsonpath='{.current-context}')
  export ARGOCD_CLUSTER_URL=$(argocd cluster list | grep $EKS_CLUSTER_ARN | awk '{print $1}')
  kubectl create namespace nginx
  argocd app create nginx --repo https://github.com/tchangkiat/aws-cdk-stacks.git --path assets/argocd --dest-server $ARGOCD_CLUSTER_URL --dest-namespace nginx
  ```
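To see how the `ARGOCD_CLUSTER_URL` lookup above works: `argocd cluster list` prints a table whose first column is the server URL, and the grep/awk pipeline selects the row matching the cluster ARN and extracts that column. The row below is illustrative sample output, not real data.

```shell
# Illustrative row in the shape 'SERVER  NAME  VERSION  STATUS ...'
sample_row="https://ABC123.gr7.ap-southeast-1.eks.amazonaws.com arn:aws:eks:ap-southeast-1:111122223333:cluster/eks 1.29 Successful"

# The same pipeline as above, applied to the sample row
echo "$sample_row" | grep "arn:aws:eks" | awk '{print $1}'
# prints "https://ABC123.gr7.ap-southeast-1.eks.amazonaws.com"
```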
- Sync the application in Argo CD to deploy Nginx.

  ```shell
  argocd app sync nginx
  ```

- Get the load balancer's CNAME to access Nginx.

  ```shell
  kubectl get svc -n nginx | awk '{print $4}'
  ```

- Remove the Nginx application from Argo CD.

  ```shell
  argocd app delete nginx -y
  kubectl delete ns nginx
  ```

- Remove Argo CD.

  ```shell
  ./eks-add-ons.sh -r argocd
  ```

- Remove the prerequisites.

  ```shell
  ./eks-add-ons.sh -r "karpenter load-balancer-controller"
  ```
- Install the AWS App Mesh Controller with `./eks-add-ons.sh -i app-mesh-controller`.

- The Sample Application is used for the following App Mesh setup. Please set it up first before proceeding.

- Generate the necessary manifest and set up App Mesh.

  ```shell
  curl -o setup-app-mesh.sh https://raw.githubusercontent.com/tchangkiat/aws-cdk-stacks/main/scripts/EKS/setup-app-mesh.sh
  chmod +x setup-app-mesh.sh

  # Command format is ./setup-app-mesh.sh <application name> <namespace> <container port>
  ./setup-app-mesh.sh sample-express-api sample 8000
  ```

- After the App Mesh resources are set up, execute `kubectl rollout restart deployment sample-express-api -n sample` to restart the deployment. Verify that the Envoy proxy container is injected into each Pod of the deployment with `kubectl describe pod <Pod Name> -n sample`.

- Execute `kubectl rollout restart deployment sample-express-api-gateway -n sample` to re-create the Virtual Gateway Pod. This resolves the "readiness probe failed" error (i.e. the status is "Running" but "Ready" is "0/1").
❗ Modify your source code to use the AWS X-Ray SDK. This was already done for the Sample Application.

- Update the App Mesh Controller to enable X-Ray so that the X-Ray daemon container is injected into the Pods automatically.

  ```shell
  helm upgrade -i appmesh-controller eks/appmesh-controller --namespace appmesh-system --set region=$AWS_REGION --set serviceAccount.create=false --set serviceAccount.name=appmesh-controller \
    --set tolerations[0].key=CriticalAddonsOnly \
    --set tolerations[0].operator=Exists \
    --set tolerations[0].effect=NoSchedule \
    --set nodeSelector."kubernetes\\.io/arch"=arm64 \
    --set image.repository=public.ecr.aws/appmesh/appmesh-controller \
    --set image.tag=v1.12.7-linux_arm64 \
    --set tracing.enabled=true \
    --set tracing.provider=x-ray
  ```
- Execute `kubectl rollout restart deployment sample-express-api -n sample` to restart the deployment. Verify that the X-Ray daemon container is injected into each Pod of the deployment with `kubectl describe pod <Pod Name> -n sample`.

- Execute `kubectl rollout restart deployment sample-express-api-gateway -n sample` to re-create the Virtual Gateway Pod. The "Ready" value of the Pod should be "2/2" because the x-ray-daemon container is injected into the Pod.

- Remove the App Mesh setup of the sample application.

  ```shell
  curl -o remove-app-mesh.sh https://raw.githubusercontent.com/tchangkiat/aws-cdk-stacks/main/scripts/EKS/remove-app-mesh.sh
  chmod +x remove-app-mesh.sh

  # Command format is ./remove-app-mesh.sh <application name> <namespace>
  ./remove-app-mesh.sh sample-express-api sample
  ```

- Remove the AWS App Mesh Controller with `./eks-add-ons.sh -r app-mesh-controller`.
❗ Prerequisite #1: Deploy the Multi-Architecture Pipeline. To use your own container image from a registry instead, replace `<URL>` and execute `export CONTAINER_IMAGE_URL=<URL>`.

❗ Prerequisite #2: Install the AWS Load Balancer Controller.

❗ Prerequisite #3: Install the Sample Application.

- Install the AWS Gateway API Controller with `./eks-add-ons.sh -i gateway-api-controller`.

- Set up the Gateway for the Sample Application.

  ```shell
  curl -o vpc-lattice-gateway.yaml https://raw.githubusercontent.com/tchangkiat/sample-express-api/master/eks/vpc-lattice/vpc-lattice-gateway.yaml
  kubectl apply -f vpc-lattice-gateway.yaml
  ```

- Set up the HTTPRoute for the Sample Application.

  ```shell
  curl -o vpc-lattice-httproute.yaml https://raw.githubusercontent.com/tchangkiat/sample-express-api/master/eks/vpc-lattice/vpc-lattice-httproute.yaml
  kubectl apply -f vpc-lattice-httproute.yaml
  ```

- Remove the HTTPRoute for the Sample Application.

  ```shell
  kubectl delete -f vpc-lattice-httproute.yaml
  ```

- Remove the Gateway for the Sample Application.

  ```shell
  kubectl delete -f vpc-lattice-gateway.yaml
  ```

- Remove the AWS Gateway API Controller with `./eks-add-ons.sh -r gateway-api-controller`.
- Install the prerequisites if they are not installed yet.

  ```shell
  ./eks-add-ons.sh -i "karpenter load-balancer-controller ebs-csi-driver"
  ```

- Install JupyterHub and Ray.

  ```shell
  ./eks-add-ons.sh -i "jupyterhub ray"
  ```

- Once all the Pods are 'Running', run the following command in the terminal on your client machine. Access JupyterHub at `http://localhost:8080` and the Ray Dashboard at `http://localhost:8265`. JupyterHub may take a few minutes to initialize after installation; during this time, you will see a blank page with a loading animation in your browser when you access the URL.

  ```shell
  kubectl port-forward --namespace=jupyter service/proxy-public 8080:http & \
  kubectl port-forward --address 0.0.0.0 service/raycluster-head-svc 8265:8265 &
  ```
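When you are done, the two background port-forwards can be stopped from the same shell session. This assumes they are that shell's only background jobs, so they hold job IDs `%1` and `%2`:

```shell
# Terminate the background kubectl port-forward jobs started above;
# the guard keeps the command from failing if they have already exited.
kill %1 %2 2>/dev/null || true
```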
- Use the username and password found in the terminal (example below) to log in to JupyterHub.

  ```
  JupyterHub Username: user1 / admin1
  JupyterHub Password: <generated password>
  ```

- Once you have accessed JupyterHub, you can upload and use the example notebook from `/assets/ray/pytorch-ray-example.ipynb`.

- Remove JupyterHub and Ray.

  ```shell
  ./eks-add-ons.sh -r "jupyterhub ray"
  ```

- Remove the prerequisites.

  ```shell
  ./eks-add-ons.sh -r "karpenter load-balancer-controller ebs-csi-driver"
  ```
## API Gateway and Lambda

```shell
sh assets/api-gateway/lambda-zip.sh
cdk deploy api-gateway
```

Deploys a REST API in API Gateway with a Lambda integration and a Lambda authorizer.

```shell
# Replace '<...>' with the respective values

# Get a JWT token
curl https://<API ID>.execute-api.ap-southeast-1.amazonaws.com/v1/auth

# Verify
curl -H "Authorization: <JWT token retrieved from the previous command>" https://<API ID>.execute-api.ap-southeast-1.amazonaws.com/v1
```
## Egress VPC

```shell
cdk deploy egress-vpc
```

Deploys an egress VPC with a Transit Gateway. VPN-related resources are deployed for the VPN connection between the Transit Gateway and the simulated customer's on-premises environment.
- Follow sections 4 and 5 in the article "Simulating Site-to-Site VPN Customer Gateways Using strongSwan" to deploy an EC2 instance with strongSwan and establish a Site-to-Site VPN. Below are the values to fill in for some of the parameters of the CloudFormation template used in the article (for the other parameters, follow the instructions in section 5 of the article):

  - Stack Name: `egress-vpc-vpn`
  - Name of secret in AWS Secrets Manager for VPN Tunnel 1 Pre-Shared Key: `egress-vpc-psk1`
  - Name of secret in AWS Secrets Manager for VPN Tunnel 2 Pre-Shared Key: `egress-vpc-psk2`
  - VPC ID: select `egress-vpc-customer-vpc`
  - VPC CIDR Block: `30.0.0.0/16`
  - Subnet ID for VPN Gateway: select `egress-vpc-customer-vpc/PublicSubnet1`
  - Elastic IP Address Allocation ID: can be found in the output of the CDK stack. The value should start with `eipalloc-`
- ❗ Wait until the VPN Gateway (EC2 instance) is created and verify that both IPsec tunnels are 'UP' (Site-to-Site VPN Connections > egress-vpc-vpn > Tunnel details) before proceeding to the next two steps. This will take a few minutes.
- Add a route to `20.0.0.0/16` in the route table (Target: Instance > infra-vpngw-test) of `egress-vpc-customer-vpc/PrivateSubnet1` in order to route requests from instances in `egress-vpc-customer-vpc/PrivateSubnet1` to instances in `egress-vpc-vpc-1/PrivateSubnet1`.

- Create a Transit Gateway Association and Propagation in the Transit Gateway Route Table for the VPN Transit Gateway attachment. Once you have completed this step successfully, you should see a route to `30.0.0.0/16` propagated in the Transit Gateway Route Table. Note: this step cannot be automated because there is no way to retrieve the VPN Transit Gateway attachment and then create an association and propagation programmatically.
❗ The connection between `egress-vpc-vpc-1` and `egress-vpc-customer-vpc` will be established a few minutes after completing the step above.
- Connect to `egress-vpc-demo-instance` and `egress-vpc-demo-instance-2` using Session Manager. If you encounter the error "Unable to start command: failed to start pty since RunAs user ssm-user does not exist", ensure that the "Run As" configuration in Session Manager > Preferences is `ec2-user`.

- Use `ifconfig` in the instances to retrieve their private IP addresses.

- Ping each instance from the other using the private IP addresses - e.g. `ping 30.0.1.30` in `egress-vpc-demo-instance`. You should receive results similar to those shown below:

  - `egress-vpc-demo-instance`: 64 bytes from 30.0.1.30: icmp_seq=1 ttl=253 time=2.49 ms
  - `egress-vpc-demo-instance-2`: 64 bytes from 20.0.0.20: icmp_seq=1 ttl=252 time=3.52 ms
- Ping a domain (e.g. amazon.com) from one of these instances to verify Internet access through the egress VPC. You should observe results similar to those shown above.
- Delete the `egress-vpc-vpn` and `egress-vpc` CloudFormation stacks.
## Application Load Balancer (ALB) Rule Restriction

```shell
cdk deploy alb-rule-restriction
```

- Connect to the Bastion Host and run the following command. You should receive a response from Nginx.

  ```shell
  curl <ALB DNS Name>:80
  ```

- Run the following command. You should receive a response from the ALB: "Denied by ALB".

  ```shell
  curl <ALB DNS Name>:8080
  ```

Remove the stack with:

```shell
cdk destroy alb-rule-restriction
```