This project develops a Continuous Integration and Continuous Delivery (CI/CD) pipeline using only AWS services such as CodeCommit, Elastic Beanstalk and RDS.
Continuous integration (CI) is a DevOps software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of continuous integration are to find and address bugs more quickly, improve software quality, and reduce the time it takes to validate and release new software updates.
Continuous delivery (CD) is a software development practice where code changes are automatically prepared for a release to production. Continuous delivery expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When properly implemented, developers will always have a deployment-ready build artifact that has passed through a standardized test process.
This project focuses on implementing a full CI/CD pipeline using AWS services for the vprofile-project. It serves as an alternative way of developing and deploying a pipeline without using external tools like Jenkins and GitHub. Instead, we use AWS CodeCommit to store the source code, AWS CodeBuild to build the code, and AWS CodePipeline to execute the pipeline.
- AWS CodeCommit
- AWS CodeBuild
- Elastic Beanstalk
- Amazon Relational Database Service (RDS)
- Amazon ElastiCache
- Amazon MQ
- S3
- EC2
The following steps take place when a commit is made:
- Whenever a commit is made, the pipeline is triggered automatically.
- The code is pulled from the CodeCommit repository and handed to the CodeBuild service.
- The CodeBuild service then builds the artifact from the source code and uploads the artifact to an S3 bucket.
- Elastic Beanstalk (EB) fetches the artifact from the S3 bucket and hosts the application.
- The EB security group is connected with the backend security group to allow traffic to the backend services.
- The backend security group includes services like:
  - Amazon MQ
  - Amazon Relational Database Service (RDS)
  - Amazon ElastiCache
- Create an EC2 key pair to be used to launch EC2 instances.
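If you prefer the AWS CLI, the key pair can be created like this (the key name `vprofile-key` is a placeholder, not from the original setup):

```shell
# Create an EC2 key pair and save the private key locally.
# "vprofile-key" is a placeholder name; pick your own.
aws ec2 create-key-pair \
    --key-name vprofile-key \
    --query 'KeyMaterial' \
    --output text > vprofile-key.pem

# Restrict permissions so ssh accepts the key file.
chmod 400 vprofile-key.pem
```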
- Create an IAM user to be used with Elastic Beanstalk with the following policies:
- AWSElasticBeanstalkRoleSNS
- AdministratorAccess-AWSElasticBeanstalk
- AWSElasticBeanstalkCustomPlatformforEC2Role
- AWSElasticBeanstalkWebTier
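As a sketch, the same user and policy attachments could be done from the AWS CLI. The user name is a placeholder, and the ARN paths should be verified with `aws iam list-policies`, since some managed policies live under the `service-role/` path:

```shell
# Hypothetical user name; adjust to taste.
aws iam create-user --user-name vprofile-eb-user

# Attach the four Elastic Beanstalk policies listed above.
# Note: verify each ARN path (with or without "service-role/")
# with `aws iam list-policies` before running.
for policy_arn in \
    arn:aws:iam::aws:policy/AdministratorAccess-AWSElasticBeanstalk \
    arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier \
    arn:aws:iam::aws:policy/AWSElasticBeanstalkCustomPlatformforEC2Role \
    arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkRoleSNS; do
  aws iam attach-user-policy \
      --user-name vprofile-eb-user \
      --policy-arn "$policy_arn"
done
```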
- Create an RDS instance:
  - Search for Amazon RDS on the console.
  - Go to RDS and create an instance.
  - Choose the required version for the project (MySQL 5.7 is required for my project).
  - Under Additional configuration -> Database options, provide the database name you want to create; otherwise no database will be created.
  - Store the database credentials by clicking on the View connection details button after the instance is created.
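A rough CLI equivalent of the RDS steps might look like this. All identifiers, the `accounts` database name, and the exact 5.7.x engine version are assumptions; check the versions available in your region:

```shell
# Placeholder identifiers and credentials; replace before running.
# The engine version must be a 5.7.x release still offered by RDS.
aws rds create-db-instance \
    --db-instance-identifier vprofile-db \
    --engine mysql \
    --engine-version 5.7.44 \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --db-name accounts \
    --master-username admin \
    --master-user-password 'REPLACE_ME'

# Retrieve the endpoint once the instance is available.
aws rds describe-db-instances \
    --db-instance-identifier vprofile-db \
    --query 'DBInstances[0].Endpoint.Address' --output text
```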
- Create and configure the message broker (RabbitMQ) using Amazon MQ.
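A hedged sketch of creating the broker from the CLI. The broker name, instance type, engine version and credentials are all placeholders, and the exact set of required flags may vary by CLI version; RabbitMQ passwords must be at least 12 characters:

```shell
# All names, versions and credentials here are placeholders.
aws mq create-broker \
    --broker-name vprofile-rmq \
    --engine-type RABBITMQ \
    --engine-version 3.13 \
    --deployment-mode SINGLE_INSTANCE \
    --host-instance-type mq.t3.micro \
    --auto-minor-version-upgrade \
    --no-publicly-accessible \
    --users Username=rabbit,Password='REPLACE_ME_12CHARS'
```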
- Configure and create the cache service (Memcached cluster):
  - Search for ElastiCache on the console and go to the ElastiCache console.
  - Create a parameter group.
  - Create a subnet group.
  - Create the Memcached cluster.
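The same three ElastiCache steps, sketched with the AWS CLI (the group names and subnet ID are placeholders):

```shell
# Placeholder names and subnet ID throughout.
aws elasticache create-cache-parameter-group \
    --cache-parameter-group-name vprofile-memcached-pg \
    --cache-parameter-group-family memcached1.6 \
    --description "vprofile memcached parameters"

aws elasticache create-cache-subnet-group \
    --cache-subnet-group-name vprofile-memcached-sg \
    --cache-subnet-group-description "vprofile memcached subnets" \
    --subnet-ids subnet-0123456789abcdef0

aws elasticache create-cache-cluster \
    --cache-cluster-id vprofile-cache \
    --engine memcached \
    --cache-node-type cache.t3.micro \
    --num-cache-nodes 1 \
    --cache-parameter-group-name vprofile-memcached-pg \
    --cache-subnet-group-name vprofile-memcached-sg
```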
- Search for Elastic Beanstalk on the console.
- Go to Elastic Beanstalk and click on Create application.
- Configure the environment.
- Configure service access.
- Set up networking, database and tags:
  - Leave them at their defaults.
- Configure instance traffic and scaling.
- Configure updates, monitoring and rolling updates.
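For reference, the application and environment can also be created from the CLI. The names and the solution stack string below are placeholders; pick a real stack from `list-available-solution-stacks`:

```shell
# Placeholder application/environment names.
aws elasticbeanstalk create-application --application-name vprofile-app

# List Tomcat stacks and pick one that exists in your region.
aws elasticbeanstalk list-available-solution-stacks \
    --query 'SolutionStacks[?contains(@, `Tomcat`)]'

# The solution stack name here is only an example format.
aws elasticbeanstalk create-environment \
    --application-name vprofile-app \
    --environment-name vprofile-env \
    --solution-stack-name "64bit Amazon Linux 2 v4.7.0 running Tomcat 9 Corretto 11"
```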
CREATE A REPOSITORY:
- Give the repository a name and description, then create it.
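A CLI sketch of the same step (the repository name is a placeholder):

```shell
# Placeholder repository name and description.
aws codecommit create-repository \
    --repository-name vprofile-code-repo \
    --repository-description "vprofile source repository"

# The cloneUrlSsh field is what you later add as the git remote.
aws codecommit get-repository \
    --repository-name vprofile-code-repo \
    --query 'repositoryMetadata.cloneUrlSsh' --output text
```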
-
CONFIGURE USER POLICIES:
- IAM -> create user (programmatic access) -> create policy.
- Select the policy from existing policies -> create the user -> delete the access keys and secret keys -> upload SSH keys for CodeCommit.
- Generate keys on the local machine: cd into the C:\Users\cr7su\.ssh folder, name the key file, and create it using the following command:

ssh-keygen
- Upload the public key contents to the console under Upload SSH keys for CodeCommit.
- Create a config file in the .ssh folder with the following contents, where Host is the CodeCommit SSH endpoint, User is the SSH key ID shown in IAM, and IdentityFile is the path to the private RSA key:

Host git-codecommit.*.amazonaws.com
  User SSH-KEY-ID
  IdentityFile ~/.ssh/your-private-key

- Edit the permissions of the config file:

chmod 600 config

- Run the following command to authenticate over SSH, using the repository's SSH endpoint for your region:

ssh git-codecommit.us-east-1.amazonaws.com
-
COMMIT DATA FROM GITHUB TO CODECOMMIT:
- Copy the config file, public-key file and private-key file into the git repository.
- Execute the following commands to check out all the branches from the GitHub repository:

git checkout master
git branch -a | grep -v HEAD | cut -d '/' -f3 | grep -v master > branches

(in Linux)

for i in $(cat branches); do git checkout $i; done

(in Windows PowerShell)

$branches = Get-Content branches
foreach ($branch in $branches) { git checkout $branch }
- Fetch all tags
git fetch --tags
- Remove remote repository.
git remote rm origin
- Add the CodeCommit repository as the new remote repository.
git remote add origin "codecommit repo ssh url"
- Push all the checked-out branches from the GitHub repository to the CodeCommit repository.
git push origin --all
- Push any tags present.
git push --tags
- CODEBUILD:
- Create the project:
  - Select the repository, and vp-rem as the branch.
  - For the managed image, go for Ubuntu.
  - Create a new service role.
  - Select 3 GB memory and 2 vCPUs.
  - Insert the build commands by opening the editor and pasting the contents of the buildspec.yaml file.
  - Create an S3 bucket and select that bucket for artifact storage.
  - Configure CloudWatch logs.
- Edit the codebuild-vprofile-Build-service-role in IAM and add the AmazonS3FullAccess permission to it.
- Run the build.
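The build can also be started and watched from the CLI (the project name is a placeholder):

```shell
# Placeholder project name; use the name you gave in CodeBuild.
BUILD_ID=$(aws codebuild start-build \
    --project-name vprofile-build-project \
    --query 'build.id' --output text)

# Poll the build status (SUCCEEDED / IN_PROGRESS / FAILED).
aws codebuild batch-get-builds --ids "$BUILD_ID" \
    --query 'builds[0].buildStatus' --output text
```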
- CONFIGURATION -> CONFIGURE INSTANCE TRAFFIC AND SCALING:
- Go to processes and edit the existing process:
  - Under the Health check section, change the path to /login.
  - Under the Sessions section, check the Stickiness enabled option.
  - Save and apply the changes, then wait for the changes to complete successfully.
- CODEPIPELINE:
- Name the pipeline.
- Select CodeCommit as the source provider, then select your repository and branch.
- Select CodeBuild as your build provider and specify your existing project.
- Select Elastic Beanstalk as the deploy provider, then select your app and environment.
- Review and create the pipeline.
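Once the pipeline exists, its state can be inspected and re-run from the CLI (the pipeline name is a placeholder):

```shell
# Placeholder pipeline name.
aws codepipeline get-pipeline-state \
    --name vprofile-ci-pipeline \
    --query 'stageStates[].{stage:stageName,status:latestExecution.status}'

# A new commit triggers the pipeline automatically; this command
# re-triggers it manually.
aws codepipeline start-pipeline-execution --name vprofile-ci-pipeline
```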
As soon as the pipeline is created, it is triggered and the first execution takes place.
Once the artifact was deployed, I validated it by visiting the following pages:
Here we can see that the backend services RDS, ElastiCache and RabbitMQ are also configured correctly.
As documented in this markdown file, I have invested a significant amount of time in researching, learning and debugging to implement this project. If you found this document useful, please share it with friends and give the project a try.