Hello Service is a JSON micro-service written in Go with a simple purpose: to report how many times the server has said hello to a name. (Refer to the API section for more details.)
Hello Service makes use of various components and tools in order to simplify the process of management, maintenance, and deployment of the service.
The following major AWS components are used:
- EC2 Container Service (ECS) - ECS is used to manage a cluster of EC2 instances, which run the service. It facilitates the deployment of tasks to these instances in a reliable manner. An Elastic Load Balancer (ELB) is attached to distribute traffic across the EC2 instances.
- EC2 Container Registry (ECR) - ECR is used to store the Docker images of the service. These images are referenced by ECS via task definitions and deployed onto EC2 instances.
- ElastiCache - An ElastiCache node running Redis is used for quick storage and retrieval of a Sorted Set.
Others:
- Terraform - Terraform allows for complete deployment of Hello Service infrastructure via scripts. It is used to easily manage the state of infrastructure and modify it reliably and quickly.
- Docker - A Docker image based on gliderlabs/alpine is used, allowing for consistency across development environments and painless deployments.
- AWS Command Line Interface (AWS CLI) - AWS CLI commands are used to reliably deploy new revisions of the service.
- Redis - Redis is used for data storage in the local development environment. This offers parity with the ElastiCache node running Redis.
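For orientation, the sketch below shows the general shape of a JSON endpoint built with Go's standard net/http package. It is illustrative only and is not taken from the Hello Service source; the handler, type name, and response field are assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// helloResponse is a hypothetical response shape, not the service's actual type.
type helloResponse struct {
	Message string `json:"message"`
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// Echo a hello message back as JSON.
		json.NewEncoder(w).Encode(helloResponse{
			Message: fmt.Sprintf("Hello, %q", r.URL.Query().Get("name")),
		})
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}
```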
Hello Service can be run locally on your machine via Docker - this greatly speeds up the process of testing new changes.
Note: Hello Service has currently only been tested on OSX 10.11.5
The following steps will run the service locally:
- Navigate into the directory of the project source.
- Run `make`. This will statically compile the service and subsequently build a Docker image. Run this whenever changes are made to the source code of the Go project.
- Run `redis-server`. This will start the local Redis server. It may be desirable to run this in a separate terminal tab.
- Run `docker run --publish 80:80 --env-file ./.env nytimes/hello`. This will run the service on localhost port 80. The environment variables outlined in `.env` are loaded into the image. It may be necessary to confirm that the `REDIS_CLUSTER_ADDRESS` is accurate (a sketch of how the service might read this variable follows this list).
- Run `docker ps` to list the Docker containers currently running.
- Use `docker kill CONTAINER_ID` or `docker stop CONTAINER_ID` to stop the service.
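As a minimal sketch of the configuration described above, the snippet below reads `REDIS_CLUSTER_ADDRESS` from the environment and opens a Redis connection. It assumes the redigo client, that the address includes a port, and a fallback of `localhost:6379`; the actual service code may differ.

```go
package main

import (
	"log"
	"os"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// REDIS_CLUSTER_ADDRESS is supplied via the .env file when running in Docker.
	addr := os.Getenv("REDIS_CLUSTER_ADDRESS")
	if addr == "" {
		// Fallback for a locally running redis-server (assumed default port).
		addr = "localhost:6379"
	}

	conn, err := redis.Dial("tcp", addr)
	if err != nil {
		log.Fatalf("could not connect to Redis at %s: %v", addr, err)
	}
	defer conn.Close()

	// Verify the connection with a PING.
	if _, err := conn.Do("PING"); err != nil {
		log.Fatalf("Redis PING failed: %v", err)
	}
	log.Printf("connected to Redis at %s", addr)
}
```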
Note: Infrastructure Deployment has currently only been tested on OSX 10.11.5
The following steps will deploy the infrastructure:
- Navigate into the `deployment` directory of the project source.
- Run `terraform get`. This will install any Terraform modules used.
- Run `terraform plan`. Follow the prompts. This will provide an overview of the infrastructure that will be deployed. Ensure that there are no errors before proceeding.
- Run `terraform apply`. Follow the prompts. This command will instantiate the infrastructure required to run Hello Service on AWS, and may take a few moments.
- Run `terraform show`. This provides an overview of the deployed infrastructure.
- Search for `aws_elasticache_cluster`. Copy the value of `cache_nodes.0.address` to your clipboard.
- In the project source directory, create a file called `task_revision.json`, pasting in the contents of `task_revision_example.json`. Paste the value on your clipboard into `REDIS_CLUSTER_ADDRESS`.
- Use `terraform show` to obtain the `repository_url` from `aws_ecr_repository.hello-repository`. Update the appropriate value in the `task_revision.json` file. Do not include the leading `https://`. Setting up the `task_revision.json` file is necessary for deploying service updates in the future.
- The service will live at the address of the Load Balancer. Use `terraform show`, find `aws_elb.hello_service_elb`, and take the `dns_name`. Please note that a Release Deployment must be run before the service actually runs on the infrastructure, as the Docker image must be pushed to the repository and the service updated with that image.
Note: The `terraform.tfstate` and `terraform.tfstate.backup` files that are generated should be kept (and likely committed to version control). These are necessary to iterate on the infrastructure of the service.
- Make changes as desired to `deployment/infrastructure.tf`.
- Run `terraform plan`. This provides an overview of the resources that will be changed, added, and destroyed.
- Run `terraform apply` to update the infrastructure. Resources removed from the `infrastructure.tf` file will be destroyed on AWS.
- Run `terraform plan -destroy`. This provides an overview of the resources that will be destroyed.
- Run `terraform destroy` to destroy the infrastructure.
Note: Release Deployment has currently only been tested on OSX 10.11.5
The following are required:
- AWS Command Line Interface (AWS CLI) - AWS CLI commands are used to reliably deploy new revisions of the service.
- Go
- Docker
- Navigate into the directory of the project source. Ensure that the `task_revision.json` file is set up, as described in the Deploying the Infrastructure section.
- Run `make`. This will statically compile the service and subsequently build a Docker image.
- Run `docker tag nytimes/hello:latest YOUR_REPOSITORY_URL_HERE:latest`. Be sure to enter the URL of the ECR repository.
- Run `docker push YOUR_REPOSITORY_URL_HERE:latest`. If a response is received requesting a login, paste the commands provided, then attempt this step again.
- Run `aws ecs register-task-definition --cli-input-json file://task_revision.json`. This will create a new task revision, which is necessary to update the service.
- Run `aws ecs update-service --cluster ecs-hello --service hello-service --task-definition hello_service:REVISION_NUMBER`. ECS will run a blue-green deployment of the updates. This may take a while. Be sure to include the revision number from the output of the register-task-definition step in this command.
Outputs a hello message including the provided name, and updates a sorted set in Redis that records how many times each name has been provided to this endpoint.
Hello, "{name}"
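The sorted set behaviour described above could be implemented roughly as follows. This is a sketch only, assuming the redigo client and a hypothetical key name `hello_counts`; the key and client actually used by the service are not documented here.

```go
package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis"
)

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Increment the score for a name; ZINCRBY creates the member if it is missing.
	if _, err := conn.Do("ZINCRBY", "hello_counts", 1, "Bob"); err != nil {
		log.Fatal(err)
	}

	// Read back every name with its count (score).
	pairs, err := redis.Strings(conn.Do("ZRANGE", "hello_counts", 0, -1, "WITHSCORES"))
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; i+1 < len(pairs); i += 2 {
		fmt.Printf("name=%s count=%s\n", pairs[i], pairs[i+1])
	}
}
```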
Outputs a JSON structure containing the names that have been provided to the hello endpoint coupled with a count of how many times they were provided.
{
"counts": [
{
"name": "Bob",
"count": "1"
},
{
"name": "Bill",
"count": "3"
}
]
}
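For reference, Go types along the following lines would marshal to the structure shown above. The type names are assumptions; only the JSON field names come from the example response.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NameCount mirrors one entry in the "counts" array. The count is a string
// in the example output, so it is modelled as a string here.
type NameCount struct {
	Name  string `json:"name"`
	Count string `json:"count"`
}

// CountsResponse mirrors the top-level object returned by this endpoint.
type CountsResponse struct {
	Counts []NameCount `json:"counts"`
}

func main() {
	resp := CountsResponse{Counts: []NameCount{{Name: "Bob", Count: "1"}, {Name: "Bill", Count: "3"}}}
	out, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Println(string(out))
}
```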
Outputs health and resource information about the server.
On AWS:
{
"ec2_instance_id": "i-4f066f57",
"uptime": 65473,
"cpu_utilization_percent": 0.27196008620621565,
"disk_utilization_percent": 43.82975528085612,
"total_ram_bytes_used": 95997952,
"total_ram_bytes_available": 947920896
}
Local Machine:
{
"ec2_instance_id": "local",
"uptime": 65473,
"cpu_utilization_percent": 0.27196008620621565,
"disk_utilization_percent": 43.82975528085612,
"total_ram_bytes_used": 95997952,
"total_ram_bytes_available": 947920896
}
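The fields above suggest per-instance system metrics. A sketch of how such values might be collected is shown below; it assumes the gopsutil library and the EC2 instance metadata endpoint, neither of which is confirmed to be what Hello Service actually uses.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"time"

	"github.com/shirou/gopsutil/cpu"
	"github.com/shirou/gopsutil/disk"
	"github.com/shirou/gopsutil/host"
	"github.com/shirou/gopsutil/mem"
)

// instanceID asks the EC2 instance metadata service for the instance ID.
// Off AWS the request fails quickly and "local" is returned, matching the
// local-machine example output above.
func instanceID() string {
	client := &http.Client{Timeout: 500 * time.Millisecond}
	resp, err := client.Get("http://169.254.169.254/latest/meta-data/instance-id")
	if err != nil {
		return "local"
	}
	defer resp.Body.Close()
	id, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "local"
	}
	return string(id)
}

func main() {
	uptime, err := host.Uptime()
	if err != nil {
		log.Fatal(err)
	}
	cpuPct, err := cpu.Percent(time.Second, false) // aggregate utilization over one second
	if err != nil || len(cpuPct) == 0 {
		log.Fatal("could not read CPU utilization")
	}
	diskUsage, err := disk.Usage("/")
	if err != nil {
		log.Fatal(err)
	}
	vm, err := mem.VirtualMemory()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("ec2_instance_id=%s uptime=%d cpu=%.2f disk=%.2f ram_used=%d ram_available=%d\n",
		instanceID(), uptime, cpuPct[0], diskUsage.UsedPercent, vm.Used, vm.Available)
}
```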
Outputs the health of all cluster instances that are currently active on the load balancer.
On AWS:
{
"node_healths": [
{
"ec2_instance_id": "i-0236ca5d",
"uptime": 65706,
"cpu_utilization_percent": 0.33148680902462224,
"disk_utilization_percent": 43.642749476192,
"total_ram_bytes_used": 93929472,
"total_ram_bytes_available": 949989376
},
{
"ec2_instance_id": "i-4f066f57",
"uptime": 65704,
"cpu_utilization_percent": 0.16437408080283544,
"disk_utilization_percent": 43.82975528085612,
"total_ram_bytes_used": 96153600,
"total_ram_bytes_available": 947765248
}
]
}
Local Machine:
{
"node_healths": [
{
"ec2_instance_id": "local",
"uptime": 65706,
"cpu_utilization_percent": 0.33148680902462224,
"disk_utilization_percent": 43.642749476192,
"total_ram_bytes_used": 93929472,
"total_ram_bytes_available": 949989376
}
]
}
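One plausible way the cluster-wide response above could be assembled is to ask the load balancer which instances are registered and then gather each instance's health. The sketch below assumes the aws-sdk-go ELB client and a hypothetical load balancer name `hello-service-elb`; the discovery mechanism actually used by the service is not documented here.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := elb.New(sess)

	// List the instances currently registered with the (assumed) load balancer.
	out, err := svc.DescribeInstanceHealth(&elb.DescribeInstanceHealthInput{
		LoadBalancerName: aws.String("hello-service-elb"), // hypothetical name
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, state := range out.InstanceStates {
		// Each in-service instance would then be asked for its own health data.
		fmt.Printf("instance=%s state=%s\n",
			aws.StringValue(state.InstanceId), aws.StringValue(state.State))
	}
}
```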
Improvements to consider for future iterations of Hello Service:
- Aggregation of deployment commands into a simple script. This could be a shell script, or any other scripting language using the AWS SDK (see the sketch after this list).
- Log centralization. It is currently necessary to SSH onto each instance individually to view the logs. Consider using the ECS Log Collector.
- Redis connection management. If the connection to Redis is lost, this situation is not currently handled gracefully.
- An alert system for critical errors. Consider using something like PagerDuty.
- Graceful JSON error responses.
- Output the ECR repository URL as well as the Redis address from `infrastructure.tf`.
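As a starting point for the scripting idea above, the sketch below updates the ECS service to a given task definition revision using aws-sdk-go, mirroring the `aws ecs update-service` command from the Release Deployment steps. The cluster, service, and task definition family names come from that command; everything else is an assumption.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: deploy REVISION_NUMBER")
	}
	revision := os.Args[1]

	sess := session.Must(session.NewSession())
	svc := ecs.New(sess)

	// Point the service at the new task definition revision; ECS then performs
	// the same blue-green rollout as the manual aws ecs update-service command.
	out, err := svc.UpdateService(&ecs.UpdateServiceInput{
		Cluster:        aws.String("ecs-hello"),
		Service:        aws.String("hello-service"),
		TaskDefinition: aws.String(fmt.Sprintf("hello_service:%s", revision)),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("service %s is now %s\n",
		aws.StringValue(out.Service.ServiceName), aws.StringValue(out.Service.Status))
}
```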