AWS Lambda function that processes incoming data from a Kinesis stream, saving the processed output into a DynamoDB table

PhilRanzato/lambda-aws

lambda-aws

Terraform Deploy

This repository is the starting point for an Infrastructure as Code (IaC) pipeline.

AWS Lambda Function

The repository contains an AWS Lambda function, written in Python 3, that processes AWS Kinesis Data Stream logs and writes the processed results into an AWS DynamoDB table.

Example of input before processing:

```json
{
    "id": 33,
    "message": "This function will count the number of occurrences of the word error."
}
```

Example of output after processing:

```json
{
    "id": 33,
    "message": "This function will count the number of occurrences of the word error.",
    "errors": 1
}
```
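The counting step can be sketched roughly as follows. This is an illustrative sketch, not the repository's actual `lambda.py`: the function names are assumptions, and the DynamoDB write (which the real function performs) is only indicated in a comment.

```python
import base64
import json


def process_record(payload: dict) -> dict:
    """Add an 'errors' field counting occurrences of the word 'error'."""
    enriched = dict(payload)
    enriched["errors"] = payload.get("message", "").lower().count("error")
    return enriched


def handler(event, context=None):
    """Decode base64-encoded Kinesis records and return the processed items.

    The real Lambda would persist each item, e.g. with
    boto3.resource("dynamodb").Table("application-logs-table").put_item(...).
    """
    items = []
    for record in event.get("Records", []):
        raw = base64.b64decode(record["kinesis"]["data"])
        items.append(process_record(json.loads(raw)))
    return items
```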

Continuous Integration and Continuous Deployment

Each time a new Pull Request is created (and on every push to master), the IaC pipeline (implemented here as a GitHub Actions workflow) starts and deploys the whole infrastructure using Terraform:

  • An AWS Kinesis Data Stream named application-logs-stream
  • An AWS DynamoDB table named application-logs-table
  • An AWS Lambda function named kinesis-to-dynamodb

The pipeline takes care of:

  • Job Lint: linting the Lambda Python 3 code.
  • Job Terraform-deploy: zipping lambda.py into function.zip, which is moved to the Terraform repository and uploaded as the AWS Lambda source file. The resulting file is also pushed to a new branch created from HEAD (for Pull Requests, this branch is later deleted).
  • Initializing the Terraform backend on an AWS S3 bucket.
  • Applying the Terraform resources in two steps:
    1. First, the AWS Kinesis Data Stream and the AWS DynamoDB table.
    2. Then, the Lambda function.
  • Job Kinesis-test: testing the whole workflow by installing and configuring the AWS Kinesis Agent on the GitHub runner, using repository secrets scoped to a repository environment. It then runs a test Python application that produces 3 log lines, which the AWS Kinesis Agent forwards to the AWS Kinesis Data Stream.
  • Job Terraform-destroy: finally, the pipeline cleans up the infrastructure by destroying all Terraform-generated resources and, in the case of a Pull Request, removes the branch containing function.zip, since it will be recreated once the PR is merged.
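The test producer used in Job Kinesis-test might look something like this minimal sketch. The log file path and the `produce_logs` name are assumptions; the AWS Kinesis Agent would be configured separately to tail the file and forward each new line to application-logs-stream.

```python
import json
from pathlib import Path


def produce_logs(log_file: Path, n: int = 3) -> int:
    """Append n JSON log lines to the file watched by the Kinesis Agent."""
    with log_file.open("a") as fh:
        for i in range(n):
            record = {"id": i, "message": f"test line {i}: one error here"}
            fh.write(json.dumps(record) + "\n")
    return n


if __name__ == "__main__":
    # Hypothetical path: the agent tails this file and ships new lines
    # to the Kinesis Data Stream.
    produce_logs(Path("/tmp/application.log"))
```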

Terraform destroy should not run when the pipeline refers to master/release, but for convenience the infrastructure is currently destroyed on every run.

Additionally, a Jenkinsfile example for deploying the workflow is provided in the repository.

To learn more about the Terraform code, visit the repository referenced below.

References
