Last week, AWS VPC resources were defined in a Network account and shared across the accounts in the Sandbox OU.
It’s time to launch the MVP (minimum viable product). The developer has pushed the source code to a GitHub repo. As a DevOps engineer, your task is to build the system and publish the service for initial testing.
Application Overview
The application is written in Python and uses the Django web framework.
NGINX serves as a reverse proxy.
Gunicorn implements the Web Server Gateway Interface (WSGI), translating HTTP requests into calls the Python application can handle.
Postgres is the database of choice for storing authenticated user data.
Note
All these components fit on a single EC2 instance, which is sufficient for initial testing.
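To make the wiring concrete, here is a minimal sketch of the reverse-proxy piece, assuming Gunicorn is bound to localhost port 8000 (the port and server name are assumptions, not from the source):

```nginx
# NGINX terminates HTTP on port 80 and forwards requests to Gunicorn,
# which speaks WSGI to the Django application.
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8000;  # Gunicorn, assumed bound to localhost
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```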
There are a handful of tasks for the DevOps engineer:
Building a custom AMI using HashiCorp Packer and Bash scripting
Infrastructure provisioning with Terraform
Managing the CI/CD pipelines through GitHub Actions
Implementation
The diagram depicts two CI/CD workflows: one builds the AMI using Packer, and the other deploys an EC2 instance from the custom AMI. Why Packer? We aim to build immutable images and deploy them without additional configuration. Using Packer, a custom AMI is built from the parent image (Ubuntu) using a Bash script. The script installs the necessary packages and sets up a Python environment for our application to run. For every GitHub release, an AMI gets created corresponding to the release version.
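As a rough illustration, the provisioning script could look like the sketch below; the package list and paths are assumptions, not the exact script from the repo:

```bash
#!/usr/bin/env bash
# Hypothetical provisioning script that Packer runs on the parent Ubuntu image.
set -euo pipefail

sudo apt-get update -y
sudo apt-get install -y nginx postgresql python3-pip python3-venv

# Create an isolated Python environment for the Django application.
sudo mkdir -p /opt/app && sudo chown "$USER" /opt/app
python3 -m venv /opt/app/venv
/opt/app/venv/bin/pip install --upgrade pip
/opt/app/venv/bin/pip install django gunicorn psycopg2-binary
```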
The ec2.yml workflow uses a manual trigger with an input variable named “version” (we will come back to it in the deployment step below). This ensures that the application release matches the AMI version.
Take a look at the Packer template below. Two plugins are used: amazon builds the AMI on AWS, and amazon-ami-management is our post-processor plugin, which keeps only the last two releases of the AMI.
The AMI details are provided in the source block, which is referenced by the build block underneath. The build block defines a couple of provisioner blocks for file transfer and script execution. The packer build command requires a few arguments, such as vpc_id and subnet_id, which are defined as variables.
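If the template isn’t in front of you, here is a condensed sketch of that structure; the plugin versions, region, AMI naming convention, and Ubuntu filter are illustrative assumptions, not the exact template from the repo:

```hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.2.0"
    }
    amazon-ami-management = {
      source  = "github.com/wata727/amazon-ami-management"
      version = ">= 1.0.0"
    }
  }
}

variable "vpc_id"      { type = string }
variable "subnet_id"   { type = string }
variable "ami_version" { type = string }

source "amazon-ebs" "app" {
  ami_name      = "app-${var.ami_version}"
  instance_type = "t3.micro"
  region        = "us-east-1"
  vpc_id        = var.vpc_id
  subnet_id     = var.subnet_id
  ssh_username  = "ubuntu"

  # Parent image: latest official Ubuntu 22.04 AMI from Canonical.
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
      root-device-type    = "ebs"
    }
    owners      = ["099720109477"]
    most_recent = true
  }

  # The post-processor identifies managed AMIs by this tag.
  tags = {
    Amazon_AMI_Management_Identifier = "app"
  }
}

build {
  sources = ["source.amazon-ebs.app"]

  # Copy the application files onto the image, then run the setup script.
  provisioner "file" {
    source      = "app/"
    destination = "/tmp/app"
  }
  provisioner "shell" {
    script = "scripts/setup.sh"
  }

  # Keep only the two most recent releases of this AMI.
  post-processor "amazon-ami-management" {
    regions       = ["us-east-1"]
    identifier    = "app"
    keep_releases = 2
  }
}
```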
Let us move on to the image.yml workflow file. This workflow is triggered every time a new release is published. Within build_job step 6, secrets.sh gets created to store the DB credentials. Security-minded people out there, I know this is not a recommended practice; in an upcoming post, I will use AWS-managed services to store the secrets. Back to the workflow: GitHub Actions uses OIDC integration with AWS to obtain short-lived credentials. In the last step, Packer builds the image, using the value of github.ref_name as the version. This value replaces the AMI version in the Packer template.
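An abridged sketch of such a workflow is shown below; the step layout, secret and variable names, and the role ARN are assumptions, not the exact file from the repo:

```yaml
name: Build AMI

on:
  release:
    types: [published]

permissions:
  id-token: write   # required for OIDC federation with AWS
  contents: read

jobs:
  build_job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # OIDC: exchange the GitHub token for short-lived AWS credentials.
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1

      # Write the DB credentials for the image (to be replaced by an
      # AWS-managed secrets service in an upcoming post).
      - name: Create secrets.sh
        run: |
          echo "export DB_USER=${{ secrets.DB_USER }}" >> secrets.sh
          echo "export DB_PASSWORD=${{ secrets.DB_PASSWORD }}" >> secrets.sh

      - uses: hashicorp/setup-packer@main

      # github.ref_name is the release tag; it becomes the AMI version.
      - name: Packer build
        run: |
          packer init .
          packer build \
            -var "vpc_id=${{ vars.VPC_ID }}" \
            -var "subnet_id=${{ vars.SUBNET_ID }}" \
            -var "ami_version=${{ github.ref_name }}" .
```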
Next, we need to deploy an EC2 instance from this custom AMI. Let’s examine the ec2.yml workflow. The launch_ec2 job uses Terraform to deploy the instance. In the terraform apply command, an input variable carrying the AMI version is supplied, which ensures that the application release matches the AMI version.
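A minimal sketch of that workflow, under the same assumptions as above (the variable name ami_version and the role ARN are illustrative):

```yaml
name: Deploy EC2

on:
  workflow_dispatch:
    inputs:
      version:
        description: "Release version of the AMI to deploy"
        required: true

permissions:
  id-token: write
  contents: read

jobs:
  launch_ec2:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1

      - uses: hashicorp/setup-terraform@v3

      # Deploy the instance from the AMI that matches the requested release.
      - name: Terraform apply
        run: |
          terraform init
          terraform apply -auto-approve \
            -var "ami_version=${{ github.event.inputs.version }}"
```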
The Terraform code for the EC2 instance is easy on the eyes. The custom AMI needs to be looked up first via a data source, and the instance requires an ingress security group rule to allow inbound HTTP traffic.
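Under the same assumptions (resource names, CIDR range, and the app-&lt;version&gt; AMI naming convention are illustrative), the configuration might look like this:

```hcl
variable "ami_version" {
  type = string
}

# Look up the custom AMI that matches the requested release version.
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["app-${var.ami_version}"]
  }
}

# Allow inbound HTTP traffic to the instance.
resource "aws_security_group" "web" {
  name = "app-web"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "app" {
  ami                    = data.aws_ami.app.id
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]

  tags = {
    Name = "app-${var.ami_version}"
  }
}
```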