In this two-part series, we will create a fully functional AWS CodePipeline for containerized applications that runs a real-world build and deployment process. When moving containers to the cloud, there are many options and services for hosting, managing, and deploying them, and it can be a little overwhelming to start digging in. In this post, we will break down the various AWS tools and provision and configure them for use in an automated pipeline.

Overview Of AWS Tools

The pipeline we build will take the application through the full delivery process:

  1. Building the code
  2. Running unit tests
  3. Deploying to an integration environment
  4. Running integration tests
  5. Deploying to production

This series will also cover some variations to give you a better overview of the options you have when using AWS developer tools.

First, let's take a look at which AWS tools we are going to use.

  • CodeCommit - Source code repository.
  • CodeBuild - Builds our code and the Docker image that will run it.
  • ECR - Private container registry where our Docker images will be stored.
  • ECS - Container orchestration service that we will set up and deploy to.
  • Lambda - Runs integration tests and manages AWS resources to keep costs to a minimum.
  • CodePipeline - Orchestrates the other pieces to build, deploy, test, and promote the app.

Even if you are already familiar with Docker, some of the ECS jargon may not be clear to you. To summarize: the building block of an ECS deployment is the Task Definition, which defines how a container is going to be run by ECS. This includes which image to run, which ports are mapped, resource allocation, and so on. It is possible to associate multiple containers with a task definition, but typically it's a one-to-one relationship, which is what we will do here. From the pipeline's perspective, it doesn't matter either way.

A Task is a running instance of a task definition. A Service manages tasks: it can run any number of instances of a task and provides load balancing, auto-scaling, and network configuration.

AWS ECS Basic Structure

Finally, the top-level organizational unit in ECS is a cluster, which is just a group of one or more Services. You must have at least one task definition, one service, and one cluster to use ECS.
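
To make this hierarchy concrete, here is how it surfaces in the AWS CLI. This is just an illustrative sketch; the cluster, service, and task definition names are hypothetical placeholders:

# List the hierarchy from the top down (all names below are example values)
aws ecs list-clusters
aws ecs list-services --cluster integration-cluster
aws ecs list-tasks --cluster integration-cluster --service-name hello-cloud-service
aws ecs describe-task-definition --task-definition hello-cloud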

Setting Up Code Tools For ECS Deployments

Now we are ready to start implementing. If you are starting from scratch, it’s best to start by creating a CodePipeline. The wizard will have you create your CodeBuild job and other necessary pieces along the way, and all the permissions will be set up nicely.

However, many teams are not ready to jump right into a pipeline. Instead, they start by just getting their code into CodeCommit and setting up a CodeBuild job, so we will follow that common path here. The only difference is that you have to set up a couple of permissions manually.

Setting up a CodeCommit repo is straightforward. Working with it from the command line is similar to any other hosted Git repo such as GitHub or Bitbucket. You can follow the tutorial here.
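
As a rough sketch, creating the repo and cloning it from the command line looks something like this (the repo name and region are example values, and this assumes you have configured Git credentials or the CodeCommit credential helper):

# Create the repository (name and region are examples)
aws codecommit create-repository --repository-name hello-cloud --region us-east-1

# Clone it over HTTPS
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/hello-cloud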

Also note that AWS integrates with other popular SCM providers, so it is not necessary to put your code in CodeCommit.

We will also need an ECR repo to store the Docker images that will be built by CodeBuild. Creating an ECR repo is very straightforward (you just give it a name), so we won't dive into that here.
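
If you prefer the CLI, a single command does it; the repo name is the example one used throughout this post:

aws ecr create-repository --repository-name hello-cloud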

Next, create an S3 bucket for your CodeBuild job to put build artifacts in.
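
For example (bucket names are globally unique, so this exact name is just a placeholder):

aws s3 mb s3://my-codebuild-artifacts-example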

When creating your CodeBuild job, point it at your code repo. In the Environment section, use a managed image and pick the Ubuntu image, which includes Docker by default. In the Artifacts section of the wizard, select the S3 bucket you just created.

The build process is described by a buildspec.yml file. If you want to name the file something else, you can put a custom name in the Buildspec section of the CodeBuild wizard. But before we write that file, let's make sure our CodeBuild job has the necessary permissions to push Docker images to the ECR repo that was just created.

Go to the IAM role created for the CodeBuild job. Edit the policy JSON and add this block, which will allow CodeBuild to push images to your ECR repo:

{
  "Effect": "Allow",
  "Action": [
    "ecr:CompleteLayerUpload",
    "ecr:GetAuthorizationToken",
    "ecr:UploadLayerPart",
    "ecr:InitiateLayerUpload",
    "codecommit:ListRepositories",
    "ecr:BatchCheckLayerAvailability",
    "ecr:PutImage"
  ],
  "Resource": "*"
}
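
If you would rather script this step, an inline policy can be attached with the AWS CLI. This is a sketch: the role name and policy file name are hypothetical, and the policy document would be the statement above wrapped in a standard policy envelope (a Version plus a Statement array):

# Attach the statement above as an inline policy (role and file names are examples)
aws iam put-role-policy \
  --role-name codebuild-hello-cloud-service-role \
  --policy-name AllowEcrPush \
  --policy-document file://ecr-push-policy.json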

Our build process is going to have five main steps:

  1. Build the app (in this case it will run a Gradle build)
  2. Build a new Docker image
  3. Push that image to ECR
  4. Create an info file about the new image called imagedefinitions.json (more on that later)
  5. Push that file to S3.

Here is a sample buildspec.yml to do that:

version: 0.2
phases:
  install:
    commands:
      - echo not installing anything...
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - IMAGE_REPO_NAME=hello-cloud
      - REPO_URI=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - ./gradlew build
      - echo Build docker image
      # Use REPO_URI as the image name and tag it as latest
      - docker build -t $REPO_URI:latest .
      - echo tag docker image
      # Tag source should match the argument passed to -t in docker build
      - docker tag $REPO_URI:latest $REPO_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - echo "Build with $IMAGE_REPO_NAME, $REPO_URI, $IMAGE_TAG"
      - docker push $REPO_URI:$IMAGE_TAG
      - printf '[{"name":"%s","imageUri":"%s"}]' $IMAGE_REPO_NAME $REPO_URI:$IMAGE_TAG > imagedefinitions.json
      - cat imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json

Commit that file to your repo, and that should trigger a build job that pushes a shiny new Docker image into your ECR repo.
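
For reference, the imagedefinitions.json produced by the printf step is a tiny JSON array. With the values in this buildspec it would look something like this (the account ID, region, and tag shown are placeholders):

[{"name":"hello-cloud","imageUri":"123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-cloud:ab12c3d"}]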

Setting Up An ECS Environment

Now we are ready to run that image in ECS. Let's start by defining our task. Remember, a task in ECS is essentially just a running container, and the task definition is the configuration for that task.

Go to the Task Definitions screen, click to create a new definition, and follow the wizard.

  • Choose a Fargate task. That way there are no EC2 instances for us to manage.
  • The task IAM role is for the running task itself. It's only needed if your task will access other AWS resources, so for this example you can leave it blank.
  • The task execution role is what will actually pull down the Docker image from ECR and start your task. The UI can create a role if you don't have one already, and the default permissions it grants are sufficient for this example.
  • For task resources, you can keep everything minimal if you are just experimenting.
  • Define a container for this task (a JSON equivalent of the finished task definition is sketched after this list):
    • To keep CodePipeline happy, name the container the same as the task definition.
    • Copy the image URI from ECR.
    • This example won't have anything proxying the servlet to port 80, so map port 8080.
    • You may want to add a custom health check, e.g. CMD-SHELL,curl localhost:8080/hello
    • You can add multiple containers to a task if they are very tightly coupled, but typically you have just one container per task.
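
For reference, here is roughly what the finished task definition looks like as JSON, which could also be registered with aws ecs register-task-definition --cli-input-json file://taskdef.json. This is a sketch, not the exact wizard output: the account ID, region, execution role ARN, and CPU/memory sizes are placeholder assumptions:

{
  "family": "hello-cloud",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "hello-cloud",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-cloud:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl localhost:8080/hello"],
        "interval": 30,
        "timeout": 5,
        "retries": 3
      }
    }
  ]
}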

Next, create a cluster. Basically, all you have to do is name it. This cluster is going to be our integration test environment. Then create a service for the task we just defined. A service runs instances of our task and provides all the related configuration, such as load balancing. To keep it simple, we will not configure load balancing or auto-scaling.
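
The CLI equivalent looks roughly like this; the cluster and service names match the earlier examples, and the subnet and security group IDs are hypothetical placeholders you would replace with values from your own VPC:

# Create the cluster (the name is an example)
aws ecs create-cluster --cluster-name integration-cluster

# Run one copy of the task as a Fargate service
aws ecs create-service \
  --cluster integration-cluster \
  --service-name hello-cloud-service \
  --task-definition hello-cloud \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"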

Setting the number of tasks to 1 will cause the service to spin up our task as soon as the service is created. To shut the service down, click the button to update it and set the number of tasks to zero; it should immediately begin shutting down the task.
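
Scaling down can also be scripted, which is handy for keeping costs to a minimum (the names are the same hypothetical ones used above):

# Stop the running task without deleting the service
aws ecs update-service --cluster integration-cluster --service hello-cloud-service --desired-count 0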

We now have our build tools and infrastructure ready. In the next post, we will create a CodePipeline to coordinate and automate these pieces.