Defining the CDK stack constructs – Running Containers in AWS

Mike Naughton | May 14th, 2023


We will define the previously discussed infrastructure components inside a CDK stack. The complete CDK project is available in your Cloud9 IDE, in the chapter-7/chapter-7-cdk/ directory. We will not repeat the CDK project initialization steps, as these were covered in the previous chapter, A Programmatic Approach to IaC with AWS CDK. As usual, all the CDK construct definitions can be found inside the lib/ folder of the project directory – in this case, chapter-7/chapter-7-cdk/. Let’s dive into each infrastructure component and understand what is needed to get the application up and running.

Networking foundations and the ECS cluster

A common starting point for all infrastructure stack definitions is to create a VPC, subnets, and their corresponding route tables. You will also need to attach an internet gateway to the VPC and place a NAT gateway in each of the public subnets so that resources in the private subnets can reach the internet. With the network in place, we create the ECS cluster inside this VPC:
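What follows is a minimal sketch of how these constructs can be expressed with aws-cdk-lib in TypeScript. The construct IDs (AppVpc, AppCluster) and the sizing values are illustrative, not necessarily those used in the chapter's project code:

import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { Construct } from 'constructs';

export class Chapter7Stack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // ec2.Vpc provisions public and private subnets across AZs, their
    // route tables, an internet gateway, and NAT gateways by default.
    const vpc = new ec2.Vpc(this, 'AppVpc', {
      maxAzs: 2,
      natGateways: 2, // one NAT gateway per public subnet
    });

    // The ECS cluster lives inside this VPC.
    const cluster = new ecs.Cluster(this, 'AppCluster', { vpc });
  }
}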

These simple constructs abstract a great deal of complexity from the user. If you run cdk synth at this point, you will appreciate the heavy lifting that goes on behind the scenes to generate the CloudFormation template for this stack.

Now that the cluster is ready, it’s time to define the task definition that will host both of our containers.

Adding a task definition for our ECS cluster

As part of this configuration, we will also create the IAM policies that can be attached to the task role and the execution role. These permissions grant the roles access to the relevant AWS services:
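Here is a sketch of how the roles and the task definition might be wired together, assuming a Fargate launch type. The managed policies referenced are genuine AWS-managed policies, but the construct IDs and CPU/memory sizing are illustrative:

import * as iam from 'aws-cdk-lib/aws-iam';

// Task role: assumed by the application code running inside the containers.
const taskRole = new iam.Role(this, 'TaskRole', {
  assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com'),
});

// Execution role: used by the ECS agent to pull images and write logs.
const executionRole = new iam.Role(this, 'ExecutionRole', {
  assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com'),
});
executionRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName(
    'service-role/AmazonECSTaskExecutionRolePolicy'));

// ECR read access, in case you pull the image from ECR instead of DockerHub.
executionRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName(
    'AmazonEC2ContainerRegistryReadOnly'));

const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef', {
  cpu: 512,
  memoryLimitMiB: 1024,
  taskRole,
  executionRole,
});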

You might notice that we have also added IAM permissions for the ECR service. This supports pulling Docker images from an ECR repository, in case you decide not to use the existing image from DockerHub.

Task definitions help us configure the parameters, resources, and mount points that will eventually be used by containers provisioned as part of this task – the application and database container in our case. Now, let’s proceed with defining the containers and their respective settings.

Configuring the application and database containers

Think of all the configuration an application needs to run inside a container and respond to traffic originating from the internet. The database, on the other hand, should not be exposed to the outside world; it will only be reachable by the application over localhost. We will start by allocating CPU and memory to both containers from the task's overall resource pool. By default, the Flask application listens on port 5000, and this is the port we want to expose to the outside world:
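Continuing inside the same stack constructor, the two container definitions might look like the following sketch. The Flask image name is a placeholder for your own DockerHub image, and the per-container resource shares are illustrative:

// Application container: our Flask image from DockerHub (placeholder name).
const appContainer = taskDefinition.addContainer('AppContainer', {
  image: ecs.ContainerImage.fromRegistry('<your-dockerhub-user>/flask-app'),
  cpu: 256,
  memoryLimitMiB: 512,
  logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'app' }),
});

// Expose the Flask default port to the outside world.
appContainer.addPortMappings({ containerPort: 5000 });

// Database container: the official MongoDB image, not exposed externally.
const dbContainer = taskDefinition.addContainer('DbContainer', {
  image: ecs.ContainerImage.fromRegistry('mongo'),
  cpu: 256,
  memoryLimitMiB: 512,
  logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'db' }),
});

// Mount an EFS-backed volume at MongoDB's data directory to persist writes.
dbContainer.addMountPoints({
  containerPath: '/data/db',
  sourceVolume: 'db-data',
  readOnly: false,
});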

As you can see, we are using separate image identifiers for the two containers. One points to our Flask application image, while the other points to the official MongoDB image; both are hosted on DockerHub. You might also remember that we discussed mounting an EFS filesystem into the MongoDB container to persist all changes to the underlying data. This can be achieved with dbContainer.addMountPoints, as seen in the code. We also added a port mapping for the application container using the appContainer.addPortMappings function.

At this point, however, ECS will complain that the mount point is not recognized. This is understandable for two reasons: we haven't provisioned an EFS filesystem yet, and we haven't registered it as a volume on the task definition before referencing it in the container definition. So, let's ensure that these prerequisites are covered.
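As a preview, those two steps can be sketched as follows, assuming the db-data volume name used in the mount point above; the construct ID and removal policy are illustrative choices:

import * as efs from 'aws-cdk-lib/aws-efs';

// Provision the EFS filesystem inside the same VPC as the cluster.
const fileSystem = new efs.FileSystem(this, 'DbFileSystem', {
  vpc,
  removalPolicy: cdk.RemovalPolicy.DESTROY, // acceptable for a demo stack
});

// Register the filesystem as a named volume on the task definition,
// matching the sourceVolume referenced by dbContainer.addMountPoints.
taskDefinition.addVolume({
  name: 'db-data',
  efsVolumeConfiguration: {
    fileSystemId: fileSystem.fileSystemId,
  },
});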
