10 minutes to deploy a Docker Compose stack on AWS, illustrated with Hasura and Postgres

Development
Tuesday, March 9, 2021

Introduction

The ecs-cli command is a little gem πŸ’Ž

πŸ‘‰ ecs-cli allows you to deploy a Docker stack very easily on AWS ECS, using the same syntax as the docker-compose file format, versions 1, 2 and 3

πŸ‘‰ The selling point of ecs-cli is to reuse your docker-compose.yml files to deploy your containers to AWS

πŸ‘‰ ecs-cli translates a docker-compose.yml into ECS Task Definitions and Services

In this article we will explore how to:

  • Use the tool ecs-cli to create an AWS ECS cluster to orchestrate a set of Docker containers
  • Add observability to the cluster thanks to AWS CloudWatch Log Groups
  • Use ecs-cli to deploy a set of Docker containers on the cluster
  • Leverage AWS EFS to add persistence to the cluster and support stateful workloads

Amazon Elastic File System (EFS) is a cloud storage service provided by Amazon Web Services, designed to provide scalable, elastic, concurrent (with some restrictions), and encrypted file storage for use with both AWS cloud services and on-premises resources.

As an example we will deploy a Docker stack composed of:

  • HASURA: an open-source engine that gives you an instant GraphQL & REST API
  • PostgreSQL 12 for the persistence layer

Target architecture

illustrations/global-architecture.png

Docker stack

This Docker Stack will be deployed on the AWS ECS Cluster

illustrations/docker-compose-stack.png

7 Steps

  1. Install ecs-cli
  2. Configure ecs-cli
  3. Create the ECS cluster
  4. Create a Docker Compose stack
  5. Deploy the Docker Compose stack on AWS ECS
  6. Create an elastic file system with AWS EFS
  7. Add persistence to PostgreSQL thanks to AWS EFS

Prerequisites (for macOS)

Step 1: Install ecs-cli

The first step is to install the ecs-cli command on your system:

The complete installation procedure for macOS, Linux and Windows is available at this link.

For macOS the installation procedure is as follows:

πŸ‘‰ Download ecs-cli binary

sudo curl -Lo /usr/local/bin/ecs-cli https://amazon-ecs-cli.s3.amazonaws.com/ecs-cli-darwin-amd64-latest

πŸ‘‰ install gnupg (a free implementation of the OpenPGP standard)

brew install gnupg

πŸ‘‰ get the public key of ecs-cli (I have copied the key in a GIST for simplicity)

curl -Lo signature.key https://gist.githubusercontent.com/raphaelmansuy/5aab3c9e6c03e532e9dcf6c97c78b4ff/raw/f39b4df58833f09eb381700a6a854b1adfea482e/ecs-cli-signature-key.key

πŸ‘‰ import the signature

gpg --import ./signature.key

πŸ‘‰ make ecs-cli executable

sudo chmod +x /usr/local/bin/ecs-cli

πŸ‘‰ verify the setup

ecs-cli --version

Configure ecs-cli πŸ‘©β€πŸŒΎ

Prerequisite

  • AWS CLI v2 must be installed. If it's not the case, you can follow the instructions at this link.
  • You need to have an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY with administrative privileges

To create your AWS_ACCESS_KEY_ID you can read this documentation

Your environment variables must be configured with a correct pair of AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY

export AWS_ACCESS_KEY_ID="Your Access Key"
export AWS_SECRET_ACCESS_KEY="Your Secret Access Key"
export AWS_DEFAULT_REGION=us-west-2
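Since the scripts below read these variables, it can help to fail fast with a clear message when one of them is missing, instead of getting an obscure AWS CLI error later. The `check_aws_env` helper below is my own addition, not part of ecs-cli:

```shell
#!/bin/bash
# Guard against running the configuration scripts with missing credentials.
# check_aws_env is a convenience helper, not part of ecs-cli itself.
check_aws_env() {
  local v
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION; do
    if [ -z "${!v:-}" ]; then
      echo "Missing required environment variable: $v" >&2
      return 1
    fi
  done
}
```

Usage: `check_aws_env && ./configure.sh`.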

The following script configures an ECS profile called tutorial for a cluster named tutorial-cluster in the us-west-2 region, with a default launch type based on EC2 instances:

configure.sh

#!/bin/bash
set -e

PROFILE_NAME=tutorial
CLUSTER_NAME=tutorial-cluster
REGION=us-west-2
LAUNCH_TYPE=EC2

ecs-cli configure profile --profile-name "$PROFILE_NAME" --access-key "$AWS_ACCESS_KEY_ID" --secret-key "$AWS_SECRET_ACCESS_KEY"
ecs-cli configure --cluster "$CLUSTER_NAME" --default-launch-type "$LAUNCH_TYPE" --region "$REGION" --config-name "$PROFILE_NAME"

Step 2: Creation of an ECS cluster πŸš€

We will create an ECS cluster based on EC2 instances.

ECS offers 2 launch types, EC2 and FARGATE:

  • EC2 (Deploy and manage your own cluster of EC2 instances for running the containers)
  • AWS Fargate (Run containers directly, without any EC2 instances)

If we want to connect to the EC2 instances with SSH, we need a key pair.

πŸ‘‰ Creation of a key pair called tutorial-cluster:

aws ec2 create-key-pair --key-name tutorial-cluster \
  --query 'KeyMaterial' --output text > ~/.ssh/tutorial-cluster.pem

πŸ‘‰ Creation of the cluster tutorial-cluster with 2 t3.medium EC2 instances

create-cluster.sh

#!/bin/bash
KEY_PAIR=tutorial-cluster

ecs-cli up \
  --keypair $KEY_PAIR \
  --capability-iam \
  --size 2 \
  --instance-type t3.medium \
  --tags project=tutorial-cluster,owner=raphael \
  --cluster-config tutorial \
  --ecs-profile tutorial

We have added 2 tags, project=tutorial-cluster and owner=raphael, to easily identify the resources created by the command.

πŸ‘‰ Result

INFO[0006] Using recommended Amazon Linux 2 AMI with ECS Agent 1.50.2 and Docker version 19.03.13-ce
INFO[0007] Created cluster cluster=tutorial-cluster region=us-west-2
INFO[0010] Waiting for your cluster resources to be created...
INFO[0010] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0073] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0136] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-XXXXX
Security Group created: sg-XXXXX
Subnet created: subnet-AAAA
Subnet created: subnet-BBBB
Cluster creation succeeded.

This command creates:

  • A new public VPC
    • An internet gateway
    • The routing tables
  • 2 public subnets in 2 availability zones
  • 1 security group
  • 1 autoscaling group
    • 2 EC2 instances
  • 1 ECS cluster

illustrations/Screen_Shot_2021-03-08_at_11.36.36.png

We can now deploy a sample Docker application on the newly created ECS Cluster:

πŸ‘‰ Create a file called docker-compose.yml

version: "3"
services:
  webdemo:
    image: "amazon/amazon-ecs-sample"
    ports:
      - "80:80"

This stack can be tested locally:

docker-compose up

Results:

latest: Pulling from amazon/amazon-ecs-sample
Digest: sha256:36c7b282abd0186e01419f2e58743e1bf635808231049bbc9d77e59e3a8e4914
Status: Downloaded newer image for amazon/amazon-ecs-sample:latest

illustrations/Screen_Shot_2021-03-08_at_13.01.06.png

πŸ‘‰ We can now deploy this stack on AWS ECS:

ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service up \
  --deployment-max-percent 100 --deployment-min-healthy-percent 0 \
  --region us-west-2 --ecs-profile tutorial --cluster-config tutorial

πŸ‘‰ To verify that the service is running we can use this command:

ecs-cli ps

Results:

Name                                          State    Ports                     TaskDefinition  Health
tutorial-cluster/2e5af2d48dbc41c1a98/webdemo  RUNNING  34.217.107.14:80->80/tcp  tutorial:2      UNKNOWN

The stack is deployed and accessible with the IP address 34.217.107.14
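If you want to script against the output of ecs-cli ps (for example to feed the IP address to curl), the address can be extracted from the Ports column. The parsing below is a sketch that assumes the column layout shown above:

```shell
#!/bin/bash
# Extract the public IP from a line of `ecs-cli ps` output.
# The sample line is copied from the output above; the field positions
# (3rd column = Ports) are an assumption about the layout.
line='tutorial-cluster/2e5af2d48dbc41c1a98/webdemo RUNNING 34.217.107.14:80->80/tcp tutorial:2 UNKNOWN'
ip=$(echo "$line" | awk '{print $3}' | cut -d: -f1)
echo "$ip"
```

Against the live cluster, the same idea would be `ecs-cli ps | awk 'NR>1 {print $3}' | cut -d: -f1` (skipping the header row).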

πŸ‘‰ We can now browse the deployed Website:

open http://34.217.107.14

πŸ‘‰ Open the port 22 to connect to the EC2 instances of the cluster

# Get my IP
myip="$(dig +short myip.opendns.com @resolver1.opendns.com)"

# Get the security group
sg="$(aws ec2 describe-security-groups --filters Name=tag:project,Values=tutorial-cluster | jq -r '.SecurityGroups[].GroupId')"

# Add port 22 to the Security Group of the VPC
aws ec2 authorize-security-group-ingress \
  --group-id $sg \
  --protocol tcp \
  --port 22 \
  --cidr "$myip/32" | jq '.'
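dig can occasionally return an empty string or an error message, and since the result is interpolated into the --cidr argument it is worth validating first. The `is_ipv4` helper below is my own addition:

```shell
#!/bin/bash
# Return success only for a dotted-quad IPv4 address.
is_ipv4() {
  [[ "$1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]
}

# Example with a literal documentation address:
if is_ipv4 "203.0.113.7"; then
  echo "203.0.113.7/32"
fi
```

Combined with the lookup above: `myip="$(dig +short myip.opendns.com @resolver1.opendns.com)"; is_ipv4 "$myip" || exit 1`.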

πŸ‘‰ Connection to the instance

chmod 400 ~/.ssh/tutorial-cluster.pem
ssh -i ~/.ssh/tutorial-cluster.pem ec2-user@34.217.107.14

πŸ‘‰ Once we are connected to the remote server, we can observe the running containers:

docker ps

CONTAINER ID  IMAGE                           COMMAND                 CREATED         STATUS                    PORTS               NAMES
7deaa49ed72c  amazon/amazon-ecs-sample        "/usr/sbin/apache2 -…"  2 minutes ago   Up 2 minutes              0.0.0.0:80->80/tcp  ecs-tutorial-3-webdemo-9cb1a49483a9cfb7b101
cd1d2a9807d4  amazon/amazon-ecs-agent:latest  "/agent"                55 minutes ago  Up 55 minutes (healthy)                       ecs-agent

Step 3: Adding observability 🀩

If I want to collect the logs for my running instances, I can create AWS CloudWatch Log Groups.

For that we can modify the docker-compose.yml file:

version: "2"
services:
  webdemo:
    image: "amazon/amazon-ecs-sample"
    ports:
      - "80:80"
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-west-2
        awslogs-stream-prefix: demo

πŸ‘‰ And then redeploy the service with the --create-log-groups option:

ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service up \
  --deployment-max-percent 100 --deployment-min-healthy-percent 0 \
  --region us-west-2 --ecs-profile tutorial --cluster-config tutorial \
  --create-log-groups

illustrations/Screen_Shot_2021-03-08_at_15.07.28.png

πŸ‘‰ We can now delete the service πŸ—‘

ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service down \
  --region us-west-2 --ecs-profile tutorial --cluster-config tutorial

πŸ‘‰ Deploying a more complex stack

We are now ready to deploy HASURA and Postgres

illustrations/docker-compose-hasura.png

docker-compose.yml

version: '3'
services:
  postgres:
    image: postgres:12
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgrespassword
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "80:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
volumes:
  db_data:

πŸ‘‰ We can test the stack locally:

docker-compose up &

Then

open http://localhost

illustrations/Screen_Shot_2021-03-08_at_15.25.17.png
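Instead of opening the browser before the containers are ready, you can poll until the stack actually answers. Hasura exposes a /healthz endpoint; the `wait_for_url` helper below is my own sketch:

```shell
#!/bin/bash
# Poll a URL until it responds successfully, or give up after N tries.
wait_for_url() {
  local url=$1 tries=${2:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

Usage: `wait_for_url http://localhost/healthz && open http://localhost`.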

πŸ‘‰ We can now deploy this stack on AWS ECS

But before that we need to update the file docker-compose.yml

We must add:

  • A logging directive
  • A links directive

illustrations/docker-compose-hasura-step2.png

version: '3'
services:
  postgres:
    image: postgres:12
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgrespassword
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-west-2
        awslogs-stream-prefix: hasura-postgres
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "80:8080"
    depends_on:
      - "postgres"
    links:
      - postgres
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-west-2
        awslogs-stream-prefix: hasura
volumes:
  db_data:

We need to create a file called ecs-params.yml to specify extra parameters:

version: 1
task_definition:
  ecs_network_mode: bridge

This file will be used by the ecs-cli command.

πŸ‘‰ We can then launch the stack:

ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service up \
  --deployment-max-percent 100 --deployment-min-healthy-percent 0 \
  --region us-west-2 --ecs-profile tutorial \
  --cluster-config tutorial --create-log-groups

Results:

DEBU[0000] Parsing the compose yaml...
DEBU[0000] Docker Compose version found: 3
DEBU[0000] Parsing v3 project...
WARN[0000] Skipping unsupported YAML option for service... option name=restart service name=postgres
WARN[0000] Skipping unsupported YAML option for service... option name=depends_on service name=graphql-engine
WARN[0000] Skipping unsupported YAML option for service... option name=restart service name=graphql-engine
DEBU[0000] Parsing the ecs-params yaml...
DEBU[0000] Parsing the ecs-registry-creds yaml...
DEBU[0000] Transforming yaml to task definition...
DEBU[0004] Finding task definition in cache or creating if needed TaskDefinition="{\n ContainerDefinitions: [{\n Command: [],\n Cpu: 0,\n DnsSearchDomains: [],\n DnsServers: [],\n DockerSecurityOptions: [],\n EntryPoint: [],\n Environment: [{\n Name: \"POSTGRES_PASSWORD\",\n Value: \"postgrespassword\"\n }],\n Essential: true,\n ExtraHosts: [],\n Image: \"postgres:12\",\n Links: [],\n LinuxParameters: {\n Capabilities: {\n\n },\n Devices: []\n },\n Memory: 512,\n MountPoints: [{\n ContainerPath: \"/var/lib/postgresql/data\",\n ReadOnly: false,\n SourceVolume: \"db_data\"\n }],\n Name: \"postgres\",\n Privileged: false,\n PseudoTerminal: false,\n ReadonlyRootFilesystem: false\n },{\n Command: [],\n Cpu: 0,\n DnsSearchDomains: [],\n DnsServers: [],\n DockerSecurityOptions: [],\n EntryPoint: [],\n Environment: [\n {\n Name: \"HASURA_GRAPHQL_ENABLED_LOG_TYPES\",\n Value: \"startup, http-log, webhook-log, websocket-log, query-log\"\n },\n {\n Name: \"HASURA_GRAPHQL_DATABASE_URL\",\n Value: \"postgres://postgres:postgrespassword@postgres:5432/postgres\"\n },\n {\n Name: \"HASURA_GRAPHQL_ENABLE_CONSOLE\",\n Value: \"true\"\n },\n {\n Name: \"HASURA_GRAPHQL_DEV_MODE\",\n Value: \"true\"\n }\n ],\n Essential: true,\n ExtraHosts: [],\n Image: \"hasura/graphql-engine:v1.3.3\",\n Links: [],\n LinuxParameters: {\n Capabilities: {\n\n },\n Devices: []\n },\n Memory: 512,\n Name: \"graphql-engine\",\n PortMappings: [{\n ContainerPort: 8080,\n HostPort: 80,\n Protocol: \"tcp\"\n }],\n Privileged: false,\n PseudoTerminal: false,\n ReadonlyRootFilesystem: false\n }],\n Cpu: \"\",\n ExecutionRoleArn: \"\",\n Family: \"tutorial\",\n Memory: \"\",\n NetworkMode: \"\",\n RequiresCompatibilities: [\"EC2\"],\n TaskRoleArn: \"\",\n Volumes: [{\n Name: \"db_data\"\n }]\n}"
DEBU[0005] cache miss taskDef="{\n\n}" taskDefHash=4e57f367846e8f3546dd07eadc605490
INFO[0005] Using ECS task definition TaskDefinition="tutorial:4"
WARN[0005] No log groups to create; no containers use 'awslogs'
INFO[0005] Updated the ECS service with a new task definition. Old containers will be stopped automatically, and replaced with new ones deployment-max-percent=100 deployment-min-healthy-percent=0 desiredCount=1 force-deployment=false service=tutorial
INFO[0006] Service status desiredCount=1 runningCount=1 serviceName=tutorial
INFO[0027] Service status desiredCount=1 runningCount=0 serviceName=tutorial
INFO[0027] (service tutorial) has stopped 1 running tasks: (task ee882a6a66724415a3bdc8fffaa2824c). timestamp="2021-03-08 07:30:33 +0000 UTC"
INFO[0037] (service tutorial) has started 1 tasks: (task a1068efe89614812a3243521c0d30847). timestamp="2021-03-08 07:30:43 +0000 UTC"
INFO[0074] (service tutorial) has started 1 tasks: (task 1949af75ac5a4e749dfedcb89321fd67). timestamp="2021-03-08 07:31:23 +0000 UTC"
INFO[0080] Service status desiredCount=1 runningCount=1 serviceName=tutorial
INFO[0080] ECS Service has reached a stable state desiredCount=1 runningCount=1 serviceName=tutorial

πŸ‘‰ And then we can verify that our containers are running on the AWS ECS cluster:

ecs-cli ps

Results

Name                                                              State    Ports                       TaskDefinition  Health
tutorial-cluster/00d7ff5191dd4d11a9b52ea64fb9ee26/graphql-engine  RUNNING  34.217.107.14:80->8080/tcp  tutorial:10     UNKNOWN
tutorial-cluster/00d7ff5191dd4d11a9b52ea64fb9ee26/postgres        RUNNING                              tutorial:10     UNKNOWN

πŸ‘‰ And then: πŸ’ͺ

open http://34.217.107.14

illustrations/Screen_Shot_2021-03-08_at_16.09.29.png

πŸ‘‰ We can now stop the stack

ecs-cli compose --project-name tutorial --file docker-compose.yml \
  service down \
  --region us-west-2 --ecs-profile tutorial --cluster-config tutorial

To add persistence to the stack we can leverage AWS EFS (Elastic File System).

Step 4: Add a persistent layer to the cluster

illustrations/efs-file-system.png

πŸ‘‰ Create an EFS file system named hasura-db-filesystem

aws efs create-file-system \
  --performance-mode generalPurpose \
  --throughput-mode bursting \
  --encrypted \
  --tags Key=Name,Value=hasura-db-filesystem

Results:

{
  "OwnerId": "XXXXX",
  "CreationToken": "10f91a50-0649-442d-b4ad-2ce67f1546bf",
  "FileSystemId": "fs-5574bd52",
  "FileSystemArn": "arn:aws:elasticfilesystem:us-west-2:XXXXX:file-system/fs-5574bd52",
  "CreationTime": "2021-03-08T16:40:19+08:00",
  "LifeCycleState": "creating",
  "Name": "hasura-db-filesystem",
  "NumberOfMountTargets": 0,
  "SizeInBytes": {
    "Value": 0,
    "ValueInIA": 0,
    "ValueInStandard": 0
  },
  "PerformanceMode": "generalPurpose",
  "Encrypted": true,
  "KmsKeyId": "arn:aws:kms:us-west-2:XXXXX:key/97542264-cc64-42f9-954e-4af2b17f72aa",
  "ThroughputMode": "bursting",
  "Tags": [
    {
      "Key": "Name",
      "Value": "hasura-db-filesystem"
    }
  ]
}

πŸ‘‰ Add mount points to each subnet of the VPC:

aws ec2 describe-subnets --filters Name=tag:project,Values=tutorial-cluster \
  | jq ".Subnets[].SubnetId" \
  | xargs -ISUBNET aws efs create-mount-target \
      --file-system-id fs-5574bd52 --subnet-id SUBNET

The next step is to allow NFS connections from the VPC.

First, we need to get the security group associated with the mount targets:

efs_sg=$(aws efs describe-mount-targets --file-system-id fs-5574bd52 \
  | jq ".MountTargets[0].MountTargetId" \
  | xargs -IMOUNTG aws efs describe-mount-target-security-groups \
      --mount-target-id MOUNTG | jq ".SecurityGroups[0]" | xargs echo)
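The command above chains two AWS calls through jq and xargs, which can be hard to debug. The jq extraction itself can be exercised offline on a canned payload (the JSON shape mirrors the describe-mount-target-security-groups response); note that `jq -r` is an alternative to the `| xargs echo` trick for stripping the surrounding quotes:

```shell
#!/bin/bash
# Canned response in the shape returned by
# `aws efs describe-mount-target-security-groups`.
# The security group ID is a placeholder value.
payload='{"SecurityGroups": ["sg-0123456789abcdef0"]}'

# -r outputs the raw string, so no `| xargs echo` is needed to strip quotes.
efs_sg=$(echo "$payload" | jq -r '.SecurityGroups[0]')
echo "$efs_sg"
```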

πŸ‘‰ Then we retrieve the security group of the VPC:

vpc_sg="$(aws ec2 describe-security-groups \
  --filters Name=tag:project,Values=tutorial-cluster \
  | jq '.SecurityGroups[].GroupId' | xargs echo)"

πŸ‘‰ Then we authorize inbound TCP/2049 (NFS) from the security group of the VPC:

aws ec2 authorize-security-group-ingress \
  --group-id $efs_sg \
  --protocol tcp \
  --port 2049 \
  --source-group $vpc_sg \
  --region us-west-2

πŸ‘‰ We can now modify the ecs-params.yml to add persistence support:

  • We use the ID of the EFS volume created in the previous step: fs-5574bd52
version: 1
task_definition:
  ecs_network_mode: bridge
  efs_volumes:
    - name: db_data
      filesystem_id: fs-5574bd52
      transit_encryption: ENABLED

πŸ‘‰ Then we can redeploy our stack:

ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service up \
  --deployment-max-percent 100 --deployment-min-healthy-percent 0 \
  --region us-west-2 --ecs-profile tutorial \
  --cluster-config tutorial --create-log-groups

πŸ‘‰ Et voilΓ : the stack is operational πŸŽ‰ πŸ¦„ πŸ’ͺ
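Once you are done experimenting, remember to tear everything down to avoid ongoing EC2 and EFS charges. Here is a sketch: the `run` wrapper and its DRY_RUN guard are my own additions (set DRY_RUN=0 to execute for real), and the EFS mount targets must be deleted before the file system itself:

```shell
#!/bin/bash
# Print commands by default; set DRY_RUN=0 to actually execute them.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "[dry-run] $*"
  else
    "$@"
  fi
}

# Stop the ECS service
run ecs-cli compose --project-name tutorial --file docker-compose.yml \
  service down --region us-west-2 --ecs-profile tutorial --cluster-config tutorial

# Destroy the cluster and its CloudFormation stack (VPC, subnets, EC2 instances)
run ecs-cli down --force --cluster-config tutorial --ecs-profile tutorial

# Delete the EFS file system (delete its mount targets first
# with `aws efs delete-mount-target`)
run aws efs delete-file-system --file-system-id fs-5574bd52
```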

Summary

πŸ’ͺ We have deployed an ECS cluster with ecs-cli and launched a Docker Compose stack on it

πŸš€ The next step will be to expose and secure the stack using an AWS Application Load Balancer

The scripts associated with this article are available at:

πŸ‘‰ https://github.com/raphaelmansuy/using-ecs-cli-tutorial-01.git
