The ecs-cli command is a little gem.

ecs-cli allows you to deploy a Docker stack very easily on AWS ECS, using the same syntax as the docker-compose file format versions 1, 2 and 3.

The selling point of ecs-cli is to reuse your docker-compose.yml files to deploy your containers to AWS.

ecs-cli translates a docker-compose.yml into ECS Task Definitions and Services.
In this article we will explore how to use ecs-cli to create an AWS ECS cluster that orchestrates a set of Docker containers.

Amazon Elastic File System (EFS) is a cloud storage service provided by Amazon Web Services, designed to provide scalable, elastic, concurrent (with some restrictions), and encrypted file storage for use with both AWS cloud services and on-premises resources.
As an example we will deploy a Docker stack composed of a Hasura GraphQL engine and a Postgres database. This Docker stack will be deployed on an AWS ECS cluster with ecs-cli, and its data will be persisted on AWS EFS.
Prerequisites (for macOS)
The first step is to install the ecs-cli command on your system.

The complete installation procedure for macOS, Linux and Windows is available at this link. For macOS the installation procedure is as follows:
Download the ecs-cli binary:

```shell
sudo curl -Lo /usr/local/bin/ecs-cli https://amazon-ecs-cli.s3.amazonaws.com/ecs-cli-darwin-amd64-latest
```
Install gnupg (a free implementation of the OpenPGP standard):

```shell
brew install gnupg
```
Get the public key of ecs-cli (I have copied the key into a gist for simplicity):

```shell
curl -Lo signature.key https://gist.githubusercontent.com/raphaelmansuy/5aab3c9e6c03e532e9dcf6c97c78b4ff/raw/f39b4df58833f09eb381700a6a854b1adfea482e/ecs-cli-signature-key.key
```
Import the key:

```shell
gpg --import ./signature.key
```
Make ecs-cli executable:

```shell
sudo chmod +x /usr/local/bin/ecs-cli
```
Verify the setup:

```shell
ecs-cli --version
```
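The scripts in the rest of this article also depend on the aws CLI and jq. A tiny guard you can put at the top of each script to fail early when a tool is missing (a minimal sketch, not from the original article):

```shell
# require: fail early when a required tool is missing from PATH (sketch).
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing required tool: $1" >&2; return 1; }
}
# Usage: require ecs-cli && require aws && require jq
```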
Configure ecs-cli

Prerequisite: your environment must be configured with a valid AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair. To create your AWS_ACCESS_KEY_ID you can read this documentation.

```shell
export AWS_ACCESS_KEY_ID="Your Access Key"
export AWS_SECRET_ACCESS_KEY="Your Secret Access Key"
export AWS_DEFAULT_REGION=us-west-2
```
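A quick way to catch a missing variable before running any of the scripts below (a minimal sketch, not part of the original article; the variable list matches the exports above):

```shell
# check_aws_env: fail fast when a required AWS variable is unset or empty.
check_aws_env() {
  for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION; do
    eval "value=\${$var:-}"
    if [ -z "$value" ]; then
      echo "error: $var is not set" >&2
      return 1
    fi
  done
  echo "AWS environment looks complete"
}
# Usage: check_aws_env || exit 1
```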
The following script configures an ECS profile called tutorial for a cluster named tutorial-cluster in the us-west-2 region, with a default launch type based on EC2 instances:

configure.sh

```shell
#!/bin/bash
set -e
PROFILE_NAME=tutorial
CLUSTER_NAME=tutorial-cluster
REGION=us-west-2
LAUNCH_TYPE=EC2
ecs-cli configure profile --profile-name "$PROFILE_NAME" --access-key "$AWS_ACCESS_KEY_ID" --secret-key "$AWS_SECRET_ACCESS_KEY"
ecs-cli configure --cluster "$CLUSTER_NAME" --default-launch-type "$LAUNCH_TYPE" --region "$REGION" --config-name "$PROFILE_NAME"
```
We will create an ECS cluster based on EC2 instances. ECS allows two launch types: EC2 and FARGATE.
If we want to connect to the EC2 instances with ssh, we need a key pair.

Creation of a key pair called tutorial-cluster:

```shell
aws ec2 create-key-pair --key-name tutorial-cluster \
    --query 'KeyMaterial' --output text > ~/.ssh/tutorial-cluster.pem
```
Creation of the cluster tutorial-cluster with 2 EC2 instances of type t3.medium:

create-cluster.sh

```shell
#!/bin/bash
KEY_PAIR=tutorial-cluster
ecs-cli up \
    --keypair $KEY_PAIR \
    --capability-iam \
    --size 2 \
    --instance-type t3.medium \
    --tags project=tutorial-cluster,owner=raphael \
    --cluster-config tutorial \
    --ecs-profile tutorial
```
We have added two tags, project=tutorial-cluster and owner=raphael, to easily identify the resources created by the command.
Result:

```
INFO[0006] Using recommended Amazon Linux 2 AMI with ECS Agent 1.50.2 and Docker version 19.03.13-ce
INFO[0007] Created cluster  cluster=tutorial-cluster region=us-west-2
INFO[0010] Waiting for your cluster resources to be created...
INFO[0010] Cloudformation stack status  stackStatus=CREATE_IN_PROGRESS
INFO[0073] Cloudformation stack status  stackStatus=CREATE_IN_PROGRESS
INFO[0136] Cloudformation stack status  stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-XXXXX
Security Group created: sg-XXXXX
Subnet created: subnet-AAAA
Subnet created: subnet-BBBB
Cluster creation succeeded.
```
This command creates a CloudFormation stack, a VPC, a security group, two subnets, and two t3.medium EC2 instances registered to the cluster.
We can now deploy a sample Docker application on the newly created ECS Cluster:
Create a file called docker-compose.yml:

```yaml
version: "3"
services:
  webdemo:
    image: "amazon/amazon-ecs-sample"
    ports:
      - "80:80"
```
This stack can be tested locally:

```shell
docker-compose up
```
Results:

```
latest: Pulling from amazon/amazon-ecs-sample
Digest: sha256:36c7b282abd0186e01419f2e58743e1bf635808231049bbc9d77e59e3a8e4914
Status: Downloaded newer image for amazon/amazon-ecs-sample:latest
```
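It can take a few seconds before the local container answers on port 80. A small retry helper can wait for it (a sketch, not from the original article; `curl` against localhost is the assumed probe):

```shell
# wait_for: retry a probe command until it succeeds or attempts run out.
wait_for() {
  attempts=$1
  shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
# Against the local stack: wait_for 30 curl -fsS http://localhost:80
```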
We can now deploy this stack on AWS ECS:

```shell
ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service up \
  --deployment-max-percent 100 --deployment-min-healthy-percent 0 \
  --region us-west-2 --ecs-profile tutorial --cluster-config tutorial
```
To verify that the service is running we can use this command:

```shell
ecs-cli ps
```
Results:

```
Name                                          State    Ports                     TaskDefinition  Health
tutorial-cluster/2e5af2d48dbc41c1a98/webdemo  RUNNING  34.217.107.14:80->80/tcp  tutorial:2      UNKNOWN
```
The stack is deployed and accessible at the IP address 34.217.107.14.

We can now browse the deployed website:

```shell
open http://34.217.107.14
```
Open port 22 to connect to the EC2 instances of the cluster:

```shell
# Get my IP
myip="$(dig +short myip.opendns.com @resolver1.opendns.com)"

# Get the security group (jq -r strips the JSON quotes)
sg="$(aws ec2 describe-security-groups --filters Name=tag:project,Values=tutorial-cluster | jq -r '.SecurityGroups[].GroupId')"

# Add port 22 to the security group of the VPC
aws ec2 authorize-security-group-ingress \
    --group-id "$sg" \
    --protocol tcp \
    --port 22 \
    --cidr "$myip/32" | jq '.'
```
Connect to the instance:

```shell
chmod 400 ~/.ssh/tutorial-cluster.pem
ssh -i ~/.ssh/tutorial-cluster.pem ec2-user@34.217.107.14
```
Once we are connected to the remote server, we can observe the running containers:

```shell
docker ps
```

```
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS                    PORTS                NAMES
7deaa49ed72c   amazon/amazon-ecs-sample         "/usr/sbin/apache2 -…"   2 minutes ago    Up 2 minutes              0.0.0.0:80->80/tcp   ecs-tutorial-3-webdemo-9cb1a49483a9cfb7b101
cd1d2a9807d4   amazon/amazon-ecs-agent:latest   "/agent"                 55 minutes ago   Up 55 minutes (healthy)                        ecs-agent
```
If we want to collect the logs of our running containers, we can create AWS CloudWatch log groups. For that we modify the docker-compose.yml file:

```yaml
version: "2"
services:
  webdemo:
    image: "amazon/amazon-ecs-sample"
    ports:
      - "80:80"
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-west-2
        awslogs-stream-prefix: demo
```
And then redeploy the service with the --create-log-groups option:

```shell
ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service up \
  --deployment-max-percent 100 --deployment-min-healthy-percent 0 \
  --region us-west-2 --ecs-profile tutorial --cluster-config tutorial \
  --create-log-groups
```
We can now delete the service:

```shell
ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service down \
  --region us-west-2 --ecs-profile tutorial --cluster-config tutorial
```
We are now ready to deploy Hasura and Postgres.

docker-compose.yml

```yaml
version: '3'
services:
  postgres:
    image: postgres:12
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgrespassword
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "80:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
volumes:
  db_data:
```
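The HASURA_GRAPHQL_DATABASE_URL value above follows the standard postgres://user:password@host:port/dbname shape; the host is the compose service name, which resolves to the database container. Assembled from its parts (a sketch using the values from the compose file above):

```shell
# Build the Postgres connection string used by HASURA_GRAPHQL_DATABASE_URL.
db_user=postgres
db_password=postgrespassword
db_host=postgres   # the compose service name, reachable from graphql-engine
db_port=5432
db_name=postgres
url="postgres://${db_user}:${db_password}@${db_host}:${db_port}/${db_name}"
echo "$url"
```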
We can test the stack locally:

```shell
docker-compose up &
```

Then:

```shell
open http://localhost
```
We can now deploy this stack on AWS ECS. But before that we need to update the docker-compose.yml file. We must add a logging directive and a links directive:

```yaml
version: '3'
services:
  postgres:
    image: postgres:12
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgrespassword
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-west-2
        awslogs-stream-prefix: hasura-postgres
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "80:8080"
    depends_on:
      - "postgres"
    links:
      - postgres
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-west-2
        awslogs-stream-prefix: hasura
volumes:
  db_data:
```
We need to create a file called ecs-params.yml to specify extra parameters:

```yaml
version: 1
task_definition:
  ecs_network_mode: bridge
```
This file will be used by the ecs-cli command.
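Beyond the network mode, the ecs-cli parameter file can also carry per-service settings that have no ECS-supported docker-compose equivalent, such as memory limits. A hedged sketch (the services block follows the ecs-params.yml schema; the memory values are illustrative assumptions, not from this tutorial):

```yaml
version: 1
task_definition:
  ecs_network_mode: bridge
  services:
    postgres:
      mem_limit: 524288000       # illustrative hard limit, in bytes
    graphql-engine:
      mem_limit: 524288000
```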
We can then launch the stack:

```shell
ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service up \
  --deployment-max-percent 100 --deployment-min-healthy-percent 0 \
  --region us-west-2 --ecs-profile tutorial \
  --cluster-config tutorial --create-log-groups
```
Results:

```
DEBU[0000] Parsing the compose yaml...
DEBU[0000] Docker Compose version found: 3
DEBU[0000] Parsing v3 project...
WARN[0000] Skipping unsupported YAML option for service...  option name=restart service name=postgres
WARN[0000] Skipping unsupported YAML option for service...  option name=depends_on service name=graphql-engine
WARN[0000] Skipping unsupported YAML option for service...  option name=restart service name=graphql-engine
DEBU[0000] Parsing the ecs-params yaml...
DEBU[0000] Parsing the ecs-registry-creds yaml...
DEBU[0000] Transforming yaml to task definition...
DEBU[0004] Finding task definition in cache or creating if needed  TaskDefinition="{\n ContainerDefinitions: [{\n Command: [],\n Cpu: 0,\n DnsSearchDomains: [],\n DnsServers: [],\n DockerSecurityOptions: [],\n EntryPoint: [],\n Environment: [{\n Name: \"POSTGRES_PASSWORD\",\n Value: \"postgrespassword\"\n }],\n Essential: true,\n ExtraHosts: [],\n Image: \"postgres:12\",\n Links: [],\n LinuxParameters: {\n Capabilities: {\n\n },\n Devices: []\n },\n Memory: 512,\n MountPoints: [{\n ContainerPath: \"/var/lib/postgresql/data\",\n ReadOnly: false,\n SourceVolume: \"db_data\"\n }],\n Name: \"postgres\",\n Privileged: false,\n PseudoTerminal: false,\n ReadonlyRootFilesystem: false\n },{\n Command: [],\n Cpu: 0,\n DnsSearchDomains: [],\n DnsServers: [],\n DockerSecurityOptions: [],\n EntryPoint: [],\n Environment: [\n {\n Name: \"HASURA_GRAPHQL_ENABLED_LOG_TYPES\",\n Value: \"startup, http-log, webhook-log, websocket-log, query-log\"\n },\n {\n Name: \"HASURA_GRAPHQL_DATABASE_URL\",\n Value: \"postgres://postgres:postgrespassword@postgres:5432/postgres\"\n },\n {\n Name: \"HASURA_GRAPHQL_ENABLE_CONSOLE\",\n Value: \"true\"\n },\n {\n Name: \"HASURA_GRAPHQL_DEV_MODE\",\n Value: \"true\"\n }\n ],\n Essential: true,\n ExtraHosts: [],\n Image: \"hasura/graphql-engine:v1.3.3\",\n Links: [],\n LinuxParameters: {\n Capabilities: {\n\n },\n Devices: []\n },\n Memory: 512,\n Name: \"graphql-engine\",\n PortMappings: [{\n ContainerPort: 8080,\n HostPort: 80,\n Protocol: \"tcp\"\n }],\n Privileged: false,\n PseudoTerminal: false,\n ReadonlyRootFilesystem: false\n }],\n Cpu: \"\",\n ExecutionRoleArn: \"\",\n Family: \"tutorial\",\n Memory: \"\",\n NetworkMode: \"\",\n RequiresCompatibilities: [\"EC2\"],\n TaskRoleArn: \"\",\n Volumes: [{\n Name: \"db_data\"\n }]\n}"
DEBU[0005] cache miss  taskDef="{\n\n}" taskDefHash=4e57f367846e8f3546dd07eadc605490
INFO[0005] Using ECS task definition  TaskDefinition="tutorial:4"
WARN[0005] No log groups to create; no containers use 'awslogs'
INFO[0005] Updated the ECS service with a new task definition. Old containers will be stopped automatically, and replaced with new ones  deployment-max-percent=100 deployment-min-healthy-percent=0 desiredCount=1 force-deployment=false service=tutorial
INFO[0006] Service status  desiredCount=1 runningCount=1 serviceName=tutorial
INFO[0027] Service status  desiredCount=1 runningCount=0 serviceName=tutorial
INFO[0027] (service tutorial) has stopped 1 running tasks: (task ee882a6a66724415a3bdc8fffaa2824c).  timestamp="2021-03-08 07:30:33 +0000 UTC"
INFO[0037] (service tutorial) has started 1 tasks: (task a1068efe89614812a3243521c0d30847).  timestamp="2021-03-08 07:30:43 +0000 UTC"
INFO[0074] (service tutorial) has started 1 tasks: (task 1949af75ac5a4e749dfedcb89321fd67).  timestamp="2021-03-08 07:31:23 +0000 UTC"
INFO[0080] Service status  desiredCount=1 runningCount=1 serviceName=tutorial
INFO[0080] ECS Service has reached a stable state  desiredCount=1 runningCount=1 serviceName=tutorial
```
We can then verify that our containers are running on the AWS ECS cluster:

```shell
ecs-cli ps
```

Results:

```
Name                                                              State    Ports                       TaskDefinition  Health
tutorial-cluster/00d7ff5191dd4d11a9b52ea64fb9ee26/graphql-engine  RUNNING  34.217.107.14:80->8080/tcp  tutorial:10     UNKNOWN
tutorial-cluster/00d7ff5191dd4d11a9b52ea64fb9ee26/postgres        RUNNING                              tutorial:10     UNKNOWN
```
And then:

```shell
open http://34.217.107.14
```

We can now stop the stack:

```shell
ecs-cli compose down
```
To add persistence support to our solution, we can leverage AWS EFS (Elastic File System).
Create an EFS file system named hasura-db-filesystem:

```shell
aws efs create-file-system \
    --performance-mode generalPurpose \
    --throughput-mode bursting \
    --encrypted \
    --tags Key=Name,Value=hasura-db-filesystem
```
Results:

```json
{
    "OwnerId": "XXXXX",
    "CreationToken": "10f91a50-0649-442d-b4ad-2ce67f1546bf",
    "FileSystemId": "fs-5574bd52",
    "FileSystemArn": "arn:aws:elasticfilesystem:us-west-2:XXXXX:file-system/fs-5574bd52",
    "CreationTime": "2021-03-08T16:40:19+08:00",
    "LifeCycleState": "creating",
    "Name": "hasura-db-filesystem",
    "NumberOfMountTargets": 0,
    "SizeInBytes": {
        "Value": 0,
        "ValueInIA": 0,
        "ValueInStandard": 0
    },
    "PerformanceMode": "generalPurpose",
    "Encrypted": true,
    "KmsKeyId": "arn:aws:kms:us-west-2:XXXXX:key/97542264-cc64-42f9-954e-4af2b17f72aa",
    "ThroughputMode": "bursting",
    "Tags": [
        {
            "Key": "Name",
            "Value": "hasura-db-filesystem"
        }
    ]
}
```
Add mount targets in each subnet of the VPC:

```shell
aws ec2 describe-subnets --filters Name=tag:project,Values=tutorial-cluster \
    | jq ".Subnets[].SubnetId" \
    | xargs -ISUBNET aws efs create-mount-target \
        --file-system-id fs-5574bd52 --subnet-id SUBNET
```
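The pipeline above fans out one create-mount-target call per subnet: jq extracts the subnet IDs and xargs -ISUBNET substitutes each one into the command. The same pattern, demonstrated locally with echo standing in for the AWS CLI (the subnet IDs are placeholders):

```shell
# Fan-out pattern: run one command per input line, substituting SUBNET.
printf 'subnet-AAAA\nsubnet-BBBB\n' \
  | xargs -ISUBNET echo "create-mount-target --file-system-id fs-5574bd52 --subnet-id SUBNET"
```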
The next step is to allow NFS connections from the VPC. We first need to get the security group associated with the mount targets:

```shell
efs_sg=$(aws efs describe-mount-targets --file-system-id fs-5574bd52 \
    | jq ".MountTargets[0].MountTargetId" \
    | xargs -IMOUNTG aws efs describe-mount-target-security-groups \
        --mount-target-id MOUNTG | jq ".SecurityGroups[0]" | xargs echo)
```
Then we retrieve the security group of the VPC:

```shell
vpc_sg="$(aws ec2 describe-security-groups \
    --filters Name=tag:project,Values=tutorial-cluster \
    | jq '.SecurityGroups[].GroupId' | xargs echo)"
```
Then we authorize NFS traffic (TCP port 2049) from the VPC security group:

```shell
aws ec2 authorize-security-group-ingress \
    --group-id $efs_sg \
    --protocol tcp \
    --port 2049 \
    --source-group $vpc_sg \
    --region us-west-2
```
We can now modify ecs-params.yml to add persistence support, referencing the file system fs-5574bd52 created earlier:

```yaml
version: 1
task_definition:
  ecs_network_mode: bridge
  efs_volumes:
    - name: db_data
      filesystem_id: fs-5574bd52
      transit_encryption: ENABLED
```
Then we can redeploy our stack:

```shell
ecs-cli compose --project-name tutorial --file docker-compose.yml \
  --debug service up \
  --deployment-max-percent 100 --deployment-min-healthy-percent 0 \
  --region us-west-2 --ecs-profile tutorial \
  --cluster-config tutorial --create-log-groups
```
Et voilà: the stack is operational.

We have deployed an ECS cluster with ecs-cli and launched a Docker Compose stack on it. The next step will be to expose and secure the stack using an AWS Application Load Balancer.

The scripts associated with this article are available at https://github.com/raphaelmansuy/using-ecs-cli-tutorial-01.git