
How to deploy an application to Amazon ECS using GitLab CI

A guide to continuously integrate and deploy your Dockerized application to AWS ECS with GitLab

Before we dive into the actual deployment of an application in ECS using GitLab CI, below are the basic steps required to enable CI/CD in GitLab.

  1. Add .gitlab-ci.yml to the project’s root folder.
  2. Setup a Runner.

Adding .gitlab-ci.yml

In the root folder of your GitLab project, add a file named .gitlab-ci.yml. GitLab uses this file for CI/CD, so all your pipeline configuration goes here. A sample configuration can be found in the section below.

Setup a Runner

Runners are virtual machines that run the CI/CD pipeline defined in .gitlab-ci.yml. By default, GitLab provides public shared runners, but you can also configure your own. Please refer to gitlab-runner to set up runners on your own. In this blog I am using a shared runner (the default) provided by GitLab.
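If you do want your own runner instead, registering one looks roughly like this (a minimal sketch, assuming the Docker executor; the URL and token are placeholders for your own values):

gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token <your-registration-token> \
  --executor docker \
  --docker-image docker:latest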

Before deploying an application to Amazon ECS, let us first look at the basic flow of CI/CD in GitLab.

Here is a basic example that covers most of the GitLab CI pipeline syntax. I am using a simple Node.js application that installs dependencies, runs tests and then builds, in stages defined in .gitlab-ci.yml.

image: node:10.5.0
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy
before_script:
  - npm install


build-min-code-job:
  stage: build
  script:
    - npm run minifier

run-unit-test-job:
  stage: test
  script:
    - npm run test

deploy-staging-job:
  stage: deploy
  script:
    - npm run deploy-stage
  only:
    - develop

deploy-production-job:
  stage: deploy
  script:
    - npm run deploy-prod
  only:
    - master

It has three stages, namely build, test and deploy. Each stage runs inside its respective job, such as build-min-code-job and run-unit-test-job; you may give a job any name you like.

In the script above, the image node:10.5.0 is used as the default image for all stages. It is pulled from Docker Hub.

cache caches all the dependencies in a folder named node_modules, and since it is declared globally it is available to all stages. This saves time while running jobs.

key: ${CI_COMMIT_REF_SLUG} gives each branch its own cache, shared by all the jobs that run on that branch.

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/

before_script runs commands before the script of every job. For instance, I am installing the dependencies here.

before_script:
  - npm install

I am then running npm run minifier and npm run test in separate stages, namely the build stage and the test stage. The jobs running these stages are build-min-code-job and run-unit-test-job respectively.

build-min-code-job:
  stage: build
  script:
    - npm run minifier

run-unit-test-job:
  stage: test
  script:
    - npm run test

In the snippet below, only: develop indicates that the job deploy-staging-job runs only when changes are pushed to the develop branch.

Similarly, deploy-production-job has only: master in its config. The rest of the jobs run for all branches.

deploy-staging-job:
  stage: deploy
  script:
    - npm run deploy-stage
  only:
    - develop

deploy-production-job:
  stage: deploy
  script:
    - npm run deploy-prod
  only:
    - master

So the pipeline looks something like this:

Since I have not created a develop branch, the deploy-staging-job is missing from the resulting pipeline; it runs only when commits are pushed to a develop branch.

Now let us move on to setting up CI/CD for deploying an application to Amazon ECS cluster.

Prerequisite: have an ECS cluster running in AWS with a task definition.

For more information on creating an Amazon ECS cluster, refer to CREATE-AMAZON-ECS-CLUSTER.
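To quickly verify this prerequisite, you could run the following AWS CLI commands (a sketch; my-cluster, my-service and my-task-definition are placeholders for your own names):

# confirm the cluster is ACTIVE
aws ecs describe-clusters --clusters my-cluster --region ap-south-1

# confirm the service is running inside that cluster
aws ecs describe-services --cluster my-cluster --services my-service --region ap-south-1

# confirm the task definition is registered
aws ecs describe-task-definition --task-definition my-task-definition --region ap-south-1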

Before we begin with the script, you need to declare variables with appropriate values. Below are the variables:

AWS_ACCESS_KEY_ID: Provide your access key

AWS_SECRET_ACCESS_KEY: Provide your secret access key

CI_AWS_ECS_CLUSTER: Name of ECS cluster

CI_AWS_ECS_SERVICE: Service name which is running the task definition

CI_AWS_ECS_TASK_DEFINITION: Name of the task definition of your application

REPOSITORY_URL: URL of amazon ECR repository

NOTE: you can give these variables any names you like, but ensure you declare them under Settings -> CI/CD -> Variables (expand it) -> Add variable.

You can also declare and initialize variables inside the script itself. However, it is not a good idea to put your access keys and their values in the script, as anyone with read access to the repository could see them.
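For the non-sensitive values, a variables block in .gitlab-ci.yml could look roughly like this (a sketch only; the cluster, service, task definition and repository values below are placeholders for your own):

variables:
  CI_AWS_ECS_CLUSTER: my-cluster                  # placeholder
  CI_AWS_ECS_SERVICE: my-service                  # placeholder
  CI_AWS_ECS_TASK_DEFINITION: my-task-definition  # placeholder
  REPOSITORY_URL: 123456789012.dkr.ecr.ap-south-1.amazonaws.com/my-app  # placeholder
  # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are deliberately not set here;
  # keep them as masked variables under Settings -> CI/CD -> Variables.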

Let us now understand the script step by step!

I have split the script into parts for explanation; for the full script, scroll to the end.

Firstly, to enable support for docker commands, we make use of the Docker image:

image: docker:latest

The docker:dind (Docker-in-Docker) service needs to be attached to this image in order to get the full functionality of docker:

services:
  - docker:dind

Under before_script,

  1. I am installing all the dependencies required for the forthcoming stages. As explained at the beginning of this blog, this applies to all the stages.
  2. Through apk, I am installing jq and python pip. We do not need a separate Alpine image, since docker:latest is itself Alpine-based and supports apk add.
  3. We also log in to the Amazon ECR registry.

before_script:
  - apk add --update --no-cache jq py-pip
  - pip install awscli
  - $(aws ecr get-login --no-include-email --region ap-south-1)
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"

There are two stages, namely build and deploy.

In build-job, which runs the build stage, the image is built with the name of the Amazon ECR repository URL and the tag $IMAGE_TAG (the first eight characters of the commit SHA, acting like a build number that differentiates the image produced by each pipeline run), and then pushed to the ECR registry:

stages:
  - build
  - deploy

build-job:
  stage: build
  script:
    - docker build -t $REPOSITORY_URL:$IMAGE_TAG .
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - master

Inside the deploy stage, in deploy-job:

I retrieve the current task definition using the aws ecs describe-task-definition command and store the output in a file named input.json, then overwrite the old image name in the task definition with the new image using the jq tool on input.json. I have hard-coded the region; you can declare a variable for the region.

- echo `aws ecs describe-task-definition --task-definition  $CI_AWS_ECS_TASK_DEFINITION --region ap-south-1` > input.json
- echo $(cat input.json | jq '.taskDefinition.containerDefinitions[].image="'$REPOSITORY_URL':'$IMAGE_TAG'"') >  input.json
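To illustrate, after the jq filter runs, the relevant fragment of input.json would look roughly like this (a simplified sketch; a real task definition contains many more fields, and the names here are placeholders):

{
  "taskDefinition": {
    "containerDefinitions": [
      {
        "name": "my-app",
        "image": "123456789012.dkr.ecr.ap-south-1.amazonaws.com/my-app:ab12cd34"
      }
    ]
  }
}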

Next, I delete fields such as taskDefinitionArn, revision and status from input.json; these get assigned fresh values when a new task definition is created from this input file.

- echo $(cat input.json | jq '.taskDefinition') > input.json
- echo $(cat input.json | jq  'del(.taskDefinitionArn)' | jq 'del(.revision)' | jq 'del(.status)' | jq 'del(.requiresAttributes)' | jq 'del(.compatibilities)') > input.json

I then register a new task definition, using input.json as the input file.

- aws ecs register-task-definition --cli-input-json file://input.json --region ap-south-1

I also retrieve the revision number of the new task definition and store it in the revision variable (describe-task-definition always outputs the latest revision of the task definition):

revision=$(aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region ap-south-1 | egrep "revision" | tr "/" " " | awk '{print $2}' | sed 's/"$//' | cut -d "," -f 1)
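Since jq is already installed, an equivalent and simpler way to extract the revision would be to query the JSON directly instead of chaining egrep, tr, awk, sed and cut (a sketch under the same assumptions):

revision=$(aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region ap-south-1 | jq '.taskDefinition.revision')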

Finally, I update the service with the new task definition by giving it the latest revision:

- aws ecs update-service --cluster $CI_AWS_ECS_CLUSTER --service $CI_AWS_ECS_SERVICE  --task-definition $CI_AWS_ECS_TASK_DEFINITION:$revision --region ap-south-1
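Optionally, you could make the job wait until the deployment stabilizes, so the pipeline fails if the new tasks never reach a steady state (a sketch using the AWS CLI's built-in waiter):

- aws ecs wait services-stable --cluster $CI_AWS_ECS_CLUSTER --services $CI_AWS_ECS_SERVICE --region ap-south-1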

So the overall code looks like this!

image: docker:latest

services:
  - docker:dind

before_script:
  - apk add --update --no-cache jq py-pip
  - pip install awscli
  - $(aws ecr get-login --no-include-email --region ap-south-1)
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"

stages:
  - build
  - deploy

build-job:
  stage: build
  script:
    - docker build -t $REPOSITORY_URL:$IMAGE_TAG .
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - master


deploy-job:
  stage: deploy
  script:
    - echo `aws ecs describe-task-definition --task-definition  $CI_AWS_ECS_TASK_DEFINITION --region ap-south-1` > input.json
    - echo $(cat input.json | jq '.taskDefinition.containerDefinitions[].image="'$REPOSITORY_URL':'$IMAGE_TAG'"') >  input.json
    - echo $(cat input.json | jq '.taskDefinition') > input.json
    - echo $(cat input.json | jq  'del(.taskDefinitionArn)' | jq 'del(.revision)' | jq 'del(.status)' | jq 'del(.requiresAttributes)' | jq 'del(.compatibilities)') > input.json
    - aws ecs register-task-definition --cli-input-json file://input.json --region ap-south-1 
    - revision=$(aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region ap-south-1 | egrep "revision" | tr "/" " " | awk '{print $2}' | sed 's/"$//' | cut -d "," -f 1)
    - aws ecs update-service --cluster $CI_AWS_ECS_CLUSTER --service $CI_AWS_ECS_SERVICE  --task-definition $CI_AWS_ECS_TASK_DEFINITION:$revision --region ap-south-1

This GitLab CI pipeline script runs successfully; you can test it out yourself.

I hope you enjoyed reading. Thank you! :)

Nashrah