
How modern DevOps is done. Part 2: wiring up CI/CD!

Posted on 15.03.2024
Last updated on 04.12.2024



In the previous article I covered the process of building a Docker image and pushing it to the registry. In real life nobody does this manually; we let continuous delivery do the work for us. So let's create a pipeline or two to automate the process.

I will be using GitHub Actions, which will do just fine for the demo project. For a real-life app I would encourage taking a look at DroneCI: it can be self-hosted and integrates seamlessly with K8s, which can be a crucial factor for big companies that want to cut costs and keep their secrets in-house.

# Pipelines and branching models

Pipelines are typically triggered by specific events that GitHub emits. To understand which events to use, I need to choose a branching model first. There are two main options:

  • Git flow - older (some say ancient), less agile, but in my opinion also safer and slightly more flexible.
  • Git trunk - invented for faster development and way more popular nowadays, but it can be problematic if the production branch gets polluted with unstable changes.

I'll go with the second option. The master (main or trunk) branch will be referred to as the production branch.

So, three action pipelines must be created:

  • Lint and test the app on every push to a feature branch
  • Build a staging image every time a feature branch is merged to the production branch
  • Build a production image every time a new release is created

For now, I will only build images, not deploy them, since at this point there isn't even a cluster to deploy to. I'll improve the pipelines later down the road.

Some prefer deploying to production on merge instead, but I personally find this unsafe, since there is little opportunity to stop a mis-triggered deployment once it has started.

# Linting and testing on a push

On every push to a feature branch (the workflow below triggers on pull request events), the linter and unit tests must run to maintain technical excellence and code quality. Here is a pipeline for this:

👉 📃  .github/workflows/lint-test.yml
name: Lint and test
on:
  pull_request:
jobs:
  lint-test:
    name: Run linter, unit tests
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./devops/app
    steps:
      - uses: actions/checkout@v2.3.1
        with:
          fetch-depth: 0
      - name: Detect changes
        uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            app:
              - 'cmd/**'
              - 'go.mod'
      - uses: actions/setup-go@v4
        with:
          go-version: "1.21.x"
          cache: false
      - name: Run lint
        uses: golangci/golangci-lint-action@v3.4.0
        with:
          version: v1.55.1
          args: -v --timeout=10m0s --config ./.golangci-lint.yml
          skip-cache: true
          working-directory: ./devops/app
      - name: Run unit tests
        run: |
          go test -short -mod=mod -v -p=1 -count=1 ./...
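
By the way, the same checks can be run locally before pushing, to get feedback faster. A minimal sketch, assuming golangci-lint v1.55.x is installed on the machine:

    cd devops/app
    # run the linter with the same configuration the pipeline uses
    golangci-lint run -v --timeout=10m0s --config ./.golangci-lint.yml
    # run the unit tests exactly as the pipeline does
    go test -short -mod=mod -v -p=1 -count=1 ./...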

# Image build and push script

To avoid duplication across many pipelines, it's a good idea to write a bash script that builds and pushes images. Let's take a look at this particular one.

👉 📃  app/scripts/build-and-push.sh
#!/usr/bin/env bash

REGION=asia-east1-docker.pkg.dev
PROJECT=go-app-390716
APP=devops-app

while getopts a:t:e: flag
do
  case "${flag}" in
    a) ACTION=${OPTARG};;
    t) TAG=${OPTARG};;
    e) ENV=${OPTARG};;
    *) exit 1
  esac
done

IMAGE="${REGION}"/"${PROJECT}"/devops-"${ENV}"/"${APP}":"${TAG}"

if [ "${ACTION}" = "build" ]
then
  docker build -t "${IMAGE}" .
fi

if [ "${ACTION}" = "push" ]
then
  gcloud config set project "${PROJECT}"
  gcloud auth configure-docker "${REGION}"
  docker push "${IMAGE}"
fi

The script takes three arguments: the action ("build" or "push"), the image version (used as the tag) and the environment, and composes a fully qualified image name from the registry host, the project, the environment-specific repository (devops-stg or devops-live) and the application name.
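
For illustration, a local run from the app folder could look like this (the abc1234 tag value is just a placeholder):

    # build a staging image; the resulting name would be
    # asia-east1-docker.pkg.dev/go-app-390716/devops-stg/devops-app:abc1234
    ./scripts/build-and-push.sh -a build -t abc1234 -e stg

    # push the same image to the registry
    ./scripts/build-and-push.sh -a push -t abc1234 -e stg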

It's good practice to have separate projects for staging and live environments, both for safety reasons and for the ability to define access policies in a more granular fashion. However, for the demo project it's fine to keep images in sub-folders (separate repositories within the same project) instead.
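
For completeness, those two repositories can be created up front. A gcloud sketch, with the names derived from the devops-${ENV} convention in the script (devops-live assumes the env=live value used further below):

    gcloud artifacts repositories create devops-stg \
      --repository-format=docker --location=asia-east1 --project=go-app-390716
    gcloud artifacts repositories create devops-live \
      --repository-format=docker --location=asia-east1 --project=go-app-390716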

I also wire the script into the Makefile as targets:

👉 📃  app/Makefile
build_image:
	@./scripts/build-and-push.sh -a build -t $(tag) -e $(env)

push_image:
	@./scripts/build-and-push.sh -a push -t $(tag) -e $(env)

# Service accounts and keys

In order to be able to push images to the registry, we need authentication. In the case of GCP, one way to achieve this is to use Service Accounts. A Service Account is a non-human account meant to be used by automated processes. Let's see how one can be created.

First, we go to the "service accounts" section of IAM.

We click "Create Service Account" and fill out the form.

In practice, you may want to have a dedicated role for CI/CD-related service accounts to grant more granular access, but for the purpose of this article the Owner role will do just fine.
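
For those who prefer the terminal over the console, roughly the same can be done with gcloud. A sketch, where the github-ci account name is my own invention:

    # create the service account (the github-ci name is arbitrary)
    gcloud iam service-accounts create github-ci --project=go-app-390716

    # grant it the Owner role on the project (use a narrower role in real life)
    gcloud projects add-iam-policy-binding go-app-390716 \
      --member="serviceAccount:github-ci@go-app-390716.iam.gserviceaccount.com" \
      --role="roles/owner"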

Then we open the newly created account and add a new JSON access key; the key file gets downloaded automatically. After that we take the contents of the JSON file and base64-encode it:

base64 ~/Downloads/go-app-390716-d7f325da7e19.json > ~/sa_key.txt

Then we add a new repository secret in GitHub (Settings → Secrets and variables → Actions) named GCP_SERVICE_ACCOUNT, with the encoded file as the value.
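
If the GitHub CLI is installed and authenticated against the repository, the same secret can be added with a one-liner (a sketch):

    gh secret set GCP_SERVICE_ACCOUNT < ~/sa_key.txt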

# Test, build and deploy to staging on merge

Another pipeline is executed each time a feature branch is merged to master. At this point it's a little too late to run the linter, but it is still worth running the unit tests. At the end, a Docker image is built and pushed to the registry.

Now we need to decide what kind of image tag to use. Some engineers use the latest tag for staging deployments, but I've seen problems with this approach: with a mutable tag it's hard to tell which revision is actually running. A better way is to use the commit hash of the merge as the image tag. The full hash is too long, so we shorten it to 7 characters. This is the same short format GitHub shows, so it becomes easy to find out which image was built from which PR.
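
Locally, the same short hash can be produced like this (the output is, of course, hypothetical):

    git rev-parse --short=7 HEAD
    # 3f2c1ab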

👉 📃  .github/workflows/test-build-deploy-staging.yml
name: Test, build and deploy to Staging environment
on:
  push:
    branches: [master]
jobs:
  test-build-push:
    name: Run unit tests, build and push a new image to Staging
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./devops/app
    steps:
      - uses: actions/checkout@v2.3.1
        with:
          fetch-depth: 0
      - name: Detect changes
        uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            app:
              - 'cmd/**'
              - 'go.mod'
      - uses: actions/setup-go@v4
        with:
          go-version: "1.21.x"
          cache: false
      - name: Run tests
        run: |
          go test -short -mod=mod -v -p=1 -count=1 ./...
      - name: Get image tag
        id: commit_hash
        run: |
          SHORT_COMMIT_HASH=$(git rev-parse --short=7 "$GITHUB_SHA")
          echo "IMAGE_TAG=$SHORT_COMMIT_HASH" >> $GITHUB_ENV
          echo "Commit SHA: $GITHUB_SHA"
          echo "Short commit hash: $SHORT_COMMIT_HASH"
      - name: GCP Auth
        run: |
          echo "${{secrets.GCP_SERVICE_ACCOUNT}}" | base64 -d > ./google_sa.json
          gcloud auth activate-service-account --key-file=./google_sa.json
      - name: Build new image
        run: |
          make build_image tag=$IMAGE_TAG env=stg
      - name: Push the image
        run: |
          make push_image tag=$IMAGE_TAG env=stg
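
Once the workflow has finished, it doesn't hurt to double-check that the image has actually landed in the registry. A sketch, using the image path the script composes:

    gcloud artifacts docker images list \
      asia-east1-docker.pkg.dev/go-app-390716/devops-stg/devops-app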

# Build for the live environment on release

This pipeline only builds the image; the deployment will happen later, using Spinnaker.

If you happen to have a canary environment, you can wire up automatic deployments to canary each time there is a merge to master. Personally, I prefer keeping this part of the process manual: once automated, it poses a risk of accidental deployments that, once triggered, are difficult to abort in good time.

I strongly believe that live deployments should be as conscious and deliberate as possible.

So, here is the pipeline:

👉 📃  .github/workflows/deploy-live.yml
name: Build for Live environment
on:
  release:
    types: [published]
jobs:
  validate_tags:
    runs-on: ubuntu-latest
    env:
      GITHUB_RELEASE_TAG: ${{ github.ref }}
    outputs:
      hasValidTag: ${{ steps.check-provided-tag.outputs.isValid }}
    steps:
      - name: Check Provided Tag
        id: check-provided-tag
        run: |
          if [[ ${{ github.ref }} =~ refs\/tags\/v[0-9]+\.[0-9]+\.[0-9]+ ]]; then
            echo "::set-output name=isValid::true"
          else
            echo "::set-output name=isValid::false"
          fi
  build-push:
    name: Build and push the image to Live
    needs: [validate_tags]
    if: needs.validate_tags.outputs.hasValidTag == 'true'
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./devops/app
    env:
      GITHUB_RELEASE_TAG: ${{ github.ref }}
    steps:
      - uses: actions/checkout@v2
      - name: Configure gcloud as docker auth helper
        run: |
          gcloud auth configure-docker
      - name: GCP Auth
        run: |
          echo "${{secrets.GCP_SERVICE_ACCOUNT}}" | base64 -d > ./google_sa.json
          gcloud auth activate-service-account --key-file=./google_sa.json
      - name: Extract version
        uses: mad9000/actions-find-and-replace-string@3
        id: extract_version
        with:
          source: ${{ github.ref }}
          find: "refs/tags/"
          replace: ""
      - name: Build new image
        run: |
          make build_image tag=${{ steps.extract_version.outputs.value }} env=live
      - name: Push the image
        run: |
          make push_image tag=${{ steps.extract_version.outputs.value }} env=live

# Making a new release

There is nothing simpler than this. Just go to the "Releases" section of your repository and draft a new release. There is a regex in the pipeline above that only lets tag names such as vXX.YY.ZZ through. The value of the tag (with the refs/tags/ prefix stripped) also becomes the image tag.
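
The same can be done from the terminal with git and the GitHub CLI; the version number below is only an example:

    # create and push a tag matching the vX.Y.Z pattern, then publish a release for it
    git tag v1.0.0
    git push origin v1.0.0
    gh release create v1.0.0 --title "v1.0.0" --notes "First production release"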

And that's all for today.

In the next article we will learn how to stop creating resources manually by putting Terraform to use!

The code for this part is here. Enjoy!



Sergei Gannochenko

Business-oriented fullstack engineer, in ❤️ with Tech.
Golang, React, TypeScript, Docker, AWS, Jamstack.
19+ years in dev.