Today, I will document how to set up an advanced continuous integration (CI) pipeline for containers. Although I will leverage GitHub Actions in this article, all the concepts and tools mentioned here can easily be used with any other CI tool like Jenkins, Azure DevOps, Google Cloud Build, etc.

First, let’s write a simple GitHub Actions definition to build and push a container image to the GitHub container registry:

name: simple-ci
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build container
        run: docker build --tag docker.pkg.github.com/${{ github.repository }}/container:${{ github.sha }} .
      - name: Log into container registry
        if: ${{ github.event_name == 'push' }}
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin
      - name: Push image to container registry
        if: ${{ github.event_name == 'push' }}
        run: docker push docker.pkg.github.com/${{ github.repository }}/container:${{ github.sha }}

That’s how simple it is: we push the container image to the container registry only when the trigger is a commit on the main branch; on pull requests, we only build the container.
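
Note: because the push trigger is already restricted to the main branch, checking the event name is enough here. If you ever add more branches to the push trigger, a stricter guard could look like this:

      - name: Push image to container registry
        if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/main' }}
        run: docker push docker.pkg.github.com/${{ github.repository }}/container:${{ github.sha }}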

Now let’s build a more complex and complete continuous integration pipeline with different checks and tests. Here are the tools I will use below:

- gcloud and gh, the Google Cloud and GitHub CLIs, to set up the service account and the repository secrets
- Dockle, a container image linter enforcing best practices and CIS benchmark checks
- Trivy, a vulnerability scanner for container images, combined with GitHub’s code scanning (SARIF) upload
- KinD (Kubernetes in Docker), to deploy and smoke test the container in a throwaway Kubernetes cluster
- Google Artifact Registry, to store the final container image
- Dependabot, to keep dependencies and container base images up-to-date

First, we need to set up the GCP service account that will be used to push the container image to a specific Google Artifact Registry repository from GitHub Actions:

projectId=FIXME
artifactRegistryName=FIXME
artifactRegistryLocation=FIXME

gcloud config set project $projectId

saName=gha-$projectId-registry-push-sa
saId=$saName@$projectId.iam.gserviceaccount.com
gcloud iam service-accounts create $saName \
    --display-name=$saName
gcloud artifacts repositories add-iam-policy-binding $artifactRegistryName \
    --location $artifactRegistryLocation \
    --member "serviceAccount:$saId" \
    --role roles/artifactregistry.writer
mkdir -p ~/tmp
gcloud iam service-accounts keys create ~/tmp/$saName.json \
    --iam-account $saId

gh auth login --web
gh secret set CONTAINER_REGISTRY_PUSH_PRIVATE_KEY < ~/tmp/$saName.json
rm ~/tmp/$saName.json
gh secret set CONTAINER_REGISTRY_PROJECT_ID -b"${projectId}"
gh secret set CONTAINER_REGISTRY_NAME -b"${artifactRegistryName}"
gh secret set CONTAINER_REGISTRY_HOST_NAME -b"${artifactRegistryLocation}-docker.pkg.dev"
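
If you want to double-check this setup, you can verify the IAM policy binding on the repository and list the created secrets (purely a sanity check):

gcloud artifacts repositories get-iam-policy $artifactRegistryName \
    --location $artifactRegistryLocation
gh secret list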

Now, here is the advanced GitHub Actions definition:

name: advanced-ci
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Prepare environment variables
        run: |
          shortSha=$(echo "${GITHUB_SHA}" | cut -c1-7)
          echo "IMAGE_NAME=${{ secrets.CONTAINER_REGISTRY_HOST_NAME }}/${{ secrets.CONTAINER_REGISTRY_PROJECT_ID }}/${{ secrets.CONTAINER_REGISTRY_NAME }}/api:$shortSha" >> $GITHUB_ENV
      - name: Build container
        run: |
          docker build --tag ${IMAGE_NAME} .
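      # Dockle lints the image against container best practices and CIS benchmark checks; FATAL findings make the step fail.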
      - name: Dockle
        run: |
          docker run -v /var/run/docker.sock:/var/run/docker.sock --rm goodwithtech/dockle:latest --exit-code 1 --exit-level fatal ${IMAGE_NAME}
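      # Trivy scans the image for HIGH and CRITICAL vulnerabilities and generates a SARIF report for GitHub code scanning.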
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.IMAGE_NAME }}
          format: 'template'
          template: '@/contrib/sarif.tpl'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
      - name: Upload Trivy scan results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v1
        with:
          sarif_file: 'trivy-results.sarif'
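      # Smoke test: the container must start with hardened settings (read-only filesystem, all capabilities dropped, non-root user).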
      - name: Run container locally as a test
        run: |
          docker run -d -p 8080:8080 --read-only --cap-drop=ALL --user=1000 ${IMAGE_NAME}
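      # KinD (Kubernetes in Docker) spins up a throwaway Kubernetes cluster inside the runner to test the deployment.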
      - name: Installing KinD cluster
        uses: engineerd/setup-kind@v0.5.0
      - name: Configuring the KinD installation
        run: |
          kubectl cluster-info --context kind-kind
          kind get kubeconfig --internal >$HOME/.kube/config
          kubectl get nodes
      - name: Load image on the nodes of the KinD cluster
        run: |
          kind load docker-image ${IMAGE_NAME} --name=kind
      - name: Deploy and test Kubernetes manifests in KinD cluster
        run: |
          kubectl create deployment test --image=${IMAGE_NAME}
          kubectl wait --for=condition=available --timeout=120s deployment/test
          kubectl get all
          status=$(kubectl get pods -l app=test -o 'jsonpath={.items[0].status.phase}')
          if [ "$status" != "Running" ]; then echo "Pod not running!" 1>&2; exit 1; fi
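      # Authenticate with the GCP service account key and configure Docker for Google Artifact Registry; runs only on push to main.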
      - name: Log into container registry
        if: ${{ github.event_name == 'push' }}
        env:
          CONTAINER_REGISTRY_PUSH_PRIVATE_KEY: ${{ secrets.CONTAINER_REGISTRY_PUSH_PRIVATE_KEY }}
        run: |
          echo "$CONTAINER_REGISTRY_PUSH_PRIVATE_KEY" > ${HOME}/gcloud.json
          gcloud auth activate-service-account --key-file=${HOME}/gcloud.json
          gcloud auth configure-docker ${{ secrets.CONTAINER_REGISTRY_HOST_NAME }} --quiet
      - name: Push image to container registry
        if: ${{ github.event_name == 'push' }}
        run: |
          docker push ${IMAGE_NAME}

Complementary to this, I also enabled GitHub’s Dependabot on my GitHub repository to frequently check whether my NuGet packages or my container base images are up-to-date (really important from a security standpoint).
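
A minimal .github/dependabot.yml covering both ecosystems could look like the sketch below (the daily schedule is just an assumption, adjust it to your needs):

version: 2
updates:
  # Keep the NuGet packages up-to-date.
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "daily"
  # Keep the container base images referenced in the Dockerfile up-to-date.
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "daily"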

Here we are, I hope you enjoyed this one and that you learned different tips for your own CI pipelines. As you could see, compliance and security checks are shifted left; in other words, they are taken into account early in the development process thanks to this CI definition. From there, we have a container ready to be deployed in Kubernetes.

Cheers!