GitLab CI Example Walkthrough

Introduction

In the previous posts, we’ve covered how to set up trusted SSL for internal services, then how to enable HTTPS and Container Repository features on a GitLab server, and finally how to add Runners to your server. In this case, we’ll focus on a Docker runner.

Now it’s time to play! Or…use these important resources for work purposes.

Vernacular

This post is going to make a few assumptions about your knowledge of Docker. First, you should know the difference between an Image and a Container. Second, you should know that it’s possible to link Containers together in various ways, including shared network spaces.
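
If a refresher helps, here is that distinction in plain docker commands (the image and container names below are hypothetical):

docker build -t my_image .                   # builds an Image from a Dockerfile
docker run -d --name my_container my_image   # starts a Container from that Image
# One way of "linking": a second container sharing the first one's network space
docker run --rm --network container:my_container my_image ping -c 1 localhost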

Other than that, the GitLab-specific vocabulary begins with Pipeline and Job. Broadly, there are two kinds of pipelines: Integration and Deployment. In general, though, you describe a Pipeline with a .gitlab-ci.yml file at the top of a repository. You can change this location in your repository settings, but that is the default. In that file, you describe various Jobs which belong to Stages of the Pipeline. These stages can be, for example, build, test, and deploy; stages run in sequence, while Jobs within the same stage can execute in parallel.
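
As a quick sketch (the job names and scripts here are hypothetical), two Jobs that share a stage can run side by side:

stages:
  - test

unit_tests:        # both of these Jobs belong to the test stage...
  stage: test
  script:
    - ./run_unit_tests.sh

lint:              # ...so runners can execute them in parallel
  stage: test
  script:
    - ./run_lint.sh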

Jobs support a great number of features: when to run, what services to bring online (if supported by your runner), and so on. We will not cover all of those features here, but rather point you to the documentation.

Getting Started

This example covers the Docker-in-Docker type of configuration. What this means is that your pipeline can specify, either globally or locally (to a Job), a specific Docker Image in which your script will execute. It also means that the container your script runs within is destroyed when the stage exits. Therefore, you need to use cache and artifacts to carry files between stages, or, if you’re building a Docker image, stash it in the Container Registry we just set up, which is what we’ll show here.
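
For plain files, a hypothetical artifacts block is enough to hand build output to later stages (the job name and paths here are placeholders):

package_job:
  stage: build
  script:
    - make dist            # produce some build output
  artifacts:
    paths:
      - dist/              # files to carry into later stages
    expire_in: 1 week      # don't keep them around forever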

1. Basic Configuration

Let’s say your repository has at its root:

my_repository
  -> Dockerfile
  -> .gitlab-ci.yml

The Dockerfile describes an image we are building and testing. For the .gitlab-ci.yml, we’ll begin with this.

image: docker:latest

stages:
  - build

variables:
  DOCKER_DRIVER:        overlay
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
  # This becomes:
  #   curiosity.office.geontech.com:4567/your_group/and_repo:branch_name
  #   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...........
  # ^ is the registry image
  # . is the ref name

a_build_job:
  stage: build
  services:
    - docker:dind
  script:
    # Build the image
    - docker build --pull --rm -t $CONTAINER_TEST_IMAGE .

Let’s unpack this example so far.

The first segment states that, unless otherwise specified in a Job, the docker:latest image is used for the Jobs in this Pipeline.

The second states that we have one stage: build (if we don’t specify stages, we get build, test, and deploy by default).

In the third, we set the DOCKER_DRIVER to overlay (for performance) and create the CONTAINER_TEST_IMAGE variable (see the comment for how its two pieces expand).

Fourth, a_build_job states that its stage is build, that we’ll be using Docker-in-Docker, and that the script is trivial: build the Dockerfile from the current working directory, tagged with our CONTAINER_TEST_IMAGE.

If we were to run this right now, all we would get is a pass/fail on whether the build succeeded. Plus, the image we built would be destroyed when the container the Job ran in gets removed. If that’s all we cared to know, then job done. But let’s test the image, and to do that, we need to stash it in the registry.

2. Container Registry

As mentioned before, our current solution isn’t terribly interesting from an integration-and-test perspective: the image definitely didn’t get tested, and it disintegrated when the build container exited. All we really verified is that it builds. We need to push that image to the registry so a test stage can pull it.

First, update stages to include a test stage:

stages:
  - build
  - test

Second, both a_build_job and our yet-to-be-named test Job will use Docker-in-Docker to push and pull images between stages. This involves logging in to and out of our Container Registry, so let’s template all of it for Jobs that require Docker-in-Docker (DIND) services:

.dind_jobs: &dind_jobs
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  after_script:
    - docker logout $CI_REGISTRY

The above template specifies a services block for the Docker-in-Docker service as well as two script blocks, before_script and after_script, which execute before and after the script block wherever this template is used. They log in to and out of the registry using the GitLab-provided CI_... variables.

Note: The CI_REGISTRY_PASSWORD is dynamically assigned for the given Job and expires in a matter of minutes (configurable in your server).
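
Depending on your GitLab version, you may also see the short-lived per-job token used for this login in GitLab’s registry documentation; it is an equivalent form:

before_script:
  # Equivalent login using the per-job token instead of the registry password
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY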

3. Updating the Build

Because the template provides the services block and the login/logout scripts, our a_build_job now looks like:

a_build_job:
  stage: build
  <<: *dind_jobs
  script:
    # Build the image
    - docker build --pull --rm -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

Excellent: the build job will stash the built image in the Container Registry.

Now we need to fill out a test stage that pulls the image and runs a test.

4. Test Job

Let’s assume our Docker image ships with an executable test.sh script, located in its WORKDIR, that runs our tests. The new test Job needs to pull the image and run that script.

a_test_job:
  stage: test
  <<: *dind_jobs
  script:
    - docker run --rm $CONTAINER_TEST_IMAGE ./test.sh

We’ve specified the stage as test, reused our dind_jobs template for the wrapper scripts, and then supplied our own script: a docker run that executes test.sh inside the freshly pulled image.

Note: We’re leveraging the fact that a docker run will automatically docker pull ... if the image is not available locally. This is especially helpful if two jobs run on different systems.
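
For reference, a hypothetical Dockerfile satisfying the test.sh assumption might look like this (the base image and paths are placeholders):

FROM alpine:latest
WORKDIR /app
COPY . /app
# Make sure the test runner is executable inside the image
RUN chmod +x /app/test.sh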

5. Clean Up

What if we wanted a stage that cleans up locally built images when tests fail? We can add an additional stage after test and create a Job that runs only on failure. Since this will also require Docker-in-Docker, we’ll again reuse that template:

stages:
  - build
  - test
  - clean_fail

do_clean_fail:
  stage: clean_fail
  <<: *dind_jobs
  when: on_failure
  script:
    - docker rmi $(docker images -q $CONTAINER_TEST_IMAGE)

Now if a test stage fails, this job will run and try to remove that image.
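
If you would rather clean up regardless of outcome, GitLab also supports when: always. A hypothetical variant of the same Job:

do_clean_always:
  stage: clean_fail
  <<: *dind_jobs
  when: always
  script:
    - docker rmi $(docker images -q $CONTAINER_TEST_IMAGE)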

Build Your Own Service

What if the thing we’re building can operate like a Service, for example OmniORB in Docker-REDHAWK? The simple solution is to modify your Job (templates, etc.) so that the services listing includes your newly built Service. It will then be linked to your Job’s container so that you can access it by name.

At this point you’re probably thinking back to the idea of linking containers, and how you can use a linked container’s host name in lieu of its IP address. And then you’re wondering: how do I do this when the name of the image is that long, DNS-incompatible registry path? The answer is alias:

test_my_service:
  stage: test
  image: docker:latest
  services:
    - name: docker:dind
    - name: $CONTAINER_TEST_IMAGE
      alias: myservice
  script:
    - nslookup myservice

The name you use for alias becomes that service’s hostname, which is accessible from within your Job’s container.
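
From there, the script can exercise the service by hostname. For example, if the service were OmniORB listening on its default port, a reachability check might look like this (the port, 2809, is an assumption about your image):

  script:
    - nslookup myservice
    # Hypothetical: verify the service port is reachable via its alias
    - nc -z myservice 2809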

Conclusion

We did not discuss Deployment; however, GitLab does support having one pipeline trigger the next, and so on. That’s a powerful feature we may cover in the future if there is interest. For now, you should have a basic understanding of how to run through a CI pipeline.

