How to have multiple docker containers in a single GitLab CI job?

My project has multiple components that need to run in separate containers and connect with one another. I am trying to run a test of the whole project within a single "test" stage job in GitLab CI. To do this, I need to start multiple docker containers and set up each of the components manually. Is there any way to even do this in GitLab CI?

Any advice would be appreciated. Thanks!

Answer:

There are two primary ways to run multiple docker containers in a GitLab CI job when using shared runners on gitlab.com:

Services

You can use services to launch additional containers alongside your job. If, for example, your job relies on a database, you can specify it under the services: key in your .gitlab-ci.yml. You can also optionally specify an alias: to set the hostname by which the service can be reached from your job:

my_job:
  # the postgres image is used here so the psql client is available in the job
  image: postgres:latest
  variables:
    POSTGRES_PASSWORD: password
    POSTGRES_USER: postgres
    POSTGRES_DB: dbname
  services:
    - name: postgres:latest
      alias: mydatabase.local

  script:
    # job variables are also passed to the service container, so they configure
    # the postgres service as well as the connection below
    - PGPASSWORD=$POSTGRES_PASSWORD psql -h mydatabase.local -U $POSTGRES_USER -d $POSTGRES_DB -c 'SELECT 1;'
    - ...


There are some limitations of this approach, including:

  • Services cannot access your repository files
  • If you need to build the service containers yourself, you must build them and push the images to a registry in a previous build stage (see the sketch after this list)
  • Not all docker options are available to you (for example, volume mounts or per-service environment variables)
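
For the second point, a minimal sketch of the build-and-push workflow might look like the following (it uses docker-in-docker, described below). It assumes the project's container registry is enabled, that the repository root contains a Dockerfile for the service, and that the my-service image name, my-service.local alias, and /health endpoint are placeholders for your own:

build_my_service:
  stage: build
  image: docker:19.03.12
  services:
    - docker:19.03.12-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE/my-service:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE/my-service:$CI_COMMIT_SHA

test_with_service:
  stage: test
  services:
    # image/service names support CI variable expansion
    - name: $CI_REGISTRY_IMAGE/my-service:$CI_COMMIT_SHA
      alias: my-service.local
  script:
    - curl http://my-service.local/health   # assumes curl is available in the job image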

If these limitations affect you, use the following approach instead:

Docker-in-docker (docker:dind)

You can also use docker within your jobs to set up multiple containers. This is accomplished using the docker:dind service. You can then use docker run or docker-compose to set up any additional containers your job needs.

This is particularly useful if the limitations above affect you, for example, if you need to build container images from your repository files as part of the job:

my_job:
  image: docker:19.03.12
  variables:
    # point the docker CLI at the dind service
    DOCKER_HOST: tcp://docker:2375
    # disable TLS so the daemon listens on plain port 2375
    DOCKER_TLS_CERTDIR: ""
    DOCKER_DRIVER: overlay2

  services:
    - docker:19.03.12-dind

  script:
    - docker run --rm -d -p 80:80 strm/helloworld-http
    # the docker image is Alpine-based and may not ship curl, so install it first
    - apk add --no-cache curl
    - curl http://docker

You can also use docker build, docker-compose, or any of the other docker interfaces you would normally use to set up containers.
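
As a rough sketch of the docker-compose route: the job below assumes your repository contains a docker-compose.yml describing your components, and that the app service name and run-integration-tests.sh script are placeholders for your own. docker-compose is not bundled with the docker:19.03.12 image, so one option (used here) is to install it from the Alpine package repository.

test_all_components:
  image: docker:19.03.12
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  services:
    - docker:19.03.12-dind
  script:
    # install docker-compose (or bake it into a custom job image)
    - apk add --no-cache docker-compose
    # build and start every component defined in docker-compose.yml;
    # unlike services:, this can build directly from the repository files
    - docker-compose up -d --build
    # "app" and the test script are hypothetical; run your own checks here
    - docker-compose exec -T app ./run-integration-tests.sh
    - docker-compose down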

One important thing to note: because your docker containers run inside the docker:dind service, ports exposed by those containers are published on that service, not on the job container itself. So, unlike what you may be used to in local development, you can't use curl http://localhost:<port> to reach port-mapped containers; use the hostname of the dind service (docker, as in the example above) instead.
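
For example, with the docker run command from the job above, which publishes port 80 through the dind service:

    - curl http://localhost:80   # fails: nothing is listening inside the job container
    - curl http://docker:80      # works: the port is published on the docker:dind service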
