The project I am working on consists of multiple components, written in C#/.NET 6 and deployed as Docker containers on a Linux host. Each component has its own git repository on GitLab, and its GitLab pipeline builds the Docker image and pushes it to the GitLab container registry. For instance, one component is called "runtime", another one "services", etc.
All Docker containers are defined in a docker-compose.yml file: the suite is started with a "docker compose up" command.
I created an 'integration test' project to check the data exchange between the running containers. I have a lot of complex Linux shell scripts to prepare the mock data and so on for the tests. I have a bunch of tests written in Python running in a Python virtual environment on the Linux host, and also some tests written in C# running in a dedicated Docker container. I actually have different test scenarios or test groups: each group has its own docker-compose-integration_$group.yml file to set up e.g. the mocked services.
All of this is run with
docker compose -f docker-compose.yml -f docker-compose-integration_$group.yml up -d
In multiple services defined in the Docker Compose files, I set up Docker volumes so that my tests can check the data generated by the containers. For instance, the following is an extract of my docker-compose-integration_4.yml file for the 4th group of tests, which uses the C# tests running in a dedicated 'integration-tests-dotnet' container:
runtime:
  extends:
    file: ./docker-compose.yml
    service: runtime
  volumes:
    - ./runtime/config_4.ini:/etc/runtime/config.ini
    - ${OUTPUT:-./output}/runtime/:/runtime/output/

integration-tests-dotnet:
  volumes:
    # share config for current group, same as for runtime.
    - ./runtime/config_4.ini:/etc/runtime/config.ini
    # share test data.
    - ./testData/:/opt/testData/
    # share output folders from runtime and services.
    - ./output/runtime/:/opt/runtime/output/
    - ./output/services/:/opt/services/output/
    # share report file generated by the tests.
    - ./output/integration/:/app/output/
Everything runs nicely on a Linux machine, or under WSL2 on my Windows PC (or on a colleague's Mac).
The integration-test project has its own GitLab pipeline.
Now, we would like to be able to run the integration tests within the GitLab pipeline, i.e. run "docker compose" from a GitLab runner.
I already have a 'docker in docker' capable runner, and I added such a job to my .gitlab-ci.yml:
run-integration-tests:
  stage: integration-tests
  variables:
    DOCKER_TLS_CERTDIR: ''
    DOCKER_HOST: tcp://localhost:2375/
  services:
    - name: docker:20.10.22-dind
      command: ["--tls=false"]
  tags:
    - dind
  image: $CI_REGISTRY_IMAGE:latest
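The job's script section is not shown above; it boils down to roughly the following (the registry login line and the group number are placeholders, the compose command is the same one I use on my workstation):

  script:
    # sketch: log in so compose can pull the component images from the GitLab registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # start the suite for one test group, e.g. group 4
    - docker compose -f docker-compose.yml -f docker-compose-integration_4.yml up -d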
This job starts properly, BUT it fails at the volume sharing.
The question Docker in Docker cannot mount volume already raised this issue for volume sharing via the shared Docker socket. There, the Docker volume is mounted from the HOST (i.e. from my runner), but my host knows nothing about the data: it is only meant to be shared between the integration-test container and the other containers. As Olivier wrote in that question, for a
host: H
docker container running on H: D
docker container running in D: D2
the volume sharing I declare in docker compose is meant to be equivalent to
docker run ... -v <path-on-D>:<path-on-D2> ...
while the only thing that can actually run is something equivalent to
docker run ... -v <path-on-H>:<path-on-D2> ...
But I have no data on H to share; I just want to share data between D and D2!
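To make that concrete, here is roughly what the shared-socket situation boils down to (the paths are illustrative):

  # executed inside D (the job container), against the Docker daemon of H via the shared socket:
  docker run --rm -v "$PWD/output:/data" alpine ls /data
  # "$PWD/output" exists in D, but the bind mount is resolved by the daemon on H,
  # so /data in the new container maps to a (most likely empty) path on H, not to D's files.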
Is this "volumes are resolved on the HOST" limitation the same when using my docker-in-docker runner as with the shared socket?
If so, it seems I need to rework the infrastructure and the concept of volume sharing used here. Some suggest Docker data volume containers. Maybe I should make more use of named volumes. Maybe tmpfs volumes? I need to check the data AFTER some containers have exited, but I don't know whether a container that is stopped (in "exited" status) still has its tmpfs volume available.
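For example, the group-4 override could declare a named volume instead of the bind mounts (a sketch only; the volume name is mine). Named volumes are managed by the Docker daemon that actually runs the containers, so as far as I understand they stay readable by the still-running test container even after "runtime" has exited, until "docker compose down -v" or "docker volume rm" removes them:

  runtime:
    volumes:
      # named volume instead of ./output/runtime/ bind mount
      - runtime-output:/runtime/output/

  integration-tests-dotnet:
    volumes:
      # same named volume, mounted where the tests expect the runtime output
      - runtime-output:/opt/runtime/output/

  volumes:
    runtime-output: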
Is my analysis correct? Any other suggestions?