
We are performing integration testing on a few interconnected, containerized apps within a GitLab CI/CD pipeline.

Here is the integration stage in our .gitlab-ci.yml:

run-integration:
  image:
    name: cypress/included:8.3.1
    entrypoint: [""]
  stage: test-integration
  before_script:
    - apt-get -y install make docker.io
    - service docker start
  script:
    - make run-integration

The containers have already been generated, so you may be wondering "why add docker.io?" The answer is that we want our integration-test report to include the git commit SHAs of the application versions being tested. Each app runs in a Docker container, and each container has the app's commit SHA saved as an environment variable.

So the plan is that, within the make run-integration recipe, we'll use a docker exec command to run printenv within the container and get our SHA. Here is what that looks like in our Makefile:

run-integration:
    echo "Starting run-integration!"
    npm install
    # create-integration-test-report.js turns the Cypress results into an HTML
    # report and prints GIT_SHA onto the report.
    npx cypress run; \
    exit_status=$$?; \
    export GIT_SHA=$$(docker exec <CONTAINER-NAME> printenv GIT_COMMIT_SHA); \
    node create-integration-test-report.js; \
    exit $$exit_status

This works like a dream when I spin up the containers and run the make run-integration recipe locally, but in the pipeline it fails. At the moment, this is my console error:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

It's not so much that this one error is inscrutable (it's been addressed satisfactorily elsewhere, even if those solutions aren't working for me in the pipeline); it's that the amount of hassle required just to run one docker exec command is making me question the whole approach. If possible, I'd rather not go as far down the rabbit hole as sudo or systemctl just to check a value to print onto a report. I'm also not crazy about essentially installing Docker onto the Cypress image just to get this one value. I already have a few GitLab stages where I'm using apk or apt-get to make commands available (e.g. apk add make in multiple stages), and I feel a little weird about it.
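For reference, the fix that's usually suggested for this error is to point the job at a docker:dind service rather than trying to start a daemon inside the Cypress container. Here is a sketch of what that would look like, assuming the runner is configured to allow the dind service (and this is exactly the kind of extra machinery I'd rather avoid):

run-integration:
  image:
    name: cypress/included:8.3.1
    entrypoint: [""]
  stage: test-integration
  services:
    - docker:dind                     # provides the daemon the error is complaining about
  variables:
    DOCKER_HOST: tcp://docker:2375    # point the docker CLI at the dind service
    DOCKER_TLS_CERTDIR: ""            # TLS disabled to keep the sketch simple
  before_script:
    - apt-get update && apt-get -y install make docker.io   # still need the CLI, but no "service docker start"
  script:
    - make run-integration

And even then, a fresh dind daemon only sees containers started against it inside the job, so unless the app containers are (re)started there, the docker exec call still has nothing to talk to.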

The ultimate goal is to get the containerized app's git SHA onto the report. Is there a more direct way I'm missing?

  • If the image has not been built before in the same pipeline, I'd run the command `docker inspect --format='{{index .RepoDigests 0}}' $IMAGE` in a gitlab-runner of type shell (where docker is available) and save the value to a GitLab CI artifact to be used later in the pipeline. – Davide Madrisan Dec 28 '21 at 20:17
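A rough sketch of that suggestion, using the printenv call from the Makefile above in place of docker inspect, and passing the value forward with a dotenv artifact (the job name, the prepare stage, the runner tag, and the APP_GIT_SHA variable are all placeholders):

capture-app-sha:
  stage: prepare                     # placeholder stage that runs before test-integration
  tags:
    - shell                          # placeholder tag for a shell-executor runner where docker is available
  script:
    - echo "APP_GIT_SHA=$(docker exec <CONTAINER-NAME> printenv GIT_COMMIT_SHA)" > sha.env
  artifacts:
    reports:
      dotenv: sha.env                # APP_GIT_SHA becomes an environment variable in later jobs

run-integration:
  stage: test-integration
  needs: ["capture-app-sha"]         # APP_GIT_SHA is available here without installing docker
  # ...rest of the existing job unchanged; create-integration-test-report.js would read APP_GIT_SHA instead of GIT_SHA

This keeps the docker dependency on a runner that already has it, and the Cypress job only consumes a plain variable.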
