Forum,
I'm experiencing a strange issue with the following CI/CD script:
image: docker:stable

variables:
  DOCKER_DRIVER: overlay2
  CONTAINER_RELEASE_IMAGE_APP: $CI_REGISTRY_IMAGE/app:latest
  CONTAINER_RELEASE_IMAGE_APP_DEV: $CI_REGISTRY_IMAGE/app_dev:latest
  # from https://storage.googleapis.com/kubernetes-release/release/stable.txt
  K8S_STABLE_VERSION_URL: https://storage.googleapis.com/kubernetes-release/release/v1.18.4/bin/linux/amd64/kubectl

.k8s:
  services:
    - name: docker:18.09.7-dind
      command: ["--mtu=1410"]
  variables:
    DOCKER_HOST: tcp://localhost:2375
  tags:
    - kube-onpremise

##################
# ---- DEV ----- #
##################

build_dev:
  stage: build
  script:
    # Login
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Try to pull the image from the container registry
    - docker pull $CONTAINER_RELEASE_IMAGE_APP_DEV || true
    # Build with --pull (pull the latest version of any base image(s) - in this case python - instead of
    # reusing whatever is already tagged locally) and tag it with $CONTAINER_RELEASE_IMAGE_APP_DEV.
    # --cache-from is the flag for the other layers (app code and dependencies); here the pulled
    # $CONTAINER_RELEASE_IMAGE_APP_DEV is used as the cache (the fully cached build step is sketched
    # right after this script).
    - docker build -f Dockerfile --pull -t $CONTAINER_RELEASE_IMAGE_APP_DEV .
    # --cache-from $CONTAINER_RELEASE_IMAGE_APP_DEV
    # Push the image back into the container registry
    - docker push $CONTAINER_RELEASE_IMAGE_APP_DEV
  only:
    changes:
      - public/**/*
      - Dockerfile
      - requirements.txt
      - .gitlab-ci.yml
    refs:
      - dev

deploy_dev:
  stage: deploy
  script:
    # Login, this time with deploy credentials
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    # - apk add --no-cache python py2-pip
    # - pip install --no-cache-dir docker-compose
    - ls -l
    # Deploy the application to Docker Swarm; --with-registry-auth means the registry
    # authentication details (CI_DEPLOY_...) are sent to the swarm agents.
    - docker stack deploy --compose-file=docker-compose.yml test_app --with-registry-auth
  only:
    - dev
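For reference, this is roughly what the build step looks like with the cache flag re-enabled (same variables as above; I've only moved the commented-out --cache-from into the build command):

  script:
    - docker pull $CONTAINER_RELEASE_IMAGE_APP_DEV || true
    - docker build -f Dockerfile --pull --cache-from $CONTAINER_RELEASE_IMAGE_APP_DEV -t $CONTAINER_RELEASE_IMAGE_APP_DEV .
    - docker push $CONTAINER_RELEASE_IMAGE_APP_DEV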
When I create a new project, I always copy/paste this script from other projects, and it always works fine. I always start with a simple pipeline (for example, the test_app that this pipeline builds/deploys is a simple Flask hello-world script). I have another project with the exact same .gitlab-ci.yml file, which builds and deploys successfully, as indicated by the following output:
Login Succeeded
$ docker stack deploy --compose-file=docker-compose.yml clever_api --with-registry-auth
Updating service clever_api_uitschieterdetectie_api (id: pdcb2cwpg2xe6wfez0vk7sgap)
Job succeeded
However, when I try to run the CI/CD script on the current project, I keep getting:
Login Succeeded
$ docker stack deploy --compose-file=docker-compose.yml test_app --with-registry-auth
open docker-compose.yml: no such file or directory
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
I have the feeling that something is going wrong with enabling docker-compose on the GitLab runner. However, I don't understand why: the docker-compose.yml and Dockerfile for the successful and failing projects are nearly identical, and I'm using the same runner for both projects. Running ls -l shows me that the docker-compose.yaml is in fact in the working directory of the Docker GitLab runner. I think the problem must be related to this question: Using docker-compose in a GitLab CI pipeline, but trying the proposed solution (commented out in my CI script above for reference) gives me another error:
$ apk add --no-cache python py2-pip
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
py2-pip (missing):
required by: world[py2-pip]
python (missing):
required by: world[python]
Cleaning up file based variables
00:02
ERROR: Job failed: exit code 2
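If I read the apk output correctly, the python / py2-pip packages from that answer are simply not available in the Alpine v3.12 repositories the image is using. A python3-based variant of those two commented-out lines is what I would try instead; something like this (untested, and I assume pip may additionally need build dependencies on Alpine):

    # python3 equivalent of the commented-out lines in deploy_dev (untested sketch)
    - apk add --no-cache python3 py3-pip
    - pip3 install --no-cache-dir docker-compose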
UPDATE 1:
I've found that the same runner is being used in the other project, and there it is able to run docker-compose, so it must be installed somehow. The base image the runners use is alpine:latest.
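To narrow this down, my next step is to add a few diagnostic commands to the deploy_dev script in both projects and compare the output (purely for debugging, nothing here changes the deploy itself):

    - cat /etc/os-release
    - which docker-compose || echo "docker-compose not on PATH"
    - docker-compose version || true
    - ls -l docker-compose.yml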