
It seems that right now there is no solution for this use case. I have found other questions related to this issue, here and here.

I am looking for solutions that will let me run docker-compose. I have found solutions for running docker, but not for running docker-compose.

I am hoping someone else has had this use-case and found a solution.


Edit: Let me clarify my use-case:

  1. When I detect a valid trigger (i.e. a push to the repo) I need to start a new job.
  2. I need to set up an environment with multiple Docker containers/instances (docker-compose).
  3. The instances in this environment need access to code from Git (mount volumes / create new images with the data).
  4. I need to run tests in this environment.
  5. I then need to retrieve results from these instances (JUnit test results for Jenkins to parse).

The problems I am having are with 2 and 3.

For 2, there is a problem running this in parallel (more than one job), since the Docker context is shared (docker-in-docker issues). If this runs on more than one node I get clashes because of shared resources (ports, for example). My workaround is to limit it to one running instance and queue the rest (not ideal for CI).

For 3, there is a problem mounting volumes, since the Docker context is shared (docker-in-docker issues). I cannot mount the code that I check out in the job because it is not present on the host that is responsible for running the Docker instances I trigger. My workaround is to build a new image from my template and just copy the code into the new image, then use that for the test (this works, but means I need `docker cp` tricks to get data back out, which is also not ideal).
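A minimal sketch of that workaround, assuming a template Dockerfile that `ADD`s the checkout and a `run-tests.sh` entrypoint (all image names and paths here are placeholders):

```sh
# Bake the Jenkins checkout into a throwaway image, run the tests in it,
# then pull the JUnit results back out with docker cp.
docker build -t test-image-$BUILD_NUMBER .
docker run --name test-run-$BUILD_NUMBER test-image-$BUILD_NUMBER ./run-tests.sh
docker cp test-run-$BUILD_NUMBER:/workspace/results ./results
docker rm test-run-$BUILD_NUMBER
```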

Inbar Rose

1 Answer


I think the better way is to use pure Kubernetes resources and run the tests directly with Kubernetes, not with docker-compose.

You can convert your docker-compose files into Kubernetes resources using the kompose utility.
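For example (assuming kompose is installed and the compose file sits in the working directory):

```sh
# Generate Kubernetes manifests from an existing docker-compose file
kompose convert -f docker-compose.yml -o k8s/
```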

You will probably need to adapt the conversion result, or you may prefer to convert your docker-compose objects into Kubernetes objects manually. Possibly, you can just use Jobs with multiple containers instead of a combination of Deployments and Services.
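A rough sketch of such a Job, with hypothetical image names and entrypoint (note that a Job only completes when all of its containers exit, so a long-running service container must be stopped by the tests or run elsewhere):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-tests                  # placeholder name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: app                    # service under test; must exit for the Job to complete
        image: my-app:latest         # placeholder image
      - name: tests                  # test runner; reaches "app" on localhost (shared Pod network)
        image: my-tests:latest       # placeholder image
        command: ["./run-tests.sh"]  # placeholder entrypoint
```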

Anyway, I definitely recommend using Kubernetes abstractions instead of running tools like docker-compose inside Kubernetes.

Moreover, you will still be able to run the tests locally, using Minikube to spawn a small all-in-one cluster right on your PC.
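For instance, reusing the hypothetical manifests and Job name from the sketches above:

```sh
# Start a local single-node cluster and run the same Job in it
minikube start
kubectl apply -f k8s/
kubectl logs job/e2e-tests --follow
```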

Anton Kostenko
  • Thank you Anton, I have found a successful workaround to use docker-compose as I wanted; however, it does not scale. (I eliminated my need to mount volumes by creating new Docker images as part of the CI that already include the data I wanted to mount, using ADD.) But now the problem is that they all run in the same context (docker-in-docker issues again), so I have been looking at a Kubernetes-only solution, but it seems that is also not fully established. Do you have some links that show roughly how to do what I am attempting? – Inbar Rose May 07 '18 at 14:23
  • You can check [that](https://medium.com/@TheJBStart/running-your-e2e-tests-in-kubernetes-engine-d10cc03a7d7e) article, for example. That guy uses Jobs to run e2e tests in Kubernetes. – Anton Kostenko May 07 '18 at 14:31
  • Thank you for that link, it might prove useful, but it's not exactly what I am looking for; it is not using Kubernetes on Jenkins, which is what I need. – Inbar Rose May 07 '18 at 15:16
  • 1
    You can call Kubernetes from Jenkins without the plugin, just directly by `kubectl` or using Kubernetes API. So, those are 2 different things - spawning workers and spawning jobs right inside the cluster. Moreover, you can spawn Jobs even your Jenkins are not connected to the cluster. – Anton Kostenko May 07 '18 at 15:38
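A minimal sketch of that last suggestion, assuming `kubectl` on the Jenkins agent is already configured against the cluster (manifest path and Job name are hypothetical):

```sh
# From a Jenkins shell step: submit the test Job, wait for it, collect logs
kubectl apply -f k8s/test-job.yaml
kubectl wait --for=condition=complete --timeout=600s job/e2e-tests
kubectl logs job/e2e-tests > test-output.log
```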