
I need a suggestion on this problem. I am rolling out a k8s Job that uses a Docker image to do some computation; later on, it needs a folder that is present in a different Docker image.

I want to understand how to tackle this scenario, given that I have to loop over and copy content from almost 30 Docker images.

My thoughts so far:

  1. Install Docker inside the image the k8s Job uses, run a container from each source image, copy the content out, and remove the container afterwards.
  2. Roll out a separate job that copies the content to a mounted location, which the main job can then use.

I am afraid that, since I have limited access to the host the k8s Job runs on, I may not be able to run native docker commands.

I am just thinking out loud. Appreciate the suggestions.

Rajiv Rai
  • It's likely possible to download the docker image you need without using `docker` cli, and probably produce a tar archive that you may be able to sort through in your k8s job. Does this answer help? https://stackoverflow.com/a/47624649/3543371 – jbielick Jul 08 '20 at 15:24
  • It'd be better to `COPY --from=other/image` in your job image's Dockerfile than to try to copy this in at runtime. – David Maze Jul 08 '20 at 16:55
  • Use an [`initContainer`](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) in which you copy the contents out of the given docker image into a k8s volume, and then mount that volume wherever you need access to the data. – larsks Jan 03 '21 at 19:28
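The `initContainer` suggestion above could look roughly like the manifest below — a minimal sketch, assuming the folder lives at `/opt/payload` in each source image; all image names, paths, and the Job name are placeholders, and one `initContainers` entry would be repeated (or generated) per source image:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: compute-with-copied-data
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
        - name: shared
          emptyDir: {}        # scratch volume shared between containers in the pod
      initContainers:
        # Repeat (or template) one entry like this for each of the ~30 images.
        - name: copy-app-a
          image: registry.example.com/app-a:latest
          command: ["sh", "-c", "cp -r /opt/payload /shared/app-a"]
          volumeMounts:
            - name: shared
              mountPath: /shared
      containers:
        - name: main
          image: registry.example.com/job-image:latest
          volumeMounts:
            - name: shared
              mountPath: /data   # copied folders appear here
```

Init containers run to completion, in order, before the main container starts, and nothing here needs docker access on the host — each source image only needs a shell and `cp` available.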

0 Answers