> ... a python script for ...
Just run it; don't package it into a Docker container. That's doubly true if its inputs and outputs are both local files, and it expects to do its thing and exit promptly: the filesystem isolation Docker provides works against you here.
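If it's a plain script, the natural integration is to run it directly as a subprocess. A minimal sketch, assuming the `convert.py`, `in.txt`, and `out.pkl` names that appear in the example further down (the `files/` directory is an illustrative assumption):

```python
import subprocess
import sys

# Run the converter directly on the host: no image, no volume mounts,
# and the input and output files are ordinary local paths.
result = subprocess.run(
    [sys.executable, "convert.py", "files/in.txt", "files/out.pkl"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print("conversion failed:", result.stderr, file=sys.stderr)
```

Everything stays on the shared filesystem, so there is nothing to publish into or copy back out of a container afterwards.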
Running it in a container is, of course, technically possible. Depending on how exactly the supporting program's container is set up, the "command" at the end of `docker run` will be visible to the Python script in `sys.argv`, like any other command-line arguments. You can use a `docker run -v` option to publish parts of the host's filesystem into the container. So you might be able to run something like
```sh
docker run --rm -v $PWD/files:/data \
  converter_image \
  python convert.py /data/in.txt /data/out.pkl
```
where all of the `/data` paths are in the container's private filesystem space.
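For completeness, here is what the receiving end might look like. This is a hedged sketch of a hypothetical `convert.py` (the real script's logic isn't shown here, so the body is a stand-in):

```python
import pickle
import sys

def main():
    # Whatever follows the image name in `docker run` lands here,
    # exactly as it would for an ordinary command-line invocation.
    in_path, out_path = sys.argv[1], sys.argv[2]

    with open(in_path) as f:
        data = f.read()

    with open(out_path, "wb") as f:
        pickle.dump(data, f)  # stand-in for the real conversion

if __name__ == "__main__":
    main()
```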
There are two big caveats:
First, the host paths in the `docker run -v` option are paths specifically on the physical host. If your HTTP service is also running in a container, you need to know some host-system path you can write to that's also visible in your container's filesystem.
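One workable arrangement, assuming you control how both containers are started (the `http_service_image` name and `/srv/app/files` path are illustrative), is to mount the same host directory into both:

```sh
# /srv/app/files is a directory on the physical host.
# Mount it into the long-running HTTP service...
docker run -d -v /srv/app/files:/data http_service_image

# ...and into each short-lived converter run, so both sides
# read and write the same underlying files.
docker run --rm -v /srv/app/files:/data \
  converter_image \
  python convert.py /data/in.txt /data/out.pkl
```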
Second, running any `docker` command at all effectively requires root privileges. If any of the filenames or paths involved are dynamic, shell injection attacks can compromise your system. Be very careful with how you run this from a network-accessible script.
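If you do launch `docker run` from the service, at minimum pass the command as an argument list rather than a shell string, and validate any user-supplied names before they get near Docker. A sketch of that pattern (the filename whitelist is an illustrative assumption, not a complete defense):

```python
import re
import subprocess

def run_conversion(in_name: str, out_name: str) -> None:
    # Reject anything that isn't a plain filename before it reaches Docker.
    for name in (in_name, out_name):
        if not re.fullmatch(r"[A-Za-z0-9._-]+", name):
            raise ValueError(f"unsafe filename: {name!r}")

    # An argument list (no shell=True) is never re-parsed by a shell,
    # so metacharacters in the names can't inject extra commands.
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", "/srv/app/files:/data",
            "converter_image",
            "python", "convert.py",
            f"/data/{in_name}", f"/data/{out_name}",
        ],
        check=True,
    )
```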