I have a Python app consisting of image analysis models and two script files. In Main.py I have an XML-RPC server that runs forever, listening for clients:
from xmlrpc.server import SimpleXMLRPCServer

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("0.0.0.0", 8888))
    print("Listening on port 8888...")
    # result() is the image-analysis function defined elsewhere in Main.py.
    server.register_function(result, "result")
    server.serve_forever()
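For context, a client calls the registered function over XML-RPC roughly like this (a minimal sketch; the host/port and the image-path argument are assumptions, since the real client code is not shown here):

from xmlrpc.client import ServerProxy

# Hypothetical client call; the argument passed to result() is an assumption.
proxy = ServerProxy("http://localhost:8888/")
print(proxy.result("/path/to/image.jpg"))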
My Dockerfile is:
# Start with NVIDIA's CUDA and cuDNN base image.
FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04
# Argument: the username & password.
ARG username
ARG user_password
# Update the system.
RUN echo "debconf debconf/frontend select Noninteractive" | debconf-set-selections
RUN apt-get update
RUN apt-get upgrade --assume-yes
...... bla bla bla
WORKDIR /home/${username}
# Copy the current directory contents into the container at /home/${username}
ADD . /home/${username}
...... bla bla bla
# Expose the ports and start the ssh daemon as entry point.
USER root
EXPOSE 22 6006 8888
ENTRYPOINT ["/usr/sbin/sshd", "-D"]
When I add a CMD to run Main.py, the container does not work; it exits immediately. What is the best practice for running this container? I am using the Azure Data Science Virtual Machine for Linux (Ubuntu).
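For reference, the CMD I added was roughly like the following (a sketch of the attempt; the exact line may have differed):

# Roughly the line appended after the ENTRYPOINT; exact form may have differed.
CMD ["python", "Main.py"]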
I built my image with:
docker build . --tag img_processing:V1 --build-arg "username=blabla" --build-arg "user_password=blabla"
And I run my container with:
docker run -d -p 4000:8888 img_processing:V1
Currently I use docker exec -it my-app-container bash, and inside the container I manage things and run python Main.py & to run the script in the background, which I don't think is a good approach.
In particular, I have to find a way to scale up and process 3000 images at a time, so each container needs to have the same setup.
Any ideas?