
I'm having some difficulty with a Docker container that I spun up. I adapted some code that imports metrics from an EMC Isilon into an InfluxDB database for use in Grafana. I managed to get the code to run in the container, but immediately after the initial execution, the container exits with code 0. I'm learning Docker on the fly, so it's a very real possibility that I'm missing something obvious (please be gentle; I'm absolutely taking advice, but don't tear me apart if something is terribly obvious). I know links are taboo, but I'm going to link to the original article and the git repo used (let me know if there is a better way to handle that).

Article: https://community.emc.com/blogs/keith/2017/01/26/isilon-data-insights-connector--do-it-yourself-isilon-monitoring

Git Repo: https://github.com/Isilon/isilon_data_insights_connector

I've tried setting stdin_open and tty on the docker-compose service I have configured. Unfortunately, that's the only thing I found online that might have kept the container running after execution.

[docker-compose]

  isilonscan:
    stdin_open: true
    tty: true
    build:
      args:
        - http_proxy=http://*****:3128
      context: ./Isilonscan/isilonscan-context
      dockerfile: Dockerfile
    volumes:
      - ./Isilonscan/isilonscan-data:/opt/isilon_data_insights_connector
      - ./Isilonscan/isi_data_insights_d.cfg:/opt/isilon_data_insights_connector/isi_data_insights_d.cfg
    depends_on:
      - influxdb
    command: ["python", "/opt/isilon_data_insights_connector/isi_data_insights_d.py", "start", "--config=/opt/isilon_data_insights_connector/isi_data_insights_d.cfg"]

[Dockerfile]

FROM python:2
WORKDIR /usr/src/app
COPY isilon-exporter /opt/isilon_data_insights_connector
RUN apt-get update && \
    apt-get install -y git && \
    cd /opt && \
    git clone https://github.com/Isilon/isilon_data_insights_connector.git && \
    cd ./isilon_data_insights_connector && \
    # pip install --upgrade pip && \
    # pip install --upgrade setuptools && \
    pip install -r requirements.txt && \
    apt-get remove -y git && \
    apt-get clean
ENTRYPOINT ["python", "/opt/isilon_data_insights_connector/isi_data_insights_d.py", "start", "--config=/opt/isilon_data_insights_connector/isi_data_insights_d.cfg"] 

Expected: The code should run and end on a new line. Every 30 seconds (or configured interval), the container should poll the Isilon for metrics and stick them in an InfluxDB (different container).

Actual: When launching docker-compose up, it writes the output showing that it executed each step properly, then gets to the end and exits with code 0. Checking docker ps shows that it is in fact no longer running.

2 Answers


There are many reasons a Docker container can exit, for instance being killed by the OOM killer.

Since your container reports a graceful exit (return code 0), it is very likely that the process running your script isi_data_insights_d.py stopped because it had finished executing all of its code.

With nothing left to run in your .py script, the process exits, causing the container to exit as well.

The trick to keeping a container alive is to keep its main process busy. That is, the script/program must not exit.

What you could do is wrap the entire code in a loop, then have the process sleep for a period of time before waking up to execute the same code again.
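A minimal sketch of that wrap-it-in-a-loop idea (`collect_metrics` and `run_forever` are hypothetical names for illustration, not the connector's real API; the real polling pass would query the Isilon and write to InfluxDB):

```python
import time

def collect_metrics():
    # Hypothetical stand-in for one polling pass against the Isilon
    # plus a write to InfluxDB.
    return "ok"

def run_forever(interval=30, iterations=None):
    """Loop so the container's main process never exits.

    `iterations` exists only to make the sketch testable; pass
    None (the default) to poll indefinitely.
    """
    results = []
    count = 0
    while iterations is None or count < iterations:
        results.append(collect_metrics())
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval)
    return results

if __name__ == "__main__":
    run_forever()
```

Because the loop never returns, the container's PID 1 stays busy and the container keeps running.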

Alternatively, you can use a process-monitoring program like supervisord, or write another Python script to coordinate execution between scripts. For the latter, the subprocess lib could be a good help.
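For example, a minimal supervisord setup might look like this (the command is copied from the question's compose file; note that supervisord expects the supervised program to stay in the foreground, so if the script daemonizes itself on `start`, that would need adjusting):

```ini
[supervisord]
nodaemon=true                ; keep supervisord in the foreground so the container stays up

[program:isi_data_insights]
command=python /opt/isilon_data_insights_connector/isi_data_insights_d.py start --config=/opt/isilon_data_insights_connector/isi_data_insights_d.cfg
autorestart=true             ; restart the script if it crashes
stdout_logfile=/dev/stdout   ; send the script's output to the container's stdout
stdout_logfile_maxbytes=0
```

You would then make `supervisord -n` the container's ENTRYPOINT instead of the script itself.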

Samuel Toh
  • Well that makes sense, I believe the script I'm running is starting another script that loops in the background. How do I configure the docker container to pay attention to the other script's code execution rather than the one that starts it? I didn't write the original code and don't know Python so making fundamental modifications to the code is going to be a no go, I'm not going to know how to do it. – Nathanial Meek Jan 25 '19 at 18:41
  • You can have a look at `supervisord`. It is a process which helps start and monitor your process. For instance, it will restart it when your script crashes, so it is pretty good. `tail -f` works, but it is not the best practice, because if `tail` dies prematurely the container will `exit`, and when your `script` crashes, the `container` would become a `zombie` container doing nothing. Whereas `supervisord` would restart it for you. http://supervisord.org/ – Samuel Toh Jan 25 '19 at 20:50
  • 1
    That's a good point! I'll take a look at supervisord tonight and see if I can get it to work early next week. Thanks for the advice, sounds like that'd work a lot better. – Nathanial Meek Jan 25 '19 at 21:58

Probably not the best solution, but I ended up using bash -c to run multiple commands in the command option of the docker-compose file. At the end, I run tail -f /path/to/logfile. It seems to have worked for the time being, at least until I find a better solution. I might reach out to the original dev and see if they'd like to add the Docker info to their project to make things easier for anyone looking to deploy this. They might be able to modify their code to run better in a container.
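For reference, the workaround looks roughly like this in the docker-compose service (the log path is just a placeholder; use whatever file the connector is configured to log to):

```yaml
    command: ["bash", "-c", "python /opt/isilon_data_insights_connector/isi_data_insights_d.py start --config=/opt/isilon_data_insights_connector/isi_data_insights_d.cfg && tail -f /path/to/logfile"]
```

The `tail -f` never exits, so it becomes the long-running process that keeps the container alive after the start script returns.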

  • I would recommend taking down this answer, as it is not a proper solution, nor the best workaround. See my comment above for why. – Samuel Toh Jan 25 '19 at 20:54