
I have a container running `cron -f`, which prevents the container from writing to stdout the way containers normally do. I need my cron jobs' output to go to stdout so that Datadog and `docker logs` can see it.

I got the idea to write to proc files from this answer: https://stackoverflow.com/a/46220104/5915915

With this, my cron jobs started failing. When I tried writing to the file manually inside the container, for example with

echo 'hi' > /proc/1/fd/1

I am told:

bash: 1: Permission denied

The result of ls -l is:

root@my-container:/proc/1/fd# ls -l
ls: cannot read symbolic link '0': Permission denied
ls: cannot read symbolic link '1': Permission denied
ls: cannot read symbolic link '2': Permission denied
ls: cannot read symbolic link '3': Permission denied
total 0
lrwx------ 1 root root 64 Mar  8 23:46 0
l-wx------ 1 root root 64 Mar  8 23:46 1
l-wx------ 1 root root 64 Mar  8 23:46 2
lrwx------ 1 root root 64 Mar  8 23:46 3
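For comparison, writing through /proc/self/fd/1 (which the current shell itself owns) works fine. This is a minimal check, assuming a Linux /proc:

```shell
#!/bin/sh
# /proc/self/fd/1 is this shell's own stdout, owned by the same user
# as the shell, so the write succeeds regardless of who owns another
# process's fd entries.
echo "via proc" > /proc/self/fd/1
```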

A similar problem has a solution in the comments by the OP, but there the proc files were somehow owned by his db user rather than root, so that approach does not work for me: https://www.reddit.com/r/docker/comments/imx8zr/permission_denied_when_write_to_proc1fd1/

How is root's permission being denied? Or alternatively, is there a better way to get my cron jobs' output to show in stdout?

David Jay Brady

1 Answer


A way around this is to have the proc files owned by a non-root user. This can be done by running `cron -f` from a startup script launched by CMD, after switching users in the Dockerfile.

At the end of my Dockerfile

USER worker

In my docker-compose file (note the key is `command`, not `CMD`, in compose syntax):

cron_service:
  command: ["./start.sh"]

In start.sh

#!/bin/sh
cron -f

Now the proc files created for the cron process are owned by worker, and the cron jobs can write to them.
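The crontab entries themselves still redirect to PID 1's fd files. A hypothetical job line (the script path is made up for illustration) would look like:

```shell
# Hypothetical crontab entry for the worker user: redirect the job's
# stdout/stderr to PID 1's file descriptors so `docker logs` and
# Datadog can see the output.
* * * * * /app/my_job.sh > /proc/1/fd/1 2> /proc/1/fd/2
```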
