48

I'm using the tail -f /dev/null command to keep a container up.

The line itself is placed in a script with an echo before and after it. The echo after the tail -f /dev/null should be unreachable, yet for some reason I see it in the logs.
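
Schematically (this is not the exact script, just its shape), it looks like this:

#!/bin/sh
echo "container starting"   # first echo
tail -f /dev/null           # expected to block forever
echo "after tail"           # should be unreachable, yet it shows up in the logs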

Once the issue occurs, every restart of the container causes it to start and then immediately complete. Only an rm and a rebuild of the container solves the issue.

I'm not sure whether it is connected, but one thing I noticed is that stopping and starting the computer within a very short interval helped me reproduce the issue.

Under what circumstances could tail -f /dev/null continue to the next line?

Base image: Ubuntu 14.04, 64-bit

Computer OS: Ubuntu 14.04, 64-bit

toydarian
Ika
  • Curious here... What kind of output do you expect from /dev/null? What do you hope for? – Sokre May 08 '17 at 08:58
  • 6
    @Sokre - The `tail -f /dev/null` is a common idiom for keeping a container alive indefinitely if the "real" command isn't long-lived. – Oliver Charlesworth May 08 '17 at 09:07
  • 9
    Just to add some detail to `tail -f /dev/null`: it is usually added because the process (PID 1) in your Docker container is not running in the foreground, and if nothing is running in the foreground, the container exits automatically. – Bill Cheng May 08 '17 at 15:24
  • If you run a `docker diff` on the crashing container, does `/dev/null` show in the output? – BMitch Mar 03 '19 at 21:56
  • 2
    why not "up -d" ? – Ryabchenko Alexander Sep 03 '20 at 07:28
  • do you have any other output from the tail command? tail -f can fail if the file descriptor was closed, but in that case it usually produces an error – lGSMl Oct 09 '20 at 14:54
  • 2
    tail -f does stop on EOF when reading from a non-seekable descriptor (e.g. a pipe). As far as I know, /dev/null is mounted inside the docker container. Maybe something happens and /dev/null is remounted, triggering EOF. Also try -F (i.e. --follow=name --retry) instead of -f so it follows the file in case it is recreated – Jinxmcg Feb 11 '21 at 20:55
  • @RyabchenkoAlexander `up` is a `docker-compose` command, not a `docker` (CLI client) command. The OP isn't specifying which one s/he's using, but I suspect it's some variation of `docker run` – Marcello Romani Feb 28 '21 at 22:22
  • Continue to the next line? What does that mean? By default there is only one command in the Dockerfile, at the end. Please show your Dockerfile. – Martin Osusky Mar 09 '21 at 14:56
  • @MartinOsusky - he said that there is an echo command in the script before and after the tail, so it means continuing to the next line of the script. – Software Engineer Apr 01 '21 at 08:56

6 Answers

23

Here is a better way to keep the container running:

sleep infinity
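
As a rough sketch of how this can be wired into an image (the base image tag below is only an example; note that "infinity" is accepted by GNU coreutils' sleep, but not by e.g. BusyBox's):

FROM ubuntu:22.04
# Keep PID 1 alive forever; "infinity" is a GNU coreutils extension to sleep.
CMD ["sleep", "infinity"]
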
  • This is way better than the bash while sleep 3600 loop I've been using, thanks! – Ahi Tuna Dec 08 '21 at 20:59
  • 2
    Is this a supported use case, or does this just happen to work? The man page only says the value might be a floating point value. So is this parsed as floating point `Infinity` seconds? (I am a bit worried whether this is a stable solution) – Marcono1234 Feb 05 '22 at 18:17
  • @Marcono1234 `infinity` is supported by **GNU coreutils**' `sleep`. See a more detailed [answer here](https://stackoverflow.com/a/45396600/1677270). – SergA Feb 14 '23 at 15:43
  • The `sleep` command lives under the `/bin` directory, btw. I used `/usr/bin` previously! – ssi-anik Aug 23 '23 at 23:45
3

To answer your question about the circumstances under which tail -f /dev/null might finish and therefore continue to the next line of something like a shell script:

/dev/null (like everything in Linux) is a file. When tail reads any file, the file must first be opened through a file descriptor. tail -f /dev/null does not terminate because it is finished (it will never finish); it terminates because of interference with that file descriptor, which can happen for a number of reasons. However, inside the container itself there is (most likely) nothing happening that would interfere with the file descriptor.

Since Docker containers are just a somewhat fancy overlay of so-called Linux namespaces, all the processes that run inside a container (even though they live in a separate PID namespace) actually run on your host. So, for some reason, something on your host is interfering with the file descriptor.

To check the open file descriptors of a process you can execute the following command:

$ sudo ls -la /proc/<pid>/fd

You will see certain numbers in the output:

  • 0 stands for standard input.
  • 1 stands for standard output.
  • 2 stands for standard error.

The rest are other files that the process has opened.
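
For a container whose PID 1 is tail -f /dev/null, the listing looks roughly like this (abridged; the PID, dates, and the targets of the stdio descriptors are illustrative and depend on how the container was started):

$ sudo ls -la /proc/12345/fd
lrwx------ 1 root root 64 May  8 09:00 0 -> /dev/null
l-wx------ 1 root root 64 May  8 09:00 1 -> pipe:[123456]
l-wx------ 1 root root 64 May  8 09:00 2 -> pipe:[123457]
lr-x------ 1 root root 64 May  8 09:00 3 -> /dev/null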

<pid> is the ID of the process you want to look at. When tail -f /dev/null is running as the entrypoint inside a container, it will most likely have PID 1 inside the container. To find its PID on your host machine you can simply grep for it like so:

$ sudo ps aux | grep 'tail -f /dev/null'
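
Alternatively (just another way to obtain the same number; the container name here is illustrative), Docker can report the host PID of the container's PID 1 directly:

$ docker inspect --format '{{.State.Pid}}' my_container
12345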

To close the file descriptor yourself and manually reproduce what would happen in such a case, you can use the GNU debugger, gdb. Simply attach the debugger to the PID you found earlier:

$ sudo gdb -p <pid>

Now you can go ahead and choose which file descriptor you want to close (most likely it is going to be number 3, since the process does not open any other files):

(gdb) call (int)close(3)
$1 = 0

Now quit the debugger and check the logs of your container:

(gdb) quit

Depending on your configuration you are likely to see an error coming from tail in the container logs:

tail: error reading '/dev/null': Bad file descriptor

As explained earlier, there is also a file descriptor for standard error (2). You can repeat the entire process and close both standard error and the actual file descriptor during the same debugger session:

(gdb) call (int)close(2)
$1 = 0
(gdb) call (int)close(3)
$2 = 0
(gdb) quit

Upon doing so, there won't be any error visible in the container logs, and in the case of a bash script it will proceed to the next line.

To check what exactly is interfering with your file descriptor, you would have to monitor your host system extensively at the moment the issue occurs.
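
One way to approach that (a sketch, not part of the original steps above; the PID is illustrative) is to attach strace to the host-side tail process and watch for anything closing or replacing its descriptors:

$ sudo strace -f -p 12345 -e trace=desc
# trace=desc restricts output to file-descriptor-related syscalls;
# watch for anything touching fd 3 (the descriptor on /dev/null)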

F1ko
0

Once, in one of my testing environments, /dev/null was somehow a regular file - maybe that is the case here as well?

Otherwise I'd make the second echo print the exit code (echo EXIT CODE=$?) and work from there, as sketched below. Additionally, for testing, maybe try replacing the tail with a long sleep, then exec the tail command via docker exec and see if you can reproduce the same behavior.
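
A minimal sketch of that diagnostic script (the echo texts are placeholders):

#!/bin/sh
echo "before tail"
tail -f /dev/null
# the second echo now reports tail's exit status; if it appears in the logs, the code hints at why tail stopped
echo "EXIT CODE=$?"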

noonex
0

I have had the same problem. The answer is in how you write the path: it must be like this, "tail -f dev/null", that is all.

  • 1
    This may have unexpected consequences. This will only work when the current working directory is `/` (in which case, it should be the same as doing `tail -f /dev/null` anyway). – robere2 Jul 10 '22 at 17:23
-5

Create a Dockerfile with your base image of choice (64-bit Ubuntu 14.04, for example). At the end of your Dockerfile, add a line like this:

ENTRYPOINT ["tail", "-f", "/dev/null"]
MTCoster
-9

You can use the docker command:

docker run -d --name alpine alpine tail -f /dev/null

See also How to retain docker alpine container after "exit" is used?

Alan Turing
  • 2
    This does not appear to be an answer to the question of why a command in a shell script, after the tail command, is apparently running. – BMitch Mar 03 '19 at 17:37
  • 2
    He is already using this; the trouble is that it is exiting and the container is restarting. He wants to know how the container can exit this tail command. – Andreas Lorenzen Mar 04 '19 at 21:04