73

How can you suppress the Terminated message that comes up after you kill a process in a bash script?

I tried set +bm, but that doesn't work.

I know another solution involves calling exec 2> /dev/null, but is that reliable? How do I reset it back so that I can continue to see stderr?

user14437
  • This should be merged with http://stackoverflow.com/questions/5719030/bash-silently-kill-background-function-process which is more sprawly but has more answers than this one. – tripleee Apr 06 '17 at 07:03

12 Answers

161

In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.

Here is a very simple example that kills the most recent background command. (`$!` expands to the pid of the most recently started background job.)

kill $!
wait $! 2>/dev/null

Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).

kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
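
Put together, a minimal end-to-end sketch (with `sleep` standing in for the real background task):

```sh
#!/usr/bin/env bash
# Start a background job, kill it, then reap it with `wait`
# so bash never gets a chance to print the "Terminated" notice.
sleep 30 &                 # stand-in for a real background task
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null    # reaps the job; the notice goes to /dev/null
echo "exit status: $?"     # 128 + signal number, i.e. 143 for SIGTERM
```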

I was led here from bash: silently kill background function process.

Mark Edgar
  • Cool, but it sets the exit status (the `$?`) to some weird value – Alexander Shcheblikin Oct 02 '14 at 21:03
  • @AlexanderShcheblikin `wait` returns the exit code of the background process, or 128 + (the number of the signal that killed it). – Henk Langeveld Dec 08 '14 at 21:57
  • This is generally wrong. The message is not generated at `wait`, but whenever `wait_for()` (at `jobs.c`) is called, which means it can be possibly generated when __any__ foreground job finishes. Try this: `sleep 10 & kill %1; sleep 0; wait 2>/dev/null`. wnoise's answer is mostly correct in this regard. The only way to completely suppress the message is by doing `exec 2>/dev/null`, the consequences of which being obvious. – alecov Nov 16 '19 at 20:20
21

The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.

See notify_of_job_status() in jobs.c in the bash source.

As you say, you can redirect so standard error is pointing to /dev/null but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.

(script 2> /dev/null)

which will lose all error messages, but just from that script, not from anything else run in that shell.

You can save and restore standard error, by redirecting a new filedescriptor to point there:

exec 3>&2          # 3 is now a copy of 2
exec 2> /dev/null  # 2 now points to /dev/null
script             # run script with redirected stderr
exec 2>&3          # restore stderr to saved
exec 3>&-          # close saved version

But I wouldn't recommend this -- the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script if the script itself alters file descriptors.


EDIT:

For a more appropriate answer, see the one given by Mark Edgar.

wnoise
  • I launched a background process in a shell script. When I kill it, I get the 'Terminated' message. I don't quite understand what you mean by redirecting stderr temporarily in a subshell. Doesn't this mean that it will not affect the script, as it's being done in a subshell, and thus not work in my script? – user14437 Sep 17 '08 at 10:34
  • This answer is wrong. See the answer by Mark Edgar below. – Bruno Bronosky Feb 25 '15 at 18:15
12

Solution: use SIGINT (works only in non-interactive shells)

Demo:

cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF

sh silent.sh

http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798

MarcH
9

Maybe detach the process from the current shell process by calling disown?
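
For example (a sketch, with `sleep` standing in for the real job):

```sh
sleep 100 &
pid=$!
disown "$pid"   # remove the job from the shell's job table
kill "$pid"     # the shell no longer tracks it, so no status message
```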

Matthias Kestenholz
  • disown implies that the shell will not send `SIGHUP` to the child when it terminates, see [this question](http://unix.stackexchange.com/questions/3886/difference-between-nohup-disown-and). Here is the [answer](http://stackoverflow.com/a/5722874/52499). – x-yuri May 24 '13 at 15:09
  • @x-yuri However, the process is being disowned right before being killed, so that should not be a problem. Example [here](https://superuser.com/a/633185/559711) – Pedro A Jul 20 '22 at 18:30
  • @PedroA What if the script is terminated before `kill` gets a chance to do its job? Unlikely or not, with `kill` + `wait` that's just not possible, and not more complex than `disown` + `kill`. So why bother with `disown`? Also, if you think about it, that's not what `disown` was created for. – x-yuri Jul 24 '22 at 12:05
5

The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:

#!/bin/sh

## assume script name is test.sh

foo() {
  trap 'exit 0' TERM ## here is the key
  while true; do sleep 1; done
}

echo before child
ps aux | grep 'test\.s[h]\|slee[p]'

foo &
pid=$!

sleep 1 # wait until the trap is installed

echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'

kill $pid ## no need to redirect stderr

sleep 1 # wait until the kill has been delivered

echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'

James Z.M. Gao
3

Is this what we are all looking for?

Not wanted:

$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+  Done                    sleep 3
$

Wanted:

$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$

As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.

'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in parentheses) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or to evaluate its return code.
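
One way to get the pid back into the current shell is command substitution. Note the redirection on the background command: without it, the substitution would block waiting for the background job to close its inherited stdout (a sketch; `sleep 30` stands in for the real task):

```sh
# Launch in a subshell with job control off and echo the pid back out.
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo $! )
kill "$pid"   # the current shell never tracked this job, so no message
```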

Ralph
  • Why would you `set +m` the subshell? Also the accepted answer already provides the info you gave. – matthias krull Oct 05 '12 at 18:10
  • As matthias alluded to, this answer is incorrect. Bash does not display job control messages for commands executed in a subshell, so setting monitor mode there is useless. `(set +m; sleep 3 &)` will produce exactly the same effect as `(sleep 3 &)`. – Six Mar 04 '15 at 18:38
2

This also works for killall (for those who prefer it):

killall -s SIGINT (yourprogram) 

suppresses the message. I was running mpg123 in the background; it could only be killed silently by sending Ctrl-C (SIGINT) instead of the default SIGTERM.

fboaventura
1

disown did exactly the right thing for me. The `exec 3>&2` approach is risky for a lot of reasons, and `set +bm` didn't seem to work inside a script, only at the command prompt.

clemep
  • disown implies that the shell will not send `SIGHUP` to the child when it terminates, see [this question](http://unix.stackexchange.com/questions/3886/difference-between-nohup-disown-and). Here is the [answer](http://stackoverflow.com/a/5722850/52499). – x-yuri May 24 '13 at 15:12
0

Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.

#!/bin/bash
# ...
pid="$(sh -c 'sleep 30 & echo ${!}' | head -1)"
kill "$pid"
# ...

# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5 
kill "${pid}"
'
phily
0

I had success adding `jobs 2>&1 >/dev/null` to the script. I'm not certain it will help anyone else's script, but here is a sample.

    while true; do echo $RANDOM; done | while read line
    do
        echo Random is $line the last jobid is $(jobs -lp)
        jobs 2>&1 >/dev/null
        sleep 3
    done
J-o-h-n-
-1

Simple:

{ kill $!; } 2>/dev/null

(Note the `;` before `}`; bash's brace-group syntax requires it.)

Advantage: you can use any signal, e.g.:

{ kill -9 $PID; } 2>/dev/null
fboaventura
  • kill (except for -9) sends a signal and doesn't wait for a response. If this works for you, it does so via race condition. I've also seen code go from `{ kill $! } 2>/dev/null` to `{ kill $!; date } 2>/dev/null` to `{ kill $!; sleep 5 } 2>/dev/null` See Mark Edgar's answer for how to do this correctly. – Bruno Bronosky Feb 25 '15 at 18:21
-1

I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:

function killCmd() {
    kill $1
}

killCmd $somePID &
Al Joslin
  • Nope, doesn't work: bash still outputs the message at the first opportunity after the child process has terminated. – Eric Apr 18 '18 at 09:32