
shell gurus,

I have a bash shell script, in which I launch a background function, say foo(), to display a progress bar for a boring and long command:

foo()
{
    while [ 1 ]
    do
        #massively cool progress bar display code
        sleep 1
    done
}

foo &
foo_pid=$!

boring_and_long_command
kill $foo_pid >/dev/null 2>&1
sleep 10

Now, when foo dies, I see the following text:

/home/user/script: line XXX: 30290 Killed                  foo

This totally destroys the awesomeness of my, otherwise massively cool, progress bar display.

How do I get rid of this message?

Philip Kirkbride
rouble
  • +1 for using 'massively cool' in re: a bash script :) – pepoluan Apr 19 '11 at 16:10
  • I can't reproduce this even after changing kill foo_pid to kill $foo_pid. – Tanktalus Apr 19 '11 at 16:40
  • @Tanktalus, I think that is because the script probably dies before the output is sent to stderr. I have added a sleep at the end of the pseudo code which should enable you to recreate the issue. – rouble Apr 19 '11 at 17:38
  • `while [ 1 ]; do` can be written as `while :; do`. – TrueY Jun 28 '13 at 09:04
  • This should be merged with http://stackoverflow.com/questions/81520/how-to-suppress-terminated-message-after-killing-in-bash which is more focused but lacks some of the answers from here. – tripleee Apr 06 '17 at 07:03
  • As far as I can tell, the problem only occurs when the `kill` command is used to kill a job _interactively_. Inside _scripts_, job-control messages such as the one shown in the question do _not_ print (unless you _source_ the script _from the interactive prompt_). – mklement0 Jan 08 '22 at 03:02

11 Answers

kill $foo_pid
wait $foo_pid 2>/dev/null

BTW, I don't know about your massively cool progress bar, but have you seen Pipe Viewer (pv)? http://www.ivarch.com/programs/pv.shtml
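
For context, the pattern can be sketched as a complete script (a sketch only; the loop and the short sleep are stand-ins for the question's progress bar and long-running command):

```shell
#!/bin/bash
# Sketch of the kill + wait pattern end to end.
foo()
{
    while :
    do
        # massively cool progress bar display code would go here
        sleep 1
    done
}

foo &
foo_pid=$!

sleep 1                     # stand-in for boring_and_long_command
kill $foo_pid
wait $foo_pid 2>/dev/null   # reaps the job, so no "Killed" notice is printed
```

The `wait` is what suppresses the notice: the shell only reports a job's death asynchronously if the job is still unreaped when it next prints a prompt or message.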

Mark Edgar
    This doesn't work for me. I still get "Killed by signal 15." written to the terminal. I am trying to do this over ssh - I start a session and start a process, then later I ssh again and kill the process. When I try 'wait $pid', it says that the process is not a child (I think because it is a different session), and then the "Killed by signal 15." is still written to the terminal. Is there a way to suppress this in this situation? – David Doria Mar 17 '14 at 19:32
  • Great stuff; I suggest `{ kill $foo_pid && wait $foo_pid; } 2>/dev/null` so as to also silence the case where the targeted process is no longer alive. – mklement0 Jul 09 '16 at 04:55
  • Isn't there a (remote) possibility that the child dies before `bash` finishes the `kill` command, so that the termination report is generated then? – Davis Herring Sep 24 '17 at 15:54
  • Thanks @mklement0 - you can also use the `kill && wait` pattern with job ids: `{ kill %1 && wait %1; } 2>/dev/null` – jaygooby Aug 22 '19 at 14:25
  • Actually, I tried this but only `kill $foo_pid 2>/dev/null` worked for me. – Alaska Jan 08 '22 at 01:14

Just came across this myself, and realised "disown" is what we are looking for.

foo &
foo_pid=$!
disown

boring_and_long_command
kill $foo_pid
sleep 10

The death message is printed because the process is still in the shell's list of watched "jobs". The disown command removes the most recently spawned process from this list, so that no job-status message is generated when it is killed, even with SIGKILL (-9).
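
As an aside (not part of the original answer): bash's `disown` also accepts an explicit PID or job spec, which avoids relying on "most recently spawned" when several background jobs exist. A minimal sketch:

```shell
#!/bin/bash
# Sketch: disown a specific PID rather than the most recent job.
foo() { while :; do sleep 1; done; }

foo &
foo_pid=$!
disown $foo_pid      # drop just this job from the shell's job table

sleep 1              # stand-in for boring_and_long_command
kill $foo_pid        # no job-status message, since the job is disowned
```

Note that once a job is disowned the shell can no longer `wait` on it.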

pix
  • Works great. I agree - this is the best solution for Bash. However, disown is a Bash builtin command and is not available in most other shells. – mattst Nov 19 '15 at 10:58
  • @mattst: It is indeed worth pointing out that `disown` is not POSIX-compliant; it _is_, however, available in `ksh` and `zsh` as well. – mklement0 Jul 08 '16 at 22:48
  • Also, it seems that using `disown` has implications beyond just disassociating the current shell from the background process: http://unix.stackexchange.com/a/148698/54804 – mklement0 Jul 09 '16 at 04:53
  • @mklement0 Thanks for the info. and interesting link. `nohup` actually looks like an excellent solution for a couple of my scripts which use `disown` at the moment. For the original question `disown` is definately the one to use, as I am sure you realize. – mattst Jul 10 '16 at 12:41
  • @mattst: `disown` is fine for the question at hand (if the terminal dies prematurely, the background job will die on the next attempt to write to stdout), but, given the generic title of the question, it's worth pointing out the implications of `disown` beyond just silencing a subsequent `kill`. – mklement0 Jul 10 '16 at 23:09

Try to replace your line `kill $foo_pid >/dev/null 2>&1` with the line:

(kill $foo_pid 2>&1) >/dev/null

Update:

This answer is not correct for the reason explained by @mklement0 in his comment:

The reason this answer isn't effective with background jobs is that Bash itself asynchronously, after the kill command has completed, outputs a status message about the killed job, which you cannot suppress directly - unless you use wait, as in the accepted answer.

  • Helped me when trying to kill non-existing processes to prevent the `kill foo_pid failed: no such process` message – Koen. Aug 12 '12 at 12:58
  • @Koen. Yes, but this has nothing to do with background jobs. You can silence any error message issued by `kill` _itself_ - as with any command - with `2>/dev/null`. The reason this answer isn't effective with _background jobs_ is that Bash itself _asynchronously_, _after_ the `kill` command has completed, outputs a status message about the killed job, which you cannot suppress directly - unless you use `wait`, as in the accepted answer. – mklement0 Jul 11 '16 at 04:03
  • @mklement0 I actually tried this today to find that only `kill $foo_pid 2>/dev/null` worked in silencing the error messages. – Alaska Jan 08 '22 at 01:15
  • @Alaska, as far as I can tell, the problem only occurs when the `kill` command is used to kill a job _interactively_. Inside _scripts_, job-control messages such as the one shown in the question do _not_ print (unless you _source_ the script _from the interactive prompt_). (This contradicts the question.) In other words: if you call `kill $foo_pid` from a (non-sourced) _script_, `kill $foo_pid 2>/dev/null` is sufficient (to cover the case where `$foo_pid` no longer exists). _Interactively_ (or in a script sourced interactively), you need `{ kill $foo_pid && wait $foo_pid; } 2>/dev/null` – mklement0 Jan 08 '22 at 03:07

This "hack" seems to work:

# Some trickery to hide killed message
exec 3>&2          # 3 is now a copy of 2
exec 2> /dev/null  # 2 now points to /dev/null
kill $foo_pid >/dev/null 2>&1
sleep 1            # sleep to wait for process to die
exec 2>&3          # restore stderr to saved
exec 3>&-          # close saved version

It was inspired by here. World order has been restored.

rouble
  • This works, but there is no need for the `>/dev/null 2>&1` part after `kill $foo_pid` as stderr (which is where the unwanted text is coming from) is already directed to /dev/null – Lee Netherton Apr 19 '11 at 16:43

This is a solution I came up with for a similar problem (I wanted to display a timestamp during long-running processes). It implements a killsub function that lets you kill any subshell quietly, as long as you know its pid. Note that the trap instructions are important to include: in case the script is interrupted, the subshell will not keep running.

foo()
{
    while [ 1 ]
    do
        #massively cool progress bar display code
        sleep 1
    done
}

#Kills the sub process quietly
function killsub() 
{

    kill -9 ${1} 2>/dev/null
    wait ${1} 2>/dev/null

}

foo &
foo_pid=$!

#Add a trap in case of unexpected interruptions
trap 'killsub ${foo_pid}; exit' INT TERM EXIT

boring_and_long_command

#Kill foo after finished
killsub ${foo_pid}

#Reset trap
trap - INT TERM EXIT
bbbco

Add at the start of the function:

trap 'exit 0' TERM
jilles
  • this works on macos. i'm using it to kill tail: trap 'exit 0' TERM ; (killall -m tail 2>&1) >/dev/null – Tomachi Aug 09 '19 at 05:16

Yet another way to disable job notifications is to put your command to be backgrounded in a `sh -c 'cmd &'` construct.

#!/bin/bash

foo()
{
   while [ 1 ]
   do
       sleep 1
   done
}

#foo &
#foo_pid=$!

export -f foo
foo_pid=`sh -c 'foo & echo ${!}' | head -1`

# if shell does not support exporting functions (export -f foo)
#arg1='foo() { while [ 1 ]; do sleep 1; done; }'
#foo_pid=`sh -c 'eval "$1"; foo & echo ${!}' _ "$arg1" | head -1`


sleep 3
echo kill ${foo_pid}
kill ${foo_pid}
sleep 3
exit
phily

You can use `set +m` beforehand to suppress that. More information on that here
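
A minimal sketch of that approach (note: monitor mode is already off by default in non-interactive scripts, so `set +m` mainly matters at an interactive prompt or in a sourced script):

```shell
#!/bin/bash
# Sketch: with monitor mode disabled, the shell does not report job-status
# changes such as "Killed".
set +m
foo() { while :; do sleep 1; done; }

foo &
foo_pid=$!

sleep 1                   # stand-in for boring_and_long_command
kill $foo_pid
wait $foo_pid 2>/dev/null # reap the job
```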

AhmadAssaf

Another way to do it:

    func_terminate_service(){

      [[ "$(pidof ${1})" ]] && killall ${1}
      sleep 2
      [[ "$(pidof ${1})" ]] && kill -9 "$(pidof ${1})" 

    }

call it with

    func_terminate_service "firefox"
Mike Q

The error message comes from the default signal handling, which reports the signal that terminated the child in the script's output. I encountered similar errors only on bash 3.x and 4.x. To always kill the child process quietly everywhere (tested on bash 3/4/5, dash, ash, zsh), we can trap the TERM signal at the very start of the child process:

#!/bin/sh

## assume script name is test.sh

foo() {
  trap 'exit 0' TERM ## here is the key
  while true; do sleep 1; done
}

echo before child
ps aux | grep 'test\.s[h]\|slee[p]'

foo &
foo_pid=$!

sleep 1 # wait until the trap is installed

echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'

kill $foo_pid

sleep 1 # wait until the kill is done

echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'

James Z.M. Gao
 #!/bin/bash
 exec {safe2}>&2 2>/dev/null {fdevent}< <(
   sh -c 'echo $$; exec inotifywait -mqe CLOSE_WRITE /tmp' 2>&$safe2
   ) 2>&$safe2
 read -u $fdevent pidevent
 trap "$(trap -p EXIT)"$'\n'"kill $pidevent" EXIT

 grep -m1 somevent <&$fdevent

This illustrates a particular case, where the process substitution's own error file descriptor is not otherwise accessible for control.

The single exec statement successively saves the error file descriptor, replaces stderr with /dev/null so that this is what the process substitution inherits, assigns a new file descriptor to the process substitution's output, and restores the original error file descriptor.

Inside the process substitution itself, the original error file descriptor is made active, but the undesired "process complete" message will be flushed to /dev/null.

Given that, on exit, the inotifywait monitor will be silently killed.

However, depending on the process to be killed, SIGPIPE or some signal other than SIGTERM causes a silent exit without any effort and reflects a meaningful logic:

 #!/bin/bash
 exec {fdevent}< <(sh -c 'echo $$; exec inotifywait -mqe CLOSE_WRITE /tmp')
 read -u $fdevent pidevent
 ## work with file descriptor, sigpipe exception when done
 kill -PIPE $pidevent