
I'm working on a Linux machine through SSH (PuTTY). I need to leave a process running overnight, so I thought I could do that by starting the process in the background (with an ampersand at the end of the command) and redirecting stdout to a file.

To my surprise, that doesn't work. As soon as I close the PuTTY window, the process is stopped.

How can I prevent that from happening?

GetFree
  • If you want to force the session to remain open, see https://stackoverflow.com/questions/25084288/keep-ssh-session-alive – tripleee Aug 04 '21 at 07:15

20 Answers


Check out the "nohup" program.
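A typical invocation might look something like this (the script name and log file are placeholders, not from the original answer):

```shell
# Start a long-running job immune to hangups (SIGHUP),
# capturing its output to a file instead of nohup.out.
nohup ./myscript.sh > job.log 2>&1 &

# Remember the background job's PID so you can kill it later.
echo $! > job.pid
```

After closing the SSH session, the process keeps running; log back in and use `kill "$(cat job.pid)"` to stop it.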

JesperE
  • How do you stop it afterwards? – Derek Dahmer Jul 13 '10 at 17:49
  • Log in and do "kill <pid>". Use "pidof" if you don't know the pid. – JesperE Jul 13 '10 at 18:01
  • You can use `nohup command > /dev/null 2>&1 &` to run in the background without creating any stdout or stderr output (no `nohup.out` file) – KCD Apr 01 '13 at 23:03
  • What if I need to provide some input? For example, I have a long-running script that I need to run in the background but it first asks for my FTP password. `nohup` doesn't help in this case. Is there a way to fiddle with `Ctrl+Z` / `bg`? – Sergey Dec 28 '13 at 06:29
  • Since I'm lazy and bad at memorizing cryptic sequences of characters, I wrote [this](https://gist.github.com/djwashburn/87e5b1a59aec2a6b1078), based on what @KCD said, and have been using it a lot. – Anomaly Oct 31 '15 at 22:20
  • This does not work at my side as this does not shield the program from getting SIGHUP when the ctty is closed. I have a program which must not receive SIGHUP, ever. `nohup` does not work, as it does not prevent SIGHUP, it just defaults to ignore it, which need not prevail. As I do not have `screen` nor `tmux` nor `at` nor similar, I need a way on shell level to disassociate a program from the ctty for sure. The only way I found was a hack to start the program with `ssh -T remote '(program&)&'` which makes it impossible to background the program in an interactive `ssh` session. – Tino Feb 28 '17 at 08:25

I would recommend using GNU Screen. It allows you to disconnect from the server while all of your processes continue to run. I don't know how I lived without it before I knew it existed.

gpojd
  • This is one of the greatest pieces of software I've ever used. Seriously. I have it running on a BSD box that I ssh into from EVERYWHERE, and can simply re-attach to my screen and have all of my terminals where I'm doing all sorts of stuff. – Adam Jaskiewicz Nov 12 '08 at 20:02
  • I can attest to this one. Screen is a great application. The ability to re-attach is amazing, and saves a lot of potentially lost work. – willasaywhat Nov 12 '08 at 21:06
  • I even use it on local machines, and attach multiple xterms to the same screen session (screen -x). That way I can open up many windows within my screen session, and freely switch my various xterms from window-to-window. – Adam Jaskiewicz Nov 12 '08 at 21:11
  • screen is definitely on my top 3 too, truly awesome software. – Jonas Engström Nov 12 '08 at 21:39
  • Depends on whether you need to reconnect to the backgrounded app or not. If you do, then, yeah, screen is the only way to fly. If it's fire-and-forget, though, then nohup fits the bill just as nicely, if not better. – Dave Sherohman Nov 13 '08 at 00:36
  • There is also 'dtach' which implements only screen's detach mechanism, however it's a bit harder to use but gives control over the socket file that gets created for a detached process. I would recommend using dtach if this were a non-interactive scripted process that needed to remotely launch processes. Otherwise just use screen, it's easier. – Mark Renouf Apr 21 '09 at 14:47
  • yeah, screen is awesome, it's like a non-gui vnc session :) – Aman Jain Jan 14 '10 at 02:44
  • +1 for screen. Or, as an alternative, tmux (I like this one more than screen) or even byobu, which is a nice frontend for screen or tmux. You can just type screen to get a shell to use and return to later at any time, or run your command with screen, like "screen command": the screen session will exist as long as the process "command" exists, and if it's something very long, you can go back and look at its standard output at any time. – gerlos Feb 27 '14 at 19:07
  • Linode website offers a good [introduction](https://www.linode.com/docs/networking/ssh/using-gnu-screen-to-manage-persistent-terminal-sessions) on how to use `screen`. – mabalenk Mar 03 '17 at 16:03

When the session is closed the process receives the SIGHUP signal which it is apparently not catching. You can use the nohup command when launching the process or the bash built-in command disown -h after starting the process to prevent this from happening:

> help disown
disown: disown [-h] [-ar] [jobspec ...]
     By default, removes each JOBSPEC argument from the table of active jobs.
    If the -h option is given, the job is not removed from the table, but is
    marked so that SIGHUP is not sent to the job if the shell receives a
    SIGHUP.  The -a option, when JOBSPEC is not supplied, means to remove all
    jobs from the job table; the -r option means to remove only running jobs.
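A minimal sketch of the disown approach (the script and log names are placeholders):

```shell
# Start the job in the background, then mark it so the shell
# will not send it SIGHUP when the session closes.
./myscript.sh > output.log 2>&1 &
disown -h %+     # %+ refers to the most recent background job
```

The job stays in the shell's job table (you can still see it with `jobs`), but the shell no longer forwards SIGHUP to it on logout.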
Robert Gamble

daemonize? nohup? SCREEN? (tmux ftw, screen is junk ;-)

Just do what every other app has done since the beginning -- double fork.

# ((exec sleep 30)&)
# grep PPid /proc/`pgrep sleep`/status
PPid:   1
# jobs
# disown
bash: disown: current: no such job

Bang! Done :-) I've used this countless times on all types of apps and many old machines. You can combine with redirects and whatnot to open a private channel between you and the process.

Create as coproc.sh:

#!/bin/bash

IFS=

run_in_coproc () {
    echo "coproc[$1] -> main"
    read -r; echo $REPLY
}

# dynamic-coprocess-generator. nice.
_coproc () {
    local i o e n=${1//[^A-Za-z0-9_]}; shift
    exec {i}<> <(:) {o}<> >(:) {e}<> >(:)
. /dev/stdin <<COPROC "${@}"
    (("\$@")&) <&$i >&$o 2>&$e
    $n=( $o $i $e )
COPROC
}

# pi-rads-of-awesome?
for x in {0..5}; do
    _coproc COPROC$x run_in_coproc $x
    declare -p COPROC$x
done

for x in COPROC{0..5}; do
. /dev/stdin <<RUN
    read -r -u \${$x[0]}; echo \$REPLY
    echo "$x <- main" >&\${$x[1]}
    read -r -u \${$x[0]}; echo \$REPLY
RUN
done

and then

# ./coproc.sh 
declare -a COPROC0='([0]="21" [1]="16" [2]="23")'
declare -a COPROC1='([0]="24" [1]="19" [2]="26")'
declare -a COPROC2='([0]="27" [1]="22" [2]="29")'
declare -a COPROC3='([0]="30" [1]="25" [2]="32")'
declare -a COPROC4='([0]="33" [1]="28" [2]="35")'
declare -a COPROC5='([0]="36" [1]="31" [2]="38")'
coproc[0] -> main
COPROC0 <- main
coproc[1] -> main
COPROC1 <- main
coproc[2] -> main
COPROC2 <- main
coproc[3] -> main
COPROC3 <- main
coproc[4] -> main
COPROC4 <- main
coproc[5] -> main
COPROC5 <- main

And there you go, spawn whatever. The `<(:)` opens an anonymous pipe via process substitution, which dies, but the pipe sticks around because you have a handle to it. I usually do `sleep 1` instead of `:` because it's slightly racy otherwise, and I'd get a "file busy" error -- that never happens if a real command is run (e.g., `command true`).

"heredoc sourcing":

. /dev/stdin <<EOF
[...]
EOF

This works on every single shell I've ever tried, including busybox/etc (initramfs). I've never seen it done before; I independently discovered it while prodding -- who knew `source` could accept args? But it often serves as a much more manageable form of `eval`, if there is such a thing.

anthonyrisinger
  • why the down vote ... so what if the question is old; it's obviously relevant considering there are 11 other answers that suck. this solution is, sans systemd, the idiomatic and accepted way to daemonize for the last 30 years, not pointless apps, e.g. nohup et al. – anthonyrisinger Mar 10 '12 at 20:56
  • no matter how good your answer is, sometimes somebody on SO won't like it and will downvote. It's better not to worry about it too much. – Alex D Jan 17 '13 at 14:06
  • This technique doesn't work when trying to start a job programmatically via ssh, e.g. '$ ssh myhost "((exec sleep 30)&)"' – tbc0 Aug 14 '13 at 15:46
  • @tbc0 sure it does; Ctrl^C the ssh client after it hangs then login to the machine... you will find the `sleep 30` command is still running. i'm not sure off-hand why ssh does not release (though i'd guess the sleep process still holds a ref to ssh's pty or similar) but regardless, the sleep command is unaffected by any signals sent (such as the Ctrl^C) and will persist after ssh closes. – anthonyrisinger Oct 08 '13 at 18:40
  • @tbc0 ...try `ssh myhost "((exec sleep 500)&) >/dev/null"` – anthonyrisinger Oct 08 '13 at 18:45
  • @anthonyrisinger ok, that works. I think this is cleaner: `ssh myhost 'sleep 500 >&- 2>&- <&- &'` TMTOWTDI ;) – tbc0 Oct 11 '13 at 06:09
  • This is great. The only solution that actually works in busybox. It deserves more upvotes – Hamy Dec 19 '16 at 13:18
nohup blah &

Substitute your process name for blah!

Brian Knoblauch
  • you might want to redirect standard out and standard error. – David Nehme Nov 12 '08 at 19:34
  • nohup redirects stdout and stderr to nohup.out (or nohup.out and nohup.err depending on the version), so unless you are running multiple commands it is not necessary. – Chas. Owens Apr 21 '09 at 14:51

Personally, I like the 'batch' command.

$ batch
> mycommand -x arg1 -y arg2 -z arg3
> ^D

This stuffs it into the background, and then mails the results to you. It's a part of cron.

Will Hartung

As others have noted, to run a process in the background so that you can disconnect from your SSH session, you need to have the background process properly disassociate itself from its controlling terminal - which is the pseudo-tty that the SSH session uses.

You can find information about daemonizing processes in books such as Stevens' "UNIX Network Programming, Vol 1, 3rd Edn" or Rochkind's "Advanced Unix Programming".

I recently (in the last couple of years) had to deal with a recalcitrant program that did not daemonize itself properly. I ended up dealing with that by creating a generic daemonizing program - similar to nohup but with more controls available.

Usage: daemonize [-abchptxV][-d dir][-e err][-i in][-o out][-s sigs][-k fds][-m umask] -- command [args...]
  -V          print version and exit
  -a          output files in append mode (O_APPEND)
  -b          both output and error go to output file
  -c          create output files (O_CREAT)
  -d dir      change to given directory
  -e file     error file (standard error - /dev/null)
  -h          print help and exit
  -i file     input file (standard input - /dev/null)
  -k fd-list  keep file descriptors listed open
  -m umask    set umask (octal)
  -o file     output file (standard output - /dev/null)
  -s sig-list ignore signal numbers
  -t          truncate output files (O_TRUNC)
  -p          print daemon PID on original stdout
  -x          output files must be new (O_EXCL)

The double-dash is optional on systems not using the GNU getopt() function; it is necessary (or you have to specify POSIXLY_CORRECT in the environment) on Linux etc. Since double-dash works everywhere, it is best to use it.

You can still contact me (firstname dot lastname at gmail dot com) if you want the source for daemonize.

However, the code is now (finally) available on GitHub in my SOQ (Stack Overflow Questions) repository as file daemonize-1.10.tgz in the packages sub-directory.

Jonathan Leffler

For most processes you can pseudo-daemonize using this old Linux command-line trick:

# ((mycommand &)&)

For example:

# ((sleep 30 &)&)
# exit

Then start a new terminal window and:

# ps aux | grep sleep

Will show that sleep 30 is still running.

What you have done is start the process as a child of a child, and when you exit, the SIGHUP signal that would normally cause the process to exit doesn't cascade down to the grand-child, leaving it as an orphan process, still running.

I prefer this "set it and forget it" approach: no need to deal with nohup, screen, tmux, I/O redirection, or any of that stuff.
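Combined with redirection, the same trick might look like this (the script and log file names are placeholders):

```shell
# Double fork: the grand-child is reparented to init (PPID 1),
# so it no longer belongs to this shell's session and survives logout.
((./myscript.sh > /tmp/myscript.log 2>&1 &)&)
```

The inner `&` backgrounds the job inside a subshell; when that subshell exits immediately, the job is orphaned and adopted by init.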

RAM

On a Debian-based system (on the remote machine), install:

sudo apt-get install tmux

Usage:

tmux

run commands you want

To rename session:

Ctrl+B then $

set Name

To exit session:

Ctrl+B then D

(this detaches from the tmux session, leaving it running). Then, you can log out of SSH.

When you need to come back/check on it again, start up SSH, and enter

tmux attach -t session_name

It will take you back to your tmux session.

Max

nohup is very good if you want to log your details to a file, but when a process goes to the background you are unable to give it a password if your script asks for one. I think you should try screen. It's a utility you can install on your Linux distribution: for example, on CentOS run `yum install screen`, then access your server via PuTTY or another client and type `screen` in your shell. It will open screen[0] in PuTTY. Do your work. You can create more screens (screen[1], screen[2], etc.) in the same PuTTY session.

Basic commands you need to know:

To start screen

screen


To create next screen

ctrl+a+c


To move to next screen you created

ctrl+a+n


To detach

ctrl+a+d


During work close your putty. And next time when you login via putty type

screen -r

to reconnect to your screen, where you can see your process still running. To exit screen, type exit.

For more details see man screen.

Adeel Ahmad
  • assuming that `yum` is the right tool, when you don't know the distro, is not good. you should make it clear on which distros `screen` can be installed with `yum`. – tymik May 26 '16 at 18:01

nohup allows a client process to keep running even if the parent process is killed, for example when you log out. Even better, use:

nohup /bin/sh -c "echo \$\$ > $pidfile; exec $FOO_BIN $FOO_CONFIG  " > /dev/null

nohup makes the process you start immune to the termination that hits your SSH session and its child processes when you log out. The command I gave stores the PID of the application in a pid file so that you can correctly kill it later, and allows the process to keep running after you have logged out.
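To stop it later, read the PID back from the file. A sketch of the round trip (the program name and pid-file path here are illustrative):

```shell
# Launch: the inner shell writes its own PID to the pid file, then
# exec replaces that shell with the real program, so the file ends
# up holding the program's PID.
nohup /bin/sh -c 'echo $$ > /tmp/myapp.pid; exec ./myapp' > /dev/null 2>&1 &

# Later, after logging back in:
kill "$(cat /tmp/myapp.pid)"
```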

jcodeninja

If you use screen to run a process as root, beware of the possibility of privilege elevation attacks. If your own account gets compromised somehow, there will be a direct way to take over the entire server.

If this process needs to be run regularly and you have sufficient access on the server, a better option would be to use cron to run the job. You could also use init.d to start your process in the background, and it can terminate as soon as it's done.

Dana the Sane

There's also the daemon command of the open-source libslack package.

daemon is quite configurable and takes care of all the tedious daemon stuff such as automatic restart, logging, and pidfile handling.

janv
  • This is particularly useful, because it will even let you do a bad thing, something that these other commands won't let you do (because they don't give you the chance to type in your password): sudo daemon xed – mmortal03 Aug 22 '22 at 09:50

Use screen. It is very simple to use and works like vnc for terminals. http://www.bangmoney.org/presentations/screen.html

Deon

I would also go for the screen program (I know someone else already answered screen, but this is a completion).

Not only can `&`, `Ctrl+Z` `bg` `disown`, `nohup`, etc. give you a nasty surprise in that when you log off the job will still be killed (I don't know why, but it did happen to me, and it didn't bother me because I switched to using screen; I guess anthonyrisinger's double-forking solution would fix that), screen also has a major advantage over just backgrounding:

screen will background your process without losing interactive control of it

And by the way, this is a question I would never ask in the first place :) ... I have used screen since my beginnings of doing anything in any unix ... I (almost) NEVER work in a unix/linux shell without starting screen first ... and I should stop now, or I'll start an endless presentation of what good screen is and what it can do for you ... look it up yourself, it is worth it ;)

THESorcerer
  • PS anthonyrisinger, you are good, i give you that but ... 30 years ? i bet that is a solution when &, bg, nohup or screen was not there yet, and no offense i appreciate your knowledge but that is far too complicated to use it :) – THESorcerer May 30 '12 at 09:25
  • [(aside: **see Tmux**)](http://tmux.sourceforge.net/) although this vastly predates me [1987], `&` (asynchronous execution) was introduced by [the Thompson shell in 1971](http://en.wikipedia.org/wiki/Thompson_shell), for the **first** version of UNIX ... [so it literally "has always been"](http://v6shell.org/man/osh.1.html) ;-) alas, I was too conservative -- it's actually been 41 years. – anthonyrisinger Sep 24 '12 at 04:08

Append this string to your command: >&- 2>&- <&- &. >&- means close stdout. 2>&- means close stderr. <&- means close stdin. & means run in the background. This works to programmatically start a job via ssh, too:

$ ssh myhost 'sleep 30 >&- 2>&- <&- &'
# ssh returns right away, and your sleep job is running remotely
$
tbc0

The accepted answer suggests using nohup. I would rather suggest using pm2. Using pm2 over nohup has many advantages, like keeping the application alive, maintaining log files for the application, and lots of other features. For more detail check this out.

To install pm2 you first need npm. For a Debian-based system:

sudo apt-get install npm

and for Red Hat:

sudo yum install npm

Or you can follow these instructions. After installing npm, use it to install pm2:

npm install pm2@latest -g

Once it's done, you can start your application with:

$ pm2 start app.js              # Start, Daemonize and auto-restart application (Node)
$ pm2 start app.py              # Start, Daemonize and auto-restart application (Python)

For process monitoring use following commands:

$ pm2 list                      # List all processes started with PM2
$ pm2 monit                     # Display memory and cpu usage of each app
$ pm2 show [app-name]           # Show all informations about application

Manage processes using either app name or process id or manage all processes together:

$ pm2 stop     <app_name|id|'all'|json_conf>
$ pm2 restart  <app_name|id|'all'|json_conf>
$ pm2 delete   <app_name|id|'all'|json_conf>

Log files can be found in

$HOME/.pm2/logs #contain all applications logs

Binary executable files can also be run using pm2, but you have to make a change in the JSON file: change `"exec_interpreter" : "node"` to `"exec_interpreter" : "none"` (see the attributes section).

#include <stdio.h>
#include <unistd.h>  // POSIX header, not part of standard C
int main(void)
{
    printf("Hello World\n");
    sleep (100);
    printf("Hello World\n");

    return 0;
}

Compiling above code

gcc -o hello hello.c  

and run it with pm2 in the background:

pm2 start ./hello
haccks

If you're willing to run X applications as well - use xpra together with "screen".


I used the screen command. This link has details on how to do this:

https://www.rackaid.com/blog/linux-screen-tutorial-and-how-to/#starting

Shravan Ramamurthy

On systemd/Linux, systemd-run is a nice tool to launch session-independent processes.

Eugene Shatsky