
I have a bunch of jobs that need to be submitted, and I want their outputs to be written to a file instead of being printed to the terminal. To this end, I use "nohup ./do.sh &" inside a shell script "1.sh", which I then execute with "./1.sh &" on the terminal. However, my jobs run on a computational cluster and the internet connection here is patchy. Is there a way to run the whole job submission process in the background, so that an internet disruption does not kill the process?

The content of 1.sh:

while [ some condition ]        # placeholder loop condition
do
    ifort -O3 iDFT_grafted.f90  # compile the source, producing a.out
    nohup ./do.sh &             # launch the job in the background, immune to hangups
    cd ../
done
cd ../

The content of do.sh

./a.out

Additional information: I tried "nohup ./1.sh &", but this writes the output of all the individual jobs, which run in separate folders, into a single nohup.out file. I want each job's output in a nohup.out file in its own folder.

Update: The problem has been sorted out. The trick is simply to use "./do.sh > output.txt &" inside the job submission script "1.sh" and to submit the job script with "nohup ./1.sh &". This lets the job submission continue even after I exit the terminal.
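
For reference, a revised 1.sh based on this update might look like the sketch below. The loop condition is the question's placeholder, and the step that enters each job's folder is an assumption, since the original script only shows the cd ../ back out:

    while [ some condition ]          # placeholder condition from the question
    do
        cd job_folder                 # assumed: enter the job's folder (only "cd ../" appears in the original)
        ifort -O3 iDFT_grafted.f90    # compile, producing a.out
        ./do.sh > output.txt &        # each job writes its output to output.txt in its own folder
        cd ../
    done

The script is then started with "nohup ./1.sh &", so the loop itself keeps running after the SSH session drops, while each background job writes to the output.txt in its own folder.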

Dabu
  • Read about [crontabs](https://stackoverflow.com/tags/cron/info). Check if you can create one on your system by executing `crontab -e`. Good luck. – shellter Jun 18 '23 at 19:06
  • [How to prevent a background process from being stopped after closing SSH client in Linux](https://stackoverflow.com/q/285015/4154375) might be useful. – pjh Jun 18 '23 at 21:04
  • `nohup` only creates `nohup.out` if you didn't redirect terminal output explicitly (see the illustration below) – jhnc Jun 19 '23 at 00:48
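
To illustrate that last comment: nohup falls back to nohup.out only when the command's stdout still points at the terminal, so an explicit redirection takes precedence (the filenames here are only examples):

    nohup ./do.sh &                     # stdout is the terminal, so nohup appends it to ./nohup.out
    nohup ./do.sh > output.txt 2>&1 &   # output is already redirected, so no nohup.out is created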

1 Answer


Redirect stdout and stderr to a suitably named file, like so:

nohup ./script.sh > script_output.out 2>&1 &
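
Applied to the scripts in the question, the same redirection can be done per job so that each folder keeps its own log. This is only a sketch, assuming the loop in 1.sh is already inside the job's folder at that point:

    # inside the loop of 1.sh, in place of the bare "nohup ./do.sh &"
    nohup ./do.sh > nohup.out 2>&1 &    # this job's stdout and stderr go to nohup.out in the current folder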