44

I have a shell script which

  1. shuffles a large text file (6 million rows and 6 columns)
  2. sorts the file based on the first column
  3. outputs 1000 files

So the pseudocode looks like this

file1.sh 

#!/bin/bash
for i in $(seq 1 1000)
do
  # placeholder: generate random numbers here, sort, and output to file$i.txt
  :
done

Is there a way to run this shell script in parallel to make full use of multi-core CPUs?

At the moment, ./file1.sh runs the 1000 iterations in sequence, which is very slow.

Thanks for your help.

Tony
  • If you find yourself needing to do anything non-trivial (e.g. multiprocessing etc.) in a shell script, it's time to rewrite it in a proper programming language. – Noufal Ibrahim Apr 05 '11 at 06:17

9 Answers

93

Another very handy way to do this is with GNU parallel, which is well worth installing if you don't already have it; it is invaluable if the tasks don't necessarily take the same amount of time.

seq 1000 | parallel -j 8 --workdir "$PWD" ./myrun {}

will launch ./myrun 1, ./myrun 2, etc., making sure 8 jobs at a time are running. It can also take lists of nodes if you want to run on several nodes at once, e.g. in a PBS job; our instructions to our users for how to do that on our system are here.

Updated to add: You want to make sure you're using GNU parallel, not the more limited utility of the same name that comes in the moreutils package (the divergent history of the two is described here).
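
For the task in the question, ./myrun stands for a small worker script that takes the job index as its only argument. A minimal sketch of such a script (the input file name and the exact shuffle/sort step are assumptions, one interpretation of the question's pseudocode, not something given in this answer):

#!/bin/bash
# myrun - hypothetical worker script; $1 is the job index passed in by parallel
i="$1"
# assumed per-job work: shuffle the rows of the big input file,
# sort on the first column, and write one output file per job
shuf input.txt | sort -k1,1 > "file$i.txt"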

Jonathan Dursi
  • @Jonathan- Thanks for the pointer. I will ask my system administrator to install GNU parallel. It seems a useful utility to have on the system. Actually I was going to post the question on PBS, but you have already answered it. Cheers – Tony Apr 05 '11 at 13:54
  • If your sysadmin will not install it, it is easy to install it yourself: simply copy the Perl script 'parallel' to a dir in your path and you are done. No compilation or installation of libraries needed. – Ole Tange Apr 13 '11 at 14:02
  • @Ole - Thanks for the tip. My sysadmin has agreed to install it on the system. – Tony Apr 13 '11 at 17:33
  • @Jonathan- When you refer to ./myrun, is it the modified script with "&" and "wait" or without them, i.e. the original shell script? Cheers – Tony Apr 13 '11 at 17:39
  • It's just the unmodified script. parallel does the bookkeeping work of spawning the jobs off and waiting until they're all done. Having said that, test and make sure everything works on small numbers of tasks before running 1000 at once... (I'm sure you would have done that, as it's obvious, but you'd be amazed how many people don't and go straight for the full-scale run as their "test".) – Jonathan Dursi Apr 13 '11 at 19:01
  • @Jonathan Dursi, strange, but I have no -W option in my parallel version. I have installed it from `moreutils` package. – Roman Newaza Apr 29 '13 at 03:14
  • @RomanNewaza , It looks like in recent versions `-W` is gone and you have to use `--workdir` ; I'll update my answer accordingly, thanks for pointing this out. – Jonathan Dursi Apr 29 '13 at 12:42
  • In my version of `parallel` I have `[-j maxjobs] [-l maxload] [-i] [-n]` in the man `OPTIONS` section. `[-c]` is present only in the `EXAMPLES`. Seems like its man page is lousy. – Roman Newaza Apr 30 '13 at 01:32
  • It turns out the moreutils package includes not gnu-parallel but Tollef's; the history of the evolution of the tools is at https://www.gnu.org/software/parallel/history.html – Jonathan Dursi May 10 '13 at 14:51
  • installing on debian/ubuntu: `apt install parallel` – Daniel Alder Feb 26 '19 at 11:54
46

Check out bash subshells; these can be used to run parts of a script in parallel.

I haven't tested this, but this could be a start:

#!/bin/bash
for i in $(seq 1 1000)
do
   ( : ) &  # placeholder subshell: generate random numbers, sort, and output to file$i.txt
   if (( $i % 10 == 0 )); then wait; fi # Limit to 10 concurrent subshells.
done
wait
Anders Lindahl
  • That will kick off all the thousand tasks in parallel, which might lead to too much swapping / contention for optimal work throughput, but it's certainly a reasonable and easy way to get started. – Tony Delroy Apr 05 '11 at 06:21
  • Good point! The simplest solution would be to have an outer loop that limits the number of started subshells and `wait` between them. – Anders Lindahl Apr 05 '11 at 06:22
  • @Anders: or just slip an "if (( $i % 10 == 0 )); then wait; fi" before the "done" in your loop above... – Tony Delroy Apr 05 '11 at 06:29
  • @Anders Thanks- It works quite well together with Tony's suggestions. – Tony Apr 05 '11 at 07:23
  • I've added Tony's suggestion to the answer above. – Anders Lindahl Apr 05 '11 at 08:13
  • Cool... +1 from me too then :-) – Tony Delroy Apr 05 '11 at 08:52
  • @Tony: I think it makes sense to leave it in. `wait` with no subshells running seems to do nothing, and if we choose a number of concurrent subshells that isn't a factor of the number of tasks to run we might get active subshells still running when the loop ends. – Anders Lindahl Apr 05 '11 at 09:58
  • This solution works best if all the jobs take exactly the same time. If the jobs do not take the same time you will waste CPU time waiting for one of the long jobs to finish. In other words: It will not keep 10 jobs running at the same time at all times. – Ole Tange Apr 13 '11 at 13:58
  • This works well if each subshell handles a different task. What if they all operate on a single file? Task allocation may be needed, but how to do that decently? – Lewis Chan Jun 20 '18 at 09:02
  • See my answer below if you're worried about wasting CPU time like me. – Robert J Apr 16 '22 at 22:49
17

To make things run in parallel, you put '&' at the end of a shell command to run it in the background; `wait` with no arguments then waits until all background processes have finished. So, maybe kick off 10 in parallel, then wait for them, then do another ten. You can do this easily with two nested loops.
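
A minimal sketch of the two nested loops, assuming a hypothetical make_file command that stands in for the real per-file work:

#!/bin/bash
for batch in $(seq 0 99); do          # 100 batches of 10 jobs = 1000 files
    for j in $(seq 1 10); do
        i=$((batch * 10 + j))
        make_file "$i" &              # hypothetical command that writes file$i.txt
    done
    wait                              # block until this batch of 10 has finished
done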

Tony Delroy
  • Many thanks for your suggestions. All CPUs are now working. Do you have any idea how to make it run across the nodes? I am submitting the job to High Performance Computing using PBS with nodes=2:ppn=8, but only 1 node is working. – Tony Apr 05 '11 at 07:21
  • @Tony: I'd never heard of PBS until now... sounds interesting, but I've no idea how to use it. Sorry! – Tony Delroy Apr 05 '11 at 07:44
  • For the PBS question and across nodes, see http://stackoverflow.com/questions/5453427/does-a-pbs-batch-system-move-multiple-serial-jobs-across-nodes . – Jonathan Dursi Apr 05 '11 at 12:15
  • How does WAIT work? Can you update your answer with an example? I want to run several threads in a certain function but the next function must not start until all these threads are finished. – d-b Apr 13 '18 at 17:48
  • @d-b `wait` waits for background *processes* to finish, not *threads*. For example, `for FILE in huge.txt massive.log enormous.xml; do scp $FILE someuser@somehost:/tmp/ & done; wait; echo "finished"` would run three `scp` (secure copy) processes to copy three files in parallel to a remote host's `/tmp` directory, and only output `"finished"` after all three copies were completed. – Tony Delroy Apr 14 '18 at 00:34
  • Sorry for the confusion about threads/processes. I was referring to "&" and of course that is processes in everyday speak. – d-b Apr 14 '18 at 07:35
9

There is a whole list of programs that can run jobs in parallel from a shell, including comparisons between them, in the documentation for GNU parallel. There are many, many solutions out there. The good news is that they are probably quite efficient at scheduling jobs, so that all the cores/processors are kept busy at all times.

Eric O. Lebigot
4

There is a simple, portable program that does just this for you: PPSS. PPSS automatically schedules jobs for you by checking how many cores are available and launching a new job whenever another one finishes.

Eric O. Lebigot
1

While the previous answers do work, IMO they can be hard to remember (except of course GNU parallel).

I am somewhat partial to a similar approach to the above `(( $i % 10 == 0 )) && wait`. I have also seen this written as `((i=i%N)); ((i++==0)) && wait`

where N is the number of jobs that you want to run in parallel and i is a counter of jobs launched so far.
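
For context, a minimal sketch of that idiom in a full loop (the make_file command is a hypothetical stand-in for the real per-file work):

N=10                                  # max number of concurrent jobs
for n in $(seq 1 1000); do
    ((i=i%N)); ((i++==0)) && wait     # after every N launches, wait for the whole batch
    make_file "$n" &                  # hypothetical command that writes file$n.txt
done
wait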

While the above approach works, it has diminishing returns: you have to wait for all processes in a batch to quit before a new set of processes starts, and this wastes CPU time whenever the tasks do not all take the same amount of time (which in practice is every task). In other words, with the previously described approach the number of running tasks must reach 0 before any new tasks are started.

For me, this issue became apparent when executing a task with an inconsistent execution time (e.g. executing a request to purge user information from a database: the requestee might or might not exist, and if they do exist there can be orders of magnitude of difference in the number of records associated with different requestees). What I noticed was that some requests would be fulfilled immediately, while others would be queued, waiting for one slightly longer-running task to succeed. As a result, a job that would take hours or days to complete with the previously described approach took only tens of minutes with the approach below.

I think that the approach below is a better solution for maintaining a constant task load on systems without GNU parallel (e.g. vanilla macOS), and hopefully it is easier to remember than the alphabet soup above:

WORKER_LIMIT=6 # or whatever - remember to not bog down your system

while read -r LINE; do # this could be any kind of loop
    # there's probably a more elegant approach to getting the number of background processes.
    BACKGROUND_PROCESSES="$(jobs -r | wc -l | grep -Eow '[0-9]+')"

    if [[ $BACKGROUND_PROCESSES -eq $WORKER_LIMIT ]]; then
        # wait for 1 job to finish before starting a new one
        # (note: `wait -n` requires bash 4.3 or newer)
        wait -n
    fi

    # run something in a background shell
    python example.py -item "$LINE" &
done < something.list

# wait for all background jobs to finish
wait
Robert J
  • What is the purpose of `grep`? `wc -l` already returns a single number I think. I would use `if [[ $(jobs | wc -l) -ge $WORKER_LIMIT ]]; then` instead. And what I suggest in addition is to add `trap "jobs -p | xargs kill 2>/dev/null" EXIT` at the top of the script which allows CTRL+C to kill all background jobs (or they would run until they are finished). – mgutt Apr 20 '23 at 13:09
  • Use this if bash does not support `-n`: `while [[ $(jobs | wc -l) -ge $WORKER_LIMIT ]]; do sleep 1; done` – mgutt Apr 20 '23 at 13:50
0
IDLE_CPU=1
NCPU=$(nproc)

int_childs() {
    trap - INT
    while IFS=$'\n' read -r pid; do
        kill -s SIGINT -$pid
    done < <(jobs -p -r)
    kill -s SIGINT -$$
}

# cmds is an array that holds the commands
# the complex thing is display, which will handle all cmd output
# and serialize it correctly

trap int_childs INT
{
    exec 2>&1
    set -m

    if [ $NCPU -gt $IDLE_CPU ]; then
        for cmd in "${cmds[@]}"; do
            $cmd &
            while [ $(jobs -pr |wc -l) -ge $((NCPU - IDLE_CPU)) ]; do
                wait -n
            done
        done
        wait

    else
        for cmd in "${cmds[@]}"; do
            $cmd
        done
    fi
} | display
Zakaria
0

You might want to take a look at runp. runp is a simple command line tool that runs (shell) commands in parallel. It's useful when you want to run multiple commands at once to save time. It's easy to install since it's a single binary. It's been tested on Linux (amd64 and arm) and MacOS/darwin (amd64).
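
If runp reads one shell command per line from standard input (an assumption based on the description above, not verified here), the task from the question might look something like this, reusing the hypothetical ./myrun worker script from the GNU parallel answer:

# assumption: runp accepts one command per line on stdin and runs them in parallel
seq 1 1000 | sed 's|^|./myrun |' | runp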

jreisinger
-2

Generating random numbers is easy. Suppose you have a huge file, like a shop database, and you want to rewrite that file on some specific basis. My idea was to count the number of cores, split the file into that many parts, and create a script.cfg file plus split.sh and recombine.sh. split.sh splits the file into as many parts as there are cores, clones script.cfg (the script that changes things in that huge file) once per core, makes the clones executable, and does a search-and-replace in each clone for the variables that tell it which part of the file to process, then runs them in the background. When a clone is done it generates a clone$core.ok file, so a loop recombines the partial results into a single file only when all the .ok files have been generated. It could be done with `wait`, but I fancy my way.

http://www.linux-romania.com/product.php?id_product=76 (look at the bottom; it is partially translated into English). This way I can process 20,000 articles with 16 columns in 2 minutes (quad core) instead of 8 (single core). You have to keep an eye on the CPU temperature, because all cores are running at 100%.