I'm running a shell script, script.sh, in which each line changes into a folder and runs a Fortran code:
cd folder1 && ./code &
cd folder2 && ./code &
cd folder3 && ./code &
cd folder4 && ./code &
...
cd folder96 && ./code &
wait
cd folder97 && ./code &
...
cd folder2500 && ./code
There are around 2500 folders, and the code's outputs are independent of each other. I have access to 96 CPUs, and each job uses around 1% of the total CPU (roughly one core), so I run 96 jobs in parallel using `&` and the `wait` command. For various reasons, not all 96 jobs finish at the same time: some take 40 minutes, some 90 minutes, an important difference. So I was wondering whether the jobs that finish earlier could hand their CPUs to the remaining jobs, so that 96 jobs are always running and the total execution time is minimized.
I also tried GNU Parallel:

parallel -a script.sh
but it had the same issue, and I could not find anyone online with a similar problem.