
I am running a Python script that uses multiprocessing.
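
The script does, roughly, something like this (a simplified sketch; the real per-item work is more involved):

# simplified sketch of what ariel_test_linear.py does
import multiprocessing

def work(item):
    return item * item  # placeholder for the real computation

if __name__ == '__main__':
    pool = multiprocessing.Pool(16)  # one worker per core requested with -c 16
    results = pool.map(work, range(100))
    pool.close()
    pool.join()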

My bash script looks like this:

#!/bin/bash

#SBATCH -N 1
#SBATCH -c 16
#SBATCH -n 1
#SBATCH --mem-per-cpu=1G
#SBATCH --time=0-02:00:00
#SBATCH -C skylake
#SBATCH --output=my.stdout
#SBATCH --job-name="Ariel Test"
#SBATCH --mail-user=myname@company.com
#SBATCH --mail-type=BEGIN,END,FAIL,ARRAY_TASKS



# Put commands for executing job below this line
module load Python/2.7.13-foss-2017a
module load cx_Oracle
module load pandas/0.19.1-foss-2017a-Python-2.7.13
python /home/mp9293q/python_scripts/ariel_test_linear.py 

I am just wondering: what is the impact of -n (number of tasks) on this script, especially as I have four commands to execute in the batch script?

I am assuming that, as I have -n 1, the script is just run sequentially from top to bottom, and only once.

If I put -n 2 in the batch script, would the entire set of instructions run twice? What would be the point of that? Wouldn't you need to parameterise the Python script somehow for each task execution, and if so, how would you do this?
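
For example (a hypothetical sketch of what I imagine, not something I've tested), would each task be launched with srun and read its own rank from the SLURM_PROCID environment variable?

# hypothetical: with -n 2, srun launches the script once per task
srun python /home/mp9293q/python_scripts/ariel_test_linear.py

# and inside the script, each copy would pick its share of the work by rank:
import os
rank = int(os.environ.get('SLURM_PROCID', 0))  # 0 or 1 when -n 2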

smackenzie
  • Maybe helpful: https://stackoverflow.com/questions/35280285/difference-between-slurm-sbatch-n-and-c – rtoijala Jun 17 '19 at 11:39
  • Also: https://stackoverflow.com/questions/39186698/what-does-the-ntasks-or-n-tasks-does-in-slurm – rtoijala Jun 17 '19 at 11:41
  • @rtoijala thanks, any advice on how you would parametrise tasks if n > 1? – smackenzie Jun 17 '19 at 11:53
  • 1
    That entirely depends on what you want your tasks to do. Usually I use n>1 when running software that uses MPI for parallelization. If you just have many nearly identical tasks that you want done in parallel, I would look at SLURM's array jobs syntax. – rtoijala Jun 17 '19 at 11:59
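
A minimal sketch of the array-job approach mentioned in the last comment (the range 0-9 and the way the task ID is passed are illustrative assumptions, not tested settings):

#!/bin/bash
#SBATCH --array=0-9
#SBATCH -N 1
#SBATCH -c 16
#SBATCH --mem-per-cpu=1G

# each array element runs this once, with its own SLURM_ARRAY_TASK_ID (0..9),
# which the script can use to select its slice of the work
python /home/mp9293q/python_scripts/ariel_test_linear.py $SLURM_ARRAY_TASK_ID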

0 Answers