
I have a single computer with Debian installed that gives me the following output from lscpu:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 45
Model name:            Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
Stepping:              7
CPU MHz:               2711.791
CPU max MHz:           2800.0000
CPU min MHz:           1200.0000
BogoMIPS:              4000.03
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-7,16-23
NUMA node1 CPU(s):     8-15,24-31
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb kaiser tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts

I want to maximize the performance of a code compiled with the Intel compiler and linked against MKL's BLAS, by tuning the parameters of mpirun and OpenMP. How can I exploit this machine to get the best performance?
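One MKL detail worth settling first: if the binary calls threaded MKL from each MPI rank, MKL's own thread pool has to fit inside the rank's core budget, or the ranks will oversubscribe their cores. A minimal sketch, assuming one rank per socket on this machine (MKL_NUM_THREADS and MKL_DYNAMIC are standard MKL environment variables):

```
# Assumption: 2 ranks (one per socket), each owning 8 physical cores.
export OMP_NUM_THREADS=8    # threads for the code's own OpenMP regions
export MKL_NUM_THREADS=8    # MKL defaults to OMP_NUM_THREADS; be explicit
export MKL_DYNAMIC=false    # keep MKL from silently shrinking its pool
```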

What I tried:

  • mpirun -np 16 code (doesn't use all resources according to htop)
  • mpirun -np 32 code (worst case)
  • export OMP_NUM_THREADS=2 ; mpirun -np 16 (doesn't use all resources according to htop)
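For reference, on this topology (2 sockets × 8 cores × 2 hardware threads) a common hybrid starting point is one MPI rank per socket with 8 OpenMP threads each, pinned to physical cores. A sketch of such a launch, assuming Open MPI's mpirun (Intel MPI expresses the same thing with I_MPI_PIN_DOMAIN) and the executable name code from above:

```
# One rank per socket, 8 threads per rank, hyper-threads left idle.
export OMP_NUM_THREADS=8
export OMP_PROC_BIND=close   # keep a rank's threads packed together
export OMP_PLACES=cores      # one OpenMP place per physical core
mpirun -np 2 --map-by ppr:1:socket:PE=8 --bind-to core ./code
```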
  • First you need to correctly pin the MPI tasks and OpenMP threads. This is a two-step tango: pin each MPI task to n cores and run n OpenMP threads per task. You generally want the threads of a given task on the same socket. If you do not pin the tasks and threads, the results will be suboptimal and non-reproducible. And if you mess up the pinning, the worst-case scenario is that all the OpenMP threads time-share the same core. – Gilles Gouaillardet Jun 19 '21 at 03:16
  • For thread pinning and possible NUMA effects, [this answer](https://stackoverflow.com/a/64415109/12939557) could help you. Finding the optimal number of OpenMP threads per MPI process is not easy (there have been plenty of research papers on it over the past few decades) as it is highly dependent on both the hardware and the application. Please note that level-1 and level-2 BLAS operations tend not to scale well because of NUMA effects and the rather scarce memory throughput of modern platforms (compared to their computing power). – Jérôme Richard Jun 19 '21 at 10:23
  • Is this purely computational work? Does the execution reside purely in L3 cache or does it have to access RAM or go to an external component? Depending on the answer, you may not see 100% core utilization and you are striving for something that will not occur. – D-Klotz Jun 21 '21 at 13:57
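Following up on Gilles Gouaillardet's pinning comment, the binding is worth verifying before any benchmarking. A sketch assuming Open MPI, whose mpirun understands --report-bindings (Intel MPI prints similar information with I_MPI_DEBUG=4):

```
# Print each rank's core binding at launch so pinning mistakes are visible.
mpirun -np 2 --map-by ppr:1:socket:PE=8 --bind-to core \
       --report-bindings ./code

# On the OpenMP side (OpenMP 5.0 runtimes; Intel's runtime also accepts
# KMP_AFFINITY=verbose), report where each thread actually lands:
export OMP_DISPLAY_AFFINITY=true
```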
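Since, as Jérôme Richard notes, the best ranks × threads split depends on the application, a pragmatic approach is to simply time each factorization of the 16 physical cores. A rough sketch under the same Open MPI assumption:

```
# Try 2x8, 4x4, 8x2 and 16x1 (ranks x threads), pinned to physical cores.
for np in 2 4 8 16; do
  t=$((16 / np))
  echo "== $np ranks x $t threads =="
  OMP_NUM_THREADS=$t OMP_PROC_BIND=close OMP_PLACES=cores \
    mpirun -np $np --map-by slot:PE=$t --bind-to core ./code
done
```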
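And D-Klotz's question can be checked empirically: hardware counters show whether the run is compute- or memory-bound. A sketch using perf, which counts the launched process tree (event names are common aliases and vary by kernel and CPU):

```
# Low instructions-per-cycle plus a high LLC miss rate means the code is
# waiting on RAM, and 100% core utilization is not a realistic target.
perf stat -e cycles,instructions,LLC-loads,LLC-load-misses \
    mpirun -np 2 ./code
```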

0 Answers