Questions tagged [openmpi]

Open MPI is an open-source implementation of the Message Passing Interface (MPI), a standard API for distributed-memory parallel programming.

The Open MPI Project is an open-source implementation of the Message Passing Interface, a standardized and portable message-passing system designed to leverage the computational power of massively parallel, distributed-memory computers.

Message passing is one of the most widely used programming models for distributed memory, and MPI is the most widely used message-passing API. It offers two types of communication between processes: point-to-point and collective. MPI runs on both distributed-memory and shared-memory architectures.

An MPI application usually consists of multiple processes running simultaneously, typically on different CPUs, that communicate with each other. Such applications are normally written in the SPMD (single program, multiple data) style, although most MPI implementations also support the MPMD (multiple program, multiple data) model; see the sketch below.
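A minimal SPMD sketch in C (the file name, message value, and the mpirun command in the comment are illustrative): every process runs the same executable, learns its rank, and ranks 0 and 1 exchange a message with point-to-point calls.

    /* hello_mpi.c - compile with e.g. "mpicc hello_mpi.c -o hello_mpi"
       and run with e.g. "mpirun -np 2 ./hello_mpi"                          */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id, 0..size-1 */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */

        if (rank == 0 && size > 1) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* point-to-point */
        } else if (rank == 1) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", payload);
        }

        MPI_Finalize();                         /* shut down the MPI runtime    */
        return 0;
    }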

More information about the MPI standard may be found on the official MPI Forum website, in the official documentation, and in the Open MPI documentation.

1341 questions
160
votes
5 answers

MPICH vs OpenMPI

Can someone elaborate on the differences between the OpenMPI and MPICH implementations of MPI? Which of the two is a better implementation?
lava
  • 1,945
  • 2
  • 14
  • 15
60
votes
11 answers

fatal error: mpi.h: No such file or directory #include <mpi.h>

When I compile my script with only #include <mpi.h> it tells me that there is no such file or directory. But when I include the path to mpi.h as #include "/usr/include/mpi/mpi.h" (the path is correct) it returns: In file included from…
user2804865
  • 976
  • 2
  • 9
  • 15
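A common fix for this class of error is to compile with the MPI wrapper compiler (mpicc), which supplies the include and link flags itself, rather than hard-coding the header path in the source. A minimal sketch, with an illustrative file name:

    /* test.c - keep the plain include; let the wrapper find the header.
       Compile with:  mpicc test.c -o test
       (the wrapper adds the -I/-L/-l options needed for MPI)            */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        MPI_Finalize();
        return 0;
    }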
38
votes
1 answer

OpenMPI-bin error after update (K)Ubuntu 18.04 to 20.04

I have just upgraded my Kubuntu from 18.04 to 20.04. Unfortunately there is an error that keeps showing up every time I use apt upgrade or install something with apt. The error is: update-alternatives: error: /var/lib/dpkg/alternatives/mpi…
Muhammad Radifar
  • 1,267
  • 1
  • 7
  • 8
37
votes
4 answers

How do you check the version of OpenMPI?

I'm compiling my code on a server that has OpenMPI, but I need to know which version I'm on so I can read the proper documentation. Is there a constant in <mpi.h> that I can print to display my current version?
Zak
  • 12,213
  • 21
  • 59
  • 105
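For reference, a sketch of printing the version from inside a program: Open MPI's <mpi.h> defines the OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, and OMPI_RELEASE_VERSION macros, and MPI-3 provides MPI_Get_library_version(); from the shell, mpirun --version or ompi_info reports the same information. The file name is illustrative.

    /* version.c - report the Open MPI / MPI library version */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char lib[MPI_MAX_LIBRARY_VERSION_STRING];
        int len;

        MPI_Init(&argc, &argv);
    #ifdef OMPI_MAJOR_VERSION   /* Open MPI-specific macros from <mpi.h> */
        printf("Open MPI %d.%d.%d\n",
               OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
    #endif
        MPI_Get_library_version(lib, &len);   /* MPI-3, implementation-agnostic */
        printf("%s\n", lib);
        MPI_Finalize();
        return 0;
    }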
34
votes
2 answers

mpirun - not enough slots available

Usually when I use mpirun, I can "overload" it, using more processors than there actually are on my computer. For example, on my four-core mac, I can run mpirun -np 29 python -c "print 'hey'" no problem. I'm on another machine now, which is…
kilojoules
  • 9,768
  • 18
  • 77
  • 149
33
votes
3 answers

When do I need to use MPI_Barrier()?

I wonder when I need to use a barrier. Do I need one before/after a scatter/gather, for example? Or does OMPI ensure all processes have reached that point before scatter/gather-ing? Similarly, after a broadcast can I expect all processes to already…
Jiew Meng
  • 84,767
  • 185
  • 495
  • 805
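A sketch of the distinction usually drawn (the timing use is illustrative): a blocking collective such as MPI_Bcast already guarantees that, when it returns on a rank, that rank's buffer is ready to use, so no extra barrier is needed for the data itself; MPI_Barrier is for synchronization MPI cannot see, such as timing or file-system side effects.

    /* barrier.c - where MPI_Barrier is and is not needed */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int value = 0, rank;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) value = 123;

        /* No barrier required around the broadcast: when MPI_Bcast returns
           on a rank, that rank's copy of "value" is valid.                  */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Barrier used here purely so the timed section starts together.    */
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        /* ... work being measured ... */
        t1 = MPI_Wtime();
        printf("rank %d: %.6f s\n", rank, t1 - t0);

        MPI_Finalize();
        return 0;
    }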
24
votes
1 answer

difference between MPI_Send() and MPI_Ssend()?

I know MPI_Send() is a blocking call, which waits until it is safe to modify the application buffer for reuse. To make the send call synchronous (there should be a handshake with the receiver), we need to use MPI_Ssend(). I want to know the…
Ankur Gautam
  • 1,412
  • 5
  • 15
  • 27
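A short sketch of the semantic difference (message values and tags are illustrative): MPI_Send may return as soon as the buffer is reusable, which for small messages often means the data was merely buffered internally, while MPI_Ssend returns only after the matching receive has started.

    /* ssend.c - MPI_Send vs MPI_Ssend; run with at least 2 ranks */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, msg = 7;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) { MPI_Finalize(); return 0; }   /* needs 2 ranks */

        if (rank == 0) {
            /* May complete locally: the library is free to buffer the message,
               so returning says nothing about the receiver.                    */
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

            /* Synchronous send: does not return until rank 1 has started the
               matching receive, i.e. it implies a handshake.                   */
            MPI_Ssend(&msg, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&msg, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }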
21
votes
2 answers

Kubernetes and MPI

I want to run an MPI job on my Kubernetes cluster. The context is that I'm actually running a modern, nicely containerised app but part of the workload is a legacy MPI job which isn't going to be re-written anytime soon, and I'd like to fit it into…
Ben
  • 843
  • 8
  • 21
20
votes
1 answer

Using Pytorch's Multiprocessing along with Distributed

I am trying to spawn a couple of processes using PyTorch's multiprocessing module within an OpenMPI distributed back-end. What I have is the following code: def run(rank_local, rank, world_size, maingp): print("I WAS SPAWNED ", rank_local, " OF ",…
usamazf
  • 3,195
  • 4
  • 22
  • 40
20
votes
8 answers

DLL load failed: The specified module could not be found when doing "from mpi4py import MPI"

I am trying to use mpi4py 1.3 with Python 2.7 on 64-bit Windows 7. I downloaded the installable version from here, which includes OpenMPI 1.6.3, so in the installation directory (*/Python27\Lib\site-packages\mpi4py\lib) the following libraries exist:…
Aso Agile
  • 417
  • 1
  • 6
  • 13
18
votes
4 answers

How to determine MPI rank/process number local to a socket/node

Say I run a parallel program using MPI. The execution command mpirun -n 8 -npernode 2 launches 8 processes in total, that is, 2 processes per node and 4 nodes in total (OpenMPI 1.5), where a node comprises 1 CPU (dual core) and network…
ritter
  • 7,447
  • 7
  • 51
  • 84
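With an MPI-3 implementation (newer than the Open MPI 1.5 mentioned in the question), one portable way to obtain the node-local rank is to split the communicator by shared-memory domain with MPI_Comm_split_type; Open MPI's mpirun also exports it as the OMPI_COMM_WORLD_LOCAL_RANK environment variable. A sketch, with an illustrative file name:

    /* local_rank.c - derive the rank local to the node (MPI-3) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int world_rank, local_rank;
        MPI_Comm node_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* All ranks that can share memory (i.e. live on the same node) land in
           the same sub-communicator; the rank within it is the local rank.     */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &local_rank);

        printf("world rank %d has node-local rank %d\n", world_rank, local_rank);

        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }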
17
votes
1 answer

MPI_Rank returns the same process number for all processes

I'm trying to run this sample hello world program with OpenMPI and mpirun on Debian 7. #include <stdio.h> #include <mpi.h> int main (int argc, char **argv) { int nProcId, nProcNo; int nNameLen; char…
hamedkh
  • 909
  • 3
  • 18
  • 35
16
votes
1 answer

Syntax of the --map-by option in openmpi mpirun v1.8

Looking at the following extract from the Open MPI manual: --map-by: Map to the specified object, defaults to socket. Supported options include slot, hwthread, core, L1cache, L2cache, L3cache, socket, numa, board, node, sequential,…
el_tenedor
  • 644
  • 1
  • 8
  • 19
15
votes
3 answers

Unable to use all cores with mpirun

I'm testing a simple MPI program on my desktop (Ubuntu 16.04 LTS / Intel® Core™ i3-6100U CPU @ 2.30GHz × 4 / gcc 4.8.5 / OpenMPI 3.0.0) and mpirun won't let me use all of the cores on my machine (4). When I run: $ mpirun -n 4 ./test2 I get the…
James Smith
  • 327
  • 1
  • 3
  • 11
15
votes
1 answer

Difference between running a program with and without mpirun

I implemented a peer-to-peer connection in MPI using MPI_Open_port and MPI_Comm_accept. I run a server and a client program using rafael@server1:~$ mpirun server rafael@server2:~$ mpirun client on different computers. I noticed that…
zonksoft
  • 2,400
  • 1
  • 24
  • 36
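For context, a server-side sketch of the port/accept pattern referenced in the question (error handling omitted; the client side would call MPI_Comm_connect with the published port name; the file name is illustrative):

    /* server.c - accept a dynamic client connection on a named port (MPI-2) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char port_name[MPI_MAX_PORT_NAME];
        MPI_Comm client;

        MPI_Init(&argc, &argv);

        MPI_Open_port(MPI_INFO_NULL, port_name);        /* runtime picks the address  */
        printf("server listening on: %s\n", port_name); /* hand this string to client */

        /* Blocks until a client calls MPI_Comm_connect with the same port name;
           the result is an inter-communicator linking server and client.         */
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

        MPI_Comm_disconnect(&client);
        MPI_Close_port(port_name);
        MPI_Finalize();
        return 0;
    }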