
In my program, I want only a for-loop to be run in parallel. The rest of the code should run serially. Even though this might not be the best way, I want to use the approach described here (see answer by Chris):

link

So in short, I let rank 0 do the serial part.

Now the problem is that I have several loops including a while loop. The structure is as follows:

# serial part
# start of while loop {
     # parallel part
# end of while loop
# end of serial part

The code structure looks like this:

boost::mpi::environment env;
boost::mpi::communicator comm;

if (comm.rank() == 0)
{
    while (...)
    {

    } // !!!! end the if loop here?

    // start parallel for loop here
    for (...) {}

    // continue serial part
    if (comm.rank() == 0)
    {
        //...

    } // end of while loop
} // end of if loop

Is it right to close the serial part (the if block) directly after the while loop is opened?
And secondly, how do I tell the other ranks to wait for rank 0 to finish?

beginneR

  • Maybe OpenMP is better suited for your problem. Are you running or expecting to run your program on more than one compute node simultaneously? – Hristo Iliev Oct 21 '15 at 12:26
  • 1
    As you've written your pseudo-code everything is guarded by the first `if(comm.rank()==0)` statement so will be ignored by all the other processes. So I'm not sure what the desired structure of the code is. Perhaps you want a sequence of serial - parallel - serial rather than trying to nest a parallel block inside a serial block ? Or perhaps you want something else ? – High Performance Mark Oct 21 '15 at 12:26
  • @HristoIliev The reason why I didn't use OpenMP was because of problems with global variables. But I have to say that I am neither an expert in OpenMP nor MPI. – beginneR Oct 21 '15 at 14:01
  • @HighPerformanceMark I would like to nest a parallel block inside a serial block if that's possible. The while criterion should be tested by rank 0 but the for loop inside the while loop should be run in parallel. – beginneR Oct 21 '15 at 14:03
  • Alright, thank you very much. I'll try to figure out a way – beginneR Oct 21 '15 at 14:32
  • A short follow-up question: How do you get the value of a variable of one specific process, say rank 0? – beginneR Oct 21 '15 at 14:54
  • 1
    The type of parallelism you're looking for can be found in OpenMP. Not in MPI. To answer your short follow-up question: in MPI, you can only access the value of a variable on your process (unless you specifically receive another process' variable with a matching send). In OpenMP, if they're private, you cannot. If they're shared then you always have access. – NoseKnowsAll Oct 21 '15 at 19:53
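The matching send/receive exchange mentioned in the last comment might look like the following in Boost.MPI. This is only a sketch: the tag, the variable name, and the value are illustrative placeholders, not anything from the question.

```cpp
#include <boost/mpi.hpp>
#include <iostream>

namespace mpi = boost::mpi;

int main()
{
    mpi::environment env;
    mpi::communicator comm;

    double value = 0.0;
    const int tag = 0;                      // arbitrary message tag

    if (comm.rank() == 0) {
        value = 42.0;                       // rank 0 owns the "real" value
        for (int r = 1; r < comm.size(); ++r)
            comm.send(r, tag, value);       // explicit send to each other rank
    } else {
        comm.recv(0, tag, value);           // blocks until rank 0's send arrives
    }

    // Alternatively, one collective call replaces the whole exchange:
    // mpi::broadcast(comm, value, 0);

    std::cout << "rank " << comm.rank() << " sees " << value << "\n";
    return 0;
}
```

After this exchange, every rank holds its own copy of `value`; there is no shared variable, only explicit copies moved by MPI calls.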

1 Answer


This:

# serial part
# start of while loop 
     # parallel part
# end of while loop
# end of serial part

isn't how MPI works. There are no serial or parallel regions in an MPI program.

When you launch an MPI program, using mpiexec or mpirun, you launch a fleet of a fixed number* of identical* serial programs which may communicate between themselves from time to time using calls to the MPI library. The running of these individual serial programs initially differs only* in their rank, and each must make decisions on how to run based on that. Each process running one of these serial programs sees only its own variables, and work must be done - in the form of calling MPI functions - to communicate those values between the different processes.


* usually
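Given that model, the pattern asked about is usually expressed by having *every* rank enter the while loop, with rank 0 evaluating the condition and broadcasting the result to the others. A minimal sketch in Boost.MPI follows; the loop length, the iteration count, and the "work" are hypothetical placeholders, not code from the question:

```cpp
#include <boost/mpi.hpp>

namespace mpi = boost::mpi;

int main()
{
    mpi::environment env;
    mpi::communicator comm;

    int iter = 0;
    bool keep_going = true;

    while (true) {
        // Only rank 0 evaluates the loop condition...
        if (comm.rank() == 0)
            keep_going = (++iter < 3);      // stand-in for the real test

        // ...and every rank learns the decision. broadcast is a collective
        // call, so the other ranks block here until rank 0 reaches it.
        mpi::broadcast(comm, keep_going, 0);
        if (!keep_going) break;

        // "Parallel" part: each rank takes its own slice of the iterations.
        const int n = 1000;                 // hypothetical loop length
        for (int i = comm.rank(); i < n; i += comm.size()) {
            // ... work on iteration i ...
        }
        comm.barrier();                     // wait until everyone is done

        // Serial work between iterations again belongs to rank 0 alone.
        if (comm.rank() == 0) {
            // ...
        }
    }
    return 0;
}
```

The broadcast also answers the "how do the other ranks wait for rank 0" question: collectives such as `broadcast` and `barrier` implicitly synchronize, so no rank runs ahead of the decision made on rank 0.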

Jonathan Dursi