150

What is the difference between these two?

[A]

#pragma omp parallel
{ 
    #pragma omp for
    for(int i = 1; i < 100; ++i)
    {
        ...
    }
}

[B]

#pragma omp parallel for
for(int i = 1; i < 100; ++i)
{
   ...
}
Hyunjik Bae

7 Answers

110

These are equivalent.

#pragma omp parallel spawns a group of threads, while #pragma omp for divides loop iterations between the spawned threads. You can do both things at once with the fused #pragma omp parallel for directive.
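To make the distinction concrete, here is a minimal sketch (not part of the original answer; it assumes a compiler with OpenMP support, e.g. gcc or clang with -fopenmp) that prints which thread of the team executes each iteration:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel            // spawns the team of threads
    {
        #pragma omp for             // divides the iterations among that team
        for (int i = 0; i < 8; ++i)
            printf("iteration %d ran on thread %d\n", i, omp_get_thread_num());
    }
    return 0;
}

Replacing the two directives with the fused #pragma omp parallel for gives the same behavior.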

Krzysztof Kosiński
  • In my code I am using this very structure. However when I use `schedule(static, chunk)` clause in for directive, I get a problem. The code runs fine but when I am invoking this code from an MPI program then it runs into an infinite loop. The loop counter is zero in all iterations of this loop. I have the loop counter defined as private in the `#pragma omp parallel` directive. No idea why it only fails when MPI is invoking the code. I am somewhat sure that each MPI process is running on a different processor of the cluster if that matters. No idea if schedule is causing the problem. – Rohit Banga Oct 03 '11 at 02:29
  • The same thing works fine when I use the `#pragma omp parallel for` directive. There ought to be some difference. – Rohit Banga Oct 03 '11 at 02:30
  • Update: As it turns out, I am observing this problem only when I use the schedule clause, so I guess it does not depend on whether I use the combined parallel for or the two separate directives. – Rohit Banga Oct 03 '11 at 19:52
81

I don't think there is any difference; one is a shortcut for the other, although your exact implementation might deal with them differently.

The combined parallel worksharing constructs are a shortcut for specifying a parallel construct containing one worksharing construct and no other statements. Permitted clauses are the union of the clauses allowed for the parallel and worksharing constructs.

Taken from http://www.openmp.org/mp-documents/OpenMP3.0-SummarySpec.pdf

The specs for OpenMP are here:

https://openmp.org/specifications/
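As a hedged illustration of the "union of clauses" point (this snippet is not from the answer, and the function and parameter names are made up): num_threads belongs to the parallel construct and schedule to the worksharing construct, yet both are accepted on the combined directive.

void scale(double *a, int n, double factor)
{
    // num_threads() is a 'parallel' clause, schedule() is a 'for' clause;
    // the combined 'parallel for' accepts the union of both clause sets.
    #pragma omp parallel for num_threads(4) schedule(static)
    for (int i = 0; i < n; ++i)
        a[i] *= factor;
}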

Ade Miller
37

Here is an example of using separate parallel and for directives. In short, it can be used for the dynamic allocation of OpenMP thread-private arrays before executing the for loop in several threads. It is impossible to do the same initialization in the parallel for case.

UPD: In the question's example there is no difference between the single pragma and the two pragmas. But in practice you can implement more thread-aware behavior with separate parallel and for directives. Some code, for example:

#pragma omp parallel
{
    // this array is thread-private: each thread makes its own allocation
    double *data = (double*)malloc(100 * sizeof(double)); // example size

    #pragma omp for // first parallelized loop
    for (int i = 0; i < 100; ++i)
    {
    }

    #pragma omp single
    {} // do some single-threaded processing

    #pragma omp for // second parallelized loop
    for (int i = 0; i < 100; ++i)
    {
    }

    #pragma omp single
    {} // do some single-threaded processing again

    free(data); // free the thread-private data
}
NtsDK
15

Although both versions of the specific example are equivalent, as already mentioned in the other answers, there is still one small difference between them. The first version includes an unnecessary implicit barrier, encountered at the end of the "omp for". The other implicit barrier can be found at the end of the parallel region. Adding "nowait" to "omp for" would make the two codes equivalent, at least from an OpenMP perspective. I mention this because an OpenMP compiler could generate slightly different code for the two cases.
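A hedged sketch of this point (not part of the original answer): adding nowait removes the loop's own implicit barrier, so the threads synchronize only once, at the end of the parallel region, just as in the combined parallel for case.

#pragma omp parallel
{
    #pragma omp for nowait   // no implicit barrier at the end of the loop
    for (int i = 1; i < 100; ++i)
    {
        // ...
    }
}   // single implicit barrier here, at the end of the parallel region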

phadjido
13

There are obviously plenty of answers, but this one answers it very nicely (with a source):

#pragma omp for only delegates portions of the loop for different threads in the current team. A team is the group of threads executing the program. At program start, the team consists only of a single member: the master thread that runs the program.

To create a new team of threads, you need to specify the parallel keyword. It can be specified in the surrounding context:

#pragma omp parallel
{
   #pragma omp for
   for(int n = 0; n < 10; ++n)
   printf(" %d", n);
}

and:

What are: parallel, for and a team

The difference between parallel, parallel for and for is as follows:

A team is the group of threads that execute currently. At the program beginning, the team consists of a single thread. A parallel construct splits the current thread into a new team of threads for the duration of the next block/statement, after which the team merges back into one. for divides the work of the for-loop among the threads of the current team.

It does not create threads, it only divides the work amongst the threads of the currently executing team. parallel for is a shorthand for two commands at once: parallel and for. Parallel creates a new team, and for splits that team to handle different portions of the loop. If your program never contains a parallel construct, there is never more than one thread; the master thread that starts the program and runs it, as in non-threading programs.

https://bisqwit.iki.fi/story/howto/openmp/
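A minimal sketch (not from the linked page) of the last point: a bare #pragma omp for outside of any parallel construct binds to a team of exactly one thread, so the loop simply runs sequentially.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    // No enclosing 'parallel': the current team is just the initial thread,
    // so this worksharing loop is executed by that single thread.
    #pragma omp for
    for (int n = 0; n < 10; ++n)
        printf("n=%d on thread %d\n", n, omp_get_thread_num());
    return 0;
}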

fogx
7

I am seeing starkly different runtimes with g++ 4.7.0 for the following loop:

std::vector<double> x;
std::vector<double> y;
std::vector<double> prod;

for (int i = 0; i < 5000000; i++)
{
   double r1 = ((double)rand() / double(RAND_MAX)) * 5;
   double r2 = ((double)rand() / double(RAND_MAX)) * 5;
   x.push_back(r1);
   y.push_back(r2);
}

int sz = x.size();
prod.resize(sz); // prod must be sized before it is indexed below

#pragma omp parallel for
for (int i = 0; i < sz; i++)
   prod[i] = x[i] * y[i];

The serial code (no OpenMP) runs in 79 ms. The "parallel for" code runs in 29 ms. If I omit the for and use only #pragma omp parallel, the runtime shoots up to 179 ms, which is slower than the serial code (the machine has a hardware concurrency of 8).

The code links to libgomp.

parcompute
  • I think it's because omp parallel executes the loop in a separate thread without dividing it among threads, so the main thread is waiting for the second thread to finish, and the time is spent on synchronizing. – Antigluk Oct 24 '12 at 15:38
  • That is because without a `#pragma omp for` there is no multi-threaded sharing of the loop at all. But that wasn't the OP's case anyway; try again with an additional `#pragma omp for` inside the `#pragma omp parallel` and it should run similarly to (if not the same as) the `#pragma omp parallel for` version. – Christian Rau Oct 14 '13 at 15:27
  • I see this answer as the best one, as it shows they are not "equivalent". – Failed Scientist Mar 25 '17 at 09:45
  • `#pragma omp parallel for` instructs the compiler to parallelize the next `for` block. With `#pragma omp parallel` alone, you have many threads that run the same code, i.e. each thread runs the whole `for` cycle. The slowdown comes from race conditions when several/all threads try to access the same memory. This is rookie mistake number one in using OpenMP. – Dimitar Slavchev May 23 '22 at 16:32
  • @Failed Scientist Please read Christian Rau's comment. It only shows that omp parallel for is not equivalent to just omp parallel, nothing more. It says nothing about the equivalency of omp parallel for with omp parallel AND omp for. – Paul Childs Dec 22 '22 at 22:07
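Following up on Christian Rau's comment, here is a hedged sketch (not part of the original answer; sz, prod, x and y refer to the variables in the code above) of the split form that restores the worksharing and should therefore perform like the combined version:

#pragma omp parallel
{
    // The inner 'for' directive divides the iterations among the team;
    // without it, every thread would execute the whole loop.
    #pragma omp for
    for (int i = 0; i < sz; i++)
        prod[i] = x[i] * y[i];
}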
7

TL;DR: The only difference is that the first code has two implicit barriers, whereas the second has only one.


A more detailed answer, using the official OpenMP 5.1 standard as a reference.

The OpenMP construct:

#pragma omp parallel

creates a parallel region with a team of threads, where each thread will execute the entire block of code that the parallel region encloses.

From the OpenMP 5.1 standard, one can read a more formal description:

When a thread encounters a parallel construct, a team of threads is created to execute the parallel region (..). The thread that encountered the parallel construct becomes the primary thread of the new team, with a thread number of zero for the duration of the new parallel region. All threads in the new team, including the primary thread, execute the region. Once the team is created, the number of threads in the team remains constant for the duration of that parallel region.

The construct:

#pragma omp parallel for 

creates a parallel region (as described before), and the iterations of the loop that it encloses are assigned to the threads of that region, using the default chunk size and the default schedule (which is typically static). Bear in mind, however, that those defaults might differ among different concrete implementations of the OpenMP standard.

From the OpenMP 5.1 standard, you can read a more formal description:

The worksharing-loop construct specifies that the iterations of one or more associated loops will be executed in parallel by threads in the team in the context of their implicit tasks. The iterations are distributed across threads that already exist in the team that is executing the parallel region to which the worksharing-loop region binds.

Moreover,

The parallel loop construct is a shortcut for specifying a parallel construct containing a loop construct with one or more associated loops and no other statements.

Or informally, #pragma omp parallel for is a combination of the construct #pragma omp parallel with #pragma omp for.

For both versions that you have shown, if one uses chunk_size=1 and a static schedule, the iterations of the loop are distributed among the threads in a round-robin fashion.

Code-wise the loop would be transformed to something logically similar to:

for(int i=omp_get_thread_num(); i < n; i+=omp_get_num_threads())
{  
    //...
}

where omp_get_thread_num()

The omp_get_thread_num routine returns the thread number, within the current team, of the calling thread.

and omp_get_num_threads()

Returns the number of threads in the current team. In a sequential section of the program omp_get_num_threads returns 1.

or in other words, for(int i = THREAD_ID; i < n; i += TOTAL_THREADS), with THREAD_ID ranging from 0 to TOTAL_THREADS - 1, and TOTAL_THREADS representing the total number of threads in the team created in the parallel region.
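For completeness, a runnable sketch (not part of the original answer) of that manual round-robin division, which mimics schedule(static, 1):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    const int n = 16;
    #pragma omp parallel
    {
        // Manual equivalent of a worksharing loop with schedule(static, 1):
        // thread t takes iterations t, t + T, t + 2T, ..., where T is the team size.
        for (int i = omp_get_thread_num(); i < n; i += omp_get_num_threads())
            printf("thread %d handles iteration %d\n", omp_get_thread_num(), i);
    }
    return 0;
}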

dreamcrash