
I've read that, in order to ensure thread safety, it's convenient to seed the RNG inside the parallel region like this:

int seedbase = 392872;

#pragma omp parallel 
{
   srand(omp_get_thread_num() * seedbase);
   #pragma omp for
   ....
}

But what if my parallelized section is inside another loop? If I had a situation like this:

int seedbase = 392872;
for(int i=0; i<100; ++i)
{
    #pragma omp parallel 
    {
      srand(omp_get_thread_num() * seedbase);
      #pragma omp for
      ....
    }
}

Where should I initialize my RNG?

  • So you *want* each iteration to create a group of threads that join at the end of it? – StoryTeller - Unslander Monica Mar 07 '17 at 16:33
  • Yes, because the outer loop range could be very small, so, let's say I have 8 threads, if I put my omp for before it, and the loop range is (0,3), I don't get all threads working, while in this case, since my inner loop it's surely greater than 8, I am sure that I get all threads working – Francesco Di Lauro Mar 07 '17 at 16:37
  • Related: [Using stdlib's rand() from multiple threads](http://stackoverflow.com/q/6161322/2402272). – John Bollinger Mar 07 '17 at 17:07
  • Yes, I'm currently using rand_r so it's more thread safe. The problem remains, how do I correctly initialize it? – Francesco Di Lauro Mar 07 '17 at 17:11
  • 1
    You don't use `srand()` with `rand_r()` *at all*, see `man rand_r`: *Like rand(), rand_r() returns a pseudo-random integer in the range [0, RAND_MAX]. The seedp argument is a pointer to an unsigned int that is used to store state between calls. If rand_r() is called with the same initial value for the integer pointed to by seedp, and that value is not modified between calls, then the same pseudo-random sequence will result.* – EOF Mar 07 '17 at 17:14

1 Answer


> I've read that, in order to ensure thread safety, it's convenient to seed the RNG inside the parallel region like this:

Convenience notwithstanding, your technique is not effective. Regardless of where you call `srand()`, the standard `rand()` function cannot be made thread-safe: it relies on internal static data that is modified on every call, so calling it (or `srand()`) from multiple threads without synchronization creates a data race.

At one time POSIX defined a function `rand_r()`. If you have it, it serves your purpose, though it has since been marked obsolescent. The correct use of `rand_r()` with OpenMP involves establishing a private (in the OpenMP sense) variable inside the parallel region to hold the seed. Initialize it identically in every thread if you want each thread to produce the same sequence of random numbers, or differently if you don't. You then pass a pointer to that variable as the argument to `rand_r()`; the function updates the seed in place on each call, so successive calls automatically continue the sequence.

John Bollinger
  • If rand_r is now obsolete, what (standard) function should I call? – Francesco Di Lauro Mar 07 '17 at 17:35
  • It depends on what platform(s) you're programming for. Although it is obsolete, there is no direct replacement for `rand_r()`, and it is unlikely that implementations will actually remove it in the foreseeable future. You could consider using it anyway. Alternatively, if you only need to support glibc-based systems, then there is also `random_r()` (and an accompanying `srandom_r()`). – John Bollinger Mar 07 '17 at 17:54