I have scientific research code that looks like this:
#define TRIALS 1000000
#define LEN 10

int i;
for (i = 0; i < TRIALS; i++) {
    uint8_t r[LEN];
    getRand(r, LEN);
    doExperiment(r);
}
where I am getting random numbers using /dev/urandom:
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

void getRand(uint8_t *r, int len) {
    /* open/read/close on every call; error and short-read handling omitted for brevity */
    int rand = open("/dev/urandom", O_RDONLY);
    read(rand, r, len);
    close(rand);
}
Note: I do not require my experiment to be repeatable, so I do not care about having a fixed seed. However, it is mission critical that my random numbers are of high quality (reasonably close to cryptographically secure) so that the statistics of my results are valid. Speed is also very important.
I plan to parallelise this code, firstly using OpenMP by just sticking a #pragma omp parallel for
in front of my loop.
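For concreteness, this is roughly what I have in mind (runTrials is just a wrapper name for this post, and the loop index is declared inside the for so each thread gets its own copy):

#include <stdint.h>

#define TRIALS 1000000
#define LEN 10

void getRand(uint8_t *r, int len);    /* the /dev/urandom reader above */
void doExperiment(uint8_t *r);

void runTrials(void) {
    /* Each trial is independent, so the loop itself parallelises trivially;
       my concern is the concurrent calls into getRand(). */
    #pragma omp parallel for
    for (int i = 0; i < TRIALS; i++) {
        uint8_t r[LEN];
        getRand(r, LEN);    /* every thread opening/reading/closing /dev/urandom */
        doExperiment(r);
    }
}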
Question: What is the best way to generate random numbers concurrently (feel free to suggest not using /dev/urandom)? The options I can see are:

1. Put a mutex around calls to getRand() and let my code serialise on getting random numbers (a sketch of this follows below).
2. Generate all the random numbers I require up front, before the parallel loop.
3. Have a separate thread that fills a buffer of random numbers, which the worker threads read from (with a mutex lock) in a producer-consumer fashion.

Is the best solution different if I were to use /dev/random instead, which is a finite resource and might block?
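To make option 1 concrete, this is the kind of thing I would write (getRandLocked is just an illustrative name, and I have used an OpenMP lock here, though a pthread mutex would do just as well):

#include <omp.h>
#include <stdint.h>

void getRand(uint8_t *r, int len);    /* the /dev/urandom reader above */

static omp_lock_t rand_lock;          /* initialise once with omp_init_lock(&rand_lock) */

/* Option 1: all threads funnel through one lock, so the random-number
   reads serialise while doExperiment() still runs in parallel. */
void getRandLocked(uint8_t *r, int len) {
    omp_set_lock(&rand_lock);
    getRand(r, len);
    omp_unset_lock(&rand_lock);
}

My worry with this is that the open/read/close inside getRand() becomes the bottleneck once there are many threads, which is what the other two options are trying to avoid.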
I have read through the related posts on generating random numbers in parallel, but I want to ask specifically about using /dev/{urandom,random}.