
In "rand() considered harmful" it is pointed out that srand(time(NULL)) is bad because srand takes an unsigned int, while for Microsoft's compiler time_t is a 64-bit type by default, so a narrowing conversion happens. However, time_t is implementation-defined.

Since I see srand(time(NULL)) so prevalent (even on this site), should it be discouraged?


1 Answer


Since I see srand(time(NULL)) so prevalent (even on this site), should it be discouraged?

It depends on how you want to use the output from your generator (in this case, the output of rand()).

If you only need a uniform distribution for single runs of your program, then srand(time(NULL)) is fine. This would be acceptable in a simulation where you only need a uniform distribution of numbers quickly.
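For reference, the prevalent idiom looks like this; the explicit cast is my addition, and simply makes the possible narrowing from time_t visible rather than silent:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        /* Seed once per run from the wall clock; the cast acknowledges
           the possible narrowing conversion from time_t. */
        srand((unsigned int)time(NULL));

        for (int i = 0; i < 3; i++)
            printf("%d\n", rand());  /* uniform-ish draws for a single run */
        return 0;
    }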

If you want to submit a batch job so that multiple instances of your program run at the same time (and are effectively started at the same time), then srand(time(NULL)) will probably result in one or more instances producing the same random stream.

If you need secure output, then you should not use srand(time(NULL)) because rand() is often implemented as a Linear Congruential Generator (LCG). Joan Boyar taught us how to break them years ago. See Inferring sequences produced by a linear congruential generator missing low-order bits.
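To see why prediction is feasible, here is a minimal sketch of the textbook LCG given as the portable example in the C standard (function and variable names are mine). Its entire internal state is 32 bits and advances by a fixed affine rule, so anyone who recovers the state can replay every future output; Boyar's result shows the state can be recovered even though only the high-order bits are emitted, as below:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t lcg_state = 1;  /* the whole secret is this one word */

    static int lcg_rand(void)
    {
        /* Fixed affine step; wraps mod 2^32 by unsigned arithmetic. */
        lcg_state = lcg_state * 1103515245u + 12345u;
        return (int)((lcg_state / 65536u) % 32768u);  /* high-order bits only */
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            printf("%d\n", lcg_rand());  /* deterministic, seed-dependent stream */
        return 0;
    }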

As for the problem with time_t, just fold it down to fit the argument expected by srand when time_t is too large. You might even fold in the process ID (PID) so that batch simulation jobs work as intended, as in the sketch below.
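A minimal sketch of that folding, assuming a 64-bit time_t; the XOR of the two halves and the PID mix are one reasonable choice, not the only one, and the getpid/_getpid split covers POSIX and Windows:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #ifdef _WIN32
    #include <process.h>   /* _getpid */
    #define getpid _getpid
    #else
    #include <unistd.h>    /* getpid */
    #endif

    int main(void)
    {
        /* time_t may be wider than unsigned int (e.g. 64-bit on MSVC),
           so fold the high and low halves together instead of letting a
           narrowing conversion silently discard the high bits. */
        time_t now = time(NULL);
        unsigned int seed = (unsigned int)now
                          ^ (unsigned int)((unsigned long long)now >> 32);

        /* Mix in the PID so batch jobs started within the same second
           still get distinct seeds. */
        seed ^= (unsigned int)getpid();

        srand(seed);
        printf("seed = %u, first draw = %d\n", seed, rand());
        return 0;
    }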

    When `/dev/urandom` is not available, to make the seed harder to guess, and to avoid collisions, Perl mixes the process ID, a stack pointer and the time (preferably seconds + microseconds from gettimeofday). http://perl5.git.perl.org/perl.git/blob/v5.20.1:/util.c#l4409 – Schwern Oct 06 '14 at 06:43