Typically, srand() is seeded with:
srand(time(NULL));
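(Note that rand() is completely determined by its seed: two processes seeded with the same value produce exactly the same sequence. A minimal illustration:)

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    srand(12345);                      /* same seed ...               */
    printf("%d %d\n", rand(), rand());
    srand(12345);
    printf("%d %d\n", rand(), rand()); /* ... identical "random" pair */
    return 0;
}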
In my case, I use random numbers to generate an identifier for my client process at runtime on the network. The process sometimes restarts and generates a new identifier. As the number of clients increases, there's a good chance that two clients call srand(time(NULL)) within the same second, which produces two identical identifiers, i.e. a collision as seen by the server. Some people have suggested seeding with finer resolution:
srand((tv.tv_sec * 1000) + (tv.tv_usec / 1000));
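(Here tv is a struct timeval; a self-contained version, assuming POSIX gettimeofday(), which Windows doesn't provide, might look like the following. The function name is just for illustration.)

#include <stdlib.h>
#include <sys/time.h>   /* POSIX; Windows would need a different clock source */

static void seed_with_milliseconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* millisecond-resolution seed, truncated to unsigned int */
    srand((unsigned)(tv.tv_sec * 1000 + tv.tv_usec / 1000));
}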
But the trouble here is that the seed then repeats roughly every 49 days (the millisecond count wraps modulo 2^32 once truncated to an unsigned int), and when the number of machines is large enough, there's still a chance of collision. There's another suggestion:
srand(tv.tv_usec * tv.tv_sec);
But this seems problematic to me too, because the product, reduced modulo the unsigned int range (the higher bits overflow and are discarded), is not evenly distributed over the possible seed values. For example, whenever tv.tv_usec == 0, which happens once every second, the product and therefore the seed is 0.
So is there a way to seed srand() in my case?
Edit: the client runs on Linux, Windows, Android, and iOS, so /dev/random or /dev/urandom isn't always available.
P.S. I'm aware of the GUID/UUID approach, but I'd like to know if it's possible to just seed srand() properly in this case.
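For what it's worth, the best I've come up with so far is folding several weak but portable sources into one seed with an FNV-1a-style mix (just a sketch; the choice of sources and constants is my own guess, not anything authoritative):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

/* Sketch: combine several weak, portable sources so that two
   processes starting in the same second are unlikely to collide.
   Nothing here is cryptographic. */
static unsigned mixed_seed(void)
{
    unsigned h = 2166136261u;            /* 32-bit FNV-1a offset basis */
    unsigned src[3];
    src[0] = (unsigned)time(NULL);       /* wall clock, 1 s resolution */
    src[1] = (unsigned)clock();          /* CPU time since process start */
    src[2] = (unsigned)(uintptr_t)&src;  /* stack address, varies under ASLR */
    for (int i = 0; i < 3; i++)
        h = (h ^ src[i]) * 16777619u;    /* 32-bit FNV-1a prime */
    return h;
}

int main(void)
{
    srand(mixed_seed());
    printf("%d\n", rand());
    return 0;
}

But I'm not sure how much better this really is, hence the question.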