I recently answered a question on here, which was edited by member @dtb.
filling a array with uniqe random numbers between 0-9 in c#
The question involved the use of Random(), and said member edited my answer to avoid this "common pitfall" (as he put it).
My original code wasn't vulnerable to this issue in isolation, and was as follows:
public void DoStuff()
{
    var rand = new Random();
    while (someCondition) { } // Inner loop that uses 'rand'
}
The answer was edited because it appeared the intent was to call the function in a loop, which would have made the code vulnerable. It was changed to, essentially:
public void DoStuff(Random rand)
{
    while (someCondition) { } // Inner loop that uses 'rand'
}
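For anyone unfamiliar with the pitfall being discussed, here is a minimal sketch (names like `UnseededNext` are mine, not from either answer). On .NET Framework, the parameterless `Random` constructor seeds from the system tick count, so instances created in quick succession share a seed and produce identical sequences:

```csharp
using System;

class PitfallDemo
{
    static int UnseededNext()
    {
        var rand = new Random(); // new instance per call -- the pitfall
        return rand.Next(0, 10);
    }

    static void Main()
    {
        // On .NET Framework this will likely print the same digit five
        // times, because each Random instance gets the same time-based
        // seed. (Newer .NET runtimes changed the default seeding.)
        for (int i = 0; i < 5; i++)
            Console.Write(UnseededNext() + " ");
        Console.WriteLine();
    }
}
```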
My immediate thought was: this just pushes the burden of maintaining the Random instance further up the stack, and if the programmer in question doesn't understand how it is implemented, they'll still fall into this pitfall.
I guess my question is: why isn't Random in .NET implemented in the same way as it is in JavaScript, for example, with one static, global Random (say, per AppDomain)? Naturally, you could still provide more explicit overloads/constructors that accept an explicit seed, for the very rare cases when you need to control the seed.
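To make the proposal concrete, here is a rough sketch of what such a shared instance could look like (the `GlobalRandom` type and its members are hypothetical, not anything in the BCL). One complication worth noting: `Random` is not thread-safe, so a naive static instance would need locking or a per-thread instance, which may be one reason .NET leaves lifetime management to the caller:

```csharp
using System;

// Hypothetical app-wide Random, as the question suggests .NET could provide.
static class GlobalRandom
{
    private static readonly Random _rand = new Random();
    private static readonly object _lock = new object();

    public static int Next(int minValue, int maxValue)
    {
        // Serialize access: Random's internal state can be corrupted
        // by concurrent calls, silently degrading its output.
        lock (_lock)
        {
            return _rand.Next(minValue, maxValue);
        }
    }
}

// Usage: int digit = GlobalRandom.Next(0, 10);
```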
Surely this would avoid this very common pitfall. Can anyone enlighten me as to the benefits of the .NET approach?
EDIT: I understand that taking control of the seed is important; however, enforcing this by default seems a strange choice. If someone wants to take control of the seed, it's likely they know what they're doing.
Cheers.