It's not quite clear to me what you want to do, but if you want a uniform distribution between a minimum and a maximum, you can use randomRs:
Prelude> :m +System.Random
Prelude System.Random> g <- newStdGen
Prelude System.Random> take 10 $ randomRs (10, 100) g
[48,93,21,50,84,57,25,80,68,18]
If you want these random numbers to sum to a particular number, you can basically start picking from the left until you get close enough. The inits function could help you with that:
Prelude System.Random> :m +Data.List
Prelude System.Random Data.List> take 10 $ inits $ randomRs (10, 100) g
[[],[48],[48,93],[48,93,21],[48,93,21,50],[48,93,21,50,84],[48,93,21,50,84,57],
[48,93,21,50,84,57,25],[48,93,21,50,84,57,25,80],[48,93,21,50,84,57,25,80,68]]
Instead of take 10, you could start going through this list of lists until you find one that's close enough. For instance, you could calculate the sum of each of those lists:
Prelude System.Random Data.List> fmap sum $ take 10 $ inits $ randomRs (10, 100) g
[0,48,141,162,212,296,353,378,458,526]
So, if you're aiming for, say, 500, you can see that the ninth sum is 458, whereas the tenth sum is too high. In other words, the first eight numbers will get you to 458. How will you reach 500?
One option is simply to say that the last number then has to be 500 - 458 = 42, but then I'm not sure that the distribution counts as perfectly uniform any longer, because the last number is deterministic.
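Here's a minimal sketch of that first option; fillToTarget is a name I made up for illustration. It consumes numbers from the infinite list until the remaining gap fits under the maximum, then emits the gap itself as the final, deterministic number:

import System.Random (newStdGen, randomRs)

-- Consume draws until the remaining gap is no bigger than the maximum,
-- then emit the gap itself as the last number. Note that the gap can
-- still come out below the minimum; this sketch doesn't handle that.
fillToTarget :: Int -> Int -> [Int] -> [Int]
fillToTarget hi target (x:xs)
  | target <= hi = [target]
  | otherwise    = x : fillToTarget hi (target - x) xs
fillToTarget _ target [] = [target]

main :: IO ()
main = do
  g <- newStdGen
  let xs = fillToTarget 100 500 (randomRs (10, 100) g)
  print (xs, sum xs)  -- the sum is exactly 500

With the stream from the examples above, this would produce [48,93,21,50,84,57,25,80,42].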
Another option would be to keep generating random numbers until you have a sequence that's a perfect fit.
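A sketch of that second option could look like this; perfectFit is again a made-up name. It draws a fresh batch of numbers, checks whether any prefix sums to exactly the target, and retries with a new generator if none does:

import Data.List (find, inits)
import System.Random (newStdGen, randomRs)

perfectFit :: Int -> IO [Int]
perfectFit target = do
  g <- newStdGen
  -- The numbers are all positive, so the prefix sums only grow; once a
  -- sum exceeds the target, no later prefix can be an exact fit.
  let candidates = takeWhile ((<= target) . sum) $ inits $ randomRs (10, 100) g
  case find ((== target) . sum) candidates of
    Just xs -> return xs
    Nothing -> perfectFit target  -- no exact fit this time; try again

Be aware that finding an exact fit can take many attempts, depending on the range and the target.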
Since I don't know the exact requirements, I can't advise on which way would be best.
In the above example, I used fmap sum to illustrate my point. The problem with this is that it throws away the numbers that generated each sum. As far as I understand, you actually want those numbers, so you'll probably need a more complicated left fold that calculates the sum while still remembering the numbers that produced it. You can use foldl or foldl' for that.
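Here's a hand-rolled sketch of such a fold; takeUntilSum is a made-up name. It's written as explicit recursion rather than a literal foldl', because foldl' can't stop early on an infinite list, but the accumulator is exactly the (running total, numbers so far) pair you'd carry in a left fold:

-- Stop at the first prefix whose total reaches or exceeds the goal,
-- returning both the total and the numbers that produced it.
takeUntilSum :: Int -> [Int] -> (Int, [Int])
takeUntilSum goal = go (0, [])
  where
    go (total, acc) (x:xs)
      | total >= goal = (total, reverse acc)  -- acc was built backwards
      | otherwise     = go (total + x, x : acc) xs
    go (total, acc) [] = (total, reverse acc)

With the stream from the examples above, takeUntilSum 500 (randomRs (10, 100) g) would return (526,[48,93,21,50,84,57,25,80,68]), and you could then adjust the last number, or retry, as discussed above.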