I have a game program that is played by a robot. For simplicity's sake, the game has two buttons: "win" and "try again". To win, the robot must simply push the "win" button.
The game involves a countdown timer that starts at 10 and runs to 0, ticking once per second. On each tick, the robot picks one of the two buttons. When the timer is at 10, the chance of the robot clicking "win" is very small; as the timer gets closer to 0, that chance increases. It is also possible that the robot never clicks the "win" button at all during a run.
What I'm looking for in the end is that the robot clicks "win" in about 90% of runs, with those win clicks weighted toward the end of the countdown (timer close to 0).
I did some research on probability (I'm an absolute novice), and my understanding is that the probabilities at each tick of the timer should sum to 0.90 in order to get my desired result. Example:
countdownTimerTickNumber | probabilityOfClickingWin
====================================================
10 | 0
9 | 0.0001
8 | 0.005
7 | 0.01
6 | 0.02
5 | 0.04
4 | 0.08
3 | 0.1
2 | 0.15
1 | 0.2
0 | 0.294
----------------------------------
Total probabilityOfClickingWin over all ticks: ≈ 0.9 (the values above sum to 0.8991)
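For reference, here is how that table can be represented in code (the array name winProbabilities is just illustrative); index 0 corresponds to the tick where the timer reads 10, and the last entry to the tick at 0:

// Probability of clicking "win" at each tick, from timer = 10 down to timer = 0.
// The entries add up to roughly 0.9 (0.8991), matching the table above.
double[] winProbabilities =
    { 0, 0.0001, 0.005, 0.01, 0.02, 0.04, 0.08, 0.1, 0.15, 0.2, 0.294 };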
Here is the C# method I use to turn the per-tick probability from the table above into an actual button choice. It is called once per tick:
// Reuse a single Random instance; constructing a new one on every call can
// produce repeated values on older .NET runtimes because it is seeded from the clock.
static readonly Random rng = new Random();

static bool DoClickWin(double probabilityOfClickingWin)
{
    // Click "win" when a uniform draw from [0, 1) is at or below this tick's probability.
    return rng.NextDouble() <= probabilityOfClickingWin;
}
However, when I run the program many times, the percentage of runs in which the robot clicks "win" is much lower than 90% (approximately 60%).
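For reference, here is a minimal, self-contained sketch of the kind of simulation I use to measure that; the class name, the run count of 100,000, and the assumption that a run stops as soon as "win" is clicked are just how this sketch is set up:

using System;

class RobotGameSimulation
{
    // One shared Random for the whole simulation.
    static readonly Random rng = new Random();

    // Per-tick probabilities from the table above, from timer = 10 down to timer = 0.
    static readonly double[] winProbabilities =
        { 0, 0.0001, 0.005, 0.01, 0.02, 0.04, 0.08, 0.1, 0.15, 0.2, 0.294 };

    // Same check as DoClickWin above.
    static bool DoClickWin(double probabilityOfClickingWin)
    {
        return rng.NextDouble() <= probabilityOfClickingWin;
    }

    static void Main()
    {
        const int runs = 100000;
        int wins = 0;

        for (int run = 0; run < runs; run++)
        {
            // Walk the countdown from 10 to 0; the run counts as a win if "win"
            // is clicked on any tick, and the game stops at that point.
            foreach (double p in winProbabilities)
            {
                if (DoClickWin(p))
                {
                    wins++;
                    break;
                }
            }
        }

        // This reports roughly 60%, not the 90% I was aiming for.
        Console.WriteLine("Win rate: {0:F1}%", 100.0 * wins / runs);
    }
}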
Can anyone tell me what I'm doing wrong? Thanks in advance.