I have a nested for loop that performs some calculations. The math has already been simplified considerably, but I still have a performance problem, and I suspect it comes down to the sheer number of times these loops execute. I'm not familiar with profiling tools that would pinpoint where the slowdowns occur, but I'm fairly certain the iteration count is the culprit.
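I assume a coarse wall-clock timing with System.Diagnostics.Stopwatch is the simplest way to confirm that; a minimal sketch (RunSimulation here is a hypothetical wrapper around the nested loops shown below):

using System;
using System.Diagnostics;

// Coarse timing check: wrap the nested loops in a method and time it.
// Scaling X down or up should show the runtime growing linearly with it.
Stopwatch sw = Stopwatch.StartNew();
RunSimulation();  // hypothetical wrapper around the loops below
sw.Stop();
Console.WriteLine("Elapsed: " + sw.ElapsedMilliseconds + " ms");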
I would greatly appreciate any help trimming this down and improving its performance. I'm trying to stay away from an HPC or heavily parallelized solution, but if that's the only way to make this truly effective, I'll look into going down that road.
Here's the code, with X = 20,000 and N_zero = 45,420 (values pulled from actual tests):
using System;
using System.Collections.Generic;

// B, N_0, N_1, N0_, N1_ are declared as decimal so the products and the
// division further down stay in decimal arithmetic (as ints, the four-way
// product would overflow and the division would truncate).
// r1 < r2 < r3 are probability thresholds set elsewhere in the real code;
// the values here are placeholders.
const int X = 20000;
const int N_zero = 45420;
decimal r1 = 0.25m, r2 = 0.50m, r3 = 0.75m;  // placeholders
decimal B, N_0, N_1, N0_, N1_;

Dictionary<decimal, int> n_alpha = new Dictionary<decimal, int>();
Random rand = new Random();
decimal r = 0m;
decimal check = 0m;
for (int i = 0; i < X; i++)
{
    B = N_0 = N_1 = N0_ = N1_ = 0;
    for (int j = 0; j < N_zero; j++)
    {
        // random decimal value in [0, 1); Next() returns a non-negative
        // int strictly less than int.MaxValue
        r = (decimal)rand.Next() / int.MaxValue;
        if (r <= r1)
        {
            N0_ += 1;
            N_0 += 1;
        }
        else if (r <= r2)   // r1 < r is already guaranteed here
        {
            B += 1;
            N0_ += 1;
            N_1 += 1;
        }
        else if (r <= r3)   // likewise r2 < r
        {
            B += 1;
            N_0 += 1;
            N1_ += 1;
        }
        else                // r > r3
        {
            N1_ += 1;
            N_1 += 1;
        }
    }
    // alpha is only defined when all four counters are nonzero
    check = N_0 * N_1 * N0_ * N1_;
    if (check != 0)
    {
        decimal a = 1 - (B * N_zero) / ((N_0 * N1_) + (N0_ * N_1));
        // only tracking 4 decimal places, so the key should reflect this
        decimal key = Math.Round(a, 4);
        if (n_alpha.ContainsKey(key))
        {
            n_alpha[key] += 1;
        }
        else
        {
            n_alpha.Add(key, 1);
        }
    }
}
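In case it helps frame answers, here is the kind of restructuring I've wondered about. It's only a sketch, and I haven't verified that double gives statistically equivalent results to decimal (the thresholds are again placeholders): the random draw and comparisons are done in double, since decimal arithmetic is software-emulated and far slower, and the histogram update uses a single TryGetValue lookup instead of ContainsKey followed by the indexer.

using System;
using System.Collections.Generic;

// Sketch: same simulation, but the per-draw work is done in double and each
// histogram update costs one dictionary lookup. r1d/r2d/r3d are the same
// thresholds as r1/r2/r3, pre-converted to double (placeholder values here).
const int X = 20000;
const int N_zero = 45420;
double r1d = 0.25, r2d = 0.50, r3d = 0.75;  // placeholders

Dictionary<decimal, int> n_alpha = new Dictionary<decimal, int>();
Random rand = new Random();

for (int i = 0; i < X; i++)
{
    int B = 0, N_0 = 0, N_1 = 0, N0_ = 0, N1_ = 0;
    for (int j = 0; j < N_zero; j++)
    {
        double r = rand.NextDouble();  // uniform in [0, 1)
        if (r <= r1d)      { N0_++; N_0++; }
        else if (r <= r2d) { B++; N0_++; N_1++; }
        else if (r <= r3d) { B++; N_0++; N1_++; }
        else               { N1_++; N_1++; }
    }

    // checking each counter avoids the overflow-prone four-way int product
    if (N_0 != 0 && N_1 != 0 && N0_ != 0 && N1_ != 0)
    {
        // cast before multiplying so the arithmetic happens in decimal
        decimal a = 1m - ((decimal)B * N_zero)
                       / ((decimal)N_0 * N1_ + (decimal)N0_ * N_1);
        decimal key = Math.Round(a, 4);
        int count;
        if (n_alpha.TryGetValue(key, out count))
            n_alpha[key] = count + 1;
        else
            n_alpha.Add(key, 1);
    }
}

If even that isn't enough, I realize the outer iterations are independent apart from the shared dictionary (and the shared Random, which isn't thread-safe), so a Parallel.For with per-thread state merged at the end would be the parallel road I mentioned above.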