
I am using this crude Monte Carlo integration technique to find the value of $\pi$, and I noticed that the integration value steadily deviates from the actual value as I increase the number of sample points. The code is:

```cpp
#include <iostream>
#include <cmath>
#include <cstdlib>

using namespace std;

float f(float x) // definition of the integrand
{
    return sqrt(1 - x*x);
}

float rand1() // random number generator between 0 and 1
{
    float s = rand();
    return s / (RAND_MAX + 1.0);
}

float calcint(float xi, float xf, float yi, float yf, float N) // integrator
{
    float n = 0;
    for (int i = 0; i < N; i++)
    {
        float x = (xf - xi)*rand1();
        float y = (yf - yi)*rand1();
        if (y < f(x))
        {
            n = n + 1;
        }
    }
    return n/N*(xf - xi)*(yf - yi);
}

int main()
{
    float N = 100000000;
    for (int i = 1; i < N; i = i + N/10) // lists integration value for different sampling
    {
        cout << i << "\t" << 4*calcint(0, 1, 0, 1, i) << endl;
    }
    return 0;
}
```

The output is:

```
10000000    3.14188
20000000    3.14059
30000000    2.23696
40000000    1.67772
50000000    1.34218
60000000    1.11848
70000000    0.958698
80000000    0.838861
90000000    0.745654
```

Why does this happen? Are Monte Carlo integration techniques guaranteed to converge with a larger number of sample points?

  • First, you are dividing by 0 in `calcInt` when `N` is 0. Second, you are mixing `float` and `int`, expecting no loss of precision. – PaulMcKenzie Oct 01 '17 at 19:32
  • I fixed the first problem, but I don't see where I am mixing `int` and `float`. I am only comparing them in the condition of the `for` statement; I am not really converting them. Could you please elaborate on how that may affect the results? And why are they decreasing steadily? If the problem were a conversion happening during comparison, the result should suddenly drop to some value and stay there. –  Oct 01 '17 at 19:41
  • Off topic: `if (y < f(x))` can be expressed as `if (y * y < 1 - x * x)`. – Captain Giraffe Oct 01 '17 at 19:42
  • You are mixing `float` and `int` in 1) the `for` loop limits, `i = i + N/10`, and 2) your call to `calcInt` (look at the last parameter). You are converting them, even though not explicitly. – PaulMcKenzie Oct 01 '17 at 19:43
  • @PaulMcKenzie I see those conversions, but they still do not explain why the integration value decreases. See the accepted answer: I should have declared `n` as an integer, or of the same type as `N`. Otherwise `n` stops increasing while `N` keeps growing, and that results in a falsely small `n/N` value. Anyway, thanks for the effort. I appreciate it. –  Oct 01 '17 at 19:57
  • @CaptainGiraffe I have to binge-learn a lot of monte-carlo techniques and finally apply it in path integrals. So I am trying to be as general as possible. –  Oct 01 '17 at 20:14
  • Why do you use floating-point variables to hold integers? – curiousguy Oct 02 '17 at 08:07
  • @curiousguy I need to divide by `N`. If I don't use floating-point numbers and the result is less than 1, it gives 0. Is there a way to overcome that without mixing them? –  Oct 02 '17 at 08:17
  • What about changing `n/N*(xf-xi)*(yf-yi)` to `n*(xf-xi)*(yf-yi)/N`? Also, why do you use floats instead of doubles? – curiousguy Oct 02 '17 at 17:38

1 Answer


The problem is the limited precision of the `float` type. It has 24 significant bits of precision, and the largest integer range it can represent contiguously ends at 16777216; 16777217 cannot be represented because it requires 25 significant bits (1 0000 0000 0000 0000 0000 0001 in binary). See more details here: Which is the first integer that an IEEE 754 float is incapable of representing exactly?

It means that when you add `1.0f` to `16777216.0f`, the result is `16777216.0f` instead of `16777217.0f`, so your counter `n` silently stops increasing while `N` keeps growing. Therefore, you should use an integer type for `n` instead of a floating-point type to count the number of hits.

Andrey Nasonov
  • Wow! You are wonderful. I see the problem now. That was so stupid. And yes, it solves the problem. –  Oct 01 '17 at 19:45