You can easily estimate what kind of error (error bars) you should get; that's the beauty of Monte Carlo. To do so, you accumulate the second moment as well and use it to estimate the variance and standard deviation. Conveniently, the accumulated second moment is the same as the sum you collect for the mean, because each sample is either 0 or 1, so squaring changes nothing.
From that you get an estimate of the simulation sigma and error bars for the desired value. Sorry, I don't know enough JavaScript, so the code here is in C#:
using System;

namespace Pi
{
    class Program
    {
        static void Main(string[] args)
        {
            ulong N = 1_000_000_000UL;     // number of samples
            var rng = new Random(312345);  // RNG
            ulong v  = 0UL;                // sum of samples (for the mean)
            ulong v2 = 0UL;                // sum of squared samples; equals v here because each sample is 0 or 1

            for (ulong k = 0; k != N; ++k) {
                double x = rng.NextDouble();
                double y = rng.NextDouble();
                var r = (x * x + y * y < 1.0) ? 1UL : 0UL;
                v  += r;
                v2 += r * r;
            }

            var mean = (double)v / (double)N;
            var varc = ((double)v2 / (double)N - mean * mean) * ((double)N / (N - 1UL)); // sample variance (Bessel-corrected)
            var stdd = Math.Sqrt(varc);     // std.dev, should be close to sqrt(Pi/4 * (1 - Pi/4))
            var errr = stdd / Math.Sqrt(N); // standard error of the mean

            Console.WriteLine($"Mean = {mean}, StdDev = {stdd}, Err = {errr}");

            mean *= 4.0;
            errr *= 4.0;

            Console.WriteLine($"PI (1 sigma) = {mean - 1.0 * errr}...{mean + 1.0 * errr}");
            Console.WriteLine($"PI (2 sigma) = {mean - 2.0 * errr}...{mean + 2.0 * errr}");
            Console.WriteLine($"PI (3 sigma) = {mean - 3.0 * errr}...{mean + 3.0 * errr}");
        }
    }
}
After 10^9 samples I got
Mean = 0.785405665, StdDev = 0.410540627166729, Err = 1.29824345388086E-05
PI (1 sigma) = 3.14157073026184...3.14167458973816
PI (2 sigma) = 3.14151880052369...3.14172651947631
PI (3 sigma) = 3.14146687078553...3.14177844921447
which looks about right. It is easy to see that in the ideal case the variance equals (Pi/4)*(1 - Pi/4): each sample is a Bernoulli variable that is 1 with probability p = Pi/4, and the variance of such a variable is p*(1 - p). So it is really not necessary to accumulate v2 at all; you can simply set it equal to v after the simulation.
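As a quick sanity check (this small snippet is mine, not part of the program above), you can compute that theoretical standard deviation directly and compare it with the one the simulation reports:

using System;

class VarianceCheck
{
    static void Main()
    {
        // Theoretical std.dev of a single hit/miss sample: sqrt(p * (1 - p)) with p = Pi/4.
        double p = Math.PI / 4.0;
        double stdd = Math.Sqrt(p * (1.0 - p));
        Console.WriteLine(stdd); // ~0.4105, matching the simulated StdDev above
    }
}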
Frankly, I don't know why you are not getting what is expected. Precision loss in summation might be the answer, or, what I suspect, your simulation is not producing independent samples due to seeding and overlapping sequences (so the effective N is a lot lower than ~900 billion).
But using this method you can control the error and check how the computation is going.
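If precision loss in summation is the culprit, the failure mode typically looks like the sketch below (an illustration of the general effect, not your code). Once a floating-point tally grows large enough, adding 1 is rounded away and the counter silently stops growing; a 32-bit float is used here so the effect shows up quickly, but a double hits the same wall at 2^53.

using System;

class PrecisionLoss
{
    static void Main()
    {
        // A float has a 24-bit significand, so above 2^24 = 16,777,216
        // adding 1.0f is rounded away and the tally stops increasing.
        float counter = 0f;
        for (int i = 0; i < 20_000_000; ++i)
            counter += 1f;
        Console.WriteLine(counter); // stays at 16777216 instead of reaching 20000000
    }
}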
UPDATE
I've plugged your numbers in to show that you are clearly underestimating the value. Code:
ulong N  = 893_547_800_000UL;   // your number of samples
ulong v  = 701_766_448_388UL;   // your number of hits
ulong v2 = v;                   // sum of squares equals sum of hits

var mean = (double)v / (double)N;
var varc = ((double)v2 / (double)N - mean * mean) * ((double)N / (N - 1UL)); // sample variance
var stdd = Math.Sqrt(varc);     // should be close to sqrt(Pi/4 * (1 - Pi/4))
var errr = stdd / Math.Sqrt(N); // standard error of the mean

Console.WriteLine($"Mean = {mean}, StdDev = {stdd}, Err = {errr}");

mean *= 4.0;
errr *= 4.0;

Console.WriteLine($"PI (1 sigma) = {mean - 1.0 * errr}...{mean + 1.0 * errr}");
Console.WriteLine($"PI (2 sigma) = {mean - 2.0 * errr}...{mean + 2.0 * errr}");
Console.WriteLine($"PI (3 sigma) = {mean - 3.0 * errr}...{mean + 3.0 * errr}");
And the output:
Mean = 0.785370909522692, StdDev = 0.410564786603016, Err = 4.34332975349809E-07
PI (1 sigma) = 3.14148190075886...3.14148537542267
PI (2 sigma) = 3.14148016342696...3.14148711275457
PI (3 sigma) = 3.14147842609506...3.14148885008647
So, clearly, you have a problem somewhere (in the code? accuracy lost in representation? accuracy lost in summation? repeated/non-independent sampling?).
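One way to narrow it down (my suggestion, not something taken from your setup): split the run into batches, compute the hit fraction per batch, and check whether the spread of the batch means matches the standard error predicted from Bernoulli statistics. Repeated or correlated samples usually show up as a spread that is noticeably larger or smaller than expected.

using System;

class BatchCheck
{
    static void Main()
    {
        const int batches  = 100;
        const int perBatch = 1_000_000;
        var rng = new Random(42);        // one RNG for the whole run, never reseeded
        var means = new double[batches];

        for (int b = 0; b < batches; ++b)
        {
            long hits = 0;
            for (int k = 0; k < perBatch; ++k)
            {
                double x = rng.NextDouble();
                double y = rng.NextDouble();
                if (x * x + y * y < 1.0) hits++;
            }
            means[b] = (double)hits / perBatch;
        }

        // Observed batch-to-batch spread vs. the spread expected for independent samples.
        double grand = 0.0;
        foreach (var m in means) grand += m;
        grand /= batches;

        double s2 = 0.0;
        foreach (var m in means) s2 += (m - grand) * (m - grand);
        double observed = Math.Sqrt(s2 / (batches - 1));
        double expected = Math.Sqrt(grand * (1.0 - grand) / perBatch);

        Console.WriteLine($"Observed batch-to-batch sigma = {observed}");
        Console.WriteLine($"Expected sigma for independent samples = {expected}");
    }
}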