
I'm trying to calculate Euler's number (e = 2.718281828459045235) by writing a factorial function and then calling it to calculate the constant with a while loop. The problem is that I declare an i variable to make the loop run while i<50, as seen below, and it gives me the following output:

The number of euler is: 2.7182818284590455

However, I want the program to output more decimals, but no matter how much I increase the i<50 condition in the loop, it always gives the same number of decimals.

I've tried changing the while loop's condition to i<150, and I changed the "long" variable to a "ulong" one, but nothing happens; it always gives the same output. Any idea how to change that?

Any kind of help will be appreciated, thank you!

My code:

using System;

namespace eulerNumber
{
    class eulerCalculator
    {

        public static double factorial(long n)
        {
            if (n == 0)
            {
                return 1;
            }

            else if (n == 1)
            {
                return 1;
            }
            else
            {
                return n * factorial(n - 1);
            }
        }
        static void Main(string[] args)
        {
            double addition = 0, i = 0;

            while(i<50)
            {
                addition = addition + (1 / factorial((long)i)); 
                i++;
            }
            Console.WriteLine("The number of euler is: " + addition);
        }
    }
}

dann
    Double-precision floating point numbers only hold about 17 digits of information. To get any more, you will have to use an extended precision math library. Also note that computing 172! by itself will overflow a double. One might ask what you hope to accomplish. If this is just for fun, then what you have is the best you can do. Doing anything more becomes a lot of work. – Tim Roberts May 18 '22 at 19:58
  • There's only so much precision a `double` has. Calculating the number in a `double` yourself is of course only useful as an exercise -- if you want to get the most precise value the system can offer, it's `Math.E`, which is `2.718281828459045090795598298427648842334747314453125` -- precise only to 16 decimal digits. – Jeroen Mostert May 18 '22 at 20:03
  • Yes, I'm just doing this for fun and as an exercise to see what happens. I guess then that there's no need to go further, thank you! – dann May 18 '22 at 20:21
  • If you want a follow-up exercise, it might be explaining why your code does not yield the most precise approximation possible in a `double` (`Math.E` is also not precise, but still a closer approximation than your result), and ways to fix that. That does take you into the realm of the details of floating-point calculations, but that might be interesting too. – Jeroen Mostert May 18 '22 at 20:39
  • @dann I converted my comment into answer with working standalone C++ example (no libs used) ... – Spektre May 20 '22 at 09:51
  • Possibly using `decimal` will yield more digits as it uses more bits to represent numbers. – John Alexiou May 23 '22 at 13:34

2 Answers


A fairly simple improvement is to perform the addition starting with the smallest terms, so the partial sums stay similar in size and the processor loses less precision in each addition.

Note: factorial(50) is 50 * factorial(49), and the last two terms are 1/factorial(49) + 1/factorial(50). Apply some algebra to get 1/factorial(49) + 1.0/50/factorial(49), which is (1 + 1.0/50) / factorial(49)

Say you only calculate the numerator, and keep track of what factorial appears in the denominator without calculating it. Now you have two very nice properties:

  1. You never have to calculate numbers that overflow (like factorial(i)) and
  2. Whatever rounding error there is in the update equation, doesn't matter to the final answer, because it's an error in a term that's going to get divided some more and become even smaller

That leads to the following code:

double accumulator = 1.0;

for( int i = 50; i > 0; --i )
{
    accumulator = 1.0 + accumulator / i;
}

Demo: https://rextester.com/FEZB44588


Extending this to use .NET's BigInteger class allows many more digits of precision:

BigInteger scale = BigInteger.Pow(10, 60);
BigInteger accumulator = scale;

for( int i = 75; i > 0; --i )
{
    accumulator = scale + accumulator / i;
}

result (insert the decimal point):

2.718281828459045235360287471352662497757247093699959574966967

first 50 decimal places from Wikipedia:

2.71828182845904523536028747135266249775724709369995...

Note that Wikipedia's verbiage is slightly wrong: this is not the value rounded to 50 decimal places, but the first 50 decimal digits of a sequence that continues.

Ben Voigt
  • Of course 50 terms is overkill for a `double`, 17 suffice. You may also wish to format the result explicitly to show more decimals ("G100") to show there's really no hidden difference between `Math.E` and the final result (or just `Math.E == accumulator`, a rare case where comparing `double`s directly is OK). – Jeroen Mostert May 18 '22 at 21:04
  • @JeroenMostert: I chose to go the other way and actually calculate more digits – Ben Voigt May 18 '22 at 21:06
  • +1 interesting tweak ... but still slow as you divide in the loop, and the result of `accumulator / i` is not `BigInteger` but more like `BigDecimal`, which slows the consequent addition quite a lot. Could you evaluate or measure the complexity? so I can compare it to my approach (just curious). From my naive point of view it's around `O(n^4)` for a naive implementation where `n` is the number of digits. However internally Karatsuba and Newton-Raphson might be used, so after that it might be `~O(n^2.58)`; however the same applies to my approach `~O(n^2.38)` and still with a much smaller constant time. – Spektre May 23 '22 at 08:01
  • Now I see it's fixed point (I missed the `scale` meaning before). I have implemented your and my approach on my `arbnum` (dynamic arbitrary precision float) for 100 digits of e; your approach is enough to compute in the target bitwidth and 70 iterations, mine (binary base) needs twice the bits, however the speed difference is huge (due to the division and iteration involved in your approach): mine took `2.686 ms` and yours `11.327 ms`. For 50 digits it's `0.582 ms` mine and `2.214 ms` yours with 41 iterations, so yours grows faster, meaning worse complexity. Also getting the number of iterations n is tricky. – Spektre May 23 '22 at 13:06
  • @Spektre: Sure, I made minimal changes to the OP's approach, changing the order of operations to reduce rounding error, and doing common subexpression identification to reduce the number of operations. But in the end it's just new code for the same Taylor expansion. So not surprised that a better formulation of the problem can beat it (although I'm still using a library `BigInteger`, and it still apparently managed to beat your custom decimal version). So kudos to you for identifying a different formula that translated into real world performance. – Ben Voigt May 23 '22 at 16:05
  • @BenVoigt yes, the decadic one is slow as it has naive `O(n^2)` multiplication (and no funny stuff, so it's understandable to rookies) and the `c^10` step is not very computing-friendly; however its complexity is still better (should be the same as the binary version, just the constant time is big). Also we do not account for the number-to-string conversion for the other versions, which makes a slightly unfair comparison, as the string one does not need it at all ... and for a true binary format, printing huge decadic numbers is no easy task ... – Spektre May 23 '22 at 17:59

I was bored, so I encoded the e=(1+1/x)^x approach into simple C++ (sorry, not a C# coder) without any libs or funny stuff, computed directly on strings ...

I also ported the algorithm from binary into decadic base; here is the code:

//---------------------------------------------------------------------------
// string numbers are in format "?.??????????????"
// where n is number of digits after decimal separator
void str_addmul(char *xy,char *x,char *y,int k,int n)   // xy = x+y*k, k = 0..9
    {
    int i,a,cy;
    for (cy=0,i=n+1;i>=0;i--)
        {
        if (i==1) i--;          // skip decimal separator
        a=(x[i]-'0')+((y[i]-'0')*k)+cy;
        cy   = a/10;
        xy[i]=(a%10)+'0';
        }
    xy[1]='.';
    xy[n+2]=0;
    }
//---------------------------------------------------------------------------
// string numbers are in format "?.??????????????"
// where n is number of digits after decimal separator
void str_mul(char *xy,char *_x,char *_y,int n)  // xy = x*y
    {
    int i,j;
    // preserve original _x,_y and use temp x,y instead
    char *x=new char[n+3]; for (i=0;i<n+3;i++) x[i]=_x[i];
    char *y=new char[n+3]; for (i=0;i<n+3;i++) y[i]=_y[i];
    // xy = 0.000...000
    i=0;
    xy[i]='0'; i++;
    xy[i]='.'; i++;
    for (;i<n+2;i++) xy[i]='0';
    xy[i]=0;
    // xy = x*y
    for (i=0;i<n+2;i++)
        {
        if (i==1) i++;          // skip decimal separator
        str_addmul(xy,xy,x,y[i]-'0',n);
        // x/=10
        for (j=n+1;j>2;j--) x[j]=x[j-1];
        x[2]=x[0]; x[0]='0';
        }
    delete[] x;
    delete[] y;
    }
//---------------------------------------------------------------------------
char* compute_e(int n)      //  e = (1+1/x)^x where x is big power of 10
    {
    int i,x10,m=n+n+4;
    char* c=new char[m+3];  // ~double precision
    char* a=new char[m+3];  // ~double precision
    char* e=new char[n+3];  // target precision
    // x = 10^m
    // c = 1 + 1/x = 1.000...0001;
    i=0;
    c[i]='1'; i++;
    c[i]='.'; i++;
    for (;i<m+1;i++) c[i]='0';
    c[i]='1'; i++;
    c[i]=0;
    // c = c^x
    for (x10=0;x10<m;x10++)
        {
        str_mul(a,c,c,m);   // c^2
        str_mul(c,a,a,m);   // c^4
        str_mul(c,c,c,m);   // c^8
        str_mul(c,c,a,m);   // c^10
        }
    // e = c
    for (i=0;i<n+2;i++) e[i]=c[i]; e[i]=0;
    delete[] a;
    delete[] c;
    return e;
    }
//---------------------------------------------------------------------------

Usage:

char *e=compute_e(100); // 100 is number of digits after decimal point
cout << e;  // just output the string somewhere
delete[] e; // release memory after use

And result for compute_e(100) vs. reference:

e(100) = 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274
e(ref) = 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274

The code is slow because it's computed on strings and in base 10 instead of on an array of integers in base 2, and only naive implementations of the math operations are used... However, 100 digits is still done in 335 ms and 200 digits in 2612.525 ms on my ancient PC, and it should be incomparably faster than your iteration at the same precision...

To get to the base-10 algorithm, the equation is:

x = 10^digits
e = (1 + 1/x) ^ x

so when you write x and the term (1 + 1/x) in decadic, you get:

x             =   1000000000000 ... 000000
c = (1 + 1/x) = 1.0000000000000 ... 000001

now I just modified exponentiation by squaring into exponentiation by tenth powers, where instead of c = c*c I iterate c = c*c*c*c*c*c*c*c*c*c, and that is it...

Thanks to the fact that the stuff is computed on strings in base 10, there is no need to convert between number representation and text for printing...

And at last, to obtain a precision of n decadic digits, we have to compute with m = 2*n + 4 digits of precision and just cut off the final result to n digits ...

So just port the stuff to C#; you can use strings instead of char*, so you can get rid of the new[]/delete[], and the rest should be the same in C# ...

Was a bit curious, so I measured the stuff a bit:

[   0.640 ms] e( 10) = 2.7182818284
[   3.756 ms] e( 20) = 2.71828182845904523536
[  11.172 ms] e( 30) = 2.718281828459045235360287471352
[  25.234 ms] e( 40) = 2.7182818284590452353602874713526624977572
[  46.053 ms] e( 50) = 2.71828182845904523536028747135266249775724709369995
[  77.368 ms] e( 60) = 2.718281828459045235360287471352662497757247093699959574966967
[ 121.756 ms] e( 70) = 2.7182818284590452353602874713526624977572470936999595749669676277240766
[ 178.508 ms] e( 80) = 2.71828182845904523536028747135266249775724709369995957496696762772407663035354759
[ 251.713 ms] e( 90) = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178
[ 347.418 ms] e(100) = 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274
              e(ref) = 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274

and fitting these timings reveals ~O(n^2.8) complexity

Here are 100 digits of e implemented on my arbnum (arbitrary-precision float), with both the binary (1+1/x)^x approach and Ben Voigt's approach for comparison:

//---------------------------------------------------------------------------
const int n=100;    // digits
//---------------------------------------------------------------------------
arbnum compute_e0()
    {
    arbnum c,x;
    int bit;
    const int bits =int(float(n)/0.30102999566398119521373889472449)+1;
    const int bits2=(bits<<1);
    // e=(1+1/x)^x  ... x -> +inf
    c.one(); c>>=bits; c++;  // c = 1.000...0001b = (1+1/x)          = 2^-bits + 1
    for (bit=bits;bit;bit--)        // c = c^x = c^(2^bits) = e
        {
        c*=c;
        c._normalize(bits2); // this just cuts the result off to a specific number of fractional bits only to speed up the computation; instead you should cut off only the trailing zeros !!!
        }
    c._normalize(bits);
    return c;
    }
//---------------------------------------------------------------------------
arbnum compute_e1()
    {
    // e = 1 + e/i ... i = { n, n-1, n-2, .... 1 }
    int i;
    arbnum e;
    const int bits =int(float(n)/0.30102999566398119521373889472449)+1;
    for (e=1.0,i=70;i;i--)
        {
        e/=i;
        e._normalize(bits);
        e++;
        }
    return e;
    }
//---------------------------------------------------------------------------

and here the results:

    reference e  = 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274
[   2.799 ms] e0 = 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274
[  10.918 ms] e1 = 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274

Karatsuba multiplication and an approximation divider are used under the hood.

Spektre