
Hello, better programmers than me. Not a huge deal, but I'm curious about this function and, more importantly, the result it sometimes gives. I defined a recursive power function for a homework assignment that has to handle negative exponents. For positive values and 0 it works fine, but when I enter some negative values, the answer is really strange. Here is the function:

public static double Power(int base, int exp){
    if (exp == 0)
        return 1.0;
    else if(exp >=1)
        return base * Power(base, exp - 1);
    else 
        return (1.0/base) * Power(base, exp + 1);            
}

So for a call like Power(5, -1) the function returns 0.2, like it should. But for, say, Power(5, -2) the function returns 0.04000000000000001 instead of just 0.04. Again, this isn't a huge deal since it's for homework and not "real life", but I'm just curious why this happens. I assume it has something to do with how a double value is stored in memory, but I really have no idea. Thanks all!

PS: this is coded in Java using NetBeans, if that makes a difference.
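
For what it's worth, a quick way to check the "how a double is stored" hunch would be something like the sketch below (using java.math.BigDecimal, since new BigDecimal(double) prints the exact value a double actually holds; the class name is just for the example):

import java.math.BigDecimal;

public class StoredValue {
    public static void main(String[] args) {
        // The double nearest to 0.2 is actually slightly larger than 0.2.
        System.out.println(new BigDecimal(0.2));

        // Multiplying two of those slightly-too-large values lands slightly
        // above 0.04, which is the 0.04000000000000001 my function returns.
        System.out.println(0.2 * 0.2);
    }
}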

2 Answers


Floating point rounding errors can be reduced by careful organization of your arithmetic. In general, you want to minimize the number of rounding operations, and the number of calculations done on rounded results.

I made a small change to your function:

  public static double Power(int base, int exp) {
    if (exp == 0)
      return 1.0;
    else if (exp >= 1)
      return base * Power(base, exp - 1);
    else
      return (1.0 / Power(base, -exp));
  }

For your test case, Power(5, -2), this does only one rounded calculation, the division at the top of the recursion. It gets the closest double to 1/25.0, which prints as 0.04.
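
Here's a quick side-by-side sketch, with your original method renamed PowerOriginal (my name, just so both versions can sit in one program):

public class PowerCompare {
    // Original version: each negative step multiplies by an already-rounded
    // 1.0/base, so rounding errors accumulate across the recursion.
    public static double PowerOriginal(int base, int exp) {
        if (exp == 0)
            return 1.0;
        else if (exp >= 1)
            return base * PowerOriginal(base, exp - 1);
        else
            return (1.0 / base) * PowerOriginal(base, exp + 1);
    }

    // Revised version: compute base^|exp| first (exact here, since 25 fits in a
    // double exactly), then do a single rounded division at the end.
    public static double Power(int base, int exp) {
        if (exp == 0)
            return 1.0;
        else if (exp >= 1)
            return base * Power(base, exp - 1);
        else
            return (1.0 / Power(base, -exp));
    }

    public static void main(String[] args) {
        System.out.println(PowerOriginal(5, -2)); // prints 0.04000000000000001
        System.out.println(Power(5, -2));         // prints 0.04
    }
}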

Patricia Shanahan

It's a 1990s thing

This will likely be an ignored or controversial answer but I think it needs to be said.

Others have focused on the message that "floating point" calculations (e.g. ones involving one or more "double" values) do approximate math.

My focus in this answer is on a different message: even though that's how it is in ordinary Java code, and indeed in ordinary code in most programming languages, computations with numbers like 0.1 don't have to be approximate.


A few languages treat numbers like 0.1 as rational numbers, a ratio between two integers (numerator over denominator, in this case 1 over 10, i.e. one tenth), just like they are in school math. Computations involving nothing but integers and rationals are 100% accurate (ignoring integer overflow and/or running out of memory).

Unfortunately, rational computations can get pathologically slow if the denominator gets too large.
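
To carry that idea back into the question's Java: you'd have to build (or pull in) a rational type yourself, but the core of it is small. Here's a rough sketch using java.math.BigInteger; the Fraction class and its power helper are invented for illustration, not any standard API:

import java.math.BigInteger;

// A minimal, made-up rational type: an exact numerator/denominator pair,
// kept in lowest terms.
final class Fraction {
    final BigInteger num, den;

    Fraction(BigInteger num, BigInteger den) {
        BigInteger g = num.gcd(den);
        this.num = num.divide(g);
        this.den = den.divide(g);
    }

    // base^exp as an exact fraction: base^|exp| over 1, or 1 over base^|exp|.
    static Fraction power(int base, int exp) {
        BigInteger p = BigInteger.valueOf(base).pow(Math.abs(exp));
        return exp < 0 ? new Fraction(BigInteger.ONE, p)
                       : new Fraction(p, BigInteger.ONE);
    }

    @Override
    public String toString() {
        return num + "/" + den;
    }

    public static void main(String[] args) {
        System.out.println(power(5, -2));   // 1/25, exactly
        System.out.println(power(5, 20));   // 95367431640625/1, exactly
        System.out.println(power(5, -20));  // 1/95367431640625, exactly
    }
}

For power(5, -1000) the denominator would be a roughly 700-digit integer, which is exactly the kind of growth that can make pure rational arithmetic slow.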


Some languages take a compromise position. They treat some rationals as rationals (so with 100% accuracy) and only give up on 100% accuracy, switching to floats, when rational calculations would be pathologically slow.

For example, here's some code in a relatively new and forward-looking programming language (Raku, formerly known as Perl 6):

sub Power(\base, \exp) {
    given exp {
        when 0       { 1.0                             }
        when * >= 1  { base     * Power(base, exp - 1) }
        default      { 1.0/base * Power(base, exp + 1) }
    }
}

This duplicates your code in this other language.

Now use this function to get results for a list of exponents:

for 1000,20,2,1,0,-1,-2,-20,-1000 -> \exp { say Power 5, exp }

Running this code in glot.io displays:

9332636185032188789900895447238171696170914463717080246217143397959
6691097577563445444032709788110235959498993032424262421548752135403
2394841520817203930756234410666138325150273995075985901831511100490
7962651131182405125147959337908051782711254151038106983788544264811
1946981422866095922201766291044279845616944888714746652800632836845
2647429261829862165202793195289493607117850663668741065439805530718
1363205998448260419541012132296298695021945146099042146086683612447
9295203482686461765792691604742006593638904173789582211836507804555
6628444273925387517127854796781556346403714877681766899855392069265
4394240087119736747017498626266907472967625358039293762338339810469
27874558605253696441650390625
95367431640625
25
5
1
0.2
0.04
0.000000000000010
0

The above results are 100% accurate -- until the last exponent, -1000. We can see where the language gives up on 100% accuracy if we check the types of the results (using WHAT):

for 1000,20,2,1,0,-1,-2,-20,-1000 -> \exp { say WHAT Power 5, exp }

displays:

(Rat)
(Rat)
(Rat)
(Rat)
(Rat)
(Rat)
(Rat)
(Rat)
(Num)

Converting Rats (the default rational type) into FatRats (the arbitrary precision rational type) avoids inaccuracy even with pathologically large denominators:

sub Power(\base, \exp) {
    given exp {
        when 0       { 1.0.FatRat                             }
        when * >= 1  { base            * Power(base, exp - 1) }
        default      { 1.0.FatRat/base * Power(base, exp + 1) }
    }
}

This yields the same display as our original code except for the last calculation, which comes out as:

0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000011

I don't know if that's accurate, but, as I understand it, it's supposed to be.

raiph