
To clarify first:

  • 2^3 = 8. That's equivalent to 2*2*2. Easy.
  • 2^4 = 16. That's equivalent to 2*2*2*2. Also easy.
  • 2^3.5 = 11.313708... Er, that's not so easy to grok.

What I want is a simple algorithm which most clearly shows how 2^3.5 = 11.313708. It should preferably not use any functions apart from the basic addition, subtraction, multiplication, and division operators.

The code certainly doesn't have to be fast, nor does it necessarily need to be short (though that would help). Don't worry, it can be approximate to a given user-specified accuracy (which should also be part of the algorithm). I'm hoping there will be a binary chop/search type thing going on, as that's pretty simple to grok.

So far I've found this, but the top answer is far from simple to understand on a conceptual level.

The more answers the merrier, so I can try to understand different ways of attacking the problem.

My language preference for the answer would be C#/C/C++/Java, or pseudocode for all I care.

Dan W
    Looks like you're trying to understand the math itself. Maybe SO isn't the best place for it. More generally, maybe looking at pseudo-code or algorithms isn't the way to go. Look for an easy-to-understand math article about the subject. – SimpleVar Apr 25 '15 at 04:37
  • I think he's looking for the exact opposite. Most exponentiation implementations rely on mathematical relations that aren't easy to come up with. He's looking for an algorithm that may not be the best one, but helps him understand how a programmer could implement it with as few mathematical tricks as possible. – Juan Lopes Apr 25 '15 at 04:39

5 Answers


Ok, let's implement pow(x, y) using only binary searches, addition and multiplication.

Driving y below 1

First, let's get these identities out of the way:

pow(x, y) == pow(x*x, y/2)
pow(x, y) == 1/pow(x, -y)

This is important to handle negative exponents and drive y below 1, where things start getting interesting. This reduces the problem to finding pow(x, y) where 0<y<1.

Implementing sqrt

In this answer I assume you know how to perform sqrt. I know sqrt(x) = x^(1/2), but it is easy to implement with just a binary search for the y that satisfies y*y = x, e.g.:

#include <math.h>

#define EPS 1e-8

/* Binary-search square root: narrow [a, b] until y*y = x to within EPS. */
double sqrt2(double x) {
    double a = 0, b = x > 1 ? x : 1;
    while (fabs(a - b) > EPS) {
        double y = (a + b) / 2;
        if (y * y > x) b = y; else a = y;
    }
    return a;
}

Finding the answer

The rationale is that every number between 0 and 1 can be approximated as a sum of fractions of the form 1/2^k:

0.875 = 1/2 + 1/4 + 1/8
0.333333... = 1/4 + 1/16 + 1/64 + 1/256 + ...

If you find those fractions, you actually find that:

x^0.875 = x^(1/2+1/4+1/8) = x^(1/2) * x^(1/4) * x^(1/8)

That ultimately leads to

sqrt(x) * sqrt(sqrt(x)) * sqrt(sqrt(sqrt(x)))

So, implementation (in C++)

#include <math.h>

#define EPS 1e-8

double pow2(double x, double y) {
    if (x < 0 && fabs(round(y) - y) < EPS) {
        /* Negative base, integer exponent: compute with the positive
           base, then restore the sign for odd exponents. */
        return pow2(-x, y) * ((long)round(y) % 2 != 0 ? -1 : 1);
    } else if (y < 0) {
        return 1 / pow2(x, -y);        /* pow(x, y) == 1/pow(x, -y)    */
    } else if (y > 1) {
        return pow2(x * x, y / 2);     /* pow(x, y) == pow(x*x, y/2)   */
    } else {
        /* 0 < y <= 1: peel off fractions 1/2, 1/4, 1/8, ... of the
           exponent, taking a square root of x at each step. */
        double fraction = 1;
        double result = 1;

        while (y > EPS) {
            if (y >= fraction) {
                y -= fraction;
                result *= x;
            }

            fraction /= 2;
            x = sqrt2(x);   /* x becomes x^(1/2), then x^(1/4), ... */
        }
        return result;
    }
}
Juan Lopes
  • @DanW Sorry, there was an error in `sqrt2` implementation when the value was < 1. It's fixed now. – Juan Lopes Apr 25 '15 at 11:55
  • Perhaps also fix negative roots put to odd integer powers (they currently return 0). – Dan W Apr 25 '15 at 12:09
  • This will be a little more problematic, because it involves implementing `sqrt` in the complex domain, even if the algorithm never returns a complex number. – Juan Lopes Apr 25 '15 at 12:11
  • Can't you simply Abs the root, then negate the result at the end if the exponent was an odd number? And in this case, if the exponent is also negative, then also do 1/result (unless it's negative AND even, where you don't need to do any of these alterations). – Dan W Apr 25 '15 at 12:17
  • The problem is real exponents: `pow(-27, 1/3.0) == -3`, for example. But to get there from the algorithm, we go through a lot of complex numbers. – Juan Lopes Apr 25 '15 at 12:21
  • True, but the power function won't implement that functionality in most languages or calculators either. – Dan W Apr 25 '15 at 12:26
  • Anyway, great job. I'll wait a little more time for other answers, study them to see which one I prefer (taking into account possible generalization to tetration!) and tick the one I think's best. – Dan W Apr 25 '15 at 12:31
  • I've added my [own implementation](http://stackoverflow.com/a/29877278/848344) now too if you'd like to let me know what you think of it. Problem is there are now at least three contenders to getting the tick! – Dan W Apr 26 '15 at 12:05

Deriving ideas from the other excellent posts, I came up with my own implementation. The answer is based on the idea that base^(exponent*accuracy) = answer^accuracy. Given that we know the base, exponent and accuracy variables beforehand, we can perform a search (binary chop or whatever) to balance the equation by finding answer. We want the exponent on both sides of the equation to be an integer (otherwise we're back to square one), so we can make accuracy any size we like, and then round it to the nearest integer afterwards.

I've given two ways of doing it. The first is very slow, and will often produce numbers far too large for most languages' floating-point types. On the other hand, it doesn't use log, and is conceptually simpler.

public double powSimple(double a, double b)
{
    int accuracy = 10;

    bool negExponent = b < 0;
    b = Math.Abs(b);
    bool ansMoreThanA = (a>1 && b>1) || (a<1 && b<1);   // Example 0.5^2=0.25 so answer is lower than A.
    double accuracy2 = 1.0 + 1.0 / accuracy;
    double total = a;
    for (int i = 1; i < accuracy* b; i++) total = total*a;

    double t = a;           
    while (true) {
        double t2 = t;
        for(int i = 1; i < accuracy; i++) t2 = t2 * t; // Not even a binary search. We just hunt forwards by a certain increment
        if((ansMoreThanA && t2 > total) || (!ansMoreThanA && t2 < total)) break;
        if (ansMoreThanA) t *= accuracy2; else t /= accuracy2;
    }
    if (negExponent) t = 1 / t;
    return t;
}

This one below is a little more involved as it uses log(), but it is much quicker and doesn't suffer from the super-high-number problem described above.

public double powSimple2(double a, double b)
{
    int accuracy = 1000000;

    bool negExponent= b<0;
    b = Math.Abs(b);
    double accuracy2 = 1.0 + 1.0 / accuracy;
    bool ansMoreThanA = (a>1 && b>1) || (a<1 && b<1);   // Example 0.5^2=0.25 so answer is lower than A.

    double total = Math.Log(a) * accuracy * b;

    double t = a;
    while (true) {
        double t2 = Math.Log(t) * accuracy;
        if ((ansMoreThanA && t2 > total) || (!ansMoreThanA && t2 < total)) break;
        if (ansMoreThanA) t *= accuracy2; else t /= accuracy2;
    }
    if (negExponent) t = 1 / t;
    return t;
}
Dan W
  • No idea why or who downvoted this. It performs as expected, and is perhaps the shortest and simplest of all the answers. Perhaps they'd care to comment. – Dan W Apr 27 '15 at 06:03

You can verify that 2^3.5 = 11.313708 very easily: check that 11.313708^2 = (2^3.5)^2 = 2^7 = 128

I think the easiest way to understand the computation you would actually do for this would be to refresh your understanding of logarithms - one starting point would be http://en.wikipedia.org/wiki/Logarithm#Exponentiation.

If you really want to compute non-integer powers with minimal technology one way to do that would be to express them as fractions with denominator a power of two and then take lots of square roots. E.g. x^3.75 = x^3 * x^(1/2) * x^(1/4) then x^(1/2) = sqrt(x), x^(1/4) = sqrt(sqrt(x)) and so on.

Here is another approach, based on the idea of verifying a guess. Given y, you want to find x such that x^(a/b) = y, where a and b are integers. This equation implies that x^a = y^b. You can calculate y^b, since you know both numbers. You know a, so you can - as you originally suspected - use binary chop or perhaps some numerically more efficient algorithm to solve x^a = y^b for x by simply guessing x, computing x^a for this guess, comparing it with y^b, and then iteratively improving the guess.

Example: suppose we wish to find 2^0.878 by this method. Then set a = 439, b = 500, so we wish to find 2^(439/500). If we set x=2^(439/500) we have x^500 = 2^439, so compute 2^439 and (by binary chop or otherwise) find x such that x^500 = 2^439.

mcdowella
  • That works for 3.5, but how about where the exponent is something horrible like 3.8784569? I'm not keen on the use of sqrt as that complicates things further. – Dan W Apr 25 '15 at 04:57
  • Express 3.8784569 as a binary number with a binary point: 3.8784569 = 11.11.... (because 3.878 > 3.75) and take lots and lots of square roots - or learn about logarithms and exponentials, which is worth doing anyway. – mcdowella Apr 25 '15 at 05:19
  • I want to avoid using sqrt and log if I can, not because it's extra work for me, but because I want to try to generalize the algorithm to work with tetration in the most intuitive possible way afterwards. – Dan W Apr 25 '15 at 09:28
  • I have added another approach based on verifying a guess, which does not require square roots. A quick scan of http://en.wikipedia.org/wiki/Tetration#Extension_to_real_heights suggests that people have already considered generalizing tetration to real heights and that it may not be possible to get all the properties you might hope for at the same time - of course that might mean that some generalization not previously considered might appeal to you. – mcdowella Apr 25 '15 at 17:23
  • I'm a little confused by your second approach. Which variables are the base and exponent? Also what is `a`? Are you thinking of something similar to my own implementation [here](http://stackoverflow.com/a/29877278/848344)? – Dan W Apr 26 '15 at 12:17
  • I have added an example to try and clarify the second method. I haven't looked at your example in detail, but it seems to use logs, which I was asked to avoid, so I don't think they can be identical, although yours seems to be using some sort of iterative method to solve an equation, so there is presumably some similarity somewhere. – mcdowella Apr 26 '15 at 12:33
  • Thanks. Only the second function in my answer used log() by the way (the first working self-contained function didn't). – Dan W Apr 26 '15 at 12:34

Most of it comes down to being able to invert the power operation.

In other words, the basic idea is that (for example) N^2 should be basically the "opposite" of N^(1/2), so that if you do something like:

M = N^2

L = M^(1/2)

Then the result you get in L should be the same as the original value in N (ignoring any rounding and such).

Mathematically, that means that N^(1/2) is the same as sqrt(N), N^(1/3) is the cube root of N, and so on.

The next step after that would be something like N^(3/2). This is pretty much the same idea: the denominator is a root, and the numerator is a power, so N^(3/2) is the square root of the cube of N (or the cube of the square root of N--works out the same).

With decimals, we're just expressing a fraction in a slightly different form, so something like N^3.14 can be viewed as N^(314/100)--the hundredth root of N raised to the power 314.

As far as how you compute these: there are quite a few different ways, depending heavily on the compromise you prefer between complexity (chip area, if you're implementing it in hardware) and speed. The obvious way is to use a logarithm: A^B = Log^(-1)(Log(A)*B).

For a more restricted set of inputs, such as just finding the square root of N, you can often do better than that extremely general method though. For example, the binary reducing method is quite fast--implemented in software, it's still about the same speed as Intel's FSQRT instruction.

Jerry Coffin
  • Looks interesting - I'll take a deeper look soon. I did want to avoid using log/pow though if at all possible. – Dan W Apr 26 '15 at 04:49
  • I think I may have implemented your general idea based on what you said about the exponent being 314/100. Perhaps take a [look here](http://stackoverflow.com/a/29877278/848344) to confirm. – Dan W Apr 26 '15 at 12:11

As stated in the comments, it's not clear whether you want a mathematical description of how fractional powers work, or an algorithm to calculate fractional powers.

I will assume the latter.

For almost all functions (like y = 2^x) there is a means of approximating the function using a thing called the Taylor Series http://en.wikipedia.org/wiki/Taylor_series. This approximates any reasonably behaved function as a polynomial, and polynomials can be calculated using only multiplication, division, addition and subtraction (all of which the CPU can do directly). If you calculate the Taylor series for y = 2^x and plug in x = 3.5 you will get 11.313...

This is almost certainly not how exponentiation is actually done on your computer. There are many algorithms which run faster for different inputs. For example, if you calculate 2^3.5 using the Taylor series, then you would have to look at many terms to calculate it with any accuracy. However, the Taylor series will converge much faster for x = 0.5 than for x = 3.5. So one obvious improvement is to calculate 2^3.5 as 2^3 * 2^0.5, as 2^3 is easy to calculate directly. Modern exponentiation algorithms use many, many tricks to speed up processing, but the principle is still much the same: approximate the exponentiation function as some infinite sum, and calculate as many terms as you need to get the accuracy that is required.

Peter Webb