`Math.pow()`, `pow()`, whatever it's called: a lot of languages (and calculators) have some built-in function to calculate x = a^b for floats (or doubles). There's the special case of a being negative and b not being an integer. Some will return NaN, others give a complex result (ahem, Python). But some are actually capable of giving real results, and what I'm wondering is how. To explain my confusion:
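For concreteness, Python 3 shows two of those behaviors side by side:

```python
import math

# math.pow follows the C library convention: a negative base with a
# non-integer exponent is a domain error.
try:
    math.pow(-8.0, 1.0 / 3.0)
except ValueError as e:
    print("math.pow:", e)        # math domain error

# The ** operator instead promotes to complex and returns the principal
# complex root, about (1+1.732j), not the real cube root -2.
print((-8.0) ** (1.0 / 3.0))
```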
Suppose b is rational: b = c/d, in lowest terms (the parity argument only works on the reduced fraction). Now we look at the parity of c and d (there's a code sketch of this rule after the list):
d is even: no real x -> NaN or error
d is odd, c is even: positive x
d is odd, c is odd: negative x
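Here is that rule as runnable Python; `real_pow` is just an illustrative name of mine, not a real library function. `Fraction` reduces to lowest terms automatically:

```python
from fractions import Fraction

def real_pow(a: float, b: Fraction) -> float:
    """Real-valued a**b per the parity rule above (illustrative only)."""
    c, d = b.numerator, b.denominator
    if a >= 0:
        return a ** float(b)
    if d % 2 == 0:
        raise ValueError("even root of a negative number: no real result")
    # An odd root of a negative number is real; its sign follows c's parity.
    magnitude = abs(a) ** float(b)
    return magnitude if c % 2 == 0 else -magnitude

print(real_pow(-8.0, Fraction(1, 3)))   # about -2.0 (c odd  -> negative)
print(real_pow(-8.0, Fraction(2, 3)))   # about  4.0 (c even -> positive)
```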
Floats are stored in a format that, if interpreted literally, always gives an even d (a power of 2, actually). There's no way to know the real parity of c and d, since that information is lost in computation. The function would just have to guess.
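You can see this in Python, where `float.as_integer_ratio()` recovers the exact fraction a float stores:

```python
# Every finite float is exactly c / 2^k, so taken literally the
# denominator of a non-integer float is always even.
print((0.5).as_integer_ratio())      # (1, 2)
print((1 / 3).as_integer_ratio())    # (6004799503160661, 18014398509481984)
# That second denominator is 2**54: the "1/3" the library sees is not
# actually 1/3, and the intended c and d are gone.
```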
So my guess as to what it's doing: it tries to find a rational c/d close to b, with odd d and with both c and d less than some threshold t. A smaller t means it can be more confident the guess is correct, but it will work for fewer numbers. If that search succeeds, it uses the parity of c. Otherwise it pretends d is even. After all, a float can hold anything, and the math library doesn't want to give a possibly wrong result by assuming b is rational when it might not be.
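To make that guess concrete, here's a sketch of the heuristic I have in mind, using `Fraction.limit_denominator` for the rational search. This is entirely hypothetical (`guess_real_pow` and `t` are my own names), not code from any actual libm:

```python
from fractions import Fraction

def guess_real_pow(a: float, b: float, t: int = 1_000_000) -> float:
    """Hypothetical heuristic: recover a plausible c/d for b and use its
    parity, as guessed above. Not how any real libm is known to work."""
    if a >= 0:
        return a ** b
    # Find the closest rational to b with denominator <= t.
    frac = Fraction(b).limit_denominator(t)
    c, d = frac.numerator, frac.denominator
    # Only trust the guess if it round-trips to exactly the same float.
    if float(frac) == b and d % 2 == 1:
        magnitude = abs(a) ** b
        return magnitude if c % 2 == 0 else -magnitude
    # Otherwise behave as if d were even: no real result.
    return float("nan")

print(guess_real_pow(-8.0, 1 / 3))   # about -2.0: guesses c/d = 1/3
print(guess_real_pow(-8.0, 0.5))     # nan: d = 2 is even
```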
That's only my guess, though. If anyone has actually seen the code in one of these power functions (or a specification, which is just as good) and could provide insight, that'd be great.