2

Why is 0^0 == 1 in Haskell? Why not 0^0 == 0? Or should it perhaps raise an error?

*Main> 0^0
1
*Main> 0**0
1.0

Thanks in advance

GHCi, version 7.10.3

MWB
  • https://en.wikipedia.org/wiki/0%5E0 gives a variety of reasons for it to be 0 or 1 in various contexts both mathematical and computational. – Josh Lee Jul 19 '17 at 03:17
  • As others have said, there are several mathematical reasons for choosing either 0 or 1 ([explanation from mathSE](https://math.stackexchange.com/a/11155)). Beyond that, [this](https://hackage.haskell.org/package/base-4.9.1.0/docs/src/GHC.Real.html#%5E) is the source code for `^`. – Alec Jul 19 '17 at 03:19
  • There is a pleasant interpretation of `0^0` in the algebra of types, much like the one in set theory: `0^0` can also be written `Void -> Void`, and it has one inhabitant, the empty function `id :: Void -> Void`. The categorical interpretation of `0^0` is also that it must have one inhabitant: in the category of sets and functions, every object has an identity morphism, including the empty set. – Rein Henrichs Jul 19 '17 at 23:48
  • It turns out to be a lot more convenient for number theory, aside from anything else. – dfeuer Jul 20 '17 at 19:15
  • Possible duplicate of [Why is Math.pow(0, 0) === 1?](https://stackoverflow.com/questions/19955968/why-is-math-pow0-0-1) – MWB Jul 21 '17 at 07:39

4 Answers

9

It makes a bit of sense when you look at the type signatures.

(^) :: (Num a, Integral b) => a -> b -> a

This one is designed to work with nonnegative integer exponents. It's likely implemented recursively, so it behaves like repeated multiplication. Thus it makes sense that "anything to the zeroth power is one" is the rule that takes precedence: multiplying by the base zero times leaves you with the multiplicative identity, 1.

(^^) :: (Fractional a, Integral b) => a -> b -> a

This one is a lot like the previous one, except that it also works with negative exponents, since its base is Fractional. It still behaves like repeated multiplication or repeated division (for positive or negative exponents, respectively), so again it makes sense that repeating either operation zero times should give 1, the multiplicative identity.
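
For example, in a GHCi session (the exact exception text can vary between GHC versions):

*Main> (0 :: Double) ^^ 0
1.0
*Main> (2 :: Double) ^^ (-3)
0.125
*Main> (2 :: Integer) ^ (-3)
*** Exception: Negative exponent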

(**) :: Floating a => a -> a -> a

In Haskell, the floating point types generally conform to the IEEE standard, and IEEE specifically defines pow(0.0, 0.0) as 1.0. So Haskell, I imagine, is simply conforming to a standard so that it behaves consistently with other languages in this case.

Silvio Mayolo
2

Haskell does it that way because mathematics defines it that way. Math does it that way because xⁿ means "1 multiplied by x, n times", so 0⁰ is 1 multiplied by 0 zero times, which is just 1, not multiplied by anything. Mathematicians figure it makes more sense to stick to the rule that anything to the zeroth power is 1 (the nullary product) than to the rule that zero to any power is zero.

This makes a lot of sense when you try to define exponentiation in terms of multiplication and division. For example, if you were trying to define ^ in Haskell, you might come up with:

(^) a b = product $ replicate b a

This is equivalent to:

(^) a b = foldr (*) 1 (replicate b a)

A list containing zero numbers is empty. The product of an empty list is 1, or else a lot of things would break, like product (xs++[]) not being equal to (product xs) * (product []).
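
You can check the empty-product convention directly in GHCi:

*Main> product ([] :: [Integer])
1
*Main> product (replicate 0 (0 :: Integer))
1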

Or if you wrote the simplest possible recursive solution:

(^) _ 0 = 1
(^) a b = a*(a^(b-1))

You would then need a special case, in addition to the base and recursive cases, to define 0⁰ as anything other than 1.

PS

As @leftaroundabout points out, my answer assumes we’re using discrete math. Computer scientists almost always are, and Haskell was designed by academic computer scientists.

If we are working with continuous functions on a computer, we’re necessarily doing numeric approximations. In that case, the most efficient implementation will be the one that uses the FPU of the machine we’re running on. In 2017, that will follow the IEEE standard, which says that pow( 0.0, 0.0 ) = 1.0.

It’s just slightly simpler to write and prove statements about an exponentiation function that follows this convention.

Davislor
  • As already commented elsewhere... Yes, _it does make sense_ to define it this way. However, Mathematicians generally do **not** do this, because there is another conflicting and (very-)arguably just as plausible definition of 0⁰ = 0. (It is only plausible at all in the continuous case, i.e. it would actually be conceivable to have `0^0 = 1` but `0**0 = 0`... fortunately IEEE754 does not demand anything like that.) – leftaroundabout Jul 20 '17 at 20:28
  • I’ve always seen 0⁰ defined as 1 (Donald E. Knuth argued for this at length, and gives that definition in *The Art of Computer Programming*, for example), with a minority of mathematicians arguing that it should be unspecified. The situations in which it would make sense not to define it (such as a pseudo-ring which has a **0** but not a **1**) are rarely adjacent to computer programming. Wikipedia claims that Cauchy was in the undefined camp in 1821. – Davislor Jul 20 '17 at 21:45
  • A pseudo-ring with no 1? Very exotic. No, what I mean is just the issue of `^ : ℝ×ℝ → ℂ` being discontinuous at `(0,0)` (which, to non-discrete mathematicians, already reads a bit like: _we should better always keep clear of that spot_), but being continuous along certain paths, each with a completely different limit. — Once again: _I agree_, this shouldn't matter to us seeing as, if we just leave analysis/topology aside, 0⁰ = 1 is already perfectly evident. My point was just that this sentiment is not so widespread outside of computer science, as I, being a physicist, regularly experience. – leftaroundabout Jul 20 '17 at 22:47
  • Those are valid points. Computer scientists use discrete math, as you know, except when we’re finding a numeric approximation. In that case, the most efficient implementation is going to use the FPU instruction, which will follow the IEEE definition that pow(0.0, 0.0) = 1.0. So, there are contexts where we should remember that 0⁰ = 1 is arbitrary, but the implementation in Haskell will be simpler if we follow the convention. Besides, the designers were academic computer scientists. – Davislor Jul 20 '17 at 23:38
1

Haskell is a functional programming language. Functional languages are based on the λ-calculus, in which number literals can be encoded as Church numerals. So if you encode 0^0 with the Church encoding and then normalize the λ-term using β-reductions, you get 1, like this:

0^0 = (λn.λm.λs.λz.m n s z) (λs.λz.z) (λs.λz.z) = λs.λz.s z = 1

I think this should explain why Haskell decided to follow the chosen model.
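
Here is a minimal, self-contained Haskell sketch of the same calculation; the names Church, czero, cexp, and toInt are made up for this illustration, not taken from any library:

{-# LANGUAGE RankNTypes #-}

-- Church numerals: the number n is "apply a function n times".
type Church = forall a. (a -> a) -> a -> a

czero :: Church
czero _ z = z                -- apply the function zero times

-- Exponentiation on Church numerals: n ^ m is m applied to n.
cexp :: Church -> Church -> Church
cexp n m = m n

-- Count the applications to get an ordinary Integer back.
toInt :: Church -> Integer
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (cexp czero czero))   -- prints 1

Running main prints 1, mirroring the β-reduction above.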

Shersh
  • This is an interesting remark, but without reference seems very speculative. I rather doubt that this was the primary motivation for that particular choice. – leftaroundabout Jul 19 '17 at 11:50
  • @leftaroundabout Sure, I don't want to say this was the primary motivation :) Moreover, I think it wasn't. But I still find it beautiful how different models are consistent with each other. – Shersh Jul 19 '17 at 11:53
0

This is simply a law of mathematics. Any positive number raised to the zeroth power is equal to 1. There should be no error.

boost
  • The counter argument to that is that 0 to any power is 0. – Jul 19 '17 at 03:04
  • 0 is not positive. It is non-negative. See [this](https://math.stackexchange.com/a/11155) for an excellent analysis. – Alec Jul 19 '17 at 03:20
  • Alec is right, however I somewhat agree with this answer: since in the discrete case (set/type exponentiation) 0⁰ = 1 is clearly the sensible choice, whereas analysis simply can't say anything about it via limits because the function is discontinuous in the vicinity of (0,0) anyway, there's no good reason to not always adopt the 0^0 = 1 convention, as indeed IEEE754 has also done for “real” numbers. – leftaroundabout Jul 19 '17 at 07:25