
What I mean is that, by definition:

fix f is the least fixed point of the function f

In other words:

the least defined x such that f x = x.

The least defined value of any nullary type is undefined. Some ambiguity remains here, as undefined may mean either "will throw an error" (the better case) or "will never terminate" (the worse one). As both reasoning and experiment show, fix (+1) and fix pred :: Word never seem to approach termination (even though pred 0 is an error), so of these two alternatives the worse one, "never terminate", is apparently always the one chosen. (What can escape fix is the boring const 1, but more on that later.)

This is not the useful way of applying fix.

The useful way of applying fix is something along the lines of:

fix (\f x -> if x > 7 then f (x - 1) else x)

-- That is, using fix to magically produce a recursive function. (This never ceases to amaze me.) This may be regarded as choosing between 2 functions: \f x -> f (x - 1) and \_ x -> x, one of which never evaluates its first bound variable. This is a dangerous toy, as we may still obtain a function that is non-terminating for half its domain as easily as by accidentally flipping > to < or - to +.
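To make this concrete, here is a small runnable sketch of the expression above (the name countDownTo7 is mine, just for illustration), assuming fix from Data.Function:

```haskell
import Data.Function (fix)

-- The recursive "count down to 7" function from above, built with fix.
countDownTo7 :: Int -> Int
countDownTo7 = fix (\f x -> if x > 7 then f (x - 1) else x)

main :: IO ()
main = do
  print (countDownTo7 10)  -- recurses 10, 9, 8, then returns 7
  print (countDownTo7 3)   -- already below 7, returned as-is: 3
```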

So, somehow, of these two functions:

f1 = \f x -> f (x - 1)

f2 = \_ x -> x

-- The latter is "least defined". If we squint hard, we can actually recognise our boring fellow const in it, just flip'd.

Now, this is counter-intuitive. f2 is actually more fault tolerant, so, in a sense, it's defined for more input values than f1. (In particular, this is what lets it escape the otherwise infinite loop of fix.) Specifically, f2 is defined for all the same pairs (f, x) of input as f1, plus the pairs (undefined, x). Along the same lines, const 1 is the most fault tolerant of all unary functions.
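This tolerance is directly observable — f2 never inspects its first argument, so even ⊥ is acceptable there, which is exactly why fix f2 terminates. A small sketch (f2 as defined above):

```haskell
import Data.Function (fix)

f2 :: (Int -> Int) -> Int -> Int
f2 = \_ x -> x

main :: IO ()
main = do
  -- f2 ignores its first argument, so passing ⊥ there is harmless.
  print (f2 undefined 5)
  -- Hence fix f2 = f2 (fix f2) = \x -> x, a terminating function.
  print (fix f2 5)
```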

So, how do I understand definedness now?


some justification for the question

There is this answer nearby that kind of offers a different intuition for fix than the one proposed in this question. It also emphasizes that for a full understanding one must pass to an external tutorial on denotational semantics.

I would like to have an answer that either supports, or explains the wrongs of, the intuition proposed here, and also, if domain theory really is behind the ordering put forward in the comments, includes at least some rules of thumb allowing a limited but practical application of domain theory. One particular part of the question that I'd like to see light shed upon is whether a fixable function can always be decomposed into a constant function and a reduction function, and what the definitions of these classes of functions would be.

My wish is for an actual, practical answer on building useful and safe fix-encoded recursion, backed by solid mathematical reasoning.

duplode
Ignat Insarov
  • Uh, I think that means least according to some *ordering*. – Davislor Feb 10 '18 at 01:44
  • Here it’s the least function `f` such that `\x -> if x > 7 then f (x - 1) else x` is equivalent to `f`. So I don’t think that says anything about picking between those two functions `f1` and `f2` you’ve given. – Ry- Feb 10 '18 at 01:45
  • "The latter is "least defined "" - this is not true (nor is it actually relevant to the definition of `fix` - that talks about the defindedness of `fix f` and not of `f`). If two terms contain no bottoms then they are equally defined. "There is some ambiguity here still as undefined may mean either "will throw an error" (the better) or "will never terminate" (the worse)." - there is no ambiguity, because you may not observe the difference in pure code. – user2407038 Feb 10 '18 at 01:48
  • @user2407038 For all practical purposes though, there is a life and death difference inside this purely unambiguous undefinedness. And we may as well infer it from code analysis. I'd even say we must. – Ignat Insarov Feb 10 '18 at 01:51
  • Possible duplicate of [How do I use fix, and how does it work?](https://stackoverflow.com/questions/4787421/how-do-i-use-fix-and-how-does-it-work) – Davislor Feb 10 '18 at 02:04
  • @Kindaro "For all practical purposes though, there is a life and death difference inside this purely unambiguous undefinedness." - I cannot parse this sentence. I don't believe it matters though; when you read "least defined" in this context it is explicitly referring to a semantics in which non-termination and errors are equally defined. You may talk of a different semantics and what `fix` means in a language with such different semantics; but that language is not Haskell. – user2407038 Feb 10 '18 at 02:05
  • 3
    In this context, “least defined” does not mean “most-undefined,” it means, “defined and lowest in a particular ordering of the domain.” Also, it applies to the argument, not the function. – Davislor Feb 10 '18 at 02:07
  • 3
    The ordering to which @Davislor refers to is detailed [here](https://en.wikibooks.org/wiki/Haskell/Denotational_semantics). – user2407038 Feb 10 '18 at 02:10
  • @Davislor I think it would be great to have some answer that details on this *ordering* rather than hinting upon it. I will be reading the link on denotational semantics, but still I'd rather have an (hopefully more practical) answer here as well. – Ignat Insarov Feb 10 '18 at 02:32
  • I’m not sure whether that’s a request for me specifically, or to the better-informed readers here on SX, but unfortunately, I’m not qualified to do it. – Davislor Feb 10 '18 at 02:39
  • Check out `fix (1:)`. – dfeuer Feb 10 '18 at 11:01

1 Answer


In Haskell, functions are pure. They take input and yield output. So the natural question arises: what about functions that don't terminate?

f x = 1 + f x

This function will lock your interpreter in an infinite loop. But mathematically, we have to say it "returns" something, or else it's not a function. We call this special "something went wrong" value the "bottom" value, and we denote it with the symbol ⊥.

So, mathematically speaking, the Haskell Int type contains every integer, plus an additional special element: ⊥. We call a type which contains ⊥ a "lifted" type, and pretty much all types you'll use in Haskell are lifted.[1] As it turns out, an infinite loop is not the only way to invoke ⊥. We can also do so by "crashing" the interpreter in other ways. The most common way you'll see is with undefined, a built-in value which halts the program with a generic error.
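Because Haskell is non-strict, a ⊥ lurking inside a structure is harmless until it is actually inspected. A small illustration of this (nothing here is specific to fix; it's plain laziness):

```haskell
main :: IO ()
main = do
  -- The list's *spine* is fully defined even though its elements are ⊥;
  -- length walks only the spine, so the ⊥s are never forced.
  print (length [undefined, undefined])
  -- const never looks at its second argument, so a ⊥ there is fine too.
  print (const (1 :: Int) undefined)
```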

But there's another problem. Specifically, the halting problem. If ⊥ is supposed to represent infinite loops and other unpredictable issues, there are certain functions we can't write in Haskell. For example, the following pseudocode is nonsensical.

doesItHalt :: a -> Bool
doesItHalt ⊥ = False
doesItHalt _ = True

This would check if the resulting value is ⊥ or not, which would solve the halting problem. Clearly, we can't do this in Haskell. So what functions can we define in Haskell? We can define those which are monotonic with respect to the definedness ordering. We define ⊑ to be this ordering. We say ⊥ is the least-defined value, so ⊥ ⊑ x for all x. ⊑ is a partial ordering, so some elements cannot be compared. While 1 ⊑ 1, it is not true that either 1 ⊑ 2 or 2 ⊑ 1. In plain English, we're saying that 1 is certainly less-than-or-equally-defined-as 1 (clearly; they're the same value), but it doesn't make sense to say 1 is more or less defined than 2. They're just... different values.

In Haskell, we can only define functions which are monotonic with respect to this ordering. So we can only define functions for which, for all values a and b, if a ⊑ b then f a ⊑ f b. Our above doesItHalt fails, since, for instance, ⊥ ⊑ "foo" but f ⊥ = False and f "foo" = True. Like we said before, two fully defined but non-equal values are not comparable. So this function fails to work.

In simple terms, the reason we define the ordering to be this way is because, when we "inspect" a value using pattern matching, that acts as an assertion that it must be at least defined enough for us to see the parts we matched on. If it's not, then we'll always get ⊥, since the program will crash.

It's worth noting, before we discuss fix, that there are "partially defined" values. For example, 1 : ⊥ (in Haskell, we could write this as 1 : undefined) is a list whose first element is defined but for which the tail of the list is undefined. In some sense, this value is "more defined" than a simple ⊥, since we can at least pattern match and extract the first value. So we would say ⊥ ⊑ (1 : ⊥) ⊑ (1 : []). Thus, we end up with a hierarchy of "definedness".
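We can poke at such a partially defined list directly, as long as we only demand its defined prefix — a sketch:

```haskell
partial :: [Int]
partial = 1 : undefined  -- the value 1 : ⊥ from the text

main :: IO ()
main = do
  print (head partial)    -- the head is defined, so this prints 1
  print (take 1 partial)  -- [1]; take 1 never forces the ⊥ tail
```

Demanding the tail (say, with length partial) would, of course, hit the ⊥ and crash.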

Now, fix says it returns the least-defined fixed point. A fixed point of a function f is a value x such that x = f x. Let's try it out with a few examples and see if we can figure out why it says it that way. Let's define a function

f 0 = 1
f x = x

This function has a lot of fixed points. For any x other than zero, f x = x. By our "least-defined" principle, which one will be returned? ⊥ will, actually, since f ⊥ will return ⊥. If we pass undefined to f, the first pattern match will cause the program to crash, since we're comparing an undefined value against 0. So ⊥ is a fixed-point, and since it's the least possible defined value, it will be returned by fix f. Internally, fix works by applying the function to itself infinitely, so this makes some amount of sense. Applying f (f (f ...)) will keep trying to compare the inner argument against 0 and will never terminate.

g x = 0

Applying this function to itself infinitely gives 0. As it turns out, fix g = 0. Why was 0 returned in this case and not ⊥? As it turns out, ⊥ fails to be a fixed-point of this function: g ⊥ = 0. Since the argument is never inspected and Haskell is a non-strict language, g will return 0, even if you pass undefined or error "Oops!" or some absurd infinitely recursing value as an argument. Thus, the only fixed point of g, even over a lifted type, is 0, since g 0 = 0. Thus, 0 is truly the least-defined fixed point of g.
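We can check this, along with the fix (1:) example mentioned in the comments, in a small runnable sketch (assuming fix from Data.Function):

```haskell
import Data.Function (fix)

g :: Int -> Int
g x = 0

-- fix (1:) = 1 : fix (1:), an infinite but productively defined list.
ones :: [Int]
ones = fix (1:)

main :: IO ()
main = do
  print (fix g)        -- 0, the sole fixed point of g
  print (take 5 ones)  -- [1,1,1,1,1]; any finite prefix is available
```

Note that fix (1:) is another case where the least fixed point is not ⊥: the result is "partially defined all the way down", and laziness lets us consume as much of it as we like.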

So, in summary, we define a mathematical framework in order to rigorously describe the notion that certain functions don't terminate. "least-defined" is just a very mathematically precise way of saying that fix won't work on functions which are always completely strict in their argument. If f ⊥ is going to return ⊥ then fix f is also going to return ⊥.


* Most of this answer was paraphrased and summarized from the wiki page on Denotational Semantics. I encourage reading that page for more information; it's written to be very understandable to non-mathematicians and is very informative.

[1] A few primitive types are unlifted. For example, the GHC-specific Int# is an integer type that does not contain ⊥ and is used internally in some places.

Silvio Mayolo