50

Both Agda and Idris effectively prohibit pattern matching on values of type Type. It seems that Agda always matches on the first case, while Idris just throws an error.
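
Concretely, the kind of definition in question would look something like the following Agda sketch (the name isNat is only illustrative); the commented-out clauses are exactly what gets rejected:

data Bool : Set where
  true false : Bool

data Nat : Set where
  zero : Nat
  suc  : Nat → Nat

-- "Typecase" would mean deciding behaviour by matching on a type itself.
-- The clauses below are not legal Agda: Nat and Bool are not constructors
-- of Set, so there are no constructor patterns for types.
--
-- isNat : Set → Bool
-- isNat Nat = true
-- isNat _   = false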

So, why is typecase a bad thing? Does it break consistency? I haven't been able to find much information regarding the topic.

Bartłomiej Semańczyk
András Kovács

3 Answers

56

It's really odd that people think pattern matching on types is bad. We get a lot of mileage out of pattern matching on data which encode types, whenever we do a universe construction. If you take the approach that Thorsten Altenkirch and I pioneered (and which my comrades and I began to engineer), the types do form a closed universe, so you don't even need to solve the (frankly worth solving) problem of computing with open datatypes to treat types as data.

If we could pattern match on types directly, we wouldn't need a decoding function to map type codes to their meanings, which at worst reduces clutter, and at best reduces the need to prove and coerce by equational laws about the behaviour of the decoding function. I have every intention of building a no-middleman closed type theory this way.

Of course, you need level 0 types to inhabit a level 1 datatype. That happens as a matter of course when you build an inductive-recursive universe hierarchy.
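
For a concrete picture of what such a universe construction looks like, here is a minimal inductive-recursive universe in Agda (my own sketch, not code from the answer): type codes are ordinary data, a decoding function El maps codes to types, and "typecase" is just pattern matching on the codes.

data Nat : Set where
  zero : Nat
  suc  : Nat → Nat

data Bool : Set where
  true false : Bool

-- An inductive-recursive universe: codes in U, decoded to types by El.
mutual
  data U : Set where
    nat  : U
    bool : U
    pi   : (a : U) → (El a → U) → U

  El : U → Set
  El nat      = Nat
  El bool     = Bool
  El (pi a b) = (x : El a) → El (b x)

-- Typecase by proxy: match on codes rather than on types themselves.
-- For example, a canonical inhabitant at every code:
default : (a : U) → El a
default nat      = zero
default bool     = true
default (pi a b) = λ x → default (b x)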

But what about parametricity, I hear you ask?

Firstly, I don't want parametricity when I'm trying to write type-generic code. Don't force parametricity on me.

Secondly, why should types be the only things we're parametric in? Why shouldn't we sometimes be parametric in other stuff, e.g., perfectly ordinary type indices which inhabit datatypes but which we'd prefer not to have at run time? It's a real nuisance that quantities which play a part only in specification are, just because of their type, forced to be present.

The type of a domain has nothing whatsoever to do with whether quantification over it should be parametric.

Let's have (e.g. as proposed by Bernardy and friends) a discipline where both parametric/erasable and non-parametric/matchable quantification are distinct and both available. Then types can be data and we can still say what we mean.
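
Something approximating that discipline already exists in the form of erasure annotations. The following is a rough sketch of my own (not from the answer), assuming a recent Agda in which erasure is enabled with the --erasure flag: the length index below is quantified erasably, while the vector itself remains ordinary, matchable data.

{-# OPTIONS --erasure #-}

data Nat : Set where
  zero : Nat
  suc  : Nat → Nat

-- The @0 annotation marks an argument as erased: it may appear in types
-- and specifications, but it is absent at run time and cannot be inspected
-- by run-time-relevant code.
data Vec (A : Set) : Nat → Set where
  []  : Vec A zero
  _∷_ : {@0 n : Nat} → A → Vec A n → Vec A (suc n)

-- The vector is ordinary, matchable data; the erased length index rides
-- along in the type only.
length : {A : Set} {@0 n : Nat} → Vec A n → Nat
length []       = zero
length (x ∷ xs) = suc (length xs)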

pigworker
  • 4
    Very nice! By the last paragraph, do you mean that we could have two distinct dependent quantifiers, one parametric and one indexing, like in the Dependent Haskell proposal? – András Kovács Sep 24 '14 at 20:43
  • 6
    Yes. Or more than two. There are so many ways to profit from directed information flow. Parametricity in System F stems from quantification against the flow of information, so you don't actually get the stuff you abstract over, so you're bound to treat it uniformly. In dependent type theory as we know it, we aren't prevented from accessing types by inherent information flow restrictions: it's just that there's bugger all we can do with the information when we've got it. Type Theory In Color is a big step in the right direction, and there's a lot more to come. – pigworker Sep 24 '14 at 22:17
  • 2
    I think this is a great answer about the possible roles of parametricity in ongoing and future language design. I somewhat adapted my answer below to account for the role of parametricity in the design of already existing languages. – Toxaris Sep 26 '14 at 09:59
17

Many people see matching on types as bad because it breaks parametricity for types.

In a language with parametricity for types, when you see a variable

f : forall a . a -> a

you immediately know a lot about the possible values of f. Intuitively: Since f is a function, it can be written:

f x = body

The body needs to be of type a, but a is unknown, so the only available value of type a is x. If the language allows nontermination, f could also loop. But can it make the choice between looping or returning x based on the value of x? No, because a is unknown, f doesn't know which functions to call on x in order to make the decision. So there are really just two options: f x = x and f x = f x. This is a powerful theorem about the behavior of f that we get just by looking at the type of f. Similar reasoning works for all types with universally quantified type variables.
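
For reference, here is the same type rendered in Agda-style dependent notation (my transcription, not part of the original answer); up to nontermination, the identity is the only total implementation:

-- The parametric identity type: f may not inspect A, so it can only return x.
f : (A : Set) → A → A
f A x = x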

Now if f could match on the type a, many more implementations of f are possible. So we would lose the powerful theorem.
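
To make that concrete, here is a standalone Agda sketch of my own: once f is handed a code it can match on, it is free to behave differently at different types, and the theorem above is lost.

data Nat : Set where
  zero : Nat
  suc  : Nat → Nat

data Bool : Set where
  true false : Bool

-- A closed collection of type codes that f is allowed to inspect.
data Code : Set where
  nat bool : Code

El : Code → Set
El nat  = Nat
El bool = Bool

-- With the code in hand, f need not be the identity: it singles out Nat.
f : (a : Code) → El a → El a
f nat  x = suc x
f bool x = x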

greatBigDot
Toxaris
  • 5
    Also, no typecase means you get to erase types during compilation, which leads to more efficient programs. – Vitus Apr 22 '14 at 18:39
  • 2
    @Vitus: I think there might be other reasons why you can't erase types. For example, maybe the target language uses a type case to implement different calling conventions for different instantiations of a parametric function. If the choice of calling convention is invisible to the source language, you would still have parametricity there. – Toxaris Apr 23 '14 at 22:10
  • 4
    Good point. Perhaps I should have said it the other way around - type case implies no type erasure. – Vitus Apr 23 '14 at 23:32
  • 3
    can we say that we could lose "free theorems"? – sinan Sep 24 '14 at 10:13
  • 3
    @sinan: Yes, the "powerful theorems about the behavior of `f`" in my answer are called "free theorems" by Wadler. – Toxaris Sep 26 '14 at 20:34
1

In Agda, you cannot pattern match on Set because it isn't an inductive type.

asr