47

The type blows my mind:

class Contravariant (f :: * -> *) where
  contramap :: (a -> b) -> f b -> f a

Then I read this, but contrary to the title, I wasn't any more enlightened.

Can someone please give an explanation of what a contravariant functor is and some examples?

rityzmon

  • First result on Google for "contravariant functor example": [a blog post about the `contravariant` package by Tom Ellis on Oliver Charles' website](https://ocharles.org.uk/blog/guest-posts/2013-12-21-24-days-of-hackage-contravariant.html), which describes both a trivial example and a more elaborate and useful example of a contravariant functor. –  Jun 26 '16 at 01:12

5 Answers

61

From a programmer's point of view the essence of functor-ness is being able to easily adapt things. What I mean by "adapt" here is that if I have an f a and I need an f b, I'd like an adaptor that will fit my f a in my f b-shaped hole.

It seems intuitive that if I can turn an a into a b, then I might be able to turn an f a into an f b. And indeed that's the pattern that Haskell's Functor class embodies; if I supply an a -> b function then fmap lets me adapt f a things into f b things, without worrying about whatever f involves.1

Of course we're talking about parameterised types like list-of-x [x], Maybe y, or IO z here, and the thing we get to change with our adaptors is the x, y, or z in those. If we want the flexibility to get an adaptor from any possible function a -> b then of course the thing we're adapting has to be equally applicable to any possible type.

What is less intuitive (at first) is that there are some types which can be adapted almost exactly the same way as functory ones, only they're "backwards"; for these, if we want to adapt an f a to fill a need for an f b, we actually need to supply a b -> a function, not an a -> b one!

My favourite concrete example is actually the function type a -> r (a for argument, r for result); all of this abstract nonsense makes perfect sense when applied to functions (and if you've done any substantial programming you've almost certainly used these concepts without knowing the terminology or how widely-applicable they are), and the two notions are so obviously dual to each other in this context.

It's fairly well known that a -> r is a functor in r. This makes sense; if I've got an a -> r and I need an a -> s, then I could use an r -> s function to adapt my original function simply by post-processing the result.2

If, on the other hand, I have an a -> r function and what I need is a b -> r, then again it's clear that I can address my need by pre-processing arguments before passing them to the original function. But what do I pre-process them with? The original function is a black box; no matter what I do it's always expecting a inputs. So I need to turn my b values into the a values it expects: my pre-processing adaptor needs a b -> a function.

What we've just seen is that the function type a -> r is a covariant functor in r, and a contravariant functor in a. I think of this as saying we can adapt a function's result, and the result type "changes with" the adaptor r -> s, while when we adapt a function's argument the argument type changes "in the opposite direction" to the adaptor.

Interestingly, the implementation of the function-result fmap and the function-argument contramap are almost exactly the same thing: just function composition (the . operator)! The only difference is on which side you compose the adaptor function:3

fmap :: (r -> s) -> (a -> r) -> (a -> s)
fmap adaptor f = adaptor . f
fmap adaptor = (adaptor .)
fmap = (.)

contramap' :: (b -> a) -> (a -> r) -> (b -> r)
contramap' adaptor f = f . adaptor
contramap' adaptor = (. adaptor)
contramap' = flip (.)

I consider the second definition from each block the most insightful; (covariantly) mapping over a function's result is composition on the left (post-composition if we want to take a "this-happens-after-that" view), while contravariantly mapping over a function's argument is composition on the right (pre-composition).
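To see both directions in action, here's a tiny, hypothetical usage example (describe, starCount and describeChar are made-up names; contramap' is the version defined just above, and fmap is the Prelude's, at the standard Functor instance for functions):

import Data.Char (ord)

describe :: Int -> String
describe n = replicate n '*'

-- Covariant: post-process describe's result with length :: String -> Int.
starCount :: Int -> Int
starCount = fmap length describe        -- fmap for functions is just (.)

-- Contravariant: pre-process describe's argument with ord :: Char -> Int.
describeChar :: Char -> String
describeChar = contramap' ord describe  -- i.e. describe . ord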

This intuition generalises pretty well; if an f x structure can give us values of type x (just like an a -> r function gives us r values, at least potentially), it might be a covariant Functor in x, and we could use an x -> y function to adapt it into being an f y. But if an f x structure receives values of type x from us (again, like an a -> r function's argument of type a), then it might be a Contravariant functor and we'd need to use a y -> x function to adapt it to being an f y.

I find it interesting to reflect that this "sources are covariant, destinations are contravariant" intuition reverses when you're thinking from the perspective of an implementer of the source/destination rather than a caller. If I'm trying to implement an f x that receives x values I can "adapt my own interface" so I get to work with y values instead (while still presenting the "receives x values" interface to my callers) by using an x -> y function. Usually we don't think this way around; even as the implementer of the f x I think about adapting the things I'm calling rather than "adapting my caller's interface to me". But it's another perspective you can take.

The only semi-real-world use I've made of Contravariant (as opposed to implicitly using the contravariance of functions in their arguments by using composition-on-the-right, which is very common) was for a type Serialiser a that could serialise a values. Serialiser had to be a Contravariant rather than a Functor; given I can serialise Foos, I can also serialise Bars if I have a Bar -> Foo function.4 But when you realise that Serialiser a is basically a -> ByteString it becomes obvious; I'm just repeating a special case of the a -> r example.
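Here's a minimal sketch of that Serialiser idea, under the assumption that it really is just a -> ByteString under the hood (all the names here are made up for illustration):

import Data.ByteString.Char8 (ByteString, pack)
import Data.Functor.Contravariant (Contravariant (..))

newtype Serialiser a = Serialiser { runSerialiser :: a -> ByteString }

instance Contravariant Serialiser where
  -- To serialise a b, first turn it into an a, then serialise that.
  contramap f (Serialiser s) = Serialiser (s . f)

newtype Foo = Foo Int
newtype Bar = Bar Int

fooSerialiser :: Serialiser Foo
fooSerialiser = Serialiser (\(Foo n) -> pack (show n))

barSerialiser :: Serialiser Bar
barSerialiser = contramap (\(Bar n) -> Foo n) fooSerialiser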

In pure functional programming, there's not much use in having something that "receives values" without it also giving something back, so all the contravariant functors tend to look like functions; but nearly any straightforward data structure that can contain values of an arbitrary type will be a covariant functor in that type parameter. This is why Functor stole the good name early and is used all over the place (well, that and the fact that Functor was recognised as a fundamental part of Monad, which was already in wide use before Functor was defined as a class in Haskell).

In imperative OO I believe contravariant functors may be significantly more common (though not abstracted over with a unified framework like Contravariant). But mutability and side effects also make it very easy for a parameterised type to be no functor at all: your standard container of a that is both readable and writable is both an emitter and a sink of a, and rather than making it both covariant and contravariant, that turns out to make it neither.


1 The Functor instance of each individual f says how to apply arbitrary functions to the particular form of that f, without worrying about the particular types f is being applied to; a nice separation of concerns.

2 This functor is also a monad, equivalent to the Reader monad. I'm not going to go beyond functors in detail here, but given the rest of my post an obvious question would be "is the a -> r type also some sort of contravariant monad in a then?". Contravariance doesn't apply to monads unfortunately (see Are there contravariant monads?), but there is a contravariant analogue of Applicative: https://hackage.haskell.org/package/contravariant-1.4/docs/Data-Functor-Contravariant-Divisible.html
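For a taste of that contravariant analogue of Applicative, here is a small, hypothetical example using the Divisible instance of Predicate from the contravariant package (inRange is a made-up name; divide splits an input and feeds the pieces to two consumers, and for Predicate it combines the sub-results with (&&)):

import Data.Functor.Contravariant (Predicate (..))
import Data.Functor.Contravariant.Divisible (divide)

-- Duplicate the input, test each copy against one bound.
inRange :: Int -> Int -> Predicate Int
inRange lo hi = divide (\x -> (x, x)) (Predicate (>= lo)) (Predicate (<= hi))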

3 Note that my contramap' here doesn't match the actual contramap from Contravariant as implemented in Haskell; you can't make a -> r an actual instance of Contravariant in Haskell code, simply because the a is not the last type parameter of (->). Conceptually it works perfectly well, and you can always use a newtype wrapper to swap the type parameters and make that an instance (the contravariant package defines the Op type for exactly this purpose).

4 At least for a definition of "serialise" that doesn't necessarily include being able to reconstruct the Bar later, since it would serialise a Bar identically to the Foo it mapped to, with no way to include any information about what the mapping was.

Ben
  • I am struggling to follow your explanation. Let me ask a question. What **rigorously** do you mean by saying that "the function type a -> r is a covariant functor in r, and a contravariant functor in a"? Since a functor is a mapping between categories, I assume both 'a' and 'b' to be distinct categories; however, are they? It seems to me like a -> b exists within **the same category** of SET (HASK, if you wish). Can you give a bit more explanation please? Contra/co-variance is a theoretical headache for me indeed. – Zazaeil Jul 11 '18 at 17:11
  • @SerejaBogolubov I tried to answer your question, but I just can't fit it into a comment (and it's not really appropriate comment material anyway). I think your confusion is that my answer is written from the Haskell/programmer's perspective on co- and contra-variant functors as represented by the Haskell typeclasses `Functor` and `Contravariant`. It isn't really a good discussion of co- and contra-variant functors from the category theory perspective (which is more general). I would be very happy to explain how the two viewpoints connect if you ask as a question instead of as a comment. – Ben Jul 31 '18 at 02:18
  • @ben Did a question about the connection between the progammer's viewpoint of contravariance and the category theory viewpoint of contravariance get asked? I'd love to read your answer to that. I'm trying to gain an intuition of that connection myself. – alx9r Dec 13 '18 at 14:24
  • @alx9r I didn't see one (but I may have missed it). If you can't find one already existing, feel free to ask one yourself! If you ping me here I'll do my best to answer (though there are lots of other people on SO with a better grasp of the category theory than me) – Ben Dec 14 '18 at 07:58
  • @ben I asked [this question](https://stackoverflow.com/questions/53854853/why-is-there-a-distinction-between-co-and-contravariant-functors-in-haskell-but). Hopefully I'm on the right track. – alx9r Dec 19 '18 at 15:58
17

First of all, @haoformayor's answer is excellent, so consider this more an addendum than a full answer.

Definition

One way I like to think about Functor (co/contravariant) is in terms of diagrams. The definitions are reflected in the following ones. (I am abbreviating contramap with cmap.)

      covariant                           contravariant
f a ─── fmap φ ───▶ f b             g a ◀─── cmap φ ─── g b
 ▲                   ▲               ▲                   ▲
 │                   │               │                   │
 │                   │               │                   │
 a ────── φ ───────▶ b               a ─────── φ ──────▶ b

Note that the only change between those two definitions is the direction of the top arrow (well, and the names, so I can refer to them as different things).

Example

The example I always have in my head when speaking about these is functions - an example of f would be type F a = forall r. r -> a (meaning the first argument is an arbitrary but fixed r), or in other words all functions with a common input. As always, the instance for (covariant) Functor is just fmap ψ φ = ψ . φ.

The (contravariant) functor, on the other hand, is all functions with a common result - type G a = forall r. a -> r - and here the Contravariant instance would be cmap ψ φ = φ . ψ.
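A sketch of those two instances in compilable form, with the "arbitrary but fixed" r made an explicit parameter (From and Into are made-up names standing in for F and G):

import Data.Functor.Contravariant (Contravariant (..))

-- All functions out of a fixed r: covariant in a.
newtype From r a = From (r -> a)

instance Functor (From r) where
  fmap psi (From phi) = From (psi . phi)       -- post-compose

-- All functions into a fixed r: contravariant in a.
newtype Into r a = Into (a -> r)

instance Contravariant (Into r) where
  contramap psi (Into phi) = Into (phi . psi)  -- pre-compose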

But what the hell does this mean?

φ :: a -> b and ψ :: b -> c

usually, therefore, (ψ . φ) x = ψ (φ x) - i.e. x ↦ y = φ x and then y ↦ ψ y - makes sense. What is omitted in the statement for cmap is that here

φ :: a -> b but ψ :: c -> a

so ψ cannot take the result of φ, but it can transform its argument into something φ can use - therefore x ↦ y = ψ x and then y ↦ φ y is the only correct choice.

This is reflected in the following diagrams, but here we have abstracted from the example of functions with a common source/target to anything that has the property of being covariant/contravariant - a thing you often see in mathematics and/or Haskell.

                 covariant
f a ─── fmap φ ───▶ f b ─── fmap ψ ───▶ f c
 ▲                   ▲                   ▲
 │                   │                   │
 │                   │                   │
 a ─────── φ ──────▶ b ─────── ψ ──────▶ c


               contravariant
g a ◀─── cmap φ ─── g b ◀─── cmap ψ ─── g c
 ▲                   ▲                   ▲
 │                   │                   │
 │                   │                   │
 a ─────── φ ──────▶ b ─────── ψ ──────▶ c

Remark:

In mathematics you usually require a law to hold in order to call something a functor.

        covariant
   a                        f a
  │  ╲                     │    ╲
φ │   ╲ ψ.φ   ══▷   fmap φ │     ╲ fmap (ψ.φ)
  ▼    ◀                   ▼      ◀  
  b ──▶ c                f b ────▶ f c
    ψ                       fmap ψ

       contravariant
   a                        f a
  │  ╲                     ▲    ▶
φ │   ╲ ψ.φ   ══▷   cmap φ │     ╲ cmap (ψ.φ)
  ▼    ◀                   │      ╲  
  b ──▶ c                f b ◀─── f c
    ψ                       cmap ψ

which is equivalent to saying

fmap ψ . fmap φ = fmap (ψ.φ)

whereas

cmap φ . cmap ψ = cmap (ψ.φ)
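For the function example above, the contravariant law is just associativity of composition; with cmap ψ φ = φ . ψ one can check it by hand:

(cmap φ . cmap ψ) χ = cmap φ (χ . ψ) = (χ . ψ) . φ = χ . (ψ . φ) = cmap (ψ . φ) χ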
epsilonhalbe
15

I know this answer won't be as deeply academic as the other ones, but it's simply based on the common implementations of Contravariant you'll come across.

First, a tip: Don't read the contramap function type using the same mental metaphor for f as you do when reading the good ol' Functor's fmap.

You know how you think:

"a thing that contains (or produces) an t"

...when you read a type like f t?

Well, you need to stop doing that, in this case.

The Contravariant functor is "the dual" of the classic functor, so when you see f a in contramap, you should think the "dual" metaphor:

f t is a thing that CONSUMES a t

Now contramap's type should start to make sense:

contramap :: (a -> b) -> f b ...

...pause right there, and the type is perfectly sensible:

  1. A function that "produces" a b.
  2. A thing that "consumes" a b.

First argument cooks the b. Second argument eats the b.

Makes sense, right?

Now finish writing the type:

contramap :: (a -> b) -> f b -> f a

So in the end this thing must yield a "consumer of a".

Well, surely we can build that, given that our first argument is a function that takes an a as input.

A function (a -> b) should be a good building block for building a "consumer of a".

So contramap basically lets you create a new "consumer", like this (warning: made up symbols incoming):

(takes a as input / produces b as output) ~~> (consumer of b)

  • On the left of my made up symbol: The first argument to contramap (i.e. (a -> b)).
  • On the right: The second argument (i.e. f b).
  • The whole thing glued together: The final output of contramap (a thing that knows how to consume an a, i.e. f a), as sketched in code below.
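Here is a minimal Haskell sketch of this "consumer" picture (Printer is a made-up name; its contramap just plugs the "cooking" function in front of the existing consumer):

import Data.Functor.Contravariant (Contravariant (..))

-- A consumer of t: eats a t and produces no new data.
newtype Printer t = Printer { runPrinter :: t -> IO () }

instance Contravariant Printer where
  -- Cook a b out of the a, then feed it to the existing consumer of b.
  contramap f (Printer p) = Printer (p . f)

intPrinter :: Printer Int
intPrinter = Printer print

-- A consumer of Strings, obtained by pre-processing with length :: String -> Int.
stringPrinter :: Printer String
stringPrinter = contramap length intPrinter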
Alexander
  • Amazing, this is by far the best "beginner" answer. Thank you so much! – Xanthir Aug 07 '19 at 17:14
  • Very good answer, +1! It helped me view this topic from another perspective. I've written my interpretation in [another answer I've just posted](https://stackoverflow.com/a/65870966/5825294). Maybe you like that point of view too? – Enlico Jan 24 '21 at 13:16
14

First, a note about our friend, the Functor class

You can think of Functor f as an assertion that a never appears in a "negative position". This is an esoteric term for the following idea: notice that in the datatypes below, the a appears to act as a "result" variable.

  • newtype IO a = IO (World -> (World, a))

  • newtype Identity a = Identity a

  • newtype List a = List (forall r. r -> (a -> List a -> r) -> r)

In each of these examples a appears in a positive position. In some sense the a for each type represents the "result" of a function. It might help to think of the a in the second example as () -> a. And it might help to remember that the third example is equivalent to data List a = Nil | Cons a (List a). In callbacks like a -> List a -> r the a appears in a negative position, but the callback itself is in a negative position, so negative and negative multiply to be positive.

This scheme for signing the parameters of a function is elaborated in this wonderful blog post.
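As a quick, hypothetical illustration of that sign rule (CallbackRunner is a made-up name): the a below is negative inside the callback a -> Int, but the callback itself sits in a negative position, so a is positive overall and a Functor instance exists:

newtype CallbackRunner a = CallbackRunner ((a -> Int) -> Int)

instance Functor CallbackRunner where
  -- Adapt the callback (pre-compose it with f) rather than a value.
  fmap f (CallbackRunner run) = CallbackRunner (\k -> run (k . f))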

Now note that each of these types admits a Functor. That is no mistake! Functors are meant to model the idea of categorical covariant functors, which "preserve the direction of the arrows", i.e. f a -> f b as opposed to f b -> f a. In Haskell, types where a never appears in a negative position always admit Functor. We say these types are covariant on a.

To put it another way, one could validly rename the Functor class to be Covariant. They are one and the same idea.

The reason this idea is worded so strangely, with the word "never", is that a can appear in both a positive and a negative location, in which case we say the type is invariant on a. a can also never appear at all (as in a phantom type), in which case we say the type is both covariant and contravariant on a – bivariant.
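A compact, hypothetical summary of the four cases (all type names made up):

-- a only in positive positions: covariant (admits Functor)
newtype Producer a = Producer (Int -> a)

-- a only in negative positions: contravariant (admits Contravariant)
newtype Consumer a = Consumer (a -> Int)

-- a in both: invariant (admits neither)
newtype Transformer a = Transformer (a -> a)

-- a absent (phantom): bivariant (admits both)
data Phantom a = Phantom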

Back to Contravariant

So for types where a never appears in the positive position we say the type is contravariant in a. Every such type Foo a will admit an instance Contravariant Foo. Here are some examples, taken from the contravariant package:

  • data Void a (a is phantom)
  • data Unit a = Unit (a is phantom again)
  • newtype Const constant a = Const constant
  • newtype WriteOnlyStateVariable a = WriteOnlyStateVariable (a -> IO ())
  • newtype Predicate a = Predicate (a -> Bool)
  • newtype Equivalence a = Equivalence (a -> a -> Bool)

In these examples a is either bivariant or merely contravariant. a either never appears or is negative (in these contrived examples a always appears before an arrow so determining this is extra-simple). As a result, each of these types admit an instance Contravariant.

A more intuitive exercise would be to squint at these types (which exhibit contravariance) and then squint at the types above (which exhibit covariance) and see if you can intuit a difference in the semantic meaning of a. Maybe that is helpful, or maybe it is just still abstruse sleight of hand.

When might these be practically useful? Suppose, for example, we want to partition a list of cookies by what kind of chips they have. We have a chipEquality :: Chip -> Chip -> Bool. To obtain a Cookie -> Cookie -> Bool, we simply evaluate getEquivalence . contramap cookie2chip . Equivalence $ chipEquality.

Pretty verbose! But solving the problem of newtype-induced verbosity will have to be another question...
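For the record, here is a sketch of how the whole cookie example might be spelled out (Chip, Cookie and cookieChip are made-up stand-ins for the cookie2chip above):

import Data.Functor.Contravariant (Equivalence (..), contramap)
import Data.List (groupBy)

data Chip = Chocolate | Butterscotch deriving Eq
newtype Cookie = Cookie { cookieChip :: Chip }

chipEquality :: Chip -> Chip -> Bool
chipEquality = (==)

cookieEquality :: Cookie -> Cookie -> Bool
cookieEquality = getEquivalence . contramap cookieChip . Equivalence $ chipEquality

-- groupBy only merges adjacent equal elements, so sort by chip first in real code.
partitionCookies :: [Cookie] -> [[Cookie]]
partitionCookies = groupBy cookieEquality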


hao
0

Another view on the topic, limited to functions seen as contravariant functors. (See also this.)

Functions as containers (therefore Functors) of their result

A function f of type a -> b can be thought of as containing a value of type b, which we get access to when we feed a value of type a to f.

Now, things which are containers of other things can be made Functors, in the sense that we can apply a function g to their content, by applying fmap g to the functor itself.

Therefore f, which is of type a -> b, can be seen as a functor in b, i.e. (->) a can be made a Functor. To do so, we need to define fmap: fmapping a function g on the "content" of f essentially means applying g to whatever f returns (once it's fed an input of type a, obviously), which means that fmap g f = \x -> g (f x) or, more concisely, fmap g f = g . f.

fmapping on a -> b functions = post-processing their result of type b

As a last thought: a function of type a -> b is a functor in b because we can post-process it by means of a function b -> c (where c is just another type).

contramapping on a -> b functions = pre-processing an input to get a

But what if we want to use a function g (of type c -> a) to pre-process a value of some type c to obtain the value of type a that we want to feed to f?

Well, it's clear that in this case, we want g to act before f, i.e. we are looking for f . g.

And we want f . g to be the "implementation" of the concept of "mapping g on f". In other words we want whichmap g f = f . g.

Guess what? whichmap is actually the implementation of contramap for functions! And contramap is what you have to implement to make some type an instance of the Contravariant functor typeclass.

Well, not really an instance for the function type itself...

Actually there isn't exactly an instance of Contravariant mirroring instance Functor ((->) r); I believe that's just because instance Contravariant (-> r) / instance Contravariant (flip (->) r) are invalid syntax - there is no way to partially apply (->) to its second argument. So another type is created, via

newtype Op a b = Op { getOp :: b -> a }

and this is made an instance of Contravariant:

instance Contravariant (Op a) where
  contramap f g = Op (getOp g . f)

The two chunks of code above are taken from Hackage.
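A quick, hypothetical illustration of Op in use:

import Data.Functor.Contravariant (Op (..), contramap)

-- Op String a wraps a -> String: a formatter that consumes a's.
showInt :: Op String Int
showInt = Op show

-- Pre-process with length :: [a] -> Int to get a formatter for lists.
showLength :: Op String [a]
showLength = contramap length showInt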

The example at the top of this page is very illuminating too.

Enlico