
The Applicative typeclass represents lax monoidal functors with respect to the cartesian monoidal structure on the category of typed functions.

In other words, given the canonical isomorphisms witnessing that (,) forms a monoidal structure:

-- Implementations left to the motivated reader
assoc_fwd :: ((a, b), c) -> (a, (b, c))
assoc_bwd :: (a, (b, c)) -> ((a, b), c)

lunit_fwd :: ((), a) -> a
lunit_bwd :: a -> ((), a)

runit_fwd :: (a, ()) -> a
runit_bwd :: a -> (a, ())
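
(Spoiling the exercise slightly: these witnesses are forced by the types, so for reference, one possible set of implementations.)

```haskell
-- The canonical isomorphisms witnessing the (,) monoidal structure.
-- Each is determined up to extensional equality by its type.
assoc_fwd :: ((a, b), c) -> (a, (b, c))
assoc_fwd ((a, b), c) = (a, (b, c))

assoc_bwd :: (a, (b, c)) -> ((a, b), c)
assoc_bwd (a, (b, c)) = ((a, b), c)

lunit_fwd :: ((), a) -> a
lunit_fwd ((), a) = a

lunit_bwd :: a -> ((), a)
lunit_bwd a = ((), a)

runit_fwd :: (a, ()) -> a
runit_fwd (a, ()) = a

runit_bwd :: a -> (a, ())
runit_bwd a = (a, ())
```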

The typeclass and its laws can equivalently be written like this:

class Functor f => Applicative f
  where
  zip :: (f a, f b) -> f (a, b)
  husk :: () -> f ()

-- Laws:

-- assoc_fwd >>> bimap id zip >>> zip
-- =
-- bimap zip id >>> zip >>> fmap assoc_fwd

-- lunit_fwd
-- =
-- bimap husk id >>> zip >>> fmap lunit_fwd

-- runit_fwd
-- =
-- bimap id husk >>> zip >>> fmap runit_fwd
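
For readers more at home with the standard pure/<*> presentation: the two formulations are interderivable. A sketch, with the class renamed Monoidal to avoid clashing with the real Applicative, and a Maybe instance purely for illustration:

```haskell
import Prelude hiding (zip)

-- The monoidal presentation, as a standalone class
class Functor f => Monoidal f where
  zip  :: (f a, f b) -> f (a, b)
  husk :: () -> f ()

-- The standard operations are recoverable from zip/husk...
pure' :: Monoidal f => a -> f a
pure' a = fmap (const a) (husk ())

ap' :: Monoidal f => f (a -> b) -> f a -> f b
ap' ff fa = fmap (\(g, a) -> g a) (zip (ff, fa))

-- ...and conversely, zip/husk from any Applicative:
zipA :: Applicative f => (f a, f b) -> f (a, b)
zipA (fa, fb) = (,) <$> fa <*> fb

huskA :: Applicative f => () -> f ()
huskA = pure

-- Example instance for Maybe
instance Monoidal Maybe where
  zip (Just a, Just b) = Just (a, b)
  zip _                = Nothing
  husk ()              = Just ()
```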

One might wonder what a functor that is oplax monoidal with respect to the same structure might look like:

class Functor f => OpApplicative f
  where
  unzip :: f (a, b) -> (f a, f b)
  unhusk :: f () -> ()

-- Laws:

-- assoc_bwd <<< bimap id unzip <<< unzip
-- =
-- bimap unzip id <<< unzip <<< fmap assoc_bwd

-- lunit_bwd
-- =
-- bimap unhusk id <<< unzip <<< fmap lunit_bwd

-- runit_bwd
-- =
-- bimap id unhusk <<< unzip <<< fmap runit_bwd

If we think about the types involved in the definitions and laws, the disappointing truth is revealed; OpApplicative is no more specific a constraint than Functor:

instance Functor f => OpApplicative f
  where
  unzip fab = (fst <$> fab, snd <$> fab)
  unhusk = const ()
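
Concretely, this free instance just projects out each component. Written as standalone helpers (unzipF/unhuskF, to avoid clashing with the class methods):

```haskell
-- The free OpApplicative operations available to any Functor
unzipF :: Functor f => f (a, b) -> (f a, f b)
unzipF fab = (fst <$> fab, snd <$> fab)

unhuskF :: Functor f => f () -> ()
unhuskF = const ()
```

For example, unzipF [(1,'a'),(2,'b')] is ([1,2],"ab").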

However, while every Applicative functor (indeed, any Functor) is trivially OpApplicative, there is no guarantee of a nice relationship between the Applicative laxities and the OpApplicative oplaxities. So we can look for strong monoidal functors with respect to the cartesian monoidal structure:

class (Applicative f, OpApplicative f) => StrongApplicative f

-- Laws:
-- unhusk . husk = id
-- husk . unhusk = id
-- zip . unzip = id
-- unzip . zip = id

The first law above is trivial, since the only inhabitant of the type () -> () is the identity function on ().

However, the remaining three laws, and hence the subclass itself, are not trivial. Specifically, not every Applicative is a lawful instance of this class.
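
For instance, Maybe satisfies unhusk . husk = id but breaks husk . unhusk = id, since unhusk must forget whether its argument was Nothing. A quick check, with the instance's operations written as standalone (hypothetical) helpers:

```haskell
-- Maybe's husk/unhusk, with husk defined via pure
huskMaybe :: () -> Maybe ()
huskMaybe = pure  -- i.e. Just

unhuskMaybe :: Maybe () -> ()
unhuskMaybe = const ()

-- husk . unhusk sends Nothing to Just (), so it is not the identity:
brokenLaw :: Bool
brokenLaw = huskMaybe (unhuskMaybe Nothing) /= Nothing
```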

Here are some Applicative functors for which we can declare lawful instances of StrongApplicative:

  • Identity
  • VoidF
  • (->) r
  • Monoid m => (,) m (see answers)
  • Vec (n :: Nat)
  • Stream (infinite)
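
As the simplest of these, here is what the four operations look like for Identity, written as standalone functions; all four StrongApplicative laws hold by mere pattern matching:

```haskell
import Data.Functor.Identity (Identity (..))

zipI :: (Identity a, Identity b) -> Identity (a, b)
zipI (Identity a, Identity b) = Identity (a, b)

huskI :: () -> Identity ()
huskI = Identity

unzipI :: Identity (a, b) -> (Identity a, Identity b)
unzipI (Identity (a, b)) = (Identity a, Identity b)

unhuskI :: Identity () -> ()
unhuskI _ = ()
```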

And here are some Applicatives for which we cannot:

  • []
  • Either e
  • Maybe
  • NonEmptyList

The pattern here suggests that the StrongApplicative class is in a sense the FixedSize class, where "fixed size" * means that the multiplicity ** of inhabitants of a in an inhabitant of f a is fixed.

This can be stated as two conjectures:

  • Every Applicative representing a "fixed size" container of elements of its type argument is an instance of StrongApplicative
  • No instance of StrongApplicative exists in which the number of occurrences of a can vary

Can anyone think of counterexamples that disprove these conjectures, or some convincing reasoning that demonstrates why they are true or false?


* I realize that I haven't properly defined the adjective "fixed size". Unfortunately the task is a little bit circular. I don't know of any formal description of a "fixed size" container, and am trying to come up with one. StrongApplicative is my best attempt so far.

In order to evaluate whether this is a good definition however, I need something to compare it to. Given some formal/informal definition of what it means for a functor to have a given size or multiplicity with respect to inhabitants of its type argument, the question is whether the existence of a StrongApplicative instance precisely distinguishes functors of fixed and varying size.

Not being aware of an existing formal definition, I'm making an appeal to intuition in my usage of the term "fixed size". However if someone already knows of an existing formalism for the size of a functor and can compare StrongApplicative to it, so much the better.

** By "multiplicity" I'm referring in a loose sense to "how many" arbitrary elements of the functor's parameter type occur in an inhabitant of the functor's codomain type. This is without regard to the specific type the functor is applied to, and hence without regard to any specific inhabitants of the parameter type.

Not being precise about this has caused some confusion in the comments, so here are some examples of what I would consider the size/multiplicity of various functors to be:

  • VoidF: fixed, 0
  • Identity: fixed, 1
  • Maybe: variable, minimum 0, maximum 1
  • []: variable, minimum 0, maximum infinite
  • NonEmptyList: variable, minimum 1, maximum infinite
  • Stream: fixed, infinite
  • Monoid m => (,) m: fixed, 1
  • data Pair a = Pair a a: fixed, 2
  • Either x: variable, minimum 0, maximum 1
  • data Strange a = L a | R a: fixed, 1
Asad Saeeduddin
  • One possible definition of "fixed size" would be "representable". All representable functors are strong applicatives in the sense described here, because `(->) r` is and they're isomorphic in the right way to that. – Daniel Wagner Mar 09 '20 at 21:15
  • @DanielWagner I think you need more than just an isomorphism in order to inherit the strong applicative from `(->) r`; you need the components of the isomorphism to preserve the strong applicative structure. For some reason the `Representable` typeclass in Haskell has a mysterious `tabulate . return = return` law (which doesn't really even make sense for non monadic functors), but it gives us 1/4 of the conditions we need to say that `tabulate` and `zip` are morphisms of a suitable category of monoids. The other 3 are extra laws you have to demand. – Asad Saeeduddin Mar 10 '20 at 00:21
  • Sorry, that should be "`tabulate` and `index` are morphisms of a suitable category..." – Asad Saeeduddin Mar 10 '20 at 00:28
  • @AsadSaeeduddin The way that law is stated in the docs is oddly specific, perhaps, but it turns out requiring `return` isn't a serious problem. `cotraverse getConst . Const` is a default implementation for `return`/`pure` in terms of `Distributive`, and, since distributives/representables have fixed shape, that implementation is unique. – duplode Mar 10 '20 at 01:17
  • @AsadSaeeduddin *If* you already have a strong applicative structure, then merely having an isomorphism isn't enough, indeed. But if your question is "does there exist a strong applicative structure?", then that can be answered with any isomorphism; the isomorphism, plus the strong applicative structure on `(->) r`, induces a strong applicative structure on the representable functor (though perhaps not The One you had in mind if you already had one in mind!). Since this gives a way to construct a strong applicative structure, this answers the question in the affirmative. – Daniel Wagner Mar 10 '20 at 01:23
  • Possibly worth considering: moving from `pure :: a -> f a` to `husk :: () -> f ()` is okay for `Applicative` because we already have `Functor`, and so we can implement `pure a = a <$ husk ()`. But are you sure that the analogous move from `impure :: f a -> a` to `unhusk :: f () -> ()` is okay? – Daniel Wagner Mar 10 '20 at 02:28
  • @DanielWagner Why wouldn’t it be? – bradrn Mar 10 '20 at 02:32
  • @bradrn Well... I guess I left that a bit implied, but... the first move is okay because we can implement `pure` in terms of `husk`. But we can't implement `impure` in terms of `unhusk`. So it seems weird that the two ways of defining `Applicative` turn out to be the same, but when we "turn the arrows around", we get two ways of defining `OpApplicative` that are *not* the same. That seems like a hint that we haven't worked out the right way to "turn the arrows around" yet. Perhaps related: we demand `Functor => Applicative`, but not `OpFunctor => OpApplicative`. Why (not)? – Daniel Wagner Mar 10 '20 at 02:35
  • @DanielWagner Good observation — I understand now. But if this is the ‘wrong’ way of flipping the arrows, then what is the right way? As for not demanding `OpFunctor => OpApplicative`, it’s because Haskell `OpFunctor` is the same as `Functor`: `opfmap :: OpFunctor f => (b -> a) -> f (b -> a)` is equivalent to `fmap :: Functor f => (a -> b) -> f (a -> b)`. – bradrn Mar 10 '20 at 02:43
  • @bradrn Are you sure? Another way to turn the arrow around, more consistent with how `zip` was flipped, would be to transition from `fmap :: (a -> b, f a) -> f b` to `opfmap :: f b -> (a -> b, f a)` or something! Anyway, just to say: I'm not sure I know what the right way to do this is, hence my proposal that the person asking the question ponder deeply and clarify, though of course I also keep thinking about it a bit myself. – Daniel Wagner Mar 10 '20 at 02:52
  • @DanielWagner That’s a pretty interesting definition, and one I hadn’t thought of before. I suppose another way to do it is `contramap :: Contravariant => (a -> b) -> f b -> f a`. I think I’ll keep on thinking about this myself as well. – bradrn Mar 10 '20 at 03:06
  • @DanielWagner Sorry, I still disagree that a functor `f` with an arbitrary natural isomorphism to a strong applicative functor `g` (in this case `(->) (Rep r)`) equips `f` with an applicative structure. Unless the components of the isomorphism preserve applicative structure (which is not trivial), there is no guarantee that your derived applicative for `f` is a lawful applicative. – Asad Saeeduddin Mar 10 '20 at 03:51
  • What you are claiming is analogous to claiming that if you have some monoid `(m :: *, append :: m -> m -> m, mempty :: m)` and a bijection `(f :: m -> n, g :: n -> m)`, then you can derive a monoid structure on `n`. But you can only do this if the bijection is more than just a bijection. It needs to be a monoid isomorphism that preserves the monoid structure. – Asad Saeeduddin Mar 10 '20 at 03:54
  • @AsadSaeeduddin It is analogous, and that other claim is true, too: a bijection on a set which has monoid structure induces monoid structure on the other side of the bijection. There is no need for the bijection to "preserve the monoid structure" -- it is a consequence that you can prove after-the-fact that the bijection is *also* a monoid isomorphism between the original monoid and the induced monoid, but that's basically irrelevant to the question of whether the other thing we made up is a monoid or not. – Daniel Wagner Mar 10 '20 at 04:04
  • @DanielWagner Do you have somewhere I can refer to a proof of this? I believe it in the case that the bijection is a monoid isomorphism, but if it is not a monoid isomorphism I don't understand how it could be true. Unless you're claiming every bijection out of a monoid is a monoid isomorphism for some monoid on the other set. – Asad Saeeduddin Mar 10 '20 at 04:12
  • @AsadSaeeduddin I am claiming exactly that, yes. It's really not that hard to prove yourself. Take `(M, +, 0)` as your monoid, `f : M -> N` and `g : N -> M` any bijection. Define `n1 * n2 = f (g n1 + g n2)` and `1 = f 0`. Then `(N, *, 1)` is a monoid: `(n1 * n2) * n3 = f (g (f (g n1 + g n2)) + g n3) = f ((g n1 + g n2) + g n3) = f (g n1 + (g n2 + g n3)) = f (g n1 + g (f (g n2 + g n3))) = n1 * (n2 * n3)`, and `1 * n = f (g (f 0) + g n) = f (0 + g n) = f (g n) = n`, and `n * 1 = f (g n + g (f 0)) = f (g n + 0) = f (g n) = n`. – Daniel Wagner Mar 10 '20 at 04:48
  • @AsadSaeeduddin ...and `f`, `g` are a monoid isomorphism: `f 0 = 1`, `f m1 * f m2 = f (g (f m1) + g (f m2)) = f (m1 + m2)`, `g 1 = g (f 0) = 0`, `g n1 + g n2 = g (f (g n1 + g n2)) = g (n1 * n2)`. – Daniel Wagner Mar 10 '20 at 04:56
  • @DanielWagner That's pretty surprising to me. I tried out one of the Applicative laws and to my surprise it all just decomposes: https://gist.github.com/masaeedu/9c25b89594a2417f6668d2dd00fbcbba – Asad Saeeduddin Mar 10 '20 at 05:02
  • @DanielWagner Regarding the `pure` and `husk` thing; from my perspective I'm actually treating the abstract concept of an "lax/oplax/strong monoidal functor" as the central one and `Applicative` as just the Haskell-ism that loosely models it. From that perspective `husk` and `unhusk` are exactly the morphisms an (op)lax monoidal functor needs to supply, and no more. – Asad Saeeduddin Mar 10 '20 at 07:47
  • Regarding "but not `OpFunctor => OpApplicative`", `OpFunctor` is trivial. Every functor `C -> D` gives rise to its opposite functor `C^op -> D^op`, whose morphism and object mappings are actually identical to the original functor. `OpApplicative` is actually precisely the claim that our `Functor`'s opposite is lax monoidal, which happens to be trivial in `Hask`. – Asad Saeeduddin Mar 10 '20 at 08:01

3 Answers


We can answer at least one of these questions in the negative:

Every Applicative representing a "fixed size" container of elements of its type argument is an instance of StrongApplicative

In fact one of the examples of a lawful StrongApplicative in the original question is wrong. The writer applicative Monoid m => (,) m is not StrongApplicative, because for example husk $ unhusk $ ("foo", ()) == ("", ()) /= ("foo", ()).
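
This failure is easy to check concretely at m ~ String (standalone hypothetical helper names, with husk defined via pure):

```haskell
-- The (,) String applicative's husk/unhusk
huskW :: () -> (String, ())
huskW = pure        -- pure () = (mempty, ()) = ("", ())

unhuskW :: (String, ()) -> ()
unhuskW = const ()

-- husk . unhusk discards the accumulated log and replaces it with mempty:
brokenW :: Bool
brokenW = huskW (unhuskW ("foo", ())) /= ("foo", ())
```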

Similarly, the example of a fixed size container:

data Strange a = L a | R a

of fixed multiplicity 1, is not a strong applicative, because if we define husk () = L () then husk (unhusk (R ())) == L () /= R (), and vice versa. An equivalent way to view this is that this is just the writer applicative for your choice of monoid on Bool.

So there exist "fixed size" applicatives that are not StrongApplicative. Whether all StrongApplicatives are of fixed size remains to be seen.

Asad Saeeduddin

Let's take representable functors as our definition of "fixed size container":

class Representable f where
    type Rep f
    tabulate :: (Rep f -> a) -> f a
    index :: f a -> Rep f -> a

The real Representable has a few laws and superclasses, but for the purposes of this answer, we actually need just two properties:

tabulate . index = id
index . tabulate = id

(Okay, we also need a law-abiding instance StrongApplicative ((->) r). Easy peasy, you already agree it exists.)
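
For concreteness, here is this stripped-down interface instantiated at a hypothetical two-element container, with its two positions indexed by Bool:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- The stripped-down Representable interface from above
class Representable f where
  type Rep f
  tabulate :: (Rep f -> a) -> f a
  index    :: f a -> Rep f -> a

-- A fixed-size container with exactly two element positions
data Pair a = Pair a a deriving (Eq, Show)

instance Representable Pair where
  type Rep Pair = Bool
  tabulate f = Pair (f False) (f True)
  index (Pair x _) False = x
  index (Pair _ y) True  = y
```

Both required properties hold by pattern matching: tabulate (index p) rebuilds p, and index (tabulate f) b just applies f to b.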

If we take that definition, then I can confirm that conjecture 1:

Every Applicative representing a "fixed size" container of elements of its type argument is an [law-abiding] instance of StrongApplicative

is true. Here's how:

instance Representable f => Applicative f where
    zip (fa, fb) = tabulate (zip (index fa, index fb))
    husk = tabulate . husk

instance Representable f => OpApplicative f where
    unzip fab = let (fa, fb) = unzip (index fab) in (tabulate fa, tabulate fb)
    unhusk = unhusk . index

instance Representable f => StrongApplicative f
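
As a quick sanity check, we can instantiate these definitions at the two-element Pair container (a stand-in for an arbitrary representable functor; tabulateP/indexP and the function-instance operations are spelled out as standalone helpers) and confirm the four laws on sample values:

```haskell
-- A concrete representable functor, indexed by Bool
data Pair a = Pair a a deriving (Eq, Show)

tabulateP :: (Bool -> a) -> Pair a
tabulateP f = Pair (f False) (f True)

indexP :: Pair a -> Bool -> a
indexP (Pair x _) False = x
indexP (Pair _ y) True  = y

-- The function-instance operations, assumed lawful above
zipf :: (r -> a, r -> b) -> r -> (a, b)
zipf (f, g) r = (f r, g r)

huskf :: () -> r -> ()
huskf () _ = ()

unzipf :: (r -> (a, b)) -> (r -> a, r -> b)
unzipf h = (fst . h, snd . h)

unhuskf :: (r -> ()) -> ()
unhuskf _ = ()

-- The representable-instance operations derived in the answer
zipr :: (Pair a, Pair b) -> Pair (a, b)
zipr (fa, fb) = tabulateP (zipf (indexP fa, indexP fb))

huskr :: () -> Pair ()
huskr = tabulateP . huskf

unzipr :: Pair (a, b) -> (Pair a, Pair b)
unzipr fab = let (fa, fb) = unzipf (indexP fab) in (tabulateP fa, tabulateP fb)

unhuskr :: Pair () -> ()
unhuskr = unhuskf . indexP
```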

There are a lot of laws to prove, but I'll focus just on the Big Four that StrongApplicative adds -- you probably already believe the lead-in ones for Applicative and OpApplicative, but if you don't, their proofs look just like the ones below (which in turn look quite a lot like each other). For clarity, I will use zipf, huskf, etc. for the function instance, and zipr, huskr, etc. for the representable instance, so you can keep track of which is which. (And so that it's easy to verify that we don't take the thing we're trying to prove as an assumption! It's okay to use unhuskf . huskf = id when proving unhuskr . huskr = id, but it would be wrong to assume unhuskr . huskr = id in that same proof.)

The proof of each law proceeds in basically the same way: unroll definitions, drop the isomorphism that Representable gives you, then use the analogous law for functions.

unhuskr . huskr
= { def. of unhuskr and huskr }
(unhuskf . index) . (tabulate . huskf)
= { index . tabulate = id }
unhuskf . huskf
= { unhuskf . huskf = id }
id

huskr . unhuskr
= { def. of huskr and unhuskr }
(tabulate . huskf) . (unhuskf . index)
= { huskf . unhuskf = id }
tabulate . index
= { tabulate . index = id }
id

zipr (unzipr fab)
= { def. of unzipr }
zipr (let (fa, fb) = unzipf (index fab) in (tabulate fa, tabulate fb))
= { def. of zipr }
let (fa, fb) = unzipf (index fab) in tabulate (zipf (index (tabulate fa), index (tabulate fb)))
= { index . tabulate = id }
let (fa, fb) = unzipf (index fab) in tabulate (zipf (fa, fb))
= { def. of (fa, fb) }
tabulate (zipf (unzipf (index fab)))
= { zipf . unzipf = id }
tabulate (index fab)
= { tabulate . index = id }
fab

unzipr (zipr (fa, fb))
= { def. of zipr }
unzipr (tabulate (zipf (index fa, index fb)))
= { def. of unzipr }
let (fa', fb') = unzipf (index (tabulate (zipf (index fa, index fb))))
in (tabulate fa', tabulate fb')
= { index . tabulate = id }
let (fa', fb') = unzipf (zipf (index fa, index fb))
in (tabulate fa', tabulate fb')
= { unzipf . zipf = id }
let (fa', fb') = (index fa, index fb)
in (tabulate fa', tabulate fb')
= { def. of fa' and fb' }
(tabulate (index fa), tabulate (index fb))
= { tabulate . index = id }
(fa, fb)
Daniel Wagner
  • Currently pondering: `instance StrongApplicative f => Representable f where type Rep f = forall x. f x -> x`. `index` is easy. I haven't worked out the trick for `tabulate` yet, but it seems tantalizingly close. – Daniel Wagner Mar 10 '20 at 02:17
  • In discussion with @AsadSaeeduddin, I managed to find this same `StrongApplicative` instance as well, but couldn’t prove the laws. Congratulations for figuring it out! I tried to do the `Representable` instance as well given `StrongApplicative`, but couldn’t think of a good `Rep` type — I’d be curious to know, how does your `forall x. f x -> x` achieve this? – bradrn Mar 10 '20 at 02:26
  • @bradrn Recall that the hypothesis is that these things have a fixed set of "holes" into which elements slot. Then the functions of type `forall x. f x -> x` are exactly those functions which choose a hole and return the value in that hole. (And, while thinking about how to implement `tabulate`, I have come up with an objection to the type for `unhusk`; see comments on the question itself for details.) – Daniel Wagner Mar 10 '20 at 02:30
  • Thanks @DanielWagner! That’s a really clever approach — I wouldn’t have thought of that. – bradrn Mar 10 '20 at 02:31
  • After trying to implement it, I don’t think I’m convinced that `forall x. f x -> x` will work as `Rep`. My reasoning is that, using this `Rep`, you can write `index` for _any_ type, not just `StrongApplicative` ones — so I suspect that `forall x. f x -> x` might be too general. – bradrn Mar 10 '20 at 03:15
  • @bradrn Indeed, the trick is getting your hands on a suitable `f (forall x. f x -> x)` to implement `tabulate` with. And now you can start to see why I might want to demand `unhusk :: f x -> x` (...or similar...) instead of `unhusk :: f () -> ()`... – Daniel Wagner Mar 10 '20 at 03:43
  • Thanks for your answer. I agree with a lot of the points in here, but unfortunately I actually don't "already believe the lead-in ones for" `Applicative` and `OpApplicative`, at least not unless you place additional constraints on `tabulate` and `zip`. – Asad Saeeduddin Mar 10 '20 at 03:53
  • @AsadSaeeduddin Hm, okay, that stance surprises me a bit. Which law did you have trouble proving in the same way as above? I'm not going to write out the same boring "unroll definitions, apply underlying law" pattern six more times. Perhaps the blocker is that I didn't include the `instance Representable f => Functor f where fmap fab = tabulate . fmap fab . index` definition (I thought it was too obvious)? – Daniel Wagner Mar 10 '20 at 04:01
  • @DanielWagner I was previously talking with AsadSaeeduddin at https://funprog.zulipchat.com/#narrow/stream/201385-Haskell/topic/Fixed.20size.20containers. I had trouble proving that `husk . unhusk == id`, but it looks like you’ve managed to prove that already. – bradrn Mar 10 '20 at 04:29
  • Tangential musings: `unhusk :: f x -> x` poses questions like "given `data Duo x = Duo x x`, which `unhusk` should I pick?". Answering "both" already points towards `Distributive` territory: `distribute id :: Distributive g => g (g a -> a)`. The further step towards `Representable` and `Rep f = forall x. f x -> x` is rife with technical annoyances, [as I once found out in another Q&A](https://stackoverflow.com/a/56919305/2751851). – duplode Mar 11 '20 at 04:37
  • Every Applicative representing a "fixed size" container of elements of its type argument is an instance of StrongApplicative
  • No instance of StrongApplicative exists in which the number of occurrences of a can vary

Can anyone think of counterexamples that disprove these conjectures, or some convincing reasoning that demonstrates why they are true or false?

I’m not sure about that first conjecture, and based on discussions with @AsadSaeeduddin it’s likely to be difficult to prove, but the second conjecture is true. To see why, consider the StrongApplicative law husk . unhusk == id; that is, for all x :: f (), husk (unhusk x) == x. But in Haskell, unhusk == const (), so that law is equivalent to saying that for all x :: f (), husk () == x.

But this in turn implies that there can only exist one distinct value of type f (): if there were two values x, y :: f (), then x == husk () and husk () == y, so x == y. And if there is only one possible f () value, then f must be of fixed shape. (e.g. for data Pair a = Pair a a, there is only one value of type Pair (), this being Pair () (), but there are multiple values of type Maybe () or [()].) Thus husk . unhusk == id implies that f must be of fixed shape.

bradrn
  • Hm. Is it really clear that "there can only exist one distinct value of type `f ()`" implies "the number of occurrences of `a` can not vary" in the presence of fancy GADTs and stuff? – Daniel Wagner Mar 09 '20 at 21:17
  • @DanielWagner It turned out that “the number of occurrences of `a` cannot vary” isn’t a sufficient condition for a `StrongApplicative` instance; for instance, `data Writer w a = Writer (w,a)` has non-varying multiplicity of `a`, but is not a `StrongApplicative`. You actually need the shape of the functor to be invariant, which I do believe is a consequence of `f ()` being a singleton. – bradrn Mar 09 '20 at 23:16
  • I'm not sure I see how that's relevant. In the answer, while confirming the second conjecture, you argue that "strong applicative" implies "one distinct `f ()`" implies "number of occurrences of `a` can not vary". I am objecting that the last step of that argument is not clearly true; e.g. consider `data Weird a where One :: a -> Weird a; None :: Weird Bool`. There is only one distinct value of type `Weird ()`, but different constructors have varying numbers of `a`s inside. (It's not a full counterexample here because `Functor` is hard, but how do we know that can't be fixed?) – Daniel Wagner Mar 10 '20 at 01:27
  • @DanielWagner Good point that `Weird ()` is a singleton but it is not of fixed shape. But `Weird` isn’t a `Functor`, so it cannot be a `StrongApplicative` anyway. I suppose the relevant conjecture would be: _if `f` is a `Functor`, does `f ()` being a singleton imply that `f` is of fixed shape_? I strongly suspect this to be true, but as you have noted I don’t actually have any proof yet. – bradrn Mar 10 '20 at 02:11