38

According to the famous paper Idioms are oblivious, arrows are meticulous, monads are promiscuous, the expressive power of arrows (without any additional typeclasses) should be somewhere strictly between applicative functors and monads: monads are equivalent to ArrowApply, and Applicative should be equivalent to something the paper calls "static arrows". However, it is not clear to me what restriction this "static"-ness means.
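
For concreteness, here is my guess at what such a "static" restriction could look like as a typeclass (this formulation is mine, so it may well not be what the paper means):

class (Arrow arr) => ArrowStatic arr where
    delay :: arr a b -> arr () (a -> b)
    force :: arr () (a -> b) -> arr a b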

Playing around with the three typeclasses in question, I was able to build up an equivalence between applicative functors and arrows, which I present below in the context of the well-known equivalence between Monad and ArrowApply. Is this construction correct? (I've proven most of the arrow laws before getting bored of it). Doesn't that mean that Arrow and Applicative are exactly the same?

{-# LANGUAGE TupleSections, NoImplicitPrelude #-}
import Prelude (($), const, uncurry)

-- In the red corner, we have arrows, from the land of * -> * -> *
import Control.Category
import Control.Arrow hiding (Kleisli)

-- In the blue corner, we have applicative functors and monads,
-- the pride of * -> *
import Control.Applicative
import Control.Monad

-- Recall the well-known result that every monad yields an ArrowApply:
newtype Kleisli m a b = Kleisli{ runKleisli :: a -> m b}

instance (Monad m) => Category (Kleisli m) where
    id = Kleisli return
    Kleisli g . Kleisli f = Kleisli $ g <=< f

instance (Monad m) => Arrow (Kleisli m) where
    arr = Kleisli . (return .)
    first (Kleisli f) = Kleisli $ \(x, y) -> liftM (,y) (f x)

instance (Monad m) => ArrowApply (Kleisli m) where
    app = Kleisli $ \(Kleisli f, x) -> f x

-- Every arrow arr can be turned into an applicative functor
-- for any choice of origin o
newtype Arrplicative arr o a = Arrplicative{ runArrplicative :: arr o a }

instance (Arrow arr) => Functor (Arrplicative arr o) where
    fmap f = Arrplicative . (arr f .) . runArrplicative

instance (Arrow arr) => Applicative (Arrplicative arr o) where
    pure = Arrplicative . arr . const

    Arrplicative af <*> Arrplicative ax = Arrplicative $
        arr (uncurry ($)) . (af &&& ax)

-- Arrplicatives over ArrowApply are monads, even
instance (ArrowApply arr) => Monad (Arrplicative arr o) where
    return = pure
    Arrplicative ax >>= f =
        Arrplicative $ (ax >>> arr (runArrplicative . f)) &&& id >>> app

-- Every applicative functor f can be turned into an arrow??
newtype Applicarrow f a b = Applicarrow{ runApplicarrow :: f (a -> b) }

instance (Applicative f) => Category (Applicarrow f) where
    id = Applicarrow $ pure id
    Applicarrow g . Applicarrow f = Applicarrow $ (.) <$> g <*> f

instance (Applicative f) => Arrow (Applicarrow f) where
    arr = Applicarrow . pure
    first (Applicarrow f) = Applicarrow $ first <$> f
Cactus
  • What trouble are you having with static arrows? The isomorphism is displayed on the first page of the paper. It says that applicatives are equivalent to arrows `arr` equipped with an isomorphism between `arr a b` and `arr () (a -> b)`. – Tom Ellis Jul 10 '14 at 08:48
  • @TomEllis: Let me try to formulate why I don't understand that. Suppose we had a typeclass `class (Arrow arr) => ArrowStatic arr where iso :: arr a b :<->: arr () (a -> b)`. Now, if `Applicative` is equivalent to `ArrowStatic`, shouldn't that mean I can turn any `Applicative` into `ArrowStatic`, and thus, `Arrow`? How does that imply that `Arrow` is a more expressive interface? – Cactus Jul 10 '14 at 10:55
  • 3
    There is a partial proof that `Category` + `Applicative` + a few extra laws for the interaction between the two is the same as `Arrow`. http://cdsmith.wordpress.com/2011/08/13/arrow-category-applicative-part-iia/. The proof doesn't go as far as proving that any `Applicative` `Category` is also an `Arrow`. We had a similar discussion about `Arrows` and `Applicative` inspired by this tangentially related answer: http://stackoverflow.com/a/22875729/414413 – Cirdec Jul 10 '14 at 17:32
  • Why is the newtype wrapper `Arrplicative` needed? Why not simply `instance Arrow arr => Functor (arr o) where fmap f = (arr f .)`? The other one I can understand. – Erik Kaplun Jan 29 '16 at 16:04
  • 2
    @ErikAllik, such instances are overlapping evil. With that instance around and without `OverlappingInstances`, the *only* way for something that looks like `p x` to be a `Functor` is for `p` to be an `Arrow`! It breaks all sorts of everything, so Cactus thoughtfully didn't do it. – dfeuer Jan 29 '16 at 19:54
  • Ohhh... I thought of that but didn't realize `arr o` was extremely generic. But then again, the constraint `Arrow arr => arr o` won't help _at all_? Not even at the final step of the search? Not even with a GHC extension? – Erik Kaplun Jan 29 '16 at 20:09
  • 1
    @ErikAllik, no, not at all, as far as I know. GHC commits to the instance before solving its constraints. Because of `OverlappingInstances` (blech!), this ends up being a good thing for equality constraints. Otherwise, I think it may be important for efficient resolution--not sure. And I think it's almost certainly important for making things sane *without* overlapping/incoherence. – dfeuer Jan 29 '16 at 20:46
  • In Scala, there is no such limitation, and it works out very well — like Prolog. Sometimes you just get 2 or more possibilities, but it's only a problem at the final stage, and even then I believe you can force it to narrow down to one. So I'd like to understand the type- or category- or whatever-theoretic reason why Haskell decides to leave out the Prolog-ness in its implicit... oops I mean instance resolver. – Erik Kaplun Jan 29 '16 at 22:09
  • @ErikAllik, I doubt you'll find an answer in category theory, or even necessarily type theory. You should ask the question somewhere. Maybe haskell-cafe or ghc-devs? – dfeuer Jan 30 '16 at 08:03

3 Answers

32

Every applicative yields an arrow and every arrow yields an applicative, but they are not equivalent. If you have an arrow arr and a morphism arr a b, it does not follow that you can generate a morphism arr o (a -> b) that replicates its functionality. Thus if you round trip through applicative you lose some features.

Applicatives are monoidal functors. Arrows are profunctors that are also categories, or equivalently, monoids in the category of profunctors. There is no natural connection between these two notions. If you will excuse my flippancy: In Hask it turns out that the functor part of the profunctor in an arrow is a monoidal functor, but that construction necessarily forgets the "pro" part.
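
To see concretely what gets forgotten, here is a small sketch (the name dimapA is mine, not from the answer):

-- Every Arrow is a profunctor: it can be pre-composed with a pure
-- function on its input and post-composed with one on its output.
dimapA :: Arrow arr => (a' -> a) -> (b -> b') -> arr a b -> arr a' b'
dimapA f g h = arr f >>> h >>> arr g

Arrplicative fixes the input side to o once and for all, so only the covariant (output) half of this survives, as fmap.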

When you go from arrows to applicatives you are ignoring the part of an arrow that takes input and only using the part that deals with output. Many interesting arrows use the input part in one way or another and so by turning them into applicatives you are giving up useful stuff.
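
As a small illustration (the runner below is mine, reusing the question's Applicarrow type and the ordinary Applicative IO instance), an Applicarrow can only feed its input to the pure result of the effect, never to the effect itself:

runApplicarrowIO :: Applicarrow IO a b -> a -> IO b
runApplicarrowIO (Applicarrow mf) x = fmap ($ x) mf
-- mf performs all of its effects without ever seeing x; compare
-- runKleisli, which hands x directly to the effectful function.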

That said, in practice I find applicative the nicer abstraction to work with, and one that almost always does what I want. In theory arrows are more powerful, but I don't find myself using them in practice.

Philip JF
  • 6
    Can you give me some example to help build an intuition of what one can do with an arrow that can't be done with an applicative? I am hoping for something comparable to how monads allow the shape of the computation to depend on side effects on the left of a bind, whereas applicatives have static shapes. – Cactus Jul 10 '14 at 04:56
  • 1
    Applicatives don't allow you to use the result of one computation as the input to a subsequent computation. – Tom Ellis Jul 10 '14 at 08:41
  • 2
    @Cactus [danidiaz's answer](http://stackoverflow.com/a/24669525/1598537) gives an example showing that whilst neither vanilla Arrow nor Applicative can shape the future computation, Arrow can _use_ the output of one computation as the input to the next, where Applicative can only purely combine the output of both. In practice, an occasional >>= between applicatives gives this full monadic power whilst keeping the nice pure-feel syntax of Applicative. – AndrewC Jul 10 '14 at 08:47
  • 1
    @AndrewC The problem in danidiaz's answer is that he constructs a Kleisli arrow from a monad, while the OP asks about constructing an arrow differently (as `f (a -> b)`) just from the `Applicative` instance. – Petr Jul 10 '14 at 09:50
  • @TomEllis - 'Applicatives don't allow...' - I've seen that before, but I don't really understand what it means. How is `f <$> (g x) <*> (h y)` not using the result of one computation in a subsequent computation? – Squidly Jul 10 '14 at 10:03
  • 1
    @MrBones: No applicative computation there depends on the result of a previous applicative computation. (Of course the pure computations can depend on each other). For example in `IO` none of the `IO` actions there depend on results from previous `IO` actions. – Tom Ellis Jul 10 '14 at 10:16
  • @TomEllis I don't think I understand how an applicative computation differs from a pure computation here then. Could you point me in the right direction? – Squidly Jul 10 '14 at 10:17
  • 2
    @MrBones: If `f` is an `Applicative` then `f (a -> b)` and `f a` are two applicative computations. We can run the first, run the second, and combine their results to get `f b`. However the second computation cannot depend on the first. – Tom Ellis Jul 10 '14 at 10:23
  • 2
    @MrBones: If `m` is a `Monad` then `m a` is a monadic computation. We can run it and *apply its result* to a function of type `a -> m b` to get a *second* monadic computation (which we can run to get a `b`). The result of combining these two things is a monadic computation of type `m b` formed from running two smaller computations where the second *could* depend on the first. Does that help? – Tom Ellis Jul 10 '14 at 10:24
  • @TomEllis Yes, distinguishes Monadic & Applicative nicely. However, say we have some pure computation `a -> b`, and `a`, how is it different from the applicative `f (a -> b)` and `f a`, except that the applicative ones must first be run to be combined? – Squidly Jul 10 '14 at 10:34
  • I don't understand the question. Feel free to ask it as a separate Stack Overflow question if you like. – Tom Ellis Jul 10 '14 at 10:39
  • @TomEllis because IO (a->b) and IO a can depend on the current time, whereas (a->b) and a cannot. – AndrewC Jul 10 '14 at 15:52
  • @Petr My mistake, I had skimmed and assumed the "Applicative" Kleisli category Applicative f => a -> f b, but the composition operator in that category is actually just >>=. Like I say - my mistake. – AndrewC Jul 10 '14 at 15:56
27

Let's compare the IO applicative functor with the Kleisli arrows of the IO monad.

You can have an arrow that prints a value read by a previous arrow:

runKleisli ((Kleisli $ \() -> getLine) >>> Kleisli putStrLn) ()

But you can't do that with applicative functors. With applicative functors, all the effects take place before applying the function-in-the-functor to the arguments-in-the-functor. The function-in-the-functor can't use the value inside an argument-in-the-functor to "modulate" its own effect, so to speak.
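
To make that concrete, here is a small sketch (my example, not from the answer), using the ordinary IO instances:

import Data.Char (toUpper)

-- With Applicative alone we can read and then post-process purely:
shout :: IO String
shout = fmap (map toUpper) getLine

-- But feeding the read value into putStrLn's effect only nests the
-- action; running the inner IO () would require join, i.e. Monad:
echo :: IO (IO ())
echo = fmap putStrLn getLine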

danidiaz
  • 3
    I'd say this is a bad example/argument, because it addresses something quite different from what the OP is asking. The correspondence the OP is referring to is between plain `Arrow` and `Applicative`, while you're talking about the `Kleisli` arrow, which corresponds to a `Monad`. It is clear that monads are more powerful than applicatives, but the question was about arrows induced just by applicatives without any `Monad` constraint, as `newtype Applicarrow f a b = Applicarrow{ runApplicarrow :: f (a -> b) }`. – Petr Jul 10 '14 at 09:41
  • 4
    It's a perfectly good example. It only takes advantage of the `Monad` property "by accident". The point is that one can present three APIs for doing IO: an applicative, an arrow, and a monad based API. These are listed in order of *strictly* increasing expressive power. – Tom Ellis Jul 10 '14 at 10:13
  • 12
    Alternatively, suppose an `Arrow` called `IOA` magically appeared, with combinators `getLine :: IOA () String` and `putStrLn :: IOA String ()`. Using `Arrow` combinators we can form their composition `getLine >>> putStrLn`. There would be no way of doing that with merely an `Applicative` API. – Tom Ellis Jul 10 '14 at 10:15
  • 1
    OK, I am liking this last example. So if `IOA` was an instance of just `Arrow` and not `ArrowApply`, you could still read from a file and write its contents to another file, but you couldn't read a filename from a file and then read that file. Whereas with an applicative `IO` type, you couldn't even do the first example. Hmmm. So how exactly is what that first example does problematic? – Cactus Jul 10 '14 at 13:22
  • 1
    @TomEllis OK, that makes sense, I'm getting more inclined towards the example. However, to be convincing, I'd like this to be clarified: The power of `Applicative (Arrplicative (Kleisli IO))` could be much stronger than `Applicative IO`, in particular because in the definition of its instance we indirectly use `.` from `Kleisli`. So it should be discussed that even this isn't strong enough. While I'm inclined to believe that it's not possible, there could still be some clever trick, such as passing some monadic/functional values through this applicative etc. – Petr Jul 10 '14 at 14:17
  • 1
    @Cactus: It's problematic in exactly that the second action (what you write to the file) depends on the result from the first action. I don't see why `Arrow` can't do your second example though (partly it depends on what primitives you provide). – Tom Ellis Jul 10 '14 at 16:37
  • @Cactus You could have `readFile :: IOA FilePath String` and then you could do `readIndirect = readFile >>> readFile :: IOA FilePath String`. It's the _form_ of the computation that can't change in Arrow. – AndrewC Aug 10 '15 at 09:59
  • 1
    @AndrewC: yes, but for me the key insight here is that with an `Arrow` interface, you can make *either* kind of `readFile` your API, depending on how static you want to be. And of course you can't implement your `readFile` example if your underlying IO interface is applicative-only. – Cactus Aug 11 '15 at 07:47
11

(I've posted the below to my blog with an extended introduction)

Tom Ellis suggested thinking about a concrete example involving file I/O, so let's compare three approaches to it using the three typeclasses. To make things simple, we will only care about two operations: reading a string from a file and writing a string to a file. Files are going to be identified by their file path:

type FilePath = String

Monadic I/O

Our first I/O interface is defined as follows:

data IOM ∷ ⋆ → ⋆
instance Monad IOM
readFile ∷ FilePath → IOM String
writeFile ∷ FilePath → String → IOM ()

Using this interface, we can for example copy a file from one path to another:

copy ∷ FilePath → FilePath → IOM ()
copy from to = readFile from >>= writeFile to

However, we can do much more than that: the choice of files we manipulate can depend on effects upstream. For example, the below function takes an index file which contains a filename, and copies it to the given target directory:

copyIndirect ∷ FilePath → FilePath → IOM ()
copyIndirect index target = do
    from ← readFile index
    copy from (target ⟨/⟩ from)

On the flip side, this means there is no way to know upfront the set of filenames that are going to be manipulated by a given value action ∷ IOM α. By "upfront", what I mean is the ability to write a pure function fileNames :: IOM α → [FilePath].

Of course, for non-IO-based monads (such as ones for which we have some kind of extractor function μ α → α), this distinction becomes a bit more fuzzy, but it still makes sense to think about trying to extract information without evaluating the effects of the monad (so for example, we could ask "what can we know about a Reader Γ α without having a value of type Γ at hand?").

The reason we can't really do static analysis in this sense on monads is that the function on the right-hand side of a bind is in the space of Haskell functions, and as such, is completely opaque.
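
For example (an illustration of mine), no pure function can tell which file the second readFile below will open, because that file name only ever exists inside an opaque Haskell function:

mystery ∷ IOM String
mystery = readFile "index.txt" >>= λ from → readFile from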

So let's try restricting our interface to just an applicative functor.

Applicative I/O

data IOF ∷ ⋆ → ⋆
instance Applicative IOF
readFile ∷ FilePath → IOF String
writeFile ∷ FilePath → String → IOF ()

Since IOF is not a monad, there's no way to compose readFile and writeFile, so all we can do with this interface is to either read from a file and then postprocess its contents purely, or write to a file; but there's no way to write the contents of a file into another one.
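
For example (my illustration, in the same notation as above), post-processing a single read is expressible:

countLines ∷ FilePath → IOF Int
countLines path = length . lines ⟨$⟩ readFile path

but copy is not: the String produced by readFile from never leaves IOF, so there is nothing to pass to writeFile's second argument.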

How about changing the type of writeFile?

writeFile′ ∷ FilePath → IOF (String → ())

The main problem with this interface is that while it would allow writing something like

copy ∷ FilePath → FilePath → IOF ()
copy from to = writeFile′ to ⟨*⟩ readFile from

it leads to all kinds of nasty problems, because String → () is such a horrible model of writing a string to a file: it breaks referential transparency. For example, what do you expect the contents of out.txt to be after running this program?

(λ write → [write "foo", write "bar", write "foo"]) ⟨$⟩ writeFile′ "out.txt"

Two approaches to arrowized I/O

First of all, let's get two arrow-based I/O interfaces out of the way that don't (in fact, can't) bring anything new to the table: Kleisli IOM and Applicarrow IOF.

The Kleisli-arrow of IOM, modulo currying, is:

readFile ∷ Kleisli IOM FilePath String
writeFile ∷ Kleisli IOM (FilePath, String) ()

Since writeFile's input still contains both the filename and the contents, we can still write copyIndirect (using arrow notation for simplicity). Note how the ArrowApply instance of Kleisli IOM is not even used.

copyIndirect ∷ Kleisli IOM (FilePath, FilePath) ()
copyIndirect = proc (index, target) → do
    from ← readFile ↢ index
    s ← readFile ↢ from
    writeFile ↢ (target ⟨/⟩ from, s)

The Applicarrow of IOF would be:

readFile ∷ FilePath → Applicarrow IOF () String
writeFile ∷ FilePath → String → Applicarrow IOF () ()

which of course still exhibits the same problem of being unable to compose readFile and writeFile.

A proper arrowized I/O interface

Instead of transforming IOM or IOF into an arrow, what if we start from scratch, and try to create something in between, in terms of where we use Haskell functions and where we make an arrow? Take the following interface:

data IOA ∷ ⋆ → ⋆ → ⋆
instance Arrow IOA
readFile ∷ FilePath → IOA () String
writeFile ∷ FilePath → IOA String ()

Because writeFile takes the content from the input side of the arrow, we can still implement copy:

copy ∷ FilePath → FilePath → IOA () ()
copy from to = readFile from >>> writeFile to

However, the other argument of writeFile is a purely functional one, and so it can't depend on the output of e.g. readFile; so copyIndirect can't be implemented with this Arrow interface.

If we turn this argument around, this also means that while we can't know in advance what will end up being written to a file (before running the full IOA pipeline itself), we can statically determine the set of filenames that will be modified.
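
One way to convince ourselves that this is really possible is to give a concrete candidate implementation (a sketch of my own, not part of the interface above; readFileA and writeFileA are renamed to avoid clashing with the Prelude):

import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow

-- Pair the dynamic I/O function with a static list of the paths it touches.
data IOA a b = IOA [FilePath] (a -> IO b)

instance Category IOA where
    id = IOA [] return
    IOA ps g . IOA qs f = IOA (qs ++ ps) (\x -> f x >>= g)

instance Arrow IOA where
    arr f = IOA [] (return . f)
    first (IOA ps f) = IOA ps (\(x, y) -> fmap (\b -> (b, y)) (f x))

readFileA :: FilePath -> IOA () String
readFileA path = IOA [path] (\() -> readFile path)

writeFileA :: FilePath -> IOA String ()
writeFileA path = IOA [path] (writeFile path)

-- The static analysis: the set of paths a pipeline touches, computed
-- without performing any I/O.
fileNames :: IOA a b -> [FilePath]
fileNames (IOA ps _) = ps

With this, fileNames (readFileA "in.txt" >>> writeFileA "out.txt") evaluates to ["in.txt", "out.txt"] even though no file is ever opened.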

Conclusion

Monads are opaque to static analysis, and applicative functors are poor at expressing dynamic-time data dependencies. It turns out arrows can provide a sweet spot between the two: by choosing the purely functional and the arrowized inputs carefully, it is possible to create an interface that allows for just the right interplay of dynamic behaviour and amenability to static analysis.

Cactus
  • 1
    @StephaneRolland: you're right on both counts. I've fixed the link (it's not my fault I swear, nic.io crapped out on me...), and the `IOA` example is already using `>>>` in the blogpost, it was just an editing mistake I guess when I was formatting it for Stack Overflow. – Cactus Jul 04 '18 at 01:17
  • Thanks :-) Moreover, do you know of any work/paper going your way for an arrowised IO, something more like a possible real-life implementation with pros/cons? I'm currently in the middle of John Hughes's paper on arrowized parsers. Next, I'll read the paper on the IO monad implementation. And I am wondering if someone already tried to implement IO as an arrow entirely, or almost. I want to dig into this idea to better understand IO. – Stephane Rolland Jul 04 '18 at 13:11