
I have read a blog post that mentions Haskell optimization. It says:

if Haskell sees multiple copies of this expensive function called on the exact same input, it will simply evaluate one of the function calls, cache the result, and replace every future function call of F on that same input with the cached result.

It uses this as an example of how Haskell is better than Python (for calling the same heavy function multiple times).

And from reading it I'm asking myself: how can this be true for a function that does any kind of IO?

Piliponful
  • You can't do IO in a Haskell function, you can only do so in an `IO` action, which is something different. – cafce25 Mar 13 '23 at 14:48
  • If you pass the same [RealWorld](https://wiki.haskell.org/IO_inside) into it, you'll get the same result. (probably wrong, but easier to understand) – teapot418 Mar 13 '23 at 14:53
  • @cafce25 If you couldn't do IO in a Haskell function there would be no point in this language. If you want me to change my wording, sure. What I'm asking is: if a function has return type `IO`, will it still cache? – Piliponful Mar 13 '23 at 15:17
  • @teapot418 seems like a very useful read, I'll look into it, thank you! – Piliponful Mar 13 '23 at 15:19
  • Unfortunately, the paragraph you quoted is misleading at best, if not outright wrong. Haskell-the-language doesn't do any such thing as replacing multiple function calls with the same argument with a single call. GHC may _sometimes_ do this sort of trick, but it's really nothing you should rely on. What it does do reliably is, if the function is evaluated _and you bind the result to a name_, then multiple references to that name will indeed be shared – much like a value you would have computed in Python; but unlike in Python the function may just never be evaluated at all if it's not needed. – leftaroundabout Mar 13 '23 at 15:25
  • Regarding your question, the short answer is that laziness does not come into play, _at all_, when IO is involved. That's not entirely true, but almost: to a large extent, the whole monad mechanism is specifically about ensuring a particular order of operation, in contrast with the otherwise “it'll be computed _some time if needed_”. – leftaroundabout Mar 13 '23 at 15:31
  • @Piliponful IO action values are tricky to understand at first. Haskell can do IO, but in a rather unique way which can be very confusing to beginners. I'd recommend to read [this explanation](https://stackoverflow.com/a/51772273/3234959) to have some intuition. – chi Mar 13 '23 at 18:33
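The sharing-by-binding behaviour described in leftaroundabout's comment can be seen in a small sketch. (`heavy` is a made-up stand-in for an expensive function; `Debug.Trace.trace` prints a message to stderr each time the thunk is actually forced, so it shows when real evaluation happens.)

```haskell
import Debug.Trace (trace)

-- Hypothetical expensive function; trace reports each real evaluation.
heavy :: Int -> Int
heavy n = trace "evaluating heavy" (n * n)

main :: IO ()
main = do
  let r = heavy 10   -- bind the result to a name: one shared thunk
  print (r + r)      -- forces the thunk once; the trace fires a single time
  -- Two syntactically identical calls are NOT guaranteed to be shared;
  -- GHC may or may not common them up, depending on optimisation flags.
  print (heavy 10 + heavy 10)
```

Without optimisation, the second `print` typically evaluates `heavy 10` twice, which is exactly why the blog post's "Haskell caches every repeated call" claim is too strong.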

1 Answer

  1. The claim is wrong. No extant implementation of Haskell automatically memoizes.

  2. If an implementation were to do this, some parts of the construction of an IO action could be memoized, and some couldn't. It may be that there's some non-trivial computation that needs to be done to decide what IO should happen; for example:

    f :: Int -> IO ()
    f n = if existsPrimesThatAddTo (2*n)
        then putStrLn "Goldbach conjecture not yet disproven"
        else putStrLn "Wow, resolved an open math problem!"
    

    The decision about which branch to descend into could be memoized. The activity of printing to the screen could not. One of the miracles of the way Haskell does IO is that the type system can tell you which bits of an IO action computation could be memoized and which couldn't.
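To make that distinction concrete, here is a sketch with a made-up, cheap `expensiveCheck` standing in for the Goldbach search, and `Debug.Trace.trace` marking when the pure decision is actually computed:

```haskell
import Debug.Trace (trace)

-- Stand-in for the expensive pure decision in the example above.
expensiveCheck :: Int -> Bool
expensiveCheck n = trace "deciding" (even n)

f :: Int -> IO ()
f n = if expensiveCheck n
        then putStrLn "even branch"
        else putStrLn "odd branch"

main :: IO ()
main = do
  let decided = f 4  -- the IO action, decision included, is an ordinary value
  decided            -- forces the decision ("deciding" fires) and prints
  decided            -- the decision is shared and not recomputed,
                     -- but the printing effect happens a second time
```

The pure part (choosing the branch) is evaluated once, because `decided` names a single thunk; the effectful part runs every time the action is executed — only the former could ever be memoized, just as the answer says.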

Daniel Wagner