I recently read somewhere that pattern matching happens at run time, not compile time. (I'm still looking for the source, but can't find it at the moment.) Is that true? And if so, do guards in functions have the same performance characteristics?
This surprised me, because I had thought GHC was able to resolve some (though probably not all) pattern-match decisions at compile time. Does that happen at all?
Take this pair of definitions, for example:
f 1 = 3
f 2 = 4
vs
f' a | a == 1 = 3
     | a == 2 = 4
Do f and f' compile to the same number of instructions (e.g. in Core and/or lower)?
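For what it's worth, this is how I've been trying to compare them myself (Main.hs is just my test file, and I've added a fall-through case to each definition so they're total; as far as I know these are standard GHC flags, but correct me if there's a better way):

-- Main.hs
f :: Int -> Int
f 1 = 3
f 2 = 4
f _ = 0

f' :: Int -> Int
f' a | a == 1 = 3
     | a == 2 = 4
     | otherwise = 0

-- Dump the optimized Core for both definitions:
--   ghc -O2 -ddump-simpl -dsuppress-all -fforce-recomp Main.hs

I can paste the dumps if that helps, but I'm not confident I'm reading them correctly.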
Is the situation any different if I pattern match on a constructor instead of a value? E.g. if GHC can see that a function is always invoked from some call site with one particular constructor, does it optimize that call in a way that eliminates the run-time check? And if so, can you give me an example of what the optimization produces?
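To make that concrete, here's the kind of situation I mean (Shape, area, and the call site are names I made up for illustration; I'm guessing this is where GHC's case-of-known-constructor rule would kick in, but please correct me if that's the wrong mechanism):

data Shape = Circle Double | Square Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s

-- The constructor is statically known at this call site, so if GHC
-- inlines area, I'd expect the case analysis to be reduced away at
-- compile time, leaving just pi * 1.0 * 1.0 with no run-time tag check.
main :: IO ()
main = print (area (Circle 1.0))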
In summary
What should I know about these two approaches in terms of performance?
And when is one preferable to the other, performance-wise?