I'm writing an "append" function for a data type I've created (it basically represents "streams"). However, this data type has 12 different constructors for different kinds of stream: for example infinite, null, fixed length, variable length, already appended, and so on.
The logic between the input types and output types is a bit complex, but not incredibly so.
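For concreteness, here is a rough sketch of the kind of type I mean (the constructor names are made up, not my real code, and my real type has 12 constructors):

    -- Hypothetical sketch only; names are invented for illustration
    data Stream a
      = Empty                           -- null stream
      | Infinite (Int -> a)             -- infinite stream
      | Fixed Int [a]                   -- fixed, known length
      | Variable [a]                    -- variable length
      | Appended (Stream a) (Stream a)  -- result of an earlier append
      -- ... plus further constructors, 12 in total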
I've considered two approaches (both sketched below):
- Match against broad categories (perhaps by first classifying with a simpler proxy type) and then match again within each category, OR
- Just pattern match against all 144 cases (12 × 12). I could perhaps reduce this to around 100 with wildcard matches for particular combinations, but that's about it.
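Roughly, the two options would look something like this (again with made-up names, just to illustrate the shape of each approach, not my real logic):

    -- Approach 1: classify with a small proxy type first, then refine
    -- inside each broad category
    data Shape = IsEmpty | IsInfinite | IsFinite

    shape :: Stream a -> Shape
    shape Empty        = IsEmpty
    shape (Infinite _) = IsInfinite
    shape _            = IsFinite

    append1 :: Stream a -> Stream a -> Stream a
    append1 x y = case (shape x, shape y) of
      (IsEmpty,    _      ) -> y
      (_,          IsEmpty) -> x
      (IsInfinite, _      ) -> x            -- appending after an infinite stream changes nothing
      _                     -> Appended x y -- refine further inside this arm

    -- Approach 2: one big match over the full 12 x 12 cross product
    append2 :: Stream a -> Stream a -> Stream a
    append2 Empty         y              = y
    append2 x             Empty          = x
    append2 (Infinite f)  _              = Infinite f
    append2 x@(Fixed _ _) y@(Fixed _ _)  = Appended x y
    -- ... one equation per remaining combination
    append2 x             y              = Appended x y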
I know the second approach is uglier and more difficult to maintain, but disregarding that, will GHC find the second approach easier to optimise? If it can compile the second approach to a simple jump table (or perhaps two nested jump tables), I suspect it will be faster. But if it ends up doing a linear chain of checks, it will be far slower.
Does GHC optimise pattern matches (even very big ones) into constant time jump tables?