Long story short
I'm wondering whether I should reach for `contramap` when I find myself writing code like `(. f) . g`, where `f` is, in practice, preprocessing the second argument of `g`.
Long story longer
I will describe how I came up with the code that made me think of the question in the title.
Initially, I had two inputs, `a1 :: In` and `a2 :: In`, wrapped in a pair `(a1, a2) :: (In, In)`, and I needed to run two interacting processing steps on them. Specifically, I had a function `binOp :: In -> In -> Mid` to produce a "temporary" result, and a function `fun :: Mid -> In -> In -> Out` to be fed `binOp`'s inputs and output.
Given the "function fed with the inputs and output of another function" shape above, I thought of using the function monad, and came up with this:
```haskell
finalFun = uncurry . fun =<< uncurry binOp
```
which isn't very complicated to read: `binOp` takes the inputs as a pair and passes its output, followed by its inputs, to `fun`, which takes the inputs as a pair too.
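For concreteness, here is a minimal self-contained sketch of the setup, with hypothetical stand-ins of my own for the types (`In = Int`, `Mid = Int`, `Out = String`) and trivial bodies for `binOp` and `fun`, neither of which is the real code:

```haskell
-- Hypothetical concrete stand-ins for the abstract types in the question.
type In  = Int
type Mid = Int
type Out = String

-- binOp combines the two inputs into a "temporary" result (illustrative body).
binOp :: In -> In -> Mid
binOp = (+)

-- fun consumes binOp's output followed by binOp's two inputs (illustrative body).
fun :: Mid -> In -> In -> Out
fun m a b = show (m, a, b)

-- In the ((->) r) monad, (=<<) is \f g r -> f (g r) r, so the shared
-- pair of inputs is threaded to both binOp and fun.
finalFun :: (In, In) -> Out
finalFun = uncurry . fun =<< uncurry binOp
```

With these stand-ins, `finalFun (1, 2)` evaluates to `"(3,1,2)"`: the temporary result `3` followed by the original inputs.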
However, I noticed that in the implementation of `fun` I was actually using only a "reduced" version of the inputs, i.e. I had a definition like `fun a b c = fun' a (reduce b) (reduce c)`, so I thought that, in the definition of `finalFun`, I could use `fun'` together with `reduce` instead of `fun`. I came up with
```haskell
finalFun = (. both reduce) . uncurry . fun' =<< uncurry binOp
```
which is far less easy to read, mainly because, I believe, it presents the parts in an unnatural order. The best I could think of was to give the preprocessing step a more descriptive name, as in
```haskell
finalFun = preReduce . uncurry . fun' =<< uncurry binOp
  where preReduce = (. both reduce)
```
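Again with hypothetical concrete types and bodies of my own (none of this is the real code; `both` is defined locally here, whereas in my code it comes from a utility library), the refactored version looks like:

```haskell
-- Hypothetical concrete stand-ins, as before.
type In  = Int
type Mid = Int
type Red = Int
type Out = String

-- Illustrative "reduction" of an input.
reduce :: In -> Red
reduce = (`mod` 10)

-- Apply a function to both components of a pair
-- (same signature as `both` from the extra package).
both :: (a -> b) -> (a, a) -> (b, b)
both f (x, y) = (f x, f y)

binOp :: In -> In -> Mid
binOp = (+)

-- fun' only ever sees the reduced inputs.
fun' :: Mid -> Red -> Red -> Out
fun' m a b = show (m, a, b)

-- preReduce pre-processes the pair of inputs before fun' consumes it.
finalFun :: (In, In) -> Out
finalFun = preReduce . uncurry . fun' =<< uncurry binOp
  where preReduce = (. both reduce)
```

With these stand-ins, `finalFun (12, 34)` evaluates to `"(46,2,4)"`: `binOp` sees the raw inputs, `fun'` only their reductions.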
Since `preReduce` is actually pre-processing the 2nd and 3rd arguments of `fun'`, I was wondering whether this is the right moment to use `contramap`.
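For context, the correspondence I have in mind goes through `Op` from `Data.Functor.Contravariant` in base, whose `Contravariant` instance fixes the result type and lets `contramap` act on the input (the name `preCompose` is mine, just for illustration):

```haskell
import Data.Functor.Contravariant (Op (..), contramap)

-- (. f) is exactly contramap f once the function is wrapped in Op:
-- contramap f (Op g) = Op (g . f), so unwrapping gives plain pre-composition.
preCompose :: (b -> a) -> (a -> r) -> (b -> r)
preCompose f = getOp . contramap f . Op
```

So `(. both reduce)` could in principle be spelled `getOp . contramap (both reduce) . Op`, and I'm unsure whether that framing buys any readability over the section `(. f)` itself.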