I'm new to Haskell. Concepts such as monad and monoid, and their corresponding typeclasses, are very interesting but feel highly abstract and remote. I wonder how those advanced concepts can make things easier to implement. Some self-contained, concrete examples would be nice to see.
-
One can for example generate a list of all possibilities with `[(-), (+)] <*> [1 .. 10] <*> [1 .. 10]`. This works not only with lists, but also with `Maybe`s, `IO`s, etc., where a `Maybe` can be seen as a calculation that could fail. – Willem Van Onsem Nov 24 '21 at 17:43
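A quick GHCi sketch of that idea (smaller lists here so the output stays readable; the `Maybe` line is an added illustration of the same combinator short-circuiting on failure):

> [(+), (*)] <*> [1, 2] <*> [10, 100]
[11,101,12,102,10,100,20,200]
> Just (+) <*> Just 1 <*> Nothing
Nothing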
-
I feel this question is a bit too opinionated for SO, since it amounts to "tell me why (or when) Haskell is better than other languages". Anyway, personally I like Haskell not because it _allows_ me to do something easily, but because it _prevents_ me from doing it. More than in most languages, in Haskell types can be used to constrain the program's behavior, e.g. "you can read and write X, only read Y, and only append to Z": once the type "contract" is written, there is no way around it. Types become very informative (at the price of more complexity, of course). – chi Nov 24 '21 at 18:11
-
@chi What you said is exactly what I want to know. Could you give some concrete examples of type constraints that are hard to achieve in TypeScript/Rust/C++ but can be easily handled by Haskell? – zxch3n Nov 24 '21 at 18:19
-
@R3m: the fact that you need an `IO` to perform `IO` actions is imho a big one. It means that all functions are pure (the ones with `IO` in their type are also pure, since they return the same "recipe" each time). This means that you can effectively test functions, since they can neither modify variables nor perform any IO action. As a result, if you can test a function with all possible inputs, you know that it cannot change its result based on some context. – Willem Van Onsem Nov 24 '21 at 18:46
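A minimal sketch of that point (the names `clamp01` and `readNumber` are made up for illustration):

-- Pure: the result depends only on the argument, so it can be tested with
-- plain equality or property checks, with no setup or mocking.
clamp01 :: Double -> Double
clamp01 x = max 0 (min 1 x)

-- Effectful: the IO in the type is the only place reading input can happen;
-- building this value performs nothing until it is run.
readNumber :: IO Double
readNumber = fmap read getLine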
-
voting to close: opinion-based. – Nov 24 '21 at 22:09
-
See also [Why Functional Programming Matters](https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf) and [Non-Trivial Lazy Evaluation](https://stackoverflow.com/questions/7868507/non-trivial-lazy-evaluation). – Daniel Wagner Nov 25 '21 at 00:17
1 Answer
You get a way of understanding your problem domain in terms of types and type classes. Here is what I can gather from looking at `V3` and its instances, without thinking about what the type is used to represent (a 3D vector).

type V3 :: Type -> Type
data V3 a = V3 a a a
It's a simple type, and non-committal: there is no restriction at all on what `a` can be. This is where its power comes from; Edward Kmett has a classic talk on these 'dumb reusable datatypes'.
`V3` has basic properties. This one `deriving` clause probably gives you over 20 functions to operate on `V3`. `Foldable` amounts to defining `toList (V3 a b c) = [a, b, c]`.

deriving stock (Functor, Foldable, Traversable)
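For a feel of what that one clause buys you (a quick GHCi sketch, assuming the `V3` definition above):

> fmap negate (V3 1 2 3)      -- Functor
V3 (-1) (-2) (-3)
> sum (V3 1 2 3)              -- Foldable
6
> traverse Just (V3 1 2 3)    -- Traversable
Just (V3 1 2 3)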
I focus on type constructor classes. There are many type classes we can derive automatically, but those instances say less about the structure of `V3`. They are the opposite indicator: if a type doesn't derive `Eq`, it may be because one cannot be defined, or because it uses a non-standard definition, which can be noteworthy.

deriving stock (Eq, Ord, Show, Read, Data, Generic, Generic1, Lift)
Then I see it is `Representable`. This makes sense, because being representable indicates a sort of "static shape": it means `V3 a` is (isomorphic to) a function!

type Index3 :: Type
data Index3 = O | I | II

instance Representable V3 where
  type Rep V3 = Index3

  index :: V3 a -> (Index3 -> a)
  index (V3 a b c) = \case
    O  -> a
    I  -> b
    II -> c

  tabulate :: (Index3 -> a) -> V3 a
  tabulate make = V3 (make O) (make I) (make II)
Instead of writing `V3 1.0 2.0 3.0` you can write `tabulate vec`:
vec :: Index3 -> Double
vec = \case
  O  -> 1.0
  I  -> 2.0
  II -> 3.0
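The isomorphism runs both ways (a quick GHCi check, assuming the `Representable` instance above):

> tabulate vec :: V3 Double
V3 1.0 2.0 3.0
> index (V3 'a' 'b' 'c') II
'c'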
Showing that `V3` is (naturally) isomorphic to `(Index3 ->)` (= `Representable`) is significant. We are well familiar with functions, and we can inherit any function instance by witnessing the isomorphism. This rule is packaged by `Co`, and you can list all possible instances `V3` can give us through its function representation (`:instances Co V3`):
> :instances Co V3
instance Applicative (Co V3)
instance Functor (Co V3)
instance Monad (Co V3)
instance Apply (Co V3)
instance Bind (Co V3)
instance Distributive (Co V3)
instance Representable (Co V3)
So we can derive at least `Applicative` and `Monad` from that, and if `Index3` is a `Monoid` we can also define a `Comonad`.

deriving (Applicative, Monad) via Co V3
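Concretely, the instances inherited this way behave like the ones `(Index3 ->)` already has: the `Applicative` combines components position-wise and the `Monad` is reader-like, taking the "diagonal". A small GHCi sketch, assuming those instances are in scope (derived as above, or the equivalent ones from the linear package):

> (+) <$> V3 1 2 3 <*> V3 10 20 30
V3 11 22 33
> V3 1 2 3 >>= \x -> V3 x (x * 10) (x * 100)
V3 1 20 300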
We also see a lot of instances that are just lifted versions of the element type's own algebra.

instance Num a => Num (V3 a) where
  (+) = liftA2 (+)
  (-) = liftA2 (-)
  (*) = liftA2 (*)
  negate = fmap negate
  abs = fmap abs
  signum = fmap signum
  fromInteger = pure . fromInteger

instance Semigroup a => Semigroup (V3 a) where
  (<>) = liftA2 (<>)

instance Monoid a => Monoid (V3 a) where
  mempty = pure mempty
This can be summarized with `Ap V3 a`, which allows these operations over any `Applicative`.

deriving (Num, Semigroup, Monoid) via Ap V3 a
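With those instances in scope, the element's algebra just works component-wise (a GHCi sketch under the same assumptions as above):

> V3 1 2 3 + V3 10 20 30
V3 11 22 33
> negate (V3 1 2 3) * 2
V3 (-2) (-4) (-6)
> mempty :: V3 String
V3 "" "" ""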
I have characterised this simple type thoroughly; even if I had never seen this datatype before, it would be very clear to me. The instances (and the absence of instances) give it character, and deriving is like an executive summary that tells you what's going on at exactly the right level of detail.
Once you have defined `V3` you can define a `TicTacToe` board as the composition of two vectors, which makes `TicTacToe` isomorphic to functions from `(Index3, Index3)`. We can build an empty board with `pure Nothing :: TicTacToe (Maybe Player)` and index into a board with an `(Index3, Index3)` pair.
type TicTacToe :: Type -> Type
newtype TicTacToe a = TicTacToe (V3 (V3 a))
  deriving (Functor, Foldable, Applicative, Representable)
    via Compose V3 V3
  deriving (Monad)
    via Co TicTacToe
  deriving (Num, Semigroup, Monoid)
    via Ap TicTacToe a
-- > index board (O, O)
-- Nothing
-- > index board (O, II)
-- Just Player2
board :: TicTacToe (Maybe Player)
board = TicTacToe do
  V3 do V3 Nothing Nothing (Just Player2)
     do V3 Nothing Nothing Nothing
     do V3 Nothing Nothing (Just Player1)
And the whole story repeats itself: `TicTacToe` is now a `Monad`, and numbers and monoid operations can be lifted over it.
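For instance, a row-win check falls out of the `Representable` instance alone (a sketch, assuming the definitions above and an `Eq Player` instance; `rowWonBy` is a name made up here):

-- True when every cell in the given row is owned by the given player.
rowWonBy :: Player -> Index3 -> TicTacToe (Maybe Player) -> Bool
rowWonBy p r b = all (\c -> index b (r, c) == Just p) [O, I, II]

-- > rowWonBy Player1 II board
-- False    (the II row above still has two empty cells)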

-
Nice answer. (Though the `linear` library is IMO one of the biggest _failures_ to use Haskell's type system to express a mathematical concept. In particular, implanting a basis representation at the fundamental level utterly misses the geometric essence of vector spaces. And representable functors per se are pretty trivial. IMO it makes much more sense to go the other way around – start with the functions and then build [MemoTries](https://hackage.haskell.org/package/MemoTrie-0.6.10/docs/Data-MemoTrie.html).) – leftaroundabout Nov 25 '21 at 19:57