If you write `1`, it can have any possible number type. That's what `Num p => p` actually means.
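You can see this polymorphism directly: the very same literal can be checked at different numeric types. A minimal sketch (the names `asInt` and `asDouble` are just illustrative):

```haskell
-- the same literal 1, used at two different numeric types
asInt :: Int
asInt = 1

asDouble :: Double
asDouble = 1

main :: IO ()
main = do
  print asInt     -- prints 1
  print asDouble  -- prints 1.0
```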
If you use `1` in an expression, GHCi will attempt to figure out the correct type of number to use based on what functions you're calling on it, and then automatically give `1` the right type.
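For example (a small sketch; `half` is a made-up function), passing `1` to a function whose argument type is `Double` is enough context to fix the literal's type:

```haskell
half :: Double -> Double
half x = x / 2

main :: IO ()
main = print (half 1)  -- the literal 1 is inferred as Double; prints 0.5
```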
If GHCi cannot guess what the correct type is (because there's not enough context, or because several types would fit), it defaults to `Integer`. (For `1.0` it will default to `Double`. And for any other ambiguous type constraint, it will try to default to `()` if possible.)
This is similar to how compiled code works. If you write a number in your source code, GHC (the compiler) will also attempt to work out the correct type, and it applies the same standard numeric defaulting (to `Integer` or `Double`). The difference is that GHC stops there: in any other ambiguous case it won't "guess" or fall back on GHCi's extended defaults, it'll just give you a compile-time error and demand that you specify what you mean. That's desirable for making compiled code behave the way you expect, but it's tedious for interactively trying stuff out, which is why GHCi defaults more aggressively.
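To illustrate that compile-time error (a sketch; `bad` and `good` are hypothetical names): combining `read` and `show` leaves the intermediate type ambiguous with no numeric constraint, so GHC refuses to guess until you annotate it:

```haskell
-- This definition would be rejected with an "Ambiguous type variable" error,
-- because nothing pins down what type 'read' should produce:
--   bad = show (read "[]")

-- An explicit annotation resolves the ambiguity:
good :: String
good = show (read "[]" :: [Int])

main :: IO ()
main = putStrLn good  -- prints []
```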
The type of a single character is always `Char`. The type of a string is always `String` or `[Char]` (one is an alias for the other). The type of `True` and `False` is always `Bool`. And so on.
So it's only really numbers that have the possibility of multiple types.
[Well, there's an option (the OverloadedStrings extension) to make strings polymorphic too, but we won't worry about that now...]
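For the curious: the OverloadedStrings extension makes string literals polymorphic over the `IsString` class, much like numeric literals are polymorphic over `Num`. A minimal sketch (the `Name` type is made up for illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.String (IsString (..))

-- a toy type that can be written as a string literal
newtype Name = Name String deriving Show

instance IsString Name where
  fromString = Name

greeting :: Name
greeting = "Alice"  -- with OverloadedStrings, this literal has type IsString a => a

main :: IO ()
main = print greeting  -- prints: Name "Alice"
```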
If you want the messy details, you can read the Haskell Language Report (the official specification document that defines the Haskell language) and the GHCi chapter of the GHC user's guide (which describes what GHCi does).