Javascript is an Untyped Language
No, Javascript doesn't have a notion of polymorphism, because it is an untyped language. Simply put, polymorphism means that a strict type system becomes less strict under controlled conditions, that is, it behaves polymorphically without losing type safety.
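For instance, the identity function is parametrically polymorphic: under a strict type system it has the signature `id :: a -> a`, and it remains fully type-safe for every concrete type. A minimal sketch in plain Javascript, with the signature as a Haskell-style comment:

```javascript
// id :: a -> a
// one implementation that works uniformly and safely for every type
const id = x => x;

console.log(id(2));     // 2
console.log(id("foo")); // "foo"
```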
Untyped language is a simplification, though. From the perspective of a static type system, Javascript has one huge union type, and a value or its underlying expression can take any of its representations at runtime (variables can even adopt different types during their existence). This sort of typing is also called dynamic typing.
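A variable adopting different types during its existence can be sketched in a few lines:

```javascript
// the same binding holds values of entirely different types over time
let x = 42;            // Number
x = "forty-two";       // now String
x = n => n + 1;        // now Function
console.log(typeof x); // "function"
```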
Dynamically typed languages offer introspection to check the type of a value at runtime. But these means are limited. You can't introspect the type of a function, for instance, as long as it isn't completely applied.
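These limits are easy to demonstrate: `typeof` and `Function.prototype.length` tell you that something is a function and how many formal parameters it declares, but nothing about its argument or result types. A partially applied curried function reveals even less:

```javascript
const inc = n => n + 1;
console.log(typeof inc); // "function", but is it Number -> Number?
console.log(inc.length); // 1, the arity of the outermost function only

const add = m => n => m + n;
const add2 = add(2);      // partially applied
console.log(add2.length); // 1, the already bound argument is invisible
```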
However, functional programming in the original sense means working with lots of small, specialized first-order and higher-order functions, which are declared in curried form. This approach leads to partially applied functions all over your code. The problem is that you now not only have to deduce the types of your initial functions but also the intermediate types of the partially applied ones. This gets tough quickly:
// what's the type of this function?
const comp = f => g => x => f(g(x));
// and this partially applied one?
const inc = n => n + 1;
comp(inc);
// and even worse:
const comp1 = comp(comp);
const comp2 = comp(comp) (comp);
I'm sure I've lost you somewhere between the lines. It takes a lot of time to deduce these types in your head. And should it really be your responsibility as a developer to act like a compiler? I don't think so.
Alternative Solutions to the Problem
Fortunately, the Javascript community has been actively developing solutions to this type of problem.
A static type checker on top of the language
Flow and TypeScript are static type checkers that try to add a type system to Javascript after the fact. I personally don't think this is a promising approach, because usually you design the type system first when you create a new language. Javascript can perform side effects literally everywhere, which makes it really hard to build a sound and reliable type checker. Check out the issues in the Flow repository to get your own picture.
Degrade Javascript to a compile target
Yeah, this headline may be a bit opinionated, but that's what it feels like to me. Elm, PureScript and Facebook's Reason are representatives of this approach. Well, if you want to give up on Javascript, these are reasonable options. But which horse should you bet on? And is the fragmentation of the Javascript ecosystem really desirable? Or do we want a community that is dependent on vendors like Facebook? I can't really answer these questions, because I am highly biased, as you're about to see.
Runtime Type Checker
Heads up: This is a shameless plug!
As a dynamically typed language, Javascript ships with mature introspection capabilities. Along with ES2015 proxies, we have everything we need to build a virtualized runtime type checker. Virtual means in this context that it is pluggable, that is, you can switch it on and off. A runtime type system needs to be pluggable, because it has a major impact on performance and is only needed during the development stage.
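To illustrate the general idea (this is only a toy sketch under simplifying assumptions, not ftor's actual API), a function can be wrapped in a proxy whose `apply` trap checks arguments at call time, and the wrapping can be skipped entirely when the checker is switched off:

```javascript
// toy sketch: a pluggable runtime check via an ES2015 proxy
// CHECK is consulted at definition time; with CHECK === false
// the plain, unwrapped function is returned and costs nothing
const CHECK = true;

const Fun = (expected, f) => !CHECK ? f : new Proxy(f, {
  apply: (target, thisArg, args) => {
    args.forEach(arg => {
      if (typeof arg !== expected) throw new TypeError(
        `expected ${expected}, got ${typeof arg}`
      );
    });
    return Reflect.apply(target, thisArg, args);
  }
});

const inc = Fun("number", n => n + 1);
console.log(inc(1)); // 2
// inc("foo") throws a TypeError while CHECK is true
```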
I've been working on such a type checker for several months now and it's been an exciting journey so far. ftor is far from being stable, but I believe the approach is worth exploring.
Here is the `comp` combinator from above as a typed version with type hints (`TS` is just an internal `Symbol` that holds the current signature of a type; you can use it to request this signature for debugging purposes):
import * as F from ".../ftor.js";
F.type(true);
const comp = F.Fun(
"(comp :: (b -> c) -> (a -> b) -> a -> c)",
f => g => x => f(g(x))
);
const inc = F.Fun(
"(inc :: Number -> Number)",
n => n + 1
);
comp(inc) [TS]; // "(comp :: (a -> Number) -> a -> Number)"
const comp1 = comp(comp),
comp2 = comp(comp) (comp);
comp1 [TS]; // "(comp :: (a -> b0 -> c0) -> a -> (a0 -> b0) -> a0 -> c0)"
comp2 [TS]; // "(comp :: (b1 -> c1) -> (a0 -> a1 -> b1) -> a0 -> a1 -> c1)"
`comp1`'s intermediate type signature tells you that it expects...
- a binary function
- a value
- a unary function
- and another value

That is, you would apply it like `comp1(add) (2) (inc) (3)`.
`comp2`'s intermediate type signature tells you that it expects...
- a unary function
- a binary function
- two values

And you would apply it like `comp2(inc) (add) (2) (3)`. So `comp2` is actually useful, because it allows us to apply a binary function as the inner function of the composition.
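Both application orders can be double-checked with the untyped combinators in plain Javascript (assuming a hypothetical curried `add`):

```javascript
const comp = f => g => x => f(g(x));
const inc = n => n + 1;
const add = m => n => m + n; // hypothetical curried addition

const comp1 = comp(comp);
const comp2 = comp(comp) (comp);

// binary function, value, unary function, value:
console.log(comp1(add) (2) (inc) (3)); // 6, i.e. add(2) (inc(3))

// unary function, binary function, two values:
console.log(comp2(inc) (add) (2) (3)); // 6, i.e. inc(add(2) (3))
```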
Admittedly, these type signatures aren't easy to read if you are not familiar with them. But in my experience the learning curve is rather short.
ftor supports parametric and row polymorphism, but currently not ad-hoc polymorphism.