There are (at least) three possible type inference strategies the compiler could apply to `var o = null`:

- pick `Void`
- pick `Object`
- look for a later initialization and pick that type
All of them are technically feasible, so the question becomes which one makes the most sense for developers.
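For context, the compiler currently applies none of these strategies; it simply rejects the declaration. Here is a minimal sketch of the rejection and two common workarounds (the names `o`, `s`, `t` and the `String` type are purely illustrative):

```java
// var o = null;        // does not compile: a bare `null` gives the compiler no type to infer

// workaround 1: state the intended type explicitly
String s = null;

// workaround 2: keep `var`, but give the initializer a type via a cast
var t = (String) null;  // inferred as String
```

The cast variant is legal, but rarely an improvement over just spelling out the type.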
Clearly, `Void` is pretty useless and I would argue that `Object` is not much more useful, either. While correct, picking either of these types is unlikely to help the developer write better and more readable code.
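To see why, here is a hypothetical sketch under the assumption that the compiler inferred `Object` (it does not actually do this):

```java
var o = null;             // hypothetically inferred as Object
o = "Hello";              // would be fine, a String is an Object
int length = o.length();  // would still not compile: Object has no length()
```

The variable would accept any value, but almost every use of it would require a cast, which is exactly the kind of ceremony `var` is meant to reduce.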
The last option, looking for a later initialization, was deliberately not adopted, in order to avoid so-called action-at-a-distance errors:
```java
var parent = null;           // imagine some recursion or loop structure, so this makes more sense
processNode(parent);         // expects a parameter of type `Node`
parent = determineParent();  // returns a value of type `Node`
```
If the compiler inferred `Node` for `parent` because `determineParent()` returns it, this would compile. But the code is fragile, because changes to the last line might lead to a different type being chosen in the first line and hence to compile errors on the second line. That's not good!
We're used to the fact that changing a type's declaration can lead to errors down the road, but here the change (line 3), its effect (line 1), and the consequent error (line 2) can be pretty far apart, making it much more complicated for developers to understand or, better, predict what happens.
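To make that distance concrete, here is a hypothetical sketch of what could happen under option 3 if `determineParent()` were later refactored to return, say, `Optional<Node>` (the `Optional` return type is just an assumed example of such a change):

```java
var parent = null;           // the inferred type would silently change to Optional<Node>
processNode(parent);         // the compile error would surface here ...
parent = determineParent();  // ... caused by a change made down here
```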
Keeping the type inference rules simple makes it easier for developers to form a simple but correct mental model of what's going on.
Addendum
There are doubts whether option 3, inferring the type from a later initialization, is indeed technically feasible. My opinion (that it is) is based on my understanding of JEP 286, specifically:
> On the other hand, we could have expanded this feature to include the local equivalent of "blank" finals (i.e., not requiring an initializer, instead relying on definite assignment analysis.) We chose the restriction to "variables with initializers only" because it covers a significant fraction of the candidates while maintaining the simplicity of the feature and reducing "action at a distance" errors.
>
> Similarly, we also could have taken all assignments into account when inferring the type, rather than just the initializer; while this would have further increased the percentage of locals that could exploit this feature, it would also increase the risk of "action at a distance" errors.
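For illustration, here is a small sketch (names and values made up) of the "blank" form the quote describes versus the initializer-only form that was actually adopted:

```java
static String pick(boolean flag) {
    // the rejected "blank" form would have relied on definite assignment analysis:
    //     var label;                    // does not compile: `var` needs an initializer
    //     if (flag) label = "on"; else label = "off";

    // the adopted rules require an initializer, so the type can be read right here:
    var label = flag ? "on" : "off";     // inferred as String
    return label;
}
```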