For `Function` specifically, no. `Function<T, R>` defines exactly one abstract method, `apply`, which uses `T` contravariantly and `R` covariantly. But `Function` isn't what they had in mind when they designed that feature.
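The JDK itself leans on exactly this shape: `Function`'s own `compose` and `andThen` take `Function<? super V, ? extends T>`-style arguments. A minimal sketch of the same idea (the `describe` method and the concrete types are made up for illustration):

```java
import java.util.function.Function;

public class FunctionVariance {
    // Accepting Function<? super T, ? extends R> lets callers pass any
    // function that consumes a supertype of T and produces a subtype of R.
    static String describe(Function<? super Integer, ? extends CharSequence> f) {
        return f.apply(42).toString();
    }

    public static void main(String[] args) {
        // Number is a supertype of Integer, and String a subtype of
        // CharSequence, so this only compiles thanks to the wildcards.
        Function<Number, String> f = n -> "value: " + n;
        System.out.println(describe(f));  // value: 42
    }
}
```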
When the Java devs designed call-site variance, they were imagining classes that had both covariant and contravariant uses. For instance, in principle, the `E` in `List<E>` must be invariant: it appears in covariant position in `get` and in contravariant position in `add`.
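A minimal sketch of those two positions (the `Store` interface here is hypothetical, a stripped-down stand-in for `List`):

```java
import java.util.ArrayList;
import java.util.List;

// A tiny interface showing the two positions E occupies in List<E>.
interface Store<E> {
    E get(int index);     // covariant position: E flows out to the caller
    void add(E element);  // contravariant position: E flows in from the caller
}

class ListStore<E> implements Store<E> {
    private final List<E> items = new ArrayList<>();
    public E get(int index) { return items.get(index); }
    public void add(E element) { items.add(element); }
}

public class Positions {
    public static void main(String[] args) {
        Store<String> s = new ListStore<>();
        s.add("hello");
        System.out.println(s.get(0));  // hello
    }
}
```

Because `E` appears in both positions, no single variance annotation on `Store` (or `List`) could be sound.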
So the rationale was this. Suppose we have a type hierarchy `Z <= Y <= X`. That is, `Z` is a class that subclasses `Y`, and `Y` in turn subclasses `X`. A `List<Y>` can do anything with type `Y`: it can have `Y`s added to the end, and a user can retrieve elements of type `Y` from it. But it can never be a `List<Z>` or a `List<X>`, since adding to a `List<X>` would be unsound, and so would retrieving as a `List<Z>`.
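Here's that argument in code, with a made-up `X`/`Y`/`Z` hierarchy; the commented-out lines are the ones the compiler rejects:

```java
import java.util.ArrayList;
import java.util.List;

public class WhyInvariant {
    // Hypothetical hierarchy matching the text: Z <= Y <= X.
    static class X { }
    static class Y extends X { }
    static class Z extends Y { }

    // A List<Y> works freely with Ys: we can add them and read them back.
    static Y roundTrip() {
        List<Y> ys = new ArrayList<>();
        ys.add(new Y());
        return ys.get(0);
    }

    public static void main(String[] args) {
        Y y = roundTrip();

        // Neither reassignment compiles, and for good reason:
        // List<X> xs = new ArrayList<Y>();  // xs.add(new X()) would put
        //                                   // a plain X into a list of Ys
        // List<Z> zs = new ArrayList<Y>();  // zs.get(0) would claim a
        //                                   // mere Y is a Z
        System.out.println(y != null);  // true
    }
}
```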
But we can express our intention. `List<? extends Y>` is a type we can only ever read from. It can actually be backed by a `List<Z>` under the hood, since a list of `Z` elements is genuinely still (at least for covariant methods) a list of `Y` elements. We can get elements from this list, but we can't add to the end of it, since we said we're using the type argument in covariant position, and `add` uses the type argument contravariantly. Essentially, `List<? extends Y>` is a smaller interface that includes only some of the methods from the actual interface `List`.
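For instance (with a hypothetical `Y`/`Z` pair standing in for the hierarchy above):

```java
import java.util.List;

public class CovariantView {
    // Hypothetical hierarchy matching the text: Z subclasses Y.
    static class Y { }
    static class Z extends Y { }

    // A List<? extends Y> can be backed by a List<Z>; we may read Ys out,
    // but the compiler rejects any attempt to add.
    static int countElements(List<? extends Y> ys) {
        int n = 0;
        for (Y y : ys) {          // reading each element as a Y is sound
            n++;
        }
        // ys.add(new Y());       // does not compile: the actual element
        //                        // type is some unknown subtype of Y
        return n;
    }

    public static void main(String[] args) {
        List<Z> zs = List.of(new Z(), new Z());
        System.out.println(countElements(zs));  // 2
    }
}
```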
The same is true of `List<? super Y>`. We can't read from it (except as `Object`), since we don't know that every element is of type `Y`. But we can add to it, since we know that the list at least supports elements of type `Y`. We can use all of the contravariant methods, like `add`, but none of the covariant ones.
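And the mirror image (again with made-up classes):

```java
import java.util.ArrayList;
import java.util.List;

public class ContravariantView {
    // Hypothetical hierarchy matching the text: Y subclasses X.
    static class X { }
    static class Y extends X { }

    // A List<? super Y> can be backed by a List<X> (or List<Object>);
    // we may add Ys, but reading yields only Object.
    static void addTwo(List<? super Y> sink) {
        sink.add(new Y());        // adding a Y is sound
        sink.add(new Y());
        // Y y = sink.get(0);     // does not compile: element type unknown
        Object o = sink.get(0);   // the best we can read back is Object
    }

    public static void main(String[] args) {
        List<X> xs = new ArrayList<>();
        addTwo(xs);
        System.out.println(xs.size());  // 2
    }
}
```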
For a type like `List` that uses its type arguments in different ways, call-site variance makes some amount of sense. For a special-purpose interface like `Function` that does one thing, it makes little sense.
That was the Java developers' rationale some twenty years ago, when generics were added to Java. A lot has happened since then. If someone wrote an interface like `List` in today's world, an interface with upwards of 20 abstract methods, half of which have "this method may not be supported and might just throw `UnsupportedOperationException`" built into the contract, they'd rightly be laughed off the stage.
Today's world is one of small, tight interfaces. We follow the SOLID principles: an interface does one thing and does it well. If an interface defines more than two or three (non-defaulted, non-inherited) methods, we pause and ask whether we can make it more modular. And we try to design systems that are more immutable by design, to support scaling and concurrency. We have `record`s, or `data class`es, or whatever your favorite language calls them, that are immutable by default.
So twenty years ago, the idea of a massive super-interface that does twenty things and that can be narrowed down dynamically via type projections seemed pretty cool. Today, it makes far more sense to specify the variance at the declaration site, since most interfaces are small and have a clear use case in mind.
The `scala.collection.Seq` trait defines three abstract, non-inherited methods (`apply`, `iterator`, and `length`), all of which use the type argument covariantly, so `Seq` is declared with a covariant type parameter. The corresponding mutable trait adds one more method (`update`), which uses its type argument contravariantly, so it has an invariant type parameter.
In Scala, if you want to modify a sequence, you take a `scala.collection.mutable.Seq`. If you only want to read, you take a `scala.collection.Seq`. Those interfaces are small enough, and narrow enough in purpose, that having several of them doesn't hurt code quality, especially since traits and classes in Scala are cheap to write compared to the boilerplate needed in Java to define even a simple class.