I have often read that Java wildcards are a more powerful concept than use-site variance. But in my understanding, Java wildcards are exactly use-site variance.
So what is the difference between the two? Can you give a concrete example of something that is possible with Java wildcards but not with use-site variance?
For example, the first answer to How does Java's use-site variance compare to C#'s declaration site variance? makes exactly the claim my question is about:
First off, if I remember correctly use-site variance is strictly more powerful than declaration-site variance (although at the cost of concision), or at least Java's wildcards are (which are actually more powerful than use-site variance).
However, the answer does not state what the difference is, only that there is one.
Edit:
A first difference I have found here (first paragraph on page 112) seems to be that use-site variance completely disallows calling a method that has a type parameter in the wrong position, while wildcards still allow calling it with some types. E.g., with pure use-site variance you cannot call add on a List<? extends String> at all. With Java wildcards, you can call add on such a list, but you have to pass null as the argument. For contravariance, one can call any method that returns a value of a contravariant type parameter, but one has to assume that the return type is Object. But is this the only difference?
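To make that difference concrete, here is a minimal sketch (class and variable names are my own) of the two behaviours I mean: add is only callable with null on a List<? extends String>, and reading from a List<? super String> only yields Object:

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardDemo {
    public static void main(String[] args) {
        // Covariant use: the element type is some unknown subtype of String.
        List<? extends String> covariant = new ArrayList<String>();
        // covariant.add("hello");   // does not compile: no type is known to fit the capture
        covariant.add(null);         // compiles: null belongs to every reference type

        // Contravariant use: the element type is some unknown supertype of String.
        List<? super String> contravariant = new ArrayList<Object>();
        contravariant.add("hello");            // compiles: String fits any lower-bounded element type
        Object element = contravariant.get(0); // the return type can only be assumed to be Object
        System.out.println(element);
    }
}
```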