The `T` in `IC<T>` is required to be an `IB<IA>`. You have given it an `IB<A>`. You have no guarantee that an `IB<A>` can be used as an `IB<IA>` just because `A` implements `IA`.
Think about it this way: if `IB<T>` means "I can eat anything of type T", `IA` means "I am a fruit", and `A` means "apple", then `IB<IA>` means "I can eat any fruit" and `IB<A>` means "I can eat any apple". If some code wants to feed you bananas and grapes then it needs an `IB<IA>`, not an `IB<A>`.
Let's suppose `IB<A>` could be converted to `IB<IA>` and see what goes wrong:
```csharp
// Declarations implied by the question:
interface IA
{
    int val { get; set; }
}
interface IB<T>
{
    T a_val { get; set; }
}
class AppleEater : IB<Apple>
{
    public Apple a_val { get; set; }
}
class Apple : IA
{
    public int val { get; set; }
}
class Orange : IA
{
    public int val { get; set; }
}
```
...

```csharp
IB<Apple> iba = new AppleEater();
IB<IA> ibia = iba; // Suppose this were legal.
ibia.a_val = new Orange(); // ibia.a_val is of type IA and Orange implements IA.
```
And now we have just set `iba.a_val`, a property of type `Apple`, to a reference to an object of type `Orange`. That's why the conversion has to be illegal.
So how can you make this legal?
As the code stands, you cannot, because as I've just shown, it isn't typesafe.
You can make it legal by marking `T` as `out`, like this: `interface IB<out T>`. However, it is then illegal to use `T` in any "input context". In particular, you cannot have any property of type `T` that has a setter. If we make that restriction then the problem disappears, because `a_val` cannot be set to an instance of `Orange`; it is read-only.
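Here is a sketch of what the covariant version might look like (the member names are assumed from the question; only the `out` modifier and the setter removal are the point):

```csharp
interface IA { }
class Apple : IA { }

// T is marked "out": it may only appear in output positions,
// so the property has a getter but no setter.
interface IB<out T>
{
    T a_val { get; }
}

class AppleEater : IB<Apple>
{
    public Apple a_val { get; } = new Apple();
}

class Program
{
    static void Main()
    {
        IB<Apple> iba = new AppleEater();
        IB<IA> ibia = iba;     // Now legal: IB<out T> is covariant in T.
        IA fruit = ibia.a_val; // Reading out an IA is safe.
        System.Console.WriteLine(fruit is Apple); // True
        // ibia.a_val = new Orange();  // Impossible: there is no setter.
    }
}
```

Reading through `ibia` is safe because every value that comes *out* of an `IB<Apple>` is an `Apple`, and every `Apple` is an `IA`; it is only writing *in* that had to be forbidden.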
This question is asked extremely frequently on SO. Look for questions about "covariance and contravariance" in C# for plenty of examples.