The following code:
public class Test {
    <T extends Test1 & Test2> void f(T t) {
        t.foo1();
        t.foo2();
    }
}

interface Test1 {
    void foo1();
}

interface Test2 {
    void foo2();
}
compiles into the following bytecode:
<T extends test.Test1 & test.Test2> void f(T);
  Code:
     0: aload_1
     1: invokeinterface #2,  1    // InterfaceMethod test/Test1.foo1:()V
     6: aload_1
     7: checkcast       #3        // class test/Test2
    10: invokeinterface #4,  1    // InterfaceMethod test/Test2.foo2:()V
    15: return
So in the bytecode the parameter is erased to its first bound, Test1, and whenever it is used as a Test2, the compiler inserts a checkcast. The equivalent pre-generics code, as you would have written it in Java 1.4, is:
void f(Test1 t) {
    t.foo1();
    ((Test2) t).foo2();
}
You are extremely unlikely to observe any performance difference caused by that cast, but it is still useful to understand how such code compiles to bytecode.
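If you ever wanted to measure it anyway, a JMH microbenchmark is the right tool; on HotSpot the checkcast typically disappears into noise. A minimal sketch, assuming JMH is on the classpath (the CastBenchmark class, its local interfaces, and Impl are mine for illustration; they return values so the JIT cannot discard the calls as dead code):

package test;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

public class CastBenchmark {

    interface Test1 { int foo1(); }
    interface Test2 { int foo2(); }

    static class Impl implements Test1, Test2 {
        public int foo1() { return 1; }
        public int foo2() { return 2; }
    }

    @State(Scope.Benchmark)
    public static class S {
        final Test1 asTest1 = new Impl();   // erased, generic-style reference
        final Impl concrete = new Impl();   // exact type, no checkcast needed
    }

    @Benchmark
    public int withCheckcast(S s) {
        // Mirrors what the compiler emits for <T extends Test1 & Test2>.
        return s.asTest1.foo1() + ((Test2) s.asTest1).foo2();
    }

    @Benchmark
    public int withoutCheckcast(S s) {
        return s.concrete.foo1() + s.concrete.foo2();
    }
}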