So I came across some weird behaviour regarding bitwise operators and bit shifts. I was trying to make a small check faster by using bit masks and I stumbled upon this:
public class Weirdness {

    private final static int constant = 3;
    private static int notConstant = 3;

    public void stuff() {
        byte a = 0b1 << 3;
        byte b = 0b1 << (int) 3;
        byte c = 0b1 << constant;
        byte d = 0b1 << notConstant; // error
        byte e = 0b1 << getAnInt(); // error
        byte f = 0b1 << getAFinalInt(); // error

        int i = 3;
        byte g = 0b1 << i; // error

        final int j = 3;
        byte h = 0b1 << j;
    }

    public static int getAnInt() {
        return 3;
    }

    public static final int getAFinalInt() {
        return 3;
    }
}
a, b, c and h do not give compilation errors, but d, e, f and g do. The compiler asks to cast explicitly to byte or to declare those variables as int instead. I have noticed a similar behaviour with the bitwise & and | operators too.
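For instance, here is a minimal reduction of what I observed with & (the class name AndWeirdness is just for illustration):

public class AndWeirdness {

    private final static int constant = 3;
    private static int notConstant = 3;

    public void stuff() {
        byte x = 0b1 & 3;           // compiles
        byte y = 0b1 & constant;    // compiles
        byte z = 0b1 & notConstant; // error: possible lossy conversion from int to byte
    }
}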
Could someone explain what is happening here? What kind of magic is the compiler working for a, b, c and h to compile?
EDIT: Or how this is not exactly a duplicate

I believe this question is different from Why can not I add two bytes and get an int and I can add two final bytes get a byte? because what is causing the interesting behaviour is how the compiler optimizes the bitwise shift operations.

And since I seek a theoretical answer (because I already understand that I can make my code compile by casting) to how the shift and other bitwise operations determine the type of their result, I believe this question can complement Java - bit shifting with integers and bytes and bring more interesting information to Stack Overflow.
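For completeness, this is a minimal sketch of the casting workaround I already know about (the class name CastWorkaround is just illustrative); what I am asking for is the theory behind why a, b, c and h do not need it:

public class CastWorkaround {

    private static int notConstant = 3;

    public void stuff() {
        byte d = (byte) (0b1 << notConstant); // compiles once the result is cast down
        int g = 0b1 << notConstant;           // or simply keep the result as an int
    }
}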