I am aware that the arithmetic operators promote their operands, so that e.g. adding or multiplying two shorts can give an int
answer.
But surely the bitwise operators should respect the input types?
Specifically:
UInt16 x = 123;
UInt16 ffff = 0xffff;
x = x ^ 0xffff;
x = x ^ ffff;
Both of those assignments fail to compile, whereas, as @dmitry points out, x ^= 0xffff
is just fine.
Is there a good reason for this, or do I just cast the result and get on with my day?
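(For comparison, Java has exactly the same rule, using char as its unsigned 16-bit type: ^ promotes the operand to int, so plain assignment back needs an explicit cast, while compound assignment is defined to insert that cast implicitly. A minimal sketch of the behaviour:)

```java
public class PromotionDemo {
    // With a 16-bit operand, ^ promotes to int, so assigning the result
    // back into the narrow type needs an explicit narrowing cast.
    static char xorPlain(char x) {
        // x = x ^ 0xffff;          // would not compile: result is int
        return (char) (x ^ 0xffff);
    }

    // Compound assignment is specified as x = (char)(x ^ 0xffff),
    // so the narrowing cast happens implicitly.
    static char xorCompound(char x) {
        x ^= 0xffff;
        return x;
    }

    public static void main(String[] args) {
        System.out.println((int) xorPlain((char) 123));    // 65412
        System.out.println((int) xorCompound((char) 123)); // 65412
    }
}
```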
I have a real case more like:
x = (x >> 8) ^ buffer[(x & 0xFF) ^ y];
and I am trying to work out: how many casts do I actually need?
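(My current thinking, sketched in Java since it shares the promotion rule, with char standing in for the 16-bit type: every intermediate result of >>, &, ^ and the table lookup is already an int, so a single narrowing cast around the whole right-hand side should suffice. The buffer element type and y here are assumptions, stand-ins for whatever the real table and byte are.)

```java
public class CrcStep {
    // One cast is enough: all the intermediates are ints anyway, so only
    // the final assignment back into the 16-bit variable needs narrowing.
    static char step(char x, int[] buffer, int y) {
        return (char) ((x >> 8) ^ buffer[(x & 0xFF) ^ y]);
    }

    public static void main(String[] args) {
        int[] buffer = new int[256];   // hypothetical lookup table
        buffer[0x7B] = 0x1234;         // x & 0xFF == 0x7B when x == 123
        System.out.println((int) step((char) 123, buffer, 0)); // 4660
    }
}
```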