This is a twofold question.
I have been reading up on the intricacies of how compilers process code, and I am confused: both conversion to a signed integer type and an arithmetic right shift seem to follow the same logic of sign extension for signed integers. So is the conversion simply implemented as an arithmetic right shift?
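To make sure I am comparing the right two things, here is a rough sketch of what I mean (my own illustration, not from the book; the byte value 0xA1 is one I picked myself, and it assumes 32-bit int, two's complement, and that the compiler implements >> on signed values as an arithmetic shift, which as far as I know is implementation-defined):

    #include <stdio.h>

    int main(void) {
        /* a byte whose high bit is set, chosen for illustration */
        signed char b = (signed char)0xA1;            /* -95 on a two's-complement machine */

        int via_conversion  = (int)b;                       /* widening conversion: sign-extends */
        int via_arith_shift = ((int)(0xA1u << 24)) >> 24;   /* shift the byte up, then back down
                                                               as a signed value */

        printf("via_conversion  = %d (0x%08X)\n", via_conversion,  (unsigned)via_conversion);
        printf("via_arith_shift = %d (0x%08X)\n", via_arith_shift, (unsigned)via_arith_shift);
        return 0;
    }

If both lines print the same value, that is what I mean by the two operations following "the same logic" of sign extension.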
One of the examples defines a function as
int Fun1(unsigned word) { return (int)((word << 24) >> 24); }
The argument passed is 0x87654321. Since its high bit is set, this value would be negative if interpreted as a signed integer, so how would the shift happen? My logic was that the left shift extracts the last 8 bits, leaving 0 as the MSB, and this would then be extended during the right shift. Is this logic correct?
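To check my reasoning I put together the following trace (again my own sketch, not from the book, assuming 32-bit unsigned and int as per the edit below):

    #include <stdio.h>

    int Fun1(unsigned word) { return (int)((word << 24) >> 24); }

    int main(void) {
        unsigned word = 0x87654321u;

        unsigned left  = word << 24;   /* I expect 0x21000000: only the low byte 0x21 survives */
        unsigned right = left >> 24;   /* this is where I'm unsure: logical or arithmetic?     */

        printf("left  = 0x%08X\n", left);
        printf("right = 0x%08X\n", right);
        printf("Fun1  = %d\n", Fun1(word));
        return 0;
    }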
Edit: I understand that the downvote is probably due to unspecified information. Assume a 32-bit big-endian machine with two's complement representation for signed integers.