To help you see what's happening in your code, I've included the text of the standard that explains how automatic type conversions are done (for integers), along with the section on bitwise shifting since that works a bit differently. I then step through your code to see exactly what intermediate types exist after each operation.
Relevant parts of the standard
6.3.1.1 Boolean, characters, and integers
- If an int can represent all values of the original type, the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. All other types are unchanged by the integer promotions.
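For example, here's a minimal sketch of what the integer promotions mean in practice; the exact sizeof value is platform-dependent:
```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t byte = 200;
    /* Each operand of + is promoted to int before the addition,
       so the result is 400, not 400 % 256 = 144. */
    printf("%d\n", byte + byte);          /* 400 */
    printf("%zu\n", sizeof(byte + byte)); /* sizeof(int), typically 4 */
    return 0;
}
```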
6.3.1.8 Usual Arithmetic Conversions
(I'm just summarizing the relevant parts here.)
- Integer promotion is done.
- If they are both signed or both unsigned, the smaller type is converted to the larger type.
- If the unsigned type is at least as large as the signed type, the signed operand is converted to the unsigned type.
- If the signed type can represent all values of the unsigned type, the unsigned operand is converted to the signed type.
- Otherwise, both are converted to the unsigned type that corresponds to the signed type.
(Basically, if you've got a OP b, the type used will be the largest of int, type(a), and type(b), preferring types that can represent all values representable by type(a) and type(b), and finally favoring signed types. Most of the time, that means it'll be int.)
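Here's a quick sketch of the classic surprise these rules produce when signed and unsigned operands have the same rank:
```c
#include <stdio.h>

int main(void) {
    int s = -1;
    unsigned int u = 1;
    /* int and unsigned int have the same rank, so the signed operand
       is converted to unsigned: -1 becomes UINT_MAX, which is > 1. */
    if (s < u)
        printf("-1 < 1, as expected\n");
    else
        printf("surprise: -1 compared as UINT_MAX\n"); /* this branch runs */
    return 0;
}
```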
6.5.7 Bitwise shift operators
- The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are filled with zeros. If E1 has an unsigned type, the value of the result is $E1 \times 2^{E2}$, reduced modulo one more than the maximum value representable in the result type. If E1 has a signed type and nonnegative value, and $E1 \times 2^{E2}$ is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
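A short sketch of those cases, assuming a 32-bit int and unsigned int (common, but not guaranteed):
```c
#include <stdio.h>

int main(void) {
    /* Unsigned shifts wrap: the result is reduced modulo UINT_MAX + 1. */
    unsigned int u = 0x80000000u;   /* assumes 32-bit unsigned int */
    printf("%u\n", u << 1);         /* 0 */

    /* Signed shift of a nonnegative, representable value is fine. */
    int s = 1;
    printf("%d\n", s << 8);         /* 256 */

    /* But (-1) << 1, or any signed shift whose result overflows,
       is undefined behavior. */
    return 0;
}
```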
How all that applies to your code
I'm skipping the first example for now, since I don't know what type pop() returns. If you add that information to your question, I can address that example as well.
Let's step through what happens in this expression (note that you had an extra ( after the first cast in your version; I've removed that):
(((int32_t)argument[0] << 8) & (int32_t)0x0000ff00 | (((int32_t)argument[1]) & (int32_t)0x000000ff))
Some of these conversions depend on the relative sizes of the types.
Let INT_TYPE be the larger of int32_t and int on your system.
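If you want to check which type INT_TYPE actually is on your system, C11's _Generic can report the type an expression ends up with. TYPE_NAME below is a macro I'm inventing for illustration, not a standard facility:
```c
#include <stdint.h>
#include <stdio.h>

/* Maps an expression's type to a printable name. */
#define TYPE_NAME(x) _Generic((x),        \
    int:           "int",                 \
    unsigned int:  "unsigned int",        \
    long:          "long",                \
    unsigned long: "unsigned long",       \
    default:       "something else")

int main(void) {
    printf("int32_t is %s here\n", TYPE_NAME((int32_t)0));
    printf("the shift result is %s\n", TYPE_NAME((int32_t)0 << 8));
    return 0;
}
```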
((int32_t)argument[0] << 8)
- argument[0] is explicitly cast to int32_t
- 8 is already an int, so no conversion happens
- (int32_t)argument[0] is converted to INT_TYPE.
- The left shift happens and the result has type INT_TYPE.
(Note that if argument[0] could have been negative, the shift would be undefined behavior. But since it was originally unsigned, you're safe here.)
Let a represent the result of those steps.
a & (int32_t)0x0000ff00
- 0x0000ff00 is explicitly cast to int32_t.
- Usual arithmetic conversions. Both sides are converted to INT_TYPE. Result is of type INT_TYPE.
Let b represent the result of those steps.
(((int32_t)argument[1]) & (int32_t)0x000000ff)
- Both of the explicit casts happen
- Usual arithmetic conversions are done. Both sides are now INT_TYPE.
- Result has type INT_TYPE.
Let c represent that result.
b | c
- Usual arithmetic conversions; no changes since they're both INT_TYPE.
- Result has type INT_TYPE.
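Putting the whole walkthrough together, here's a sketch with made-up input bytes (I don't know your actual data) that lets you watch each intermediate value; note that the top 16 bits stay zero throughout:
```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t argument[2] = { 0xAB, 0xCD };   /* made-up input bytes */

    int32_t a = (int32_t)argument[0] << 8;                   /* 0x0000AB00 */
    int32_t b = a & (int32_t)0x0000ff00;                     /* 0x0000AB00 */
    int32_t c = (int32_t)argument[1] & (int32_t)0x000000ff;  /* 0x000000CD */
    int32_t result = b | c;                                  /* 0x0000ABCD */

    /* Every value is non-negative, so signedness never changes
       the outcome here. */
    printf("a=0x%08X b=0x%08X c=0x%08X result=%d\n",
           (unsigned)a, (unsigned)b, (unsigned)c, result);
    return 0;
}
```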
Conclusion
So none of the intermediate results are unsigned here. (Also, most of the explicit casts were unnecessary, especially if sizeof(int) >= sizeof(int32_t) on your system.)
Additionally, since you start with uint8_t values, never shift by more than 8 bits, and store all the intermediate results in types of at least 32 bits, the top 16 bits will always be 0 and the values will all be non-negative, which means the signed and unsigned types represent all the values you could have here exactly the same.
What exactly are you observing that makes you think it's using unsigned types where it should use signed ones? Can we see example inputs and actual outputs, along with the outputs you expected?
Edit:
Based on your comment, it appears that the reason it isn't working the way you expected is not because the type is unsigned, but because you're generating the bitwise representations of 16-bit signed ints and storing them in 32-bit signed ints. Get rid of all the casts you have other than the (int32_t)argument[0] ones, and change those to (int)argument[0]. int is generally the size the system operates on most efficiently, so write your operations to use int unless you have a specific reason to use another size. Then cast the final result to int16_t.
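For example, here's a sketch of that suggestion using a made-up byte pair that encodes -100 (converting an out-of-range value to int16_t is technically implementation-defined, but it does what you want on the two's complement machines you'll actually encounter):
```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Made-up byte pair: 0xFF9C is the 16-bit two's complement
       pattern for -100. */
    uint8_t argument[2] = { 0xFF, 0x9C };

    /* Combine the bytes in int, then let the final cast reinterpret
       the 16-bit pattern as signed. */
    int16_t value = (int16_t)(((int)argument[0] << 8) | argument[1]);

    printf("%d\n", value);  /* -100 */
    return 0;
}
```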