I'm using an MPC56XX (embedded systems) with a compiler for which an int and a long are both 32 bits wide.
In a required software package we had the following definitions for 32-bit wide types:
typedef signed int sint32;
typedef unsigned int uint32;
In a new release this was changed without much documentation to:
typedef signed long sint32;
typedef unsigned long uint32;
I can see why this would be a good thing: int has a conversion rank between those of short and long, so in theory extra conversions can apply when the first set of definitions is used.
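For example, here is the kind of extra conversion I have in mind (a hypothetical snippet; the names and the long operand are mine, not from the package):

/* hypothetical snippet; 'threshold' stands in for a long value coming
   from elsewhere, since our rules don't let us declare long ourselves */
typedef signed int sint32;          /* the first set of definitions */

long threshold = 100000L;
sint32 sample = 42;

int within_range(void)
{
    /* With the int-based typedef, 'sample' is converted to long here,
       because long has the higher conversion rank; with the long-based
       typedef both operands already have the same type and no conversion
       takes place. Since both types are 32 bits on this target, the
       result is the same either way. */
    return sample < threshold;
}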
My question: Given the above change forced upon us by the package authors, is there any imaginable situation where it would alter the compiled code, correctly leading to a different result?
I'm familiar with the "usual unary conversions" and the "usual binary conversions", but I have a hard time coming up with a concrete situation where this change could really break my existing code. Or is it truly irrelevant?
I'm currently working in a pure C environment, using C89/C94, but I'd be interested in both C and C++ issues.
EDIT: I know that mixing int with sint32 may produce different results once sint32 is redefined. But we're not allowed to use the original C types directly, only the typedef'ed ones.
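For reference, this is the sort of forbidden mixing I'm excluding (again a hypothetical snippet, with names of my own invention):

#include <stdio.h>

typedef signed int sint32;    /* the old definition; swap in 'signed long' to compare */

void demo(void)
{
    sint32 value = 123;
    int *alias = &value;      /* accepted while sint32 is int; becomes an
                                 incompatible-pointer-type diagnostic once
                                 sint32 is long */
    printf("%d\n", value);    /* matches %d only while sint32 is int; formally
                                 needs %ld once sint32 is long, even though
                                 both types are 32 bits here */
    (void)alias;
}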
I'm looking for a sample (an expression or snippet) using constants, unary/binary operators, casts, etc. that compiles correctly under both sets of definitions but yields a different result depending on which one is used.