My understanding from reading C99 [PDF] is that it defines integer types (e.g. `unsigned int`) in terms of rank, range, etc. in a way that does not mandate a maximum bit width, although a minimum width is implied. I've read on StackOverflow that while `unsigned int` compiles as `uint32_t` on 32-bit platforms, it may be promoted to 64 bits on x86-64 platforms. Further, Wikipedia says the Unix/Linux family of operating systems uses a 64-bit `long` by default. Posts indicate that the main concern is unexpected behavior; the performance impact of promoting or not promoting appears to be secondary.
To avoid potential unexpected behavior when porting a program from one platform to another (e.g. porting a Windows app to Linux), is it possible to force the compiler (e.g. `gcc`) not only to abide by a certain maximum bit width for a specific type (e.g. guarantee that `long` is 32-bit in the context of the current code, contrary to the platform's usual sizing), but to ALSO enforce that restriction on all math operations and the intermediate results of math operations involving the restricted value?
There's been discussion of how to restrict `long` to a specific size by value, but that appears limited to restricting the values themselves, not the operations. If promotion can still occur -- per the post regarding `int32_t` multiplication and unexpected overflows during automatic promotion in code compiled for x86-64 CPUs -- restricting by value doesn't ensure expected results or consistency. How can we restrict not only the values, but also the intermediates produced by operations?
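To illustrate the kind of intermediate-result surprise I mean (my own sketch, not the exact code from the linked post, assuming the common case where `int` is 32 bits wide): the width of a multiplication is decided by the operands' promoted types, not by the destination the result is stored into.

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint32_t a = 3000000000u;            /* fits comfortably in 32 bits */

    uint64_t wrapped = a * a;            /* multiplied at 32-bit width, wraps,
                                            and only THEN widened to 64 bits */
    uint64_t widened = (uint64_t)a * a;  /* manual cast forces a 64-bit intermediate */

    printf("wrapped: %" PRIu64 "\n", wrapped);
    printf("widened: %" PRIu64 "\n", widened);
    return 0;
}
```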
Does the ability to enforce a maximum bit width in all operations relative to a specific type (assuming that's possible) vary by compiler, or is it standardized in some way? That is, if `long` is natively 64-bit, can we force it to be 32-bit via certain preprocessor directives, etc.?
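To make that last question concrete, the closest I can get today is something like the following sketch, where `app_ulong` and `scale32` are made-up names for illustration; what I am asking is whether the compiler can be made to guarantee these 32-bit semantics globally, instead of me casting by hand in every expression:

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* Hypothetical "pinned" type: 32 bits regardless of the platform's long. */
typedef uint32_t app_ulong;

/* The property I want enforced everywhere: not just 32-bit storage, but
   32-bit wrap-around semantics for every intermediate result as well. */
static app_ulong scale32(app_ulong x, app_ulong y)
{
    return (app_ulong)(x * y);   /* wraps modulo 2^32 on typical targets */
}

int main(void)
{
    /* 70000 * 70000 = 4900000000, which wraps modulo 2^32 to 605032704. */
    printf("%" PRIu32 "\n", scale32(70000u, 70000u));
    return 0;
}
```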