Take the following code for example:
#include <stdint.h>

uint32_t fg;
uint32_t bg;
uint32_t mask;
uint32_t dest;
...
/* Select the fg bits where mask is set, the bg bits elsewhere. */
dest = (fg & mask) | (bg & (~mask));
Now this fragment has all its operands typed as 32-bit unsigned ints. Using a C compiler with a 32-bit int size, no integer promotions widen them (uint32_t already has at least the rank of int there), so the entire operation is performed in 32 bits.
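For contrast, here is what I mean by the promotions actually kicking in with narrower types (a minimal sketch of mine, not part of the fragment above): with uint16_t operands and a 32-bit int, each operand is promoted to int before the bitwise operators apply, which is easy to observe with ~:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t x = 0xFFFF;
    /* The integer promotions convert x to int before ~ applies,
       so with a 32-bit int ~x is 0xFFFF0000 (-65536), not 0.
       The cast truncates the result back to 16 bits. */
    uint16_t inv = (uint16_t)~x;
    printf("inv = %u, ~x = %d\n", (unsigned)inv, ~x); /* inv = 0, ~x = -65536 */
    return 0;
}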
My problem is that, as for example Wikipedia shows, even 64-bit machines usually get compilers which use a 32-bit int size. Conforming to the C standard, such compilers wouldn't promote the operands to 64-bit ints, so the fragment could potentially compile into something with inferior performance and probably even larger code size (judging by analogy with how 16-bit operations are more expensive, in cycles and instruction size, on 32-bit x86).
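(I do know that <stdint.h> also offers uint_fast32_t, meant as an unsigned type of at least 32 bits that the implementation considers fastest; on 64-bit platforms it is often 64 bits wide. A quick sketch of mine to illustrate what I mean:)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* uint_fast32_t is whatever type the implementation considers
       fastest with at least 32 bits -- frequently 8 bytes on
       64-bit platforms, while uint32_t stays 4 bytes. */
    printf("uint32_t: %zu bytes, uint_fast32_t: %zu bytes\n",
           sizeof(uint32_t), sizeof(uint_fast32_t));
    return 0;
}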
The primary question is: do I have to be concerned? (I believe I may not, since with optimizations enabled a sane compiler should be able to omit the excess gunk that would show up from strictly following the C standard. Please see past the example code, and think in general about where my belief may stand on weaker ground.)
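For what it's worth, the way I would try to verify that belief (just a sketch; the file and function names are mine) is to compile an isolated version of the fragment with optimizations and look at the assembly, e.g. with gcc -O2 -S:

/* blend.c -- compile with e.g. `gcc -O2 -S blend.c` and inspect
   blend.s to see which operand sizes the compiler actually emits. */
#include <stdint.h>

uint32_t blend(uint32_t fg, uint32_t bg, uint32_t mask)
{
    /* Select fg where mask bits are set, bg elsewhere. */
    return (fg & mask) | (bg & ~mask);
}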
If it is so (that I actually do have to be concerned), could you recommend some resource (book, site, whatever) which covers this area? (I know this is a bit out of bounds for SO, but I would find it much less useful if all I got to accept were a three-word "Yes, you do!" answer.)