The UINT8_C macro is defined in stdint.h, with the following specification: "The macro UINTN_C(value) shall expand to an integer constant expression corresponding to the type uint_leastN_t."
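For concreteness, a typical use looks like this (a minimal sketch; which concrete type the expression actually carries is part of what is in question below):

#include <cstdint>
#include <iostream>

int main() {
    // Per the specification, UINT8_C(0xFF) is an integer constant
    // expression corresponding to uint_least8_t.
    std::uint_least8_t all_bits = UINT8_C(0xFF);
    std::cout << +all_bits << std::endl; // unary + promotes to int; prints 255
}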
In the wild, however, implementations differ:
#define UINT8_C(value) ((uint8_t) __CONCAT(value, U)) // AVR-libc
#define UINT8_C(x_) (static_cast<std::uint8_t>(x_)) // QP/C++
#define UINT8_C(c) c // GNU C Library
The first two implementations seem roughly equivalent, but the third behaves differently: for example, the following line prints 1 with AVR-libc and QP/C++, but -1 with glibc (glibc's macro leaves the operand as a plain int, and right shifts of negative signed values propagate the sign bit on typical implementations).
std::cout << (UINT8_C(-1) >> 7) << std::endl; // prints -1 in glibc
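For a self-contained reproduction that runs on any platform, one can imitate the two styles of definition locally (the AVR_UINT8_C and GLIBC_UINT8_C names below are hypothetical stand-ins, chosen to avoid colliding with the real macro):

#include <cstdint>
#include <iostream>

#define AVR_UINT8_C(value) ((std::uint8_t)(value ## U)) // cast-based, as in AVR-libc
#define GLIBC_UINT8_C(c) c                              // bare expansion, as in glibc

int main() {
    // -1U wraps to UINT_MAX; the cast truncates it to 255, and 255 >> 7 == 1.
    std::cout << (AVR_UINT8_C(-1) >> 7) << std::endl;   // prints 1
    // Here the operand stays a plain int, so the shift is arithmetic.
    std::cout << (GLIBC_UINT8_C(-1) >> 7) << std::endl; // prints -1 on typical targets
}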
The implementation of UINT16_C displays the same behavior, but UINT32_C does not, because its definition includes the U suffix:
#define UINT32_C(c) c ## U
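The suffix changes the arithmetic: the argument is pasted with U, so a "negative" constant becomes a large unsigned value and the shift is logical rather than arithmetic. A quick check (the UINT16_C line assumes glibc; the UINT32_C line behaves the same under all three styles of definition):

#include <cstdint>
#include <iostream>

int main() {
    // With glibc, UINT32_C(-1) expands to -1U, i.e. UINT_MAX (0xFFFFFFFF
    // with 32-bit int), and shifting an unsigned value fills with zeros.
    std::cout << (UINT32_C(-1) >> 31) << std::endl; // prints 1
    // UINT16_C has no suffix in glibc, so this is a plain -1 >> 15.
    std::cout << (UINT16_C(-1) >> 15) << std::endl; // prints -1 with glibc
}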
Interestingly, glibc's definition of UINT8_C changed in 2006, in response to a bug report. The previous definition was #define UINT8_C(c) c ## U, but that produced incorrect output (false) for -1 < UINT8_C(0): the U suffix gives the constant type unsigned int, so the usual arithmetic conversions turn -1 into UINT_MAX before the comparison.
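The issue is reproducible with plain literals, independent of the macro (a minimal sketch of the conversion the bug report complained about):

#include <iostream>

int main() {
    // With the old definition, UINT8_C(0) expanded to 0U. In -1 < 0U, the
    // usual arithmetic conversions convert -1 to unsigned int (UINT_MAX),
    // so the comparison is false.
    std::cout << (-1 < 0U) << std::endl; // prints 0 (false)
    // The standard requires the promoted type, which is int for
    // uint_least8_t, so the intended comparison is a signed one:
    std::cout << (-1 < 0) << std::endl;  // prints 1 (true)
}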
Are all three definitions correct according to the standard? Are there other differences (besides the handling of negative constants) between these three implementations?