7

In the C99 standard, Section 7.18.4.1 "Macros for minimum-width integer constants", macros of the form [U]INT[N]_C(x) are defined for converting integer constants to minimum-width integer types, where N = 8, 16, 32, 64. Why are these macros defined when I could use the L, UL, LL or ULL suffixes instead? For example, when I want an unsigned integer constant that is at least 32 bits wide, I can simply write 42UL instead of UINT32_C(42). Since the long data type is at least 32 bits wide, this is also portable.

So, what is the purpose of these macros?

obareey
  • 309
  • 2
  • 9
  • I think they're so that you don't have to worry about such things as how big an `int`, or a `long` is when creating your own types of fixed sized integers. – Dave Newman May 03 '13 at 14:18

3 Answers

7

You'd need them in places where you want to make sure that the constants don't become too wide:

#define myConstant UINT32_C(42)

and later

printf("%" PRIu32 " is %s\n", (hasproperty ? toto : myConstant), "rich");

Here, if the constant had the UL suffix, the expression could have type unsigned long, and the variadic call could put a 64-bit value on the stack that would be misinterpreted by printf.

Jens Gustedt
  • 76,821
  • 6
  • 102
  • 177
  • So, are you suggesting that long can be 64 bits on some 32-bit platforms? Then it would be problematic in functions like printf. If so, it would also have a performance impact, because 64-bit integer manipulation would occur when 32 bits is enough. – obareey May 06 '13 at 07:03
  • 1
    yes, this is up to the platform designers to decide. And don't overestimate the performance difference for operations on integers of different widths; on modern machines there is none for the arithmetic itself. The only difference could arise if you are doing a lot of such operations in a loop and access to memory becomes the bottleneck. On the contrary, modern AVX processors do 256-bit operations as fast as 32-bit operations, so generally it is a bad idea to choose types of a specified width. Use `size_t`, `ptrdiff_t` and the like that fit the semantics of what you are doing. – Jens Gustedt May 06 '13 at 08:15
  • Are there any 16 bit compilers where static `const uint32_t var = 0xFFFF0000;` could result in var = 0x0000? What do the C99 and C89 specs say about the size of a constant? – Samuel Jan 06 '15 at 19:34
  • @Samuel: no. The constant always has a type that fits it. And since its value fits into 32 bits, there will never be a loss of information here. – Jens Gustedt Jan 06 '15 at 19:41
  • This is not a good example. The minimum-width integer constant macros might add an integer constant suffix (such as `L`, `LL`, `U`, `UL` or `ULL`), but that won't downcast the argument like a regular cast would. You do `printf( "%" PRIu32 "\n", UINT32_C(42000000000));` and bam, your program is undefined. – Petr Skocik Nov 13 '17 at 18:14
  • @PSkocik, I don't see that. Sure, if you put in a constant that doesn't fit into the type, the behavior might be undefined. But decent compilers warn about that, so this is a minor problem. On the other hand, if you downcast to the type, you get another value and your program is incorrect, and no compiler can ever warn you. I prefer the former. – Jens Gustedt Nov 13 '17 at 23:01
3

They use the smallest integer type with a width of at least N, so UINT32_C(42) is only equivalent to 42UL on systems where int is smaller than 32 bits. On systems where int is 32 bits or greater, UINT32_C(42) is equivalent to 42U. You could even imagine a system where a short is 32 bits wide, in which case UINT32_C(42) would be equivalent to (unsigned short)42.

EDIT: @obareey It seems that most, if not all, implementations of the standard library do not comply with this part of the standard, perhaps because it is impossible. [glibc bug 2841] [glibc commit b7398be5]

Oktalist
  • 14,336
  • 3
  • 43
  • 63
  • But on those platforms it wouldn't matter if short were 32 bits, because it would be the fastest integer with at least 32 bits, or so I thought. Now I'm confused. The standard says "The macro UINTN_C(value) shall expand to an integer constant expression corresponding to the type uint_leastN_t", but a 32-bit ARM compiler defines `#define UINT8_C(x) (x ## u)` and `#define UINT16_C(x) (x ## u)` without any casting. – obareey May 06 '13 at 07:13
  • I wonder how often any of these macros actually get used to make code more portable? The semantics of things like signed-versus-unsigned comparisons have so many weird and quirky corner cases I can't see how one could possibly hope to cope with them all. For example, given `uint32_t n`, what would be the behavior of `(n-1) < UINT32_C(5);` if `n` is zero? How would one best write that expression to be portable on system with `int` larger than, smaller than, or equal to 32 bits? – supercat May 10 '13 at 23:53
0

The macros may add an integer constant suffix such as L, LL, U, UL, or ULL to their argument, which makes them almost equivalent to the corresponding cast, except that a suffix will never downcast.

E.g., UINT32_C(42000000000) (42 billion) on an LLP64 architecture will turn into 42000000000U, which will have type unsigned long long under the usual rules for integer constants (C99 §6.4.4.1), since the value fits neither unsigned int nor unsigned long there. The corresponding cast, on the other hand, ((uint32_t)42000000000), would truncate it down to uint32_t (unsigned int on LLP64).

I can't think of a good use case, but I imagine it could be usable in some generic bit-twiddling macros that need at least X bits to work, but don't want to remove any extra bits if the user passes in something bigger.

Petr Skocik
  • 58,047
  • 6
  • 95
  • 142