17

While researching how to write cross-platform printf() format strings in C (that is, taking into account the number of bits each integer argument to printf() is expected to have), I ran across this section of the Wikipedia article on printf(). The article discusses non-standard options that can be passed to printf() format strings, such as this one, which seems to be a Microsoft-specific extension:

printf("%I32d\n", my32bitInt);

It goes on to state that:

ISO C99 includes the inttypes.h header file that includes a number of macros for use in platform-independent printf coding.

... and then lists a set of macros that can be found in said header. Looking at the header file, to use them I would have to write:

 printf("%"PRId32"\n", my32bitInt);

My question is: am I missing something? Is this really the standard C99 way to do it? If so, why? (Though I'm not surprised that I have never seen code that uses the format strings this way, since it seems so cumbersome...)
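
For reference, here is what a minimal, self-contained use of those macros looks like (my own sketch; only the variable name is taken from above):

#include <inttypes.h> /* PRId32 and friends; also pulls in the fixed-width types */
#include <stdio.h>

int main(void)
{
    int32_t my32bitInt = 42;

    /* PRId32 expands to a string literal (such as "d"), which string
       concatenation pastes into the surrounding format string. */
    printf("%" PRId32 "\n", my32bitInt);
    return 0;
}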

hippietrail
mpontillo

5 Answers

16

The C Rationale seems to imply that <inttypes.h> is standardizing existing practice:

<inttypes.h> was derived from the header of the same name found on several existing 64-bit systems.

but the remainder of the text doesn't discuss those macros, and I don't remember them being existing practice at the time.

What follows is just speculation, but it is informed by experience of how standardization committees work.

One advantage of the C99 macros over standardizing additional format specifiers for printf (note that C99 did also add some) is that providing <inttypes.h> and <stdint.h>, when you already have an implementation supporting the required features in an implementation-specific way, is just a matter of writing two files with the adequate typedefs and macros. That reduces the cost of making existing implementations conformant, reduces the risk of breaking existing programs that relied on implementation-specific features (the standard way doesn't interfere with them), and facilitates porting conformant programs to implementations that don't ship these headers (the headers can be provided with the program). Additionally, if the implementation-specific approaches already varied at the time, it doesn't favor one implementation over another.
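
To illustrate the point about cost: on a typical ILP32 platform with a 64-bit long long, a conforming <inttypes.h> could be little more than a file like the following. This is a simplified, hypothetical sketch, not any particular vendor's header:

/* Hypothetical fragment of <inttypes.h> for an implementation where
   int is 32 bits and long long is 64 bits; a real header would take
   its typedefs from <stdint.h>. */
typedef int                int32_t;
typedef unsigned int       uint32_t;
typedef long long          int64_t;
typedef unsigned long long uint64_t;

#define PRId32 "d"    /* "%" PRId32 concatenates to "%d"   */
#define PRIu32 "u"    /* "%" PRIu32 concatenates to "%u"   */
#define PRId64 "lld"  /* "%" PRId64 concatenates to "%lld" */
#define PRIu64 "llu"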

AProgrammer
8

Correct, this is how the C99 standard says you should use them. If you want truly portable code that is 100% standards-conformant to the letter, you should always print an int using "%d" and an int32_t using "%" PRId32.

Most people won't bother, though, since there are very few cases where failure to do so would matter. Unless you're porting your code to Win16 or DOS, you can assume that sizeof(int32_t) <= sizeof(int), so it's harmless to accidentally printf an int32_t as an int. Likewise, a long long is pretty much universally 64 bits (although it is not guaranteed to be so), so printing an int64_t as a long long (e.g. with a %llx specifier) is safe as well.
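
For instance, under those common assumptions, both lines of each pair below print the same thing (the strictly conformant form first, the relaxed form second; the values are my own):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t a = 123;
    int64_t b = 456;

    printf("%" PRId32 "\n", a);     /* strictly conformant C99 */
    printf("%d\n", (int)a);         /* fine wherever int is at least 32 bits */

    printf("%" PRId64 "\n", b);     /* strictly conformant C99 */
    printf("%lld\n", (long long)b); /* long long is at least 64 bits,
                                       so the cast never truncates */
    return 0;
}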

The types int_fast32_t, int_least32_t, et al are hardly ever used, so you can imagine that their corresponding format specifiers are used even more rarely.

Adam Rosenfield
  • "My question is: am I missing something? Is this really the standard C99 way to do it? **If so, why?**" – John Kugelman Jul 26 '09 at 04:43
  • Though I would really like to know the "why" part, I realize that it is somewhat of a rhetorical question. I doubt anyone would know unless they attended the discussions at the standards organizations when C99 was being drafted. I am imagining a bunch of engineers discussing the merits of requiring changes to printf() when they already had format strings to print just about anything. They probably just decided to make it a #define and be done with it. So unless someone else has some profound insight here, I will likely accept this answer; it answers most of my question. – mpontillo Jul 26 '09 at 05:47
  • 1
    Maybe I write too much embedded code, but I'm a big fan of using the `PRIx##` macros in my printf strings. I only ever assume that `int` is 16 bits or greater. Once you've done it awhile, you get used to it. – tomlogic Apr 06 '11 at 15:32
  • Embedded software uses C all the time in 2013 where sizeof(int) == 1 or 2. This code also needs a level of portability. – chux - Reinstate Monica May 28 '13 at 14:19
2

You can always cast upwards and use %jd, which is the intmax_t format specifier.

printf("%jd\n", (intmax_t)(-2));

I used intmax_t to show that any intXX_t type can be printed this way, but for the int32_t case simply casting to long and using %ld is better.
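
A sketch of that simpler alternative (my own example value):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t x = -2;

    /* long is required to be at least 32 bits, so the cast can never
       truncate an int32_t, and %ld then matches the argument type. */
    printf("%ld\n", (long)x);
    return 0;
}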

u0b34a0f6ae
  • +1 for the interesting idea, but I'm not sure I like the idea of potentially casting a 32-bit integer to a 64-bit+ value on an embedded system... just feels wrong somehow. ;-) – mpontillo Nov 07 '11 at 05:27
  • when you are using `printf`, wasting 4 bytes for temporary storage (if at all) is the least of your performance problems! ;) – lambdapower May 17 '13 at 06:53
1

I can only speculate about why. I like AProgrammer's answer above, but there's one aspect overlooked: what are you going to add to printf as a format modifier? There are already two different ways that numbers are used in a printf format string (width and precision). Adding a third kind of number to say how many bits of precision are in the argument would be great, but where are you going to put it without confusing people? Unfortunately, one of the flaws in C is that printf was not designed to be extensible.

The macros are awful, but when you have to write code that is portable across 32-bit and 64-bit platforms, they are a godsend. Definitely saved my bacon.
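
One redeeming property, for what it's worth: because the macros expand to ordinary string literals, they compose with the existing flag, width, and precision syntax, as in this sketch:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t v = 0xBEEF;

    /* Flags and field width go in front of the macro exactly as they
       would go in front of a plain conversion specifier. */
    printf("%08" PRIx32 "\n", v); /* prints 0000beef */
    return 0;
}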

I think the answer to your "why" question is either

  • Nobody could think of a better way to do it, or
  • The standards committee couldn't agree on anything they felt was clearly better.
Norman Ramsey
  • Good point, but Microsoft added a way to do it using a format modifier (which is ugly, but it works). One way could have been to decouple the current "d = int, u = unsigned int" thinking and re-assign the type specifiers such that "d = int32_t, u = uint32_t". That might have been less disruptive to C than the Java route (specifying the bit width of every type explicitly, which they maybe should have done initially...). But it still would have required changes to printf() implementations. I think they just took the easy way out. – mpontillo Jul 27 '09 at 00:55
  • @Mike: do you have a pointer to the Microsoft way? I hate the macros. And I have most of a printf implementation kicking around already. – Norman Ramsey Jul 27 '09 at 02:51
  • It was mentioned on the Wikipedia article I linked, but here it is straight from the source: http://msdn.microsoft.com/en-us/library/tcxf1dw6.aspx – mpontillo Jul 27 '09 at 15:19
  • IMHO, ANSI should have (and C still should) allow a `...` specifier to specify type coercions for different kinds of arguments. If a library's `printf` implementation specified in `stdio.h` that all floating-point arguments must be promoted to `long double`, all integer arguments to `int64_t` or `uint64_t`, and all pointers to `void*`, calling it might be less efficient than under the "normal" rules, but its behavior would be independent of the type of `int` [except when `int` is bigger than 64 bits]. Had C89 done that, floating-point math might not have become so degraded in the years since. – supercat Jun 24 '15 at 15:41
1

Another possibility: backward compatibility. If new format specifiers or options had been added to printf, a format string in some pre-C99 code might have been interpreted differently.

With the C99 change, you're not changing the functionality of printf.

tomlogic