4

In one of the answers (and its comments) on How to convert an int to string in C, the following solution is given:

char str[ENOUGH];
sprintf(str, "%d", 42);

In the comments, caf mentions that ENOUGH can be determined at compile time with:

#define ENOUGH ((CHAR_BIT * sizeof(int) - 1) / 3 + 2)

I get the + 2 because you need to be able to display the minus sign and null terminator but what is the logic behind the other part? Specifically CHAR_BIT?

Gerhard Burger

3 Answers

5

If the int type is 32-bit, how many bytes do you need to represent any value (excluding the sign and null terminator)?

If int is 32-bit, the maximum int value is 2147483647 (assuming two's complement); that is 10 digits, so 10 bytes are needed for storage.

To find the number of bits in an int on a given platform (32 in our example) we can use CHAR_BIT * sizeof(int). Remember CHAR_BIT is the number of bits in a C byte and sizeof yields a size in bytes.

Then (32 - 1) / 3 == 10, so 10 bytes are needed. You may also wonder where the value 3 comes from: each decimal digit carries log2(10) ≈ 3.32 bits of information, so dividing the bit count by 3 (rather than 3.32) slightly overestimates the digit count, which errs on the safe side.

ouah
  • Thanks for your explanation, I didn't know `CHAR_BIT` determined the number of bits in an `int` :O I thought it only said something about the number of bits in a char. – Gerhard Burger Sep 15 '14 at 13:43
  • This formula fails for unusual `int` bit widths such as 18. It also fails should it get used for `char` as in `((CHAR_BIT * sizeof(char) - 1) / 3 + 2)`. Instead recommend `((CHAR_BIT * sizeof(what_ever_signed_integer_type) + 1) / 3 + 2)`. – chux - Reinstate Monica Sep 15 '14 at 15:06
1

I assume that ENOUGH is computed in a conservative way because the final +2 takes into account the \0 null terminator (always present, that's fine) and the "-" minus sign (sometimes present). For positive values (and zero) you end up with one unused extra byte.

So, if ENOUGH is NOT computed as the strict minimum number of bytes required to store the value, why not use a fixed value of 12? (10 bytes for the number and 2 bytes for \0 and the sign.)

However:

CHAR_BIT * sizeof(int) is the exact number of bits to store an int in your machine.

-1 is because 1 bit is used for the sign (you "consume" 1 bit of information to store the sign, whatever the representation: two's complement, one's complement, or sign-magnitude)

/3 is because every decimal digit carries a little more than 3 bits of information (log2(10) ≈ 3.32), so dividing the bit count by 3 overestimates the digit count, which is the safe direction for sizing a buffer

Gianluca Ghettini
0

CHAR_BIT is the number of bits in a char (most likely 8), and sizeof(int) is 2 or 4, so ENOUGH is 7 or 12, which is enough space to store any int as a string, including the sign and the null terminator.

mch