Why is it
char* itoa(int value, char* str, int base);
instead of
char (*itoa(int value, char (*str)[sizeof(int) * CHAR_BIT + 1], int base))[sizeof(int) * CHAR_BIT + 1]
which would protect programmers from buffer overruns?
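For illustration, a minimal sketch of how the array-pointer version would catch a too-small buffer at compile time; the name itoa_arr and its trivial body are placeholders so the example compiles, not an existing interface:

    #include <limits.h>

    #define INT_N (sizeof(int) * CHAR_BIT + 1)

    /* Stub with the array-pointer signature from the question. */
    char (*itoa_arr(int value, char (*str)[INT_N], int base))[INT_N] {
        (void)value;
        (void)base;
        (*str)[0] = '\0';
        return str;
    }

    int main(void) {
        char big[INT_N];          /* exactly INT_N bytes */
        char small[10];
        itoa_arr(0, &big, 10);    /* accepted: &big has type char (*)[INT_N] */
        /* itoa_arr(0, &small, 10);  rejected at compile time:
           char (*)[10] is not compatible with char (*)[INT_N] */
        (void)small;
        return 0;
    }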
Note that itoa() is not a standard C library function, and implementations/signatures vary.
itoa(int value, char* str, int base) is a building-block function some libraries provide. It is meant to be efficient when used by a knowledgeable coder who ensures adequate buffer space.
Yet itoa(int value, char* str, int base); lacks a buffer-size safeguard, and it is easy to miscalculate the worst case, as OP did.
A suggested alternative is an itoa(char *dest, size_t size, int a, int base) implementation that gives up a little efficiency in exchange for buffer-size checking. Note: the caller should check the return value.
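A minimal sketch of such a size-checked variant, assuming the name itoa_s and a NULL return on error (neither is an existing library interface):

    #include <limits.h>
    #include <string.h>

    /* Sketch only: converts a into dest in the given base, refusing to
     * write more than size bytes. Returns dest, or NULL on error. */
    char *itoa_s(char *dest, size_t size, int a, int base) {
        /* Worst case: base 2 needs one digit per value bit, plus sign and '\0'. */
        char buf[sizeof a * CHAR_BIT + 2];
        char *p = &buf[sizeof buf - 1];
        size_t used;
        /* Convert with a negative accumulator so INT_MIN does not overflow. */
        int neg = a < 0;
        int value = neg ? a : -a;

        if (dest == NULL || base < 2 || base > 36) {
            return NULL;
        }
        *p = '\0';
        do {
            *--p = "0123456789abcdefghijklmnopqrstuvwxyz"[-(value % base)];
            value /= base;
        } while (value != 0);
        if (neg) {
            *--p = '-';
        }
        used = (size_t)(&buf[sizeof buf] - p);  /* length including the '\0' */
        if (used > size) {
            return NULL;  /* would not fit: report instead of overrunning dest */
        }
        return memcpy(dest, p, used);
    }

A caller would then write something like char buf[sizeof(int) * CHAR_BIT + 2]; if (itoa_s(buf, sizeof buf, n, 2) == NULL) { /* handle the error */ }.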
Care must be taken when trying to predict maximum string needs. Consider the following, as suggested by OP:
#define INT_N (sizeof(int) * CHAR_BIT + 1)
char (*itoa(int value, char (*str)[INT_N], int base))[INT_N]
Conversion of a signed 32-bit INT_MIN to base 2 is
"-10000000000000000000000000000000"
INT_N is 33, yet the buffer size needed is 34: 32 digits, the sign, and the null terminator.
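A quick way to double-check that arithmetic, assuming a 32-bit int (the counting loop below is only for illustration):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned value = (unsigned)INT_MIN;  /* magnitude 2^31 on a 32-bit int */
        size_t digits = 0;
        while (value) {
            digits++;
            value /= 2;
        }
        printf("digits: %zu\n", digits);                   /* 32 */
        printf("with sign: %zu\n", digits + 1);            /* 33, all that INT_N allows */
        printf("with sign and '\\0': %zu\n", digits + 2);  /* 34 bytes actually needed */
        return 0;
    }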
itoa isn't standard, so any library can declare it as it pleases. It is not very meaningful to ask for a rationale behind functions that aren't standardized in the first place.
And those functions that are standardized mostly got into the standard by chance. They basically took every function available in Unix, wrote the names on slips of paper, tossed them in a hat, and whimsically drew a hundred or so. And so we got a bunch of diverse functions of varied quality and usefulness. I'd say the majority of them are either unsafe or bad style. No rationale exists.
As for the specific case:
The reason why fixed array pointers weren't used is obviously that most library functions in C, standard or not, work on null-terminated strings of variable length. If some function behaved differently, it would stand out. Around the time C was launched, Unix was apparently moving away from fixed-length strings toward null-terminated strings.
Furthermore, the rules about pointer conversions were pretty much non-existent initially, so using array pointers probably wouldn't have added any safety back when all these functions were cooked up in the 1970s. There wasn't even a void pointer type.
Regarding buffer overruns and other error controls: when writing C, you can place error controls either in the function or in the caller. The most common practice in C is to leave error handling to the caller, which is perfectly fine as long as it is documented. For this there exists a rationale, namely "the spirit of C", which is to always put performance first. Error handling costs performance, and in many use cases the caller knows the nature of the data in advance, making such error controls superfluous.