I think you're confusing string literals having a `'\0'` (null terminator) at the end with arrays in general. Arrays have a compile-time length known to the compiler¹. `sizeof` is an operator which yields the size based on the array length and the size of the array's element type.
So when someone writes `int a[] = {1, 2, 3};`, there's no null-terminating character added at the end, and the number of elements is deduced as 3 by the compiler. On a platform where `sizeof(int)` is 4, you'll get `sizeof(a)` as 12.
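
A minimal sketch of this (the 12 assumes a platform with 4-byte `int`):

```c
#include <stdio.h>

int main(void) {
    int a[] = {1, 2, 3};                       /* length deduced as 3; no '\0' appended */
    printf("%zu\n", sizeof(a));                /* 12 where sizeof(int) == 4 */
    printf("%zu\n", sizeof(a) / sizeof(a[0])); /* 3: the element count, portably */
    return 0;
}
```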
The confusion arises because for `char b[] = "abc";` the element count would be 4, since all string literals have a `'\0'` appended automatically, i.e. they are null-terminated automatically. It is not the `sizeof` operator that checks for this; it simply gives `4 * sizeof(char)`, since all that matters to `sizeof` is the compile-time array length, which is 4, i.e. 1 + the number of characters explicitly written in the string literal, due to the nature of string literals in C.
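
A short sketch of the difference between the array size and the string length (`strlen` counts up to, but not including, the terminator):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char b[] = "abc";           /* array length deduced as 4: 'a', 'b', 'c', '\0' */
    printf("%zu\n", sizeof(b)); /* 4: includes the implicit '\0' */
    printf("%zu\n", strlen(b)); /* 3: stops before the '\0' */
    return 0;
}
```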
However, a character array initialised not with a string literal but with character literals doesn't have this quirk. Thus if `char c[] = {'a', 'b', 'c'};`, then `sizeof(c)` would return 3 and NOT 4, as it is not a string literal and no null-terminating character is added. Again, the `sizeof` operator (not function) does this deduction at compile time².
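
Again a short sketch; note that calling `strlen` on such an array would be undefined behaviour, since there is no terminator to find:

```c
#include <stdio.h>

int main(void) {
    char c[] = {'a', 'b', 'c'}; /* length deduced as 3; no '\0' is added */
    printf("%zu\n", sizeof(c)); /* 3 */
    /* strlen(c) would be undefined behaviour: no '\0' to stop at */
    return 0;
}
```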
Finally, how the `sizeof` operator itself is implemented to do this is an implementation detail not mandated by the standard. The standard talks about conditions and results; how they're achieved by implementations isn't a concern of the standard (or of anyone except the developers who implement it).
¹ C99 introduced variable-length arrays (VLAs), which allow an array to have a size determined at run-time.
² Only for VLAs is the operand of the `sizeof` operator evaluated, and the result computed, at run-time.
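
A minimal sketch of that run-time case (requires C99 VLA support):

```c
#include <stdio.h>

int main(void) {
    size_t n;
    if (scanf("%zu", &n) != 1 || n == 0) return 1;
    int v[n];                   /* VLA: length known only at run-time */
    printf("%zu\n", sizeof(v)); /* computed at run-time: n * sizeof(int) */
    return 0;
}
```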