As @dbush describes in his answer, an array is defined to be a contiguously allocated non-empty set of objects of the element type (C17 6.2.5/20). Clearly, then, `malloc( sizeof( int ) * 5 )` does not allocate enough space for an `int[10]`.
But I found it difficult to formally support the last part of that answer, claiming that the size differential makes (for example) `(*foo)[4]` have undefined behavior. That conclusion seems plausible, but where does the standard actually say so?
One of the main problems here is that (dynamically) allocated objects have no declared type, only, under some circumstances, an effective type determined by how they are and have been accessed (C17 6.5/6 and footnote 88). We do know that on success, `malloc(n)` returns a pointer to an object of size `n` (C17 7.22.3.4/2), but how do we attribute undefined behavior specifically to the association with that object of an effective type describing objects of size larger than `n`?
I ultimately decided that the best way to connect the dots is as follows. Suppose that `o` is an allocated object of size `n`, `T` is a complete type having `sizeof(T) > n`, and `o` is read or written via an lvalue of type `T`. Then paragraph 6.5/6 attributes effective type `T` to object `o`, but because `o`'s size is insufficient, we must conclude that its representation constitutes a trap representation of type `T` (C17 3.19.4). Paragraph 6.2.6.1/5 then reiterates the definition of "trap representation" and gets us to where we want to go:
> Certain object representations need not represent a value of the object type. If the stored value of an object has such a representation and is read by an lvalue expression that does not have character type, **the behavior is undefined**. If such a representation is produced by a side effect that modifies all or any part of the object by an lvalue expression that does not have character type, **the behavior is undefined**. Such a representation is called a *trap representation*.

(Emphasis added.)