A recent question on SO, "Why does allocating a large element on the stack not fail in this specific case?", and a series of other questions about "large arrays on the stack" or "stack size limits" made me search for related limits documented in the standard.
I know that the C standard does not specify a "stack" and that it therefore does not define any limits for such a stack. But I wondered up to which `SIZE_X` in `void foo() { char anArray[SIZE_X]; ... }` the standard guarantees the program to work, and what happens if a program exceeds this `SIZE_X`.
I found the following definition, but I'm not sure whether it actually guarantees a specific supported size for objects with automatic storage duration (cf. this online C11 standard draft):
> 5.2.4.1 Translation limits
>
> (1) The implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the following limits:
>
> ...
>
> - 65535 bytes in an object (in a hosted environment only)
Does this mean that an implementation must support a value of up to 65535 for `SIZE_X` in a function like `void foo() { char anArray[SIZE_X]; ... }`, and that any value larger than 65535 for `SIZE_X` is undefined behaviour?
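To make the question concrete, this is the kind of code I have in mind (just a sketch; 65535 is the figure from 5.2.4.1 quoted above, and whether *this* program is among the ones the implementation is required to translate and execute is exactly what is unclear to me):

```c
void foo(void) {
    char ok[65535];      /* exactly the documented translation limit */
    char tooBig[65536];  /* one byte beyond it: is this covered by anything? */
    ok[0] = 'x';
    tooBig[0] = 'y';
}

int main(void) {
    foo();
    return 0;
}
```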
For the heap, a call to `malloc` returning `NULL` lets me control an attempt to request a "too large" object, as in the sketch below.
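For illustration, this is what I mean by being able to control it on the heap side (the requested size is an arbitrary, deliberately huge value):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *p = malloc((size_t)-1 / 2);  /* deliberately huge request */
    if (p == NULL) {
        /* the failure is reported and can be handled gracefully */
        fprintf(stderr, "request too large for this environment\n");
        return 1;
    }
    free(p);
    return 0;
}
```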
But how can I control the behaviour of the program if it requests a too-large object with automatic storage duration, specifically if such a maximum size is not documented anywhere, e.g. in some `limits.h`? So, is it possible to write a portable function like `checkLimits()` that supports an "entry barrier" like this:
```c
#include <stdio.h>

int checkLimits(void);  /* the function in question */

int main(void) {
    if (!checkLimits()) {
        printf("program execution for sure not supported in this environment.\n");
        return 1;
    } else {
        printf("might work. wish you good luck!\n");
    }
    /* ... */
    return 0;
}
```
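The closest thing I could come up with is a sketch like the following, but it relies on POSIX `getrlimit()` rather than on anything the C standard guarantees, and it only reports the configured size of the whole stack, not the headroom actually left at the point of the call:

```c
#include <sys/resource.h>  /* POSIX, not ISO C */

/* Non-portable sketch: succeeds if the soft stack limit is at least
   some arbitrary threshold; says nothing about remaining headroom. */
int checkLimits(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return 0;                       /* cannot even query the limit */
    return rl.rlim_cur >= 1024L * 1024; /* arbitrary threshold: 1 MiB */
}
```

Is there a way to express such a check, or something equivalent, in portable ISO C?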