Looking at this question: Why does a C/C++ compiler need to know the size of an array at compile time?, it occurred to me that compiler implementers have had some time to get their feet wet by now (VLAs are part of the C99 standard, which is 10 years old) and should be able to provide efficient implementations.
However, judging from the answers, VLAs still seem to be considered costly.
This surprises me somewhat.
Of course, I understand that a static offset is much better than a dynamic one in terms of performance. And unlike one suggestion, I would not have the compiler perform a heap allocation for the array, since that would probably cost even more [I have not measured it ;)].
But I am still surprised at the supposed cost:
- if there is no VLA in a function, then there is no cost at all, as far as I can see.
- if there is a single VLA, the compiler can place it either before or after all the other variables, and thus keep a static offset for most of the stack frame (or so it seems to me, but I am not well-versed in stack management; see the sketch after this list).
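To illustrate the single-VLA case, here is a minimal C99 sketch. The frame layout in the comment is my assumption about what a compiler *could* do, not a claim about what gcc or VC++ actually emit:

```c
#include <stddef.h>

/* Assumed frame layout (stack growing downward):
 *
 *   [ saved registers / return address ]  <- static offsets
 *   [ acc, i                           ]  <- static offsets
 *   [ buf[0] .. buf[n-1]               ]  <- the only dynamic part,
 *                                            carved out alloca-style by
 *                                            adjusting the stack pointer
 */
double sum_squares(size_t n, const double *src)
{
    double acc = 0.0;   /* fixed-size local: static offset */
    double buf[n];      /* VLA: sized at run time */

    for (size_t i = 0; i < n; ++i)
        buf[i] = src[i] * src[i];
    for (size_t i = 0; i < n; ++i)
        acc += buf[i];
    return acc;
}
```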
The question of multiple VLAs arises, of course, and I was wondering whether a dedicated VLA stack would work. This means that a VLA would be represented by a count and a pointer (both of known size), with the actual memory taken from a secondary stack used only for this purpose (and thus really a stack too).
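To make this concrete, here is a hypothetical sketch of such a secondary stack (all names are invented for this illustration, and alignment and overflow handling are omitted):

```c
#include <stddef.h>

/* Hypothetical secondary stack holding all VLA element storage.
 * In the ordinary frame, a VLA is then just a fixed-size
 * (count, pointer) pair, so every local keeps a static offset. */
static unsigned char vla_storage[1 << 20];
static size_t vla_top = 0;

typedef struct {
    size_t count;   /* element count: fixed, known size    */
    void  *data;    /* points into vla_storage: known size */
} vla_ref;

/* Entering a block with a VLA: bump the secondary stack. */
static vla_ref vla_push(size_t count, size_t elem_size)
{
    vla_ref r = { count, &vla_storage[vla_top] };
    vla_top += count * elem_size;
    return r;
}

/* Leaving the block: pop in strict LIFO order. */
static void vla_pop(vla_ref r, size_t elem_size)
{
    vla_top -= r.count * elem_size;
}
```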
[rephrasing]
How are VLAs implemented in gcc / VC++?
Is the cost really that significant?
[end rephrasing]
It seems to me that a VLA can only be better than using, say, a vector, even with present implementations, since you do not incur the cost of a dynamic allocation (at the price of not being resizable).
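For comparison, this is the difference I have in mind; the heap version below stands in for what a vector roughly does underneath (a vector adds its own bookkeeping on top):

```c
#include <stdlib.h>

/* Both functions obtain an n-element scratch buffer (assume n > 0). */
void with_vla(size_t n)
{
    int scratch[n];     /* stack: essentially one stack-pointer bump */
    scratch[0] = 0;
    /* ... use scratch; reclaimed automatically on return ... */
}

void with_heap(size_t n)
{
    int *scratch = malloc(n * sizeof *scratch);  /* dynamic allocation */
    if (scratch == NULL)
        return;
    scratch[0] = 0;
    /* ... use scratch ... */
    free(scratch);
}
```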
EDIT:
There is a partial answer here; however, comparing VLAs to traditional arrays seems unfair. If we knew the size beforehand, we would not need a VLA in the first place. In the same question, AndreyT gave some pointers regarding the implementation, but it is not as precise as I would like.