Checking every array access would require at least two extra instructions per indexed operation - more if you want to catch `*s++ = 0;`, where `s` is a pointer into some array (not only do we need to track the size, but also where the pointer's region started), and even more if the data was dynamically allocated (because now we also need to track where the original allocation was and how large it was). I've implemented array bounds checks in my Pascal compiler, and they add approximately 15% overhead on average - 300% in some cases, 5-10% in others. They only work for fixed-size arrays, not dynamically allocated memory, because of the problems described above. The 5-15% isn't really a big problem for most code. The 300% cases are a problem - and it would be even worse if dynamically allocated memory were supported too!
The above covers the simple cases, where we know where the memory "came from". What if you have a function that takes a pointer to something? Then every pointer needs extra storage to record the size of the memory it points into - and that data would have to be read on every memory access, followed by a compare and a branch. Quite often, a pointer access is only a single instruction, so we have now added at least three more (and a branch is never a good thing). And of course, that size data has to be filled in before it is used - ideally in a way that doesn't ruin people's assumptions about data layout in memory...
This is why running code under `valgrind` and similar tools is around 10 times slower than running at "full speed".
Adding a bit of "padding" (aka a "crumple zone") to each memory allocation, and checking at `delete`-time that the padding is still intact, is less intrusive and thus the preferred method in most cases - it costs only a few percent in itself, and it still catches "your code is not behaving as you expect", even if it doesn't catch it IMMEDIATELY.