Sometimes you will not get a runtime error when you access an array out of range in C. For example:

int main(void) {
    char array[1024];
    char *ptr = array;
    *(ptr - 10) = 'a';    /* out of bounds, but often no crash */
    /* or */
    *(ptr - 4096) = 'a';  /* may still not crash */
    return 0;
}
Assuming the array is on the stack, I am curious: what is the minimum SIZE for which *(ptr - SIZE) = 'a' will throw a segmentation fault in every situation?
For example:

*(ptr - 4096 * 1024) = 'a'  // this will always throw a segmentation fault
*(ptr - 4096 * 8) = 'a'     // sometimes this will not throw a segmentation fault
Edit (2017/1/10):

I am sorry for not stating the question clearly. :(

What I want to know is not just a vague "it is undefined behavior". Assume the stack is at high addresses and the heap is at low addresses, so the memory layout is:
high  *********
      * stack *  <- my array goes here
      *********
      *       *
      *       *  <- ptr may point here
      *       *
      *********
      * heap  *
      *********
      *  ???  *  <- ptr may point here
      *********
low
Of course I know that:

"The problem is that C/C++ doesn't actually do any bounds checking on arrays. It depends on the OS to ensure that you are accessing valid memory."

So the write above will cause the OS kernel to invoke do_page_fault, which looks up the vma containing the faulting address and checks whether vma->vm_start <= address.
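For context, the kernel-side decision looks roughly like this (a simplified pseudocode paraphrase of Linux's page-fault path, not the actual source):

```c
/* Pseudocode: handling a faulting user-space access at `address` */
vma = find_vma(mm, address);          /* first vma with vm_end > address */
if (!vma)
    goto bad_area;                    /* no mapping at all -> SIGSEGV */
if (vma->vm_start <= address)
    goto good_area;                   /* address is inside an existing mapping */
if (!(vma->vm_flags & VM_GROWSDOWN))
    goto bad_area;                    /* gap below a non-stack vma -> SIGSEGV */
if (expand_stack(vma, address))       /* try to grow the stack down to address */
    goto bad_area;                    /* beyond the stack limit -> SIGSEGV */
/* stack grown: the faulting access is retried and now succeeds */
```

In addition, x86 kernels historically refused user accesses more than roughly 64 KB below the saved user stack pointer before even attempting expand_stack, which is one reason large negative offsets from a stack array fault reliably while small ones may silently grow the stack instead.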
Now, let's come back to the question I asked: what is the minimum SIZE that makes *(ptr - SIZE) = 'a' throw a segmentation fault in every situation? In other words, I want to know how large an offset do_page_fault will tolerate. This has nothing to do with the C compiler; it is about how your OS protects your memory.