There's probably already an answer to this somewhere, but I can't find it.
As noted in this question: Accessing an array out of bounds gives no error, why?, C++ does not enforce array bounds; out-of-bounds access is simply undefined behavior. What I'm interested in is just how far this behavior can reach.
So suppose I write some simple program:
#include <iostream>

int main() {
    int* a = new int[1];
    long large_number = 9223372036854775807L; // 2**63 - 1
    for (long i = 0L; i < large_number; i++) {
        std::cout << i << " " << a[i] << std::endl;
    }
    return 0;
}
This keeps printing whatever 32-bit value happens to be stored next in memory (assuming 32-bit ints, obviously). When I run it on my machine, the program segfaults when i is around 30,000, which I'm guessing is roughly where the memory mapped for my program ends.
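For what it's worth, here is a small sketch (Linux-specific, since it relies on /proc/self/maps) of how I imagine one could check that guess: print the address of the allocation next to the process's memory map, and see how far the region containing it extends.

#include <fstream>
#include <iostream>
#include <string>

int main() {
    int* a = new int[1];
    // Where the single int actually lives.
    std::cout << "a = " << static_cast<void*>(a) << std::endl;
    // Dump the process's memory map (Linux-specific) to compare against.
    std::ifstream maps("/proc/self/maps");
    std::string line;
    while (std::getline(maps, line)) {
        std::cout << line << std::endl;
    }
    delete[] a;
    return 0;
}

Presumably the original program keeps printing garbage until a[i] walks off the end of whichever mapped region a lives in, and only then segfaults.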
This brings me to my question, which is three-fold:

1. What is preventing me from continuing to read (not write) values past this point? Is that protection system-specific? Compiler-specific?
2. If I were clever with how I manipulate my pointer (see the sketch after this list for the kind of thing I mean), could I read or write values outside my program's own memory (without, obviously, having direct/normal access to those values)?
3. I'm running all of this in a virtual machine. Can I read or write memory on my host machine? (If (2) is a no, then this is a no as well.)
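To make (2) concrete, here is a minimal sketch of the kind of pointer manipulation I have in mind (POSIX-specific; the address 0x12345678 is arbitrary and purely hypothetical): point a pointer at memory I never allocated, try to read it, and catch the SIGSEGV so the program can report the failure instead of dying.

#include <setjmp.h>
#include <signal.h>
#include <cstring>
#include <iostream>

static sigjmp_buf jump_buffer;

// Jump back to the probe site instead of crashing.
// (longjmp-ing out of a SIGSEGV handler is itself platform-dependent behavior.)
extern "C" void segv_handler(int) {
    siglongjmp(jump_buffer, 1);
}

int main() {
    struct sigaction sa;
    std::memset(&sa, 0, sizeof(sa));
    sa.sa_handler = segv_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_NODEFER; // allow the handler to fire again if more addresses are probed
    sigaction(SIGSEGV, &sa, 0);

    // Arbitrary, hypothetical address; nothing of mine was ever allocated here.
    volatile int* p = reinterpret_cast<volatile int*>(0x12345678);

    if (sigsetjmp(jump_buffer, 1) == 0) {
        std::cout << "read " << *p << std::endl;
    } else {
        std::cout << "SIGSEGV: that address is not mapped in this process" << std::endl;
    }
    return 0;
}

My naive expectation is that any address outside my process's own mappings behaves like this, but that is exactly what I'd like to have confirmed.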
Note that I'm running g++ 5.3.1, without C++11, on an Ubuntu guest in VirtualBox with a Windows host machine.
Also, I recognize this question could be considered a security issue (reading/writing memory). I'm certainly not intending anything malicious, but if this is a problem, let me know and I will be glad to close the question.
EDIT: The following question appears related and interesting: Accessing outside the memory allocated by the program. (Accessing other app's memory.) There doesn't seem to be a consensus there on whether or not a program can read outside of its virtual memory space, though.