Accessing beyond the end of an array is behaviour undefined by the C++ standard. It is your job as a programmer to ensure that never happens.
If it does happen, a valid C++ implementation can do anything without violating the C++ standard; it can format your hard drive, email your browser history and credit card numbers to Antarctica, or even time travel. Amusingly, the last one is not a joke.
In practice, C++ compilers implement the abstract machine on your hardware on top of a naive, page-based memory provider that your OS supplies. The C/C++ runtime manages a heap where it slices the OS-provided pages into the chunks of memory you ask for. This means your process usually owns the address space around the addresses returned by new; often the data immediately surrounding the returned block contains bookkeeping information the C/C++ runtime uses when the address is recycled.
By writing there, you are overwriting some other data structure in your program's memory space, or using memory you did not claim you were going to write to. In a larger program this will pretty reliably result in heap corruption, and a seemingly random crash at an unexpected and seemingly unrelated part of your code. In your toy program, the memory you corrupted wasn't used, so no obvious symptoms occurred and it "seemed to work". Welcome to undefined behaviour.
All of this is why it is not considered a good idea (in at least some circles) to use new/malloc and pointer arithmetic "in the raw", but rather to dress them up in patterns that make it far less likely you'll cause heap corruption: things like std::vector and range-based for loops. C/C++ programmers' failure to keep memory access under control has inspired dozens of highly successful and slower languages whose primary virtue is that accessing an invalid pointer is not undefined behaviour.