Why does this notion of undefined behaviour exist?
To allow the language and library to be implemented as efficiently as possible on a variety of different computer architectures (and, in the case of C, perhaps also to keep implementations simple).
Why not make using these operations outside their intended use a compilation error instead of undefined behaviour? Because in most cases of undefined behaviour it is impossible, or prohibitively expensive in resources, to prove at compile time that the undefined behaviour exists, for all programs in general.
Some cases can be proven for some programs, but it is not possible to specify exhaustively which cases those are, so the standard does not attempt to. Nevertheless, some compilers are smart enough to recognize simple cases of UB, and those compilers will warn the programmer about it. Example:
int f() {
    int arr[10];
    return arr[10]; // out of bounds: the last valid index is 9
}
This code has undefined behaviour. A particular version of GCC that I tested shows:
warning: array subscript 10 is above array bounds of 'int [10]' [-Warray-bounds]
It's hardly a good idea to ignore a warning like this.
A more typical alternative to undefined behaviour is defined error handling, such as throwing an exception (compare Java, for example, where accessing a null reference throws a java.lang.NullPointerException). But checking the pre-conditions of well-defined behaviour is slower than not checking them.
By not checking pre-conditions, the language gives programmers the option of proving correctness themselves, thereby avoiding the runtime overhead of the check in a program that has been proven not to need it. Indeed, with this power comes great responsibility.
These days the burden of proving a program's well-definedness can be somewhat alleviated by using tools such as the sanitizers in GCC and Clang (for example -fsanitize=undefined or -fsanitize=address), which add some of those runtime checks and neatly terminate the program when a check fails.