The example you are looking for is any kind of compile-time calculation that causes UB, e.g., an overflow of a signed integer. It can happen, for instance, when constexpr, template metaprogramming, or optimizations are involved. There is no reason why such UB should be propagated into runtime. An example:
// Recursive instantiation: counts up from N until the base case at -100.
template <signed char N>
struct inc {
    static const signed char value = 1 + inc<N + 1>::value;
};

// Base case that stops the recursion.
template <>
struct inc<-100> {
    static const signed char value = 1;
};

static const signed char I1 = inc<-110>::value;  // fine: reaches the base case, value is 11
static const signed char I2 = inc<110>::value;   // UB: never reaches -100, overflows signed char past 127
The signed integer overflow, and therefore the UB, obviously happens at compile time, when the templates are recursively instantiated.
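The same condition can also be hit during constexpr evaluation. Below is a minimal sketch (the names add, ok, and bad are mine, not part of the original example): because an evaluation that runs into UB is not a constant expression, the compiler has to diagnose the overflow instead of letting it reach runtime.

#include <limits>

// Signed overflow in a + b is UB; in a constant expression it must be diagnosed.
constexpr int add(int a, int b) { return a + b; }

constexpr int ok  = add(1, 2);                                // fine, evaluates to 3
constexpr int bad = add(std::numeric_limits<int>::max(), 1);  // ill-formed: rejected at compile time

Compilers reject the second initialization with a diagnostic, so the UB never makes it into a compiled program.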
Anyway, IMO, the main reason is simplicity. There is only one notion of UB defined by the Standard, which is simpler than defining many kinds of UB and then saying which one applies in which situation.
By pure logic, if the cause of UB doesn't exist until runtime (such as dereferencing an invalid pointer), then the UB cannot apply to compile time, for example when compiling the following source file:
#include <iostream>
void f(int* p) { std::cout << *p; }
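Compiling this file is perfectly fine; the UB can only arise when the function is actually called with an invalid pointer at runtime. A minimal usage sketch (main and the variable x are mine):

#include <iostream>

void f(int* p) { std::cout << *p; }  // same function as above

int main() {
    int x = 42;
    f(&x);          // fine: the pointer is valid, prints 42
    // f(nullptr);  // UB, but only if this call is actually executed at runtime
}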
UPDATE
My understanding of UB is as follows: if a condition for UB is met, then there is no requirement on the behavior; see [defns.undefined]. Some such conditions (e.g., signed integer overflow) can occur at compile time as well as at runtime. Other conditions (e.g., dereferencing an invalid pointer) cannot occur until runtime.