I'll try to explain what "no diagnostic required" means for behaviours categorized as undefined behaviour (UB).
By saying that UB "requires no diagnostic"1, the Standard gives compilers great freedom to optimize. A compiler can eliminate a lot of overhead simply by assuming your program is completely well-defined, i.e. that it contains no UB. That is a sound assumption to build on: if it turns out to be wrong, then whatever the compiler produced based on that (wrong) assumption will behave in an undefined (i.e. unpredictable) way, which is still entirely consistent, because your program had undefined behaviour anyway!
Note that a program which contains UB is free to behave in any way whatsoever. I said "consistent" because this is consistent with the Standard's stance: neither the language specification nor the compiler gives any guarantee about your program's behaviour if it contains UB.
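To make that concrete, here is a minimal sketch of how a compiler can exploit the no-UB assumption (the example and the name first_element are mine, not from the Standard or any article). Dereferencing a null pointer is UB, so once p has been dereferenced the compiler is entitled to assume p was not NULL and may drop the later check:

    #include <stdio.h>

    /* The dereference happens before the NULL check. Because dereferencing
       a null pointer is UB, the compiler may assume p is never NULL here
       and remove the check entirely. */
    int first_element(int *p)
    {
        int value = *p;       /* UB if p is NULL */
        if (p == NULL)        /* may be optimized away */
            return -1;
        return value;
    }

    int main(void)
    {
        int x = 42;
        printf("%d\n", first_element(&x));   /* well-defined call: prints 42 */
        return 0;
    }

No diagnostic is required for the mistaken ordering above; compilers commonly accept it without complaint.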
1. The opposite is "diagnostic required", which means the compiler must report the problem to the programmer by emitting at least a warning or an error message. In other words, it cannot silently assume the program is well-defined and optimize on that basis; it must at least tell you that something is wrong.
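Here is a small, compilable sketch of that contrast (the snippet is mine): the UB line typically compiles silently, while the commented-out constraint violation would force a diagnostic:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int n = INT_MAX;

        /* Signed overflow is UB and "no diagnostic required": this line
           typically compiles without any warning or error, even though its
           behaviour at run time is undefined. */
        n = n + 1;

        /* A constraint violation such as
               int *p = 3;
           is "diagnostic required": a conforming compiler must issue at
           least one message for it. (Kept in a comment so the sketch
           still compiles.) */

        printf("%d\n", n);
        return 0;
    }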
Here is an article on the LLVM blog, "What Every C Programmer Should Know About Undefined Behavior", which explains this further with examples:
An excerpt from the article (emphasis mine; a small compilable sketch of these examples follows the quote):
Signed integer overflow: If arithmetic on an 'int' type (for example)
overflows, the result is undefined. One example is that "INT_MAX+1" is
not guaranteed to be INT_MIN. This behavior enables certain classes of
optimizations that are important for some code. For example, knowing
that INT_MAX+1 is undefined allows optimizing "X+1 > X" to "true".
Knowing the multiplication "cannot" overflow (because doing so would
be undefined) allows optimizing "X*2/2" to "X". While these may seem
trivial, these sorts of things are commonly exposed by inlining and
macro expansion. A more important optimization that this allows is for
"<=" loops like this:
for (i = 0; i <= N; ++i) { ... }
In this loop, the compiler can assume that the loop will iterate
exactly N+1 times if "i" is undefined on overflow, which allows a
broad range of loop optimizations to kick in. On the other hand, if
the variable is defined to wrap around on overflow, then the compiler
must assume that the loop is possibly infinite (which happens if N is
INT_MAX) - which then disables these important loop optimizations.
This particularly affects 64-bit platforms since so much code uses
"int" as induction variables.
It is worth noting that unsigned overflow is guaranteed to be defined
as 2's complement (wrapping) overflow, so you can always use them. The
cost to making signed integer overflow defined is that these sorts of
optimizations are simply lost (for example, a common symptom is a ton
of sign extensions inside of loops on 64-bit targets). Both Clang and
GCC accept the "-fwrapv" flag which forces the compiler to treat
signed integer overflow as defined (other than divide of INT_MIN by
-1).
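To see the quoted optimizations in action, here is a minimal compilable sketch (the function names are mine). Compiling it with optimizations enabled (e.g. gcc -O2 -S) and then again with -fwrapv added, as mentioned in the quote, should show the difference in the generated code:

    #include <stdio.h>

    /* With signed overflow treated as UB, a compiler may fold this body
       to a constant 1 (true); with -fwrapv it must keep the comparison. */
    int always_true(int x)
    {
        return x + 1 > x;
    }

    /* Likewise, "x * 2 / 2" may be folded to just "x" under the
       no-overflow assumption. */
    int identity(int x)
    {
        return x * 2 / 2;
    }

    /* Under the no-overflow assumption the compiler may treat this loop as
       running exactly n + 1 times, enabling loop optimizations; with -fwrapv
       it must also handle n == INT_MAX, where the loop never terminates. */
    long sum_upto(int n)
    {
        long total = 0;
        for (int i = 0; i <= n; ++i)
            total += i;
        return total;
    }

    int main(void)
    {
        printf("%d %d %ld\n", always_true(5), identity(21), sum_upto(10));
        return 0;
    }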
I recommend reading the entire article; it has three parts, all of them good.
Hope that helps.