This blog claims that both of the following statements are false:
- Undefined behavior only "happens" at high optimization levels like -O2 or -O3.
- If I turn off optimizations with a flag like -O0, then there's no UB.

I'm wondering whether there is any real-world showcase for that claim.
For example, n << 1 triggers UB when n < 0. For the following function:
void foo(int n) {
    int t = n << 1;
    if (n >= 0)
        nuke();
}
the compiler could compile it cautiously:
void foo(int n) {
    /* error() stands for some hypothetical runtime diagnostic */
    int t = n >= 0 ? (n * 2) : error("lshift negative int");
    if (n >= 0)
        nuke();
}
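(Incidentally, this "cautious" variant is roughly what the sanitizers do in practice: compiling with -fsanitize=undefined makes gcc and clang insert a runtime check that reports the negative left shift when it is executed, even at -O0.)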
or normally:
void foo(int n) {
    int t = n * 2;
    if (n >= 0)
        nuke();
}
or optimize it aggressively:
void foo(int n) {
    // unused:
    // int t = n << 1;
    // always true, otherwise the shift was UB:
    // if (n >= 0)
    nuke();
}
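For concreteness, here is a minimal, self-contained harness for the example above (nuke() is stubbed out as a hypothetical placeholder that just reports it was reached); one could build it at -O0 and at -O2 and compare what foo(-1) ends up doing:

#include <stdio.h>

/* Hypothetical stand-in for nuke(): just reports that it was reached. */
static void nuke(void) {
    puts("nuke() called");
}

void foo(int n) {
    int t = n << 1;   /* UB when n < 0: left shift of a negative int */
    (void)t;          /* t is otherwise unused */
    if (n >= 0)
        nuke();
}

int main(void) {
    foo(-1);          /* formally UB: a conforming compiler may do anything */
    return 0;
}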
Is there any modern popular compiler, like gcc or clang, that behaves in the last way, where some UB not only causes unexpected behavior locally at that statement but can also be exploited deliberately (leaving aside buffer-overflow attacks etc.) to pollute the control flow globally, even when -O0 is specified?

Put simply: are all UBs practically somehow implementation-defined under -O0?
== EDIT ==
The question is not whether those claims are theoretically false or nonsensical (they are). It's whether there is a real-world showcase. As @nate-eldredge rephrased it in the comments:
Is there an example where, given some piece of code that is formally UB, a real-life non-optimizing compiler produces results that are particularly surprising (in the way described above), even to a reasonably knowledgeable programmer?