As a newbie, I'm trying to figure out what can occur when using GCC optimization flags. Which optimization flags are likely to cause side effects? In which particular situations should I avoid using optimization? I guess optimization can break some multithreaded code, but what else?
-
This question might be too broad... there are lots of optimizations that GCC performs. Look up "load store optimization" for example. It's pretty rare that you would want to turn optimization off; usually it's better to fix your code. – Dietrich Epp Aug 17 '14 at 16:44
-
When you debug the code using gdb or another debugger, optimization might confuse you by optimizing out variables. – Sagar Sakre Aug 17 '14 at 16:47
-
related: [Is optimisation level -O3 dangerous in g++?](http://stackoverflow.com/questions/11546075/is-optimisation-level-o3-dangerous-in-g) – Ciro Santilli OurBigBook.com Jun 26 '15 at 06:32
1 Answer
A compiler's optimizations generally strive to preserve implementation-defined behavior (with some exceptions, especially for aggressive floating-point optimizations that aren't enabled through the general `-O` flag). In other words, if the functional behavior of your program changes as a result of optimizations, the program is probably invoking unspecified behavior or undefined behavior.
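As an illustration of the floating-point caveat (this example is mine, not from the answer): under GCC's `-ffast-math`, which is not part of `-O2`, results of well-defined floating-point code may legally change.

```c
#include <stdio.h>

int main(void)
{
    double x = 1e16, y = 1.0;
    /* Under strict IEEE 754 double arithmetic, x + y rounds back to 1e16,
       so (x + y) - x prints 0.  With -ffast-math, GCC is allowed to
       reassociate the expression to just y, and may print 1 instead. */
    printf("%g\n", (x + y) - x);
    return 0;
}
```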
You should always avoid writing programs that invoke undefined behavior, and make sure that unspecified behavior cannot affect the results of the program. The problem is not with the optimizations; it is with invoking undefined behavior or with giving unspecified behavior a chance to influence the results. These will cause trouble when you change your compiler or compiler version, even if you keep compiling without optimizations.
To answer your question as phrased:
- Optimization can make an uninitialized variable appear as both true and false, or make the product of an uninitialized variable and two appear to be odd. Simply do not use uninitialized variables: using them is basically undefined behavior (a compilable sketch follows this list).
- With optimization, a signed arithmetic overflow may result in an expression of type `int` apparently having a value larger than `INT_MAX`. The function `int f(int x) { return x+1 > x; }`, compiled with optimizations, returns `1` when applied to `INT_MAX`. To avoid this strange behavior, simply do not rely on signed arithmetic overflow; it is undefined behavior (see the second sketch below).
- With optimization, dangling pointers can be different and identical at the same time: simply do not use dangling pointers for anything (see the third sketch below).
- With optimization, constant expressions invoking undefined behavior may be computed at compile time with semantics different from the run-time semantics of the assembly instruction generated without optimizations. Expect this to happen for shifts wider than the size of the type: with 32-bit `int`, the expression `1 >> 32` can evaluate to `0` with optimization and to `1` without (see the last sketch below). The same can happen for overflows in the conversion from floating-point to integer: they are also undefined behavior.
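A minimal sketch of the uninitialized-variable point (my example, not the answer's): reading `x` below is undefined behavior, and an optimizer is free to give each read a different value.

```c
#include <stdio.h>

int main(void)
{
    int x;                      /* never initialized: reading x is UB */
    if (x == 0)
        puts("x is zero");
    if (x != 0)
        puts("x is nonzero");
    /* With optimization, nothing stops the compiler from printing both
       messages, or neither: each read of x may be "evaluated" differently. */
    return 0;
}
```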
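The signed-overflow bullet can be reproduced directly; a sketch assuming typical x86-64 GCC behavior (outputs may differ across versions and targets):

```c
#include <limits.h>
#include <stdio.h>

/* With optimization, GCC may fold x + 1 > x to 1, because it assumes
   signed overflow never happens.  Without optimization, the generated
   code actually computes INT_MAX + 1 (wrapping to INT_MIN on typical
   hardware) and the comparison yields 0. */
int f(int x) { return x + 1 > x; }

int main(void)
{
    printf("%d\n", f(INT_MAX));  /* commonly 1 with -O2, 0 with -O0 */
    return 0;
}
```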
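For the dangling-pointer bullet, a hedged sketch (`p` becomes invalid after `free`, so even inspecting it is undefined; the exact symptoms vary by compiler version):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p = malloc(1);
    free(p);                 /* p is now dangling: even reading p is UB */
    char *q = malloc(1);     /* may be handed the same address */
    /* An optimizer may treat p and q as distinct objects (so p != q for
       alias analysis) while a raw address comparison at run time finds
       them equal: "different and identical at the same time". */
    if (p == q)
        puts("same address, yet the compiler may assume p != q");
    free(q);
    return 0;
}
```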
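And the shift example from the last bullet can be observed by comparing `-O0` and `-O2` builds on a target with 32-bit `int` (a sketch; the `-O0` result depends on the CPU's shift semantics):

```c
#include <stdio.h>

int main(void)
{
    int n = 32;
    /* 1 >> 32 is undefined behavior for 32-bit int.  With optimization,
       GCC propagates n and folds the expression at compile time to 0.
       Without optimization, x86 hardware masks the shift count to
       32 % 32 == 0, so the instruction computes 1 >> 0 == 1. */
    printf("%d\n", 1 >> n);
    return 0;
}
```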

-
On gcc with optimization enabled, a statement like `uint1 = int1*uchar1;` can retroactively alter the value of int1 and cause other peculiarities. It's too bad that gcc pushes things that far, since guaranteed weak constraints on the behavior of integer overflow would make possible optimizations which won't be possible if programmers have to generate optimization-proof code to prevent overflow at all costs. – supercat May 23 '16 at 22:21
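The promotion-related overflow this comment alludes to can be sketched as follows (my example with hypothetical names, assuming 32-bit `int`; the operands of `a * b` are promoted to `int` before the multiplication):

```c
#include <stdint.h>

/* uint16_t operands are promoted to (signed) int before the multiply.
   When the mathematical product exceeds INT_MAX (e.g. 65535 * 65535),
   the int multiplication overflows: undefined behavior, even though
   the result is stored in an unsigned type. */
uint32_t mul(uint16_t a, uint16_t b)
{
    return a * b;  /* well-defined only while a * b <= INT_MAX */
}
```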