
When I want to debug C or C++ programs, I've been taught to use -O0 to turn optimization OFF, and -ggdb to insert debugging symbols into the executable that are tailored to the GNU gdb debugger, which is what I use (or you can use -glldb for LLVM/clang's lldb debugger, or just -g for general debugging symbols, though that apparently won't be as good as -ggdb...). However, I recently stumbled upon someone saying to use -Og (instead of -O0), and it caught me off guard. Sure enough though, it's in man gcc:

-Og Optimize debugging experience. -Og enables optimizations that do not interfere with debugging. It should be the optimization level of choice for the standard edit-compile-debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience.

So, what's the difference? Here's the -O0 description from man gcc:

-O0 Reduce compilation time and make debugging produce the expected results. This is the default.

man gcc clearly says -Og "should be the optimization level of choice for the standard edit-compile-debug cycle", though.

This makes it sound like -O0 is truly "no optimizations", whereas -Og is "some optimizations on, but only those which don't interfere with debugging." Is this correct? So, which should I use, and why?
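For concreteness, the two kinds of builds I'm comparing look roughly like this (the file names are just placeholders):

gcc -ggdb -O0 -o my_program my_program.c    # what I was taught: gdb-tuned symbols, optimization off
gcc -ggdb -Og -o my_program my_program.c    # what was suggested instead: debug-friendly optimizations left on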

Related:

  1. related, but NOT a duplicate! (read it closely, it's not at all a duplicate): What is the difference between -O0, -O1 and -g
  2. my answer on debugging --copt= settings to use with Bazel: gdb: No symbol "i" in current context
Gabriel Staples
    [The manual](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#Optimize-Options) has more details: `Like -O0, -Og completely disables a number of optimization passes so that individual options controlling them have no effect. Otherwise -Og enables all -O1 optimization flags except for those that may interfere with debugging:`. Then it lists the exact flags disabled. – kaylum Aug 13 '20 at 00:04
  • Ah, that is useful! That's more information than is in `man gcc`! – Gabriel Staples Aug 13 '20 at 00:09
  • If you read the excerpts in your question carefully, I think the answer is there quite explicitly. `-Og` optimizes the debugging experience, whereas `-O0` minimizes the compilation speed. You want to use `-Og` if you want to enjoy debugging, `-O0` if you want to quickly find out if the damn thing even compiles. – Antti Haapala -- Слава Україні Aug 13 '20 at 04:23
  • @AnttiHaapala, true, but it's nice to have confirmation from other developers, especially when I'm so surprised by the new information (ie: that `-Og` even exists). – Gabriel Staples Aug 13 '20 at 19:01

2 Answers


Quick summary

Do not use -Og. -Og breaks debugging.

Use -ggdb -O0 (preferred if using the gdb debugger), or -g3 -O0 instead.

Using -g -O0 is okay too, but -g alone defaults to debug level 2 (-g2), which means that compared to -g3, -g is missing "extra information, such as all the macro definitions present in the program." (See man gcc and search for -glevel).
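As a rough sketch, a debug build along those lines might look like this (the program and file names here are made up):

gcc -ggdb -O0 -o my_program my_program.c    # preferred when debugging with gdb
gcc -g3 -O0 -o my_program my_program.c      # debugger-agnostic, with maximum (level-3) debug info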

Details

@kaylum just provided some great insight in their comment under my question! And the key part I care about most is this:

[-Og] is a better choice than -O0 for producing debuggable code because some compiler passes that collect debug information are disabled at -O0.

https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#Optimize-Options

So, from now on I'm using -Og (NOT -O0) in addition to -ggdb.


UPDATE 13 Aug. 2020:

Heck with this! Nevermind. I'm sticking with -O0.

With -Og I get <optimized out> values and Can't take address of "var" which isn't an lvalue errors all over the place! I can't print my variables or examine their memory anymore! Ex:

(gdb) print &angle
Can't take address of "angle" which isn't an lvalue.
(gdb) print angle_fixed_p
$6 = <optimized out>

With -O0, however, everything works fine!

(gdb) print angle
$7 = -1.34869879e+20
(gdb) print &angle
$8 = (float *) 0x7ffffffefbbc
(gdb) x angle
0x8000000000000000:     Cannot access memory at address 0x8000000000000000
(gdb) x &angle
0x7ffffffefbbc: 0xe0e9f642

So, back to using -O0 instead of -Og it is!
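If you want to reproduce this comparison yourself, here's a sketch of one way to do it (the program names are placeholders):

gcc -ggdb -Og -o my_program_Og my_program.c    # locals often show as <optimized out> in gdb
gcc -ggdb -O0 -o my_program_O0 my_program.c    # locals and their addresses print as expected
gdb ./my_program_Og
gdb ./my_program_O0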

Related:

  1. [they also recommend -O0, and I concur] What does <value optimized out> mean in gdb?
Gabriel Staples
    I also recommend `-O0` for debugging. With `-Og` breakpoints tend to trigger at wrong lines. – Piotr Siupa Mar 18 '22 at 18:19
    Yup, `-Og` code-gen is a lot more like `-O1` than the truly strict debug support you get with `-O0`. ([Why does clang produce inefficient asm with -O0 (for this simple floating point sum)?](https://stackoverflow.com/q/53366394)). `-Og` lets the compiler optimize away local temporaries. You can see some of this with `gcc -Og -S -fverbose-asm`, that it's still inventing names for temporaries it invented, not using the original C var names, when operating on registers. Although in simple cases it seems `-Og` looks a lot more like the C source than `-O1`: https://godbolt.org/z/zxcqb66jx – Peter Cordes Sep 27 '22 at 21:56

Due to the editing, the role of -ggdb isn't entirely clear from Gabriel's excellent answer above. I have found that if gdb is your preferred debugger, then along with -O0 it is definitely advantageous to always also use -ggdb3 (the 3 is important; it gives you more than plain -ggdb).
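As a quick sketch of what level 3 buys you (the file and macro names below are made up): with -ggdb3 the debug info includes macro definitions, so gdb's macro commands work, which they won't with plain -ggdb (level 2):

gcc -O0 -ggdb3 -o my_program my_program.c
gdb ./my_program
(gdb) start                        # stop at main so macros are in scope
(gdb) info macro BUFFER_SIZE       # shows the #define; with level-2 debug info gdb has no macro information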

drifty0pine