
The release notes for gcc were a little vague on -Og:

It addresses the need for fast compilation and a superior debugging experience while providing a reasonable level of runtime performance. Overall experience for development should be better than the default optimization level -O0.

Does "Overall experience for development" include compilation time? If I don't need debug symbols and am optimizing for compile time, should I be using -O0 or -Og?

jww
daj
  • If you don't need debugging symbols you should use -O0 – Akobold Mar 21 '13 at 19:51
  • Why not just time the full build of *your project* with *your compiler* with the different options, and pick whichever is the fastest? – NPE Mar 21 '13 at 19:54
  • If you're only interested in good performance, the normal level is -O2. -O0 applies almost no optimisation and is mostly meant for debugging (e.g. it won't optimise out unused variables, it won't improve sloppy coding that's easily fixed, etc.). – teppic Mar 21 '13 at 20:16
  • @teppic thanks - my program runs in a trivial amount of time, so I'm optimizing for compilation time for now. – daj Mar 21 '13 at 20:43
  • One thing to remember: at `-O0`, which is often equated to "debug", there are no optimizations and no symbols. So things like signed overflow, which can produce a warning with the right warnings enabled, do not warn. `-Og` is somewhat like `-O` at `0.5` - not exactly `0` and not exactly `1`. `-Og` also produces some symbols. I know it does not provide symbolic defines, which are normally available at `-g3`. – jww Sep 28 '16 at 06:12

3 Answers


Does "Overall experience for development" include compilation time?

I think it does, but not in this very specific case.

If I don't need debug symbols and am optimizing for compile time, should I be using -O0 or -Og?

-O0.


If I don't need debug symbols and am optimizing for compile time, should I be using -O0 or -Og?

If the presence or absence of debug symbols doesn't matter, time both options and see which one is faster.

Melebius
Mark B

With -Og the compiler has to construct and write out extra data (for debugging), so it will take longer. Just compile to assembler (with gcc -S -Og, etc.) and compare. But whatever difference there is between -O0 and -Og compile times is probably dwarfed by the time it takes to start gcc and its complete machinery.

If you want fast compile times, perhaps you should consider tcc for C. Perhaps LLVM (Clang) is faster for C++.

vonbrand
  • Do you have evidence to back up your first claim, "With -Og the compiler has to construct and write out extra data (for debugging)"? That would only apply to the `-g` flags if I am not mistaken. A look in the GCC 4.9.2 source code (`gcc/opts.c`) showed that `-Og` is the same as `-O1` (`/* -Og selects optimization level 1. */`), but with some flags disabled that would be enabled with `-O1` or higher (`OPT_LEVELS_1_PLUS_NOT_DEBUG`, `OPT_LEVELS_1_PLUS_SPEED_ONLY`, and `OPT_LEVELS_2_PLUS_SPEED_ONLY`). – Lekensteyn Nov 22 '14 at 10:17
  • 3
    I posted a more extensive answer here: http://stackoverflow.com/a/27076307/427545 – Lekensteyn Nov 22 '14 at 10:41