
Is anyone able to give me some info on the relative performance of code running under the following conditions?

  1. Just compiled
  2. Compiled with --coverage
  3. Running under kcov

Am I going to need twice as long to run my test suite if I integrate a code-coverage tool like gcov or kcov?

PraAnj
Paul D Smith

1 Answer


My experience with this is as follows, but note that actual results will probably depend heavily on your code.

  • Running code compiled with '--coverage' is about half the speed of just-compiled code.

  • Running under kcov is significantly (6x to 10x) slower than just-compiled code.

So what I'm doing is:

  • For a lot of runs, or something that I know takes some time, use '--coverage' then gcovr/lcov.
  • For a one-off run of a shortish executable, use kcov.
Paul D Smith

  • 6x overhead? What is kcov doing that makes it so expensive? – Ira Baxter Sep 05 '18 at 13:50
  • 6 times is a big perf loss, but I saw the following issue: https://github.com/SimonKagstrom/kcov/issues/159 – PraAnj Sep 05 '18 at 15:25
  • Interesting. Might come back and try kcov again next time I'm working on this. – Paul D Smith Sep 06 '18 at 09:06
  • Wait... so kcov is a coverage tool that works on binaries? That explains the overhead. Such tools either use breakpoint traps (lots of overhead compared to the instruction the breakpoint replaces) or zillions of code patches that jump out, save the registers, set a coverage flag, restore the registers, do what the jmp instruction replaced, and jump back, which would easily be 6x the overhead of the replaced instruction. This is why people use source code instrumentation tools for coverage. (I build such tools; they have about 15-30% overhead. See my bio). – Ira Baxter Sep 12 '18 at 13:49