Is there a difference in source code between a release build and a debug build of a program?
It depends on the source code and the options used to compile the library or program. Below are a few differences I am aware of.
Asserts
The simplest of "debugging and diagnostics" is an `assert`. Asserts are in effect when `NDEBUG` is not defined. Asserts create self-debugging code, and they snap when an unexpected condition is encountered. The trick is you have to assert everything: everywhere you validate parameters and state, you should see an `assert`, and everywhere there's an `assert`, you should see an `if` to validate parameters and state.
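Here is a minimal sketch of the pattern (the function and its parameters are hypothetical, not from any particular code base):

    #include <cassert>
    #include <cstddef>
    #include <cstring>

    // Every assert is mirrored by an if that validates the same state.
    bool copy_message(char* dest, std::size_t dsize, const char* src)
    {
        // Debug builds (NDEBUG not defined): snap early under the debugger.
        assert(dest != NULL);
        assert(src != NULL);
        assert(dsize > 0);

        // Release builds (NDEBUG defined): the asserts vanish, so the same
        // conditions are still enforced with ordinary control flow.
        if (dest == NULL || src == NULL || dsize == 0)
            return false;

        std::strncpy(dest, src, dsize - 1);
        dest[dsize - 1] = '\0';
        return true;
    }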
I laugh when I see a code base without asserts. I kind of say to myself, the devs have too much time on their hands if they are wasting it under the debugger. I often ask why they don't use asserts, and they usually answer with the following...
The Posix `assert` sucks because it calls `abort`. If you are debugging a program, then you usually want to step through the code to see how it handles the negative condition that caused the `assert` to fire. Terminating the program runs afoul of the "debugging and diagnostics" purpose. It has got to be one of the dumbest decisions in the history of C/C++. No one seems to recall the reasoning for the `abort` (a few years ago I tried to track down the pedigree on various C/C++ standards lists).
Usually you replace the useless Posix `assert` with something more useful, like an assert that raises `SIGTRAP` on Linux or calls `DebugBreak` on Windows. See, for example, this sample trap.h. You replace the Posix `assert` with your own assert to ensure the libraries you are using get the updated behavior (if they have already been compiled, then it's too late).
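A minimal sketch of such an assert (the macro names here are illustrative; they are not the actual contents of that trap.h):

    #if defined(_WIN32)
    #  include <windows.h>
    #  define TRAP() DebugBreak()
    #else
    #  include <signal.h>
    #  define TRAP() raise(SIGTRAP)
    #endif

    #ifndef NDEBUG
        // Break into the debugger instead of calling abort(), so you can
        // step past the failed condition and keep debugging.
    #  define ASSERT(cond) do { if (!(cond)) TRAP(); } while (0)
    #else
    #  define ASSERT(cond) ((void)0)
    #endif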
I also laugh when projects like ISC's BIND (the DNS server that powers the Internet) DoS themselves with their asserts (BIND has its own assert; it does not use the Posix `assert`). There are a number of CVEs against BIND for its self-inflicted DoS. DoS'ing yourself is right up there with "let's abort a program being debugged".
For completeness, the Microsoft Foundation Classes (MFC) used to have something like 16,000 or 20,000 asserts to help catch mistakes early. That was back in the late 1990s or mid 2000s. I don't know what the state is today.
APIs
Some APIs are purposefully built for "debugging and diagnostics". Other APIs can be used for it even though they are not necessarily safe to use in production.
An example of the former (purposefully built) is a Logging and `DebugPrint` API. Apple successfully used it to egress a user's FileVault passwords and keys. Also see os x filevault debug print.
An example of the latter (not safe for production) is Windows' `IsBadReadPtr` and `IsBadWritePtr`. They are not safe for production because they suffer from a race condition, but they are usually fine for development because you want the extra scrutiny.
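For example, a development-only sanity check might look like the following (the helper name is hypothetical). The race is that another thread can unmap or re-protect the page between the check and the actual use, which is why it must not ship in Release builds:

    #include <windows.h>
    #include <cassert>

    // Hypothetical development-only helper; compiled out of Release builds.
    void CheckReadable(const void* buf, UINT_PTR len)
    {
    #ifdef _DEBUG
        assert(buf != NULL && !IsBadReadPtr(buf, len));
    #else
        (void)buf; (void)len;   // no checking in Release
    #endif
    }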
When we perform security reviews and audits, we often ask or recommend removing all non-essential logging, and ensuring the logging level cannot be changed at runtime. When an app goes to production, the time for debugging is over; there's no reason to log everything.
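As a sketch, assuming a hypothetical `LOG` macro, the idea is that the production build compiles the logging out entirely, so no runtime setting can turn it back on:

    #include <cstdio>

    #ifdef NDEBUG
    #  define LOG(msg) ((void)0)   /* production: logging compiled out */
    #else
    #  define LOG(msg) std::fprintf(stderr, "LOG: %s\n", (msg))
    #endif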
Libraries
Sometimes there are special libraries to help with debugging and diagnostics. Linux's Electric Fence and Microsoft's Debug CRT Library come to mind. Both are memory checkers with APIs. In this case, your link command will be different, too.
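For example, assuming libefence is installed, the only change needed to catch the heap overrun below is the link command; the Electric Fence build should fault at the overrun instead of running past it silently:

    // Normal build:         g++ overrun.cpp -o overrun
    // Electric Fence build: g++ overrun.cpp -lefence -o overrun
    #include <cstring>

    int main()
    {
        char* p = new char[16];
        std::memset(p, 0, 17);   // one-byte heap overrun
        delete[] p;
        return 0;
    }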
Options
Sometimes you need additional options or defines to help with debugging and diagnostics. GCC's libstdc++ and `-D_GLIBCXX_DEBUG` come to mind. Another one is concept checking, which used to be enabled by the define `-D_GLIBCXX_CONCEPT_CHECKS`. It's Boost code and it's broken, so you should not use it. In these cases, your compile flags will be different.
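As a sketch, the define changes the behavior of the same source; with `-D_GLIBCXX_DEBUG`, libstdc++ swaps in checked containers, so the out-of-range access below aborts with a diagnostic instead of silently reading garbage:

    // Without checks: g++ test.cpp -o test
    // With checks:    g++ -D_GLIBCXX_DEBUG test.cpp -o test
    #include <vector>

    int main()
    {
        std::vector<int> v(3);
        return v[10];   // out of range; caught by the debug-mode checks
    }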
Another one I often laugh at is a Release build that lacks the `NDEBUG` define. That includes Debian and Ubuntu as a matter of policy. (When an assert fires in production, the crash handler can write the process memory, keys and all, into a report.) The NSA, GCHQ and other three-letter agencies thank them for taking the sensitive information (like server keys), stripping the encryption (writing it to a file unprotected), and then egressing the sensitive information (sending it via Windows Error Reporting, Apport Error Reporting, etc.).
Initialization
Some development environments perform initialization with special bit patterns when a value is not explicitly initialized. It's really just a feature of the tools, like the compiler or linker. Microsoft's tools come to mind; see When and why will an OS initialise memory to 0xCD, 0xDD, etc. on malloc/free/new/delete? GCC had a feature request for it, but I don't think anything was ever done with it.
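A minimal sketch of what those patterns look like under Microsoft's Debug CRT (the read of uninitialized memory is deliberate, to show the fill value; a Release CRT does no such filling):

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        // The debug heap fills fresh allocations with 0xCD ("clean")
        // and freed blocks with 0xDD ("dead").
        unsigned char* p = static_cast<unsigned char*>(std::malloc(16));
        std::printf("first byte after malloc: 0x%02X\n", p[0]);  // 0xCD in a Debug build
        std::free(p);
        return 0;
    }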
I often laugh when I disassemble a production DLL and see the Microsoft debug bit patterns, because I know they are shipping a Debug DLL. I laugh because it often indicates the Release DLL has a memory error that the dev team was not able to clear. Adobe is notorious for doing this (not surprisingly, Adobe supplies some of the most insecure software on the planet, even though they don't supply an operating system like Apple or Microsoft).
    #ifdef _DEBUG
        /* check that the error didn't occur... */
        SerialPrint("Error occurred");
    #endif
It makes me want to cry, but you still have to do this in 2016. GDB is (was?) broken under AArch64, X32 and S/390, so you have to use printfs to debug your code.