I am working on a large, old, but actively developed C/C++ code base where using #define's to control compile-time aspects of the code is common practice. For example, it is typical to add -DTARGET_HAS_FEATURE to the compiler flags in a Makefile and then use it in code:
#ifdef TARGET_HAS_FEATURE
// ... conditionally included code
#endif
The project is multi-target and cross-platform, so certain quirks are understandably easier to handle with judicious use of the preprocessor. However, I have a strong feeling that at the current stage such macro definitions are overused, and I want to keep them under control. Some developers essentially treat them as global, untyped variables that are conveniently at their disposal. Needless to say, this greatly complicates understanding what the code actually does, and sometimes it hides bugs.
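As a made-up but representative example of the kind of bug this hides (the macro and function names below are hypothetical): a flag is passed as -DUSE_CACHE=0 with the intent of disabling a feature, but the code only checks whether the macro is defined at all, so the "disabled" path is still compiled in:

// cache.c -- hypothetical file, built with: cc -c -DUSE_CACHE=0 cache.c
// The =0 was meant to turn the cache off.

void init_cache(void);
void prefetch_data(void);

void setup(void)
{
#ifdef USE_CACHE        // true: USE_CACHE is defined, its value 0 is irrelevant here
    init_cache();       // compiled in anyway, despite the =0 on the command line
#endif

#if USE_CACHE           // what the author of the -D flag probably expected to be tested
    prefetch_data();    // correctly omitted when USE_CACHE expands to 0
#endif
}

Nothing in the language stops the two idioms from being mixed for the same name, and the compiler will not complain.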
The question here is: are there any approaches to analyzing the environment the preprocessor operates on at the scope of the whole project, and reporting when certain properties do not hold? An answer might be a static code analyzer designed for the preprocessor language, but it could also be a flag in existing preprocessor tooling, a code style guideline, or any other technique meant to address the known weak points of the macro language in an existing code base.
Among the properties that I would like to check are the following (a small artificial example exhibiting all of them is given after the list):
- Whether there are defines that are declared but never used anywhere in the code (unused variables).
- Whether there are #ifdef-#endif blocks that reference a name which is never defined in any file (undefined variables).
- Whether the same macro definition is used both in defined/undefined checks and in arithmetic expressions (type violation).
- Whether a macro definition is redefined (changing a constant variable).
- Whether the nesting level of #ifdef-#endif blocks is too deep (code readability problem).
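Here is the promised artificial example (all macro names are made up); the kind of analysis I am asking about should be able to flag each of the commented spots:

// features.h -- artificial example, not real project code

#define FEATURE_LEGACY_IO 1      // defined here but never referenced anywhere: unused

#ifdef FEATURE_FAST_MATH         // never defined in any file or Makefile: undefined
void use_fast_math(void);
#endif

#ifdef LOG_LEVEL                 // used as a plain on/off flag here...
void enable_logging(void);
#endif
#if LOG_LEVEL > 2                // ...and as a numeric value here: type violation
void enable_tracing(void);
#endif

#define MAX_CLIENTS 16
#define MAX_CLIENTS 32           // redefinition of a "constant" (in real code the two
                                 // definitions usually live in different files)

#ifdef TARGET_A
#ifdef TARGET_HAS_FEATURE
#ifdef DEBUG
#ifdef VERBOSE                   // four levels of nesting: readability problem
void rarely_compiled(void);
#endif
#endif
#endif
#endif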