
As I dive deeper into the C programming language, I am having trouble understanding why macros should be used as a last resort. For instance, this post. This is not the first time I've heard chatter that they are a last resort. Some suggest that their memory footprint is larger than that of calling a function (this post). Although I understand these arguments, as well as why they should not be used in C++ (compiler optimizations and the like), I do not understand the following:

Since macros 'unroll' (if you will) into the program's .text segment, there is less overhead associated with a macro than with a function call - e.g. no memory need be set aside between the frame pointer and stack pointer. This memory overhead is quantifiable, whereas this post suggests that the costs of macros are not.

Much of the work I do is in embedded systems, microcontrollers, and systems programming. I have read many books and articles by Bjarne Stroustrup. I am also in the process of reading Clean Code, where Robert Martin insists that readability is king (not that macros increase readability in their own right).

TLDR: Considering macros reduce the overhead associated with stack frames and (if used appropriately) can increase readability - why the negative stigma? They are littered throughout BSD papers and man pages.

  • That has nothing to do with the resulting code. First you should understand what macros are and how they work; then the arguments against them are obvious. And they are not deprecated in general - constant-like macros are well accepted in C. – too honest for this site Dec 24 '16 at 19:26
  • The macros shown in [for loop macro coding style](http://stackoverflow.com/questions/41090504/for-loop-macro-coding-style) are a strong argument against some forms of macro. There are other places where macros can be useful. If you're rewriting the language syntax, they're usually a bad idea: `#define IF if(`, `#define THEN ){`, `#define ELSE }else{`, `#define ENDIF }` is a notorious example from the Bourne shell code. – Jonathan Leffler Dec 24 '16 at 21:39

4 Answers


Some will vote to close because this could be construed as an opinion question, but there are some facts.

  • If you have a decent compiler, calls to simple functions are expanded inline just like macros, but they're considerably more type safe. I've seen cases where inlined functions were faster because the compiler missed common sub-expressions in the compiled macro that it did eliminate when the function was inlined.
  • When you call a simple function multiple times, you can give the compiler hints on whether to expand it inline each time or call it -- trading code space against the tiny stack and runtime overhead you mention. Better yet, lots of compilers (like gcc) will make this call automatically based on heuristics that may be better than your intuition. With a macro, your only option is to rewrite the macro as a function. Code changes are more error-prone than build hints.
  • In most compilation systems, error messages and debuggers don't reference the body code of macros. OTOH, modern debuggers will step correctly line-by-line through the body of a function even though it's actually been expanded inline. Similarly, compilers will correctly point to error locations in function bodies.
  • It's extremely easy to code subtle bugs by expanding multiple references to an argument: `#define MAX(X,Y) (X>Y?X:Y)` followed by `MAX(++x, 3)`. Here `x` is incremented once if it's less than 3, twice otherwise. Ouch. (A short sketch follows this list.)
  • Multi-level macro expansion (where a macro call produces other macro calls) relies on complex rules and is therefore error-prone. E.g. it's not hard to create macros that depend on specific behavior of one preprocessor so that they fail on another.
  • Functions can be recursive. Macros can't.
  • Newer C standards (C99 and C11) provide ways to do things where macros have been (ab)used in the past: compound literals, for example. Again, the advantages are type safety and correct error and debugger references.
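
To make the `MAX` pitfall concrete, here is a minimal sketch contrasting the macro with an inline function (the helper name `max_int` is mine, not from the answer):

```c
#include <stdio.h>

#define MAX(X, Y) ((X) > (Y) ? (X) : (Y))   /* may evaluate an argument twice */

static inline int max_int(int x, int y)      /* each argument evaluated once */
{
    return x > y ? x : y;
}

int main(void)
{
    int a = 5, b = 5;

    int m1 = MAX(++a, 3);      /* ++a runs twice: a becomes 7, m1 == 7 */
    int m2 = max_int(++b, 3);  /* ++b runs once:  b becomes 6, m2 == 6 */

    printf("a=%d m1=%d  b=%d m2=%d\n", a, m1, b, m2);
    return 0;
}
```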

The upshot is that when you use a macro, you're adding error risk and maintenance cost that don't exist with inlined functions. It's hard to escape the conclusion that you should use macros only as a fallback. The main defensible reason to do otherwise is that you're forced to use an old or junky compiler.

Gene

Macros offer no type safety and carry the side effects of pure textual replacement. Those are the main reasons to avoid them.

user2736738

Macros can do a lot - quite a lot - but as is well known, it's easy to accidentally build macros that do the wrong thing, whether by mishandling side effects, evaluating arguments multiple times, or messing up operator precedence. I think the right approach is to weigh the benefits of a macro against its potential risks and drawbacks, then make the call from there.

For example, let's take function-like macros. As you mention, they're often used to make code faster by eliminating short function calls. But nowadays you can achieve the same thing either by using the `inline` keyword adopted from C++ or just by cranking up compiler optimization settings, since compilers today are dramatically better at optimization than they were years ago. If you're using those macros to perform operations like taking a min or max that have basically identical code but different types, use the C11 `_Generic` keyword instead (see the sketch below). These alternatives are less error-prone and easier to debug and test, so it's probably worth avoiding the risk of a macro error by using them.
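
One way the `_Generic` route can look in practice - a rough sketch, with the helper names `min_int` and `min_double` invented for illustration. Note that `_Generic` is still wrapped in a thin macro, but each argument is evaluated only once:

```c
#include <stdio.h>

/* One inline helper per type keeps single-evaluation semantics. */
static inline int    min_int(int a, int b)          { return a < b ? a : b; }
static inline double min_double(double a, double b) { return a < b ? a : b; }

/* Dispatch on the type of the first argument at compile time. */
#define MIN(a, b) _Generic((a),  \
        int:    min_int,         \
        double: min_double       \
    )((a), (b))

int main(void)
{
    printf("%d\n", MIN(2, 7));       /* selects min_int    -> 2    */
    printf("%f\n", MIN(2.5, 1.25));  /* selects min_double -> 1.25 */
    return 0;
}
```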

Then there's defining constants with `#define`. This fell out of favor in C++ in favor of typed constants, and the same is now possible in C using `static const` variables at global scope. You can use macros for this, but it's more type-safe to use the other option instead. Most compilers are smart enough to inline the constants and optimize on their values in ways that previously only macros would guarantee.
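
A quick sketch of the two styles side by side (the names are illustrative):

```c
#include <stdio.h>

#define BUFFER_SIZE_MACRO 512          /* textual replacement, no type */

static const size_t buffer_size = 512; /* typed, scoped, visible to the debugger */

int main(void)
{
    printf("%d %zu\n", BUFFER_SIZE_MACRO, buffer_size);
    return 0;
}
```

One caveat worth remembering: in C, unlike C++, a `static const` object is not an integer constant expression, so file-scope array bounds and `case` labels still call for `#define` or an `enum`.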

For these more routine operations, the benefits of using macros aren't as high as they used to be because new language features and vastly smarter compilers have provided less-risky alternatives. That's the main reason why in general the advice is to avoid using macros - they're just not the best tool for the job.

This doesn't mean you should never use macros. There are many times and places where they're fantastic. X macros are a really neat way to automatically generate code (a small sketch follows), and macro substitution is super helpful for taking advantage of compiler-specific or OS-specific features while maintaining portability. I don't foresee those uses going away any time soon. But do consider the alternatives in other cases, since many of them were specifically invented to address weaknesses of the macro preprocessor!
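
For readers who haven't seen the X-macro idiom, a minimal sketch of keeping an enum and its name table in sync from a single list (the color names are made up):

```c
#include <stdio.h>

/* The single source of truth: one X(...) entry per item. */
#define COLOR_LIST \
    X(RED)         \
    X(GREEN)       \
    X(BLUE)

/* Expand the list once as enum constants... */
enum color {
#define X(name) COLOR_##name,
    COLOR_LIST
#undef X
};

/* ...and again as matching strings, kept in sync automatically. */
static const char *color_names[] = {
#define X(name) #name,
    COLOR_LIST
#undef X
};

int main(void)
{
    printf("%s\n", color_names[COLOR_GREEN]);  /* prints "GREEN" */
    return 0;
}
```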

templatetypedef

A good optimizing compiler should give efficient code for calls to inline functions - as efficient as if you had used macros (think of `getc` as an example).

But you may want to use macros when they are not replaceable by inline functions. (Here is an example).
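
One common case where an inline function can't stand in for a macro is anything that needs the call site's `__FILE__`/`__LINE__` or a stringified argument; a minimal sketch of such a logging macro (the name `LOG_EXPR` is invented here, not taken from the linked example):

```c
#include <stdio.h>

/* Prints the source location plus the literal text of the expression.
 * A function could not capture __FILE__, __LINE__, or #expr at the call site. */
#define LOG_EXPR(expr)                                              \
    do {                                                            \
        printf("%s:%d: %s = %d\n", __FILE__, __LINE__, #expr,      \
               (int)(expr));                                        \
    } while (0)

int main(void)
{
    int x = 41;
    LOG_EXPR(x + 1);   /* prints e.g. "demo.c:14: x + 1 = 42" */
    return 0;
}
```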

Basile Starynkevitch