
I just spotted this in some legacy code. I know that wherever a macro's name is used, it is replaced by the contents of the macro, and that macros are most commonly used to give symbolic names to numeric constants. I also know that the preprocessor has no notion of type safety or scope.

What is the real benefit of doing it like this?

#define STATIC static
STATIC void function1(void) { /* Do something */ }

I know that static functions are visible only within their module (translation unit), i.e. not visible outside the C file they are defined in.


Why not just declare it like this, instead of using macro replacement?

static void function1(void) { /* Do something */ }

I thought I would find an answer on SO, but I didn't find a proper one.

alk
danglingpointer
  • This is usually done with macros that have different replacements depending on options you use when compiling, so it might not always expand into the same thing. – Barmar Sep 22 '17 at 09:45
  • Maybe this question is related: https://stackoverflow.com/questions/32398612/why-only-define-a-macro-if-its-not-already-defined/32398697#32398697 – Barmar Sep 22 '17 at 09:46
  • @Barmar, in that case, does it just override the scope of the otherwise-static function when compiling? – danglingpointer Sep 22 '17 at 09:50
  • 3
    You can change *all* the functions by changing to `#define STATIC` (empty replacement) and suddenly they are not static anymore. If that is a benefit or not is questionable. – Bo Persson Sep 22 '17 at 09:54
  • Windows does this a lot: `PCHAR` for `char *`, `INT` for `int`, and so on. I think it's just for readability. – qwn Sep 22 '17 at 10:08
  • 1
    @qwn The PCHAR define is terrible, it should be a typedef. For example, `PCHAR a, b;` is *very* misleading, since `a` will be `char *` but `b` will be `char`. – Tom Karzes Sep 22 '17 at 10:17
  • Indeed. The Windows "hungarian notation" has been heavily criticized, one of the main reasons being that it hides pointers behind typedefs. Note that this is a coding style originating from the early 90s. – Lundin Sep 22 '17 at 10:54

1 Answer


There is no rational reason why you would do this. Generally it is bad practice to hide keywords behind #define in this manner, because it makes the code cryptic and harder to read.

I would suspect it has to do with coding style; it is common to write various function specifiers in upper case, particularly in Windows programming. Often this is done to specify a certain calling convention, for example CALLBACK in Windows, or this example from the old "Windows bible" (Petzold):

#define EXPORT __declspec (dllexport)

(Which could be modified to also contain extern "C" in case of C++.) You'd then have a function such as EXPORT void CALLBACK func (void). Similarly there's also WINAPI in Windows.

Sometimes we also see things like

#define PRIVATE static

which is kind of horrible, because static doesn't really have the same meaning as private in C++.

Lundin
  • 1
    sometimes `inline` during the debugging is replaced with nothing. So replacing `inline` with nothing is sometimes handy. Same with long attributes `PACKED` is easier to read than `__attribute__((__packed__))`. Capital letters usually indicate the macro. – 0___________ Sep 22 '17 at 10:36
  • @Lundin, true, I felt it's bad practice to do that; it makes the code harder to read. – danglingpointer Sep 22 '17 at 10:53