
Possible Duplicate:
C++ - enum vs. const vs. #define

Before I used #define I used to create constants in my main function and pass them where they were needed. I found that I passed them very often and it was kind of odd, especially array sizes.

More recently I have been using #define so that I don't have to pass constants from my main to each individual function.

But now that I think of it, I could use global constants as well, but for some reason I have been a little hesitant towards them.

Which is the better practice: global constants or #define?

A side question, also related: Is passing constants from my main as I described a bad practice?

Zac Blazic
  • C++ FAQ Lite has a nice short summary: http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.7 – Cubbi Mar 24 '11 at 13:19
  • This question shouldn't be asked for C and C++ at the same time. These are very different with respect to what compile-time constants are. – Jens Gustedt Mar 24 '11 at 13:49

5 Answers

6

They don't do quite the same thing. #define lets you affect the code at compilation time, while global constants only come into effect at runtime.

Seeing as #define can only give you extra trouble because there's no checking going on with how you use it, you should use global constants when you can and #define when you must. It will be safer and more readable that way.

As for passing constants from main, it's not unreasonable: it makes the called functions more flexible to accept an argument from the caller than to blindly pull a value out of some global. Of course, if the argument isn't really expected to change for the lifetime of the program, you don't have much to gain from that.

Jon
  • Would it be acceptable to use a global constant for a 2D array size, for example? I'm guessing it's the only way to do it. – Zac Blazic Mar 24 '11 at 13:34
    Not true: macros let you affect the code at preprocessing time, while global constants may affect code at compile time, and may do only at runtime. – Alexander Poluektov Mar 24 '11 at 13:44
  • @Alexander: You are correct technically. I am simplifying on purpose, e.g. compile time = when I hit the compile button. – Jon Mar 24 '11 at 13:47
3

Using constants instead of #define is very much to be preferred. #define replaces the token dumbly in every place it appears, and can cause all sorts of unintended consequences.

Passing values instead of using globals is good practice. It makes the code more flexible and modular, and more testable. Try googling for "parameterise from above".

Pete
  • Depends on what you're using the globals for - personally I'd have no problem with someone having a global constant `PI`, for example. – Stuart Golodetz Mar 24 '11 at 13:20
    Also, constants can be declared inside a namespace or a class, and can be made protected or private. This, together with the fact that constants are typed (macro substitution is not), makes for safer code. – Juancho Mar 24 '11 at 13:20
    Efficiency-wise, a string constant for example will reside only once in memory. A string from a #define will be substituted several times into your code and probably into your executable. – Juancho Mar 24 '11 at 13:22
  • @Stuart: that's true of course, all rules are guidelines. – Pete Mar 24 '11 at 13:34
  • What do I do in the case of a 2D array that needs the column size in the prototype? – Zac Blazic Mar 24 '11 at 13:42
2

You should never use either #defines or const variables to communicate array sizes between functions; it's better to pass the sizes explicitly.

Instead of:

#include <stdlib.h>
#include <string.h>

#define TYPICAL_ARRAY_SIZE 4711

void fill_with_zeroes(char *array)
{
  memset(array, 0, TYPICAL_ARRAY_SIZE);
}

int main(void)
{
  char *za;

  if((za = malloc(TYPICAL_ARRAY_SIZE)) != NULL)
  {
    fill_with_zeroes(za);
  }
}

which uses a (shared, imagine it's in a common header or something) #define to communicate the array size, it's much better to just pass it to the function as a real argument:

void fill_with_zeroes(char *array, size_t num_elements)
{
  memset(array, 0, num_elements);  /* sizeof (char) == 1. */
}

Then just change the call site:

int main(void)
{
  const size_t array_size = 4711;
  char *za;

  if((za = malloc(array_size)) != NULL)
  {
    fill_with_zeroes(za, array_size);
  }
}

This makes the size local to the place that allocated the array; there's no need for the called function to magically "know" something about its caller that is not communicated through its arguments.

If the array is non-dynamically allocated, we can do even better and remove the repeated symbolic size even locally:

int main(void)
{
  char array[42];
  fill_with_zeroes(array, sizeof array / sizeof *array);
}

Here, the well-known sizeof x / sizeof *x expression is used to (at compile-time) compute the number of elements in the array.

unwind
1

Constants are better. A key difference between the two is that constants are type-safe, while a #define is plain textual substitution with no type checking.

DQP
0

You shouldn't use values defined with #define as constants. Defines are mostly used to keep the compiler from compiling some parts of the code, depending on your needs at compile time (platform-dependent choices, compile-time optimization, and so on).

So if you are not using #define for those reasons, avoid it and use constant values.

Heisenbug