
Can you write preprocessor directives that return a std::string or char*?

For example, in the case of integers:

#define square(x) (x*x)

int main()
{
   int x = square(5);
}

I'm looking to do the same but with strings, like a switch-case pattern: if I pass 1 it should return "One", 2 should return "Two", and so on.

a3f
cpx
  • Why not just a normal function? – Martin York Mar 11 '10 at 06:10
  • What Martin said - and your `square()` macro example shows one reason why: it's so easy to get macros wrong. The `square()` macro should be more like `#define square(x) ((x)*(x))` to avoid problems with things like `square(1 + 4)` returning 9 instead of 25. Even with that fix, it's difficult to prevent incorrect behavior with arguments that have side effects. A function avoids these problems and will likely have no noticeable impact on performance (especially if it can be made an `inline`). (See the sketch after these comments.) – Michael Burr Mar 11 '10 at 06:50
  • With simple macros or inline functions, is the performance gain achieved only at compile time, or at run time as well? – cpx Mar 11 '10 at 07:04
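To make Michael Burr's comment concrete, here is a minimal sketch of the expansion pitfall (the SQUARE_BAD/SQUARE_OK names are mine, not from the question):

#include <iostream>

#define SQUARE_BAD(x) (x*x)      // unparenthesized, like the question's square()
#define SQUARE_OK(x)  ((x)*(x))  // the parenthesized form from the comment

int main()
{
   std::cout << SQUARE_BAD(1 + 4) << '\n';  // expands to (1 + 4*1 + 4): prints 9
   std::cout << SQUARE_OK(1 + 4) << '\n';   // expands to ((1 + 4)*(1 + 4)): prints 25
}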

4 Answers


You don't want to do this with macros in C++; a function is fine:

#include <iostream>

char const* num_name(int n, char const* default_=0) {
  // you could change the default_ to something else if desired

  static char const* names[] = {"Zero", "One", "Two", "..."};
  if (0 <= n && n < (sizeof names / sizeof *names)) {
    return names[n];
  }
  return default_;
}

int main() {
  std::cout << num_name(42, "Many") << '\n';
  char const* name = num_name(35);
  if (!name) { // using the null pointer default_ value as I have above
    // name not defined, handle however you like
  }
  return 0;
}

Similarly, that square should be a function:

inline int square(int n) {
  return n * n;
}

(Though in practice square isn't very useful; you'd just multiply directly.)


As a curiosity, though I wouldn't recommend it in this case (the above function is fine), a template meta-programming equivalent would be:

template<unsigned N> // could also be int if desired
struct NumName {
  static char const* name(char const* default_=0) { return default_; }
};
#define G(NUM,NAME) \
template<> struct NumName<NUM> { \
  static char const* name(char const* default_=0) { return NAME; } \
};
G(0,"Zero")
G(1,"One")
G(2,"Two")
G(3,"Three")
// ...
#undef G

Note that the primary limitation of the TMP example is that N must be a compile-time constant rather than an arbitrary runtime int.
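For illustration, usage of the template version might look like this (a sketch assuming the definitions above are in scope):

#include <iostream>

int main() {
  std::cout << NumName<2>::name() << '\n';         // "Two", from the G(2,"Two") specialization
  std::cout << NumName<99>::name("Many") << '\n';  // "Many", the primary template's default_
}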

  • This makes me wonder if this problem is achievable with template metaprogramming in some deviously clever way... – Chris Lutz Mar 11 '10 at 06:26
  • @Chris: Sure, you'd have to use `NumName<N>::name()` (call an inline static function) and specialize for all the values you care about. I don't see the need, however. –  Mar 11 '10 at 06:27
  • @Roger - I was wondering more if that would allow us to do neat tricks to simplify larger numbers into combinations of smaller numbers, but the template system doesn't allow for ranges (like, say, `template<> struct NumName<21-29>`) so it ends up not being very useful. – Chris Lutz Mar 11 '10 at 07:30
  • @Chris: The template system certainly allows for that, though not directly. You'd typically do that by inheriting `NumName<N>` from `NumNameImpl<HorribleExpression<N>::value>`. You can now specialize `NumName`, `HorribleExpression` and/or `NumNameImpl` – MSalters Mar 11 '10 at 10:22

A #define preprocessor directive substitutes a string of characters into the source code. The case...when construct you want is still not trivial:

#define x(i) ((i)==1?"One":((i)==2?"Two":"Many"))

might be a start -- but defining something like

static char const* xsof[] = {"One", "Two", "Many"};

and

#define x(i) xsof[std::max(0, std::min((i)-1, (int)(sizeof xsof / sizeof xsof[0]) - 1))]

seems more reasonable and better-performing.

Edit: per Chris Lutz's suggestion, made the second macro automatically adjust to the xsof definition; per Mark's, made the count 1-based.
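For what it's worth, a self-contained sanity check of the array-based approach (a sketch; the includes and std:: qualifications are my additions):

#include <algorithm>
#include <iostream>

static char const* xsof[] = {"One", "Two", "Many"};

#define x(i) xsof[std::max(0, std::min((i)-1, (int)(sizeof xsof / sizeof xsof[0]) - 1))]

int main() {
  std::cout << x(1) << '\n';  // "One"
  std::cout << x(2) << '\n';  // "Two"
  std::cout << x(7) << '\n';  // out of range, clamped to "Many"
}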

Alex Martelli

I have seen this...

#define STRING_1() "ONE"
#define STRING_2() "TWO"
#define STRING_3() "THREE"
...

#define STRING_A_NUMBER_I(n) STRING_##n()

#define STRING_A_NUMBER(n) STRING_A_NUMBER_I(n)  

I believe this extra step is to make sure n is evaluated, so if you pass 1+2, it gets transformed to 3 before being passed to STRING_A_NUMBER_I. This seems a bit dodgy; can anyone elaborate?
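As a sketch of what the indirection actually buys (MY_NUM here is a hypothetical macro I added, not part of the answer): the outer macro expands its argument before the inner macro's token pasting happens, so a macro name can be passed through, though a bare 1+2 still cannot:

#include <iostream>

#define STRING_1() "ONE"
#define STRING_2() "TWO"
#define STRING_3() "THREE"

#define STRING_A_NUMBER_I(n) STRING_##n()
#define STRING_A_NUMBER(n) STRING_A_NUMBER_I(n)

#define MY_NUM 3

int main() {
  std::cout << STRING_A_NUMBER(2) << '\n';       // "TWO"
  std::cout << STRING_A_NUMBER(MY_NUM) << '\n';  // "THREE": MY_NUM expands to 3 first
  // STRING_A_NUMBER_I(MY_NUM) would paste STRING_MY_NUM() and fail to compile
}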

matt
  • Boost uses this method for doing its preprocessor code generation stuff. It was fun to decipher how all of that worked. Also, @Chris Lutz, it would work if `STRING_##n` had the open/close brackets on it – Grant Peters Mar 11 '10 at 07:26
  • @Dave: sorry, I don't know what you mean exactly. Also, you would have to make sure you are passing in an actual number; for example, you can't pass 1+2, you have to pass 3 – matt Mar 11 '10 at 07:33
  • well I think it's the reason why *Roger Pate* defined it with two parameters to give each a value, instead of relying on a default. – cpx Mar 11 '10 at 07:38

You cannot turn integers into strings so that 1 ---> "One", 2 ---> "Two", etc., except by enumerating each value.

You can convert an argument value into a string with the C preprocessor:

#define STRINGIZER(x)   #x
#define EVALUATOR(x)    STRINGIZER(x)
#define NAME(x)         EVALUATOR(x)

NAME(123)    // "123"

#define N   123
#define M   234

NAME(N+M)    // "123+234"
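To see what the two-level expansion buys, compare stringizing directly with going through NAME (a runnable sketch using the same macros):

#include <iostream>

#define STRINGIZER(x)   #x
#define EVALUATOR(x)    STRINGIZER(x)
#define NAME(x)         EVALUATOR(x)

#define N   123
#define M   234

int main() {
  std::cout << STRINGIZER(N+M) << '\n';  // prints N+M: # suppresses expansion of its argument
  std::cout << NAME(N+M) << '\n';        // prints 123+234: N and M expand before stringizing
}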

See also SO 1489932.

Jonathan Leffler