31

In some code I am working on, I have come across strange re-definitions of truth and falsehood. I have seen such things before, used to make checks more strict, but this one seems a little bizarre to me and I wonder if anyone can tell me what could be a good reason for such definitions. See below, with my comments next to them:

#define FALSE (1 != 1) // why not just define it as "false" or "0"?
#define TRUE (!FALSE)  // why not just define it as "true" or "1"?

There are many other strange oddities in this code base. Like there are re-definitions for all the standard types like:

#define myUInt32 unsigned int // why not just use uint32_t from stdint?

All these little "quirks" make me feel like I am missing something obvious, but I really can't see the point :(

Note: Strictly speaking, this is C++ code, but it could have been ported from a C project.

code_fodder

7 Answers

35

The intent appears to be portability.

#define FALSE (1 != 1) // why not just define it as "false" or "0"?
#define TRUE (!FALSE)  // why not just define it as "true" or "1"?

These have boolean type in languages that support it (C++), while still providing useful numeric values in those that don't (in C, even C99 and C11, despite gaining an explicit boolean datatype, the expression has type int).

Having booleans where possible is good for function overloading.
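As a minimal sketch of what that buys you (the report overloads are illustrative, not from the code in question):

#include <iostream>

#define FALSE (1 != 1) // bool in C++, int in C
#define TRUE  (!FALSE)

void report(bool b) { std::cout << "bool overload: " << b << '\n'; }
void report(int n)  { std::cout << "int overload: "  << n << '\n'; }

int main() {
    report(TRUE); // picks report(bool), since (!(1 != 1)) is a bool prvalue in C++
    report(1);    // picks report(int)
}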

#define myUInt32 unsigned int // why not just use uint32_t from stdint?

That's fine if stdint is available. You may take such things for granted, but it's a big wide world out there! This code recognises that.

Disclaimer: Personally, I would stick to the standards and simply state that compilers released later than 1990 are a prerequisite. But we don't know what the underlying requirements are for the project in question.

TRWTF is that the author of the code in question did not explain this in comments alongside.

Lightness Races in Orbit
11

#define FALSE (1 != 1) // why not just define it as "false" or "0"?

I think it is because the type of the expression (1 != 1) depends on the language's support for a boolean type: if it is C++, the type is bool; otherwise it is int.

On the other hand, 0 is always int in both languages, and false is not recognized in C.
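You can verify both claims at compile time; a quick sketch, assuming a C++17 compiler for std::is_same_v:

#include <type_traits>

static_assert(std::is_same_v<decltype(1 != 1), bool>, "in C++ a comparison yields bool");
static_assert(std::is_same_v<decltype(0), int>, "the literal 0 is always int");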

Nawaz
7

Strictly speaking, this is C++ code, but it could have been ported from a C project.

This is about portability, as mentioned previously, but it actually goes far beyond that. It's a clever use of the language definitions in order to stay compliant with both languages.

These absurd-looking macros are not as absurd as they appear at first glance; they are in fact ingenious, as they guarantee for both C and C++ that TRUE and FALSE have the correct values (and type) of true and false, even if the compiler is a C compiler that doesn't have those keywords (so you can't trivially write something like #define TRUE true).

In C++ code, such a construct would be useless, since the language defines true and false as keywords.
However, in order to have C and C++ seamlessly interoperate in the same code base, you need "something" that works for both (unless you want to use a different code style).

The way these macros are defined is a testament to how deliberately vague the C++ standard is about what values true and false actually have. The C++ standard states:

Values of type bool are either true or false.
[...]
A zero value, null pointer value, or null member pointer value is converted to false; any other value is converted to true.
[...]
A prvalue of type bool can be converted to a prvalue of type int, with false becoming zero and true becoming one.

Note how it says that two particular values exist and what their names are, and what corresponding integer values they convert to and from, but it does not say what these values are. You might be inclined to think that the values are obviously 0 and 1 (and incidentally you might have guessed right), but the standard does not say so. The actual values are deliberately left unspecified.

This is analogous to how pointers (and in particular the null pointer) are defined. Remember that an integer literal of zero converts to the null pointer, and that a null pointer converts to false, compares equal, and so on.
A lot is said about which converts to what and whatnot, but it doesn't say anywhere that a null pointer has to have a binary representation of zero (and in fact, there exist some exotic architectures where this isn't the case!).
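For illustration, a minimal sketch of the conversions the quoted passages do pin down:

#include <cassert>

int main() {
    int* p = 0; // the integer literal 0 converts to the null pointer value
    assert(!p);                           // a null pointer converts to false
    assert(static_cast<int>(false) == 0); // bool -> int: false becomes zero
    assert(static_cast<int>(true) == 1);  // bool -> int: true becomes one
}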

Damon
This is an idiom with multiple origins. We used something very much like this to prevent silly errors in an environment where the application source code was split among Pascal, C and multiple assemblers. This was a natural technique for expressing "truthiness" (to steal a Colbertism), as each language had different numerical versions of TRUE and FALSE. – Mike Housky Oct 14 '13 at 13:12
6

Many of them have historical reasons: old code migrated from C, code written for non-standard C++ compilers, cross-compiler code (portability), backward compatibility, house code styles, and plain bad habits.

Some compilers did not have <cstdint> for integer types like uint32_t, or did not have <cstdbool>. A good programmer had to define everything and use the preprocessor heavily to make a program well defined across different compilers.

Today, we can use <cstdint>, true/false, <cstdbool>, ... and everyone is happy!
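For comparison, the modern spelling of the question's two idioms needs no preprocessor tricks at all:

#include <cstdint>

std::uint32_t counter = 0; // instead of myUInt32
bool done = false;         // instead of FALSE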

masoud
5

The nice thing about this definition is that it avoids an implicit conversion from TRUE or FALSE to an integer. This is useful to make sure the compiler can't choose the wrong function overload.
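To see the failure mode this avoids, compare a plain integer definition against the expression form; a sketch in which the handle overloads are illustrative:

#include <iostream>

#define INT_TRUE  1            // plain int
#define BOOL_TRUE (!(1 != 1))  // bool in C++

void handle(int)  { std::cout << "int overload\n"; }
void handle(bool) { std::cout << "bool overload\n"; }

int main() {
    handle(INT_TRUE);  // calls handle(int): probably not what a boolean flag intends
    handle(BOOL_TRUE); // calls handle(bool)
}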

4

C didn't have a native boolean type, so you couldn't strictly use false or true without defining them elsewhere; and how would you then define them?

The same argument applies to myUInt32: C originally didn't have uint32_t and the other types in stdint, so this provides a means of ensuring you get an integer of the correct size. If you port to a different architecture, you just need to change the definition of myUInt32 to whatever equates to an unsigned integer 32 bits wide, be that a long or a short, as sketched below.
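A sketch of such a port-time shim, using a typedef rather than the question's #define (the platform macro TARGET_HAS_16BIT_INT is an illustrative assumption):

#if defined(TARGET_HAS_16BIT_INT)
typedef unsigned long myUInt32; // int is only 16 bits here; long is at least 32
#else
typedef unsigned int myUInt32;  // int is 32 bits wide on this target
#endif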

mjs
1

There is a nice explanation which states the difference between false and FALSE; you can find it here. I think it might help in understanding this a bit further, though most of the answers have already explained it.

HVar