I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that in every language I've worked with, if you could even get a value for a boolean, the value for FALSE was 0. Did TRUE use to be 0? If so, when did we switch?
22 Answers
The 0 / non-0 thing your coworker is confused about is probably a reference to when people use numeric return values to indicate success, not truth (e.g. in bash scripts and some styles of C/C++).
Using 0 = success allows for much greater precision in specifying causes of failure (e.g. 1 = missing file, 2 = missing limb, and so on).
As a side note: in Ruby, the only false values are nil and false. 0 is true, but not in opposition to other numbers; 0 is true simply because it's an object, and every object other than nil and false counts as true.
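As a sketch of that idea in C (the specific failure codes here are made up for illustration):
#include <stdio.h>

/* Hypothetical failure codes -- the values are arbitrary;
   the only fixed convention is that 0 means success. */
#define EXIT_MISSING_FILE 1
#define EXIT_BAD_USAGE    2

int main(int argc, char *argv[])
{
    if (argc < 2)
        return EXIT_BAD_USAGE;    /* one distinct cause of failure */

    FILE *f = fopen(argv[1], "r");
    if (f == NULL)
        return EXIT_MISSING_FILE; /* another distinct cause */

    fclose(f);
    return 0;                     /* success: nothing more to say */
}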

It might be in reference to a result code of 0, which in most cases, after a process has run, meant "Hey, everything worked fine, no problems here."

- With the justification that there's nothing more to say if it worked, whereas if it failed, there are plenty of non-zero return codes to indicate what kind of failure it was. – slim Sep 19 '08 at 18:24
- And hence the unix command 'true' does nothing but issue a return code of 0. – Justsalt Sep 19 '08 at 18:28
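In spirit, that true command needs nothing more than this C program, whose only work is to exit with status 0:
/* true.c -- in spirit, everything the Unix 'true' command has to do */
int main(void)
{
    return 0;  /* exit status 0: success, which the shell treats as TRUE */
}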
I worked at a company with a large amount of old C code. Some of the shared headers defined their own values for TRUE and FALSE, and some did indeed have TRUE as 0 and FALSE as 1. This led to "truth wars":
/* like my constants better */
#undef TRUE
#define TRUE 1
#undef FALSE
#define FALSE 0

Several functions in the C standard library return an 'error code' integer as their result. Since noErr is defined as 0, a quick check can be 'if it's 0, it's OK'. The same convention carried over to a Unix process' 'result code'; that is, an integer that gives some indication of how a given process finished.
In Unix shell scripting, the result code of the command just executed is available, and is typically used to signify whether the command 'succeeded' or not, with 0 meaning success and anything else a specific non-success condition.
From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.
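As a sketch of that 'if it's 0, it's OK' pattern with a standard library call (the filename is just a placeholder):
#include <stdio.h>

int main(void)
{
    /* remove() follows the 0 == success convention: it returns 0
       if the file was deleted and non-zero if something went wrong. */
    if (remove("scratch.tmp") == 0)
        printf("Ok\n");
    else
        perror("remove");
    return 0;
}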
On a totally different plane, digital circuits frequently use 'negative logic'. That is, even if 0 volts is called 'binary 0' and some positive value (commonly +5V or +3.3V, but nowadays it's not rare to use +1.8V) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the reasons.
Note, however, that there's nothing 'ancient' about this, nor any 'switching time'. Everything I know about this is based on old conventions, but they are totally current and relevant today.

I'm not certain, but I can tell you this: tricks relying on the underlying nature of TRUE and FALSE are prone to error because the definition of these values is left up to the implementer of the language (or, at the very least, the specifier).

- Well, in a language like Perl, where there is a loose definition of truth, it's something that you learn not only to allow for but to love. But in the type-safe realm of most compiled languages, true and false are rather strict in their definitions. – stephenbayer Sep 19 '08 at 18:25
- Yes, but it does become a potential issue - even in the same language - if you're converting between compilers, platforms, or even just updating to the latest revision of your language. That's why I espouse being implementation-agnostic unless you're very certain of your platform or you need to be. – Sam Erwin Sep 19 '08 at 18:30
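A classic C illustration of the danger: the standard isdigit() function only promises a non-zero result for a digit, not exactly 1, so comparing its result against a home-grown TRUE constant is not portable:
#include <ctype.h>
#include <stdio.h>

#define TRUE 1   /* a typical home-grown definition */

int main(void)
{
    char c = '7';

    if (isdigit(c))          /* safe: any non-zero value counts as true */
        printf("digit\n");

    if (isdigit(c) == TRUE)  /* fragile: isdigit() may return any
                                non-zero value, not exactly 1 */
        printf("this may or may not print\n");

    return 0;
}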
System calls in the C standard library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on whether the condition evaluates to less than, equal to, or greater than zero.
eg: IF (I-15) 10,20,10
would test for the condition I == 15, jumping to line 20 if true (the expression evaluates to zero) and to line 10 otherwise.
Sam is right about the problems of relying on specific knowledge of implementation details.
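For the -1 flavour of the convention, a minimal sketch using the POSIX open()/close() calls (example.txt is just a placeholder name):
#include <fcntl.h>   /* open() */
#include <stdio.h>
#include <unistd.h>  /* close() */

int main(void)
{
    /* The classic Unix convention: -1 signals error, while a
       non-negative value (here, a file descriptor) signals success. */
    int fd = open("example.txt", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    if (close(fd) == 0)
        printf("closed cleanly\n");
    return 0;
}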

General rule:
Shells (DOS included) use "0" as "No Error"... which is not necessarily the same thing as "true".
Programming languages use non-zero to denote true.
That said, if you're in a language which lets you define TRUE and FALSE, define them and always use the constants.

Even today, in some languages (Ruby, Lisp, ...) 0 is true because everything except nil (and, in Ruby, false) is true. More often 1 is true. That's a common gotcha, so it's sometimes considered good practice not to rely on 0 being false, but to do an explicit test. Java requires you to do this.
Instead of this:
int x;
....
x = 0;
if (x) // might be ambiguous
{
}
Make it explicit:
if (0 != x)
{
}

- Technically, in Java, if statements are only valid for boolean expressions and integers do not cast to booleans. In Java, true is true and false is false, and neither is a number. – Mr. Shiny and New 安宇 Sep 29 '09 at 21:23
I recall doing some VB programming in an Access form where True was -1.

- I think many BASIC flavors evaluate zero as false and anything non-zero as true, but define the constant False as 0 and the constant True as -1 (or all 1's in binary). – C. Dragon 76 Sep 19 '08 at 18:50
- Most BASIC flavours have no logical operators; all operators are bitwise. Hence (NOT 0) is -1, and comparisons evaluate to -1 if true and 0 if false. – Artelius Nov 03 '08 at 21:54
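The bitwise point in the comment above is easy to check in C; assuming the usual two's-complement representation, NOT of 0 comes out as -1:
#include <stdio.h>

int main(void)
{
    int t = ~0;  /* bitwise NOT of 0: every bit set */

    /* On a two's-complement machine, all-bits-set is -1, which is
       why BASICs that compute True as NOT 0 end up with True == -1. */
    printf("~0 = %d\n", t);  /* prints -1 */
    return 0;
}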
I remember PL/1 had no boolean class. You could create a bit and assign it the result of a boolean expression. Then, to use it, you had to remember that 1 was false and 0 was true.
It's easy to get confused when bash's true/false return statements are the other way around:
$ false; echo $?
1
$ true; echo $?
0

For the most part, false is defined as 0, and true is non-zero. Some programming languages use 1, some use -1, and some use any non-zero value.
For Unix shells though, they use the opposite convention.
Most commands that run in a Unix shell are actually small programs. They pass back an exit code so that you can determine whether the command was successful (a value of 0), or whether it failed for some reason (1 or more, depending on the type of failure).
This is used in the sh/ksh/bash shell interpreters within the if/while/until commands to check conditions:
if command
then
    # successful
fi
If the command is successful (i.e., returns a zero exit code), the code within the statement is executed. Usually, the command that is used is the [ command, which is an alias for the test command.
In the C language, before C99 (and before C++), there was no such thing as a boolean type. Conditionals were done by testing ints. Zero meant false and any non-zero value meant true. So you could write
if (2) {
    alwaysDoThis();
} else {
    neverDoThis();
}
Fortunately, C++ introduced a dedicated boolean type.

I have heard of and used older compilers where true > 0, and false <= 0.
That's one reason you don't want to use if(pointer) or if(number) to check for zero: they might evaluate to false unexpectedly.
Similarly, I've worked on systems where NULL wasn't zero.
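On any standard-conforming compiler if (p) and if (p != NULL) behave identically; the explicit comparison below is purely the defensive style such systems encouraged:
#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof *p);

    /* Explicit comparison: doesn't lean on how the compiler
       maps pointer values onto truth values. */
    if (p != NULL) {
        *p = 42;
        free(p);
    }
    return 0;
}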

In any language I've ever worked in (going back to BASIC in the late 70s), false has been considered 0 and true has been non-zero.

I can't recall TRUE being 0. 0 is something a C programmer would return to indicate success, though. This can be confused with TRUE. It's not always 1 either. It can be -1 or just non-zero.

For languages without a built-in boolean type, the only convention that I have seen is to define TRUE as 1 and FALSE as 0. For example, in C, the if statement will execute the if clause if the conditional expression evaluates to anything other than 0.
I even once saw a coding guidelines document which specifically said not to redefine TRUE and FALSE. :)
If you are using a language that has a built-in boolean, like C++, then the keywords true and false are part of the language, and you should not rely on how they are actually implemented.

- I believe the keywords true and false have values specified by the standard, so you CAN rely on them. – Mark Ransom Sep 19 '08 at 20:21
- A number of languages, including older versions of VB, define true/false as -1/0. That is so that bitwise operations and logical operations do the same thing. – user11318 Sep 19 '08 at 23:50
In languages like C there was no boolean value, so you had to define your own. Could they have worked with a non-standard BOOL override?

DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!
DOS error codes are 0-255, and when tested using the 'errorlevel' syntax the test matches anything at or above the specified value. So the following matches 2 and above to the first goto, 1 to the second, and 0 (success) to the final one:
IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR

The SQL Server Database Engine optimizes storage of bit columns. If there are 8 or fewer bit columns in a table, the columns are stored as 1 byte. If there are from 9 up to 16 bit columns, they are stored as 2 bytes, and so on. The string values TRUE and FALSE can be converted to bit values: TRUE is converted to 1 and FALSE is converted to 0. Converting to bit promotes any nonzero value to 1.
Every language can end up with 0 as true or as false, so stop using numbers; use the words true and false (lol), or t and f with 1 byte of storage.
