
Python is an object-oriented programming language, yet it treats 0 as False, while a language like Ruby evaluates 0 to true. I believe that is because in Ruby 0 is a number, numbers are objects, and objects evaluate to true simply because they exist.

All I know about these conventions is that they exist for performance and/or design reasons, but:

What are the advantages of treating 0 as False? What are the disadvantages of treating 0 as true?
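
For reference, here is a minimal illustration of the Python behaviour the question is about (my own snippet, not part of the original post):

```python
# In Python, 0 is falsy: it converts to False and fails an `if` test.
print(bool(0))   # False
print(bool(1))   # True

if 0:
    print("0 is truthy")
else:
    print("0 is falsy")   # this branch runs in Python
```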

thegrinner
yeyo
  • FWIW, JavaScript and PHP treat `0` as `false` as well. This does not have anything to do with OO programming languages IMO. – Felix Kling May 31 '13 at 18:27
  • Duplicate of a question at programmers.SE: [Why is 0 false?](http://programmers.stackexchange.com/q/198284) (found via Google). – Felix Kling May 31 '13 at 18:29
  • Related: http://stackoverflow.com/a/3175293/846892 – Ashwini Chaudhary May 31 '13 at 18:31
  • This probably has a lot more to do with strict/loose typing, than it does with the language being object oriented or not. For what it's worth, a large majority of languages consider 0 to be "false". – CmdrMoozy May 31 '13 at 18:32
  • @FelixKling it does not say what the specific advantages or disadvantages are; IMO what it says is "it makes sense". Although there is a comment, made by Mason Wheeler, which makes perfect sense to me (and to 29 others). – yeyo May 31 '13 at 18:41

2 Answers


This probably has to do with Python's roots being in C. In C, false == 0. Also at an academic level, in binary representation, 0 is almost always regarded as false.
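
As a quick illustration of that C heritage (my own sketch, not part of the answer): Python's `bool` type is a subclass of `int`, and `False` compares equal to `0`.

```python
# bool is literally an int subclass, echoing C's convention that false == 0.
print(issubclass(bool, int))  # True
print(False == 0)             # True
print(True == 1)              # True
print(False + 1)              # 1 -- bools behave as ints in arithmetic
```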

Captain Skyhawk

Almost every common language (everything in the C family, Haskell, Python, etc.) interprets 0 as False. This makes a lot of sense, because the question "Does this variable contain something meaningful?" is most often answered True when the value is non-zero and False when it is 0.
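
For illustration (my own example, not from the answer), Python applies the same "does it contain something meaningful?" idea to empty containers as well as to zero:

```python
# Zero and empty containers are falsy; non-zero and non-empty values are truthy.
for value in (0, 0.0, "", [], {}, None, 42, -1, "hi", [0]):
    print(repr(value), "->", bool(value))
```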

The only place I recall seeing 0 being used as True is in the Unix shell, because all other exit codes from a program indicate a particular error, and 0 means no error.

If Ruby treats 0 as True, I can only imagine it's to fit with Unix scripting and handle return codes.
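
To make the exit-code convention concrete, here is a sketch using Python's standard `subprocess` module (my own example; it assumes a Unix-like system where the `true` and `false` commands exist): success is signalled by the integer 0, so you test `returncode == 0` rather than treating the code as truthy.

```python
import subprocess

# "true" exits with status 0 (success); "false" exits with status 1 (failure).
ok = subprocess.run(["true"]).returncode    # 0
bad = subprocess.run(["false"]).returncode  # 1

# Note the inversion: success is the *falsy* integer 0.
print(ok == 0)    # True  -- the command succeeded
print(bool(ok))   # False -- yet 0 is falsy in Python
print(bad == 0)   # False -- the command failed
```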

  • I've always thought of the `0` (or any other number) a C function returns as the answer to *"Do you see errors?"*. Does the function return `17`? That means: yes, error code 17. Does it return `42`? Yes, error code 42. Does it return `0`? No errors. – ypercubeᵀᴹ May 31 '13 at 18:39
  • To be precise, shells consider success, not truth. Certain commands, like `test`, are designed to signal success when their arguments evaluate to true (which often includes non-zero values). – chepner May 31 '13 at 18:39
  • chepner: Run `for i in true false; do echo -n "$i: "; $i; echo $?; done` if you don't believe me. – May 31 '13 at 19:18