
The benefits of using two's complement for storing negative values in memory are well known and well discussed on this board.

Hence, I'm wondering:

Do or did any architectures exist that chose a different way of representing negative values in memory than two's complement? If so, what were the reasons?

Multisync
  • Possible duplicate of [Are there any non-twos-complement implementations of C?](https://stackoverflow.com/questions/12276957/are-there-any-non-twos-complement-implementations-of-c) – phuclv Dec 25 '17 at 03:07

1 Answer


Sign-magnitude existed as the most obvious, naive representation of signed numbers.

One's complement has also been used on real machines.

In both of those representations, the positive and negative ranges span equal intervals. A downside is that both contain a negative-zero representation that doesn't naturally occur in the sort of integer arithmetic commonly used in computation. And of course, the hardware for two's complement turns out to be much simpler to build.
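The three representations can be sketched as encoding functions; this is an illustrative model (assuming an 8-bit width), not any particular machine's implementation:

```python
# Encode a small integer in three historical signed representations (8-bit).
def twos_complement(n, bits=8):
    # Wrap into the unsigned range: -5 -> 0b11111011
    return n & ((1 << bits) - 1)

def ones_complement(n, bits=8):
    mask = (1 << bits) - 1
    # Negative values: invert every bit of the magnitude
    return n & mask if n >= 0 else ~(-n) & mask

def sign_magnitude(n, bits=8):
    sign = (1 << (bits - 1)) if n < 0 else 0
    # Top bit is the sign; the rest hold the magnitude unchanged
    return sign | (abs(n) & ((1 << (bits - 1)) - 1))

print(format(twos_complement(-5), "08b"))  # 11111011
print(format(ones_complement(-5), "08b"))  # 11111010
print(format(sign_magnitude(-5), "08b"))   # 10000101
# Negative zero exists only in the latter two:
# 11111111 (one's complement) and 10000000 (sign-magnitude) both mean zero.
```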

Note that the above applies to integers. Common IEEE-style floating point representations are effectively sign-magnitude, with some more details layered into the magnitude representation.
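This is easy to see by inspecting raw bit patterns; a small sketch using Python's `struct` module to reinterpret a float as a 32-bit IEEE single:

```python
import struct

def float_bits(x):
    # Reinterpret a float as a 32-bit IEEE 754 single and show its bits
    return format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")

print(float_bits(0.0))   # all zeros
print(float_bits(-0.0))  # only the leading sign bit differs
print(float_bits(1.5))
print(float_bits(-1.5))  # identical to 1.5 except the sign bit
```

Because the format is sign-magnitude, negating a float flips exactly one bit, and a negative zero is representable, just as in the integer sign-magnitude case above.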

Phil Miller
    The *exponent* of IEEE FP uses a [biased representation](http://en.wikipedia.org/wiki/Exponent_bias), e.g., adding 127 for single precision. The linked Wikipedia article states that the motivation was to simplify comparison. Sign-magnitude format simplifies sign bit manipulation (e.g., absolute value; I *think* IEEE 754 allows sign bit manipulation to ignore signaling NaNs) and multiplication (more common for FP than integer). (By the way, negative zero can be used as an integer NaN.) (Not sure I would say **much** simpler to build, excluding staggered ALUs.) –  Sep 08 '14 at 21:28
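The comparison-simplification point from the comment above can be demonstrated directly: with a biased exponent, the raw bit patterns of positive floats order the same way as the floats themselves. A small sketch (single precision, bias 127):

```python
import struct

def raw(x):
    # Raw 32-bit pattern of an IEEE 754 single-precision float
    return struct.unpack(">I", struct.pack(">f", x))[0]

# Biased exponents make positive floats compare like unsigned integers
assert raw(1.0) < raw(2.0) < raw(1e30)

# The stored exponent of 1.0 is its true exponent (0) plus the bias (127)
exponent_field = (raw(1.0) >> 23) & 0xFF
print(exponent_field)  # 127
```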