
UInt64.max / 2 is represented by 0111..1111 in memory. Add 1 and it becomes 1000..0000. So the first bit is set, as if it were a sign bit, and we get -1. But the CPU thinks it's -9 223 372 036 854 775 808. Why does it work so complexly?

You can see that it's true because of this issue in the Swift playground: Why is UInt64 max equal -1 in Swift?

var max = UInt64.max / 2 + 1 // playground shows -1 because it treats it as Int64 
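The bit patterns involved can be inspected directly (a sketch; `Int64(bitPattern:)` reinterprets the same bits as a signed value):

```swift
let half = UInt64.max / 2          // 0111...1111 (63 ones)
let top = half + 1                 // 1000...0000 (only the top bit set)
print(String(top, radix: 2).count) // 64 — the leading bit is set
print(Int64(bitPattern: top))      // -9223372036854775808 when read as Int64
```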
Dmitry
    What? Can you please demonstrate with some code? – NathanOliver Sep 25 '15 at 17:00
  • You'd better answer rather than downvote an interesting question. – Dmitry Sep 25 '15 at 17:00
    look into two's complement representation. – jaggedSpire Sep 25 '15 at 17:00
    "UInt62.max / 2 is represented by 1000..0000 in the memory." -- No, it's not. Not even if you meant 64 rather than 62. –  Sep 25 '15 at 17:01
  • The sample in the question is fixed. Demo code added. Sorry for the mistake. – Dmitry Sep 25 '15 at 17:03
    It is not an interesting question; you need to do some studying about the actual representation of numbers (and presumably other things) in memory. Also the sizes of int, uint, UInt64, Int64. – zaph Sep 25 '15 at 17:04
  • In my Playground (Xcode 7.1 beta 2) it shows 9223372036854775808. – MirekE Sep 25 '15 at 17:27
  • Xcode 7.0 shows wrong value. Does 7.1 beta 2 work with iPhone? Xcode 7.1 beta 1 doesn't. – Dmitry Sep 25 '15 at 17:27
  • No, it's just issue in the playground - see the provided link in the question. – Dmitry Sep 25 '15 at 17:33
  • @Altaveron Open the Apple Calculator; command-3 will take you to the programmer version. It displays in decimal (base 10) and hexadecimal (base 16). Take some time playing with that to get a better understanding of the number representations. – zaph Sep 25 '15 at 17:38
  • Yes 7.0 shows wrong result. That suggests that it is a bug that has been fixed. I did not try 7.1 b2 with a device yet, sorry. – MirekE Sep 25 '15 at 17:40

1 Answer


Yes. In fact, -1 is not represented as 1 with a sign bit, but rather as all bits set to one. This is called "two's complement" representation, and it is used in most modern processors.

Read more about it here:

https://en.wikipedia.org/wiki/Two%27s_complement

One of the reasons for this is that it makes arithmetic operations involving both negative and positive numbers easier. If -1 were represented as 1 with a sign bit, and we attempted to add 1 to it naively, we would get 2 with a sign bit instead of zero. With two's complement representation you can just add the numbers as if they were unsigned and get the correct result.
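This can be checked directly in Swift (a minimal sketch; `UInt64(bitPattern:)` reinterprets a signed value's bits as unsigned, and `&+` is the overflow addition operator):

```swift
// -1 in two's complement: all 64 bits set, i.e. the same bits as UInt64.max.
let minusOneBits = UInt64(bitPattern: Int64(-1))
print(minusOneBits == UInt64.max)     // true
print(String(minusOneBits, radix: 2)) // sixty-four 1s

// Adding as if unsigned still gives the right signed answer:
// -1 + 1 == 0, because 0xFFFF...FFFF &+ 1 wraps around to 0.
print(minusOneBits &+ 1)              // 0
```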

T.J. Crowder
Ishamael
    See also https://www.youtube.com/watch?v=lKTsv6iVxV4 – wimh Sep 25 '15 at 17:07
  • It also makes sense mathematically: -1 > -2, so -1 should have a larger bit pattern. – NathanOliver Sep 25 '15 at 17:08
    To expand on the "it's easier" part, crucially this means that the same [ALU](https://en.wikipedia.org/wiki/Arithmetic_logic_unit) circuit can be used for both addition and subtraction, calculating addition as `ADD(a,b,carry=0)` and subtraction as `ADD(a,~b,carry=1)`. – MooseBoys Sep 25 '15 at 17:11
    There's also some more explanation [here](http://stackoverflow.com/questions/1049722/what-is-2s-complement) on Stack Overflow. – jaggedSpire Sep 25 '15 at 17:15
  • The simplest explanation is that 1111 + 1 = (1)0000. So 1111 must be -1. Then -1 + 1 = 0. – Dmitry Sep 25 '15 at 17:29
    There is no -1 in UInt64 – MirekE Sep 25 '15 at 17:36