
I am reading Douglas Crockford's book, JavaScript: The Good Parts, and he says:

JavaScript has a single number type. Internally, it is represented as 64-bit floating point, the same as Java’s double. Unlike most other programming languages, there is no separate integer type, so 1 and 1.0 are the same value. This is a significant convenience because problems of overflow in short integers are completely avoided...

I am not overly familiar with other languages, so I would like a bit of an explanation. I can understand why 64 bits helps, but his statement seems to be about the lack of separate float and double types.

What would be an example (pseudocode perhaps) of a short-integer overflow situation that won't occur in JS?

Amruth
cyberwombat
  • integers in JS can be from -(2^53 - 1) to (2^53 - 1) .. effectively a signed 54bit integer (but not quite really, but that's not relevant)... short integers are 16bit ... 54bits is bigger than 16bits ... so no overflow problems – Jaromanda X Sep 06 '16 at 03:47
  • example for signed short ... 32767 + 1 is 32768 in JS, in other languages it's -32768 – Jaromanda X Sep 06 '16 at 03:48
  • @JaromandaX ... should that be `-(2^53 + 1)`? I don't know...just merely curious. – rnevius Sep 06 '16 at 03:49
  • 2
    @rnevius - no, I have it right -basically it's +/-(2^53 - 1) – Jaromanda X Sep 06 '16 at 03:49
  • @JaromandaX There are many integers outside that range that can be represented too (most representable integers are outside that range), but that is the largest range within which all integers can be represented. – Paul Sep 06 '16 at 03:52
  • Related [Convert exponential number to a whole number in javascript](http://stackoverflow.com/questions/14037684/convert-exponential-number-to-a-whole-number-in-javascript), [Converting large numbers from binary to decimal and back to binary in JavaScript](http://stackoverflow.com/questions/39334494/converting-large-numbers-from-binary-to-decimal-and-back-to-binary-in-javascript) – guest271314 Sep 06 '16 at 03:55
  • yes, sorry, I wasn't clear about what the +/-2^53 was ... `Number.MAX_SAFE_INTEGER` and `Number.MIN_SAFE_INTEGER` to be exact - simply the range where `value !== value+1` is guaranteed – Jaromanda X Sep 06 '16 at 03:56
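The range discussed in these comments can be checked directly in a JS console; a quick sketch using the standard Number constants:

```javascript
// The "safe" integer range: every integer in [-(2^53 - 1), 2^53 - 1]
// has an exact 64-bit float representation.
console.log(Number.MAX_SAFE_INTEGER);                       // 9007199254740991
console.log(Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1); // true

// Just past the safe range, distinct integers start to collapse together:
var n = Number.MAX_SAFE_INTEGER;
console.log(n + 1 === n + 2);              // true
console.log(Number.isSafeInteger(n));      // true
console.log(Number.isSafeInteger(n + 1));  // false
```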

1 Answer


Suppose you had an 8-bit unsigned number.

Here is a selection of decimal and binary representations:

1: 00000001

2: 00000010

15: 00001111

255: 11111111

If you have 255 and add 1, what happens? There are no more bits, so it wraps around to

0: 00000000
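JavaScript's plain numbers never do this, but typed arrays do store fixed-width integers, so the same wrap-around can be reproduced there; a small sketch:

```javascript
// A Uint8Array element is a real 8-bit unsigned integer:
// values are stored modulo 256, just like the table above.
var byte = new Uint8Array(1);
byte[0] = 255;           // 11111111
byte[0] = byte[0] + 1;   // 256 doesn't fit in 8 bits...
console.log(byte[0]);    // 0 — wrapped around to 00000000
```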

Here's a demonstration in C# using uint (an unsigned 32-bit integer):

using System;

public class Program
{
    public static void Main()
    {
        uint n = 4294967294;
        for (int i = 0; i < 4; ++i)
        {
            Console.WriteLine("n = {0}", n);
            n = n + 1; // wraps from 4294967295 (uint.MaxValue) to 0
        }

    }
}

This will output:

n = 4294967294
n = 4294967295
n = 0
n = 1

This is the problem you don't get in JavaScript.
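For comparison, here is the same addition on a plain JS number — there is no uint type to overflow, so it simply keeps counting (a quick sketch):

```javascript
// 4294967295 is uint.MaxValue in the C# example above.
var n = 4294967295;
n = n + 1;
console.log(n); // 4294967296 — no wrap; still far below 2^53
```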


You get different problems.

For example:

var n = 9007199254740991;
var m = n + 1;
var p = m + 1;
alert('n = ' + n + ' and m = ' + m + ' and p = ' + p);

You will see:

n = 9007199254740991 and m = 9007199254740992 and p = 9007199254740992

Rather than wrapping around, your number representations will shed accuracy.
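Concretely, above 2^53 the gap between adjacent representable doubles becomes 2, so odd integers silently round to an even neighbour — a sketch:

```javascript
// Above 2^53, 64-bit floats can only represent every second integer.
var big = Math.pow(2, 53);  // 9007199254740992
console.log(big + 1);       // 9007199254740992 — 2^53 + 1 is not representable
console.log(big + 2);       // 9007199254740994 — the next representable integer
```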


Note that this 'shedding accuracy' behavior is not unique to JavaScript; it's what you expect from floating-point data types. Another .NET example:

using System;

public class Program
{
    public static void Main()
    {
        float n = 16777214; // 2^24 - 2
        for(int i = 0; i < 4; ++i) 
        {
            Console.WriteLine(string.Format("n = {0}", n.ToString("0")));
            Console.WriteLine("(n+1) - n = {0}", (n+1)-n);
            n = n + 1;                
        }
    }
}

This will output (float.ToString rounds the value to 7 significant digits, which is why the displayed numbers differ slightly from the stored ones):

n = 16777210
(n+1) - n = 1
n = 16777220
(n+1) - n = 1
n = 16777220
(n+1) - n = 0
n = 16777220
(n+1) - n = 0
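The same single-precision behavior can be reproduced in JS with Math.fround, which rounds a number to the nearest 32-bit float — a sketch, not part of the original answer:

```javascript
// Math.fround rounds to the nearest 32-bit float, like C#'s float type.
var n = 16777215;  // 2^24 - 1: still exactly representable as a float
console.log(Math.fround(n + 1) - Math.fround(n)); // 1

n = 16777216;      // 2^24: from here on, floats skip every other integer
console.log(Math.fround(n + 1) - Math.fround(n)); // 0 — n + 1 rounds back to n
```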
Andrew Shepherd