I am reading Douglas Crockford's book, JavaScript: The Good Parts, and he says:
JavaScript has a single number type. Internally, it is represented as 64-bit floating point, the same as Java’s double. Unlike most other programming languages, there is no separate integer type, so 1 and 1.0 are the same value. This is a significant convenience because problems of overflow in short integers are completely avoided...
I am not overly familiar with other languages, so I would like a bit of an explanation. I can understand why 64 bits help, but his statement seems to hinge on the lack of separate float and double types.
What would be an example (pseudocode perhaps) of a short integer overflow situation that won't occur in JS?
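To make the question concrete, here is my rough guess at what such an overflow might look like in Java with a 16-bit short (this is just my own sketch, I'm not sure it is exactly the situation Crockford has in mind):

```java
public class ShortOverflow {
    public static void main(String[] args) {
        short a = 32767;        // the maximum value a 16-bit signed short can hold
        a += 1;                 // one more than the maximum wraps around instead of becoming 32768
        System.out.println(a);  // prints -32768

        // My understanding is that the equivalent arithmetic in JavaScript
        // stays exact, because every number is a 64-bit float:
        //   32767 + 1 === 32768
    }
}
```

Is that the kind of overflow he means, and does JS really avoid it simply because there is no short type to overflow?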