In chapter 3.4.2 of *The C++ Programming Language*, Bjarne Stroustrup says the point of adding `int`s in a `double` would be to gracefully handle a number larger than the largest `int`. Are `double`s guaranteed to be able to hold the largest `int`s?
-
A `double` can hold a larger range of values at the cost of less precision. – Retired Ninja Jun 08 '22 at 18:30
-
You can hold any integer value in a double. A double, assuming IEEE 754, (from memory) has 15 digits of accuracy, which is enough for any `int` to be represented exactly. You will lose accuracy with doubles when dealing with larger values like `long long` though. – ChrisMM Jun 08 '22 at 18:31
-
A `double` typically has a 52-bit significand with an 11-bit exponent. This results in the ability to hold much larger numbers, but trades precision. The number stored is not EXACTLY what you may have intended. So a 64-bit `int`, the biggest `int` you're going to find on common computing hardware these days, will fit, but it'll look fuzzy down in the least significant digits. – user4581301 Jun 08 '22 at 18:31
-
Weird note: C++ doesn't specify the upper limit of `int`, just that it cannot be bigger than `long`, and `long` is similarly specified. Technically you could have an `int` of infinite size so long as `long` is also infinite, but realistically `int` won't be bigger than the register size of the computer it's running on without a really, really good (and weird) reason. – user4581301 Jun 08 '22 at 18:39
-
@user4581301: That said, virtually everything people use that they think of as a computer (excluding embedded device stuff) nowadays is native 64 bit; while most compilers still use 32 bit `int`s, they could, with minimal (or no) efficiency loss (and possible gain) make `int` a 64 bit type and immediately ruin the ability to store an `int` in a `double` without precision loss. They *don't*, largely because entirely too much code *relies* on `int` being 32 bits and they don't want to break it, but they *could*. – ShadowRanger Jun 08 '22 at 18:52
-
@ChrisMM: Re “You can hold any integer value in a double.”: No, you cannot. Every `double` format uses a finite number of bits and therefore can represent only a finite number of values. The number of integer values is infinite, so it is larger than the number of values a `double` format can represent. – Eric Postpischil Jun 08 '22 at 18:56
-
@ShadowRanger: Re “minimal (or no) efficiency loss”: Have you evaluated how much changing `int` to 64 bits will increase structure sizes? And how much that will increase memory use in programs with large numbers of data structures containing `int` members? And how much that will increase cache misses? And disk I/O? – Eric Postpischil Jun 08 '22 at 19:00
-
@EricPostpischil, I was referring to a 32-bit `int`. Maybe that wasn't entirely clear. – ChrisMM Jun 08 '22 at 19:16
-
@EricPostpischil: Sure. Same way doubling the size of pointers did that too. There are losses, especially when the switch causes caches to spill, but for the actual mathematical operations, it's minimal. If people don't want to use as much memory, they *can* specify `stdint.h` types, but as noted a lot of code was doing that by relying on `int` being a particular size. `int` was originally supposed to be "whatever the CPU native integer processing handled natively" (math auto-promoting to `int` makes more sense when you realize it was supposed to be "the biggest cheap integer type for math"). – ShadowRanger Jun 08 '22 at 19:17
-
I'm not arguing they should do this. And it was reasonable to use `int` historically (especially what with `stdint.h` types not existing until after all this legacy code was written). But you were never *supposed* to rely on it being a particular size, people just did, and now we're stuck. If history could be rewritten, one reasonable solution to your "it eats too much memory/disk I/O" objection would be "when you define structures and store to/from disk, use explicitly sized types", with `int` existing for "here's the biggest cheap integer type for math ops", but I lack a time machine. :-) – ShadowRanger Jun 08 '22 at 19:21
-
@ShadowRanger I'm not sure it'd even break anything here; those programs would probably never let those `int`s exceed 2^32. – apple apple Jun 10 '22 at 15:05
-
@appleapple: You're an optimist. :-) They'd rely on the overflow behavior (legal and predictable for `unsigned` at least). Or rely on the `int` size and alignment to align the members of unions to allow access by different union members in ways that violate the C spec but are legal on basically every target in the world (e.g. `union { char[8]; struct { int[2]; } }`, reading raw bytes to the `char` array then extracting the `int`s). Or rely on it for serializing `int` in network protocols/disk I/O. Or using OS APIs with fixed size types that happen to be the same as `int` when it's 32 bits. – ShadowRanger Jun 10 '22 at 15:11
-
@appleapple: I've seen code that was originally written to run on non-IEEE 754 floating point machines that assumed it could do a bunch of math with `double`s that ends in a logical `int`, and just cast to `int` rather than rounding. On the original machine it was written for, the calculations erred high, so it would compute `16.000000000000004` and cast to `16`; when you tried to port it to an IEEE 754 compliant machine, it got `15.999999999999994` and cast to `15`. Then did more scaling math on it and ended up with an off-by-140 error. People rely on implementation details *ALL THE TIME*. – ShadowRanger Jun 10 '22 at 15:16
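A minimal sketch of the truncation pitfall that anecdote describes, assuming an IEEE 754 `double` (the constant below is just an illustrative stand-in for a result that is "logically 16" but lands slightly below it after accumulated error):

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double nearly16 = 15.999999999999994;  // hypothetical accumulated-error result

    std::cout << static_cast<int>(nearly16) << '\n';  // 15: a cast truncates toward zero
    std::cout << std::llround(nearly16) << '\n';      // 16: rounding to nearest gives the intended value
}
```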
2 Answers
It's not guaranteed.
If you assume `double` is in fact implemented as an IEEE 754 binary64 type (it should be), then it has a significand precision of 53 bits (that's the number of bits of integer precision it provides). Once you exceed 53 bits, you'll start losing data: initially it can only represent every other integer value, then every fourth value, then every eighth, and so on, as it relies more and more on the exponent to scale the value.
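A minimal sketch of that boundary, assuming an IEEE 754 binary64 `double`:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // 2^53: the last point below which every integer is exactly representable.
    const double d = static_cast<double>(std::int64_t{1} << 53);

    std::cout << std::boolalpha;
    std::cout << (d + 1.0 == d) << '\n';  // true: 2^53 + 1 rounds back to 2^53
    std::cout << (d + 2.0 == d) << '\n';  // false: 2^53 + 2 is representable (every other integer)
}
```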
On most systems, `int` is 32 bits or fewer, so a single `int` addition can't exceed the representational ability of the `double`. But there are systems on which `int` is 64 bits, and on those systems, even without addition getting involved, a large `int` value can exceed the representational precision of a `double`; you'll get something close to the right value, but it won't be exactly correct.
In practice, when this situation arises, you probably want to use `int64_t` or the like; `double` will be more portable (there's no guarantee a given system implements a 64-bit integer type), but it may be slower (on systems without a floating point coprocessor) and it will be inherently less precise than a true 64-bit integer type.
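A sketch of that "close but not exact" round trip, assuming a 64-bit `int64_t` and an IEEE 754 binary64 `double` (the value is arbitrary, chosen only because it needs more than 53 bits):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    const std::int64_t original = 1234567890123456789;   // needs about 61 bits
    const double d = static_cast<double>(original);       // rounds to a nearby representable value
    const std::int64_t roundTripped = static_cast<std::int64_t>(d);

    std::cout << original << " -> " << roundTripped
              << " (difference: " << (original - roundTripped) << ")\n";  // difference is nonzero
}
```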
I suspect Bjarne Stroustrup's comment dates back to the days when virtually all systems had native integer handling of 32 bits or fewer, so:
- Not all of them provided a 64-bit integer type at all, and
- When they did provide a 64-bit integer type, it was implemented in software, with the compiler performing several 32-bit operations on paired 32-bit values to produce the equivalent of a single 64-bit operation, making it much slower than a single floating point operation (assuming the system had a floating point coprocessor)
That sort of system still exists today mostly in the embedded development space, but for general purpose computers, it's pretty darn rare.
Alternatively, the computation in question may be one for which the result is likely to be huge (well beyond what even a 64-bit integer can hold) and some loss of precision is tolerated; an IEEE 754 binary64 type can technically represent values as high as 2^1023 (the gaps between representable values just get nuts at that point), and could usefully store the result of summing a bunch of 32-bit integers into totals that run into the high-two-digit or low-three-digit bit counts, provided each addend is large enough not to be lost entirely to precision loss (what counts as "large enough" depends on the magnitude of the result).
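For instance, a sketch (assuming IEEE 754 binary64) where the exact total overflows a 64-bit integer, but a `double` holds a close approximation without any trouble:

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    const std::int64_t big = std::numeric_limits<std::int64_t>::max() / 2;  // ~4.6e18

    double approx = 0.0;
    for (int i = 0; i < 1000; ++i)
        approx += static_cast<double>(big);  // exact total ~4.6e21, far beyond INT64_MAX

    std::cout << approx << '\n';  // roughly 4.61169e+21: slightly imprecise, but well within range
}
```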

-
FWIW, any system that has `long long` has to have at least a 64 bit integer type. – NathanOliver Jun 08 '22 at 18:50
-
I don't have the book in front of me so I'm missing context, but I would imagine Stroustrup's comment simply refers to the fact that the exponent allows you to represent numbers *much* larger than a standard 32-bit integer would allow. The whole point of using floating point representation. – Jon Reeves Jun 08 '22 at 18:55
-
@NathanOliver: Sure. [`long long` didn't officially exist prior to C++11 AFAICT](https://stackoverflow.com/a/6462453/364696) (it was supported by many compilers, and part of C99, but omitted from C++98/C++03), so if that line from Stroustrup predated common availability of C++11 compilers, `long long` wouldn't have been an option. – ShadowRanger Jun 08 '22 at 18:55
-
@JonReeves: Yes, if the goal is "store huge numbers imprecisely", that's another reasonable use case. I'll edit to clarify. – ShadowRanger Jun 08 '22 at 18:56
> Are `double`s guaranteed to be able to hold the largest `int`s?
No, primarily because the sizes and particular features of `double` and `int` are not guaranteed by the C++ standard.
The format commonly used for `double` is IEEE-754 “double precision,” also called binary64. The set of finite numbers this format represents is { M·2^e for integers M and e such that −2^53 < M < +2^53 and −1074 ≤ e ≤ 971 }. The largest set of consecutive integers in this set is the integers from −2^53 to +2^53, inclusive. 2^53 + 1 is not representable in this format.
Therefore, if `int` is 54 bits or fewer, so it has one sign bit and 53 or fewer value bits, every `int` value can be represented as a `double` in this format. If `int` is wider than 54 bits, it can represent 2^53 + 1 but this `double` format cannot.
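A small sketch of that M·2^e structure, assuming the IEEE 754 binary64 format described above: an integer needing 54 significant bits is not representable, while a much larger integer with few significant bits is.

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double two53 = std::ldexp(1.0, 53);  // exactly 2^53 (M = 1, e = 53)
    const double two60 = std::ldexp(1.0, 60);  // exactly 2^60 (M = 1, e = 60)

    std::cout << std::boolalpha;
    std::cout << (two53 + 1.0 == two53) << '\n';            // true: 2^53 + 1 is not representable
    std::cout << (two60 == 1152921504606846976.0) << '\n';  // true: 2^60 is, despite being far larger
}
```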

-
And generally `int` is 32-bit, or 16-bit on smaller CPUs. Check https://en.cppreference.com/w/cpp/types/numeric_limits, specifically `radix` and `digits`. Using those two you can assert that `int` fits in a `double`, or any other combination. – Goswin von Brederlow Jun 08 '22 at 19:40
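A sketch of the compile-time check that comment suggests, using `std::numeric_limits<T>::radix` and `digits`; it relies only on those standard members and fails to compile if the assumption does not hold on the target:

```cpp
#include <limits>

// digits for double is counted in the floating-point radix, so confirm the
// radix is 2 before comparing against int's binary value bits.
static_assert(std::numeric_limits<double>::radix == 2,
              "this check assumes a binary floating-point format");
static_assert(std::numeric_limits<int>::digits <= std::numeric_limits<double>::digits,
              "int does not fit exactly in double on this implementation");

int main() {}
```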