
I thought this conversion could not fail, so `boost::numeric_cast<double>(long)` should produce the same result as a regular cast.

Is this correct? If so, why is `boost::numeric_cast` slower than a regular cast? Is there some sort of check it is doing?

phuclv
jhourback
    Are you making assumptions about the size of these types? If both `long` and `double` are 64 bits (as they often are), then there's no way you can fit every `long` value plus fractional values into a `double`. – chris Mar 09 '21 at 19:49

3 Answers


From the documentation:

The lack of preservation of range makes conversions between numeric types error prone. This is true for both implicit conversions and explicit conversions (through static_cast). numeric_cast detects loss of range when a numeric type is converted, and throws an exception if the range cannot be preserved.

So it looks like boost's numeric casts do some extra checking, and can throw exceptions -- so they're not always the same as a "regular cast".

druckermanly
  • But the documentation also states: `The implementation must guarantee that for a conversion to a type that can hold all possible values of the source type, there will be no runtime overhead.` – jhourback Mar 09 '21 at 19:47
  • @jhourback If `double` and `long` are both 64 bits then a `double` cannot hold all possible values of a `long` exactly (it needs some bits for the sign and exponent), leaving only 53 bits for the significand. – Richard Critten Mar 09 '21 at 19:51
  • Basically, what that is saying can be also expressed like this: "if you cast a type `X` to a type `Y`, and all values of `X` can be held in `Y`, then the cast incurs no overhead" -- but that's not the case for all casts that `numeric_cast` can handle... for instance, casting a `uint8_t` to a `uint16_t` will incur no overhead, but casting the other direction likely will. – druckermanly Mar 09 '21 at 20:10
  • biggest integer that can be stored in a double -- https://stackoverflow.com/questions/1848700/biggest-integer-that-can-be-stored-in-a-double – J'e Mar 10 '21 at 12:52
```cpp
static_assert((1ull << 57) != 1 + (1ull << 57));                    // distinct as integers
static_assert((double)(1ull << 57) == (double)(1 + (1ull << 57)));  // equal after rounding
```

boost numeric cast would throw rather than round, as the above code does.

64-bit integers can represent some integers that 64-bit doubles cannot. 64-bit doubles spend bits on the exponent.

Yakk - Adam Nevraumont

The documentation says `boost::numeric_cast` is for casting without loss of range. The range of `long` is not necessarily narrower than that of `double`, so `boost::numeric_cast<double>(long)` may produce a different result from a regular cast. For example, an implementation could have a 96-bit `long` and a 72-bit `double` with a very small `double` range. There's nothing wrong with that; it's completely C++ compliant, because types in C++ don't have a fixed size, only a minimum size. See What does the C++ standard state the size of int, long type to be?

Besides, the documentation is probably a little unclear in that `boost::numeric_cast` also prevents conversion of a value not representable in the target type. For obvious reasons, a floating-point value doesn't use all of its bits for the significant part of the value; it trades precision for range. Therefore the precision of an N-bit `double` is smaller than N bits and is equal to `std::numeric_limits<double>::digits` (`DBL_MANT_DIG`) bits. For IEEE-754 binary64 the precision is 53 bits, so if a `long` value requires more than 53 bits of significand then obviously it can't be stored exactly in a `double`. For example, the bit pattern `0xABCDEF9876543210` = -6066929684898893296 has a 60-bit significand (the distance from its first to its last set bit). When converted to `double` it'll be rounded to -6066929684898892800. That change in value means `boost::numeric_cast` would fail. Some languages, like JavaScript, even have a `MAX_SAFE_INTEGER` constant for this

See also

phuclv