Well, IEEE-754 double-precision has 53 bits of precision. The only numbers that can be represented exactly, then, are:
- rational numbers,
- with a terminating binary representation,
- whose significand fits in 53 bits or fewer.
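To see why, for example, 0.5 falls inside that set while 0.1 falls outside it, a minimal sketch: printing the nearest doubles with the round-trippable "G17" format exposes the difference.

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        // 0.5 = 2^-1 terminates in binary, so the nearest double is exact.
        Console.WriteLine( 0.5.ToString( "G17", CultureInfo.InvariantCulture ) );
        // prints 0.5

        // 0.1 repeats in binary (0.0001100110011...), so the nearest double
        // is only an approximation; "G17" exposes the excess digits.
        Console.WriteLine( 0.1.ToString( "G17", CultureInfo.InvariantCulture ) );
        // prints 0.10000000000000001
    }
}
```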
So...to figure out whether or not a given decimal value can be exactly represented, the following should suffice:
public bool IsExactlyRepresentableAsIeee754Double( decimal value )
{
    // An IEEE 754 double has 53 bits of precision (52 bits of which are explicitly stored).
    const decimal ieee754MaxBits = 0x001FFFFFFFFFFFFF ;
    // move the decimal point right until the decimal's absolute value is integral,
    // counting the powers of 10 we scaled by
    decimal n = Math.Abs( value ) ;
    int scale = 0 ;
    while ( Decimal.Floor( n ) != n )
    {
        n *= 10m ;
        ++scale ;
    }
    // the value is n / 10^scale = ( n / 5^scale ) / 2^scale: it is a dyadic
    // rational (hence a candidate for exact representation) only if 5^scale divides n
    for ( int i = 0 ; i < scale ; ++i )
    {
        if ( n % 5m != 0m ) return false ;
        n /= 5m ;
    }
    // powers of 2 only shift the exponent, and decimal's range fits comfortably
    // inside double's exponent range, so strip them and check that the
    // remaining odd part of the significand fits in 53 bits
    while ( n != 0m && n % 2m == 0m ) n /= 2m ;
    return n <= ieee754MaxBits ;
}
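As a quick illustration of that 53-bit cap (a standalone sketch, separate from the routine above): 2^53 + 1 is the first integer a double cannot hold, so converting it silently rounds to its neighbor.

```csharp
using System;

class Program
{
    static void Main()
    {
        // 2^53 is the last point at which every integer is exactly
        // representable as a double; 2^53 + 1 needs a 54th bit, so the
        // conversion below rounds it down to 2^53.
        double a = 9007199254740992.0 ;          // 2^53, exact
        double b = (double) 9007199254740993m ;  // 2^53 + 1, rounds to 2^53
        Console.WriteLine( a == b ) ;            // True
    }
}
```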
There is a caveat here, however: decimal tracks trailing fractional zeros (for details, see below), so 1m and 1.00m have different binary representations. A round-trip like decimal x = (decimal)(double)1.00m ; will therefore leave x with the same binary representation as 1m rather than that of 1.00m.
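A small sketch of that caveat in action, using Decimal.GetBits() (described below) to expose the stored scale:

```csharp
using System;

class Program
{
    static void Main()
    {
        decimal x = (decimal) (double) 1.00m ; // round-trip through double
        // the stored scale (bits 16-23 of the flags word) is 2 for 1.00m
        // but 0 after the round-trip: the trailing zeros are gone
        Console.WriteLine( ( Decimal.GetBits( 1.00m )[3] >> 16 ) & 0xFF ) ; // 2
        Console.WriteLine( ( Decimal.GetBits( x )[3] >> 16 ) & 0xFF ) ;     // 0
        Console.WriteLine( x ) ; // prints 1, not 1.00
    }
}
```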
The internal representation of a decimal is well documented in the .NET CLR documentation and specs. Its backing store is readily available via the method Decimal.GetBits()
and consists of four 32-bit words, specified as follows:
The binary representation of a Decimal number consists of a 1-bit sign,
a 96-bit integer number, and a scaling factor used to divide the
integer number and specify what portion of it is a decimal fraction.
The scaling factor is implicitly the number 10, raised to an exponent
ranging from 0 to 28.
bits
is a four-element array of 32-bit signed integers.
bits[0], bits[1], and bits[2] contain the low, middle, and
high 32 bits of the 96-bit integer number.
bits [3] contains the scale factor and sign, and consists of [the]
following parts:
- Bits 0 to 15, the lower word, are unused and must be zero.
- Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.
- Bits 24 to 30 are unused and must be zero.
- Bit 31 contains the sign; 0 meaning positive, and 1 meaning negative.
A numeric value might have several possible binary representations; all are
equally valid and numerically equivalent. Note that the bit representation
differentiates between negative and positive zero. These values are treated
as being equal in all operations.
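As a short illustrative sketch, here is that layout decoded by hand for the arbitrary value -1.5m:

```csharp
using System;

class Program
{
    static void Main()
    {
        // -1.5m is stored as the 96-bit integer 15, scale 1, sign bit set:
        // -1.5 = -( 15 / 10^1 )
        int[] bits = Decimal.GetBits( -1.5m ) ;
        int  lo       = bits[0] ;                          // low 32 bits of the integer
        int  scale    = ( bits[3] >> 16 ) & 0xFF ;         // bits 16..23 of the flags word
        bool negative = ( bits[3] & int.MinValue ) != 0 ;  // bit 31
        Console.WriteLine( $"{lo} / 10^{scale}, negative={negative}" ) ;
        // prints 15 / 10^1, negative=True
    }
}
```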
So you probably could get clever and speed this up with a little bit twiddling. The kicker is that decimal
tracks trailing zeroes, so the binary representations of 1m
and 1.00m
differ: 1m
is represented as +1 scaled by 10^0; 1.00m
is represented as +100 scaled by 10^2. This complicates things a bit.
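Those two representations can be confirmed directly; a minimal check using Decimal.GetBits():

```csharp
using System;

class Program
{
    static void Main()
    {
        int[] one        = Decimal.GetBits( 1m ) ;    // +1   scaled by 10^0
        int[] onePointOh = Decimal.GetBits( 1.00m ) ; // +100 scaled by 10^2
        Console.WriteLine( one[0] ) ;                         // 1
        Console.WriteLine( ( one[3] >> 16 ) & 0xFF ) ;        // scale 0
        Console.WriteLine( onePointOh[0] ) ;                  // 100
        Console.WriteLine( ( onePointOh[3] >> 16 ) & 0xFF ) ; // scale 2
        Console.WriteLine( 1m == 1.00m ) ; // True: numerically equal all the same
    }
}
```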