19

How to determine if a number, for example 1.577, can be precisely represented in float or double format?

By that I mean it is stored as exactly 1.577, not as 1.5769999999999999… etc.

EDIT: I'm looking for a tool where I can type a number and it will display its double/float representation. So it's not only a C#-related question.

apocalypse

3 Answers

28

You can use an online decimal to floating-point converter. For example, type in 1.577 and you get two indications that it is not exact:

1) The "Inexact" box is checked

2) It converts to 1.5769999999999999573674358543939888477325439453125 in double precision floating-point.

Contrast that to a number like 1.25, which prints as 1.25, and the "Inexact" box is NOT checked.

(That converter can also check single-precision numbers.)
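If you'd rather run the same check offline, here is a minimal C# sketch along the lines of what the converter does (my own code, not the converter's; the name ToExactString is made up). It pulls the exponent and significand out of the double's bits and long-divides to produce the exact decimal expansion, which always terminates. If the output differs from what you typed, the value is inexact:

using System;
using System.Numerics;
using System.Text;

static class ExactDouble
{
    // Exact decimal expansion of a finite double. Every finite double is
    // mantissa * 2^exponent, so the expansion always terminates.
    public static string ToExactString(double value)
    {
        if (double.IsNaN(value) || double.IsInfinity(value))
            throw new ArgumentOutOfRangeException(nameof(value));

        long bits = BitConverter.DoubleToInt64Bits(value);
        int biasedExponent = (int)((bits >> 52) & 0x7FF);
        long fraction = bits & 0xFFFFFFFFFFFFFL; // low 52 bits

        // Normal numbers carry an implicit leading 1 bit; subnormals do not.
        long mantissa = biasedExponent == 0 ? fraction : fraction | (1L << 52);
        int exponent = (biasedExponent == 0 ? 1 : biasedExponent) - 1075;

        // value = mantissa * 2^exponent = numerator / denominator
        BigInteger numerator = mantissa;
        BigInteger denominator = BigInteger.One;
        if (exponent > 0) numerator <<= exponent; else denominator <<= -exponent;

        BigInteger integerPart = BigInteger.DivRem(numerator, denominator, out BigInteger rem);
        var sb = new StringBuilder(bits < 0 ? "-" : "");
        sb.Append(integerPart);
        if (rem != 0)
        {
            sb.Append('.');
            while (rem != 0) // terminates: the denominator is a power of two
            {
                rem *= 10;
                sb.Append((char)('0' + (int)(rem / denominator)));
                rem %= denominator;
            }
        }
        return sb.ToString();
    }
}

ToExactString(1.577) prints the 1.5769999999999999573674358543939888477325439453125 shown above, while ToExactString(1.25) prints 1.25.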

Rick Regan
  • Whoever wrote that tool never learnt how to print floating point numbers. http://www.cs.indiana.edu/~dyb/pubs/FP-Printing-PLDI96.pdf – leppie Feb 20 '15 at 19:55
  • 2
    @leppie: I did. There is more than one algorithm that can print out a floating-point number correctly. – tmyklebu Feb 20 '15 at 20:25
  • 4
    @leppie: I don't think that's the point of the converter. It's aiming to show the *exact* value of the converted binary floating-point number (in decimal, amongst other possibilities) rather than the (usually) approximate value that Burger and Dybvig would give. Using Burger and Dybvig would be a bit pointless for the purpose of showing how a general value represented in decimal gets approximated when converting to binary. – Mark Dickinson Feb 20 '15 at 20:34
  • 14
@leppie: Just to echo what Mark said, the main purpose of the converter is NOT to print a rounded value like Burger and Dybvig, Steele and White, or David Gay. That's what gets newbies in trouble. They enter 0.1 and the machine prints back 0.1. Then they have no idea that internally, the value is not 0.1. That's why I wrote the tool. – Rick Regan Feb 21 '15 at 01:11
6

You already have answers on how to check definitively for exact representation. In addition, it is both feasible and useful to be able to eliminate many numbers without formal testing, and to check short decimal fractions on sight.

Suppose the decimal representation of your number terminates after N decimal places. For example, N is 3 for 1.577. Take the part after the decimal point and look at it as an integer: 577. If the number is exactly representable as a binary fraction, that part has to be an integer multiple of 5^N, because the fraction 577/10^N can only reduce to a power-of-two denominator if 5^N divides the numerator. 577 is not an integer multiple of 125, so 1.577 is not exactly representable.

If a number of reasonable magnitude has only a few significant digits in its decimal representation and it passes this test, it is exactly representable. For example, I know without computerized testing that 1.625 is exactly representable: 625 is a multiple of 5^3 = 125.
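For the record, this sight test is easy to mechanize. Here is a minimal C# sketch (the helper name PassesFiveToTheNTest is my own, and it assumes a plain digits-and-dot literal with no sign or exponent) that applies the 5^N divisibility check to a decimal string:

using System.Numerics;

static class QuickCheck
{
    // Necessary condition for a terminating binary fraction: the N digits
    // after the decimal point, read as an integer, must be a multiple of 5^N.
    // (Not sufficient by itself; the significand must also fit in 53 bits.)
    public static bool PassesFiveToTheNTest(string decimalLiteral)
    {
        int dot = decimalLiteral.IndexOf('.');
        if (dot < 0) return true; // no fractional part at all

        string frac = decimalLiteral.Substring(dot + 1);
        BigInteger digits = BigInteger.Parse(frac);           // "577" -> 577
        BigInteger fivePowN = BigInteger.Pow(5, frac.Length); // 5^N
        return digits % fivePowN == 0;
    }
}

PassesFiveToTheNTest("1.577") returns false (577 is not a multiple of 125), while PassesFiveToTheNTest("1.625") returns true (625 = 5 × 125).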

Patricia Shanahan
1

Well, IEEE-754 double-precision has 53 bits of precision. The only numbers that can be represented exactly, then, are:

  • rational numbers,
  • with a terminating binary representation,
  • whose significand fits in 53 bits or fewer

So...to figure out whether or not a given decimal value can be exactly represented, the following should suffice:

public bool IsExactlyRepresentableAsIeee754Double( decimal value )
{
  // An IEEE 754 double has 53 bits of precision (52 explicitly stored).
  const decimal maxSignificand = 0x001FFFFFFFFFFFFF ; // 2^53 - 1

  // move the decimal point right until the value is integral: value == n / 10^k
  decimal n = Math.Abs(value) ;
  int k = 0 ;
  while ( Decimal.Floor(n) != n )
  {
    n *= 10m ;
    k++ ;
  }

  // n / 10^k == n / (2^k * 5^k): the denominator must reduce to a pure power
  // of two, so n must absorb the factor 5^k (0.1 fails right here)
  for ( int i = 0 ; i < k ; i++ )
  {
    if ( n % 5m != 0m ) return false ;
    n /= 5m ;
  }

  // strip factors of two; what remains must fit in the 53-bit significand
  // (the exponent is never a problem: decimal's range is well inside double's)
  while ( n > maxSignificand && n % 2m == 0m )
  {
    n /= 2m ;
  }

  return n <= maxSignificand ;
}

There is a caveat here, however: decimal tracks trailing fractional zeros (for details, see below), so 1m and 1.00m have different binary representations. A round-trip like decimal x = (decimal)(double)1.00m ; will result in x having the same binary representation as 1m rather than that of 1.00m.

The internal representation of a decimal is well documented in the .NET documentation and specs. Its backing store is readily available via the method Decimal.GetBits() and consists of four 32-bit words, specified as follows:

The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28.

bits is a four-element array of 32-bit signed integers.

  • bits[0], bits[1], and bits[2] contain the low, middle, and high 32 bits of the 96-bit integer number.

  • bits[3] contains the scale factor and sign, and consists of the following parts:

    • Bits 0 to 15, the lower word, are unused and must be zero.
    • Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.
    • Bits 24 to 30 are unused and must be zero.
    • Bit 31 contains the sign; 0 meaning positive, and 1 meaning negative.

A numeric value might have several possible binary representations; all are equally valid and numerically equivalent. Note that the bit representation differentiates between negative and positive zero. These values are treated as being equal in all operations.

So you probably could get clever and speed this up with a little bit twiddling. The kicker is that decimal tracks trailing zeroes, so the binary representations of 1m and 1.00m differ — 1m is represented as +1 scaled by 10^0; 1.00m is represented as +100 scaled by 10^2. This complicates things a bit.
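You can watch that difference directly with Decimal.GetBits; a quick sketch (mine, not from the answer) that dumps the integer part and scale of both values:

using System;

class ScaleDemo
{
    static void Main()
    {
        foreach (decimal d in new[] { 1m, 1.00m })
        {
            int[] bits = decimal.GetBits(d);
            int scale = (bits[3] >> 16) & 0xFF; // bits 16-23 hold the power of ten
            Console.WriteLine($"{d}: integer part = {bits[0]}, scale = 10^{scale}");
        }
        // Prints:
        // 1: integer part = 1, scale = 10^0
        // 1.00: integer part = 100, scale = 10^2
    }
}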

Nicholas Carey
  • 2
    How is this supposed to work? It seems that your code will say that `0.1` is representable which is wrong. – nwellnhof Feb 22 '15 at 11:48