
I have 3 doubles which I multiply. I can do this in 2 different ways:

a. -1.0093674437739981 * 0.05521692677702658 * 0.04865764623936961

b. -1.0093674437739981 * (0.05521692677702658 * 0.04865764623936961)

The answers I get are:

a. -0.0027118934413746733

b. -0.002711893441374673

I have lost 1 digit in the second answer.

How is this possible, and is there a way to avoid it without using `BigDecimal`?
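
A minimal snippet to reproduce the two groupings:

```java
public class AssociativityDemo {
    public static void main(String[] args) {
        double x = -1.0093674437739981;
        double y = 0.05521692677702658;
        double z = 0.04865764623936961;

        // Case a: evaluated left to right, i.e. (x * y) * z
        System.out.println(x * y * z);
        // Case b: the right-hand product is formed first
        System.out.println(x * (y * z));
    }
}
```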

The context is that I have implemented a backpropagation algorithm in 2 different ways:

  • The first algorithm goes recursively through the network and multiplies all the terms (weight * sigmoid-deriv * error-deriv).

  • The second algorithm uses matrices. It first calculates the backpropagation in the first layer and then multiplies it by the weight in the second layer (weight * (sigmoid-deriv * error-deriv)). When I do this, I lose precision (as described above).

    This implementation does not work as well as the first implementation.

Update:

I have found a solution for the neural network. If the arithmetic introduces an error, storing the complete mantissa of a weight (stored so that the weights can be read back in for further training) is not a wise thing to do. It is better to limit the mantissa to eight digits in order to avoid reproducing the error the next time. Now my neural net can do multiple epochs instead of one.
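
For illustration, one way to limit a stored weight to eight significant digits (the helper name `roundTo8Significant` is just for this sketch, not from my actual code):

```java
import java.util.Locale;

public class WeightRounding {
    // Limit a weight to eight significant digits before storing it.
    // "%.8g" formats with eight significant digits; parsing the string
    // back gives the nearest double to that rounded value.
    static double roundTo8Significant(double w) {
        return Double.parseDouble(String.format(Locale.ROOT, "%.8g", w));
    }

    public static void main(String[] args) {
        System.out.println(roundTo8Significant(-0.0027118934413746733));
    }
}
```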

wyp

Comments:
    Floating point inaccuracies are pretty much caused by hardware, so no, there's no way around them. Use `BigDecimal`. – daniu Sep 28 '17 at 10:46
  • @daniu While that is technically true, there is [a whole field of computer science/mathematics](https://softwareengineering.stackexchange.com/questions/220995/how-to-identify-unstable-floating-point-computations) devoted to minimizing the loss of accuracy. Knowing how to keep your tiny errors from growing into enormous errors is crucial in many areas. – biziclop Sep 28 '17 at 10:55

1 Answer


Doubles are stored as a combination of mantissa and exponent, where you have to think of the mantissa as an integer representing the most significant bits of the number, and the exponent as information about how far the (binary) point is shifted.
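
For illustration, you can inspect these fields directly with `Double.doubleToLongBits` (the example value is one of the factors from the question):

```java
public class DoubleBits {
    public static void main(String[] args) {
        double d = 0.05521692677702658;
        long bits = Double.doubleToLongBits(d);

        long sign     = bits >>> 63;                // 1 sign bit
        long exponent = (bits >>> 52) & 0x7FFL;     // 11 exponent bits, biased by 1023
        long mantissa = bits & 0xFFFFFFFFFFFFFL;    // 52 mantissa bits, implicit leading 1

        System.out.printf("sign=%d exponent=%d mantissa=0x%013X%n",
                sign, exponent - 1023, mantissa);
    }
}
```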

That said, the imprecision of an arithmetic operation tends to be larger when the two operands deviate more in their exponents (e.g. 1000.123 * 0.001000123 will hurt the precision more than 1000.123 * 1000.123).

In your case (a) you are multiplying roughly -1 with roughly 0.055; in case (b) you are multiplying 0.055 with 0.048, which gives you roughly 0.0026, which in turn is farther away from -1, so the imprecision of multiplying -1 with 0.0026 is larger.
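
To get a feel for the size of a single rounding step at that magnitude, `Math.ulp` gives the gap between adjacent doubles; each multiplication is correctly rounded to within half an ulp of its result:

```java
public class UlpDemo {
    public static void main(String[] args) {
        // One ulp (unit in the last place) at the magnitude of the result.
        System.out.println(Math.ulp(-0.0027118934413746733)); // 4.336808689942018E-19
    }
}
```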

You see where this is going, right? Fiddling at that level will not give you a faster or better solution than working with `BigDecimal` in the first place.

Jonathan