I have 3 doubles which I multiply. I can do this in 2 different ways:
a. -1.0093674437739981 * 0.05521692677702658 * 0.04865764623936961
b. -1.0093674437739981 * (0.05521692677702658 * 0.04865764623936961)
The answers I get are:
a. -0.0027118934413746733
b. -0.002711893441374673
I have lost 1 digit in the second answer.
How is this possible, and is there a way to avoid it without using BigDecimal?
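For reference, here is a minimal, self-contained snippet that reproduces the two groupings (the class name is only for this example); the comments show the values I get:

```java
public class MultiplicationOrder {
    public static void main(String[] args) {
        double w = -1.0093674437739981;
        double s = 0.05521692677702658;
        double e = 0.04865764623936961;

        double a = w * s * e;    // evaluated left to right: (w * s) * e
        double b = w * (s * e);  // the right-hand factors multiplied first

        System.out.println(a);      // -0.0027118934413746733
        System.out.println(b);      // -0.002711893441374673
        System.out.println(a == b); // false
    }
}
```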
The context is that I have implemented a backpropagation algorithm in 2 different ways:
The first algorithm goes recursively through the network and multiplies all the terms (weight * sigmoid-deriv * error-deriv).
The second algorithm uses matrices. It first calculates the backpropagation in the first layer and then multiplies it with the weight in the second layer (weight * (sigmoid-deriv * error-deriv)). When I do this, I lose precision (as described above).
This implementation does not work as well as the first implementation.
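To make the difference concrete, this is how I read the two implementations for a single connection, using the same three numbers as above (the class and variable names are only illustrative):

```java
public class BackpropGrouping {
    public static void main(String[] args) {
        // Illustrative names standing in for one connection of the network.
        double weight = -1.0093674437739981;
        double sigmoidDeriv = 0.05521692677702658;
        double errorDeriv = 0.04865764623936961;

        // Recursive version: the product grows term by term while walking
        // back through the network: (weight * sigmoidDeriv) * errorDeriv.
        double recursive = weight * sigmoidDeriv * errorDeriv;

        // Matrix version: the earlier layer first forms
        // sigmoidDeriv * errorDeriv, and only then is that multiplied by the
        // weight of the next layer: weight * (sigmoidDeriv * errorDeriv).
        double delta = sigmoidDeriv * errorDeriv;
        double matrix = weight * delta;

        System.out.println(recursive == matrix); // false: the last bit differs
    }
}
```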
Update:
I have found a solution for the neural network. If the arithmetic introduces an error, storing the complete mantissa of a weight (I store the weights so that I can read them back in for further training) is not a wise thing to do. It's better to limit the mantissa to eight digits, so the error is not carried into the next run. Now my neural net can do multiple epochs instead of just one.
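A sketch of what I mean by limiting the mantissa when writing out the weights (the helper and its rounding approach are just one possible way to do it):

```java
public class WeightRounding {
    // Round a value to the given number of significant decimal digits.
    static double roundToSignificantDigits(double value, int digits) {
        if (value == 0.0 || Double.isNaN(value) || Double.isInfinite(value)) {
            return value;
        }
        double magnitude = Math.ceil(Math.log10(Math.abs(value)));
        double factor = Math.pow(10, digits - magnitude);
        return Math.round(value * factor) / factor;
    }

    public static void main(String[] args) {
        double weight = -0.0027118934413746733;
        // Store only eight significant digits, so the rounding error in the
        // last bits is not read back in for the next training run.
        System.out.println(roundToSignificantDigits(weight, 8));
    }
}
```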