I am trying to reproduce some algorithms from an external image processing library, and I have found a strange floating-point subtraction precision difference.
In the original library (running in a 32-bit debug configuration) there is this piece of code:
double d1 = *im1 - m_Centroids[j][0];
My code is currently identical (also running in a 32-bit debug configuration):
double d1 = *im1 - m_Centroids[j][0];
At some point during program execution (stopped in the debugger), those variables have the following values in the Visual Studio watch window:
Original code:
*im1 0.113626622 float
double(*im1) 0.11362662166357040 double
m_Centroids[j][0] 25.6416969 float
double(m_Centroids[j][0]) 25.641696929931641 double
*im1 - m_Centroids[j][0] -25.5280704 float
double(*im1 - m_Centroids[j][0]) -25.528070449829102 double
d1 -25.528070308268070 double
(Note the difference between the last two lines.)
My code:
*im1 0.113626622 float
double(*im1) 0.11362662166357040 double
m_Centroids[j][0] 25.6416969 float
double(m_Centroids[j][0]) 25.641696929931641 double
*im1 - m_Centroids[j][0] -25.5280704 float
double(*im1 - m_Centroids[j][0]) -25.528070449829102 double
d1 -25.528070449829102 double
I've also run the original code and my code simultaneously in separate Visual Studio instances on the same 64-bit computer.
That difference causes my program to produce slightly different final results than the original.
What is the cause of such a difference in the subtraction, considering both programs run on the same machine with the same configuration?