For the whole day I've been fighting with something I believed was a bug, but then (with some nasty printf debugging) it turned out to be weird behaviour of the exp math function ... maybe there's something I'm missing here, because I really don't get it.
Could you please explain the following behaviour to me:
#include <cmath>
#include <cstdio> // for printf

int main() { printf( "%lf\n", exp( -100.0 ) ); } // <--- I know this should be %e, see the edit below
GCC 4.2.1 (both on MinGW and Mac OS X, both 64-bit):
0.000000
MSVC 2012 (64-bit):
3.720075....e-44
(where '....' are decimals I don't remember and can't test right now, since I'm at home on my Mac).
GCC flags:
g++ -Wall -g -pg *.cpp -o mytest
MSVC with default flags.
What am I missing here? Isn't exp a standard function with standard precision?
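For reference, here's a minimal check that doesn't depend on the decimal formatting at all (assuming both runtimes support the C99 %a hex-float conversion, which I haven't verified on MSVC 2012):

#include <cmath>
#include <cstdio>

int main() {
    double v = exp( -100.0 );
    printf( "%e\n", v ); // scientific notation, should be around 3.720076e-44
    printf( "%a\n", v ); // exact bit pattern as a hex float, safe to diff across compilers
}

If the %a lines match, the difference was purely in the formatting; if they don't, the two exp implementations really return different bits.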
EDIT 2
OK, this seems to be related to MSVC's default floating-point model (/fp:precise) ... any chance to get the same model with GCC? I'd like to release the project as open source, but there's no point in it if it's not going to work with the most popular open-source compiler due to the lack of a precise floating-point model.
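From what I've read (not verified, and I'm not even sure all of these flags exist in GCC 4.2.1), the closest thing on the GCC side would be forcing SSE math and strict stores, something like:

g++ -msse2 -mfpmath=sse -ffloat-store -Wall -g -pg *.cpp -o mytest

(On 64-bit targets SSE math should already be the default, so I'm not sure this actually changes anything in my setup.)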
EDIT
OK, so I was using the wrong format for the printf call (I should've used %e, or %.50lf, to print scientific notation or enough decimals), but the point is that the whole project doesn't give me the expected results (I'm developing an SVM with the SMO algorithm and a Gaussian kernel) when compiled with GCC, while it works flawlessly when compiled with MSVC (with no code changes whatsoever, of course). What could cause that? A rough sketch of the kernel evaluation is below.
(Picture of the same source code running on a Mac and in a Windows 7 VM.)
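For context, the kernel evaluation at the heart of the project boils down to roughly this (names and data layout simplified, this is not my exact code):

#include <cmath>

// Roughly the Gaussian (RBF) kernel the SMO loop calls; 'gamma' is the usual kernel width parameter.
double kernel( const double *x, const double *y, int n, double gamma )
{
    double d2 = 0.0;
    for ( int i = 0; i < n; ++i ) {
        double d = x[i] - y[i];
        d2 += d * d;               // squared euclidean distance
    }
    return exp( -gamma * d2 );     // any tiny difference here compounds over the SMO iterations
}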