I've recently come across the problem that Visual C++ does not seem to be IEEE 754 compliant, but instead uses a subnormal representation. That is, double-precision floats in it do not appear to have the usual representation of 1 sign bit, 11 exponent bits and 52 explicitly stored significand bits; see below.
As gcc and clang are, however, compliant, and consistent cross-platform behaviour is highly desirable, I would like to know whether it is possible to force Visual C++ to use the normal representation. Alternatively, making gcc and clang use the subnormal representation would of course also solve the problem.
The issue of the differing double representations can be reproduced in Visual C++, gcc and clang using the following code:
#include <iostream>
#include <string>

int main()
{
    try {
        // 8.0975711886543594e-324 is below DBL_MIN, i.e. in the
        // subnormal range of an IEEE 754 binary64 double.
        std::stod("8.0975711886543594e-324");
        std::cout << "Subnormal representation.";
    } catch (std::exception& e) {
        std::cout << "Normal representation.";
    }
    return 0;
}
Is a representation specification that produces consistent behaviour in all three cases at all possible?
Edit: As geza pointed out, this appears to be a problem in the different implementations of std::stod, which turns the question into: is there any way to make std::stod behave consistently without having to implement a separate wrapper for it?