Here's a question with answers on "Cross Platform Floating Point Consistency", but it talks exclusively about runtime consistency (of IEEE floating point).
I'm interested in compile-time consistency, specifically:
If I have a specific floating-point number and want to write a floating-point literal in my source code such that every compiler targeting an IEEE-754 architecture compiles it to the same bit pattern, i.e. exactly that float (or double): what do I need to do? (There is a concrete sketch of what I mean after the list below.)
- A certain number of digits?
- The exact decimal number for that bit pattern (rather than any decimal number that maps to that binary pattern)?
- Or?
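
To make the question concrete, here's roughly the kind of check I have in mind. This is just an illustrative sketch: it assumes C++20 for `std::bit_cast`, and the literal `0.1` is an arbitrary example, not a value I specifically care about.

```cpp
#include <bit>
#include <cstdint>
#include <cstdio>

int main() {
    // The literal whose compile-time conversion I care about.
    double d = 0.1;

    // Reinterpret the double's object representation as a 64-bit integer
    // so the exact bit pattern can be printed and compared across compilers.
    std::uint64_t bits = std::bit_cast<std::uint64_t>(d);

    // On every IEEE-754 compiler I'd expect 0x3FB999999999999A here,
    // i.e. the nearest double to 0.1 -- but is that actually guaranteed?
    std::printf("%016llX\n", static_cast<unsigned long long>(bits));
}
```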
(I know there has been controversy for years over what it takes to round-trip floating-point values from IEEE format to a decimal representation and back, and I don't know whether that is or is not an issue for floating-point literals and the compilers (and the C++ standard).)
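
For reference, here is my (possibly mistaken) understanding of the round-trip issue at runtime: writing a double with `std::numeric_limits<double>::max_digits10` (17) significant digits is supposed to produce text that parses back to the identical bits. What I don't know is whether the compiler's conversion of a literal gives the same guarantee. Again a sketch, assuming C++20 for `std::bit_cast`:

```cpp
#include <bit>
#include <cstdint>
#include <iostream>
#include <limits>
#include <sstream>

int main() {
    double original = 0.1;

    // Write the value with max_digits10 significant digits -- enough that
    // the decimal text uniquely identifies this particular double.
    std::ostringstream out;
    out.precision(std::numeric_limits<double>::max_digits10);  // 17 for double
    out << original;

    // Read the text back and compare the bit patterns.
    double round_tripped = 0.0;
    std::istringstream(out.str()) >> round_tripped;

    std::cout << out.str() << " round-trips "
              << (std::bit_cast<std::uint64_t>(original) ==
                      std::bit_cast<std::uint64_t>(round_tripped)
                  ? "exactly" : "inexactly")
              << '\n';
}
```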