2

I am working on floating-point determinism, and having already studied so many surprising potential causes of non-determinism, I am starting to get paranoid about copying floats:

Does anything in the C++ standard or in general guarantee me that a float lvalue, after being copied to another float variable or when used as a const-ref or by-value parameter, will always be bitwise equivalent to the original value?

Can anything cause a copied float to be bitwise inequivalent to the original value, such as changing the floating-point environment or passing it into a different thread?

Here is some sample code based on what I use to check for equivalence of floating-point values in my test cases. This one will fail because it expects FE_TONEAREST:

#include <cfenv>
#include <cstdint>

// MSVC-specific pragmas for floating point control
#pragma float_control(precise, on)
#pragma float_control(except, on)
#pragma fenv_access(on)
#pragma fp_contract(off)

// May make a copy of the floats
bool compareFloats(float resultValue, float comparisonValue)
{
    // I was originally doing a bit-wise comparison here but I was made
    // aware in the comments that this might not actually be what I want
    // so I only check against the equality of the values here now
    // (NaN values etc. have to be handled extra)
    bool areEqual = (resultValue == comparisonValue);

    // Additional outputs if not equal
    // ...

    return areEqual;
}

int main()
{
    std::fesetround(FE_TOWARDZERO);
    float value = 1.f / 10;
    float expectedResult = 0x1.99999ap-4;

    compareFloats(value, expectedResult);
}

Do I have to be worried that if I pass a float by-value into the comparison function it might come out differently on the other side, even though it is an lvalue?
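For contrast, the bit-exact comparison mentioned above could be sketched like this (`bitEqualFloats` is a name invented for illustration; `std::memcpy` avoids the undefined behavior of type-punning through a pointer cast):

```cpp
#include <cstdint>
#include <cstring>

// Compare the raw bit patterns of two floats. Note the asymmetry with
// value comparison: +0.0f and -0.0f compare unequal here, while two
// NaNs with identical payloads compare equal.
bool bitEqualFloats(float a, float b)
{
    std::uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);  // well-defined way to read the bits
    std::memcpy(&ub, &b, sizeof ub);
    return ua == ub;
}
```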

Ident
  • Are you interested in the behavior of NaNs regarding bitwise stability? – Max Langhof Aug 05 '19 at 09:20
  • Also, I recommend you read https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/ if you haven't already. – Max Langhof Aug 05 '19 at 09:22
  • The floating point series on randomascii and the gafferongames articles are my main sources, as well as some sprinkled information in blogs here and there. Regarding NaNs: My bit-wise checks should work on those if the NaN is produced through the same mechanism, shouldn't they? – Ident Aug 05 '19 at 09:24
  • To be clear, this is not something governed by the C++ standard but by your compiler/platform, its adherence to IEEE-754 and what the exact effects of available options are. Compiling with `/fp:fast` instead of `/fp:precise` does not make the compiler disobey the C++ standard because dictating floating point handling is outside of the scope of C++. – Max Langhof Aug 05 '19 at 10:01
  • You could negate the issue by passing the floats by reference – M.M Aug 05 '19 at 10:23
  • I considered passing them by reference but would only want to do so after a proof that this is needed.. Const-ref would imo be better at showing the intention (unchanged input variable), but very very annoyingly in C++ it allows the compiler to make a copy of the variable (even calling the conversion constructor) so it would again defeat our goal. – Ident Aug 05 '19 at 10:35
  • Why do you care about equivalence of the bits representing the number instead of equivalence of the value? In other words, if the represented value does not change, why do you care about the bits? – Eric Postpischil Aug 05 '19 at 11:41
  • @EricPostpischil should I not care? I could not find information on how other people test for floating point determinism issues, so I thought the best check I can do is a bitwise one to ensure it also works the same way with other compilers, but maybe you are right and it would be better to compare only against float values, because NaNs for example are implementation-defined when it comes to their bits, and in their case it would be best to use the NaN check functions. – Ident Aug 05 '19 at 11:59
  • @Ident: It seems your concern about determinism is ensuring that floating-point arithmetic gets correct results, or at least results within specification or at least that are the same when calculated by different means. For this purpose, you should care only about the represented values and not the bits that represent them, except for any payload data in NaNs. In some floating-point arithmetic systems, it is perfectly normal for a value to be represented in multiple ways, such as 9•10^-1 and 90•10^-2 for .9. – Eric Postpischil Aug 05 '19 at 12:08
  • @EricPostpischil this makes a lot of sense, I will edit my question and proceed comparing floats using the equal operator. – Ident Aug 05 '19 at 12:39
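The value-based check the comments converge on could be sketched like this (`valuesMatch` is a name invented here; it treats any two NaNs as matching, which may or may not be what a given test suite wants):

```cpp
#include <cmath>

// Compare floats by value, with NaNs handled separately as the
// comments suggest. Plain == would report any NaN as unequal to
// everything, including itself.
bool valuesMatch(float a, float b)
{
    if (std::isnan(a) || std::isnan(b))
        return std::isnan(a) && std::isnan(b);  // both NaN counts as a match
    return a == b;  // value comparison; +0.0f and -0.0f match here
}
```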

2 Answers

3

No, there is no such guarantee.

Subnormals, non-normalised floating-point values, and NaNs are all cases where the bit patterns may differ.

I believe that a negative zero is allowed to become a positive zero on assignment, although IEEE 754 disallows that.
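On an IEEE-754-conformant implementation, the sign of zero does survive a plain copy, which is easy to check (a minimal sketch; `copyOf` is a name invented here):

```cpp
#include <cmath>

// A by-value round trip: the argument is copied into the parameter
// and copied again out of the return value.
float copyOf(float x)
{
    return x;
}
// On IEEE-754 hardware, std::signbit(copyOf(-0.0f)) stays true even
// though copyOf(-0.0f) == 0.0f compares equal by value.
```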

Bathsheba
  • There was a question recently where it turned out the problem was that assigning a SNaN to another float changed it to a QNaN. I can't find that now though (maybe OP deleted it). – M.M Aug 05 '19 at 09:59
  • Negative and positive zero must be preserved and handled reliably for IEEE conformant floating-point. – Deduplicator Aug 05 '19 at 10:06
  • @Deduplicator: Yes you're correct on that point, I've appended. – Bathsheba Aug 05 '19 at 10:06
  • @M.M I think you mean this https://stackoverflow.com/questions/27259234/c-nan-byte-representation-changes-during-assignment "under ARM, sNaNs get converted to qNaNs when used in operations" – Ident Aug 05 '19 at 10:29
  • @bathsheba "[...] are all cases where the bit patterns may differ." <- in which situation? After operations? After assignments? Between compilers? Between CPUs? – Ident Aug 05 '19 at 10:31
  • @Ident it wasn't that one – M.M Aug 05 '19 at 11:02
  • @M.M interesting, but I could reproduce this with my compiler! I requested an edit to the answer showing the repro – Ident Aug 05 '19 at 11:09
  • @Ident normally you should post your own answer rather than making substantial edits to another one – M.M Aug 05 '19 at 11:22
  • Seems like I was shooting a bit too quickly here. After fixing the typo I can't repro this anymore and unfortunately I can't undo my edit either, I will have to edit my edit out, sorry. – Ident Aug 05 '19 at 11:40
1

The C++ standard itself has virtually no guarantees on floating point math because it does not mandate IEEE-754 but leaves it up to the implementation (emphasis mine):

[basic.fundamental/12]

There are three floating-point types: float, double, and long double. The type double provides at least as much precision as float, and the type long double provides at least as much precision as double. The set of values of the type float is a subset of the set of values of the type double; the set of values of the type double is a subset of the set of values of the type long double. The value representation of floating-point types is implementation-defined. [ Note: This document imposes no requirements on the accuracy of floating-point operations; see also [support.limits]. — end note ]

The C++ code you write is a high-level abstract description of what you want the abstract machine to do, and it is fully in the hands of the compiler what this gets translated to. "Assignments" are an aspect of the C++ standard, and as shown above, the standard does not mandate the behavior of floating-point operations. To verify the statement "assignments leave floating-point values unchanged", your compiler would have to specify its floating-point behavior in terms of the C++ abstract machine, and I've not seen any such documentation (especially not for MSVC).

In other words: Without nailing down the exact compiler, compiler version, compilation flags etc., it is impossible to say for sure what the floating point semantics of a C++ program are (especially regarding the difficult cases like rounding, NaNs or signed zero). Most compilers differentiate between strict IEEE conformance and relaxing some of those restrictions, but even then you are not necessarily guaranteed that the program has the same outputs in non-optimized vs optimized builds due to, say, constant folding, precision of intermediate results and so on.

Case in point: For gcc, even with -O0, the program in question does not compute 1.f / 10 at run time but at compile time, so your rounding mode settings are ignored: https://godbolt.org/z/U8B6bc
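One way to see the difference at run time is to block the constant folding with `volatile` (a sketch; `divideTenthWith` is a name invented here, and the behavior described assumes an IEEE-754 target such as x86-64 where division honors the dynamic rounding mode):

```cpp
#include <cfenv>

// Force the division to happen at run time so the rounding mode matters.
float divideTenthWith(int roundingMode)
{
    volatile float num = 1.f;    // volatile reads defeat constant folding
    volatile float den = 10.f;
    std::fesetround(roundingMode);
    volatile float result = num / den;  // volatile store pins the division
                                        // between the two fesetround calls
    std::fesetround(FE_TONEAREST);      // restore the default mode
    return result;
}
```

With this, `divideTenthWith(FE_TOWARDZERO)` yields the truncated 0x1.999998p-4f rather than the round-to-nearest 0x1.99999ap-4f. Compiling with -frounding-math, as mentioned in the comments below, is the supported way to tell gcc that the rounding mode may change at run time.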

You should not be paranoid about copying floats in particular, but about compiler optimizations for floating point in general.

Max Langhof
  • Very helpful answer. I am currently trying to restrict the compiler as much as I can to make it compile my program's FP operations exactly as I want them to be, hence also the pragmas, fenv settings and my tests, so I hope I am on a good path. Btw to compile your example in a way it returns consistent results, GCC's -frounding-math flag seems to do the job, which is also recommended to be used whenever changing the FP rounding mode. – Ident Aug 05 '19 at 12:16