I have some cross-platform code I'm working with. On the Mac it's compiled with Clang; on Windows it's compiled with Visual C++.
One calculation is numerically sensitive, and a difference between the Mac and Windows results was triggering asserts. It turns out the two platforms return different results from acos, but I'm not clear why.
On both platforms, the input to acos is exactly -1.0f. In Visual C++, acos(-1.0f)
is 3.14159274. That's the value of pi as a float, which is what I'd expect.
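For reference, a minimal repro of what I'm seeing on the Windows side (the main() scaffolding here is mine, not the real code):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Assuming acos resolves to the float overload from <cmath>,
        // the whole computation stays in single precision.
        float value = std::acos(-1.0f);
        std::printf("%.9g\n", value); // prints 3.14159274 under Visual C++
        return 0;
    }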
But on macOS:
float value = acos(-1.0f);
...evaluates to 3.1415925. That's just enough of an accuracy difference to trigger issues in the code. I understand that acos can be imprecise in float, and that different compilers can ship different acos implementations. I'm just unclear why Clang seems to have trouble with such a simple acos input when Visual C++ doesn't. A float is capable of representing 3.14159274, but that's not the result I'm getting.
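To confirm how far apart the two results actually are, I dumped the bit patterns (the dump helper is just for illustration; the two values are hard-coded from what each platform printed):

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Print a float's decimal value alongside its raw IEEE-754 bits.
    static void dump(const char* label, float f) {
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);
        std::printf("%-9s %.9g (0x%08X)\n", label, f, bits);
    }

    int main() {
        float mac = 3.1415925f;   // result from Clang/macOS
        float win = 3.14159274f;  // result from Visual C++
        dump("macOS:", mac);      // 0x40490FDA
        dump("Windows:", win);    // 0x40490FDB
        // nextafterf steps exactly one representable float toward +infinity,
        // so this confirms the two results are a single ULP apart.
        dump("next:", std::nextafterf(mac, INFINITY));
        return 0;
    }

So the macOS value isn't garbage; it's exactly one representable float below pi.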
It is possible to get an accurate, Visual C++-aligned value out of Xcode's Clang with:
float value = (float)acos((double)-1.0f);
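Wrapped as a helper (the name is mine), this is what I've settled on for now:

    #include <cmath>

    // Compute acos in double, then round once back to float. The double
    // result is accurate to far less than half a float ULP, so for inputs
    // like -1.0f the final narrowing conversion yields pi correctly rounded
    // to float, matching what Visual C++ returns directly.
    inline float acos_consistent(float x) {
        return static_cast<float>(std::acos(static_cast<double>(x)));
    }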
So I can fix the issue by computing at higher precision and then narrowing the value back to float, which preserves the same rounding as Windows. I'm just looking for a justification as to why the extra precision is necessary when the VC++ compiler doesn't seem to have a precision issue. It could be a difference between the Clang/Xcode and VC++ math libraries as well; I just assumed that acos(-1.0) would be more settled across compilers. I couldn't find any difference in rounding modes (even though I wouldn't expect the rounding mode to matter here), and fresh projects in Xcode and Visual Studio show the same difference. Both machines are Intel.
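For completeness, this is roughly how I checked the rounding mode on both machines (a minimal sketch using <cfenv>; both showed the same mode for me):

    #include <cfenv>
    #include <cstdio>

    int main() {
        // fegetround reports the current IEEE-754 rounding mode; the
        // default on both platforms is round-to-nearest-even.
        switch (std::fegetround()) {
            case FE_TONEAREST:  std::puts("round to nearest"); break;
            case FE_UPWARD:     std::puts("round toward +inf"); break;
            case FE_DOWNWARD:   std::puts("round toward -inf"); break;
            case FE_TOWARDZERO: std::puts("round toward zero"); break;
            default:            std::puts("unknown"); break;
        }
        return 0;
    }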