I have an ETL process running on SQL Server 2017 on Linux.
The process involves some floating-point calculations, and I noticed that two different runs of the same process gave slightly different results. One run gave me something like 1.23450000...1, and another gave 1.23449999... Both are approximations of an exact value of 1.2345, and the difference is on the order of 1e-15.
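For concreteness, here is roughly the kind of difference I mean. The two literals below are illustrative stand-ins I made up, not the actual values from my ETL run:

```sql
-- Two illustrative float values that differ only in the low-order bits
-- of their 8-byte IEEE 754 representation.
DECLARE @run1 float = 1.2345000000000002e0,
        @run2 float = 1.2344999999999997e0;

SELECT CONVERT(binary(8), @run1) AS run1_bytes,   -- raw 8-byte representation
       CONVERT(binary(8), @run2) AS run2_bytes,   -- differs only in the last bits
       @run1 - @run2             AS difference;   -- on the order of 1e-16
```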
I know that if I care about two numbers being exactly equal, I shouldn't be using floating point in the first place. I could accept either result as correct, since both are within some small epsilon of the "correct" answer.
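In practice I already compare the two runs with a tolerance rather than exact equality, something like the sketch below; the variable names and the epsilon are placeholders I picked, not part of the real process:

```sql
-- Tolerance-based comparison instead of exact equality; @eps is an
-- arbitrary cutoff chosen to be much larger than the ~1e-15 drift I see.
DECLARE @run1 float = 1.2345000000000002e0,
        @run2 float = 1.2344999999999997e0,
        @eps  float = 1e-9;

SELECT CASE WHEN ABS(@run1 - @run2) <= @eps * ABS(@run1)
            THEN 'equal within tolerance'
            ELSE 'different'
       END AS comparison;
```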
So my real question is more a curiosity about the variation in the approximation itself. Do I have to assume that the binary representation of a floating-point result in SQL Server may not be exactly the same on every run? Are there reasons I could get different results? For example, could a SUM iterate over rows in a different order, or could a different query plan yield a different representation? If so, it would never be safe to compare floating-point values for exact equality, even when comparing the result of the current run against the result of a previous run.
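To make the order-dependence part of my question concrete, here is a small made-up example (the values are arbitrary, not from my ETL) showing that the same three float addends give different sums depending on the accumulation order, which I assume is the same mechanism that would make a SUM depend on the query plan:

```sql
-- Floating-point addition is not associative: the same three addends,
-- accumulated in a different order, round differently along the way.
DECLARE @a float = 1e15, @b float = 0.1e0, @c float = -1e15;

SELECT (@a + @b) + @c AS big_value_first,    -- 0.125: @b gets rounded when added to the huge @a
       (@a + @c) + @b AS cancellation_first; -- 0.1:   the large terms cancel exactly first
```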