Matzeri's link to the definitive resource on floating-point arithmetic is indeed the real answer to this question. However, for completeness:
octave:34> fprintf("%.80f\n%.80f\n", 0.95, 1 - 0.05)
0.94999999999999995559107901499373838305473327636718750000000000000000000000000000
0.94999999999999995559107901499373838305473327636718750000000000000000000000000000
octave:35> fprintf("%.80f\n%.80f\n", 0.05, 1 - 0.95)
0.05000000000000000277555756156289135105907917022705078125000000000000000000000000
0.05000000000000004440892098500626161694526672363281250000000000000000000000000000
In other words, 0.95 cannot be represented in floating point as precisely as 0.05 can: the nearest double to 0.95 is about 4.44e-17 away from it, whereas the nearest double to 0.05 is only about 2.78e-18 away (both errors can be read off the expansions above). So any calculation in the first step that involves 0.95 (either as an input or as an output) is necessarily less precise than one that only uses 0.05.
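The same point can be made from the spacing of representable doubles, which Octave's eps reports when given an argument (outputs shown as they would appear in a session like the one above): doubles near 0.95 are 16 times further apart than doubles near 0.05, so the worst-case rounding error near 0.95 is correspondingly larger.
octave> eps(0.95)
ans = 1.1102e-16
octave> eps(0.05)
ans = 6.9389e-18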
Therefore:
1 - 0.05 = 0.95 (imprecise, but it rounds to exactly the same double as the literal 0.95)
(1 - 0.05) - 0.95 = exactly 0 (both operands are the same, identically imprecise, double)
vs
1 - 0.95 = an imprecise 0.05 (the larger error of 0.95 propagates into the result)
(1 - 0.95) - 0.05 = not exactly 0 (its two operands are two different approximations of 0.05)
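One can confirm this at the bit level with num2hex, which prints the raw IEEE 754 representation of a double (outputs shown as they would appear in a session like the one above): 1 - 0.05 produces the very same bit pattern as the literal 0.95, whereas 1 - 0.95 and the literal 0.05 differ in their final bits.
octave> num2hex(1 - 0.05)
ans = 3fee666666666666
octave> num2hex(0.95)
ans = 3fee666666666666
octave> num2hex(1 - 0.95)
ans = 3fa99999999999a0
octave> num2hex(0.05)
ans = 3fa999999999999a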
However, it should be pointed out that this difference in precision is well below machine precision (as returned by eps: 2.2204e-16 on my machine). Therefore, for all practical purposes, 4.1633e-17 is 0. If the point here is to test whether the result of a calculation is effectively 0, then one should always take machine precision into account when comparing floating-point results, or preferably reformulate the problem so that it avoids the need for equality testing altogether.
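As a minimal sketch of such a tolerance-based test (the helper name isclose is illustrative, not a built-in):
octave> isclose = @(a, b, tol) abs(a - b) <= tol;
octave> isclose(1 - 0.95, 0.05, eps)
ans = 1
octave> (1 - 0.95) == 0.05
ans = 0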