> Are the following two comparisons different in any way?
Usually there is no difference; they behave the same. The right-hand-side constant becomes a `double`, then the comparison occurs.
Below is a C analysis, which I am confident C++ inherits.
Potential differences come up when the source-code value is not exactly representable as a `double` (e.g., a large integer).
With `x > 12345678901234567891.0`, the constant is converted to a `double` per the floating-constant rule: "the result is either the nearest representable value, or the larger or smaller representable value immediately adjacent to the nearest representable value, chosen in an implementation-defined manner."
With `x > 12345678901234567891u`, the constant is represented exactly as an integer constant and then converted to a `double` per the conversion rule: "If the value being converted is in the range of values that can be represented but cannot be represented exactly, the result is either the nearest higher or nearest lower representable value, chosen in an implementation-defined manner."
I would expect both of these to generate the same `double`, but given the slight wording difference, and since both conversions are implementation-defined and not tied to each other, differences could exist. It comes down to the quality of implementation of the compiler.
Of course, source-code integers outside the `long long` range are problematic.
> In general, does it matter if I compare/assign an integer literal or a floating-point literal to a floating-point variable?
In general, it is best to compare using an explicitly coded common type. The path to forming the common type may expose subtle conversion effects.