Edit3: Actual answer.
Two wrong assumptions led to this question:
a) The implicit conversion for floating-point operands is to double, not float. This explains the difference between the two operations (see the sketch below).
b) Don't trust the debugger. As Servy pointed out (after a bit of poking with a stick), the debugger uses ToString(), which apparently rounds implicitly: floating-point values that have no exact binary representation are displayed rounded. One may argue about this; I would argue that it's evil.
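A minimal sketch of both points (the exact digits printed by the default formats can vary between .NET versions, so the comments show what I would expect on a current runtime):

    using System;

    float x = 62.1f;

    // b) Display rounding: the default format hides the representation error,
    //    while a round-trip format ("G9" for float) shows what is actually stored:
    Console.WriteLine(x);                   // 62.1
    Console.WriteLine(x.ToString("G9"));    // 62.0999985

    // a) Evaluated at double precision, the product lands just below 621,
    //    so the cast to int truncates it to 620:
    double d = (double)x * 10;
    Console.WriteLine(d.ToString("G17"));   // 620.99998474121094
    Console.WriteLine((int)d);              // 620

    // Narrowed back to float, the same product rounds up to exactly 621,
    // which matches the 621.0 the watch window shows:
    float f = (float)(x * 10);
    Console.WriteLine(f);                   // 621
    Console.WriteLine((int)f);              // 621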
Edit2: Accepted dupe: Strange behavior when casting a float to int in C#
Thank you, Nolonar. The rest of you should read the actual question before posting.
Edit: This isn't really a duplicate; in fact, the answer to the alleged duplicate merely points out why I am confused... I guess I should make myself clearer...
The question is:
a) Why does (61.1 * 10) yield a different result from (61.1f * 10)? [I assume compiler optimization plays a role here; see the quick check below.]
b) Why does the second line in the example below appear as "621.0" in the debugger, when printing it, and so on, when, as becomes apparent after casting to int, it is in fact closer to 620.99998? [I assume the VS debugger "watch" feature does formatting magic here.]
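For what it's worth, a quick check of what the two literals in the title actually store (the round-trip format specifiers just force enough digits):

    using System;

    // 61.1 is a double literal, 61.1f a float literal; they round 61.1 to
    // different nearby values, so the two products start from different numbers.
    Console.WriteLine(61.1.ToString("G17"));   // 61.100000000000001 (a hair above 61.1)
    Console.WriteLine(61.1f.ToString("G9"));   // 61.0999985 (a hair below 61.1)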
OK, so this doesn't happen to me very often any more, but I am fundamentally confused here about what appears to be a simple thing (and therein lies my problem: assuming floats and casts are a simple thing...).
I mean, I know how floats work. But these values are nowhere near the precision limit, and this is presumably an uncritical operation?
I could guess that compiler optimization has something to do with the two cases in the title yielding different results, but the real case is at runtime:
    float x = 62.1f;   // the f suffix is required; 62.1 alone is a double literal
    x * 10             // == 621.0 (as shown in the watch window)
    (int)(x * 10)      // == 620
Am I correct in assuming that the 621.0 shown is really something like 620.99998 and thus gets truncated by the cast?
In that case, I suppose I should blame the VS "watch" feature...
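For completeness, if 621 is the value actually wanted, rounding explicitly instead of relying on the cast's truncation sidesteps the issue entirely (a small sketch, not part of the original discussion):

    using System;

    float x = 62.1f;

    // A cast to int truncates toward zero, so anything just below 621 becomes 620.
    // Rounding explicitly gives 621 regardless of which precision the intermediate
    // product happened to be computed at:
    Console.WriteLine((int)Math.Round((double)x * 10));   // 621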