There is one little thing in C# that puzzles me :)
Here are my variables and the results:
decimal a1 = 0.2M;
decimal a2 = 1.0M;
a1 - a2 = -0.8
float b1 = 0.2F;
float b2 = 1.0F;
b1 - b2 = -0.8
double c1 = 0.2;
double c2 = 1.0;
c1 - c2 = -0.8
double x1 = 0.2F;
double x2 = 1.0F;
x1 - x2 = -0.799999997019768
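For reference, here is a self-contained sketch that reproduces all four groups (assuming the values are printed with Console.WriteLine, which uses each type's default formatting; the last line's digits can vary slightly between .NET Framework and .NET Core, since newer runtimes use shortest round-trip formatting):

```csharp
using System;

class Repro
{
    static void Main()
    {
        decimal a1 = 0.2M, a2 = 1.0M;   // decimal: base-10 arithmetic
        float   b1 = 0.2F, b2 = 1.0F;   // float: base-2, ~7 digit display
        double  c1 = 0.2,  c2 = 1.0;    // double: base-2, ~15 digit display
        double  x1 = 0.2F, x2 = 1.0F;   // float literals widened to double

        Console.WriteLine(a1 - a2);     // -0.8
        Console.WriteLine(b1 - b2);     // -0.8
        Console.WriteLine(c1 - c2);     // -0.8
        Console.WriteLine(x1 - x2);     // -0.799999997019768 (digits may vary by runtime)
    }
}
```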
Decimal - the result is as I expected, since decimals work in base-10.
Float - surprised me: knowing floats work in base-2, it still shows the result as if it had computed in base-10, without losing precision.
Double 'c' - same as Float.
Double 'x' - shows the result I would have expected from the Float group.
The question is: what's going on in the Float, Double 'c', and Double 'x' groups? Why did the Double 'x' group lose precision, while the Float group calculated as if in base-10 and gave the "expected" result? Why does declaring the Double 'x' group's values with F literals change the outcome so drastically?
For what it's worth, I would expect only the Decimal group to give -0.8, and all the others something like -0.799999997019768.
It looks like I'm missing some piece of understanding about how the calculation is carried out.
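To probe this, here is a sketch using the "G17" round-trip format specifier (a standard .NET numeric format string) to print more digits than the default formatting shows. The comments give approximate values, since the exact printed strings can differ by runtime:

```csharp
using System;

class Digits
{
    static void Main()
    {
        float  f = 0.2F - 1.0F;        // computed entirely in float
        double d = (double)0.2F - 1.0; // float value widened to double first

        // Default formatting rounds a float to ~7 significant digits,
        // so any base-2 representation error is hidden:
        Console.WriteLine(f);                 // -0.8

        // "G17" shows the extra digits actually stored in the value:
        Console.WriteLine(f.ToString("G17")); // ≈ -0.80000001192092896
        Console.WriteLine(d.ToString("G17")); // ≈ -0.79999999701976776
    }
}
```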