In Java and C#:
int a = (int)(-1.5 + 2.5);   // a = 1
int b = (int)(-1.55 + 2.55); // b = 0
int c = (int)(1.45 + 2.55);  // c = 4
Could anyone explain why adding a positive number to a negative one, when each has two or more digits after the decimal point, breaks the result? Printing the sum for b before the cast gives "0.99999999999999978".
So the question is: why does "-1.5 + 2.5" give 1, but "-1.55 + 2.55" give 0?
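For reference, here is a minimal Java sketch (the class name is arbitrary) I used to inspect the exact binary values the doubles actually hold, via new BigDecimal(double), which preserves the stored value exactly rather than its rounded decimal form:

import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact binary value of the double,
        // unlike BigDecimal.valueOf(double), which goes through the decimal string.
        System.out.println(new BigDecimal(-1.55));        // slightly below -1.55
        System.out.println(new BigDecimal(2.55));         // slightly below 2.55
        System.out.println(new BigDecimal(-1.55 + 2.55)); // just under 1.0
        System.out.println((int)(-1.55 + 2.55));          // the cast truncates toward zero: 0
    }
}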