
I have a variable of type float with some value, and at one point that value is cast to a double variable; at that moment the value I see in the double variable is no longer the one in the float variable. To make this example easy to follow, I can reproduce what's happening in the immediate window:

(double)float.Parse ( "-0.00146256")

gives me -0.001462560030631721

The "-0.00146256" (in a string, yes) is the origin of this value; it is stored in a float with float.Parse() somewhere in the code, and then at a later point cast to double.

Why is the value changing to a different one in the double variable, and what can I do to prevent this imprecise behavior?
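The behavior can be reproduced outside C# as well. As an illustration (a Java sketch; Java's `float`/`double` follow the same IEEE-754 binary32/binary64 formats as C#'s), the widening cast does not change the stored value at all — it only reveals digits the float already carried:

```java
public class FloatWidening {
    public static void main(String[] args) {
        // float stores the nearest binary32 value to -0.00146256,
        // not the decimal string itself.
        float f = Float.parseFloat("-0.00146256");

        // Widening to double preserves the stored value exactly...
        double d = f;

        // ...so the double equals the float's value, but NOT the double
        // nearest to the original decimal string.
        System.out.println(d == (double) f);   // true
        System.out.println(d == -0.00146256);  // false: the float approximation differs
    }
}
```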

ElGavilan
user734028
  • use decimal if you want precision – Giorgi Nakeuri Jan 30 '15 at 12:11
  • __No__. The value of the decimal digits in the string are __not__ stored in the float or double. They are __approximated__! No decimal values except powers of 2 are __storable__ in binary number formats. – TaW Jan 30 '15 at 12:17
  • @TaW Yes, integer multiples of (reasonable) powers of 2, to be precise. For example `0.8125` is representable because it is 13 (an integer) times 2**(-4). – Jeppe Stig Nielsen Jan 30 '15 at 12:21
  • See this question for more detail on decimal, float and double: http://stackoverflow.com/questions/618535/difference-between-decimal-float-and-double-in-net – husnain_sys Jan 30 '15 at 12:24
  • It's just that if I had done double.Parse on that string to begin with, this problem would not have occurred. – user734028 Jan 30 '15 at 12:31
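That last comment checks out: parsing the string as a double from the start keeps the best double approximation, while the float detour rounds to float precision first. A small Java sketch of the difference (same IEEE-754 semantics as C#'s float/double):

```java
public class ParseWidth {
    public static void main(String[] args) {
        // Best double approximation of the string:
        double viaDouble = Double.parseDouble("-0.00146256");
        // Best float approximation, then widened without change:
        double viaFloat = (double) Float.parseFloat("-0.00146256");

        System.out.println(viaDouble == -0.00146256); // true: matches the double literal
        System.out.println(viaDouble == viaFloat);    // false: the float step already rounded
    }
}
```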

1 Answer


This is how it is supposed to work. Microsoft says:

This type is useful for applications that need large numbers but do not need precise accuracy. If you require very accurate numbers, consider using the Decimal data type.

You can cast to decimal first and then to double:

double d = (double)(decimal)float.Parse("-0.00146256");
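The cast through `decimal` recovers the short value because .NET's `float`-to-`decimal` conversion keeps only about seven significant digits — roughly the digits the float actually carried. An equivalent way to see the re-rounding effect (a Java sketch, using the float's shortest round-trip string in place of C#'s `decimal`):

```java
public class ReRound {
    public static void main(String[] args) {
        float f = Float.parseFloat("-0.00146256");

        // Float.toString produces the shortest decimal string that
        // round-trips to f; re-parsing that string as a double re-rounds
        // to the double nearest the original text.
        double d = Double.parseDouble(Float.toString(f));

        System.out.println(d == -0.00146256); // true
    }
}
```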
Giorgi Nakeuri