
I am reading a floating-point field into a C# double. The DB value 0.1 is being read as 0.10000000149011612. I find this strange, since this question's answers discuss that FLOAT uses 8 bytes, which is surely enough for 0.1?

SQL Server uses either 4 bytes or 8 bytes to store non-decimal floating-point numbers.

If you specify real or any value in the range float(1) to float(24), SQL Server uses float(24) internally. If you use float or anything in the range float(25) to float(53) (the limit), it uses float(53), which is otherwise known as double precision.
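In C# terms (my own rough sketch of the mapping, not something from the linked answer), float(24) corresponds to System.Single and float(53) to System.Double, and widening a single-precision 0.1 to double produces exactly the kind of digits I'm seeing:

using System;

class FloatMapping
{
    static void Main()
    {
        // SQL Server float(24) / real  -> System.Single (4 bytes)
        // SQL Server float(53) / float -> System.Double (8 bytes)
        Console.WriteLine(sizeof(float));   // 4
        Console.WriteLine(sizeof(double));  // 8

        // Neither type stores 0.1 exactly; each stores the nearest
        // representable binary value at its own precision.
        Console.WriteLine(0.1.ToString("R"));            // 0.1
        Console.WriteLine(((double)0.1f).ToString("R")); // 0.10000000149011612
    }
}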

My table has a field of type FLOAT, which is read and returned by a stored procedure with an output parameter @Result FLOAT OUT.

My C# code looks like this; note it uses SqlDbType.Float, not SqlDbType.Real:

//setting up the query
command.Parameters.Add("@Result", SqlDbType.Float).Direction = ParameterDirection.Output;

//reading the value
double result = Convert.ToDouble(command.Parameters["@Result"].Value);

Should this work, or does a SQL Server FLOAT behave differently from a normal double in languages like C#? Should 0.1 be stored and returned exactly or not? If I were using REAL I wouldn't be surprised, but I thought FLOAT should be very accurate, or at least lose precision rather than introduce erroneous digits...
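For reference, one diagnostic worth adding (a sketch reusing the command object from the snippet above; the comments describe what I'd expect, not verified output) is to inspect the execution-time type of the value before converting it:

// Inspect the boxed value ADO.NET actually hands back. For a parameter
// declared as SqlDbType.Float this should be System.Double, but the
// digits will already reflect any single-precision rounding that
// happened on the server side.
object raw = command.Parameters["@Result"].Value;
Console.WriteLine(raw.GetType());
Console.WriteLine(Convert.ToDouble(raw).ToString("R"));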

Mr. Boy
  • No. float is an approximate datatype and is not totally accurate. https://learn.microsoft.com/en-us/sql/t-sql/data-types/float-and-real-transact-sql – Sean Lange Apr 10 '17 at 16:26
  • Well, the default for float is 53, giving it a precision of 15 digits, though you have more digits there. But I would just use decimal(precision, scale). – S3S Apr 10 '17 at 16:26
  • @SeanLange sure, no fp representation can be totally accurate. But working with C# double types I don't normally see it introduce inaccuracies like this. Is it a difference in the way .Net and SqlServer represent floating point numbers? – Mr. Boy Apr 10 '17 at 16:28
  • And as @scsimon points out, I'm getting errors introduced at 8 decimal places... is this indicating an issue/bug somewhere? – Mr. Boy Apr 10 '17 at 16:33
  • I would wager that yes, the C# and SQL Server internal representations are different. This is one of many reasons I avoid FLOAT types in both C# and SQL unless it's absolutely required for some reason. – pmbAustin Apr 10 '17 at 16:33
  • 1
    @pmbAustin: I'd be very surprised if that were the case. I'd expect it to be IEEE-754 in both cases. – Jon Skeet Apr 10 '17 at 16:43
  • 2
    Have you checked the precision of your field? The value 0.100000001490116119384765625 is the *exact* value of 0.1f, for example... so it sounds like you've effectively got a FLOAT(24) field. And what happens if you *don't* call `Convert.ToDouble`? What is the execution-time type of `command.Parameters["@Result"].Value`? – Jon Skeet Apr 10 '17 at 16:45

1 Answer


Run this query and notice that it creates an endless WHILE loop; you will have to stop the execution manually.

DECLARE @Counter float;
SET @Counter = 0;
WHILE (@Counter <> 1.0) BEGIN
  SET @Counter += 0.1;
  PRINT @Counter;
END;

You would think that it would be the same as this, but it isn't.

DECLARE @Counter decimal(4,2); 
SET @Counter = 0; 
WHILE (@Counter <> 1.0) BEGIN 
  SET @Counter += 0.1; 
  PRINT @Counter; 
END;

The problem is that 0.1 can't be stored exactly in the float data type; a decimal data type needs to be used instead.

A decimal data type is an exact numeric data type, whereas float is an approximate numeric data type. Here is a reference: https://learn.microsoft.com/en-us/sql/t-sql/data-types/float-and-real-transact-sql

The reason the query with the float data type produces an endless WHILE loop is that the accumulated approximate value never exactly equals 1.0, so the loop's exit condition is never met.
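The same effect is easy to reproduce on the client side. Here is a minimal C# sketch (my own illustration, using double, the C# counterpart of SQL Server's float(53); a bounded for loop stands in for the WHILE so the program terminates):

using System;

class FloatAccumulation
{
    static void Main()
    {
        // Add the binary approximation of 0.1 ten times. The sum never
        // lands exactly on 1.0, which is why the float version of the
        // WHILE loop above never sees its exit condition become true.
        double d = 0.0;
        for (int i = 0; i < 10; i++)
        {
            d += 0.1;
        }
        Console.WriteLine(d.ToString("R")); // 0.9999999999999999
        Console.WriteLine(d == 1.0);        // False

        // decimal stores 0.1 exactly, so the same accumulation does
        // reach 1.0, just like the decimal(4,2) loop above.
        decimal m = 0.0m;
        for (int i = 0; i < 10; i++)
        {
            m += 0.1m;
        }
        Console.WriteLine(m == 1.0m);       // True
    }
}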

Jason
  • 1
    It'd be good to explain why the `float` version is an endless loop, as opposed to the `decimal` version. The confusion of why they are different is why the question (and many others) exist. – krillgar Apr 10 '17 at 16:55
  • krillgar, updated my answer. – Jason Apr 10 '17 at 17:25
  • A decimal floating-point type is not inherently any more or less accurate than a binary floating-point type. They differ merely in *which* numbers can be stored exactly using that type. Some numbers, like (1/3), cannot be represented exactly in either binary or decimal floating-point formats. – njuffa Apr 10 '17 at 18:22