I am reading a floating-point field into a C# double field. The DB value 0.1 is being read as 0.10000000149011612. I find this strange, since this question's answers discuss that FLOAT uses 8 bytes, which is surely enough for 0.1?
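As a quick sanity check on my side (this is plain C#, nothing to do with the database), widening a single-precision 0.1 to double reproduces exactly the value I am seeing:

//my own check: the nearest float to 0.1, widened to double
float single = 0.1f;
double widened = single;
Console.WriteLine(widened);   //prints 0.10000000149011612 on current .NET runtimes
Console.WriteLine(0.1);       //a genuine double 0.1 prints 0.1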
SQL Server uses either 4 bytes or 8 bytes to store non-decimal floating-point numbers. If you specify real or any value in the range float(1) to float(24), SQL Server uses float(24) internally. If you use float or anything in the range float(25) to float(53) (the limit), it uses float(53), which is otherwise known as double precision.
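In ADO.NET terms, as far as I understand the mapping (and this may be where I am wrong; the parameter names below are just for illustration), SqlDbType.Real corresponds to real / float(24) and comes back as a System.Single, while SqlDbType.Float corresponds to float(53) and comes back as a System.Double:

//my understanding of the mapping, not something I have verified against the docs
//SqlDbType.Real  -> SQL real / float(24)  -> System.Single (4 bytes)
//SqlDbType.Float -> SQL float / float(53) -> System.Double (8 bytes)
var realParam = new SqlParameter("@RealResult", SqlDbType.Real) { Direction = ParameterDirection.Output };
var floatParam = new SqlParameter("@FloatResult", SqlDbType.Float) { Direction = ParameterDirection.Output };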
My table has a field of type FLOAT, which is read and returned by a stored procedure with parameter @Result FLOAT OUT. My C# code looks like this; note that it uses the Float type, not Real:
//set up the output parameter on the query
command.Parameters.Add("@Result", SqlDbType.Float).Direction = ParameterDirection.Output;
//read the value back after the command has executed
double result = Convert.ToDouble(command.Parameters["@Result"].Value);
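For what it's worth, this is the diagnostic I could add to see whether the provider is actually handing back a Single or a Double before Convert.ToDouble hides the difference (just a sketch, not part of my real code):

//check the runtime type of the raw output value before converting it
object raw = command.Parameters["@Result"].Value;
Console.WriteLine(raw.GetType());   //System.Double for float(53), System.Single for real/float(24)
Console.WriteLine(raw);             //the value exactly as the provider returned it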
Should this be working... or does a SQL Server FLOAT work differently from a normal double type in languages like C#? Should 0.1 be stored and returned accurately or not? If I were using Real I wouldn't be surprised, but I thought Float should be very accurate, or at least have limited precision rather than introduce erroneous digits...