1

For example, the FLOAT type has a default numeric precision of 53:

SELECT * FROM sys.types

and the TDS protocol sends precision 53, but through OLEDB the precision is returned as 18.
What exactly does precision 53 mean? It seems impossible to fit that many decimal digits into 8 bytes. And how does OLEDB get 18 from that number? I found MaxLenFromPrecision in Microsoft's sources, but that array only goes up to precision 37 and applies to numeric types only. FreeTDS has an array tds_numeric_bytes_per_prec that goes up to 77, but it doesn't match Microsoft's data, and the item at index 53 is not 18 as OLEDB returns. So is 53 some kind of virtual number, while 18 is the real precision as described at http://msdn.microsoft.com/en-us/library/ms190476.aspx ?
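
For illustration, a narrower version of that query should show the reported precision and storage size for the approximate-numeric types (expected: 53 and 8 bytes for float, 24 and 4 bytes for real):

SELECT name, precision, max_length
FROM sys.types
WHERE name IN (N'float', N'real');
-- expected on SQL Server: float -> precision 53, max_length 8 (bytes)
--                         real  -> precision 24, max_length 4 (bytes)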

user2091150
  • 978
  • 12
  • 25

2 Answers

4

What exactly does precision 53 mean? It seems impossible to fit that many decimal digits into 8 bytes.

Precision 53 means 53 bits, not 53 decimal digits. 8 bytes contain 64 bits.

A double precision floating point number has 1 sign bit, 11 exponent bits, and 52 explicitly stored mantissa bits; the implied leading 1 bit brings the effective mantissa precision to 53 bits.
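
As a rough sanity check (an illustrative calculation, not a quote from the documentation), 53 bits of mantissa correspond to about 53 * LOG10(2) ≈ 15.95 decimal digits, which is why a double is usually described as having 15-17 significant decimal digits rather than 53:

SELECT 53 * LOG10(2.0) AS approx_decimal_digits;
-- returns roughly 15.95: the number of decimal digits
-- that 53 binary digits of mantissa can carry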

Gilbert Le Blanc
  • 50,182
  • 6
  • 67
  • 111
1

From BOL: http://msdn.microsoft.com/en-us/library/ms173773.aspx

Where n is the number of bits that are used to store the mantissa of the float number in scientific notation and, therefore, dictates the precision and storage size. If n is specified, it must be a value between 1 and 53. The default value of n is 53.

53 is the default precision you get if you don't explicitly specify n when declaring your float data type.
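
A quick way to see that default in action (a sketch assuming a scratch SQL Server session; #float_demo is just a throwaway temp table): a column declared as plain FLOAT comes back with precision 53, while FLOAT(24) is stored as real with precision 24.

CREATE TABLE #float_demo (f_default FLOAT, f_single FLOAT(24));

SELECT name, TYPE_NAME(user_type_id) AS type_name, precision, max_length
FROM tempdb.sys.columns
WHERE object_id = OBJECT_ID('tempdb..#float_demo');
-- expected: f_default -> float, precision 53, 8 bytes
--           f_single  -> real,  precision 24, 4 bytes

DROP TABLE #float_demo;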

John Eisbrener
  • 642
  • 8
  • 17