For example, the FLOAT type has a default numeric precision of 53 (`SELECT * FROM sys.types`), and the TDS protocol sends precision 53, but through OLEDB the precision comes back as 18.
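For reference, this is the narrowed-down query I used to check the catalog (the result row is what I see on my SQL Server instance):

```sql
-- What the catalog reports for float:
SELECT name, max_length, precision, scale
FROM sys.types
WHERE name = 'float';
-- On my instance: float | 8 | 53 | 0
```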
What exactly does precision 53 mean? It seems impossible to fit that many decimal digits into 8 bytes. And how does OLEDB get 18 from that number?
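My current guess (which may be wrong) is that 53 is binary precision, i.e. the number of mantissa bits in an IEEE 754 double, which would amount to only about 16 decimal digits:

```sql
-- If 53 counts mantissa bits rather than decimal digits,
-- the equivalent decimal precision is 53 * log10(2):
SELECT 53 * LOG10(2);  -- ~15.95, nowhere near 53 decimal digits
```

That would explain why 53 decimal digits obviously can't fit in 8 bytes, but it still doesn't tell me where OLEDB's 18 comes from.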
I found `MaxLenFromPrecision` in Microsoft sources, but that array only goes up to precision 37 and covers numeric types only. FreeTDS has an array that goes up to 77, `tds_numeric_bytes_per_prec`, but it doesn't match Microsoft's data, and the entry at index 53 is not 18 as OLEDB returns. So is 53 some kind of virtual number, while 18 is the real precision as described at http://msdn.microsoft.com/en-us/library/ms190476.aspx ?
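For comparison, I tried to reproduce the shape of that FreeTDS table myself. The formula below (one sign byte plus enough whole bytes to hold p decimal digits in binary) is my own assumption, not something I've verified against the FreeTDS source:

```sql
-- Assumed formula: bytes(p) = 1 (sign byte) + ceil(p / log10(256)),
-- i.e. enough whole bytes to encode p decimal digits, plus sign.
SELECT p AS precision,
       1 + CEILING(p / (8 * LOG10(2.0))) AS bytes_needed
FROM (VALUES (9), (19), (28), (38), (53), (77)) AS v(p);
-- p = 38 -> 17 bytes, which matches SQL Server's storage size for numeric(38)
```

Given its name, that table seems to map precision to storage bytes, not to another precision, so maybe indexing it with 53 isn't meaningful for float in the first place.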