This post has the following code:

DECLARE @A DECIMAL(3, 0), @B DECIMAL(18, 0), @F FLOAT

SET @A = 3
SET @B = 3
SET @F = 3

SELECT 1 / @A * 3.0, 1 / @B * 3.0, 1 / @F * 3.0
SELECT 1 / @A * 3  , 1 / @B * 3  , 1 / @F * 3 

Using float, the expression evaluates to 1. Using decimal, it evaluates to a string of 9s after the decimal point. Why does float yield the more accurate answer in this case? I thought decimal was the more accurate/exact type, per *Difference between numeric, float and decimal in SQL Server* and *Use Float or Decimal for Accounting Application Dollar Amount?*

James

1 Answer

The decimal values you have declared have a fixed width, with no digits after the decimal point (scale 0). This affects how the intermediate results are calculated.
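You can see the effect on the intermediate quotient directly (a minimal sketch reusing @A from the question; the values in the comments follow from the decimal arithmetic rules described below):

DECLARE @A DECIMAL(3, 0)
SET @A = 3

-- The quotient 1 / @A is itself a decimal with a limited scale,
-- so it is truncated before the multiplication ever happens:
SELECT 1 / @A        -- 0.333333  (the remaining 3s are already gone)
SELECT 1 / @A * 3.0  -- 0.9999990 (the lost digits never come back)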

SQL Server has a rather complex formula for calculating the precision and scale of arithmetic expressions that involve decimal values. The details are in the documentation. You also need to take into account that numeric constants with a decimal point, such as 3.0, are typed as decimal rather than float.
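For example, SQL_VARIANT_PROPERTY (a documented built-in) reveals the type SQL Server infers for the intermediate quotient. The integer literal 1 is treated as DECIMAL(10, 0), so the documented division formula gives:

DECLARE @A DECIMAL(3, 0)
SET @A = 3

-- precision = p1 - s1 + s2 + max(6, s1 + p2 + 1) = 10 - 0 + 0 + 6   = 16
-- scale     = max(6, s1 + p2 + 1)                = max(6, 0 + 3 + 1) = 6
SELECT SQL_VARIANT_PROPERTY(1 / @A, 'BaseType'),   -- decimal
       SQL_VARIANT_PROPERTY(1 / @A, 'Precision'),  -- 16
       SQL_VARIANT_PROPERTY(1 / @A, 'Scale')       -- 6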

Also, in the end, you need to convert the result back to a decimal with the precision and scale you actually want. If you do, you may discover that float and decimal give equivalent answers.
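For instance (a sketch reusing the question's variables), rounding both results to one common target type makes them agree:

DECLARE @A DECIMAL(3, 0), @F FLOAT
SET @A = 3
SET @F = 3

-- Converting to a common target type rounds away the difference
-- in intermediate precision:
SELECT CAST(1 / @A * 3.0 AS DECIMAL(10, 2)),  -- 1.00
       CAST(1 / @F * 3.0 AS DECIMAL(10, 2))   -- 1.00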

Gordon Linoff
  • Thanks. That [post](http://www.sqlservercentral.com/Forums/Topic522397-360-1.aspx#bm522459) had given me pause about using `decimal` (instead of `float`) for financial data. From your referenced documentation, I found [this article](https://msdn.microsoft.com/en-us/library/ms190309.aspx), which shows that `float` has higher precedence. Therefore, the calculation in the select statement is being converted to `float`. If I convert back to `decimal`, it returns the equivalent value, as you noted. – James Aug 01 '16 at 19:41