This post has the following code:
DECLARE @A DECIMAL(3, 0), @B DECIMAL(18, 0), @F FLOAT
SET @A = 3
SET @B = 3
SET @F = 3
SELECT 1 / @A * 3.0, 1 / @B * 3.0, 1 / @F * 3.0   -- multiply each quotient by a decimal literal
SELECT 1 / @A * 3  , 1 / @B * 3  , 1 / @F * 3     -- multiply each quotient by an integer literal
Using FLOAT, the expressions evaluate to 1. Using DECIMAL, they evaluate to a string of 9s after the decimal point. Why does FLOAT yield the more accurate answer in this case? I thought DECIMAL was the more accurate/exact type, per "Difference between numeric, float and decimal in SQL Server" and "Use Float or Decimal for Accounting Application Dollar Amount?".
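For reference, here is how I looked at the intermediate quotients. This is just a sketch using the same declarations as above: SQL_VARIANT_PROPERTY reports the type, precision, and scale that SQL Server infers for 1 / @A, 1 / @B, and 1 / @F before the multiplication happens.

DECLARE @A DECIMAL(3, 0), @B DECIMAL(18, 0), @F FLOAT
SET @A = 3
SET @B = 3
SET @F = 3
-- Report the inferred base type, precision, and scale of each intermediate quotient
SELECT SQL_VARIANT_PROPERTY(1 / @A, 'BaseType'), SQL_VARIANT_PROPERTY(1 / @A, 'Precision'), SQL_VARIANT_PROPERTY(1 / @A, 'Scale')
SELECT SQL_VARIANT_PROPERTY(1 / @B, 'BaseType'), SQL_VARIANT_PROPERTY(1 / @B, 'Precision'), SQL_VARIANT_PROPERTY(1 / @B, 'Scale')
SELECT SQL_VARIANT_PROPERTY(1 / @F, 'BaseType'), SQL_VARIANT_PROPERTY(1 / @F, 'Precision'), SQL_VARIANT_PROPERTY(1 / @F, 'Scale')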