
Can anyone explain the following results in SQL Server? I'm stumped.

```sql
declare @mynum float = 8.31
select ceiling(@mynum * 100)
```

Results in 831

```sql
declare @mynum float = 8.21
select ceiling(@mynum * 100)
```

Results in 822

I've tested a whole range of numbers (in SQL Server 2012). Some increase while others stay the same, and I'm at a loss to understand why CEILING treats them differently. Changing from float to decimal(18,5) seems to fix the problem, but I'm wary there may be other repercussions I'm missing from doing so. Any explanation would help.

Aaron Bertrand
TEIHelp
    Why are you using float? I don't know of any repercussions from switching to decimal except you have to type a little bit more and will have more predictable accuracy. See [this article](https://sqlblog.org/2009/10/12/bad-habits-to-kick-choosing-the-wrong-data-type) and [this Q & A](https://stackoverflow.com/q/1355418/61305). – Aaron Bertrand Oct 02 '14 at 16:41

2 Answers

I think this is floating-point precision at work. You will find it in almost all programming languages, and in databases too. Data is stored with limited binary precision, so what you set as 8.31 is probably not exactly 8.31 but something like 8.3100000000000005, and when you multiply it and apply CEILING, an unexpected value can appear.

On the SQL Server documentation page you can read:

> Approximate-number data types for use with floating point numeric data. Floating point data is approximate; therefore, not all values in the data type range can be represented exactly.
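SQL Server's float is an IEEE 754 double, the same format as Python's float, so the question's results can be reproduced outside SQL Server (a Python illustration, not T-SQL):

```python
import math

# With 8.31, the representation error in the product happens to round
# away, so the result is exactly 831.0; with 8.21 it does not, so the
# product lands slightly above 821 and CEILING bumps it to 822.
print(8.31 * 100)             # exactly 831.0
print(8.21 * 100)             # slightly above 821
print(math.ceil(8.31 * 100))  # 831
print(math.ceil(8.21 * 100))  # 822
```

Whether the result lands exactly on, just above, or just below the integer depends on how the rounding errors of the stored value and the multiplication interact, which is why only some values in a range appear to "increase".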

The same problem exists in other database systems. For example, on the MySQL website you can read:

> Floating-point numbers sometimes cause confusion because they are approximate and not stored as exact values. A floating-point value as written in an SQL statement may not be the same as the value represented internally. Attempts to treat floating-point values as exact in comparisons may lead to problems. They are also subject to platform or implementation dependencies. The FLOAT and DOUBLE data types are subject to these issues. For DECIMAL columns, MySQL performs operations with a precision of 65 decimal digits, which should solve most common inaccuracy problems.

Marcin Nabiałek
  • @Lasse Different numbers will get represented by float differently. If 8.31 actually gets represented as 8.3100000000000001, then you multiply that and it's 831.0000000000001, and what is CEILING(831.0000000000001)? 832. If 8.21 gets represented as 8.209999999999 ... and so on. – Aaron Bertrand Oct 02 '14 at 16:45
  • You probably want this quote from [the SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms173773.aspx), rather than anything from the MySQL documentation: `Approximate-number data types for use with floating point numeric data. Floating point data is approximate; therefore, not all values in the data type range can be represented exactly.` – Aaron Bertrand Oct 02 '14 at 16:46
  • @Lasse that's great, if you're always dealing with the value 100. – Aaron Bertrand Oct 02 '14 at 16:54
  • @Lasse do you think it's possible there is a typo in the question? They said they tested a whole bunch of values. Can you see past that? The issue portrayed is that one value (8.31) works as expected and doesn't round up, whereas another value (8.21) behaves differently and rounds up. Even if they stated the explicit output incorrectly, you can see that there is a difference in behavior, yes? – Aaron Bertrand Oct 02 '14 at 16:57
  • (I don't think the OP cares (or should care) how it works in MySQL. Can you answer the question for SQL Server?) – Aaron Bertrand Oct 02 '14 at 16:59
  • Deleted all my comments since the question has now been fixed. – Lasse V. Karlsen Oct 02 '14 at 17:11

Floating-point values are not 100% accurate. As Marcin Nabiałek wrote, the 8.31 you see is probably represented by something else, something like 8.310000000000001. See here for some interesting reading about the accuracy problem of floating point.
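You can inspect the exact value a double actually stores. A Python illustration (SQL Server's float(53) uses the same IEEE 754 double format; `Decimal(float)` expands the binary value exactly):

```python
from decimal import Decimal

# Constructing a Decimal from a float shows the exact binary value the
# double stores -- both literals land slightly ABOVE the decimal you typed.
print(Decimal(8.31))  # slightly above 8.31
print(Decimal(8.21))  # slightly above 8.21
```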

The solution is not to use floating-point data types unless you really have to. Use the DECIMAL or MONEY data types instead.
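As a sketch of why exact decimal types fix this, Python's `decimal` module behaves analogously to SQL Server's DECIMAL here:

```python
import math
from decimal import Decimal

# Exact decimal arithmetic: 8.21 * 100 is exactly 821, so CEILING
# no longer bumps it up to 822.
print(math.ceil(Decimal("8.31") * 100))  # 831
print(math.ceil(Decimal("8.21") * 100))  # 821
```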

If you really have to use a floating-point data type, then you can add or subtract a small value (the accuracy threshold, or epsilon) before every floor, ceiling or comparison operation to get the precision you want. If you have a lot of floating-point operations then it might be worth it to code your own floating-point comparison functions.
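A minimal sketch of the epsilon approach, in Python (the helper name and tolerance are hypothetical; tune `eps` to the scale of your data):

```python
import math

EPSILON = 1e-9  # hypothetical tolerance; must exceed the float noise
                # but stay smaller than any genuine fractional part

def ceil_eps(x, eps=EPSILON):
    """Ceiling that ignores tiny floating-point noise just above an integer."""
    return math.ceil(x - eps)

print(ceil_eps(8.21 * 100))   # 821 -- the float noise no longer bumps it up
print(ceil_eps(8.215 * 100))  # 822 -- a genuine fraction still rounds up
```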

ForguesR