
How do you decide whether to use decimal or float as the datatype for a column in a table?

Abhishek Bhandari
HelloWorld1
  • Possible duplicate: http://stackoverflow.com/questions/1056323/difference-between-numeric-float-and-decimal-in-sql-server, also see the MSDN documentation: http://msdn.microsoft.com/en-us/library/ms187912.aspx – Styxxy Feb 21 '13 at 09:33
  • To quote a famous phrase on [regexes](http://en.wikiquote.org/wiki/Jamie_Zawinski)... _Some people, when confronted with a problem, think "I know, I'll use floats." Now they have two problems._ – xanatos Feb 21 '13 at 09:41

3 Answers

3

A float is smaller and performs better, so if performance is an issue you might want to go with float.
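For a rough sense of the size difference in SQL Server, `DATALENGTH` reports the bytes used; the counts in the comments assume the default float(53), a real, and a decimal(18,2) (decimal storage grows with declared precision):

```sql
-- Storage footprint of the three types in SQL Server.
-- float defaults to float(53); decimal size depends on precision.
SELECT
    DATALENGTH(CAST(1 AS float))         AS float_bytes,    -- 8
    DATALENGTH(CAST(1 AS real))          AS real_bytes,     -- 4
    DATALENGTH(CAST(1 AS decimal(18,2))) AS decimal_bytes;  -- 9
```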

A decimal is better for representing decimal numbers exactly. A float stores the number in binary, so an amount like 10.2 cannot be stored exactly as a float, but it can be stored exactly as a decimal. If an exact decimal representation is more important than performance, you should choose decimal.
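A minimal T-SQL sketch of the difference: the float arithmetic leaves a tiny binary rounding residue, while the decimal comes out exact:

```sql
-- 10.2 has no exact binary representation, so float arithmetic
-- leaves a tiny residue; decimal(10,2) stores 10.2 exactly.
SELECT CAST(10.2 AS float)         * 100 - 1020 AS float_residue,   -- roughly -1.1e-13
       CAST(10.2 AS decimal(10,2)) * 100 - 1020 AS decimal_residue; -- 0.00
```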

Mark Byers
0

Check this out; it might be what you are looking for. It explains the difference:

Difference between float and decimal data type

Abhishek Bhandari
0

With an integer type, a value like 57 represents the number fifty-seven precisely. There's no question of whether it really represents something a tiny bit bigger, or a tiny bit smaller. The value zero represents the additive identity (adding zero to any value yields the same value), the value one represents the multiplicative identity (multiplying any value by one yields the same value), any other positive value represents the number of times one must be added to zero to reach it (e.g. 2 is 1+1, 3 is 1+1+1, etc.), and any negative value represents the additive inverse of the corresponding positive value.

With a binary floating-point type, values don't represent precise numbers. Rather, with the exception of some sentinel values, every floating-point value has a range of numbers for which it should be considered the best representation; one number in that range is considered the "nominal value". The IEEE standard requires that adding two floating-point values should yield the best representation of the number that would be produced by adding their nominal values, but that doesn't mean that a float whose nominal value happens to be exactly 2000000.125 actually represents that exact numerical quantity. It may just as likely represent the result of a calculation (e.g. parsing the literal 2000000.1f, whose exact value 2000000.1 is better represented by that float than by any other) whose outcome was better representable by that float value than by any other.
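A small T-SQL illustration of the same idea, using SQL Server's real (binary32) type: the literal 2000000.1 has no exact binary representation, and the nearest real is exactly 2000000.125:

```sql
-- 2000000.1 cannot be stored in a binary32 (real); the nearest
-- representable value is exactly 2000000.125, so the stored value
-- differs from the nominal one by 0.125 rather than 0.1.
SELECT CAST(2000000.1 AS real) - 2000000 AS stored_minus_integer;  -- 0.125
```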

Although the Decimal type offers higher precision than float or double, its primary usage is for representing precise decimal fractions (e.g. 1.37). The semantics of the type are actually closer to those of an imprecise type like double than a precise type like int, since adding e.g. a trillion to a number and then subtracting a trillion may result in a number that differs from the starting value. If the purpose of a value is to represent a precise decimal fraction, or if one needs the extra precision the type can offer, Decimal may be a good fit despite its quirks. Otherwise, if a value which would be displayed as 1.37 really represents a number for which 1.37 is merely the nicest convenient representation, but which might actually be somewhat bigger or smaller, double will be more compact and faster to work with.
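SQL Server's decimal shows an analogous quirk, sketched below: when an intermediate result would need more than 38 digits of precision, the engine silently reduces the scale, so low-order digits can vanish in a round trip:

```sql
-- Adding a trillion forces the intermediate result past 38 digits
-- of precision; SQL Server drops scale to fit, losing @tiny.
DECLARE @tiny decimal(38,20) = 0.00000000000000000001;             -- 1e-20
SELECT (@tiny + 1000000000000.0) - 1000000000000.0 AS round_trip;  -- 0, not 1e-20
```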

supercat