5

What's the difference between using decimal or float in C#? I know float is used for more precise decimal numbers and decimal is used for few decimals like currency or prices, but when it's used for few decimals, why is it better to use `decimal` rather than `float`?

Dr. Funk
  • 282
  • 3
  • 10
  • 2
    *"I know float is used for more precise decimal numbers and decimal is used for few decimals like currency"* You have that backwards, `decimal` has more precision. – Ron Beyer Oct 26 '15 at 18:31
  • 2
    Simple answer: use float for low precision, double for high precision, and decimal for exact precision. Of course you can't store more than about a 15-digit range, but that's exact, not floating. – M.kazem Akhgary Oct 26 '15 at 18:40
  • @M.kazemAkhgary “exact”! Wow! How does `decimal` represent 1/3 then? What happens when the latter is multiplied by 3? – Pascal Cuoq Oct 27 '15 at 00:28
  • Sorry. Decimal stores about a 28-digit range. If you divide 1 by 3 you get 0.33…; it will store only 28 repeated threes. When you multiply it by 3 you will get 0.99…, 28 repeated nines. As you see, this is exact math evaluation. But if you use the double type, since it's floating and highly precise (not exact), you will get 1, because it will always round to something with fewer decimals. That's why we say "float": it can round up or round down depending on the decimals. Thus it's not good for storing, for example, world records or times in car racing, because milliseconds are important and we should not round anything! – M.kazem Akhgary Oct 27 '15 at 05:04
  • But when you want to calculate something in math form then you should use double to give you better numbers (in some cases it gives you the exact result due to rounding, like the example you give: 1/3 is not stored exactly, and 0.33… × 3 is not calculated exactly, but when 0.999… is rounded we get 1, which is the exact result expected). Also, double is much faster than decimal, though you don't feel it because processors are fast! You should read this answer about decimal vs double performance: http://stackoverflow.com/a/329618/4767498 – M.kazem Akhgary Oct 27 '15 at 05:13
  • Sorry for my terrible English but i hope you got what I mean – M.kazem Akhgary Oct 27 '15 at 05:20
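
The behavior described in the comments above can be checked directly; here is a minimal sketch (the class name is my own):

```csharp
using System;

class ThirdDemo
{
    static void Main()
    {
        // decimal keeps 28 significant digits and never rounds back up:
        decimal d = (1m / 3) * 3;      // 0.9999999999999999999999999999
        Console.WriteLine(d == 1m);    // False

        // double rounds each step to the nearest binary value, and here
        // the rounding happens to land exactly back on 1.0:
        double f = (1.0 / 3) * 3;
        Console.WriteLine(f == 1.0);   // True
    }
}
```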

3 Answers

8

A float is a binary floating-point type, which means that, under the hood, it is a binary significand followed by a binary exponent, taking the form mantissa x 10 ^ exponent, where "10" is the number 2 written in binary. For example, the number 3.0 is represented as 1.1 x 10^1 (that is, 1.5 × 2^1), and the number 8 1/2 is represented as 1.0001 x 10^11 (that is, 1.0625 × 2^3). It essentially represents numbers as binary fractions. The problem with base-2 floating-point numbers is that they cannot precisely represent decimal fractions whose denominators are not powers of 2.

It's easy to represent values like 1/2, 1/4, 1/8, 1/16, etc. in floating-point binary format: 1/2 is just 0.1, 1/4 is 0.01, 1/8 is 0.001, and so on. But if you want to represent a decimal value like 0.6, you have to build a sum of base-2 fractions to get close to it. So you end up with a floating-point representation of 1.001100110011001100110011 x 10^-1, where the 0011 just keeps repeating, because there is no finite representation of decimal 0.6 in base 2.
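
A short sketch (the class name is illustrative) makes this visible: exact binary fractions survive untouched, while 0.6 is silently replaced by the nearest representable float:

```csharp
using System;

class BinaryFractionDemo
{
    static void Main()
    {
        // 1/2 and 1/4 are exact binary fractions, so no error sneaks in:
        Console.WriteLine(0.5f + 0.25f == 0.75f);  // True

        // 0.6 is not, so float stores the nearest binary fraction instead,
        // which happens to be slightly larger than 0.6:
        Console.WriteLine((double)0.6f == 0.6);    // False
        Console.WriteLine((double)0.6f > 0.6);     // True
    }
}
```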

This is where the decimal type comes in. Rather than using a fractional binary representation, the decimal type uses a sign bit, a 96-bit integer significand, and an integer scaling factor, taking the form (-1)^sign x significand / (10 ^ scaling factor). The sign bit can be 0 (positive) or 1 (negative), the significand can be anything from 0 to 2^96 - 1, and the scaling factor can be anything from 0 to 28. What this means is that the number is represented under the hood as a decimal fraction instead of a binary fraction, so the numbers that we humans are used to working with can be precisely and accurately represented in a rational form under the hood. Unlike the ugly and imprecise binary representation of 0.6 that we saw earlier, the decimal type represents 0.6 as a nice clean 6 / 10^1. Pretty, isn't it?
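
You can inspect this internal representation with `decimal.GetBits`, which returns the 96-bit significand as three ints plus a fourth int packing the sign and scale (the class name below is my own):

```csharp
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        int[] bits = decimal.GetBits(0.6m);

        // bits[0..2] hold the 96-bit significand (low word first);
        // bits[3] packs the scale (bits 16-23) and the sign (bit 31).
        int low   = bits[0];
        int scale = (bits[3] >> 16) & 0xFF;

        Console.WriteLine($"{low} / 10^{scale}");  // 6 / 10^1
    }
}
```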

Unfortunately, this comes at a cost. Operations on the decimal type are much, much slower than operations on float or double. The processor in virtually every computer known to man is a binary processor (I say "virtually every" because I cannot disprove the existence of a non-binary computer somewhere on the planet), so it natively supports binary addition, subtraction, and so on. An operation like float x = 256.0f / 2; compiles down to simple floating-point instructions (dividing by 2 effectively just decrements the binary exponent). However, decimal x = 256.0m / 2; compiles to a more complicated set of instructions, because the number is not stored as a binary fraction, and the special base-10 representation of the number must be handled in software.
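
A rough microbenchmark sketch illustrates the gap; absolute timings vary by machine and JIT, but the decimal loop is typically an order of magnitude slower (the class name is my own):

```csharp
using System;
using System.Diagnostics;

class SpeedSketch
{
    static void Main()
    {
        const int N = 10_000_000;

        var sw = Stopwatch.StartNew();
        double dSum = 0;
        for (int i = 0; i < N; i++) dSum += 1.1;   // hardware floating-point add
        sw.Stop();
        Console.WriteLine($"double : {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        decimal mSum = 0;
        for (int i = 0; i < N; i++) mSum += 1.1m;  // software base-10 add
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");
    }
}
```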

Generally, if you require speed more than decimal accuracy, a float or double will suffice for your application. If, however, you require decimal accuracy above all else, such as for accounting, then decimal is the type to use.
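
The classic accounting example, adding ten 0.1s, shows why (a minimal sketch; names are my own):

```csharp
using System;

class CurrencyDemo
{
    static void Main()
    {
        double d = 0;
        decimal m = 0;
        for (int i = 0; i < 10; i++)
        {
            d += 0.1;    // nearest binary fraction to 0.1; error accumulates
            m += 0.1m;   // exactly 1/10 at every step
        }
        Console.WriteLine(d == 1.0);   // False
        Console.WriteLine(m == 1.0m);  // True
    }
}
```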

See this MSDN documentation for more details.

Dr. Funk
  • 282
  • 3
  • 10
1

The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations. The approximate range and precision for the decimal type are shown in the following table.

(-7.9 x 10^28 to 7.9 x 10^28) / (10^0 to 10^28), 28-29 significant digits

From MSDN

Floats on the other hand:

The float keyword signifies a simple type that stores 32-bit floating-point values. The following table shows the precision and approximate range for the float type.

-3.4 × 10^38 to +3.4 × 10^38, 7 significant digits

Link
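
Both quoted limits can be confirmed from the framework's own constants (the class name is illustrative):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        // decimal: enormous precision, comparatively small range.
        Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 (about 7.9 x 10^28)

        // float: far larger range, but only ~7 reliable significant digits.
        Console.WriteLine(float.MaxValue);    // about 3.4 x 10^38
    }
}
```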

Community
  • 1
  • 1
sab669
  • 3,984
  • 8
  • 38
  • 75
  • So, `decimal` is the most precise keyword for decimal numbers? – Facundo Schiavoni Oct 26 '15 at 18:38
  • Yes, it supports the larger number in terms of possible digits (thus more precise). I don't know off the top of my head which supports the higher mathematical value though. **EDIT** `Decimal.MaxValue` gives you `79,228,162,514,264,337,593,543,950,335`, whereas `Single.MaxValue` (why is the float type `Single`?) says `3.402823e38` according to MSDN. – sab669 Oct 26 '15 at 18:39
  • But `float` is faster, so I should avoid using `decimal` unless I need a lot of precision, right? – Facundo Schiavoni Oct 26 '15 at 18:41
  • One could say it "doesn't really matter" given how much power modern computers have, but I suppose there's no point in reinforcing bad habits. So if you can remember which type is smaller (in terms of memory) and you know it'll suffice for your needs, then yes, a `float` would be better, I guess. I've had professors who care about using the smallest types necessary, and I've had employers who don't give a shit so long as it compiles. – sab669 Oct 26 '15 at 18:43
  • @FacundoSchiavoni If you're working with decimal values (not binary), use `decimal` unless you need a lot of speed. Try running the following code: `float x = 0.29f; bool surprise = (x * 10) == (0.29f * 10);` – Jakub Lortz Oct 26 '15 at 18:57
0

A decimal uses 128 bits to represent the number, while a float uses only 32 bits.
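
This can be confirmed with `sizeof`, which C# permits on the built-in value types without unsafe code (the class name is my own):

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(float));    // 4 bytes  (32 bits)
        Console.WriteLine(sizeof(double));   // 8 bytes  (64 bits)
        Console.WriteLine(sizeof(decimal));  // 16 bytes (128 bits)
    }
}
```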