2419

What is the difference between decimal, float and double in .NET?

When would someone use one of these?

PC Luddite
  • 5,883
  • 6
  • 23
  • 39
  • 2
    interesting article http://zetcode.com/lang/csharp/datatypes/ – GibboK Mar 01 '14 at 14:20
  • 7
    You cannot use decimal to interop with native code since it is a .net specific implementation, while float and double numbers can be processed by CPUs directly. – codymanix Mar 06 '21 at 10:55

18 Answers

2557

float (the C# alias for System.Single) and double (the C# alias for System.Double) are floating binary point types. float is 32-bit; double is 64-bit. In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.

decimal (the C# alias for System.Decimal) is a floating decimal point type. In other words, it represents a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
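
Here's a quick C# sketch of both effects (the exact printed digits may vary slightly by runtime and formatting):

double d = 0.1;                          // stored as the nearest binary fraction, not exactly 0.1
Console.WriteLine(d.ToString("G17"));    // prints something like 0.10000000000000001
Console.WriteLine(0.1 + 0.2 == 0.3);     // False - the binary approximations don't add up exactly
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True - these values are exact in decimal
Console.WriteLine(1m / 3m);              // 0.3333333333333333333333333333 - decimal still approximates 1/3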

As for what to use when:

  • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

  • For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.

Jon Skeet
  • 1,421,763
  • 867
  • 9,128
  • 9,194
  • 90
    `float`/`double` usually do not represent numbers as `101.101110`, normally it is represented as something like `1101010 * 2^(01010010)` - an exponent – Mingwei Samuel Aug 13 '14 at 21:50
  • 94
    @Hazzard: That's what the "and the location of the binary point" part of the answer means. – Jon Skeet Aug 13 '14 at 21:57
  • 144
    I'm surprised it hasn't been said already, `float` is a C# alias keyword and isn't a .Net type. it's `System.Single`.. `single` and `double` are floating binary point types. – Brett Caswell Feb 03 '15 at 15:48
  • 11
    wait....isn't a decimal represented in 1s and 0s eventually? I thought computers could only work in binary form. so then a decimal is eventually a binary type isn't it? – BenKoshy Nov 26 '15 at 03:00
  • 65
    @BKSpurgeon: Well, only in the same way that you can say that *everything* is a binary type, at which point it becomes a fairly useless definition. Decimal is a decimal type in that it's a number represented as an integer significand and a scale, such that the result is significand * 10^scale, whereas float and double are significand * 2^scale. You take a number written in decimal, and move the decimal point far enough to the right that you've got an integer to work out the significand and the scale. For float/double you'd start with a number written in binary. – Jon Skeet Nov 26 '15 at 07:20
  • 3
    The other aspect is conversion between these data types: Single and Double use "fuzzy" comparison; conversion from double to single loses precision; conversion from single to double creates inaccuracy; conversion to/from decimal introduces rounding errors between bases. Agree on a team style guide for the data types you use, and watch out for conversions. – Ehsan Feb 26 '16 at 02:52
  • 37
    Another difference: float 32-bit; double 64-bit; and decimal 128-bit. – David Aug 29 '16 at 15:08
  • @JonSkeet For `floats`/`doubles` we get: `Console.WriteLine(0.1 + 0.2 == 0.3); // false`. If I get it right, it's not equal because of the conversion from decimal notation we use in code to the binary notation used in memory. Can we do it the other way though? Initialize `decimal` variables with binary notation in code and then get a similar mismatch? – Andrzej Gis Jun 21 '19 at 20:05
  • @AndrzejGis: No, because every binary value *is* exactly representable in decimal. (Basically because 2 is a factor of 10.) – Jon Skeet Jun 22 '19 at 06:16
  • as I tried to suggest by editing (before I had enough reputation to comment. Sorry If my suggested edit wasted yours or anyone else's time, by the way), I feel the note about performance at the end of your second bullet point should either not be there at all, or be expanded (as I tried to do) into its own bullet point. It seems out of place as is. – Twisted on STRIKE at1687989253 Nov 04 '20 at 17:19
  • @TwistedCode: I can't see your suggested edit now, but I'm reasonably comfortable with it being there. – Jon Skeet Nov 04 '20 at 17:22
  • A great, canonical answer, but it would be nice if it mentioned "float is 32-bit and double is 64-bit" right toward the beginning – Jacob Stamm Jul 15 '22 at 18:27
1261

Precision is the main difference.

Float - 7 digits (32 bit)

Double - 15-16 digits (64 bit)

Decimal - 28-29 significant digits (128 bit)

Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double/float.

Decimals and Floats/Doubles cannot be compared without a cast, whereas Floats and Doubles can. Decimals also allow the encoding of trailing zeros.

float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);

Result:

float: 0.3333333  
double: 0.333333333333333  
decimal: 0.3333333333333333333333333333
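
The trailing-zeros point can be seen in a quick sketch (decimal keeps the scale as part of the value, binary floating point does not):

Console.WriteLine(1.50m);         // 1.50  - the trailing zero is preserved
Console.WriteLine(1.50);          // 1.5
Console.WriteLine(2.5m * 1.20m);  // 3.000 - the scales of the operands are added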
Martin Backasch
  • 1,829
  • 3
  • 20
  • 30
cgreeno
  • 31,943
  • 7
  • 66
  • 87
  • 5
    This answer needs to be corrected. Precision for Decimal is not 128 bits but infinite because the format is essentially different from float. @Skeet answer is the best. `Example: 0.1 = 0.099999.... in float but in decimal it is 0.1, that is infinite precision. If you were to use 128 bits precision like in floats, you would get 0.999999....(up to 29 digits) but that is still not precise as decimal 0.1` – TheTechGuy Nov 21 '11 at 18:23
  • 72
    @Thecrocodilehunter: sorry, but no. Decimal can represent all numbers that can be represented in decimal notation, but not 1/3 for example. 1.0m / 3.0m will evaluate to 0.33333333... with a large but finite number of 3s at the end. Multiplying it by 3 will not return an exact 1.0. – Erik P. Nov 29 '11 at 21:14
  • 4
    This is a fault with the number itself ( 0.3333... in this case), not its decimal representation where it is produced 100% faithfully. When you introduced an error in the number, no body can remove it (not even decimal numbers). The only way to remove error from this number is to use 1/3 not 0.333. Some calculator might take 1/3 as mid value but most of them don't. `Try this: represent 0.3333 in floating point, you will end up with 0.3332999998..., this is not 0.3333 (you see the error). Now represent this in decimal it is 0.3333 (exactly as it is, no error - 100% accurate).` – TheTechGuy Nov 29 '11 at 22:13
  • 59
    @Thecrocodilehunter: I think you're confusing accuracy and precision. They are different things in this context. Precision is the number of digits available to represent a number. The more precision, the less you need to round. No data type has infinite precision. – Igby Largeman Jan 06 '12 at 17:42
  • 6
    @IgbyLargeman Precision and Accuracy is used in context of measuring a value by an instrument. In this case we are not talking about any instrument. We are only talking about representing a value faithfully by decimal vs floating point. Precision does not apply here as we are not talking about consistency of measuring the same value, over and over. But Accuracy does. Accuracy of decimal point on a number that is in its range is 100%, that is infinite accuracy. – TheTechGuy Jan 09 '12 at 19:50
  • 18
    @Thecrocodilehunter: You're assuming that the value that is being measured is *exactly* `0.1` -- that is rarely the case in the real world! *Any* finite storage format will conflate an infinite number of possible values to a finite number of bit patterns. For example, `float` will conflate `0.1` and `0.1 + 1e-8`, while `decimal` will conflate `0.1` and `0.1 + 1e-29`. Sure, *within a given range*, certain values can be represented in any format with zero loss of accuracy (e.g. `float` can store any integer up to 1.6e7 with zero loss of accuracy) -- but that's still not *infinite* accuracy. – Daniel Pryden Jan 10 '12 at 01:49
  • 1
    @DanielPryden, Ok I believe float will represent 0.1 as 0.1, not as 0.1 + 1e-29. This is because the format is essentially different than float. That is why it is very slow but accurate. If you were right than decimal is useless. Remember the main problem in float `if(0.1 = 0.1)` this condition does not holds true when we think it should be true. In decimal it will ALWAYS be true because 0.1 will be 0.1 and nothing else. For example it will not be 0.99999999999999999999999999999. – TheTechGuy Jan 10 '12 at 12:14
  • 31
    @Thecrocodilehunter: You missed my point. `0.1` is **not a special value**! The only thing that makes `0.1` "better" than `0.10000001` is because **human beings** like base 10. And even with a `float` value, if you initialize two values with `0.1` the same way, *they will both be the same value*. It's just that that value won't be **exactly** `0.1` -- it will be *the closest value to `0.1` that can be exactly represented as a `float`*. Sure, with binary floats, `(1.0 / 10) * 10 != 1.0`, but with decimal floats, `(1.0 / 3) * 3 != 1.0` either. **Neither** is *perfectly* precise. – Daniel Pryden Jan 10 '12 at 18:27
  • 3
    @DanielPryden, with decimal number, it will be **exactly** 0.1. Of course it is not about 0.1 only. A large number of decimals numbers has this problem. The fact is in decimal (0.1 == 0.1) will **always be true**. In float it may or may not be true because the actual binary value may not be exactly 0.1. – TheTechGuy Jan 10 '12 at 19:19
  • 20
    @Thecrocodilehunter: You still don't understand. I don't know how to say this any more plainly: In C, if you do `double a = 0.1; double b = 0.1;` then `a == b` **will be true**. It's just that `a` and `b` will *both* not exactly equal `0.1`. In C#, if you do `decimal a = 1.0m / 3.0m; decimal b = 1.0m / 3.0m;` then `a == b` will also be true. But in that case, **neither** of `a` nor `b` will **exactly** equal `1/3` -- they will both equal `0.3333...`. In *both* cases, some accuracy is lost due to representation. You stubbornly say that `decimal` has "infinite" precision, which is *false*. – Daniel Pryden Jan 10 '12 at 19:29
  • 3
    @Thecrocodilehunter: Just in case you don't believe me, [here's some sample code that shows that `0.1 == 0.1`](http://ideone.com/P5Mdh). – Daniel Pryden Jan 10 '12 at 19:33
  • 3
    This should have been marked as the correct answer. Jon Skeet's answer's a bit confusing... – Chibueze Opata Jul 15 '12 at 14:36
  • 9
    @ChibuezeOpata: Skeet's answer discusses a completely separate difference which this answer completely ignores. Personally, I consider Skeet's answer to be more valuable, as his answer is more relevant in deciding which data type to use. – Brian Jan 18 '13 at 18:32
  • 1
    @Brian They are both very valuable, and that is why I said ab initio that they are incomplete without each other. Concerning the question asked however, this answer simply goes straight to the point and tells you the essential differences. You can make almost all the deductions in Jon Skeet's answer from this one. :) – Chibueze Opata Jan 18 '13 at 19:46
  • 6
    @ChibuezeOpata No, you can't, because this answer doesn't even mention the decimal/binary distinction. – svick Jun 21 '13 at 16:11
  • 1
    @DanielPryden - I know this is an old issue, but maybe I can help clarify. The issue here is that Decimal numbers are 100% accurate when representing numbers that are within the precision of the decimal format. That is, not the result of pi, or 1/3, or 2/3. That's irrelevant because those numbers require greater precision than decimal can represent. If you do a calculation on a decimal value that exceeds the precision, then all bets are off. With float/double numbers that ARE within the precision of the format are not always 100% accurate. .1 for example. – Erik Funkenbusch Jun 21 '13 at 18:40
  • 6
    @MystereMan: what do you mean by "within the precision of the decimal format"? If the number you are measuring is exactly an integer raised to a power of ten, then absolutely use a `decimal`. Many numbers encountered in everyday life have this property (because the are discrete, not continuous, measurements), but many others do not. The correct data type for any purpose always depends on the purpose. Please don't mistake anything I'm saying here as implying that anyone should *always* use floats -- I'm merely arguing that one shouldn't blindly always use `decimal`s instead. – Daniel Pryden Jun 21 '13 at 23:52
  • 1
    @MystereMan: I think part of your confusion is betrayed by the phrase "within the precision of the format". I don't think that makes sense -- do you mean something like "within the representable range" instead? But even that doesn't prove anything: 0.1 is not any more "within the precision" of a double than 2^53+1 is, and both can be represented equally faithfully. – Daniel Pryden Jun 21 '13 at 23:59
  • 4
    Pretty much every time the issue of precision of floating point representation (be it decimal or binary) comes up, there ensues a long conversation of comments at cross-purposes. Fundamentally this is due to the question of whether the *exact* value *represented* by the floating point *representation* corresponds to the same *exact* value in the real world. This cannot be known by looking at the representation of the number itself; it can only be known by the humans that use the representation. – Dan Nissenbaum Jun 26 '13 at 00:16
  • 5
    Here are a small example code for C# (which this article is about) that visualizes the problem (using decimal & float). `(0.1f == 1f/10)` and `(0.1m == 1m/10)`. The first will evaluate to false while the second will evaluate to true, even though both should evaluate to true. This is due to the fact that float cannot exactly store the value 0.1. – David Mårtensson Jun 27 '13 at 14:27
  • @DavidMårtensson: Why should 0.1f not equal 1f/10? Should not both evaluate to 13421773/134217728? – supercat Sep 12 '13 at 14:58
  • @supercat Because how float works internally 0.1f cannot be exactly represented in the internal binary format and due to the fact that calculations use more precision internally 1f/10 will not land on the same rounded value as 0.1f, hence they will not be equal. – David Mårtensson Sep 13 '13 at 16:09
  • @DavidMårtensson: The compile-time type of the expression `1f/10` is `float`. Are you saying that compilers are not required to round the result of the division to the nearest `float` before performing the comparison? I regard as somewhat broken the fact that one is allowed to directly compare a `float` to anything else, or a `double` to anything else other than 32-bit-or-smaller integers [I think a cast should be required] but I would consider severely broken a compiler that performed what was by the rules of the language a float/float comparison as though it were a float/double comparison. – supercat Sep 13 '13 at 16:29
  • @DavidMårtensson: (Incidentally, what I'd like to see would be a language with both "loose" and "strict" 32- and 64-bit floating-point types, where the strict ones would not accept any implicit conversions and the "loose" one would be defined as extending operation results to `double` and would generally allow implicit down-conversions to `float`, but would disallow direct comparisons between 32-bit and 64-bit values. I would posit that while C# will have no qualm about `double d1=f1*f2;` it would be rare for the programmer to actually intend that `d1` might hold a `float`-precision result.) – supercat Sep 13 '13 at 16:36
  • 1
    The IEEE standard for binary floating point does not mandate strict decimal precision, see Mark Jones answer below. It is not defined by the language. If you require strict rounding you should use the decimal datatype which is a decimal floating point as Jon Skeet points out in Mehrdad's answer below. The different types have different uses and different requirements. When calculating real world values in physics for example, your original numbers are probably less precise than your compiler so the computational errors will usually have less impact that measurement errors. – David Mårtensson Sep 16 '13 at 07:07
  • 10
    Yet another attempt to try to hit the nail on the head: Both `float` and `double` can *exactly* represent fractions of the form `p/q` where `q` is a power of 2. E.g. 0.5, 3.25, 1/256, etc. `decimal` however can *exactly* represent fractions of the form `p/q` where `q` is a power of 10 (ten). See [this answer](http://stackoverflow.com/a/15348989/2700898). Though it is correct that `decimal` has more significant digits, it is misleading to leave it at that; the *representation* is fundamentally different than `float` and `double` which lends `decimal` to precise decimal calculations. – Matt Feb 28 '14 at 18:52
  • 4
    Precision is not the main difference. Decimal being base 10 is the main difference. – Randall Sutton Jun 25 '14 at 13:10
  • 3
    -1 while the main difference between float and double is precision, the main difference between float, double, and decimal is not. It's true that decimal does have a wider precision, but more importantly, it also stores the values in a decimal-centric format, as opposed to float and double, which store their values in binary-centric format. To give an example, the number ".75" in decimal is equivalent to ".11" in binary, because one half plus one forth == three fourths. Naturally, some fractional decimal values (even within the ~7 digit range) can only be approximated by double and float. – BrainSlugs83 Mar 14 '15 at 22:44
  • @Matt decimal can exactly represent fractions of the form p/q when q is a power of 2 or a power of 5 (i.e., prime factors of 10). Consider 1/2 (0.5) and 1/5 (0.2), for example; neither denominator is a power of 10. – phoog May 27 '15 at 10:27
  • 1
    @hmd consider a floating-point base-3 system, where 1/10 (rather 1/101) is an infinitely repeating fraction: 0.00220022.... However, 1/3 is not; it is 0.1. Consider Matt's comment: Fractions that can be exactly represented in a given base are those that use the prime factors of the base. Decimal does not have infinite precision; it has 28 decimal digits of precision. If it truly had infinite precision, you would be able to represent half of `0.0000000037252902984619140625m`. But you can't; dividing that by 2 gives `0.0000000018626451492309570312m` instead of `0.00000000186264514923095703125` – phoog May 27 '15 at 10:45
  • @phoog There is no requirement that p/q be in simplest form. In your examples, 1/2=5/10 and 1/5=2/10 and therefore have exact decimal representations. Another example is 1/20=0.05 in which the denominator is neither a power of 2, 5 or 10. You said "decimal can exactly represent fractions of the form p/q when q is a power of 2 or a power of 5". Though technically correct, this is actually more restrictive than what I said because 1/10, for example, cannot be written in the form p/q where q is a power of 2 or 5. – Matt May 27 '15 at 19:34
  • @hmd In addition to the values, like 0.1, that can be represented as `decimal` but not as `double`, there are some values that can be exactly represented as `double` but not `decimal`. Consider the fraction `1 / 2^31`. The `decimal` representation is truncated, while the `double` representation is exact. The .NET *string representation* of the `double` is not exact, but the in-memory bit representation is exact. Jon Skeet has a class that will convert any double to the exact decimal string representation, which can be quite long: http://csharpindepth.com/Articles/General/FloatingPoint.aspx – phoog May 27 '15 at 21:30
  • @Matt I also oversimplified. The real requirement is that after reducing the fraction to its simplest form q is the product of a power of 2 and a power of 5; that is, q's unique prime factors must be the same as or a subset of the unique prime factors of the base. You can of course equivalently recast that as all of q`s prime factors being either a prime factor of the base or a divisor of p. – phoog May 27 '15 at 21:34
  • Better answer!! =)) – Jose Henrique Feb 22 '20 at 18:21
  • 2
    @ChibuezeOpata: Jon Skeet's answer might be a bit confusing, but it has infinite accuracy.... – awe Sep 08 '20 at 12:12
  • @DanielPryden I believe the confusion here comes from the fact that DECIMAL in mySQL is a exact value data type, UNlike float. while System.decimal in c# it is a approximate value data type, like float. – LukasKroess Jun 20 '23 at 06:59
126
+---------+----------------+---------+----------+---------------------------------------------------------+
| C#      | .Net Framework | Signed? | Bytes    | Possible Values                                         |
| Type    | (System) type  |         | Occupied |                                                         |
+---------+----------------+---------+----------+---------------------------------------------------------+
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                             |
| short   | System.Int16   | Yes     | 2        | -32,768 to 32,767                                       |
| int     | System.Int32   | Yes     | 4        | -2,147,483,648 to 2,147,483,647                         |
| long    | System.Int64   | Yes     | 8        | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| byte    | System.Byte    | No      | 1        | 0 to 255                                                |
| ushort  | System.UInt16  | No      | 2        | 0 to 65,535                                             |
| uint    | System.UInt32  | No      | 4        | 0 to 4,294,967,295                                      |
| ulong   | System.UInt64  | No      | 8        | 0 to 18,446,744,073,709,551,615                         |
| float   | System.Single  | Yes     | 4        | Approximately ±1.5e-45 to ±3.4e38                       |
|         |                |         |          |  with ~6-9 significant figures                          |
| double  | System.Double  | Yes     | 8        | Approximately ±5.0e-324 to ±1.7e308                     |
|         |                |         |          |  with ~15-17 significant figures                        |
| decimal | System.Decimal | Yes     | 16       | Approximately ±1.0e-28 to ±7.9e28                       |
|         |                |         |          |  with 28-29 significant figures                         |
| char    | System.Char    | N/A     | 2        | Any Unicode character (16 bit)                          |
| bool    | System.Boolean | N/A     | 1 / 2    | true or false                                           |
+---------+----------------+---------+----------+---------------------------------------------------------+

See here for more information.

  • 12
    You left out the biggest difference, which is the base used for the decimal type (decimal is stored as base 10, all other numeric types listed are base 2). – BrainSlugs83 Mar 14 '15 at 22:55
  • 2
    The value ranges for the Single and Double are not depicted correctly in the above image or the source forum post. Since we can't easily superscript the text here, use the caret character: Single should be 10^-45 and 10^38, and Double should be 10^-324 and 10^308. Also, MSDN has the float with a range of -3.4x10^38 to +3.4x10^38. Search MSDN for System.Single and System.Double in case of link changes. Single: https://msdn.microsoft.com/en-us/library/b1e65aza.aspx Double: https://msdn.microsoft.com/en-us/library/678hzkk9.aspx – deegee Jun 22 '15 at 19:18
  • 3
    Decimal is 128 bits ... means it occupies 16 bytes not 12 – user1477332 Oct 23 '18 at 03:29
105

The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:

  • A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
  • Decimal is much (much) slower than float and double for most operations, primarily because floating point operations are done in binary, whereas Decimal stuff is done in base 10 (i.e. floats and doubles are handled by the FPU hardware, such as MMX/SSE, whereas decimals are calculated in software).
  • Decimal has an unacceptably smaller value range than double, despite the fact that it supports more digits of precision. Therefore, Decimal can't be used to represent many scientific values.
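
As a rough sketch of the range limitation in the last bullet (the commented-out lines indicate what fails):

double big = 1e300;                   // fine: double ranges up to roughly 1.7e308
// decimal alsoBig = 1e300m;          // won't compile: outside decimal's range (roughly ±7.9e28)
Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335
Console.WriteLine((decimal)1e28);     // still within decimal's range
// Console.WriteLine((decimal)1e29);  // throws OverflowException at run time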
Mark Jones
  • 2,024
  • 1
  • 19
  • 12
  • 7
    If you're doing financial calculations, you absolutely have to roll your own datatypes or find a good library that matches your exact needs. Accuracy in a financial setting is defined by (human) standards bodies and they have very specific localized (both in time and geography) rules about how to do calculations. Things like correct rounding aren't captured in the simple numeric datatypes in .Net. The ability to do calculations is only a very small part of the puzzle. – James Moore Apr 06 '16 at 16:59
92

I won't reiterate tons of good (and some bad) information already answered in other answers and comments, but I will answer your followup question with a tip:

When would someone use one of these?

Use decimal for counted values

Use float/double for measured values

Some examples:

  • money (do we count money or measure money?)

  • distance (do we count distance or measure distance? *)

  • scores (do we count scores or measure scores?)

We always count money and should never measure it. We usually measure distance. We often count scores.

* In some cases, what I would call nominal distance, we may indeed want to 'count' distance. For example, maybe we are dealing with country signs that show distances to cities, and we know that those distances never have more than one decimal digit (xxx.x km).
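
A small sketch of that rule of thumb (the variable names are only illustrative):

decimal invoiceTotal = 1234.56m;   // counted: money is exact, down to the last cent
decimal diveScore = 9.5m;          // counted: a score awarded in fixed decimal steps
double temperature = 21.437;       // measured: the value is an approximation anyway
double distanceKm = 12.7;          // usually measured...
decimal signpostKm = 12.7m;        // ...but a nominal, signposted distance could be counted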

tomosius
  • 1,369
  • 12
  • 18
  • 4
    I really like this answer, especially the question "do we count or measure money?" However, other than money, I can't think of anything that is "counted" that is not simply integer. I have seen some applications that use decimal simply because double has *too few* significant digits. In other words, decimal might be used because C# does not have a **quadruple** type https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format – John Henckel Apr 04 '19 at 18:55
57

float has about 7 digits of precision

double has about 15 digits of precision

decimal has about 28 digits of precision

If you need better accuracy, use double instead of float. In modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which in practice matters only if you have many of them.
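
A quick sketch of the precision difference (the printed digits may vary slightly between runtimes):

float f = 123456789f;
double d = 123456789.123456789;
decimal m = 123456789.123456789123456789m;
Console.WriteLine(f);   // roughly 1.2345679E+08 - only about 7 significant digits survive
Console.WriteLine(d);   // roughly 123456789.12345679 - about 15-16 significant digits
Console.WriteLine(m);   // 123456789.123456789123456789 - stored exactly, within 28-29 digits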

I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic

ABCD
  • 897
  • 16
  • 38
CharithJ
  • 46,289
  • 20
  • 116
  • 131
  • 1
    @RogerLipscombe: I would consider `double` proper in accounting applications in those cases (and basically only those cases) where no integer type larger than 32 bits was available, and the `double` was being used as though it were a 53-bit integer type (e.g. to hold a whole number of pennies, or a whole number of hundredths of a cent). Not much use for such things nowadays, but many languages gained the ability to use double-precision floating-point values long before they gained 64-bit (or in some cases even 32-bit!) integer math. – supercat May 29 '14 at 17:57
  • 1
    Your answer implies precision is the only difference between these data types. Given binary floating point arithmetic is typically implemented in *hardware FPU*, performance is a significant difference. This may be inconsequential for some applications, but is critical for others. – saille Jan 15 '15 at 03:16
  • 6
    @supercat double is *never* proper in accounting applications. Because Double can only approximate decimal values (even within the range of its own precision). This is because double stores the values in a base-2 (binary)-centric format. – BrainSlugs83 Mar 14 '15 at 22:50
  • 2
    @BrainSlugs83: Use of floating-point types to hold *non-whole-number* quantities would be improper, but it was historically very common for languages to have floating-point types that could precisely represent larger whole-number values than their integer types could represent. Perhaps the most extreme example was Turbo-87 whose only integer types were limited to -32768 to +32767, but whose `Real` could IIRC represent values up to 1.8E+19 with unit precision. I would think it would be much saner for an accounting application to use `Real` to represent a whole number of pennies than... – supercat Mar 15 '15 at 19:45
  • 1
    ...for it to try to perform multi-precision math using a bunch of 16-bit values. For most other languages the difference wasn't that extreme, but for a long time it has been very common for languages not to have any integer type that went beyond 4E9 but have a `double` type which had unit accuracy up to 9E15. If one needs to store whole numbers which are bigger than the largest available integer type, using `double` is apt to be simpler and more efficient than trying to fudge multi-precision math, especially given that while processors have instructions to perform 16x16->32 or... – supercat Mar 15 '15 at 19:47
  • ...32x32->64 multiplication, programming languages generally don't. – supercat Mar 15 '15 at 19:51
48

No one has mentioned that

With default settings, arithmetic on floats (System.Single) and doubles (System.Double) never uses overflow checking, while arithmetic on Decimal (System.Decimal) always does.

I mean

decimal myNumber = decimal.MaxValue;
myNumber += 1;

throws OverflowException.

But these do not:

float myNumber = float.MaxValue;
myNumber += 1; // no exception; the 1 is lost in rounding and the value is unchanged

&

double myNumber = double.MaxValue;
myNumber += 1; // no exception; the 1 is lost in rounding and the value is unchanged
GorkemHalulu
  • 2,925
  • 1
  • 27
  • 25
  • 2
    `float.MaxValue+1 == float.MaxValue`, just as `decimal.MaxValue+0.1D == decimal.MaxValue`. Perhaps you meant something like `float.MaxValue*2`? – supercat Jan 14 '15 at 00:21
  • @supercar But it is not true that decimal.MaxValue + 1 == decimal.MaxValue – GorkemHalulu Jan 14 '15 at 06:12
  • @supercar decimal.MaxValue + 0.1m == decimal.MaxValue ok – GorkemHalulu Jan 14 '15 at 06:19
  • 1
    The `System.Decimal` throws an exception just before it becomes unable to distinguish whole units, but if an application is supposed to be dealing with e.g. dollars and cents, that could be too late. – supercat Jan 14 '15 at 16:15
31

Integers, as was mentioned, are whole numbers. They can't store the point something, like .7, .42, and .007. If you need to store numbers that are not whole numbers, you need a different type of variable. You can use the double type or the float type. You set these types of variables up in exactly the same way: instead of using the word int, you type double or float. Like this:

float myFloat;
double myDouble;

(float is short for "floating point", and just means a number whose point can "float" - that is, a number with a fractional part.)

The difference between the two is in the size of the numbers that they can hold. For float, you can have up to 7 digits in your number. For doubles, you can have up to 16 digits. To be more precise, here's the official range:

float:  1.5 × 10^-45  to 3.4 × 10^38  
double: 5.0 × 10^-324 to 1.7 × 10^308

float is a 32-bit number, and double is a 64-bit number.

Double click your new button to get at the code. Add the following three lines to your button code:

double myDouble;
myDouble = 0.007;
MessageBox.Show(myDouble.ToString());

Halt your program and return to the coding window. Change this line:

myDouble = 0.007;

to this:

myDouble = 12345678.1234567;

Run your program and click your double button. The message box correctly displays the number. Add another digit on the end, though, and C# will again round up or down. The moral is: if you want accuracy, be careful of rounding!

Sae1962
  • 1,122
  • 15
  • 31
daniel
  • 405
  • 6
  • 6
  • 4
    The "point something" you mentioned is generally referred to as "the fractional part" of a number. "Floating point" does not mean "a number with a point something on the end"; but instead "Floating Point" distinguishes the type of number, as opposed to a "Fixed Point" number (which can also store a fractional value); the difference is whether the precision is fixed, or floating. -- Floating point numbers give you a much bigger dynamic range of values (Min and Max), at the cost of precision, whereas a fixed point numbers give you a constant amount of precision at the cost of range. – BrainSlugs83 Sep 16 '17 at 01:09
29
  1. Double and float can be divided by zero without throwing an exception, at either compile time or run time: the result is Infinity, -Infinity or NaN.
  2. Decimal cannot be divided by zero. Dividing by a constant zero fails compilation (CS0020), and dividing by a zero value at run time throws a DivideByZeroException (see the sketch below).
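
A small sketch of both behaviours (note that the decimal divisor must not be a compile-time constant zero, or the code won't compile at all):

double d = 1.0;
Console.WriteLine(d / 0);     // positive infinity - no exception
Console.WriteLine(-d / 0);    // negative infinity
Console.WriteLine(0.0 / 0);   // NaN

decimal m = 1.0m;
decimal zero = 0m;
// decimal bad = 1.0m / 0;    // compile-time error CS0020: division by constant zero
try
{
    Console.WriteLine(m / zero);
}
catch (DivideByZeroException)
{
    Console.WriteLine("decimal division by zero throws at run time");
}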
Second Person Shooter
  • 14,188
  • 21
  • 90
  • 165
  • 6
    They sure can! They also also have a couple of "magic" values such as Infinity, Negative Infinity, and NaN (not a number) which make it very useful for detecting vertical lines while computing slopes... Further, if you need to decide between calling float.TryParse, double.TryParse, and decimal.TryParse (to detect if a string is a number, for example), I recommend using double or float, as they will parse "Infinity", "-Infinity", and "NaN" properly, whereas decimal will not. – BrainSlugs83 Jun 23 '11 at 19:29
  • 2
    _Compilation_ only fails if you attempt to divide a literal `decimal` by zero (CS0020), and the same is true of integral literals. However if a runtime decimal value is divided by zero, you'll get an exception not a compile error. – Drew Noakes Nov 18 '16 at 00:24
19
  • float: ±1.5 x 10^-45 to ±3.4 x 10^38 (~7 significant figures)
  • double: ±5.0 x 10^-324 to ±1.7 x 10^308 (15-16 significant figures)
  • decimal: ±1.0 x 10^-28 to ±7.9 x 10^28 (28-29 significant figures)
Wai Ha Lee
  • 8,598
  • 83
  • 57
  • 92
Mukesh Kumar
  • 2,354
  • 4
  • 26
  • 37
  • 11
    The difference is more than just precision. -- `decimal` is actually stored in decimal format (as opposed to base 2; so it won't lose or round digits due to conversion between the two numeric systems); additionally, `decimal` has no concept of special values such as NaN, -0, ∞, or -∞. – BrainSlugs83 Sep 16 '17 at 01:19
16

This has been an interesting thread for me, as today, we've just had a nasty little bug, concerning decimal having less precision than a float.

In our C# code, we are reading numeric values from an Excel spreadsheet, converting them into a decimal, then sending this decimal back to a Service to save into a SQL Server database.

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    decimal value = 0;
    Decimal.TryParse(cellValue.ToString(), out value);
}

Now, for almost all of our Excel values, this worked beautifully. But for some, very small Excel values, using decimal.TryParse lost the value completely. One such example is

  • cellValue = 0.00006317592

  • Decimal.TryParse(cellValue.ToString(), out value); // would return 0

The solution, bizarrely, was to convert the Excel values into a double first, and then into a decimal:

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    double valueDouble = 0;
    double.TryParse(cellValue.ToString(), out valueDouble);
    decimal value = (decimal) valueDouble;
    …
}

Even though double has less precision than a decimal, this actually ensured small numbers would still be recognised. For some reason, double.TryParse was actually able to retrieve such small numbers, whereas decimal.TryParse would set them to zero.

Odd. Very odd.
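
A likely explanation, based on the comments below: Excel hands back the cell value as a double, and calling ToString() on a very small double produces scientific notation such as "6.317592E-05", which decimal.TryParse rejects with its default number styles. A sketch of an alternative that parses the exponent notation directly:

using System.Globalization;

decimal value;
bool ok = decimal.TryParse("6.317592E-05",
                           NumberStyles.Float,
                           CultureInfo.InvariantCulture,
                           out value);
// ok == true, value == 0.00006317592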

Sae1962
  • 1,122
  • 15
  • 31
Mike Gledhill
  • 27,846
  • 7
  • 149
  • 159
  • 4
    Out of curiosity, what was the raw value of cellValue.ToString()? Decimal.TryParse("0.00006317592", out val) seems to work... – micahtan Aug 27 '12 at 23:57
  • 12
    -1 Don't get me wrong, if true, it's very interesting but this is a separate question, it's certainly not an answer to this question. – weston May 22 '13 at 14:19
  • 5
    Maybe because the Excel cell was returning a double and ToString() value was "6.31759E-05" therefore the decimal.Parse() didn't like the notation. I bet if you checked the return value of Decimal.TryParse() it would have been false. – SergioL Oct 15 '14 at 20:44
  • 3
    @weston Answers often complement other answers by filling in nuances they have missed. This answer highlights a difference in terms of parsing. It is very much an answer to the question! – Robino May 20 '15 at 15:52
  • 3
    Er... `decimal.Parse("0.00006317592")` works -- you've got something else going on. -- Possibly scientific notation? – BrainSlugs83 Sep 16 '17 at 01:15
  • 2
    `decimal.Parse("0.00006317592") works`, but `decimal.Parse(0.00006317592.ToString())` does not as @SergioL suggested. `0.00006317592.ToString()` becomes `6.317592E-05` and decimal.Parse does not like that. – Robert McKee Oct 25 '19 at 14:42
15

The Decimal, Double, and Float variable types differ in the way they store their values. Precision is the main difference: float is a single-precision (32-bit) floating-point data type, double is a double-precision (64-bit) floating-point data type, and decimal is a 128-bit floating-point data type.

Float - 32 bit (7 digits)

Double - 64 bit (15-16 digits)

Decimal - 128 bit (28-29 significant digits)

More about...the difference between Decimal, Float and Double

John Saunders
  • 160,644
  • 26
  • 247
  • 397
warnerl
  • 159
  • 1
  • 2
9

For applications such as games and embedded systems where memory and performance are both critical, float is usually the numeric type of choice as it is faster and half the size of a double. Integers used to be the weapon of choice, but floating point performance has overtaken integer in modern processors. Decimal is right out!

yoyo
  • 8,310
  • 4
  • 56
  • 50
  • 2
    Pretty much all modern systems, even cell phones, have hardware support for double; and if you game has even simple physics, you will notice a big difference between double and float. (For example, calculating the velocity / friction in a simple Asteroids clone, doubles allow acceleration to flow much more fluidly than float. -- Seems like it shouldn't matter, but it totally does.) – BrainSlugs83 Sep 16 '17 at 01:22
  • Doubles are also double the size of floats, meaning you need to chew through twice as much data, which hurts your cache performance. As always, measure and proceed accordingly. – yoyo Sep 22 '17 at 17:53
5

The problem with all these types is that a certain imprecision remains, and this problem can occur even with small decimal numbers, as in the following example:

Dim fMean as Double = 1.18
Dim fDelta as Double = 0.08
Dim fLimit as Double = 1.1

If fMean - fDelta < fLimit Then
    bLower = True
Else
    bLower = False
End If

Question: Which value does the bLower variable contain?

Answer: On a 32-bit machine, bLower contains True!

If I replace Double with Decimal, bLower contains False, which is the correct answer.

With Double, the problem is that fMean - fDelta = 1.09999999999..., which is lower than 1.1.

Caution: I think the same problem can certainly exist for other numbers, because Decimal is still a floating-point type with higher precision, and precision always has a limit.

In fact, Double, Float and Decimal all correspond to the BINARY (COMP) type in COBOL!

It is regrettable that the other numeric types implemented in COBOL don't exist in .NET. For those who don't know COBOL, the following numeric types exist in COBOL:

BINARY or COMP like float or double or decimal
PACKED-DECIMAL or COMP-3 (2 digits in 1 byte)
ZONED-DECIMAL (1 digit in 1 byte) 
schlebe
  • 3,387
  • 5
  • 37
  • 50
4

In simple words:

  1. The Decimal, Double, and Float variable types differ in the way that they store their values.
  2. Precision is the main difference (note that it is not the only difference): float is a single-precision (32-bit) floating-point data type, double is a double-precision (64-bit) floating-point data type, and decimal is a 128-bit floating-point data type.
  3. The summary table:

/==========================================================================================
    Type       Bits    Have up to                   Approximate Range 
/==========================================================================================
    float      32      7 digits                     ±1.5 × 10 ^ (-45)  to ±3.4 × 10 ^ (38)
    double     64      15-16 digits                 ±5.0 × 10 ^ (-324) to ±1.7 × 10 ^ (308)
    decimal    128     28-29 significant digits     ±1.0 × 10 ^ (-28)  to ±7.9 × 10 ^ (28)
/==========================================================================================
You can read more here, Float, Double, and Decimal.
IndustProg
  • 627
  • 1
  • 13
  • 33
  • 1
    What does this answer add that isn't already covered in the existing answers? BTW, your "or" in the "decimal" line is incorrect: the slash in the web page that you're copying from indicates division rather than an alternative. – Mark Dickinson Feb 10 '18 at 12:15
  • 2
    And I'd dispute strongly that precision is the main difference. The main difference is the base: decimal floating-point versus binary floating-point. That difference is what makes `Decimal` suitable for financial applications, and it's the main criterion to use when deciding between `Decimal` and `Double`. It's rare that `Double` precision isn't enough for scientific applications, for example (and `Decimal` is often *unsuitable* for scientific applications because of its limited range). – Mark Dickinson Feb 10 '18 at 12:28
2

The main difference between each of these is the precision.

  • float is a 32-bit number
  • double is a 64-bit number
  • decimal is a 128-bit number
Pang
  • 9,564
  • 146
  • 81
  • 122
user3776645
  • 397
  • 3
  • 5
2

Float:

It is a floating binary point type, which means it represents a number in its binary form. Float is a single-precision, 32-bit (6-9 significant figures) data type. It is used mostly in graphics libraries because of their very high demand for processing power, and also in situations where rounding errors are not very important.

Double:

It is also a floating binary point type, with double precision and a 64-bit size (15-17 significant figures). Double is probably the most generally used data type for real values, except in financial applications and places where high accuracy is desired.

Decimal:

It is a floating decimal point type, which means it represents a number using decimal digits (0-9). It uses 128 bits (28-29 significant figures) for storing and representing data, so it has more precision than float and double. It is mostly used in financial applications because of its high precision and because rounding errors are easier to avoid.

Example:

using System;
  
public class GFG {
  
    static public void Main()
    {
  
        double d = 0.42e2;    //double data type
        Console.WriteLine(d); // output 42
  
        float f = 134.45E-2f;  //float data type
        Console.WriteLine(f); // output: 1.3445
  
        decimal m = 1.5E6m;   //decimal data type
        Console.WriteLine(m); // output: 1500000
    }
}

Comparison between Float, Double and Decimal on the Basis of:

No. of Bits used:

  • Float uses 32 bits to represent data.
  • Double uses 64 bits to represent data.
  • Decimal uses 128 bits to represent data.

Range of values:

  • The float value ranges from approximately ±1.5e-45 to ±3.4e38.

  • The double value ranges from approximately ±5.0e-324 to ±1.7e308.

  • The Decimal value ranges from approximately ±1.0e-28 to ±7.9e28.

Precision:

  • Float represents data with single precision.
  • Double represents data with double precision.
  • Decimal has higher precision than float and double.

Accuracy:

  • Float is less accurate than Double and Decimal.
  • Double is more accurate than Float but less accurate than Decimal.
  • Decimal is more accurate than Float and Double.
Abbas Aryanpour
  • 391
  • 3
  • 15
-3

To define Decimal, Float and Double in .NET (C#)

you must write the values with their type suffixes:

decimal dec = 12M / 6;
double dbl = 11D / 6;
float fl = 15F / 6;

and check the results.

And the bytes occupied by each are:

Float - 4
Double - 8
Decimal - 16
Community
  • 1
  • 1