19

Is it worth it to implement it in hardware? If yes, why? If not, why not?


Sorry, I thought it was clear that I am talking about decimal rational numbers! OK, something like decNumber++ for C++, or decimal for .NET... Hope it is clear now :)

Jonathan Leffler
  • 730,956
  • 141
  • 904
  • 1,278
Khaled Alshaya
  • 94,250
  • 39
  • 176
  • 234
  • You might want to clarify what you mean by "decimal numbers". The fact that you capitalize Decimal brings to mind the .NET Decimal type, which is a 128-bit decimal floating-point number. But other BCD (Binary-Coded Decimal) implementations exist as well, and each has its own semantics. – Daniel Pryden Sep 18 '09 at 23:40
  • @Peter Mortensen: No, both the implementations the OP mentions are decimal floating-point, not fixed-point. – Daniel Pryden Sep 18 '09 at 23:53
  • @Peter Sorry for not being clear enough. I am talking about Floating Point Decimal, Thanks. – Khaled Alshaya Sep 19 '09 at 00:00
  • *decimal* in .NET is not fixed-point. 128 bits can't hold the range and precision it is holding unless it is a floating-point decimal number. Sorry if this comment is wrong, but this is what I understand. – Khaled Alshaya Sep 19 '09 at 00:05
  • @Peter Mortensen: No, the .NET System.Decimal is a floating-point type, with a 1-bit sign, a 96-bit integer mantissa, and a 31-bit integer exponent. That is the definition of a floating-point structure. The only difference is, IEEE-754 floats use a base-2 exponent, while System.Decimal uses a base-10 exponent. Source: http://msdn.microsoft.com/en-us/library/system.decimal(VS.80).aspx – Daniel Pryden Sep 19 '09 at 00:06
  • 2
    Microsoft `Decimal` data types - both .NET `System.Decimal` and COM Automation `VT_DECIMAL` - are _not_ fixed-point. – Pavel Minaev Sep 19 '09 at 00:07
  • Actually, decimal string arithmetic has some hardware support on every single PC, though it would make absolutely no discernible difference if such HW support had not been enshrined forever in the Instruction Set Architecture. See my answer below. – DigitalRoss Sep 19 '09 at 00:10
  • http://msdn.microsoft.com/en-us/library/aa164763(office.10,printer).aspx says: "The two scaled integer data types, Currency and Decimal, provide a high level of accuracy. These are also referred to as fixed-point data types.". Is that incorrect? – Peter Mortensen Sep 19 '09 at 00:10
  • @AraK: You're half right. 128 bits can hold plenty of range and precision if it's a floating point number, regardless of whether it's in decimal or binary floating point. With fixed point you get *either* range *or* precision -- you can't have both. – Daniel Pryden Sep 19 '09 at 00:11
  • @Peter Mortensen: Your link is a printer link, please don't post that. Your link also is specifically about Office XP. I don't know if Office XP uses something different from what the .NET Framework uses, but .NET definitely uses a floating-point decimal type. I would trust the MSDN documentation for `System.Decimal` more than I'd trust the Office XP documentation. – Daniel Pryden Sep 19 '09 at 00:14
  • 1
    Office `Decimal` most likely refers to COM Automation `VARIANT` type `VT_DECIMAL`. Even so, it is wrong. `VARIANT` decimals are defined via `struct DECIMAL` which is described here: http://msdn.microsoft.com/en-us/library/ms221061.aspx - as you can see, it's integer base + power of 10, which is effectively decimal floating-point – Pavel Minaev Sep 19 '09 at 00:19
  • @Daniel I was comparing the range and precision of floating-point vs. fixed-point DECIMAL numbers with respect to the 128-bit decimal type in .NET – Khaled Alshaya Sep 19 '09 at 00:23
  • @AraK: Point taken. Either way, the actual point of your comment was to illustrate that .NET `System.Decimal` is a floating-point type, which is absolutely correct, an error in the Office XP documentation notwithstanding. And what I said was also correct: floating-point is inherently a trade-off between range and precision. – Daniel Pryden Sep 19 '09 at 00:24
  • @Daniel I wasn't clear enough actually :) – Khaled Alshaya Sep 19 '09 at 00:26
  • Also, this question may be of interest: http://stackoverflow.com/questions/803225/when-should-i-use-double-instead-of-decimal – Daniel Pryden Sep 19 '09 at 00:47

12 Answers

19

The latest revision of the IEEE 754 standard (IEEE 754:2008) does indeed define decimal floating-point numbers, using the representations shown in the software referenced in the question. The previous version of the standard (IEEE 754:1985) did not provide decimal floating-point numbers. Most current hardware implements the 1985 standard and not the 2008 standard, but IBM's iSeries computers using the Power6 chip have such support, and so do the z10 mainframes.

The standardization effort for decimal floating point was spearheaded by Mike Cowlishaw of IBM UK, who has a web site full of useful information (including the software in the question). It is likely that in due course, other hardware manufacturers will also introduce decimal floating point units on their chips, but I have not heard a statement of direction for when (or whether) Intel might add one. Intel does have optimized software libraries for it.

The C standards committee is looking to add support for decimal floating point as well; that work is published as TR 24732.
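
As a rough illustration of what that support looks like from C, here is a minimal sketch assuming a GCC build that implements the TR 24732 decimal types (`_Decimal64` and the `DD` literal suffix are GCC extensions; they are not available with every compiler, and on x86 they are currently emulated in software rather than running on a DFP unit):

```c
#include <stdio.h>

int main(void)
{
    double     b = 0.1 + 0.2;        /* binary floating point  */
    _Decimal64 d = 0.1DD + 0.2DD;    /* decimal floating point */

    /* The binary sum picks up a rounding error; the decimal sum is exact. */
    printf("binary : 0.1 + 0.2 == 0.3 ? %s\n", (b == 0.3)   ? "yes" : "no");
    printf("decimal: 0.1 + 0.2 == 0.3 ? %s\n", (d == 0.3DD) ? "yes" : "no");
    return 0;
}
```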

Jonathan Leffler
  • 730,956
  • 141
  • 904
  • 1,278
  • +1 I think this answers my question the most, especially the mention of the new standard, which means we could see these chips soon after standardization, like what happened with binary floating-point numbers. – Khaled Alshaya Sep 20 '09 at 00:17
  • This is interesting. I wasn't aware of decimal floating point in IEEE 754:2008. However, the point still stands that decimal isn't inherently any better than binary floating point except in certain edge cases, so even when we get FPU's with built-in decimal floating point, you will still need to evaluate whether decimal or binary is better for your application. (I would expect that even with hardware support, binary floating point will likely still perform faster, although by a much smaller margin.) – Daniel Pryden Sep 20 '09 at 01:11
  • 3
    Decimal arithmetic is more easily predictable and benefits those applications where working with decimal data is a benefit. A primary beneficiary is accounting applications - unless you are the US Federal Government, you need to keep tabs on your spending accurately, and you run into far fewer edge cases if you use decimal numbers. (The 128-bit floating point decimal type can support even projected US budget deficits with accuracy - down to the fictitious penny if need so be.) – Jonathan Leffler Sep 20 '09 at 01:34
  • 1
    One of the reasons single and double-precision floating point numbers will stay more efficient is... That they're single or double-precision. If you want to compare their efficiency (and memory footprint) you'd need to compare decimals to 128-bit (quadruple-precision, I guess) floating point numbers - but if you did use FP numbers you'd probably only need single or double precision. So what I'm saying is that 128-bit decimal numbers, even with hardware acceleration, will probably still be slower than their binary floating-point alternative. – configurator Jun 02 '10 at 15:53
5

Some IBM processors have dedicated decimal hardware included (a Decimal Floating Point (DFP) unit).

To add to Daniel Pryden's answer (answered Sep 18 at 23:43):

The main reason is that DFP units need more transistors on a chip than BFP units. The cause is the BCD code used to calculate decimal numbers in a binary environment. IEEE 754-2008 has several methods to minimize this overhead. It seems that the DPD (http://en.wikipedia.org/wiki/Densely_packed_decimal) method is more effective in comparison to the BID (http://en.wikipedia.org/wiki/Binary_Integer_Decimal) method.

Normally, you need 4 bits to cover the decimal range from 0 to 9. But the values 10 to 15 are invalid, yet still encodable in BCD. Therefore, DPD compresses 3×4 = 12 bits into 10 bits, covering the range 000 to 999 (1000 values) within the 2^10 = 1024 possibilities.
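
A tiny sketch of that counting argument (this only shows why 10 bits suffice; it does not reproduce the actual DPD bit patterns, which are arranged so that individual digits stay easy to extract):

```c
#include <stdio.h>

int main(void)
{
    unsigned d2 = 9, d1 = 9, d0 = 9;              /* the three digits of "999" */

    /* Plain BCD: one digit per 4-bit nibble, so 3 digits cost 12 bits. */
    unsigned bcd = (d2 << 8) | (d1 << 4) | d0;    /* 0x999 */

    /* 000..999 is only 1000 values, and 2^10 = 1024 >= 1000, so a
       10-bit "declet" is enough -- that is the saving DPD exploits. */
    unsigned declet = d2 * 100 + d1 * 10 + d0;    /* 999, fits in 10 bits */

    printf("BCD:    0x%03X (12 bits)\n", bcd);
    printf("declet: %u    (10 bits)\n", declet);
    return 0;
}
```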

In general, BFP is faster than DFP, and BFP needs less space on a chip than DFP.

The question of why IBM implemented a DFP unit has a simple answer: they build servers for the finance market. If data represents money, it should be reliable.

With hardware-accelerated decimal arithmetic, some errors that occur in binary do not occur at all. For example, 1/5 = 0.2 becomes 0.001100110011001100110011... in binary, so such recurring fractions can be avoided.

And the ubiquitous ROUND() workaround in Excel would no longer be needed :D (try the formula =1*(0.5-0.4-0.1) - wtf!)
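
That Excel oddity is easy to reproduce with plain binary doubles; a quick sketch (the exact residue printed may vary by platform, but it is not zero):

```c
#include <stdio.h>

int main(void)
{
    /* 0.5 is exact in binary, but 0.4 and 0.1 are recurring fractions,
       so the intermediate roundings leave a tiny residue behind. */
    double r = 0.5 - 0.4 - 0.1;

    printf("0.5 - 0.4 - 0.1 = %.17g\n", r);   /* about -2.8e-17, not 0 */
    printf("equal to zero?    %s\n", (r == 0.0) ? "yes" : "no");

    /* The underlying cause: 1/5 cannot be represented exactly in binary. */
    printf("0.2 stored as     %.20f\n", 0.2);
    return 0;
}
```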

Hope that answers your question a little!

Charakterlos
  • 51
  • 1
  • 1
4

There is (a tiny bit of) decimal string acceleration, but...

This is a good question. My first reaction was "macro ops have always failed to prove out", but after thinking about it, what you are talking about would go a whole lot faster if implemented in a functional unit. I guess it comes down to whether those operations are done enough to matter. There is a rather sorry history of macro op and application-specific special-purpose instructions, and in particular the older attempts at decimal financial formats are just legacy baggage now. For example, I doubt if they are used much, but every PC has the Intel BCD opcodes, which consist of

DAA, AAA, AAD, AAM, DAS, AAS
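
For reference, `DAA` ("decimal adjust after addition") patches up the result of a plain binary `ADD` of two packed-BCD bytes. A rough C sketch of the same fix-up logic (illustrative only, and ignoring the flag bookkeeping of the real instruction):

```c
#include <stdio.h>

/* Add two packed-BCD bytes (two decimal digits each), mimicking the
   "+6 per out-of-range nibble" adjustment that DAA performs in hardware. */
unsigned bcd_add_byte(unsigned char a, unsigned char b)
{
    unsigned lo = (a & 0x0F) + (b & 0x0F);
    if (lo > 9)
        lo += 6;                 /* push past 0xF to generate a nibble carry */

    unsigned hi = (a >> 4) + (b >> 4) + (lo >> 4);
    if (hi > 9)
        hi += 6;

    return ((hi >> 4) << 8) | ((hi & 0x0F) << 4) | (lo & 0x0F);
}

int main(void)
{
    printf("0x%03X\n", bcd_add_byte(0x25, 0x17));  /* 25 + 17 = 42  -> 0x042 */
    printf("0x%03X\n", bcd_add_byte(0x99, 0x99));  /* 99 + 99 = 198 -> 0x198 */
    return 0;
}
```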

Once upon a time, decimal string instructions were common on high-end hardware. It's not clear that they ever made much of a benchmark difference. Programs spend a lot of time testing and branching and moving things and calculating addresses. It normally doesn't make sense to put macro-operations into the instruction set architecture, because overall things seem to go faster if you give the CPU the smallest number of fundamental things to do, so it can put all its resources into doing them as fast as possible.

These days, not even all the binary ops are actually in the real ISA. The CPU translates the legacy ISA into micro-ops at runtime. It's all part of going fast by specializing in core operations. For now, the left-over transistors seem to be waiting for some graphics and 3D work, e.g., MMX, SSE, 3DNow!

I suppose it's possible that a clean-sheet design might do something radical and unify the current (HW) scientific and (SW) decimal floating point formats, but don't hold your breath.

DigitalRoss
  • 143,651
  • 25
  • 248
  • 329
  • Very good point, though those codes aren't actually used by any floating-point decimal arithmetic implementation that I know of. – Pavel Minaev Sep 19 '09 at 00:11
  • 2
    True, but BCD strings aren't the way modern decimal types are implemented. For example, the .NET `System.Decimal` is a floating-point decimal structure with an exponent and mantissa, instead of a BCD string, which is usually implemented as fixed-point. – Daniel Pryden Sep 19 '09 at 00:17
  • Right, they are basically the same as extended floats, except that the exponent is 10^e rather than 2^e. I suppose I could improve the answer a little. – DigitalRoss Sep 19 '09 at 00:31
  • It might be worth pointing out that, until recently, there simply wasn't a well-defined standard on decimal floating-point, and so hardware support was lacking (on typical desktop & embedded architectures - on mainframes it has been there for ages, since COBOL arithmetic is decimal). Now that we have an IEEE decimal floating-point standard in form of IEEE 754-2008, and both C and C++ are coming up with TR to support that, I'm sure hardware support will be quick to follow. – Pavel Minaev Sep 19 '09 at 06:13
  • The reason the BCD operations aren't used by most floating point decimal libraries is that there is no way to access them from C, in particular - you have to drop into assembler to access the instructions. People avoid coding in assembler with good reason. Those involved in business-level calculations, in particular, want to avoid doing assembler work, not least because their code must be portable across as many platforms as possible. – Jonathan Leffler Sep 20 '09 at 00:27
  • My PC doesn't have the opcodes (unless in 32 bit mode): http://forum.osdev.org/viewtopic.php?f=1&t=23616. – maaartinus Apr 03 '14 at 06:32
  • There are also [`FBLD` and `FBSTP`](https://courses.engr.illinois.edu/ece390/archive/spr2002/books/labmanual/inst-ref-fbld.html) in x87 to load and store BCD numbers – phuclv Dec 06 '16 at 16:05
2

No, they are very memory-inefficient, and the calculations are not easy to implement in hardware either (of course it can be done, but it can also take a lot of time). Another disadvantage of the decimal format is that it's not widely used; the format was popular for a time, before research showed that binary-formatted numbers were more accurate, but now programmers know better. The decimal format isn't efficient and is more lossy. Also, additional hardware representations require additional instruction sets, which can lead to more difficult code.

Willem Van Onsem
  • 443,496
  • 30
  • 428
  • 555
  • 2
    +1 for "The decimal format isn't efficient and is more lossy". Most people think decimals are more precise, but they aren't. IMHO, Microsoft has only made this worse with the .NET System.Decimal type, since it has 128 bits to work with. *Of course* a 128-bit number will be more precise than a 64-bit number. But a 128-bit binary float would be *even more* precise than a 128-bit decimal. – Daniel Pryden Sep 19 '09 at 00:01
  • 5
    "Lossy" is an ambiguous term, so I'd not use it without specifying what you mean, exactly. What usually matters is that when user inputs `1.1` and `2.2` into your application, and ask to add them, the output is `3.3` - and not `3.29...`. In that sense, `decimal` is less lossy, and it is precisely this niche it is intended for. This goes for granted for any calculations involving money - never, ever use `float` or `double` for money! - but it equally applies to any case where you deal with decimal user input. – Pavel Minaev Sep 19 '09 at 00:09
  • Because humans use the decimal standard as their own, it "seems" that the decimal type is less lossy (perhaps indeed a bad word choice). But if you add 1/3 to, let's say, 1/7, you will notice that the double type will be more accurate (I haven't checked it, but I'm pretty sure that in most such cases the result is more accurate) – Willem Van Onsem Sep 19 '09 at 09:25
  • @DanielPryden That's why binary floating point is the best choice for measurements of real world things. Decimal floating point is mainly intended for when dealing with money or other abstract quantities that humans like to represent in decimal form, as it avoids rounding errors during decimal<->binary conversion. I agree that blindly using decimal floating point is a mistake, but so is blindly using binary floating point. – Craig Ringer Aug 25 '15 at 06:56
2

Decimals (and more generally, fractions) are relatively easy to implement as a pair of integers. General purpose libraries are ubiquitous and easily fast enough for most applications.
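
A minimal sketch of that pair-of-integers idea (names are made up for illustration; a real library would also guard against overflow and normalize more carefully):

```c
#include <stdio.h>

struct frac { long num, den; };               /* value = num / den */

static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }

static struct frac frac_add(struct frac x, struct frac y)
{
    struct frac r = { x.num * y.den + y.num * x.den, x.den * y.den };
    long g = gcd(r.num < 0 ? -r.num : r.num, r.den);
    if (g) { r.num /= g; r.den /= g; }        /* keep the pair reduced */
    return r;
}

int main(void)
{
    struct frac a = { 1, 10 }, b = { 2, 10 }; /* 0.1 + 0.2 */
    struct frac s = frac_add(a, b);
    printf("%ld/%ld\n", s.num, s.den);        /* 3/10, exactly */
    return 0;
}
```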

Anyone who needs the ultimate in speed is going to hand-tune their implementation (e.g. changing the divisor to suit a particular usage, algebraically combining/reordering the operations, clever use of SIMD shuffles...). Merely encoding the most common functions into a hardware ISA would surely never satisfy them -- in all likelihood, it wouldn't help at all.

Richard Berg
  • 20,629
  • 2
  • 66
  • 86
2

The hardware you want used to be fairly common.

Older CPUs had hardware BCD (binary-coded decimal) arithmetic. (The little Intel chips had a little support, as noted by earlier posters.)

Hardware BCD was very good at speeding up FORTRAN, which used 80-bit BCD for numbers.

Scientific computing used to make up a significant percentage of the worldwide market.

Since everyone (relatively speaking) got a home PC running Windows, that market became a tiny percentage. So nobody does it anymore.

Since you don't mind having 64-bit doubles (binary floating point) for most things, it mostly works.

If you use 128-bit binary floating point on modern hardware vector units, it's not too bad. Still less accurate than 80-bit BCD, but you get that.

At an earlier job, a colleague formerly from JPL was astonished we still used FORTRAN. "We've converted to C and C++," he told us. I asked him how they solved the problem of lack of precision. They hadn't noticed. (They also no longer have the same space-probe landing accuracy they used to have. But anyone can miss a planet.)

So, basically, 128-bit doubles in the vector unit are more okay, and widely available.

My twenty cents. Please don't represent it as a floating point number :)

Jonathan Leffler
  • 730,956
  • 141
  • 904
  • 1,278
Tim Williscroft
  • 3,705
  • 24
  • 37
2

The decimal floating-point standard (IEEE 754-2008) is already implemented in hardware by two companies: by IBM, in its POWER6/POWER7-based servers, and by SilMinds, in its SilAx PCIe-based acceleration card.

SilMinds published a case study about converting decimal arithmetic execution to use its hardware solutions. It reports a great boost in speed and sharply reduced energy consumption.

Moreover, several publications by Michael J. Schulte and others report very positive benchmark results, and some comparisons between the DPD and BID formats (both defined in the IEEE 754-2008 standard).

You can find pdfs to:

  1. Performance analysis of decimal floating-point libraries and its impact on decimal hardware and software solutions

  2. A survey of hardware designs for decimal arithmetic

  3. Energy and Delay Improvement via Decimal Floating Point Units

Those 3 papers should be more than enough for your questions!

Willem Van Onsem
  • 443,496
  • 30
  • 428
  • 555
Tarek Eldeeb
  • 588
  • 2
  • 6
  • 24
1

I speculate that there are no compute-intensive applications of decimal numbers. On the other hand, floating-point numbers are extensively used in engineering applications, which must handle enormous amounts of data and do not need exact results, only results that stay within a desired precision.

Roberto Bonvallet
  • 31,943
  • 5
  • 40
  • 57
  • Also extensively used in graphics, a GPU's efficiency comes from doing massive amounts of floating point operations that cover most of what it needs to do. – Nick Craver Sep 18 '09 at 23:37
  • 1
    Agreed. For most scientific purposes, the error in your calculation and/or observation methodology is several orders of magnitude greater than the error introduced by floating-point rounding. Real number-crunching performs best when it leverages the strengths of the underlying platform. For binary computers, computation using binary numbers is more efficient. – Daniel Pryden Sep 18 '09 at 23:38
1

The simple answer is that computers are binary machines. They don't have ten fingers, they have two. So building hardware for binary numbers is considerably faster, easier, and more efficient than building hardware for decimal numbers.

By the way: decimal and binary are number bases, while fixed-point and floating-point are mechanisms for approximating rational numbers. The two are completely orthogonal: you can have floating-point decimal numbers (.NET's System.Decimal is implemented this way) and fixed-point binary numbers (normal integers are just a special case of this).
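
A small sketch of those two orthogonal axes, with made-up names for illustration: fixed-point *binary* on one side, and floating-point *decimal* -- the same integer-coefficient-plus-power-of-ten shape as `System.Decimal` -- on the other:

```c
#include <stdio.h>
#include <stdint.h>

/* Fixed-point binary: value = raw / 2^8 (a "Q24.8" number). */
typedef int32_t fix8_t;
#define FIX8(x) ((fix8_t)((x) * 256))

/* Floating-point decimal: value = coeff * 10^exp. */
struct dec_fp { int64_t coeff; int exp; };

int main(void)
{
    fix8_t half = FIX8(0.5);                /* stored as the integer 128 */
    struct dec_fp price = { 1999, -2 };     /* 1999 * 10^-2 = 19.99 */

    printf("fixed-point binary : raw=%d -> %f\n", half, half / 256.0);
    printf("floating-point dec : %lld * 10^%d -> %.2f\n",
           (long long)price.coeff, price.exp,
           price.coeff / 100.0);            /* divide by 100 = apply 10^-2 */
    return 0;
}
```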

Daniel Pryden
  • 59,486
  • 16
  • 97
  • 135
  • I see your point that Floating-Point Numbers are more efficient and that is true, but they have different usage as I understand. – Khaled Alshaya Sep 18 '09 at 23:47
  • @AraK: "Different usage" how? The only thing that you can't do with binary numbers is perform math the way a banker would, where there is an arbitrary distinction between 0.01 having significance and 0.009 not being significant. And I would say you're probably better off using fixed point in such a case anyway. – Daniel Pryden Sep 18 '09 at 23:58
  • 2
    Humans deal with decimal numbers. Consequently, it's much easier to explain to a person why `1/3` will render as `1.33...`, than it is to explain to the same person why `1.3` quietly becomes `1.29999`. – Pavel Minaev Sep 19 '09 at 00:12
  • On the whole though this is (surprisingly) the most precise answer so far. All modern computer architectures are binary, therefore it's easier to work with binary numbers, whether integers or floating-point. Duh. – Pavel Minaev Sep 19 '09 at 00:20
  • @Pavel Minaev: You're completely correct. However, whether that matters depends on your application. Most of the time I'm doing number crunching, it's for scientific applications where users are quite accustomed to seeing 1.2999 (or, even better: 1.3 ± 0.001). – Daniel Pryden Sep 19 '09 at 00:20
  • 1
    Yep, it definitely depends on the task at hand. For similar reasons games (and, in general, any layout/rendering code - e.g. WPF or GDI+) uses floating-point, and often single-precision one at that. – Pavel Minaev Sep 19 '09 at 06:14
0

Floating point math essentially IS an attempt to implement decimals in hardware. It's troublesome, which is why the Decimal types are partly implemented in software. It's a good question why CPUs don't support more types, but I suppose it comes back to CISC vs. RISC processors -- RISC won the performance battle, so they try to keep things simple these days, I guess.

Lee B
  • 2,137
  • 12
  • 16
  • 1
    `decimal` is itself floating-point, it's just decimal floating-point, while `float` and `double` are binary floating-point. – Pavel Minaev Sep 19 '09 at 00:10
  • Generally speaking, perhaps. But when most people (including myself) talk of floating point in computers, they're talking about the IEEE standard floating point specification, as implemented in modern processors, not simply a number with a point in it, that has a variable number of significant digits. – Lee B Sep 19 '09 at 01:32
  • FYI, decimal floating-point is also an IEEE standard (IEEE 754-2008). – Pavel Minaev Sep 19 '09 at 06:15
0

Modern computers are usually general purpose. Floating point arithmetic is very general purpose, while Decimal has a far more specific purpose. I think that's part of the reason.

Joren
  • 14,472
  • 3
  • 50
  • 54
-1

Do you mean the typical numeric integral types "int", "long", "short" (etc.)? Because operations on those types are definitely implemented in hardware. If you're talking about arbitrary-precision large numbers ("BigNums" and "Decimals" and such), it's probably a combination of rarity of operations using these data types and the complexity of building hardware to deal with arbitrarily large data formats.

Mike Daniels
  • 8,582
  • 2
  • 31
  • 44