12

I know BCD is a more intuitive datatype than binary if you don't know binary, but I don't see why anyone would use this encoding. It seems wasteful: each digit takes 4 bits, yet the codes above 9 are never used.

Also, I think x86 only supports BCD addition and subtraction directly (you can convert BCD to/from binary via the FPU).

Is it possible that this comes from old machines, or from other architectures?

Peter Cordes
llazzaro

10 Answers

12

BCD arithmetic is useful for exact decimal calculations, which is often a requirement for financial applications, accountancy, etc. It also makes things like multiplying/dividing by powers of 10 easier. These days there are better alternatives.

There's a good Wikipedia article which discusses the pros and cons.
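
For instance, here is a minimal C sketch (the function name and the 8-digit width are just illustrative) of why scaling by powers of 10 is easy: multiplying a packed-BCD value by 10 is a single nibble shift.

```c
#include <stdint.h>
#include <stdio.h>

/* Multiply an 8-digit packed-BCD value by 10.  Each nibble holds one
 * decimal digit, so shifting left by 4 bits moves every digit up one
 * decimal place (the top digit is lost if the result needs 9 digits). */
static uint32_t bcd_mul10(uint32_t bcd)
{
    return bcd << 4;
}

int main(void)
{
    uint32_t n = 0x00012345;                  /* packed BCD for 12345 */
    printf("%08X\n", (unsigned)bcd_mul10(n)); /* prints 00123450, i.e. 123450 */
    return 0;
}
```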

Paul R
  • "better alternatives"? I would build a C++ `BigDecimal` type on the hardware's BCD -- it would sure be fast if you did it that way. I'm not sure what would be "better" than using the hardware datatype. – S.Lott Mar 01 '10 at 22:26
  • I doubt modern x86 CPUs have optimized BCD implementations - they are probably implemented as microcode with a focus on compatibility, not performance. – Michael Mar 01 '10 at 22:28
  • IBM has hardware support for DECFLOAT in its POWER 6 CPUs. – Paul R Mar 01 '10 at 22:30
11

BCD is useful at the very low end of the electronics spectrum, when the value in a register is displayed by some output device. For example, say you have a calculator with several seven-segment displays that show a number. It is convenient if each display digit is controlled by its own group of bits.

It may seem implausible that a modern x86 processor would be used in a device with these kinds of displays, but x86 goes back a long way, and the ISA maintains a great deal of backward compatibility.
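
A rough C sketch of that idea, assuming a hypothetical pair of memory-mapped display registers and the usual seven-segment encodings: with packed BCD, each displayed digit is simply one nibble of the value, so no division by 10 is needed.

```c
#include <stdint.h>

/* Segment patterns for digits 0-9 on a common-cathode seven-segment
 * display (bit 0 = segment a ... bit 6 = segment g). */
static const uint8_t SEG_TABLE[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

/* With packed BCD, each displayed digit is just one nibble of the value,
 * so splitting the number into digits needs no division by 10. */
void show_two_digits(uint8_t packed_bcd,
                     volatile uint8_t *left_display,   /* hypothetical registers */
                     volatile uint8_t *right_display)
{
    *left_display  = SEG_TABLE[(packed_bcd >> 4) & 0x0F];
    *right_display = SEG_TABLE[packed_bcd & 0x0F];
}
```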

Jay Conrod
7

BCD exists in modern x86 CPUs because it was in the original 8086 processor, and all x86 CPUs are 8086 compatible. BCD operations in x86 were used to support business applications way back then. BCD support in the processor itself isn't really used anymore.

Note that BCD is an exact representation of decimal numbers, which floating point is not, and that implementing BCD in hardware is far simpler than implementing floating point. These sort of things mattered more back when processors had less than a million transistors that ran at a few megahertz.
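
As a rough illustration of how little hardware this takes, here is a C sketch approximating what DAA does after a binary ADD of two packed-BCD bytes (it recomputes the nibble carry itself rather than reading the AF/CF flags, so it is a simplification, not the exact instruction semantics):

```c
#include <stdint.h>

/* Add two packed-BCD bytes (two decimal digits each) and apply the same
 * kind of fix-up DAA performs after a binary ADD: if a nibble exceeds 9
 * or produced a nibble carry, add 6 to push it back into the 0-9 range.
 * *carry receives the decimal carry out of the high digit. */
uint8_t bcd_add8(uint8_t a, uint8_t b, int *carry)
{
    unsigned sum = a + b;

    if ((sum & 0x0F) > 9 || ((a & 0x0F) + (b & 0x0F)) > 0x0F)
        sum += 0x06;                     /* fix the low digit */

    *carry = 0;
    if ((sum & 0x1F0) > 0x90) {          /* high digit (or carry out) too big */
        sum += 0x60;                     /* fix the high digit */
        *carry = 1;
    }
    return (uint8_t)sum;
}

/* Example: bcd_add8(0x38, 0x49, &c) returns 0x87 with c == 0 (38 + 49 = 87). */
```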

Nubok
Michael
  • @Michael: I don't recall the x86 instructions for BCD. Can you remind me, please? – John Saunders Mar 02 '10 at 02:29
  • @John, I can think of DAA, DAS (Decimal Adjust [after] Addition / Subtraction). There may be a few others; it's been a while since I played with that ;-) – mjv Mar 02 '10 at 07:03
  • @mjv: Thanks. I had totally forgotten about those. I barely remember even having seen an example of using those - and that wasn't a real-world example. – John Saunders Mar 02 '10 at 08:11
6

BCD is space-wise wasteful, that's true, but it has the advantage of being a "fixed pitch" format, making it easy to find the nth digit in a particular number.

Another advantage is that it allows for exact arithmetic calculations on arbitrary-size numbers. Also, thanks to the "fixed pitch" characteristic mentioned above, such arithmetic operations can easily be chunked into multiple threads (parallel processing).
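
A small C sketch of the "fixed pitch" point (the digit-numbering convention is just an assumption for the example): fetching the nth decimal digit of an arbitrarily long packed-BCD number is a constant-time nibble extraction, with no division by powers of 10.

```c
#include <stdint.h>
#include <stddef.h>

/* With packed BCD ("fixed pitch": one digit per nibble), digit n of an
 * arbitrarily long number is a constant-time lookup.  Convention for
 * this sketch: digits are numbered from the least-significant end and
 * bcd[0] holds the two lowest digits. */
unsigned bcd_digit(const uint8_t *bcd, size_t n)
{
    uint8_t byte = bcd[n / 2];
    return (n & 1) ? (byte >> 4) : (byte & 0x0F);
}
```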

mjv
  • Exact arithmetic calculations on arbitrary-size numbers can also easily be done with normal binary numbers. – Nubok Nov 24 '12 at 02:43
  • @nubok: With BCD, one can read in an arbitrary-sized decimal-formatted value, perform calculations on it, and write it out in decimal format, all in a constant time per digit. If the arbitrary-size number format stores things using a power-of-two base, the per-digit time required to convert to/from decimal format will increase as N gets larger. – supercat Nov 23 '13 at 21:37
  • @nubok: that is true as long as you are working with integers. But if you move to real numbers, binary representation sucks; that is what you call Float. – karatedog Jan 19 '14 at 20:47
  • Not just converting. If you use a base-2 (big) integer to represent a (big) decimal type, you'll soon find that rounding is _incredibly_ slow—you have to divide by some large power of 10. So, what ends up happening is an addition might be quick, but it still needs to be rounded down to the desired precision which means your addition is really a small add _and_ a large divide. Quite slow. BCD doesn't have this problem. – Eric Lagergren Feb 09 '18 at 08:04
  • The multi-threading argument isn't convincing, unless you mean something requiring access to the decimal digits of a number (which is horribly slow for extended-precision binary BigInts). If you can multi-thread BCD addition or multiplication or whatever, you can also multi-thread over 32 or 64-bit chunks of a binary biginteger. (But usually you can't multithread or even SIMD because you need carry propagation when adding.) – Peter Cordes Nov 11 '20 at 00:49
5

I think BCD is useful for many things, for the reasons given above. One thing that is sort of obvious but seems to have been overlooked is an instruction to go from binary to BCD and back. This could be very useful in converting an ASCII number to binary for arithmetic.

One of the posters was wrong about numbers often being stored in ASCII; actually a lot of number storage is done in binary because it's more efficient. And converting ASCII to binary is a little complicated. BCD is sort of a go-between for ASCII and binary: if there were bcdtoint and inttobcd instructions, they would make such conversions really easy. All ASCII values must be converted to binary for arithmetic, so BCD is actually useful in that ASCII-to-binary conversion.
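
A plain C sketch of that conversion path (bcdtoint/inttobcd are the poster's hypothetical instructions, not real ones; this just shows the two steps in software): ASCII to unpacked BCD is a per-digit subtraction, and the remaining BCD-to-binary step is the part such instructions would accelerate.

```c
#include <stdint.h>

/* ASCII decimal digits and BCD differ only by the 0x30 bias, so the
 * ASCII -> BCD step is a per-digit subtraction; the costlier step is
 * BCD -> binary, which is what a bcdtoint-style instruction would
 * accelerate.  Assumes a valid decimal string that fits in 32 bits. */
uint32_t ascii_to_binary_via_bcd(const char *s)
{
    uint32_t value = 0;
    while (*s) {
        uint8_t digit = (uint8_t)(*s++ - '0');  /* ASCII -> unpacked BCD */
        value = value * 10 + digit;             /* accumulate into binary */
    }
    return value;
}
```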

David
4

Nowadays, it's common to store numbers in binary format, and convert them to decimal format for display purposes, but the conversion does take some time. If the primary purpose of a number is to be displayed, or to be added to a number which will be displayed, it may be more practical to perform computations in a decimal format than to perform computations in binary and convert to decimal. Many devices with numerical readouts, and many video games, stored numbers in packed BCD format, which stores two digits per byte. This is why many score counters overflow at 1,000,000 points rather than some power-of-two value. If hardware did not facilitate packed-BCD arithmetic, the alternative would not be to use binary, but to use unpacked decimal. Converting packed BCD to unpacked decimal at the moment it's displayed can easily be done a digit at a time. Converting binary to decimal, by contrast, is much slower, and requires operating on the entire quantity.
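
For example, here is a C sketch (the 6-digit layout is an assumption, not a standard) of the packed-BCD-to-display conversion described above: each nibble becomes one character, one digit at a time, with no division.

```c
#include <stdint.h>

/* Turn a 6-digit packed-BCD score (most-significant byte first) into
 * ASCII for display: each nibble becomes one character, no division
 * or remainder needed.  buf must hold at least 7 bytes. */
void bcd_score_to_ascii(const uint8_t score[3], char buf[7])
{
    for (int i = 0; i < 3; i++) {
        buf[2 * i]     = (char)('0' + (score[i] >> 4));
        buf[2 * i + 1] = (char)('0' + (score[i] & 0x0F));
    }
    buf[6] = '\0';
}
```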

Incidentally, the 8086 instruction set is the only one I've seen with instructions for "ASCII Adjust for Division" and "ASCII Adjust for Multiplication", one of which multiplies a byte by ten and the other of which divides by ten. Curiously, the value "0A" is part of the machine instructions, and substituting a different number will cause those instructions to multiply or divide by other quantities, but the instructions are not documented as being general-purpose multiply/divide-by-constant instructions. I wonder why that feature wasn't documented, given that it could have been useful?
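
In C terms, the effect of those two instructions looks roughly like this, with the normally fixed 0x0A immediate exposed as a parameter to mirror the undocumented behaviour described above (the function names are just illustrative):

```c
#include <stdint.h>

/* AAM imm8: split AL into a quotient (AH) and remainder (AL) by the
 * immediate, which is 0x0A in the documented encoding. */
void aam_emulate(uint8_t *al, uint8_t *ah, uint8_t base)
{
    *ah = *al / base;
    *al = *al % base;
}

/* AAD imm8: fold AH and AL back together as AH * imm8 + AL. */
void aad_emulate(uint8_t *al, uint8_t *ah, uint8_t base)
{
    *al = (uint8_t)(*ah * base + *al);
    *ah = 0;
}
```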

It's also interesting to note the variety of approaches processors used for adding or subtracting packed BCD. Many perform a binary addition but use a flag to keep track of whether a carry occurred from bit 3 to bit 4 during an addition; they may then expect code to clean up the result (e.g. PIC), supply an opcode to clean up addition but not subtraction, supply one opcode to clean up addition and another for subtraction (e.g. x86), or use a flag to track whether the last operation was addition or subtraction and use the same opcode to clean up both (e.g. Z80). Some use separate opcodes for BCD arithmetic (e.g. 68000), and some use a flag to indicate whether add/subtract operations should use binary or BCD (e.g. 6502 derivatives). Interestingly, the original 6502 performs BCD math at the same speed as binary math, but CMOS derivatives of it require an extra cycle for BCD operations.

supercat
  • In x87 there are also [`FBLD` and `FBSTP`](https://courses.engr.illinois.edu/ece390/archive/spr2002/books/labmanual/inst-ref-fbld.html) to load and store BCD numbers. Some modern architectures have hardware support for decimal float, like POWER6 and IBM z10. – phuclv Dec 06 '16 at 16:10
2

I'm sure the Wiki article linked to earlier goes into more detail, but I used BCD on IBM mainframe programming (in PL/I). BCD not only guaranteed that you could look at particular areas of a byte to find an individual digit - which is useful sometimes - but also allowed the hardware to apply simple rules to calculate the required precision and scale for e.g. adding or multiplying two numbers together.

As I recall, I was told that on mainframes, support for BCD was implemented in hardware and, at that time, was our only option for representing floating point numbers. (We're talking 18 years ago here!)

Nij
  • Some floating-point formats for the 6502 were decimal based. MOS technologies' KIMath software (published in print form) used unpacked decimal mantissa with binary exponent for calculation and packed decimal for storage. – supercat Nov 27 '13 at 05:12
  • I used BCD to integrate PC and mainframe systems at that time. – olivecoder Aug 28 '15 at 11:03
1

When I was in college over 30 years ago, I was told the reasons why BCD (COMP-3 in COBOL) was a good format.

None of those reasons are still relevant with modern hardware. We have fast, binary fixed point arithmetic. We no longer need to be able to convert BCD to a displayable format by adding an offset to each BCD digit. We rarely store numbers as eight bits per digit, so the fact that BCD only takes four bits per digit isn't very interesting.

BCD is a relic, and should be left in the past, where it belongs.

John Saunders
0

Modern computing has emphasized coding that captures the design logic rather than optimizing a few CPU cycles here or there. The value of the time and/or memory saved often isn't worth writing special bit-level routines.

That being said, BCD is still occasionally useful.

The one example I can think of is when you have huge database flatfiles or other big data in an ASCII format like CSV. BCD is awesome if all you're doing is looking for values between some limits. Converting all of the values as you scan that much data would greatly increase processing time.
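
A C sketch of that kind of scan, assuming fixed-width, zero-padded, unsigned decimal fields (the 10-character width is hypothetical): digit-coded values, whether ASCII or BCD, compare correctly as raw bytes, so a range filter needs no conversion to binary at all.

```c
#include <string.h>

/* Range test on a fixed-width, zero-padded decimal field without
 * converting it to binary: lexicographic byte order matches numeric
 * order for such fields, whether the digits are ASCII or BCD. */
int field_in_range(const char field[10], const char lo[10], const char hi[10])
{
    return memcmp(field, lo, 10) >= 0 && memcmp(field, hi, 10) <= 0;
}
```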

VoteCoffee
-1

Very few humans can make sense of amounts expressed in hex, so it is useful to show, or at least allow viewing, intermediate results in decimal, especially in the financial or accounting world.

Jeroen Vannevel
  • remember that the question is about an assembler "datatype". At the low level it doesn't make much sense to use BCD, and that's the core of my question. Today most software development is done with high-level languages that will display information in human-readable form. – llazzaro Mar 01 '14 at 17:43
  • I see it more as a question of firmware. You cannot sit in front of a machine and wait more than 3 seconds for an answer. In the end it is the response that the end user gets that really matters. – user3237507 Mar 01 '14 at 20:10
  • Modern computers take only a few tens of nanoseconds to convert a binary integer to a string of ASCII decimal digits. Storing a number in binary doesn't constrain how you display it to the user. (BCD makes conversion to decimal ASCII very cheap, but makes calculation much more expensive for all but the tiniest integers, on modern computers with very fast binary add/sub/mul/div.) – Peter Cordes Nov 11 '20 at 00:54