
I know the basic definitions of base-2 and base-10, but I wonder what the difference between them is for the performance and speed of a program.

For example, in C# the data type double is base-2 and the data type decimal is base-10, yet double is very fast in calculations while decimal is up to 10x slower than double.

I don't understand why this is, so could anyone please explain it to me? Thanks in advance :)
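
For instance, this is the kind of rough Stopwatch micro-benchmark I have in mind (the loop body and iteration count are just a sketch; absolute timings will vary by machine and JIT):

```csharp
using System;
using System.Diagnostics;

class DoubleVsDecimal
{
    static void Main()
    {
        const int iterations = 10000000;

        // Multiply-add loop with double (base-2).
        double d = 1.0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            d = d * 1.0000001 + 0.0000001;
        sw.Stop();
        Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms (result {d})");

        // The same loop with decimal (base-10).
        decimal m = 1.0m;
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            m = m * 1.0000001m + 0.0000001m;
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (result {m})");
    }
}
```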

mohamed_abdullah
  • Binary operations in binary PCs are obviously much faster than in other bases, just like a human who thinks in base 10 will be bad at binary arithmetic. Moreover, there are often no instructions for decimal operations, so they must be simulated in software, which is even slower. Possible duplicate: [Decimal vs Double Speed](https://stackoverflow.com/q/329613/995714) – phuclv Jun 15 '17 at 16:38
  • I am old enough to remember the days when binary floating point computations were done in software rather than in highly optimized hardware, and guess what: binary floating point computations used to be slow back in the day. – Eric Lippert Jun 15 '17 at 17:00

1 Answer


The reason for the performance difference is not the difference in numeric base itself, but rather the availability of hardware assistance for performing the computations.

double in .NET follows the IEEE-754 binary floating-point standard, which means there is plenty of hardware assistance available for double computations on most platforms.

Decimal floating-point representation, by contrast, is relatively recent (the standard is less than ten years old), so hardware assistance for it is still rather limited and the operations have to be carried out in software. This may change in the future, but for now it means that computations in decimal require many more CPU cycles.
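
As a rough illustration (the exact instructions are a JIT detail and differ between platforms), the same expression written with the two types ends up on very different code paths:

```csharp
using System;

class CodePaths
{
    static void Main()
    {
        double xd = 0.1, yd = 0.2;
        // With double, '+' typically JIT-compiles to a single hardware
        // instruction (for example addsd on x86-64).
        double sumD = xd + yd;

        decimal xm = 0.1m, ym = 0.2m;
        // With decimal, '+' compiles to a call into System.Decimal's
        // software routine, which manipulates the 96-bit integer mantissa,
        // sign and scale using ordinary integer instructions.
        decimal sumM = xm + ym;

        Console.WriteLine($"{sumD} / {sumM}");
    }
}
```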

Sergey Kalinichenko
  • Because of its binary format and other restrictions, IEEE-754 can support useful precision over a very wide range in a 64-bit packet. That makes it practical to do its arithmetic as register-to-register rather than memory-to-memory. Decimal is inherently less compact, and to get its full benefit you need bigger variable sizes, making registers much less feasible. – Patricia Shanahan Jun 15 '17 at 18:01
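
To put numbers on the size point from the comment above (the sizes are fixed by the C# language; whether a value actually stays in a register is a JIT implementation detail):

```csharp
using System;

class Sizes
{
    static void Main()
    {
        // double is a 64-bit value, small enough to live in a single
        // floating-point register during a computation.
        Console.WriteLine($"sizeof(double)  = {sizeof(double)} bytes");   // 8

        // decimal is a 128-bit struct (96-bit integer mantissa, sign, scale),
        // so values are typically shuttled through memory and helper routines.
        Console.WriteLine($"sizeof(decimal) = {sizeof(decimal)} bytes");  // 16
    }
}
```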