3

I noticed that on Windows and Linux x86, float is a 4-byte type and double is 8 bytes, but long double is 12 bytes on x86 and 16 on x86_64. C99 was supposed to break down such barriers with its specific integral sizes.

The initial technological limitation appears to be that the x86 processor can't handle more than 80-bit floating-point operations (plus padding bytes to round the size up), but why the inconsistency in the standard compared to the int types? Why don't they standardize on at least 80 bits?
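For reference, here is roughly how I'm checking the sizes (the exact output obviously depends on the compiler and ABI):

    #include <stdio.h>

    int main(void)
    {
        /* With GCC this typically prints 4 / 8 / 12 on 32-bit x86 and
           4 / 8 / 16 on x86_64; the bytes beyond the 80-bit value are
           alignment padding. */
        printf("float:       %zu\n", sizeof(float));
        printf("double:      %zu\n", sizeof(double));
        printf("long double: %zu\n", sizeof(long double));
        return 0;
    }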

phuclv
  • 37,963
  • 15
  • 156
  • 475
j riv
  • 3,593
  • 6
  • 39
  • 54
  • This question http://stackoverflow.com/questions/271076/what-is-the-difference-between-an-int-and-a-long-in-c/271132#271132 is about C++ but it holds for C. It shows that the integer types are not standardized even in C: The standard deliberately gives compiler wiggle room so that the implementation can be as fast as possible. – Martin York Aug 10 '10 at 09:15

4 Answers

6

The C language doesn't specify the implementation of various types, so that it can be efficiently implemented on as wide a variety of hardware as possible.

This extends to the integer types too - the C standard integral types have minimum ranges (e.g. signed char is -127 to 127, short and int are both -32,767 to 32,767, long is -2,147,483,647 to 2,147,483,647, and long long is -9,223,372,036,854,775,807 to 9,223,372,036,854,775,807). For almost all purposes, this is all that the programmer needs to know.

C99 does provide "fixed-width" integer types, like int32_t - but these are optional: if the implementation has no type that fits the exact description (exactly 32 bits, two's complement, no padding), it doesn't have to provide it.

For floating point types, there are equivalent limits (e.g. double must have at least 10 decimal digits' worth of precision).
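As a rough illustration, you can compare those guaranteed minimums against what your implementation actually gives you via <limits.h> and <float.h>:

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void)
    {
        /* The standard only pins down minimum guarantees; the actual
           values printed here differ between implementations. */
        printf("INT_MAX  = %d  (at least 32767)\n", INT_MAX);
        printf("LONG_MAX = %ld (at least 2147483647)\n", LONG_MAX);
        printf("DBL_DIG  = %d  (at least 10 decimal digits)\n", DBL_DIG);
        printf("LDBL_DIG = %d  (at least 10 as well, often more)\n", LDBL_DIG);
        return 0;
    }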

caf
  • 233,326
  • 40
  • 323
  • 462
3

They were trying to (mostly) accommodate pre-existing C implementations, some of which don't even use IEEE floating point formats.

Darron
  • 21,309
  • 5
  • 49
  • 53
  • +1 The standard specifies the precision of the types, not how many bits it takes to store that level of precision (floating-point numbers can be implemented in many different ways). – bta Aug 09 '10 at 22:12
  • The standard does not specify the precision, and even an implementation where all floats get rounded to 0 would probably be conformant. It does however recommend IEEE precision **and** format. – R.. GitHub STOP HELPING ICE Aug 10 '10 at 04:56
1

ints can be used to represent abstract things like IDs, colors, error codes, requests, etc. In those cases ints are not really used as integer numbers but as sets of bits (i.e. a container). Most of the time a programmer knows exactly how many bits he needs, so he wants to be able to use just as many bits as needed.

floats on the other hand are designed for a very specific usage (floating-point arithmetic). You are very unlikely to be able to say precisely how many bits you need for your float. Actually, most of the time the more bits you have, the better it is.
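A small sketch of that contrast (pack_rgba is just an illustrative helper): the integer is sized exactly for the bits it has to hold, while for the floating-point value you simply take whatever precision the type offers:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* An int used as a "container": exactly 32 bits holding four 8-bit channels. */
    static uint32_t pack_rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
    {
        return ((uint32_t)r << 24) | ((uint32_t)g << 16) |
               ((uint32_t)b << 8)  |  (uint32_t)a;
    }

    int main(void)
    {
        uint32_t pixel = pack_rgba(0x12, 0x34, 0x56, 0xFF);

        /* A float used for arithmetic: there is no "exact" width to ask for,
           you just use the most precise type you can afford. */
        double area = 3.141592653589793 * 2.5 * 2.5;

        printf("pixel = 0x%08" PRIX32 ", area = %f\n", pixel, area);
        return 0;
    }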

Ben
  • 7,372
  • 8
  • 38
  • 46
  • This is true as long as you **know** the number of bits of precision. Often I find I need to know this in order to choose a large power of 2 to add/subtract to round to a specific number of binary places. – R.. GitHub STOP HELPING ICE Aug 10 '10 at 04:57
1

C99 is supposed to be breaking such barriers with the specific integral sizes.

No, those fixed-width (u)intN_t types are completely optional, because not all processors use type sizes that are a power of 2. C99 only requires (u)int_fastN_t and (u)int_leastN_t to be defined. That means the premise "why the inconsistency in the standard compared to int types" is just plain wrong, because there's no consistency in the sizes of the int types either.
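A minimal sketch of what that means in practice: on a C99 hosted implementation the least/fast variants are always available, while the exact-width type has to be tested for (checking INT32_MAX is the usual feature test):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int_least32_t counter = 100000;  /* always available in C99 */
        int_fast32_t  index   = 42;      /* always available in C99 */

    #ifdef INT32_MAX
        /* int32_t only exists if the platform has a 32-bit,
           two's-complement type with no padding bits. */
        int32_t exact = 123;
        printf("int32_t exists, value = %d\n", (int)exact);
    #else
        printf("no exact-width int32_t on this platform\n");
    #endif

        printf("least32 = %ld, fast32 = %ld\n", (long)counter, (long)index);
        return 0;
    }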

Lots of modern DSPs use a 24-bit word for 24-bit audio. There are even 20-bit DSPs like the Zoran ZR3800x family, and 28-bit DSPs like the ADAU1701 which allow transformation of 16/24-bit audio without clipping. Many 32 or 64-bit architectures also have some odd-sized registers to allow accumulation of values without overflow, for example the TI C5500/C6000 with a 40-bit long and the SHARC with an 80-bit accumulator. The Motorola DSP5600x/3xx series also has odd sizes: 2-byte short, 3-byte int, 6-byte long. In the past there were lots of architectures with other word sizes like 12, 18, 36 or 60 bits, and lots of CPUs that used one's complement or sign-magnitude. See Exotic architectures the standards committees care about.

C was designed to be flexible enough to support all kinds of such platforms. Specifying a fixed size, whether for integer or floating-point types, defeats that purpose. Floating-point support in hardware varies wildly, just like integer support. There are different formats that use decimal, hexadecimal or possibly other bases. Each format has different sizes for the exponent/mantissa, different positions for the sign/exponent/mantissa fields, and even different sign representations. For example some use two's complement for the mantissa, while others use two's complement for the exponent or for the whole floating-point value. You can see many formats here, but that's obviously not every format that ever existed. For example the SHARC above has a special 40-bit floating-point format. Some platforms also use double-double arithmetic for long double. See also

That means you can't standardize a single floating-point format for all platforms, because there's no one-size-fits-all solution. If you're designing a DSP then obviously you need a format that's best for your purpose, so that you can churn through as much data as possible. There's no reason to use IEEE-754 binary64 when a 40-bit format has enough precision for your application, fits better in cache and needs far less die area. Or if you're on a small embedded system, an 80-bit long double is usually useless, as you don't even have enough ROM for the 80-bit long double library. That's why some platforms limit long double to 64 bits, the same as double.
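A quick way to see this variation (the figures in the comment are only typical examples; what you actually get depends on the compiler and target):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* Typical results: 32-bit x86 Linux: 12 bytes, 64-bit mantissa
           (80-bit x87 format); x86_64: 16 bytes, same format; MSVC: 8 bytes
           (long double == double); PowerPC double-double: 16 bytes,
           106-bit mantissa. */
        printf("sizeof(long double) = %zu\n", sizeof(long double));
        printf("LDBL_MANT_DIG = %d (mantissa digits in base FLT_RADIX)\n",
               LDBL_MANT_DIG);
        printf("LDBL_DIG = %d decimal digits\n", LDBL_DIG);
        return 0;
    }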

phuclv
  • 37,963
  • 15
  • 156
  • 475