10

I'm curious as to why the IEEE calls a 32-bit floating-point number single precision. Was it just a means of standardization, or does 'single' actually refer to a single 'something'?

Is it simply a standardized level? As in, precision level 1 (single), precision level 2 (double), and so on? I've searched all over and found a great deal about the history of floating-point numbers, but nothing that quite answers my question.

phuclv
Keith Grout
    My guess is that it has roots in hardware, with some system that had 32-bit registers. So a "single precision" float would fit in one register, while a "double precision" float would require two registers. But it's only a guess -- I'd be interested to see what the actual answer is. – Daniel Pryden Jul 19 '13 at 21:09
  • Upvoting 'cause this was a fun question. – Dale Wilson Jul 19 '13 at 21:41
  • 2
    It occupies one 32-bit word, and did so on classical word architectures such as S/360. On other architectures it may have differed. (Eg, on IBM 70xx it occupied one 36-bit word). "Double", of course, occupies two words. – Hot Licks Jul 19 '13 at 21:50
  • 1
    I've often wished they were called "half precision" and "full precision". 32-bit seems to me to be over-used, given its very limited precision and modern memory sizes and floating point hardware. – Patricia Shanahan Jul 19 '13 at 22:11
  • 1
    @PatriciaShanahan Actually, when trying to find this answer, I found that IEEE already uses the term half-precision for 16 bit floats. – Keith Grout Jul 19 '13 at 22:15
  • 1
    @PatriciaShanahan - Actually, "single precision" IEEE float provides more than enough precision for the vast majority of applications. Plenty for calculating pixel location on this screen, plenty for expressing, eg, temperature, wind speed, et al in a weather report, plenty for most of the measurements in an automobile design, plenty for the measurements of a house. It's really only when you get into certain mathematical calculations (eg, matrix inversion) or perhaps Mars mission trajectories that more precision is needed. – Hot Licks Jul 20 '13 at 00:13

4 Answers

11

On the machine I was working on at the time, a float occupied a single 36-bit register. A double occupied two 36-bit registers. The hardware had separate instructions for operating on the one-register and two-register versions of the number. I don't know for certain that that's where the terminology came from, but it's possible.

Dale Wilson
  • As a sidelight, Bell Labs (home of unix & C) had several machines from the same family. – Dale Wilson Jul 19 '13 at 21:11
  • 3
    So the internet being the wonderful place that it is, I just found an image of the assembly language pocket guide for that system: http://www.trailingedge.com/misc/GCOS-GMAP-PocketGuide.pdf I note that in there the two-register operations have a leading "D" for double, i.e. FAD is single-precision floating add, DFAD is double-precision floating add. Yes, they do use the "double" terminology. – Dale Wilson Jul 19 '13 at 21:38
  • The oldest commercial machine I could find that supported "scientific" numbers was the IBM 700/7000 series. The 7094 introduced "double precision" scientific numbers. http://en.wikipedia.org/wiki/IBM_700/7000_series – Dale Wilson Jul 19 '13 at 21:47
1

In addition to the hardware view, on most systems the 32-bit format was used to implement the Fortran "real" type, and the 64-bit format to implement the Fortran "double precision" type.

Patricia Shanahan
0

I think it just refers to the number of bits used to represent the floating-point number, where single-precision uses 32 bits and double-precision uses 64 bits, i.e. double the number of bits.
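
A quick way to see this for yourself is a minimal C sketch, assuming a typical platform where float is the IEEE-754 32-bit format and double is the 64-bit format (the C standard does not strictly require those sizes):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* On the usual IEEE-754 platforms float occupies 32 bits and
           double occupies 64 bits, i.e. exactly twice as many. */
        printf("float:  %zu bits\n", sizeof(float) * CHAR_BIT);
        printf("double: %zu bits\n", sizeof(double) * CHAR_BIT);
        return 0;
    }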

davnicwil
0

The terminology "double" isn't quite correct, but it's close enough.

A 64-bit float uses 52 bits for the fraction instead of the 23 bits used for the fraction in a 32-bit float - the precision isn't literally "double", but it does use double the total bits.
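
A minimal C sketch of those widths, assuming float and double are the IEEE-754 32- and 64-bit formats; note that the <float.h> macros count the implicit leading significand bit, so they report 24 and 53 rather than the stored 23 and 52:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* *_MANT_DIG counts significand bits including the implicit
           leading 1, so it prints 24 and 53 rather than the 23 and 52
           bits actually stored.  *_DIG is the number of decimal digits
           that survive a round trip through the type. */
        printf("float  significand: %d bits, %d decimal digits\n",
               FLT_MANT_DIG, FLT_DIG);
        printf("double significand: %d bits, %d decimal digits\n",
               DBL_MANT_DIG, DBL_DIG);
        return 0;
    }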

The answer to this question is very interesting - you should give it a read.

greg84
  • I actually did read that before posting. Some interesting info for sure, but nothing that quite answers my question. :) – Keith Grout Jul 19 '13 at 21:21
  • And one of the advantages we GE/Honeywell folks claimed for our processor was not only did we have 72 bits of precision (2X36) but we also had a separate 8 bit exponent register so our double precision truly was double. – Dale Wilson Jul 19 '13 at 21:40
  • The difference goes back to well before IEEE float format was even invented. In the late 60s IBM 70xx machines had 36 and 72 bit float formats, and IBM S/360s had 32 and 64 bit float formats. DEC had their own, but I'm reasonably sure they had both single and double. (And I think a few architectures implemented quadruple.) I'm trying to remember what the 60-bit CDC 6xxx series had, but I think it has both single and double as well. These were always known as "single precision" and "double precision". – Hot Licks Jul 19 '13 at 21:49