I'm curious why the IEEE calls a 32-bit floating-point number "single precision." Was it just a means of standardization, or does "single" actually refer to a single *something*?
Is it simply a standardized level? As in, precision level 1 (single), precision level 2 (double), and so on? I've searched all over and found a great deal about the history of floating-point numbers, but nothing that quite answers my question.