
Most of the time, real numbers are represented on computers in a floating-point format. However, this format has some drawbacks, and for some applications other formats could be more useful.
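
To make one such drawback concrete, here is a small Python illustration (Python's built-in `float` is an IEEE 754 binary64, and the standard `decimal` module stands in for a decimal format):

```python
# 0.1 has no exact representation in binary floating point,
# so rounding error shows up even in trivial arithmetic.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# A decimal representation avoids this particular error,
# at the cost of speed:
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```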

I know that fixed-point arithmetic used to be common but was largely abandoned in favour of floating point. Apart from these two, however, I have not heard of any other ways to represent real numbers on a computer.
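
For context, here is a minimal sketch of what fixed-point arithmetic looks like, assuming a hypothetical Q16.16 layout (16 integer bits, 16 fractional bits); the helper names are mine, purely for illustration:

```python
# Fixed point: store values as integers scaled by 2**16.
SCALE = 1 << 16  # 16 fractional bits

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(f: int) -> float:
    return f / SCALE

def fixed_mul(a: int, b: int) -> int:
    # The product of two Q16.16 values has 32 fractional bits,
    # so shift right to restore the Q16.16 scale.
    return (a * b) >> 16

a, b = to_fixed(1.5), to_fixed(2.25)
print(from_fixed(a + b))            # 3.75  (addition is plain integer addition)
print(from_fixed(fixed_mul(a, b)))  # 3.375
```

Since addition is plain integer addition and only multiplication needs a rescale, fixed point stays attractive on hardware without a floating-point unit.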

My question is: what number formats have been proposed for representing real numbers, and for which applications might they be more useful than the floating-point standard?

*I'm not sure if this is the appropriate site to post this question, so please suggest a better one if not.*

– FusRoDah

  • IMO, this question is too broad for this site. If you are interested in representing decimal floating-point numbers, there are [decimal32](https://en.wikipedia.org/wiki/Decimal32_floating-point_format) (and 64-, 128-bit) formats in the IEEE 754-2008 standard. – chtz Feb 24 '22 at 00:22
  • And fixed-point numbers are not "abandoned" -- there are applications where they are still useful (e.g. for portability or on systems w/o hardware floating point support) – chtz Feb 24 '22 at 00:25
  • See [this answer](https://stackoverflow.com/a/12007422/298225). And add [unums and posits](https://en.wikipedia.org/wiki/Unum_(number_format)). – Eric Postpischil Feb 24 '22 at 00:59
  • That's interesting. Can unums and posits be more useful than float in some situations? – FusRoDah Feb 24 '22 at 09:33

0 Answers