
While, as far as I remember, IEEE 754 says nothing about a flush-to-zero mode for handling denormalized numbers faster, some architectures do offer this mode (e.g. http://docs.sun.com/source/806-3568/ncg_lib.html).

In the case of that particular documentation, standard handling of denormalized numbers is the default, and flush-to-zero has to be activated explicitly. In the default mode, denormalized numbers are handled in software, which is slower.

I work on a static analyzer for embedded C which tries to predict correct (if sometimes imprecise) ranges for the values that can occur at run-time. It aims to be correct because it is intended to be usable to exclude the possibility of something going wrong at run-time (for instance in critical embedded code). This requires capturing all possible behaviors during the analysis, and therefore all possible values produced by floating-point computations.

In this context, my question is twofold:

  1. Among embedded architectures, are there architectures that offer only flush-to-zero? They would perhaps not have the right to advertise themselves as "IEEE 754", but could offer close-enough IEEE 754-style floating-point operations.

  2. For the architectures that offer both, in an embedded context, isn't flush-to-zero likely to be activated by the system, in order to make the reaction time more predictable (a common constraint for these embedded systems)?

Handling flush-to-zero in the interval arithmetic that I use for floating-point values is simple enough if I know I have to do it; my question is more whether I have to do it.
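For illustration, the widening would look roughly like the sketch below (plain C, with made-up names and a simplistic interval type; signed zeros are glossed over): any result bound that falls strictly inside the subnormal range may, on a flush-to-zero target, come out as zero instead, so the interval has to be stretched to include zero on that side.

```c
#include <float.h>  /* FLT_MIN: smallest positive normal single-precision value */

/* Illustrative interval type: a closed interval [lo, hi] of floats. */
struct f_interval { float lo; float hi; };

/* Hypothetical post-processing of a result interval: widen it so that it
   also contains the value a flush-to-zero target could produce, i.e. a
   subnormal bound may be replaced by zero. */
static struct f_interval widen_for_ftz(struct f_interval iv)
{
    /* A positive subnormal lower bound may be flushed down to zero. */
    if (iv.lo > 0.0f && iv.lo < FLT_MIN)
        iv.lo = 0.0f;
    /* A negative subnormal upper bound may be flushed up to zero. */
    if (iv.hi < 0.0f && iv.hi > -FLT_MIN)
        iv.hi = 0.0f;
    return iv;
}
```

Handling denormal inputs being zeroed (the DAZ mode mentioned in the answer below) would be similar, except that the operand intervals would be widened before the operation rather than the result interval after it.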

Pascal Cuoq
  • Good question, well stated. I'm no expert on embedded systems, but I suspect there isn't a clear answer. It would solely depend on your particular end-user environment. – GManNickG Jan 18 '10 at 02:42

2 Answers


Yes to both questions. There are platforms that support flush-to-zero only, and there are many platforms where flush-to-zero is the default.

You should also be aware that many embedded and DSP platforms use a "Denormals Are Zero" mode, which is another wrinkle in the floating-point semantics.


Edit: further explanation of FTZ vs. DAZ:

In FTZ, when an operation would produce a denormal result under the usual arithmetic, a zero is returned instead. Note that some implementations always flush to positive zero, whereas others may flush to either positive or negative zero. It's probably best not to depend on either behavior.

In DAZ, when an input to an operation is a denormal, a zero is substituted in its place. Again, there's no general guarantee about which zero will be substituted.

Some implementations that support these modes allow them to be set independently (and some support only one of the two), so it may be necessary for you to be able to model each mode independently as well as both together.

Note also that some implementations combine these two modes into "Flush to Zero". The ARM VFP "flush to zero" mode is both FTZ and DAZ, for example.
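Not an embedded example, but the easiest place to watch the two modes side by side is x86, where both bits live in the MXCSR register and are reachable through SSE intrinsics. A minimal sketch, assuming an x86-64 compiler that uses SSE for float arithmetic; it only illustrates the semantics above and says nothing about any particular embedded target:

```c
#include <stdio.h>
#include <float.h>       /* FLT_MIN */
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE (the FTZ bit of MXCSR)     */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE (the DAZ bit of MXCSR) */

int main(void)
{
    volatile float small_normal = FLT_MIN;   /* smallest positive normal */
    volatile float subnormal    = 1e-40f;    /* a subnormal constant     */

    /* Default IEEE 754 behavior: the product underflows gradually to a subnormal. */
    printf("IEEE 754: %g\n", small_normal * 0.25f);

    /* FTZ: a subnormal *result* is replaced by zero. */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    printf("FTZ     : %g\n", small_normal * 0.25f);

    /* DAZ: a subnormal *input* is treated as zero, so the product below
       becomes 0 instead of the perfectly normal value 1e-20. */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    printf("DAZ     : %g\n", subnormal * 1e20f);

    return 0;
}
```

The first line prints a nonzero subnormal; the other two print 0.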

Stephen Canon
  • Today I implemented the interval arithmetic that at once encompasses all possibilities of FTZ, DAZ (flushing to +0 or to the same-sign zero) and IEEE 754 subnormals. None of our regression tests showed any difference compared to the previous, IEEE 754-only arithmetic. So it probably won't be necessary to bother users with an option for this; the new mode should make everyone happy. This is a very good thing. Thanks again! – Pascal Cuoq Jan 21 '10 at 23:23
  • What would be the practicality of performing floating point math so that any mantissa bits which represented values smaller than the smallest normalized value would get rounded off? I would think that could be cheaper than dealing with denormalized values, since all floating-point numbers would have the same representation. Only the 'final cleanup' stage would have to change. – supercat Feb 22 '12 at 00:21
  • @supercat: I'm not a hardware designer, but I don't *think* that scheme would actually save much complication in practice (and it would cause even more loss of precision than flushing). – Stephen Canon Feb 22 '12 at 01:32
  • @StephenCanon: The issue with flush-to-zero isn't with precision so much as with the logical problems that occur when x+(y-x) is no closer to y than x was (see the small demonstration after this comment thread). Having numbers be rounded off such that their differences can be represented would avoid such issues. – supercat Feb 22 '12 at 06:21
  • @supercat: absolutely right, but I don't think that it would actually be significantly cheaper to implement in hardware than gradual underflow, which has the same property and greater precision. As I noted, however, I'm not a hardware designer. – Stephen Canon Feb 22 '12 at 13:55
  • @supercat: There's also the issue that you would also need to come up with some way to interpret all of the encodings that have significand bits smaller than the smallest normalized value set. – Stephen Canon Feb 22 '12 at 14:32
  • @StephenCanon: My idea would be a means of achieving gradual underflow without requiring special encoding for denormals, or any special input processing (unlike denormals, which require special handling on both input and output). Using the approach with truncate-toward-zero (rather than rounding) would be pretty easy; I'm not sure how rounding would best be realized in hardware. – supercat Feb 22 '12 at 15:37
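A tiny demonstration of the x + (y - x) point raised in the comments above, again using the x86 FTZ bit purely for illustration (same assumptions as the earlier sketch): with gradual underflow the small difference y - x is represented exactly and the sum lands back on y, whereas with flush-to-zero the difference vanishes and x never moves.

```c
#include <stdio.h>
#include <float.h>       /* FLT_MIN */
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */

int main(void)
{
    volatile float x = FLT_MIN;          /* smallest positive normal, 2^-126 */
    volatile float y = 1.5f * FLT_MIN;   /* a nearby normal value            */

    /* Gradual underflow: y - x is the subnormal 2^-127, and x + (y - x) == y. */
    printf("IEEE: x + (y - x) == y ? %d\n", x + (y - x) == y);

    /* Flush-to-zero: y - x is flushed to 0, so x + (y - x) stays stuck at x. */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    printf("FTZ : x + (y - x) == y ? %d\n", x + (y - x) == y);

    return 0;
}
```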

ARM Cortex cores have a flush-to-zero option; it's hard to see how you could ignore it. Then again, don't take business advice from a forum. Talk to your customers.
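For completeness: on a Cortex core with a VFP/FPU, the option in question is the FZ bit (bit 24) of the FPSCR register, and setting it gives the combined FTZ+DAZ behavior described in the other answer. A minimal sketch, assuming the vendor's CMSIS headers are available (the exact header name varies between packages):

```c
/* Assumes a Cortex part with an FPU and CMSIS-style intrinsics, which
   provide __get_FPSCR()/__set_FPSCR(); the header name is vendor-dependent. */
#include "cmsis_compiler.h"

#define FPSCR_FZ_BIT  (1UL << 24)   /* flush-to-zero enable bit of FPSCR */

/* Turn flush-to-zero on for subsequent floating-point operations. */
static void enable_flush_to_zero(void)
{
    __set_FPSCR(__get_FPSCR() | FPSCR_FZ_BIT);
}
```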

Hans Passant
  • The existing users are great, they use sane platforms, understand floating-point issues and they go to the length of de-activating the silent generation of `fmadd` by their compiler to make rounding errors more predictable. It's the prospective users I am interested in. Thanks for your feedback. – Pascal Cuoq Jan 18 '10 at 03:18