The IEEE 754 floating-point standard defines several exceptions that are raised when the result of an operation cannot be represented exactly or is otherwise problematic: underflow, overflow, inexact, invalid, and divide-by-zero.
As you can see, these exceptions could occur quite frequently: even 0.2 + 0.1 raises the inexact exception. For a piece of numerical code with N floating-point instructions, N/2 or more of them might raise at least one of these exceptions. So how does the OS avoid the performance overhead of these exceptions being triggered constantly?
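To make the 0.2 + 0.1 case concrete, here is a minimal C sketch (assuming a C99 compiler that exposes the standard `<fenv.h>` interface) that checks whether the addition sets the inexact status flag; the `volatile` qualifiers are only there to keep the compiler from folding the sum at compile time:

```c
#include <fenv.h>
#include <stdio.h>

int main(void) {
    feclearexcept(FE_ALL_EXCEPT);          /* start from a clean floating-point status word */

    volatile double a = 0.2, b = 0.1;      /* neither value is exactly representable in binary64 */
    volatile double sum = a + b;           /* the rounded sum should set the inexact flag */

    if (fetestexcept(FE_INEXACT))
        printf("0.2 + 0.1 raised the inexact flag (sum = %.17g)\n", sum);
    else
        printf("no inexact flag was raised\n");
    return 0;
}
```

On a glibc system this would presumably be compiled with something like `cc -std=c99 test.c -lm`, since the `fe*` functions live in libm there.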