I'm creating a general-purpose C library containing common data structures, convenience functions, etc. Within it, I've implemented a dynamic array, and I've chosen the golden ratio as the growth factor for the reason explained here. However, this necessarily involves floating-point multiplication, which raises FE_INEXACT whenever the product cannot be represented exactly, i.e. whenever the operands' significands are large enough that the product needs more than 53 bits.
When I implemented it, I was under the impression that, since the library is intended for general use, floating-point exceptions had to be avoided if at all possible. I first tried something like
#include <fenv.h>

fenv_t fenv;
feholdexcept(&fenv);       // save environment, clear flags, enter non-stop mode
// expand dynamic array
feclearexcept(FE_INEXACT); // discard any FE_INEXACT raised by the expansion
feupdateenv(&fenv);        // restore environment, re-raise the remaining flags
but this had such an enormous time cost that it wasn't worth it.
Eventually, I came up with a solution that had negligible time cost. While it doesn't avoid FE_INEXACT entirely, it makes it highly unlikely. Namely,
size_t newCapacity = nearbyint((double)(float)PHI * capacity);
This only raises FE_INEXACT if the current capacity is extremely large (roughly 2^29 or more, since truncating PHI to float leaves a 24-bit significand), at least on implementations that conform to IEEE 754.
I'm starting to wonder whether my efforts have gone into solving a relative non-issue. For library code, is it reasonable to expect the user to handle a raised FE_INEXACT when necessary, or should the library avoid raising it at all? If the latter, how important is this compared to other factors, such as efficiency?