I have a few places in my code where I want to ensure that a division of two arbitrary floating point numbers (32-bit single precision) won't overflow. The target/compiler does not guarantee (explicitly enough) proper handling of -INF/INF and does not fully guarantee IEEE 754 behaviour for the exceptional values (possibly undefined, and the target might change). Also, I cannot make safe assumptions about the inputs in these few special places, and I am bound to the C90 standard libraries.
I have read What Every Computer Scientist Should Know About Floating-Point Arithmetic but to be honest, I am a little bit lost.
So... I want to ask the community whether the following piece of code would do the trick, and whether there are better/faster/more exact/more correct ways to do it:
#include <math.h>   /* fabs(); C90 has no fabsf() */
#include <float.h>  /* FLT_MAX */

#define SIGN_F(val) (((val) >= 0.0f) ? 1.0f : -1.0f)

float32_t safedivf(float32_t num, float32_t denum)
{
    const float32_t abs_denum = (float32_t)fabs(denum);

    /* Division would overflow (or come within rounding of it):
     * clamp to +/-FLT_MAX with the correct sign. */
    if ((abs_denum < 1.0f) && ((abs_denum * FLT_MAX) <= (float32_t)fabs(num)))
        return SIGN_F(denum) * SIGN_F(num) * FLT_MAX;
    else
        return num / denum;
}
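In case it is useful, here is a self-contained sketch of how I would exercise the clamping path on a desktop compiler. It assumes float32_t is simply a typedef for float (which may not hold on the real target) and repeats the function so it compiles on its own:

/* Desktop-only sketch: float32_t is assumed to be a plain float here. */
#include <stdio.h>
#include <math.h>
#include <float.h>

typedef float float32_t;

#define SIGN_F(val) (((val) >= 0.0f) ? 1.0f : -1.0f)

static float32_t safedivf(float32_t num, float32_t denum)
{
    const float32_t abs_denum = (float32_t)fabs(denum);
    if ((abs_denum < 1.0f) && ((abs_denum * FLT_MAX) <= (float32_t)fabs(num)))
        return SIGN_F(denum) * SIGN_F(num) * FLT_MAX;
    else
        return num / denum;
}

int main(void)
{
    /* FLT_MAX / 0.5f would overflow; the guard clamps it to FLT_MAX. */
    printf("%g\n", (double)safedivf(FLT_MAX, 0.5f));

    /* Sign handling: clamped to -FLT_MAX. */
    printf("%g\n", (double)safedivf(-FLT_MAX, 0.5f));

    /* Ordinary case: falls through to the plain division (~0.333333). */
    printf("%g\n", (double)safedivf(1.0f, 3.0f));

    return 0;
}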
Edit: Changed ((abs_denum * FLT_MAX) < (float32_t)fabs(num))
to ((abs_denum * FLT_MAX) <= (float32_t)fabs(num))
as recommended by Pascal Cuoq.