Whenever I search for the term 'denormal numbers' or 'denormals', all I find are ways to detect them and flush them to zero. Apparently, nobody really likes them, because dealing with them incurs a performance penalty.
And yet, they're implemented everywhere. Why? If it's for precision, I'd say you're gonna need a bigger float, or change the order of your operations such that you avoid really small intermediate values. I find it hard to believe that that little bit of extra precision is really worth the precious clock cycles.
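For concreteness, here's the "extra precision" behavior I'm talking about, sketched in Python (which uses IEEE 754 doubles): gradual underflow means the gap between the smallest normal number and zero is filled with subnormals, so subtracting two distinct tiny values doesn't silently collapse to zero.

```python
import sys

# Smallest positive *normal* double, about 2.2250738585072014e-308.
min_normal = sys.float_info.min

# Halving it produces a subnormal, not zero: gradual underflow.
sub = min_normal / 2
print(sub > 0)        # True: still representable, just with reduced precision

# With subnormals, x - y == 0 holds only when x == y.
x = min_normal * 1.5
y = min_normal
print(x - y == 0)     # False: the difference is a subnormal, not zero
```

If subnormals were flushed to zero instead, `x - y` above would be exactly `0.0` even though `x != y`, which breaks code that tests `x != y` before dividing by `x - y`.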
Are there any good reasons why one would still use denormal numbers? And if there are no significant reasons to have them, why implement them at all? Only for IEEE 754 compliance?