Let's say I want to use floating point for calculation, but I want to represent my input data with constant resolution. One possible answer is to start with integers and convert them to floating point, but this can be wasteful if you have to do it repeatedly. It would seem like subnormal floating point is a possible solution, since in this region the exponent is pinned at its minimum and the magnitude of the number is controlled only by the mantissa, so the spacing between representable values is constant.
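For concreteness, here's a minimal C sketch of what I mean (assuming a C11 compiler for `DBL_TRUE_MIN`, the smallest positive subnormal double; `ldexp(1.0, -1074)` would give the same value otherwise):

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    /* Smallest positive subnormal double: one "unit" of resolution. */
    double unit = DBL_TRUE_MIN;

    /* Any integer k with |k| < 2^52 maps exactly to the subnormal
       k * unit, so consecutive values are always exactly one unit apart. */
    double a = 1000.0 * unit;
    double b = 1001.0 * unit;
    printf("spacing near 1000*unit: %a\n", nextafter(a, INFINITY) - a);
    printf("b - a                 : %a\n", b - a);

    /* Contrast with the normal range, where spacing grows with magnitude. */
    printf("spacing near 1.0      : %a\n", nextafter(1.0, INFINITY) - 1.0);
    printf("spacing near 1e10     : %a\n", nextafter(1e10, INFINITY) - 1e10);
    return 0;
}
```

The first two spacings come out identical (one unit), while the normal-range spacings differ by orders of magnitude, which is the constant-resolution property I'm after.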
The question is: are there any gotchas in how CPUs handle floating point numbers (on CPUs that support subnormal numbers, that is) that would prevent this from working as I expect?