As Paul R said in comments, the comments are probably referring to denormal (I more usually see them described as subnormal, so will use that term) numbers, which fill the underflow gap around zero in floating point arithmetic.
If `a` and `b` are sufficiently close in value, then `a-b` will produce a subnormal value. When those values are handled entirely in hardware, there is often a performance hit. There are techniques to mitigate that in hardware but, on some modern processors, instructions involving subnormals can take over 100 times longer than the same instructions acting on normal values. If those values are handled entirely in software (e.g. the hardware instructions don't handle them directly, and a floating point exception is raised that has to be caught and then sorted out in software) there is virtually always a decrease in performance.
Depending on the type of application, the resultant issues can vary from the insignificant (e.g. a few extra milliseconds for a long numeric calculation that doesn't encounter subnormals too often) to the major (e.g. introducing a potential timing side channel into a security-related system).
The solution given in the question does rely on `interp` being neither a subnormal itself, nor too close to `1.0`.