I recently read some Java scientific code that needs to do a lot of multiplications.
The implementation uses log values because the developer thought addition would be faster and more precise than multiplication.
For example, if he/she needs to calculate the product of double values A[1]*A[2]*...*A[n], he/she instead computes La[i] = log(A[i]) and then sums La[1] + La[2] + ... + La[n].
(All values are final, so the log values can be computed once and reused. No need to worry about the one-time cost of the log() calls.)
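To make the scheme concrete, here is a minimal sketch of both versions (the method names are mine for illustration, not the author's actual code):

```java
public class LogProduct {

    // Plain left-to-right product of the array elements.
    static double productDirect(double[] a) {
        double p = 1.0;
        for (double x : a) {
            p *= x;
        }
        return p;
    }

    // The scheme in question: sum the precomputed logs,
    // then exponentiate once to recover the product.
    static double productViaLogs(double[] a) {
        double sum = 0.0;
        for (double x : a) {
            sum += Math.log(x);
        }
        return Math.exp(sum);
    }

    public static void main(String[] args) {
        // All factors here are exactly representable in binary,
        // so the direct product is exact.
        double[] a = {1.5, 2.0, 0.25, 8.0};
        System.out.println(productDirect(a));   // 6.0
        System.out.println(productViaLogs(a));  // close to 6.0, possibly off by a few ulps
    }
}
```

Note that the log version performs a rounding step per log() and per exp(), so it can actually be slightly *less* accurate than the direct product.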
To me, it's really not clear that this brings any significant performance benefit.
I checked online (the question "What's the relative speed of floating point add vs. floating point multiply" and https://agner.org/optimize/), and I don't see FP addition being significantly faster than FP multiplication on modern CPUs.
I also read the IEEE 754 standard's binary representation of double. I don't think the log value can hold more information and make the result more precise, because a double has the same 53-bit significand either way.
What do you think?