I recently read some Java scientific code that needs to do a lot of multiplications.

The implementation uses log values because the developer thought it would be faster and more precise.

For example, if they need to calculate the product of double values A[1]*A[2]*...*A[n], they instead let La[i] = log(A[i]) and then compute La[1]+La[2]+...+La[n].

(All the values are final, so the logs can be computed once and reused; the one-time log() cost is not a concern.)
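
For concreteness, here is a minimal Java sketch of that pattern (the class name, array name, and contents are illustrative, not from the original code):

```java
public class LogProduct {
    public static void main(String[] args) {
        double[] a = {1.5, 2.0, 0.25, 8.0};

        // Direct approach: one multiplication per element.
        double product = 1.0;
        for (double x : a) {
            product *= x;
        }

        // Log-domain approach: precompute the logs once (the values are
        // final, so this is a one-time cost), then add instead of multiply.
        double[] la = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            la[i] = Math.log(a[i]);
        }
        double logSum = 0.0;
        for (double l : la) {
            logSum += l;
        }

        System.out.println(product);          // 6.0
        System.out.println(Math.exp(logSum)); // ~6.0, up to rounding error
    }
}
```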

I am really not sure this brings a significant performance benefit.

  1. I checked online (What's the relative speed of floating point add vs. floating point multiply, and https://agner.org/optimize/). I don't see FP addition being significantly faster than FP multiplication on modern CPUs.

  2. I also read the IEEE 754 standard's binary representation of double. I don't think a log value can hold more information and make the result more precise, because the IEEE format already stores an exponent and is essentially doing the same thing. (A small sketch below illustrates the precision question.)
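
On point 2, a small sketch (with arbitrarily chosen values) of why the log route cannot gain precision: a*b is rounded once, while exp(log(a) + log(b)) is rounded at every step, and Java's Math.log/Math.exp are only guaranteed to be within 1 ulp of the exact result:

```java
public class LogPrecisionDemo {
    public static void main(String[] args) {
        double a = 0.1234567890123456;
        double b = 9.876543210987654;

        double direct = a * b;                               // one rounding step
        double viaLog = Math.exp(Math.log(a) + Math.log(b)); // several rounding steps

        System.out.println("direct: " + direct);
        System.out.println("viaLog: " + viaLog);
        System.out.println("equal:  " + (direct == viaLog)); // frequently false
    }
}
```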

What do you think?

  • Right, FP add has similar performance to FP mul on modern x86. (Haswell/Broadwell have better mul throughput than add but worse mul latency; on Skylake their performance is identical.) If you're talking about *integer* logarithms, that's a very different problem. Integer add is faster, with or without SIMD. But keeping only the integer part of a log2() is very imprecise, especially for small numbers. – Peter Cordes Sep 05 '18 at 19:06

1 Answer


The purpose may not be performance but handling really big numbers. It can be useful when dealing with quantities like the number of atoms in a system or the total number of states, which easily go beyond the floating-point range.

– Abhay
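
To make this point concrete, a minimal sketch (the factor and count are made up) in which the direct product overflows to infinity while the sum of logs stays comfortably representable:

```java
public class LogRangeDemo {
    public static void main(String[] args) {
        int n = 1000;
        double factor = 10.0;

        double product = 1.0;
        double logSum = 0.0;
        for (int i = 0; i < n; i++) {
            product *= factor;           // overflows past ~1.8e308
            logSum += Math.log(factor);  // grows only linearly
        }

        System.out.println(product); // Infinity
        System.out.println(logSum);  // ~2302.585, still representable
    }
}
```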