
The majority of integer multiplications don't actually need a multiply instruction:

  • Floating-point is, and has been since the 486, normally handled by dedicated hardware.
  • Multiplication by a constant, such as for scaling an array index by the size of the element, can be reduced to a left shift in the common case where the constant is a power of two, or to a sequence of left shifts and additions in the general case (see the first sketch after this list).
  • Multiplications associated with accessing a 2D array can often be strength-reduced to addition when they occur inside a loop (see the second sketch after this list).
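
As a minimal sketch of the constant-multiply reduction (hand-written C for illustration, not the output of any particular compiler):

```c
#include <stdint.h>

/* Power-of-two constant: a single left shift. */
static inline uint32_t times8(uint32_t x)
{
    return x << 3;                  /* x * 8 */
}

/* General constant: a short shift-and-add sequence.
 * x * 10 = x*8 + x*2 = (x << 3) + (x << 1). */
static inline uint32_t times10(uint32_t x)
{
    return (x << 3) + (x << 1);
}
```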
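
And a sketch of the loop case, assuming a row-major array: the per-element `row * ncols` multiply can be replaced by carrying a running offset forward with one addition per row (again a hand-written illustration, not compiler output):

```c
#include <stddef.h>

/* Naive form: an index multiply for every element, a[row * ncols + col]. */
void scale_naive(double *a, size_t nrows, size_t ncols, double k)
{
    for (size_t row = 0; row < nrows; row++)
        for (size_t col = 0; col < ncols; col++)
            a[row * ncols + col] *= k;
}

/* Strength-reduced form: the row offset advances by ncols per row,
 * so no multiply remains in the loop body. */
void scale_reduced(double *a, size_t nrows, size_t ncols, double k)
{
    size_t base = 0;
    for (size_t row = 0; row < nrows; row++, base += ncols)
        for (size_t col = 0; col < ncols; col++)
            a[base + col] *= k;
}
```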

So what's left?

  • Certain library functions, such as fwrite, that take a number of elements and an element size as runtime parameters (see the sketch after this list).
  • Exact decimal arithmetic, e.g. Java's BigDecimal type.
  • Cryptographic algorithms that require multiplication and are not handled by their own dedicated hardware.
  • Big integers, e.g. for exploring number theory.
  • Other cases I'm not thinking of right now.
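
To illustrate the first bullet (a trivial example with a hypothetical `record` struct, not taken from any real codebase): both factors of the byte count reach `fwrite` as runtime arguments, so the library has to perform an actual multiply internally.

```c
#include <stdio.h>

struct record { int id; double value; };

/* From fwrite's point of view, both the element size and the element
 * count are runtime values, so size * nmemb needs a real multiply. */
size_t dump_records(const struct record *recs, size_t n, FILE *f)
{
    return fwrite(recs, sizeof *recs, n, f);
}
```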

None of these jump out at me as wildly common, yet all modern CPU architectures include integer multiply instructions. (RISC-V omits them from the minimal version of the instruction set, but has been criticized for even going this far.)

Has anyone ever analyzed a representative sample of code, such as the SPEC benchmarks, to find out exactly what use case accounts for most of the actual uses of integer multiply (as measured by dynamic rather than static frequency)?

  • Functions that take runtime-variable 2D array dimensions may need multiply. Also, if your constant multiply can't be done with 2 or fewer instructions (on modern x86), it's normally best to just use `imul`. e.g. for a [multiplicative inverse for division by compile-time constants](https://stackoverflow.com/questions/41183935/why-does-gcc-use-multiplication-by-a-strange-number-in-implementing-integer-divi), the multiplier is often something like `0x55555555` that would be horribly expensive to implement with manual shifts/adds. – Peter Cordes Nov 23 '20 at 14:08
  • Having a fast multiplier means you'll find more uses for it, e.g. for horizontal sum of bytes at the end of a popcount bithack ([How to count the number of set bits in a 32-bit integer?](https://stackoverflow.com/q/109023)), or in [What's the fastest way to pack 32 0/1 values into the bits of a single 32-bit variable?](https://stackoverflow.com/a/26201755) – Peter Cordes Nov 23 '20 at 14:12
  • Programs that use fixed-point math will also use integer multiplies. – instinct71 Nov 25 '20 at 02:21
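
(For context on the first comment, here is a hedged sketch of the multiplicative-inverse idea for the unsigned divide-by-3 case; the magic constant is ceil(2^33 / 3), and the code is an illustration of the technique, not any specific compiler's output.)

```c
#include <assert.h>
#include <stdint.h>

/* Unsigned division by 3 as a fixed-point reciprocal multiply:
 * 0xAAAAAAAB == ceil(2^33 / 3), so (x * 0xAAAAAAAB) >> 33 equals x / 3
 * for every 32-bit x. The divide becomes one widening multiply and a shift. */
static inline uint32_t div3(uint32_t x)
{
    return (uint32_t)(((uint64_t)x * 0xAAAAAAABull) >> 33);
}

int main(void)
{
    for (uint32_t x = 0; x < 3000000u; x++)
        assert(div3(x) == x / 3);
    return 0;
}
```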

0 Answers