The majority of integer multiplications don't actually need a multiply instruction:
- Floating-point multiplication is, and has been since the 486, normally handled by dedicated floating-point hardware.
- Multiplication by a constant, such as scaling an array index by the size of the element, can be reduced to a single left shift in the common case where the constant is a power of two, or to a sequence of shifts and additions in the general case.
- Multiplications that come from indexing a 2D array inside a loop can often be strength-reduced to additions (both transformations are sketched below).
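Here's a minimal sketch of both transformations in C; the function names and the constant 12 are just illustrative, and in practice the compiler performs these rewrites automatically at any reasonable optimization level:

```c
#include <stddef.h>

/* Multiply by the constant 12 with no multiply instruction:
 * x * 12 == x * 8 + x * 4, i.e. two shifts and an add. */
static unsigned scale_by_12(unsigned x) {
    return (x << 3) + (x << 2);
}

/* Row-major a[i][j] naively costs an i * ncols multiply per access.
 * Inside a loop over rows, that multiply is strength-reduced to a
 * running pointer that is bumped by ncols once per iteration. */
static long sum_2d(const long *a, size_t nrows, size_t ncols) {
    long total = 0;
    const long *row = a;                  /* points at a[i][0] */
    for (size_t i = 0; i < nrows; i++) {
        for (size_t j = 0; j < ncols; j++)
            total += row[j];              /* no i * ncols here */
        row += ncols;                     /* addition replaces the multiply */
    }
    return total;
}
```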
So what's left?
- Certain library functions like `fwrite` that take a number of elements and an element size as runtime parameters (see the sketch after this list).
- Exact decimal arithmetic, e.g. Java's `BigDecimal` type.
- Such forms of cryptography as require multiplication and are not handled by their own dedicated hardware.
- Big integers e.g. for exploring number theory.
- Other cases I'm not thinking of right now.
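To make the `fwrite` case concrete: both the element size and the count are runtime values from the library's point of view, so computing the byte count is a genuine run-time multiply. A minimal sketch (the `record` struct and the file name are placeholders):

```c
#include <stdio.h>
#include <stdlib.h>

struct record { int id; double value; };        /* placeholder element type */

int main(int argc, char **argv) {
    /* Element count known only at run time. */
    size_t n = argc > 1 ? strtoull(argv[1], NULL, 10) : 1000;
    struct record *buf = calloc(n, sizeof *buf); /* calloc also multiplies */
    if (!buf) return 1;

    FILE *fp = fopen("records.bin", "wb");       /* placeholder file name */
    if (!fp) { free(buf); return 1; }

    /* Internally fwrite has to compute n * sizeof(struct record) bytes;
     * neither operand is a compile-time constant inside the library, so
     * this is one of the places a real multiply instruction gets used. */
    size_t written = fwrite(buf, sizeof *buf, n, fp);

    fclose(fp);
    free(buf);
    return written == n ? 0 : 1;
}
```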
None of these jump out at me as wildly common, yet every modern CPU architecture includes integer multiply instructions. (RISC-V relegates them to the optional M extension rather than the base integer ISA, and has been criticized even for going that far.)
Has anyone ever analyzed a representative sample of code, such as the SPEC benchmarks, to find out exactly what use case accounts for most of the actual uses of integer multiply (as measured by dynamic rather than static frequency)?