
Does the carry-less multiplication instruction run in constant time? Said differently, is the time it takes to execute independent of its arguments?

Peter Cordes
yberman
  • I don't think the execution time varies according to the values of the operands in any CPU that implements the PCLMULQDQ instruction, but the actual latency of the instruction can vary widely depending on a number of factors, including what operands are used (i.e., whether a register operand is dependent on a previous instruction, or where a memory operand is located in the cache hierarchy), just like with any other instruction. – Ross Ridge Nov 20 '18 at 21:24
  • Given that CLMUL was made for cryptography, I would personally be very surprised if it was not constant time. – fuz Nov 21 '18 at 03:13

1 Answer


According to https://agner.org/optimize/, PCLMULQDQ has a fixed latency on any given CPU. (http://www.uops.info/table.html doesn't list a latency for it, but has good data for most instructions.)

There's no reason to expect it to be data-dependent: typically only division / sqrt has data-dependent performance in modern high-performance CPUs. Regular multiply doesn't; instead, CPUs just make it fast for the general case with lots of hardware parallelism inside the execution unit.
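
For reference, a carry-less multiply is just shift-and-XOR with no carry propagation between bit positions. A minimal software sketch of a 64x64 => 128-bit carry-less multiply (purely illustrative, not how the hardware implements it) makes it clear the amount of work is the same for every pair of operand values:

```c
#include <stdint.h>
#include <stdio.h>

/* Reference (software) 64x64 -> 128-bit carry-less multiply.
 * A fixed 64 iterations of shift/XOR: there is no carry propagation
 * and nothing like division's early-exit that could make the timing
 * depend on the data. */
static void clmul64_ref(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi)
{
    uint64_t rlo = 0, rhi = 0;
    for (int i = 0; i < 64; i++) {
        /* Branchless select: mask is all-ones iff bit i of b is set. */
        uint64_t mask = (uint64_t)0 - ((b >> i) & 1);
        rlo ^= (a << i) & mask;
        if (i) rhi ^= (a >> (64 - i)) & mask;
    }
    *lo = rlo;
    *hi = rhi;
}

int main(void)
{
    uint64_t lo, hi;
    clmul64_ref(0x87, 0x3, &lo, &hi);    /* (x^7+x^2+x+1) * (x+1) */
    printf("0x%016llx%016llx\n",         /* expect ...000189 */
           (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}
```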

Out-of-order instruction scheduling is a lot easier when uops have fixed latency, and so is building fully-pipelined execution units for them. The scheduler (reservation station) can avoid having 2 operations finish at the same time on the same port and create a write-back conflict, or worse, collide inside the same execution unit and cause stalls within it. This is why fixed latency is very common.

(A microcoded multi-uop pclmulqdq with branching could potentially have variable latency, or more plausibly latency that depends on the immediate operand: maybe an extra shuffle uop or two when the immediate is non-zero. So the fixed-latency argument for a single uop doesn't necessarily apply to a micro-coded instruction, but pclmulqdq is still simple enough that you wouldn't expect it to actually branch internally the way rep movsb has to.)


As @fuz points out, PCLMUL was made for crypto, so data-dependent performance would make it vulnerable to timing attacks. So there's a very strong reason to make PCLMUL constant time. (Or at worst, dependent on the immediate, but not the register/memory sources. e.g. an immediate other than 0 could cost extra shift uops to get the high halves of the sources fed to a 64x64 => 128 carryless-multiply unit.)
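
For context on that immediate: it selects which 64-bit half of each 128-bit source gets multiplied. A small sketch with the C intrinsic, building with `-mpclmul` on GCC/Clang (the operand values are just arbitrary examples):

```c
#include <stdint.h>
#include <stdio.h>
#include <wmmintrin.h>   /* _mm_clmulepi64_si128, requires -mpclmul */

int main(void)
{
    __m128i a = _mm_set_epi64x(0x1234567890abcdefULL, 0x0000000000000087ULL);
    __m128i b = _mm_set_epi64x(0xfedcba0987654321ULL, 0x0000000000000003ULL);

    /* imm8 bit 0 selects the qword of the first source,
       imm8 bit 4 selects the qword of the second source. */
    __m128i lo_lo = _mm_clmulepi64_si128(a, b, 0x00); /* a.lo clmul b.lo */
    __m128i hi_hi = _mm_clmulepi64_si128(a, b, 0x11); /* a.hi clmul b.hi */

    uint64_t r[2];
    _mm_storeu_si128((__m128i *)r, lo_lo);
    printf("lo*lo = 0x%016llx%016llx\n",
           (unsigned long long)r[1], (unsigned long long)r[0]);
    _mm_storeu_si128((__m128i *)r, hi_hi);
    printf("hi*hi = 0x%016llx%016llx\n",
           (unsigned long long)r[1], (unsigned long long)r[0]);
    return 0;
}
```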


Numbers from Agner Fog's tables

On Intel since Broadwell, pclmulqdq is 1 uop. On Skylake, it's 7 cycle latency, 1 per clock throughput. (So you need to keep 7 independent PCLMUL operations in flight to saturate the execution unit on port 5.) Broadwell has 5 cycle latency. With a memory source operand, it's 1 extra uop.

On Haswell, it's 3 uops (2p0 p5) with 7 cycle latency and one per 2 clock throughput.

On Sandybridge/IvyBridge it's 18 uops, 14c latency, one per 8 clock throughput.

On Westmere (2nd Gen Nehalem) it's 12c latency, one per 8c throughput. (Unknown number of uops; neither Agner Fog nor uops.info has it, but we can safely assume it's microcoded.) This was the first generation to support the instruction, one of the very few ISA differences between Nehalem and Westmere.


On Ryzen it's 4 uops, 4c latency, one per 2 clock throughput. http://instlatx64.atw.hu/ shows it as 4.5 cycle latency; I'm not sure what the difference is between their testing and Agner's.

On Piledriver it's 5 uops, 12c latency, one per 7 clock throughput.


On Jaguar it's 1 uop, 3c latency, one per 1 clock throughput!

On Silvermont it's 8 uops, 10c latency/throughput. Goldmont = 3 uops, 6c lat / 3c tput.
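
If you want to sanity-check the data-independence claim on your own hardware, a rough sketch (not a rigorous benchmark: it counts `__rdtsc` reference cycles and ignores turbo/frequency effects, and the `time_chain` helper is just something I made up for illustration) is to time a long serial dependency chain of PCLMULQDQ with a few different operand patterns and compare the totals:

```c
#include <stdint.h>
#include <stdio.h>
#include <wmmintrin.h>   /* _mm_clmulepi64_si128, requires -mpclmul */
#include <x86intrin.h>   /* __rdtsc */

/* Time a serial dependency chain of `iters` carry-less multiplies.
 * Each result feeds the next input, so total time ~ iters * latency.
 * The XOR keeps the chain values from collapsing; it adds the same
 * ~1 cycle per iteration to every run, so it cancels in the comparison. */
static uint64_t time_chain(__m128i x, __m128i y, long iters)
{
    uint64_t t0 = __rdtsc();
    for (long i = 0; i < iters; i++) {
        x = _mm_clmulepi64_si128(x, y, 0x00);
        x = _mm_xor_si128(x, y);
    }
    uint64_t t1 = __rdtsc();
    /* Prevent the compiler from optimizing the loop away. */
    volatile uint64_t sink = (uint64_t)_mm_cvtsi128_si64(x);
    (void)sink;
    return t1 - t0;
}

int main(void)
{
    const long iters = 100 * 1000 * 1000;
    __m128i zeros = _mm_setzero_si128();
    __m128i ones  = _mm_set1_epi32(-1);
    __m128i mixed = _mm_set_epi64x(0x0123456789abcdefULL, 0xfedcba9876543210ULL);

    printf("all-zero operands: %llu ref cycles\n",
           (unsigned long long)time_chain(zeros, zeros, iters));
    printf("all-ones operands: %llu ref cycles\n",
           (unsigned long long)time_chain(ones, ones, iters));
    printf("mixed operands:    %llu ref cycles\n",
           (unsigned long long)time_chain(mixed, mixed, iters));
    return 0;
}
```

Roughly equal totals across the three runs are consistent with fixed latency. Note that a serial chain measures latency; interleaving several independent chains would measure throughput instead.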


See also What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand? and Agner Fog's optimization guide to understand how latency and throughput (and front-end bottlenecks) matter for performance on out-of-order CPUs, depending on the surrounding code.

Peter Cordes
  • [instlatx64](http://instlatx64.atw.hu/) shows fractional latencies on Sandybridge, IvyBridge, and Ryzen. All the others match though. Also I was not able to find `pclmuludq` in [http://www.uops.info/table.html](http://www.uops.info/table.html). – Hadi Brais Nov 20 '18 at 22:14
  • @HadiBrais: on uops.info, you have to enable the "other" checkbox for ISA extensions. And it's `pclmulqdq`, oops, I mis-remembered the mnemonic in this answer. – Peter Cordes Nov 20 '18 at 22:29
  • I think that despite [https://agner.org/optimize/](https://agner.org/optimize/) showing integral values for the latency, that doesn't necessarily imply that the latency is fixed for all possible inputs. Perhaps, the reported average just happens to be an integral value (the average of two odd or even numbers for example). – Hadi Brais Nov 20 '18 at 22:33
  • `PCLMULQDQ` is listed in [http://www.uops.info/table.html](http://www.uops.info/table.html) but I don't see a latency value under the `Lat` column. – Hadi Brais Nov 20 '18 at 22:34
  • @HadiBrais: right. uops.info isn't actually useful for this answer, maybe I should take it out. It's generally a useful web site, though, so I wanted to link it. – Peter Cordes Nov 20 '18 at 22:35
  • Note also that Westmere was the first Intel processor to support `PCLMULQDQ`, consider adding it to the answer. – Hadi Brais Nov 20 '18 at 22:42
  • @HadiBrais: guess I might as well. I wasn't aiming to be exhaustive, but I guess I came close. (e.g. I left out KNL). Was it really not present in Nehalem, and only added with Westmere? I thought there were no core changes. – Peter Cordes Nov 20 '18 at 22:50
  • According to [this](https://www.intel.ph/content/dam/www/public/us/en/documents/white-papers/carry-less-multiplication-instruction-in-gcm-mode-paper.pdf) Intel document, the instruction is new in Westmere. [Wikipedia](https://en.wikipedia.org/wiki/Westmere_(microarchitecture)) also states the same. – Hadi Brais Nov 20 '18 at 22:52
  • Overall, I do think the instruction has a latency independent of its inputs and I have not seen anyone saying otherwise (other than instlatx64 I guess), but not 100% sure on all microarchitectures. – Hadi Brais Nov 20 '18 at 22:56
  • @HadiBrais: the only plausible mechanism would be decoding to different uops for non-zero immediate (extra shuffles). I added a bit to the answer about why fixed uop latencies are a huge deal. A microcoded instruction as a whole can plausibly be variable by branching (unlikely here) or decoding differently depending on immediates, though. – Peter Cordes Nov 20 '18 at 23:05
  • OK, I figured out why PCLMULQDQ is missing from uops.info. My benchmark script uses the cpuid command under Linux to find out which instructions are supported, and apparently, cpuid lists the instruction as PCLMULDQ (i.e., without the first "Q"). I will add data for PCLMULQDQ to uops.info with the next update of the site. – Andreas Abel Nov 21 '18 at 00:04
  • @AndreasAbel: interesting. That sounds like a bug in `cpuid(1)` that should get reported. – Peter Cordes Nov 21 '18 at 00:09
  • @PeterCordes I reported it to cpuid@etallen.com. – Andreas Abel Nov 21 '18 at 00:21