
Inspired by this answer to "FLOPS per cycle for sandy-bridge and haswell SSE2/AVX/AVX2":

what are the numbers of loads alone, or of loads plus stores, that one can issue per cycle on a core, for Sandy/Ivy Bridge, Haswell/Broadwell and Sky/Kaby Lake? The numbers for AMD Bulldozer, Jaguar and Zen are also of interest.

PS - I know these might not be sustainable rates because of cache/memory bandwidth; I'm only asking about issue rates.

einpoklum

1 Answer


Based on information from:

Sandy/Ivy: per cycle, 2 loads, or 1 load and 1 store. 256-bit loads and stores count double, but only with respect to the load or store itself: there is still only one address, so the AGU becomes available again the next cycle. By mixing in some 256-bit operations you can still get 2x 128-bit loads and 1x 128-bit store per cycle.
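To make that mix concrete, here is a minimal sketch in plain C with AVX intrinsics (the function name add_f32 is hypothetical, and it assumes compilation with AVX enabled, e.g. -mavx). Each iteration demands exactly two 256-bit loads and one 256-bit store:

```c
#include <immintrin.h>
#include <stddef.h>

/* Hypothetical kernel: c[i] = a[i] + b[i] over 256-bit lanes.
 * Per iteration: two 256-bit loads + one 256-bit store.
 * On Sandy/Ivy each 256-bit access ties up its load/store port for
 * two cycles but generates only one address, so the AGUs are free on
 * the second cycle and the loop can approach the 2x 128-bit loads +
 * 1x 128-bit store of data per cycle described above. */
void add_f32(float *c, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(c + i, _mm256_add_ps(va, vb));
    }
}
```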

Haswell/Broadwell: 2 loads and 1 store per cycle, and 256-bit loads/stores don't count double. Port 7 (the store AGU) can only handle simple address calculations (base + constant, no index); complex cases go to p2/p3 and compete with loads, and even simple cases may end up there anyway, but at least they don't have to.
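As an illustration of the port 7 restriction, here is a hedged sketch (again C with AVX intrinsics; the function names scale_indexed and scale_pointer are hypothetical). The compiler ultimately picks the addressing modes, so the generated asm should be checked, but the two loop shapes typically steer the store toward an indexed address (not port-7-eligible) versus a simple base address (port-7-eligible):

```c
#include <immintrin.h>
#include <stddef.h>

/* Version A: everything indexed off i. The store is typically emitted
 * with an indexed address ([base + index*4]), which port 7 cannot
 * handle, so its store-address uop competes with loads on p2/p3. */
void scale_indexed(float *dst, const float *src, size_t n)
{
    for (size_t i = 0; i + 8 <= n; i += 8)
        _mm256_storeu_ps(dst + i,
                         _mm256_mul_ps(_mm256_loadu_ps(src + i),
                                       _mm256_set1_ps(2.0f)));
}

/* Version B: bump the pointers instead. The store address is then a
 * simple [base] (or base + small constant) form that port 7 can take,
 * leaving p2/p3 free for loads. */
void scale_pointer(float *dst, const float *src, size_t n)
{
    const float *end = src + (n & ~(size_t)7);
    for (; src != end; src += 8, dst += 8)
        _mm256_storeu_ps(dst,
                         _mm256_mul_ps(_mm256_loadu_ps(src),
                                       _mm256_set1_ps(2.0f)));
}
```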

Sky/Kaby: the same as Broadwell

Bulldozer: 2 loads, or 1 load and 1 store. 256-bit loads and stores count double.

Jaguar: 1 load or 1 store per cycle, and 256-bit loads and stores count double. By far the worst in this list, but that's because it's the only low-power µarch here.

Ryzen: 2 loads, or 1 load and 1 store. 256-bit loads and stores count double.

harold
  • SnB/IvB can sustain 2x128b loads and 1x128b store per cycle, but only if they're all (or mostly) 256b. The bottleneck is on AGUs, not on cache read-write ports. A store-address uop can run on p2/p3 even while the load-data part of the port is still busy with the 2nd cycle of a 256b load. – Peter Cordes Jul 17 '17 at 04:44
  • Haswell doesn't have cache-bank conflicts either, according to Agner Fog. That's only SnB/IvB. AFAIK, HSW/BDW/SKL/KBL are all the same as far as L1D aligned load/store throughput. – Peter Cordes Jul 17 '17 at 04:46
  • Simple store-address uops don't always go to p7 on HSW+. They can be scheduled there, but unfortunately they still get scheduled to p2/p3 and steal cycles from loads. Intel's optimization manual quotes SKL average *sustained* L1D bandwidth as ~81B/cycle, even though peak is 96B. This is due to those resource conflicts. – Peter Cordes Jul 17 '17 at 05:00
  • You might want to add that on recent Intel all loads or stores that cross a cache line boundary seem to count double, but other misaligned loads or stores are the same as their aligned counterparts. On AMD Ryzen the picture is much [more complex](http://fh.tl/CD). – BeeOnRope Jul 27 '17 at 21:21