
Given a 2 processor Nehalem Xeon server with 12GB of RAM (6x2GB), how are memory addresses mapped onto the physical memory modules?

I would imagine that on a single processor Nehalem with 3 identical memory modules, the address space would be striped over the modules to give better memory bandwidth. But with what kind of stripe size? And how does the second processor (+memory) change that picture?

ptman

1 Answer


Intel is not very clear on that; you have to dig into their hardcore technical documentation to find all the details. Here's my understanding. Each processor has an integrated memory controller (IMC). Some Nehalems have triple-channel controllers, some have dual-channel controllers. Each memory module is attached to one of the processors. Triple channel means that accesses are interleaved across three banks of modules; dual channel, across two.

The specific interleaving pattern is configurable to some extent, but, given the design, you will almost inevitably end up with stripes of 64 to 256 bytes.
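To make the interleaving concrete, here's a minimal sketch in C. It assumes a 64-byte stripe and a plain round-robin mapping across three channels; the real controller's mapping is configurable and more involved, so treat this only as an illustration of how consecutive cache lines would spread over the channels.

    /* Illustration only: maps a physical address to a memory channel under an
     * assumed 64-byte round-robin interleave across three channels.  This is
     * not Intel's actual (configurable) mapping. */
    #include <stdio.h>
    #include <stdint.h>

    #define STRIPE_SIZE  64   /* assumed interleave granularity (one cache line) */
    #define NUM_CHANNELS 3    /* triple-channel IMC */

    static unsigned channel_of(uint64_t phys_addr)
    {
        return (unsigned)((phys_addr / STRIPE_SIZE) % NUM_CHANNELS);
    }

    int main(void)
    {
        /* Print which channel each of the first eight cache lines lands on. */
        for (uint64_t addr = 0; addr < 8 * STRIPE_SIZE; addr += STRIPE_SIZE)
            printf("address 0x%04llx -> channel %u\n",
                   (unsigned long long)addr, channel_of(addr));
        return 0;
    }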

If one processor wants to access memory that's attached to the IMC of the other processor, the access goes through both processors (over the QPI link between the sockets) and incurs additional latency.
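If you want to see that remote-access penalty on your own box, something like the following libnuma sketch (compile with -lnuma) pins the process to node 0 and times sequential reads of a buffer placed on node 0 versus node 1. The node numbers, buffer size, and crude timing loop are assumptions for a two-socket machine, not a proper benchmark.

    /* Sketch using libnuma to time local vs. remote reads on a two-socket box.
     * Node numbers and buffer size are assumptions. */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define BUF_SIZE (256u * 1024u * 1024u)  /* 256 MB, large enough to defeat caches */

    static double touch_buffer(volatile char *buf)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < BUF_SIZE; i += 64)  /* stride one cache line */
            (void)buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA not available on this system\n");
            return 1;
        }
        /* Pin ourselves to node 0, then compare buffers placed on node 0 and node 1. */
        numa_run_on_node(0);
        for (int node = 0; node <= 1; node++) {
            char *buf = numa_alloc_onnode(BUF_SIZE, node);
            if (!buf) { perror("numa_alloc_onnode"); return 1; }
            memset(buf, 1, BUF_SIZE);   /* fault the pages in on that node */
            printf("node %d: %.3f s\n", node, touch_buffer(buf));
            numa_free(buf, BUF_SIZE);
        }
        return 0;
    }

On a two-socket Nehalem you would expect the node 1 buffer to take noticeably longer, since every read has to hop across QPI.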

Eugene Smith
  • Do you have any idea whether the address spaces of the two processors are interleaved or not? That is, does the first processor (which in this case has a three-channel memory controller, and thus three memory modules) control the first half of the address space and the second processor the rest, or is the address space somehow interleaved? – ptman Nov 15 '10 at 12:30
  • That should be up to the BIOS and/or the operating system. Read up on NUMA. – Eugene Smith Nov 15 '10 at 12:47
  • I've got some kind of picture of NUMA, but I was hoping for some specifics on the Intel implementation. – ptman Nov 15 '10 at 13:13