
I'm putting together a small patch for the cachegrind/callgrind tool in valgrind which will auto-detect, using completely generic code, CPU instruction and cache configuration (right now only x86/x64 auto-configures, and other architectures don't provide CPUID type configuration to non-privileged code). This code will need to execute entirely in a non-privileged context i.e. pure user mode code. It also needs to be portable across very different POSIX implementations, so grokking /proc/cpuinfo won't do as one of our destination systems doesn't have such a thing.

Detecting the frequency of the CPU, the number of caches, their sizes, and even the cache line size can all be done using 100% generic POSIX code with no CPU-specific opcodes whatsoever (just a lot of reasonable assumptions, such as that adding two numbers together will, absent memory or register dependency stalls, probably execute in a single cycle). This part is fairly straightforward.

What isn't so straightforward, and why I ask StackOverflow, is how to detect cache line associativity for a given cache? Associativity is how many places in a cache can contain a given cache line from main memory. I can see that L1 cache associativity could be detected, but L2 cache? Surely the L1 associativity gets in the way?

I appreciate this is probably a problem which cannot be solved. But I throw it onto StackOverflow and hope someone knows something I don't. Note that if we fail here, I'll simply hard-code an associativity default of four-way, assuming it wouldn't make a huge difference to the results.

Thanks,
Niall

Niall Douglas
  • Consider starting a bounty. – 0x90 Apr 15 '13 at 10:50
  • What I've done instead is to set the open source library release process at BlackBerry in motion. One day we'll get the completely generic config detection library made publicly available, and then I'll link to it here. That library hardcodes associativity to 4. Hopefully someone someday will submit a patch with something better. – Niall Douglas Apr 16 '13 at 20:44

3 Answers


Here's a scheme:

Use a memory access pattern with a stride S and a number of unique elements accessed N. The test first touches each unique element, then measures the average time per access by running through the same pattern a very large number of times.

Example: for S = 2 and N = 4 the address pattern would be 0,2,4,6,0,2,4,6,0,2,4,6,...

Consider a multi-level cache hierarchy. You can make the following reasonable assumptions:

  • The size of the (n+1)th-level cache is a power of two times the size of the nth-level cache.
  • The associativity of the (n+1)th-level cache is also a power of two times the associativity of the nth-level cache.

These two assumptions allow us to say that if two addresses map to the same set in the (n+1)th cache (say L2), then they must map to the same set in the nth cache (say L1).

Say you know the sizes of L1, L2 caches. You need to find the associativity of L2 cache.

  • set stride S = size of L2 cache (so that every access maps to the same set in L2, and in L1 too)
  • vary N (by powers of 2)

You get the following regimes:

  • Regime 1: N <= associativity of L1 (all accesses hit in L1)
  • Regime 2: associativity of L1 < N <= associativity of L2 (all accesses miss in L1 but hit in L2)
  • Regime 3: N > associativity of L2 (all accesses miss in L2)

So, if you plot average access time against N (when S = size of L2), you will see a step-like plot. The end of the lowest step gives you the associativity of L1. The next step gives you the associativity of L2.

You can repeat the same procedure between L2 and L3, and so on. Please let me know if that helps. The method of obtaining cache parameters by varying the stride of a memory access pattern is similar to that used by the LMBENCH benchmark; I don't know whether lmbench infers associativity too.

Neha Karanjkar
  • This is pretty much what I tried too. Unfortunately, on recent Intel chips the prefetchers spot that you are striding at a regular interval and prefetch the lines, thus ruining the results. I then tried randomly permuting the strides to confuse the prefetchers, but even there Intel can keep track of four separate streams of strides, so it defeated me. – Niall Douglas Apr 21 '13 at 18:34
  • I think on most Intel processors it is possible to turn off hardware prefetching. – Subodh Apr 23 '13 at 08:14
  • Turning off hardware prefetching: http://stackoverflow.com/questions/784041/how-do-i-programatically-disable-hardware-prefetching – Subodh Apr 23 '13 at 08:22
  • That's a kernel-mode-only operation, and I asked about user-mode-only code. It also only works on Sandy Bridge; on Ivy Bridge and later it has no effect. Also, the original question was about completely generic code. – Niall Douglas May 09 '13 at 01:53

Could you write a small program that only accesses lines from the same set? Then you could increase the stack distance between the accesses; when the execution time rises dramatically, you can assume you have reached the associativity.

It's probably not very stable, but it might give you a lead. I hope it helps.

Maxime Chéramy

For the x86 platform you can use the cpuid instruction:

See http://www.intel.com/content/www/us/en/processors/processor-identification-cpuid-instruction-note.html for details.

You need something like the following (note that the outputs are 32-bit values, and that cache-parameter leaves such as leaf 4 also take a sub-leaf index in ECX, so ECX should be an input as well rather than left uninitialised):

unsigned int _eax, _ebx, _ecx, _edx;
unsigned int op = func;      /* CPUID leaf number, e.g. 4 for cache parameters */
unsigned int subleaf = 0;    /* sub-leaf index (the cache number for leaf 4) */

asm ("cpuid"
    : "=a" (_eax),
      "=b" (_ebx),
      "=c" (_ecx),
      "=d" (_edx)
    : "a" (op), "c" (subleaf)
);

Then use the info according to the doc in the link mentioned above.

Shay