I am using the PAPI high-level API to check TLB misses in a simple program looping through an array, but I am seeing larger numbers than expected.

In other simple test cases, the results seem quite reasonable, which leads me to think the results are real and the extra misses are due to a hardware prefetch or similar.

Can anyone explain the numbers or point me to some error in my use of PAPI?

int events[] = {PAPI_TLB_TL};
long long values[1];
char * databuf = (char *) malloc(4096 * 32);

if (PAPI_start_counters(events, 1) != PAPI_OK) exit(-1);
if (PAPI_read_counters(values, 1) != PAPI_OK) exit(-1); //Zeros the counters

for(int i=0; i < 32; ++i){
    databuf[4096 * i] = 'a';
}

if (PAPI_read_counters(values, 1) != PAPI_OK) exit(-1); //Extracts the counters

printf("%lld\n", values[0]);

I expected the printed number to be in the region of 32, or at least some multiple of it, but I consistently get a result of 93 or above (and not consistently above 96, i.e. not simply 3 misses per iteration). I am running pinned to a core with nothing else on it (apart from timer interrupts).

I am on Nehalem and not using huge pages, so there are 64 entries in the DTLB (512 in the L2 TLB).

Mysticial
jmetcalfe
  • Curiosity, why do you care about TLB misses? – Tony The Lion Feb 19 '13 at 14:53
  • @Tony Working on an application where they could potentially be avoided by a prefetch (if cross page boundary prefetching is available on the particular architecture), avoiding the overhead of the miss on the hot path. I suspect that the cost of the tlb miss is dwarfed by the cache miss (which can't be avoided), but wanted to verify this assumption and came across this problem while doing so. – jmetcalfe Feb 19 '13 at 15:09
  • @jmetcalfe Try using `calloc()` instead of `malloc()`. – Mysticial Feb 19 '13 at 15:33
  • @Mysticial That reduces the reported misses to 32. Similarly if I manually loop through the array beforehand to effectively prefault it. I don't follow why there are still 32 misses (it seems like all of them should still happily fit into the TLB and stay there until the second loop). – jmetcalfe Feb 19 '13 at 15:49
  • Ha! I guessed right. But, I still can't explain the final 32 misses that are still left. – Mysticial Feb 19 '13 at 15:49
  • @Mysticial Turns out that if I put my prefault loop after the PAPI_start_counters call (before the read that zeros them), it goes down to zero. So seems like a peculiarity of PAPI. I am still interested in the calloc/malloc difference though - still don't see why the original numbers were so high. – jmetcalfe Feb 19 '13 at 16:08

2 Answers

Based on the comments:

  • ~90 misses if malloc() is used.
  • 32 misses if calloc() is used or if the array is iterated through beforehand.

The reason is lazy allocation (demand paging). The OS isn't actually giving you the memory until you touch it.

The first touch of each page results in a page fault. The OS traps this page fault and allocates the page on the fly (which involves zeroing it, among other things). That overhead is what produces all the extra TLB misses.

But if you use calloc() or you touch all the memory ahead of time, you move this overhead to before you start the counters. Hence the smaller result.

As for the 32 remaining misses... I have no idea.
(Or as mentioned in the comments, it's probably the PAPI interference.)

Mysticial
  • Per my comment, the extra 32 misses seem to be PAPI interference. I actually thought of this before, but thought I was avoiding a page fault by using `mallopt(M_MMAP_MAX, 0)` to force it to use sbrk (not included in the question since I didn't want to include irrelevant stuff!). Apparently that doesn't work. – jmetcalfe Feb 19 '13 at 16:18
  •
    There is no assembly code in your answer, let me help you out: `lea ebx, [2*eax + 1]` – fredoverflow Feb 19 '13 at 16:53
  • 1) malloc doesn't zero out the memory at all. It never has. That would effectively make it calloc. 2) calloc uses an optimization to assign all the pages to memory that's fixed to 0's. it then marks those pages as copy on write, so when you write them, it triggers a page fault. Reading them will read 0's. – J.D. May 19 '21 at 21:29
This is possibly because you jump by 4096 bytes on every loop iteration:

for(int i=0; i < 32; ++i){
    databuf[4096 * i] = 'a';
}

There is a good chance that you get a cache miss for every access.

junix
  • The whole point of the test is to incur a DTLB miss every time. I am looking for an explanation of why that number is so high (i.e. more than one DTLB miss per iteration). – jmetcalfe Feb 19 '13 at 16:09