I'm having a hard time interpreting the performance event reporting of perf on Intel CPUs.
Consider the following simple program, which mainly reads and writes memory:
#include <stdint.h>
#include <stdio.h>

volatile uint32_t a;
volatile uint32_t b;

int main() {
    printf("&a=%p\n&b=%p\n", (void *)&a, (void *)&b);
    for (size_t i = 0; i < 1000000000LL; i++) {
        a ^= (uint32_t) i;
        b += (uint32_t) i;
        b ^= a;
    }
    return 0;
}
I compile it with gcc -O2 and run it under perf:
# gcc -g -O2 a.c
# perf stat -a ./a.out
&a=0x55a4bcf5f038
&b=0x55a4bcf5f034
Performance counter stats for 'system wide':
32,646.97 msec cpu-clock # 15.974 CPUs utilized
374 context-switches # 0.011 K/sec
1 cpu-migrations # 0.000 K/sec
1 page-faults # 0.000 K/sec
10,176,974,023 cycles # 0.312 GHz
13,010,322,410 instructions # 1.28 insn per cycle
1,002,214,919 branches # 30.699 M/sec
123,960 branch-misses # 0.01% of all branches
2.043727462 seconds time elapsed
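As a sanity check on the counters: 13,010,322,410 instructions over 10^9 iterations is about 13 instructions per iteration, which matches the loop body in the annotated listing below, and 10,176,974,023 cycles works out to roughly 10 cycles per iteration.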
# perf record -a ./a.out
&a=0x5589cc1fd038
&b=0x5589cc1fd034
[ perf record: Woken up 3 times to write data ]
[ perf record: Captured and wrote 0.997 MB perf.data (9269 samples) ]
# perf annotate
The output of perf annotate (the percentage column is the share of samples attributed to each instruction; the load/store comments are mine):
Percent│ for(size_t i = 0; i < 1000000000LL; i ++) {
│ xor %eax,%eax
│ nop
│ a ^= (uint32_t) i;
│28: mov a,%edx // 32-bit load
│ xor %eax,%edx
9.74 │ mov %edx,a // 32-bit store
│ b += (uint32_t) i;
12.12 │ mov b,%edx // 32-bit load
8.79 │ add %eax,%edx
│ for(size_t i = 0; i < 1000000000LL; i ++) {
│ add $0x1,%rax
│ b += (uint32_t) i;
18.69 │ mov %edx,b // 32-bit store
│ b ^= a;
0.04 │ mov a,%ecx // 32-bit load
22.39 │ mov b,%edx // 32-bit load
8.92 │ xor %ecx,%edx
19.31 │ mov %edx,b // 32-bit store
│ for(size_t i = 0; i < 1000000000LL; i ++) {
│ cmp $0x3b9aca00,%rax
│ ↑ jne 28
│ }
│ return 0;
│ }
│ xor %eax,%eax
│ add $0x8,%rsp
│ ← retq
My observations:
- From the 1.28 insn per cycle I conclude that the program is mainly memory-bound.
- The printed addresses show that a and b are adjacent (4 bytes apart), so they sit in the same 64-byte cache line (a quick way to test this assumption is sketched below).
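If the shared cache line mattered, moving the variables apart should change the profile. A minimal, untested sketch (assuming C11 and the 64-byte line size of these CPUs):

/* Force each variable onto its own 64-byte cache line. */
_Alignas(64) volatile uint32_t a;
_Alignas(64) volatile uint32_t b;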
My Questions:
- Shouldn't CPU time be distributed more consistently across the various memory loads and stores?
- Why is the CPU time of the 1st memory load (mov a,%edx) zero?
- Why does the 3rd load (mov a,%ecx) get 0.04%, while the load right next to it (mov b,%edx) gets 22.39%?
- Why do some instructions take 0 time? The loop consists of 13 instructions, so each one should contribute some observable time.
Notes:
OS: Linux 4.19.0-amd64; CPU: Intel Core i9-9900K; the system is otherwise idle (also tested on an i7-7700 with the same result).