Given: Broadwell CPU with hyperthreading disabled in BIOS
[root@ny4srv03 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 44
On-line CPU(s) list: 0-43
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel
Model name: Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz
BIOS Model name: Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 1
Core(s) per socket: 22
Socket(s): 2
Stepping: 1
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000
BogoMIPS: 4399.69
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 1.4 MiB (44 instances)
L1i: 1.4 MiB (44 instances)
L2: 11 MiB (44 instances)
L3: 110 MiB (2 instances)
NUMA:
NUMA node(s): 2
NUMA node0 CPU(s): 0-21
NUMA node1 CPU(s): 22-43
Vulnerabilities:
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Mitigation; PTE Inversion; VMX vulnerable, SMT disabled
Mds: Vulnerable; SMT disabled
Meltdown: Vulnerable
Mmio stale data: Vulnerable
Retbleed: Not affected
Spec store bypass: Vulnerable
Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Spectre v2: Vulnerable, STIBP: disabled, PBRSB-eIBRS: Not affected
Srbds: Not affected
Tsx async abort: Vulnerable
which, according to the Intel 64 and IA-32 Architectures Software Developer's Manual,
If a processor core is shared by two logical processors, each logical processor can access up to four counters (IA32_PMC0-IA32_PMC3). This is the same as in the prior generation for processors based on the Nehalem microarchitecture. If a processor core is not shared by two logical processors, up to eight general-purpose counters are visible. If CPUID.0AH:EAX[15:8] reports 8 counters, then IA32_PMC4-IA32_PMC7 would occupy MSR addresses 0C5H through 0C8H. Each counter is accompanied by an event select MSR (IA32_PERFEVTSEL4-IA32_PERFEVTSEL7).
should have 8 general-purpose performance counters accessible, and cpuid shows exactly that:
[root@ny4srv03 ~]# cpuid -1 | grep counters
number of counters per logical processor = 0x8 (8)
number of contiguous fixed counters = 0x3 (3)
bit width of fixed counters = 0x30 (48)
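These counts come straight from CPUID leaf 0AH, so they can also be decoded by hand from a raw register dump (e.g. from `cpuid -1 -l 0xa -r`). A minimal sketch, using a hypothetical raw EAX value consistent with the output above:

```shell
# Decode CPUID.0AH fields by hand. The EAX value below is hypothetical --
# substitute the raw EAX reported by e.g. `cpuid -1 -l 0xa -r`.
eax=0x07300803
echo "architectural PMU version: $(( eax & 0xff ))"          # EAX[7:0]
echo "general-purpose counters:  $(( (eax >> 8) & 0xff ))"   # EAX[15:8]
echo "counter bit width:         $(( (eax >> 16) & 0xff ))"  # EAX[23:16]
```

With the value above this prints version 3, 8 counters, 48-bit width, matching what cpuid reports.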
However, if I try to use perf in the following way (under the root account and with kernel.perf_event_paranoid set to -1), I get some strange results:
[root@ny4srv03 ~]# perf stat \
-r 100 \
-e cycles:u \
-e instructions:u \
-e branches:u \
-e branch-misses:u \
-e cache-references:u \
-e cache-misses:u \
-e faults:u \
ls>/dev/null
Performance counter stats for 'ls' (100 runs):
0 cycles:u
668753 instructions:u ( +- 0.01% )
131991 branches:u ( +- 0.00% )
6936 branch-misses:u # 5.25% of all branches ( +- 0.33% )
11105 cache-references:u ( +- 0.13% )
6 cache-misses:u # 0.055 % of all cache refs ( +- 5.86% )
103 faults:u ( +- 0.19% )
0.00100211 +- 0.00000487 seconds time elapsed ( +- 0.49% )
which always shows cycles:u equal to 0, no matter how many times I run perf (note the -r 100 parameter), until I remove one of the branches:u, branch-misses:u, cache-references:u, or cache-misses:u events. In that case perf works as expected:
[root@ny4srv03 ~]# perf stat \
-r 100 \
-e cycles:u \
-e instructions:u \
-e branches:u \
-e branch-misses:u \
-e cache-references:u \
-e faults:u \
ls>/dev/null
Performance counter stats for 'ls' (100 runs):
614142 cycles:u ( +- 0.06% )
668790 instructions:u # 1.09 insn per cycle ( +- 0.00% )
132052 branches:u ( +- 0.00% )
6874 branch-misses:u # 5.21% of all branches ( +- 0.11% )
10735 cache-references:u ( +- 0.05% )
101 faults:u ( +- 0.06% )
0.00095650 +- 0.00000108 seconds time elapsed ( +- 0.11% )
perf also works as expected in these cases:
- When obtaining metrics for the cycles event, either without any modifier or with the :k modifier:
[root@ny4srv03 ~]# perf stat \
-r 100 \
-e cycles \
-e instructions:u \
-e branches:u \
-e branch-misses:u \
-e cache-references:u \
-e cache-misses:u \
-e faults:u \
ls>/dev/null
Performance counter stats for 'ls' (100 runs):
1841276 cycles ( +- 0.79% )
668400 instructions:u ( +- 0.00% )
131966 branches:u ( +- 0.00% )
6121 branch-misses:u # 4.64% of all branches ( +- 0.40% )
10987 cache-references:u ( +- 0.16% )
0 cache-misses:u # 0.000 % of all cache refs
102 faults:u ( +- 0.18% )
0.00102359 +- 0.00000649 seconds time elapsed ( +- 0.63% )
- When hyperthreading is enabled in the BIOS but disabled with the nosmt kernel parameter:
[root@ny4srv03 ~]# perf stat \
-r 100 \
-e cycles:u \
-e instructions:u \
-e branches:u \
-e branch-misses:u \
-e cache-references:u \
-e cache-misses:u \
-e faults:u \
ls>/dev/null
Performance counter stats for 'ls' (100 runs):
618443 cycles:u ( +- 0.39% )
668466 instructions:u # 1.05 insn per cycle ( +- 0.00% )
131968 branches:u ( +- 0.00% )
6529 branch-misses:u # 4.95% of all branches ( +- 0.34% )
11096 cache-references:u ( +- 0.47% )
1 cache-misses:u # 0.010 % of all cache refs ( +- 53.16% )
107 faults:u ( +- 0.18% )
0.00097825 +- 0.00000554 seconds time elapsed ( +- 0.57% )
In this case cpuid also shows that only 4 performance counters are available:
[root@ny4srv03 ~]# cpuid -1 | grep counters
number of counters per logical processor = 0x4 (4)
number of contiguous fixed counters = 0x3 (3)
bit width of fixed counters = 0x30 (48)
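The two configurations can also be told apart from the kernel's point of view via sysfs (this interface exists on reasonably recent kernels, roughly 4.19+; the exact strings are a kernel implementation detail):

```shell
# Distinguish BIOS-disabled HT from nosmt via the kernel's SMT control file.
# Falls back to "unknown" on kernels without this interface.
cat /sys/devices/system/cpu/smt/control 2>/dev/null || echo unknown
# Typically "notsupported"/"notimplemented" when HT is off in the BIOS,
# "forceoff" when booted with nosmt, and "on"/"off" otherwise.
```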
So I'm wondering whether there is a bug in perf or some kind of system misconfiguration. Could you please help?
Update 1
Running perf stat with the -d option shows that the NMI watchdog is enabled:
[root@ny4srv03 likwid]# perf stat \
-e cycles:u \
-e instructions:u \
-e branches:u \
-e branch-misses:u \
-e cache-references:u \
-e cache-misses:u \
-e faults:u \
-d \
ls>/dev/null
Performance counter stats for 'ls':
0 cycles:u
709098 instructions:u
140131 branches:u
6826 branch-misses:u # 4.87% of all branches
11287 cache-references:u
0 cache-misses:u # 0.000 % of all cache refs
104 faults:u
593753 L1-dcache-loads
32677 L1-dcache-load-misses # 5.50% of all L1-dcache accesses
8679 LLC-loads
<not counted> LLC-load-misses (0.00%)
0.001102213 seconds time elapsed
0.000000000 seconds user
0.001134000 seconds sys
Some events weren't counted. Try disabling the NMI watchdog:
echo 0 > /proc/sys/kernel/nmi_watchdog
perf stat ...
echo 1 > /proc/sys/kernel/nmi_watchdog
Disabling it produces the expected result:
echo 0 > /proc/sys/kernel/nmi_watchdog
[root@ny4srv03 likwid]# perf stat \
-e cycles:u \
-e instructions:u \
-e branches:u \
-e branch-misses:u \
-e cache-references:u \
-e cache-misses:u \
-e faults:u \
-d \
ls>/dev/null
Performance counter stats for 'ls':
745760 cycles:u
708833 instructions:u # 0.95 insn per cycle
140122 branches:u
6757 branch-misses:u # 4.82% of all branches
11503 cache-references:u
0 cache-misses:u # 0.000 % of all cache refs
101 faults:u
586223 L1-dcache-loads
32856 L1-dcache-load-misses # 5.60% of all L1-dcache accesses
8794 LLC-loads
29 LLC-load-misses # 0.33% of all LL-cache accesses
0.001000925 seconds time elapsed
0.000000000 seconds user
0.001080000 seconds sys
But this still does not explain why cycles:u is 0 with nmi_watchdog enabled, even though dmesg shows that the watchdog consumes only a single counter:
[ 0.300779] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
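As an aside: if running without the watchdog is acceptable, the echo into /proc only lasts until reboot; the setting can be made persistent through sysctl (a sketch, assuming a sysctl.d-aware distribution; the file name is arbitrary):

```shell
# Persist kernel.nmi_watchdog=0 across reboots (file name is arbitrary).
echo 'kernel.nmi_watchdog = 0' > /etc/sysctl.d/99-nmi-watchdog.conf
sysctl -p /etc/sysctl.d/99-nmi-watchdog.conf   # apply immediately
```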
Update 2
I found this nice comment in the sources of the likwid tool suite:
Please be aware that the counters PMC4-7 are broken on Intel Broadwell. They don't increment if either user- or kernel-level filtering is applied. User-level filtering is default in LIKWID, hence kernel-level filtering is added automatically for PMC4-7. The returned counts can be much higher.
That would explain the behaviour; now it would be interesting to find the origin of this information, if it is indeed accurate.
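One way to probe the claim directly would be to program PMC4 by hand with msr-tools and see whether it increments with user-only filtering. This is only a sketch: the event select encoding below (event 0x3C = unhalted core cycles, umask 0; USR = bit 16, EN = bit 22) and the MSR addresses (IA32_PERFEVTSEL4 at 0x18E, IA32_PMC4 at 0xC5) follow the SDM passage quoted earlier; actually running the wrmsr/rdmsr commands requires root, the msr kernel module, and care not to disturb perf's own counter state.

```shell
# Compute the IA32_PERFEVTSEL value for unhalted core cycles, user-only.
event=0x3C; umask=0x00
usr=$(( 1 << 16 )); en=$(( 1 << 22 ))
val=$(( (umask << 8) | event | usr | en ))
printf 'wrmsr value: 0x%X\n' "$val"
# With msr-tools one would then try (also enabling bit 4 of
# IA32_PERF_GLOBAL_CTRL, MSR 0x38F):
#   wrmsr -p 0 0x18E 0x41003C   # program IA32_PERFEVTSEL4 on CPU 0
#   rdmsr -p 0 0xC5             # read IA32_PMC4; per the likwid comment it
#                               # would stay at 0 on Broadwell with USR set
```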