Given: Broadwell CPU with hyperthreading disabled in BIOS

[root@ny4srv03 ~]# lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         46 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  44
  On-line CPU(s) list:   0-43
Vendor ID:               GenuineIntel
  BIOS Vendor ID:        Intel
  Model name:            Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz
    BIOS Model name:     Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz
    CPU family:          6
    Model:               79
    Thread(s) per core:  1
    Core(s) per socket:  22
    Socket(s):           2
    Stepping:            1
    CPU max MHz:         3700.0000
    CPU min MHz:         1200.0000
    BogoMIPS:            4399.69
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aper
                         fmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_
                         l3 invpcid_single intel_ppin tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local d
                         therm ida arat pln pts
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):
  L1d:                   1.4 MiB (44 instances)
  L1i:                   1.4 MiB (44 instances)
  L2:                    11 MiB (44 instances)
  L3:                    110 MiB (2 instances)
NUMA:
  NUMA node(s):          2
  NUMA node0 CPU(s):     0-21
  NUMA node1 CPU(s):     22-43
Vulnerabilities:
  Itlb multihit:         KVM: Mitigation: VMX disabled
  L1tf:                  Mitigation; PTE Inversion; VMX vulnerable, SMT disabled
  Mds:                   Vulnerable; SMT disabled
  Meltdown:              Vulnerable
  Mmio stale data:       Vulnerable
  Retbleed:              Not affected
  Spec store bypass:     Vulnerable
  Spectre v1:            Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
  Spectre v2:            Vulnerable, STIBP: disabled, PBRSB-eIBRS: Not affected
  Srbds:                 Not affected
  Tsx async abort:       Vulnerable

which, according to the Intel 64 and IA-32 Architectures Software Developer's Manual,

If a processor core is shared by two logical processors, each logical processor can access up to four counters (IA32_PMC0-IA32_PMC3). This is the same as in the prior generation for processors based on Nehalem microarchitecture. If a processor core is not shared by two logical processors, up to eight general-purpose counters are visible. If CPUID.0AH:EAX[15:8] reports 8 counters, then IA32_PMC4-IA32_PMC7 would occupy MSR addresses 0C5H through 0C8H. Each counter is accompanied by an event select MSR (IA32_PERFEVTSEL4-IA32_PERFEVTSEL7).

should have 8 general-purpose performance counters accessible, and cpuid shows exactly that:

[root@ny4srv03 ~]# cpuid -1 | grep counters
      number of counters per logical processor = 0x8 (8)
      number of contiguous fixed counters      = 0x3 (3)
      bit width of fixed counters              = 0x30 (48)
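The cpuid lines above are a decode of CPUID leaf 0AH, EAX. A minimal sketch of that decoding (field layout per the SDM; the sample EAX value `0x07300803` is a hypothetical encoding consistent with the output above, not one read from this machine):

```python
def decode_pmu_eax(eax):
    """Decode the architectural-PMU fields of CPUID.0AH:EAX (layout per the Intel SDM)."""
    return {
        "version":          eax & 0xFF,          # bits  7:0  - PMU version ID
        "num_gp_counters":  (eax >> 8) & 0xFF,   # bits 15:8  - counters per logical processor
        "gp_counter_width": (eax >> 16) & 0xFF,  # bits 23:16 - bit width of those counters
    }

# Hypothetical EAX consistent with the cpuid output above: 8 counters, 48 bits wide.
print(decode_pmu_eax(0x07300803))
# {'version': 3, 'num_gp_counters': 8, 'gp_counter_width': 48}
```

With HT enabled the same field drops to 4, which is what the cpuid output further down shows.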

However, if I try to use perf in the following way (under the root account and with kernel.perf_event_paranoid set to -1), I get some strange results:

[root@ny4srv03 ~]# perf stat \
  -r 100 \
  -e cycles:u \
  -e instructions:u \
  -e branches:u \
  -e branch-misses:u \
  -e cache-references:u \
  -e cache-misses:u \
  -e faults:u \
  ls>/dev/null

 Performance counter stats for 'ls' (100 runs):

                 0      cycles:u
            668753      instructions:u                                                ( +-  0.01% )
            131991      branches:u                                                    ( +-  0.00% )
              6936      branch-misses:u           #    5.25% of all branches          ( +-  0.33% )
             11105      cache-references:u                                            ( +-  0.13% )
                 6      cache-misses:u            #    0.055 % of all cache refs      ( +-  5.86% )
               103      faults:u                                                      ( +-  0.19% )

        0.00100211 +- 0.00000487 seconds time elapsed  ( +-  0.49% )

cycles:u is always reported as 0, no matter how many times I run perf (note the -r 100 parameter), until I remove one of the branches:u, branch-misses:u, cache-references:u, or cache-misses:u events. In that case perf works as expected:

[root@ny4srv03 ~]# perf stat \
  -r 100 \
  -e cycles:u \
  -e instructions:u \
  -e branches:u \
  -e branch-misses:u \
  -e cache-references:u \
  -e faults:u \
  ls>/dev/null

 Performance counter stats for 'ls' (100 runs):

            614142      cycles:u                                                      ( +-  0.06% )
            668790      instructions:u            #    1.09  insn per cycle           ( +-  0.00% )
            132052      branches:u                                                    ( +-  0.00% )
              6874      branch-misses:u           #    5.21% of all branches          ( +-  0.11% )
             10735      cache-references:u                                            ( +-  0.05% )
               101      faults:u                                                      ( +-  0.06% )

        0.00095650 +- 0.00000108 seconds time elapsed  ( +-  0.11% )

perf also works as expected in these cases:

  1. When metrics for the cycles event are obtained either without any modifier or with the :k modifier:
[root@ny4srv03 ~]# perf stat \
  -r 100 \
  -e cycles \
  -e instructions:u \
  -e branches:u \
  -e branch-misses:u \
  -e cache-references:u \
  -e cache-misses:u \
  -e faults:u \
  ls>/dev/null

 Performance counter stats for 'ls' (100 runs):

           1841276      cycles                                                        ( +-  0.79% )
            668400      instructions:u                                                ( +-  0.00% )
            131966      branches:u                                                    ( +-  0.00% )
              6121      branch-misses:u           #    4.64% of all branches          ( +-  0.40% )
             10987      cache-references:u                                            ( +-  0.16% )
                 0      cache-misses:u            #    0.000 % of all cache refs
               102      faults:u                                                      ( +-  0.18% )

        0.00102359 +- 0.00000649 seconds time elapsed  ( +-  0.63% )

  2. When hyperthreading is enabled in BIOS but disabled by the nosmt kernel parameter:
[root@ny4srv03 ~]# perf stat \
  -r 100 \
  -e cycles:u \
  -e instructions:u \
  -e branches:u \
  -e branch-misses:u \
  -e cache-references:u \
  -e cache-misses:u \
  -e faults:u \
  ls>/dev/null

 Performance counter stats for 'ls' (100 runs):

            618443      cycles:u                                                      ( +-  0.39% )
            668466      instructions:u            #    1.05  insn per cycle           ( +-  0.00% )
            131968      branches:u                                                    ( +-  0.00% )
              6529      branch-misses:u           #    4.95% of all branches          ( +-  0.34% )
             11096      cache-references:u                                            ( +-  0.47% )
                 1      cache-misses:u            #    0.010 % of all cache refs      ( +- 53.16% )
               107      faults:u                                                      ( +-  0.18% )

        0.00097825 +- 0.00000554 seconds time elapsed  ( +-  0.57% )

In this case, cpuid also shows that only 4 performance counters are available:

[root@ny4srv03 ~]# cpuid -1 | grep counters
      number of counters per logical processor = 0x4 (4)
      number of contiguous fixed counters      = 0x3 (3)
      bit width of fixed counters              = 0x30 (48)

So I'm wondering whether there is a bug in perf or some kind of system misconfiguration. Could you please help?

Update 1

Running perf with -d shows that the NMI watchdog is enabled:

[root@ny4srv03 likwid]# perf stat \
   -e cycles:u \
   -e instructions:u \
   -e branches:u \
   -e branch-misses:u \
   -e cache-references:u \
   -e cache-misses:u \
   -e faults:u \
   -d \
   ls>/dev/null

 Performance counter stats for 'ls':

                 0      cycles:u
            709098      instructions:u
            140131      branches:u
              6826      branch-misses:u           #    4.87% of all branches
             11287      cache-references:u
                 0      cache-misses:u            #    0.000 % of all cache refs
               104      faults:u
            593753      L1-dcache-loads
             32677      L1-dcache-load-misses     #    5.50% of all L1-dcache accesses
              8679      LLC-loads
     <not counted>      LLC-load-misses                                               (0.00%)

       0.001102213 seconds time elapsed

       0.000000000 seconds user
       0.001134000 seconds sys


Some events weren't counted. Try disabling the NMI watchdog:
    echo 0 > /proc/sys/kernel/nmi_watchdog
    perf stat ...
    echo 1 > /proc/sys/kernel/nmi_watchdog

Disabling it helps to get the expected result:

echo 0 > /proc/sys/kernel/nmi_watchdog

[root@ny4srv03 likwid]# perf stat \
   -e cycles:u \
   -e instructions:u \
   -e branches:u \
   -e branch-misses:u \
   -e cache-references:u \
   -e cache-misses:u \
   -e faults:u \
   -d \
   ls>/dev/null

 Performance counter stats for 'ls':

            745760      cycles:u
            708833      instructions:u            #    0.95  insn per cycle
            140122      branches:u
              6757      branch-misses:u           #    4.82% of all branches
             11503      cache-references:u
                 0      cache-misses:u            #    0.000 % of all cache refs
               101      faults:u
            586223      L1-dcache-loads
             32856      L1-dcache-load-misses     #    5.60% of all L1-dcache accesses
              8794      LLC-loads
                29      LLC-load-misses           #    0.33% of all LL-cache accesses

       0.001000925 seconds time elapsed

       0.000000000 seconds user
       0.001080000 seconds sys

But it still does not explain why cycles:u is 0 with nmi_watchdog enabled, even though dmesg shows

[    0.300779] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.

Update 2

I found this nice comment in the sources of the likwid tool suite, which states:

Please be aware that the counters PMC4-7 are broken on Intel Broadwell. They don't increment if either user- or kernel-level filtering is applied. User-level filtering is default in LIKWID, hence kernel-level filtering is added automatically for PMC4-7. The returned counts can be much higher.

So it can explain the behaviour; now it would be interesting to find the original source of this information.

szhem
  • With HT enabled, the 8 programmable counters per physical core are partitioned 4:4 between logical cores. (In addition to the fixed counters that can only count the `cycles` and `instructions` events on each logical core). But `perf` doesn't know about those fixed counters when deciding how to multiplex, so the set of events it picks first might not include `cycles`. And for *very* short tasks like `ls`, it might exit before cycling to the next set of 4 events. Oh, but that doesn't explain it, you're using the default set of events, which are few enough that it doesn't have to multiplex. – Peter Cordes Mar 01 '23 at 09:07
  • IDK why booting with `nosmt` doesn't make all 8 counters per logical core available, but try a task that runs longer than 1 ms (and much of that inside system calls)! Like `awk 'BEGIN{for(i=0;i<10000000;i++){}}'` spends a bunch of CPU time in user-space. If that doesn't help, check errata for Broadwell; there have been some for PMU stuff. – Peter Cordes Mar 01 '23 at 09:08
  • @PeterCordes `awk 'BEGIN{for(i=0;i<10000000;i++){}}'` shows exactly the same behaviour with `cycles:u` equal to `0`. I also found this page https://github.com/RRZE-HPC/likwid/wiki/Broadwell that warns that `Counters PMC4-7 can only be measured if the 'kernel' option is set.` and it can explain the behaviour, but I can't find the original source of this info. – szhem Mar 01 '23 at 09:28
  • It should be using `IA32_FIXED_CTR1` for that event. https://perfmon-events.intel.com/broadwell_server.html doesn't show any errata that apply to `CPU_CLK_UNHALTED.THREAD`. The precise (PEBS) version of the same event needs a programmable counter, `CPU_CLK_UNHALTED.THREAD_P`; you might see counts for that with `perf stat --all-user -e cpu_clk_unhalted.thread_p,instructions` etc. (e.g. `-e task-clock,context-switches,cpu-migrations,page-faults,cycles,cpu_clk_unhalted.thread_p,instructions` is the default software events + cycles_p and instructions.) – Peter Cordes Mar 01 '23 at 09:38
  • I don't see any relevant errata that mention "clock", "cycles", or "event" in https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/5th-gen-core-family-spec-update.pdf (BDM errata) or https://cdrdv2-public.intel.com/332054/332054_Intel%20Xeon%20D-1500-D-1500%20NS%20and%20Intel%20Xeon%20D-1600%20NS%20Processor%20Fam%20SU_Rev026US.pdf (BDE errata). The PMU is part of the core, and this isn't an uncore event, so it should be the same across client / server chips. – Peter Cordes Mar 01 '23 at 09:51
  • Interesting that the NMI watchdog has any interaction. Could be a bug in the kernel's PAPI / `perf_event_open` stuff that `perf` uses. The `cycles` event should be using the fixed counter dedicated to that, but I'd assumed the NMI watchdog would use one of the programmable counters. At least I assumed it did on my Skylake without really testing if I could count 4 different events that aren't instructions or cycles. But maybe if the fixed counter can be set to count something even when the core is in a sleep state, it could work as a watchdog? Or watchdog depends on the CPU clock ticking – Peter Cordes Mar 01 '23 at 10:31
  • @PeterCordes is it possible that `:u` modifier consumes one of the programmable counters even if there is a dedicated one? As I can see from my experiments `cycles:k` and `cycles` are always non-zero comparing to `cycles:u`. What I can't understand is how to understand how many of programmable counters are consumed and what for during `perf` execution. – szhem Mar 01 '23 at 15:28
  • @PeterCordes I found BDE104 in the docs you provided earlier that states ` General-Purpose Performance Monitoring Counters 4-7 Will Not Increment Do Not Count With USR Mode Only Filtering`, so it seems to explain the behaviour. I would be happy to up and accept your answer if you provide it. – szhem Mar 01 '23 at 18:18
  • On my Skylake, `cycles:u` is counted on a fixed counter. I have hyperthreading enabled so I only have 4 programmable counters per logical core. `perf stat -etask-clock,context-switches,cpu-migrations,page-faults,cycles:u,instructions,uops_issued.any,uops_executed.thread,idq.mite_uops,ild_stall.lcp` counts 6 total events; 2 fixed (cycles:u and instructions) and 4 others, without multiplexing. Adding one more, like `uops_dispatched_port.port_0`, results in multiplexing (numbers like `(86.32%)` in a new column at the right.) Taking out `cycles:u` and `instructions` doesn't avoid multiplexing. – Peter Cordes Mar 01 '23 at 19:05
  • For Skylake at least, the fixed counters support user/kernel/both masking, and `perf` chooses to take advantage of it. You could run the same experiment to see if your Broadwell system counts `cycles:u` on one of the 4 / 8 programmable counters instead of a fixed counter. (Using `--all-user` for my testing also didn't make a difference.) The `perf stat --all-user` default events include 4 non-fixed events plus cycles and instructions, so with HT enabled it couldn't count that many without multiplexing if `cycles:u` took a programmable counter. Or with the NMI watchdog taking any... – Peter Cordes Mar 01 '23 at 19:13
  • I forgot that your earlier testing was with the NMI watchdog enabled; I've had it disabled on my system for ages. It apparently occupies the fixed counter that `cycles` (:u or otherwise) could use, so you get multiplexing with the default set of events. But removing `cycles` from that set makes the multiplexing go away, so it must be counting `cycles` on a programmable counter. – Peter Cordes Mar 01 '23 at 19:16
  • I am on a skylake client machine (6700K), booting with `nosmt=force` also does not make all 8 counters per logical core available. `cpuid` shows 4 counters only. – CH_skar Jul 06 '23 at 18:15

1 Answer

This is erratum BDE104, with the NMI watchdog occupying a fixed counter so cycles has to use a programmable counter.


From Intel's Xeon-D "specification update" (errata) document (I didn't find one for regular Xeon v4)

BDE104: General-Purpose Performance Monitoring Counters 4-7 Do Not Count With USR Mode Only Filtering

Problem: The IA32_PMC4-7 MSR (C5H-C8H) general-purpose performance monitoring counters will not count when the associated CPL filter selection in IA32_PERFEVTSELx MSR's (18AH-18DH) USR field (bit 16) is set while OS field (bit 17) is not set.

Implication: Software depending upon IA32_PMC4-7 to count only USR events will not operate as expected. Counting OS only events or OS and USR events together is unaffected by this erratum.

Workaround: None identified.
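The erratum's trigger condition can be expressed as a check on the IA32_PERFEVTSELx CPL-filter bits (bit positions taken from the erratum text above; the function name is my own, for illustration):

```python
# IA32_PERFEVTSELx CPL-filter bits, per the erratum text above.
USR = 1 << 16  # count events at CPL > 0 (user mode)
OS  = 1 << 17  # count events at CPL == 0 (kernel mode)

def bde104_affected(pmc_index, perfevtsel):
    """True when erratum BDE104 stops this counter from counting:
    only PMC4-PMC7, and only when USR is set while OS is clear."""
    return pmc_index >= 4 and bool(perfevtsel & USR) and not (perfevtsel & OS)

print(bde104_affected(4, USR))       # True  - USR-only filtering on PMC4 hits the erratum
print(bde104_affected(4, USR | OS))  # False - counting USR+OS together is unaffected
print(bde104_affected(3, USR))       # False - PMC0-3 are unaffected
```

This matches the likwid workaround quoted earlier: likwid silently adds kernel-level filtering (sets the OS bit) for PMC4-7, at the cost of inflated counts.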


The NMI watchdog occupies fixed counter 1, the one that can normally count the cycles event. This makes perf choose a programmable counter for it, apparently picking one of the buggy ones.

With the NMI watchdog disabled, perf uses fixed counter #1 for cycles. (It apparently supports user/kernel/both masking.)

I tested on my Skylake system with HT enabled, so 4 programmable counters per logical core, plus the fixed counters.

  • NMI watchdog disabled: cycles+instructions + 4 other events - no multiplexing.
  • NMI watchdog disabled: cycles+instructions + 5 other events - multiplexing. (Numbers like (86.32%) in a new column at the right, indicating how much of the time this event was active for; perf extrapolates from that fraction to the total time.)
  • NMI watchdog disabled: 5 events not including cycles or instructions - multiplexing. (Confirms that cycles and instructions use fixed counters).

Confirming the limit of 4 arbitrary events plus any of cycles, instructions vs. with NMI watchdog enabled:

  • NMI watchdog enabled: 4 events not including cycles or instructions - no multiplexing, confirming that the NMI watchdog left all 4 programmable counters free

  • NMI watchdog enabled: 4 events plus cycles - multiplexing, confirming that cycles now has to use a programmable counter, implying that the NMI watchdog used that fixed counter.

  • NMI watchdog enabled: cycles+instructions + 3 other events - no multiplexing, as we expect. Further confirming that cycles becomes one of the events that competes for a programmable counter.

This is all the same if I use perf stat --all-user or cycles:u.
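Those experiments can be condensed into a small bookkeeping model (a sketch under the assumptions established above: instructions always fits on fixed counter 0, cycles fits on fixed counter 1 only when the NMI watchdog is off, and there are 4 programmable counters per logical core with HT enabled):

```python
def needs_multiplexing(events, nmi_watchdog, n_programmable=4):
    """Return True if perf would have to time-multiplex this event set."""
    programmable = [e for e in events if e not in ("cycles", "instructions")]
    # With the NMI watchdog occupying fixed counter 1, cycles falls back
    # to competing for a programmable counter.
    if "cycles" in events and nmi_watchdog:
        programmable.append("cycles")
    return len(programmable) > n_programmable

others = ["ev1", "ev2", "ev3", "ev4"]
print(needs_multiplexing(["cycles", "instructions"] + others, nmi_watchdog=False))           # False
print(needs_multiplexing(["cycles", "instructions"] + others + ["ev5"], nmi_watchdog=False)) # True
print(needs_multiplexing(others, nmi_watchdog=True))                                         # False
print(needs_multiplexing(["cycles"] + others, nmi_watchdog=True))                            # True
```

Each print line corresponds to one of the bullet-point experiments above, and the model reproduces all of them.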

For example (with some horizontal whitespace removed for SO's narrow code blocks)

# with NMI watchdog enabled
$ taskset -c 0 perf stat --all-user -etask-clock,context-switches,cpu-migrations,page-faults,cycles,instructions,uops_issued.any,uops_executed.thread,idq.mite_uops,idq.dsb_uops    -r1 ./a.out

 Performance counter stats for './a.out':

             40.74 msec task-clock            #    0.994 CPUs utilized          
                 0      context-switches      #    0.000 /sec                   
                 0      cpu-migrations        #    0.000 /sec                   
               119      page-faults           #    2.921 K/sec                  
       165,566,262      cycles                #    4.064 GHz      (61.39%)
       160,597,987      instructions          #    0.97  insn per cycle (83.46%)
       286,675,168      uops_issued.any       #    7.036 G/sec       (85.28%)
       286,258,415      uops_executed.thread  #    7.026 G/sec       (85.28%)
        76,619,024      idq.mite_uops         #    1.881 G/sec       (85.28%)
        77,238,565      idq.dsb_uops          #    1.896 G/sec       (82.77%)

       0.040990242 seconds time elapsed

       0.040912000 seconds user
       0.000000000 seconds sys
$ echo 0 | sudo tee  /proc/sys/kernel/nmi_watchdog
0
$ taskset -c 0 perf stat --all-user -etask-clock,context-switches,cpu-migrations,page-faults,cycles,instructions,uops_issued.any,uops_executed.thread,idq.mite_uops,idq.dsb_uops    -r1 ./a.out

 Performance counter stats for './a.out':

             45.01 msec task-clock            #    0.992 CPUs utilized          
                 0      context-switches      #    0.000 /sec                   
                 0      cpu-migrations        #    0.000 /sec                   
               120      page-faults           #    2.666 K/sec                  
       177,494,136      cycles                #    3.943 GHz                    
       160,265,384      instructions          #    0.90  insn per cycle         
       287,253,352      uops_issued.any       #    6.382 G/sec                  
       286,705,343      uops_executed.thread  #    6.369 G/sec                  
        78,189,827      idq.mite_uops         #    1.737 G/sec                  
        75,911,530      idq.dsb_uops          #    1.686 G/sec                  

       0.045389998 seconds time elapsed

       0.045165000 seconds user
       0.000000000 seconds sys

https://perfmon-events.intel.com/broadwell_server.html says there's a 3rd fixed counter for CPU_CLK_UNHALTED.REF_TSC. So it's separate from the counter that counts INST_RETIRED.ANY (counter #0) or CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.THREAD_ANY (counter #1).

ref_tsc is a fixed-frequency count, not core clock cycles; it might be better if the NMI watchdog could use that, since I expect it's much less widely used. The cycles event is CPU_CLK_UNHALTED.THREAD on Intel CPUs, counting core clock cycles while this logical core is active. perf counts it by default.
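To summarize the fixed-counter assignments discussed here (a summary per the SDM, not read from any machine):

```python
# Architectural fixed-function PMU counters (Intel SDM; Broadwell has all three)
FIXED_COUNTERS = {
    0: "INST_RETIRED.ANY",         # backs perf's 'instructions' event
    1: "CPU_CLK_UNHALTED.THREAD",  # backs perf's 'cycles' event
    2: "CPU_CLK_UNHALTED.REF_TSC", # reference (fixed-frequency) cycles
}
print(FIXED_COUNTERS[1])  # the counter the NMI watchdog ends up occupying
```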

Peter Cordes