
I have been working on a time-sensitive project. Because of some undesired spikes in the timing, I had to dig a bit deeper.

Scenario:

I have a kernel module that is pinned to a CPU core, and this CPU core is also listed in isolcpus in the kernel boot parameters. Here are my kernel boot parameters (cmdline):

intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=1 intel_idle.max_cstate=0 processor.max_cstate=0 nohz_full=7-11 isolcpus=7-11 mce=off rcu_nocbs=7-11 nosoftlockup idle=poll cpuidle.off=1 powersave=off nonmi_ipi nowatchdog

I ran the following command (I am trying to profile just CPU 8 at the moment: --cpu=8 restricts sampling to CPU 8, while taskset pins test.sh itself to CPU 9):

sudo ./perf record -e context-switches -a -g --cpu=8 taskset -c 9 ./test.sh

**EDIT 1 - Additional Information**

Kernel Version: 4.15.12

My kernel module sends packets synchronously every X time units; currently it is configured to send one every 50 ms.

I have simplified test.sh for this question. It takes several parameters, but the important point is that the script invokes the kernel module.

For instance, my kernel module exposes a procfs entry. When a write event is triggered on this entry, the module creates a new kthread, binds it to CPU 8, and starts generating a packet every 50 ms.
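Roughly, the flow looks like the sketch below (simplified for this question; send_packet() is a placeholder for the real transmit code, and a real module would also guard against starting the thread twice):

```c
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/err.h>

static struct task_struct *tx_thread;

/* Runs pinned on CPU 8; emits one packet roughly every 50 ms. */
static int tx_fn(void *data)
{
	while (!kthread_should_stop()) {
		/* send_packet();  <-- placeholder for the real transmit code */
		usleep_range(50000, 50100);
	}
	return 0;
}

/* A write to /proc/pkt_gen spawns the sender thread. */
static ssize_t pkt_write(struct file *file, const char __user *buf,
			 size_t len, loff_t *off)
{
	tx_thread = kthread_create(tx_fn, NULL, "pkt_tx");
	if (IS_ERR(tx_thread))
		return PTR_ERR(tx_thread);
	kthread_bind(tx_thread, 8);	/* pin to the isolated CPU */
	wake_up_process(tx_thread);
	return len;
}

static const struct file_operations pkt_fops = {
	.owner = THIS_MODULE,
	.write = pkt_write,
};

static int __init pkt_init(void)
{
	return proc_create("pkt_gen", 0220, NULL, &pkt_fops) ? 0 : -ENOMEM;
}

static void __exit pkt_exit(void)
{
	remove_proc_entry("pkt_gen", NULL);
	if (tx_thread)
		kthread_stop(tx_thread);
}

module_init(pkt_init);
module_exit(pkt_exit);
MODULE_LICENSE("GPL");
```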

To avoid collisions and context switches, I moved this logic into kernel space. I also set the affinity of my script to a different CPU than the one the kernel module's thread runs on.

What I have observed is a bit of jitter in the sending times, possibly caused by these context switches.

Here is the output after running perf report:

# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 8  of event 'context-switches'
# Event count (approx.): 39
#
# Children      Self  Command      Shared Object     Symbol
# ........  ........  ...........  ................  .................
#
    69.23%    69.23%  :-1          [kernel.vmlinux]  [k] do_task_dead
            |
            ---do_task_dead

    25.64%    25.64%  swapper      [kernel.vmlinux]  [k] schedule_idle
            |
            ---schedule_idle

     2.56%     2.56%  :2100        [kernel.vmlinux]  [k] _cond_resched
            |
            ---_cond_resched

     2.56%     2.56%  kworker/8:1  [kernel.vmlinux]  [k] schedule
            |
            ---schedule

It says there have been 8 context-switch samples (with an approximate event count of 39). Also, I could not understand what :-1 actually means in the Command column of the first do_task_dead() row. It would be great if anyone could give me some directions for digging deeper into this issue.

**EDIT 2 - perf script report and cpu_idle analysis results**

swapper     0 [008] 64409.434193:          1 context-switches:
                  aceea8 schedule_idle (/lib/modules/4.15.12/build/vmlinux)

:-1    -1 [008] 64410.434267:          1 context-switches:
                  2ac066 do_task_dead (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 64410.442240:          1 context-switches:
                  aceea8 schedule_idle (/lib/modules/4.15.12/build/vmlinux)

:29026 29026 [008] 64411.442313:          1 context-switches:
                  acee0d _cond_resched (/lib/modules/4.15.12/build/vmlinux)

kworker/8:1   181 [008] 64411.442318:          1 context-switches:
                  acebf2 schedule (/lib/modules/4.15.12/build/vmlinux)

:-1    -1 [008] 64411.442327:          1 context-switches:
                  2ac066 do_task_dead (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 64411.466238:          8 context-switches:
                  aceea8 schedule_idle (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 64414.538207:         31 context-switches:
                  aceea8 schedule_idle (/lib/modules/4.15.12/build/vmlinux)

Running with the power:cpu_idle event, here is the output of perf script:

swapper     0 [008] 65787.514565: power:cpu_idle: state=4294967295 cpu_id=8
                  ad3a2f cpu_idle_poll (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 65788.514653: power:cpu_idle: state=0 cpu_id=8
                  ad39d0 cpu_idle_poll (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 65788.522618: power:cpu_idle: state=4294967295 cpu_id=8
                  ad3a2f cpu_idle_poll (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 65789.522693: power:cpu_idle: state=0 cpu_id=8
                  ad39d0 cpu_idle_poll (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 65789.546577: power:cpu_idle: state=4294967295 cpu_id=8
                  ad3a2f cpu_idle_poll (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 65790.546648: power:cpu_idle: state=0 cpu_id=8
                  ad39d0 cpu_idle_poll (/lib/modules/4.15.12/build/vmlinux)

swapper     0 [008] 65790.570574: power:cpu_idle: state=4294967295 cpu_id=8
                  ad3a2f cpu_idle_poll (/lib/modules/4.15.12/build/vmlinux)
....

And perf report shows:

# Samples: 22  of event 'power:cpu_idle'
# Event count (approx.): 22
#
# Children      Self  Trace output
# ........  ........  .........................
#
    50.00%    50.00%  state=0 cpu_id=8
            |
            ---cpu_idle_poll

    50.00%    50.00%  state=4294967295 cpu_id=8
            |
            ---cpu_idle_poll
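As far as I understand (please correct me if I am wrong), state=4294967295 is simply -1 stored in a u32: the kernel's power tracepoints log -1 (PWR_EVENT_EXIT in include/trace/events/power.h) when a CPU leaves an idle state, so each state=0 / state=4294967295 pair is one entry/exit of the poll-idle loop. A tiny user-space check of the wrap-around:

```c
#include <stdio.h>

int main(void)
{
	/* The cpu_idle tracepoint stores the idle state as a u32, so the
	 * idle-exit marker -1 (PWR_EVENT_EXIT) is printed as 4294967295. */
	printf("%u\n", (unsigned int)-1);	/* prints 4294967295 */
	return 0;
}
```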

Thank You,

Cooshal.

cooshal
  • Some ideas: 1) use `perf script` to see the actual events and their order, 2) use `-e power:cpu_idle` (requires debugfs / tracepoints) to also see cpuidle state changes. About `:-1` - I believe it means the `tid` was recorded as `-1`, but I have no idea why that might be. About your question: it is not quite clear what issue you think you have and what your expectation is. What does `test.sh` do? What does your kernel module do? What is your kernel version? – Zulan Aug 29 '18 at 09:04
  • Hi! Thank you for your inputs. I will try those things now. Regarding further information, I have updated my original post. – cooshal Aug 29 '18 at 09:42

1 Answer


Zulan is quite close and all his suggestions should be followed through.

From the man page of perf report:

The command column in the perf report output refers to the process from which the samples were collected. In per-thread/per-process mode, this is always the name of the monitored command. But in cpu-wide mode, the command can vary.

Because you are measuring per-CPU context-switch events, as can be seen from your perf record ... --cpu=8 ... command, perf report reports the PID/TID of the next/previous task. You can see this here -

per-cpu context switch records pid/tid

Now, the value -1 refers to a process that is dead, i.e. one that has gone beyond the state of being a zombie. This means the task_struct members now point to already-freed memory and must not be dereferenced. The do_task_dead method clearly reflects this: it is where the PID of the process ends up being recorded as -1 and subsequently reported by perf report.
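For context, do_task_dead is the very last thing an exiting task executes: it marks itself TASK_DEAD and schedules away for the final time, which is exactly the context switch that perf sampled. In recent kernels it looks roughly like this (a simplified sketch, not verbatim 4.15 source):

```c
/* kernel/sched/core.c (simplified sketch) */
void __noreturn do_task_dead(void)
{
	/* Mark the task dead; the final put_task_struct() happens in
	 * finish_task_switch() once the switch below completes. */
	set_special_state(TASK_DEAD);

	/* Tell the freezer to ignore us: */
	current->flags |= PF_NOFREEZE;

	__schedule(false);	/* switch away for the last time */
	BUG();

	/* Avoid "noreturn function does return": */
	for (;;)
		cpu_relax();
}
```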

There has been an extensive discussion about this problem. Initially, a value of 0 was used for such a process state in the perf report output, but as you may have guessed, pid=0 refers to the idle thread, and hence a value of -1 is used instead.
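The resulting logic lives in kernel/events/core.c; as of kernels around v4.14 and later it looks roughly like this (paraphrased sketch):

```c
/* kernel/events/core.c (sketch of the PID-resolution logic) */
static u32 perf_event_pid_type(struct perf_event *event,
			       struct task_struct *p, enum pid_type type)
{
	u32 nr;

	/* Only top-level events carry the PID namespace they were created in. */
	if (event->parent)
		event = event->parent;

	nr = __task_pid_nr_ns(p, type, event->ns);
	/* A dead task has no PID left; report -1 rather than 0,
	 * since PID 0 already means the idle thread. */
	if (!nr && !pid_alive(p))
		nr = -1;
	return nr;
}
```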

Arnabjyoti Kalita
  • Hi, thank you very much for clarifying this. It seems a bit clearer now, at least on the code side. But I am not sure about the source of this behaviour; I will inspect it further. Following Zulan's suggestions, I ran perf for those events, but I cannot make much sense of the output; I have pasted the dumps in the question. Probably because of the kernel boot parameter `idle=poll`, I see a lot of invocations of `cpu_idle_poll` and `schedule_idle`. But thank you very much for your inputs. – cooshal Aug 29 '18 at 13:03