
I recently implemented a security mechanism for Linux which hooks into system calls. Now I have to measure the overhead it causes. The project requires comparing the execution time of typical Linux apps with and without the mechanism. By typical Linux apps I mean things like gzipping a 1 GB file, running 'find /', or grepping files. The main goal is to show the overhead across different types of tasks: CPU-bound, I/O-bound, etc.
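For concreteness, one candidate command per category might look like this (the commands and file names are only illustrative):

    # CPU-bound: compress a large file at maximum effort
    gzip -9 -c big.file > /dev/null
    # I/O-bound: walk the whole filesystem tree
    find / > /dev/null 2>&1
    # syscall-heavy: scan many small files
    grep -r "pattern" /usr/share > /dev/null 2>&1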

The question is: how do I organise the tests so that they are reliable? The first important point is that my mechanism works only in kernel space, so it is the system time that is relevant to compare. I can use the 'time' command for that, but is it the most accurate way of measuring system time? Another idea is to run those apps in long loops to minimise error. Should the loops then be inside or outside the time command? If they are outside, I will get many results: should I choose the min, max, median, or average?
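Here is a minimal sketch of what I have in mind for collecting samples, assuming GNU time is installed as /usr/bin/time (the workload, file name, and run count are placeholders):

    #!/bin/bash
    # Run the workload repeatedly, recording only kernel-mode CPU seconds (%S).
    RUNS=30
    rm -f sys_times.txt
    for i in $(seq "$RUNS"); do
        # -o selects the output file, -a appends to it, %S prints sys time
        /usr/bin/time -a -o sys_times.txt -f "%S" gzip -c big.file > /dev/null
    done
    # The median is less sensitive to outliers (cron jobs, cold caches) than the mean.
    sort -n sys_times.txt | awk -v n="$RUNS" 'NR == int((n + 1) / 2) { print "median sys time:", $1 }'

Looping outside the time command yields one sample per run, so outliers can at least be inspected; looping inside yields a single aggregate number that hides them.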

Thanks for any suggestions.

Łukasz Sowa
    Start by using the standard Linux benchmark: kernel compilation. It's relatively syscall heavy. – ninjalj Dec 21 '11 at 19:40
    What security mechanism is this? – Corey Henderson Dec 21 '11 at 20:09
  • @ninjalj: Thanks for the suggestion about kernel compilation. It'll become one of my tests for sure :). – Łukasz Sowa Dec 22 '11 at 02:18
  • @CoreyHenderson: it's a cgroup subsystem that allows you to disallow certain system calls inside a given control group. If you're interested I can send/post a patch for you. – Łukasz Sowa Dec 22 '11 at 02:20
  • I'm very interested :) I've been working on a security module that does something similar; https://github.com/cormander/tpe-lkm – Corey Henderson Dec 22 '11 at 02:55
  • @CoreyHenderson: https://github.com/luksow/syscalls-cgroup I appreciate any comments :). Sorry for posting it as a patch file, but I haven't had enough time recently to push it properly to GitHub. BTW: Have you tried pushing your mechanism upstream? It looks promising. – Łukasz Sowa Dec 23 '11 at 03:02
  • Thanks for the link, don't worry about it being a patch. It looks interesting. I haven't tried pushing mine upstream yet; I really don't have the time to manage that with some of the other stuff I have going on. Maybe later this next year. – Corey Henderson Dec 26 '11 at 18:02

1 Answer


I think you instead want to measure a typical application payload (as ninjalj's comment suggests, compiling the kernel could be a good payload). You probably don't want to measure the overhead inside each syscall itself, or even inside the kernel as a whole.

The reason for this is that most applications spend much more time and resources in user space than in kernel land (i.e. in syscalls), so the overhead inside syscalls is a "second-order" effect and probably doesn't matter as much. Of course, there are likely exceptions.
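As a sketch, measuring such a payload under both configurations could look like this (the source tree, config target, and -j value are placeholders):

    # inside a kernel source tree, run once with and once without your mechanism
    make defconfig && make clean
    /usr/bin/time -f "real %e  user %U  sys %S" make -j4 vmlinux
    # repeat after toggling the mechanism and compare, above all, the sys column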

Perhaps the Phoronix Test Suite might be relevant.

You might also be interested in OProfile.
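A minimal session with the legacy opcontrol front-end might look like this (the vmlinux path is a placeholder; it must be the uncompressed image matching your running kernel if you want kernel symbols resolved):

    opcontrol --init                       # load the oprofile kernel module
    opcontrol --vmlinux=/path/to/vmlinux   # placeholder: uncompressed kernel image
    opcontrol --start
    gzip -c big.file > /dev/null           # the workload under study
    opcontrol --stop && opcontrol --dump
    opreport --symbols | head              # shows where samples landed, kernel included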

See also this answer and this question.

Basile Starynkevitch
  • Yup, I feel that a typical application payload is key. However, mechanisms similar to mine cause meaningful overhead, around 30% if they are ptrace-based, so I have to show that my mechanism is much faster. The main goal is to show that the overhead is less than 5% (see the calculation sketch after this thread). Any other suggestions for typical applications besides kernel compilation? They should differ in character, e.g. CPU-bound vs. I/O-bound. – Łukasz Sowa Dec 22 '11 at 02:30
  • The Phoronix Test Suite has several applications. And you could also try major Linux server programs (Apache, MySQL)... – Basile Starynkevitch Dec 22 '11 at 07:04
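As for the 5% target mentioned above, once there is a median sys time for each configuration, the relative overhead is a one-liner (the numbers below are placeholders, not measurements):

    # overhead in percent, given baseline and hooked median sys times
    awk -v base=12.34 -v hooked=12.80 'BEGIN { printf "overhead: %.1f%%\n", (hooked - base) / base * 100 }'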