
I have a process that needs to be extensible by loading shared libraries. Is there a way to run the shared library code in a sandbox environment (other than an external process) so that if it segfaults it doesn't crash the process and has limitations on how much memory it can allocate, the cpu cycles it can use, etc.
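
To make the setup concrete, here is a minimal sketch of the kind of plugin loading I have in mind (the plugin path and the `plugin_run` entry point are hypothetical):

```c
/* Minimal sketch of dlopen()-based plugin loading; link with -ldl.
   The plugin path and the plugin_run symbol are placeholders. */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up an entry point exported by the plugin. */
    int (*plugin_run)(void) = (int (*)(void)) dlsym(handle, "plugin_run");
    if (!plugin_run) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    /* The plugin runs inside this process, so a segfault or runaway
       allocation here affects the whole process. */
    int rc = plugin_run();
    printf("plugin returned %d\n", rc);

    dlclose(handle);
    return 0;
}
```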

IttayD

2 Answers


No. If the shared library segfaults, your process will segfault (the process is executing the library code). If you run it as an external process and use an RPC mechanism, you'd be okay as far as crashes go, but your program would need to detect when the service isn't available (and something would need to restart it). Tools like chroot can sandbox processes, but not the individual libraries that an executable links against.
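
Here is a minimal sketch of that external-process idea, assuming the plugin work is pushed into a forked child (a real design would add an RPC channel and restart logic on top of this; the 64 MB limit and `run_plugin_work` are illustrative placeholders):

```c
/* Sketch: run the plugin in a child process so a segfault there cannot
   take down the parent; the parent detects the crash and can restart it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void run_plugin_work(void) {
    /* Hypothetical: dlopen() the shared library and call into it here. */
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: apply resource limits before touching the plugin. */
        struct rlimit mem = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
        setrlimit(RLIMIT_AS, &mem);      /* cap the child's address space */
        run_plugin_work();
        _exit(EXIT_SUCCESS);
    }

    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status)) {
        /* The child died on a signal (e.g. SIGSEGV); the parent survives. */
        fprintf(stderr, "plugin process died with signal %d\n",
                WTERMSIG(status));
    }
    return 0;
}
```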

Elliott Frisch

I don't think there is a clean way to do it. You could try:

  • Catching segfaults and recovering from them (tricky, architecture-specific, but doable; see the first sketch after this list)
  • Replacing calls to malloc/calloc for the library with an instrumented version that counts the allocated space (how to replace default malloc by code); a sketch follows the list
  • Alternatively, use malloc hooks (http://www.gnu.org/software/libc/manual/html_node/Hooks-for-Malloc.html), though note that newer glibc versions have deprecated these hooks
  • CPU cycles are accounted for at the process level, so I don't think there is any way to get the figure for just the library. The only viable option is to manually measure ticks around every library call that your code makes; see the last sketch below.
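
A sketch of the segfault-catching idea, using sigsetjmp/siglongjmp around a single call (`call_plugin` is a placeholder, and the library's internal state may still be corrupted after a recovery):

```c
/* Sketch: catch SIGSEGV raised inside one call and jump back out.
   Best-effort only; locks and internal library state may be left broken. */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf recover_point;

static void segv_handler(int sig) {
    (void)sig;
    siglongjmp(recover_point, 1);   /* unwind back past the faulting call */
}

static void call_plugin(void) {
    int *p = NULL;
    *p = 42;                        /* stand-in for a crashing library call */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = segv_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    if (sigsetjmp(recover_point, 1) == 0) {
        call_plugin();
        puts("plugin call succeeded");
    } else {
        puts("plugin call segfaulted; recovered");
    }
    return 0;
}
```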
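
For the allocation counting, one option is an LD_PRELOAD interposer that wraps malloc (a sketch; the 64 MB budget and the counter name are made up, and attributing allocations to one specific library would additionally require inspecting the caller's address):

```c
/* Sketch: count bytes requested through malloc by interposing it.
   Build as a shared object and load it with LD_PRELOAD. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <unistd.h>

static size_t allocated_bytes;       /* running total of requested bytes */

void *malloc(size_t size) {
    static void *(*real_malloc)(size_t);
    if (!real_malloc)
        real_malloc = (void *(*)(size_t)) dlsym(RTLD_NEXT, "malloc");

    if (allocated_bytes + size > 64 * 1024 * 1024) {
        /* Budget exceeded: refuse the allocation. write() is used because
           stdio may itself call malloc and recurse. */
        static const char msg[] = "allocation budget exceeded\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        return NULL;
    }
    allocated_bytes += size;
    return real_malloc(size);
}
```

Build it with something like `gcc -shared -fPIC -o count_malloc.so count_malloc.c -ldl` and start the process with `LD_PRELOAD=./count_malloc.so`.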
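
And a sketch of the per-call CPU measurement, using the thread CPU clock around each call (`plugin_run` is a stand-in for the real library entry point):

```c
/* Sketch: measure the CPU time consumed by a single plugin call, since
   the kernel only accounts CPU usage per process/thread, not per library. */
#include <stdio.h>
#include <time.h>

static void plugin_run(void) {
    for (volatile long i = 0; i < 10000000; i++)   /* busy-work placeholder */
        ;
}

int main(void) {
    struct timespec start, end;

    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &start);
    plugin_run();
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &end);

    double cpu_ms = (end.tv_sec - start.tv_sec) * 1e3 +
                    (end.tv_nsec - start.tv_nsec) / 1e6;
    printf("plugin call used %.3f ms of CPU time\n", cpu_ms);
    return 0;
}
```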

In essence, this would be fun to try, but I recommend going with the separate-process approach and using RPC, quotas, ulimits, etc.

jmajnert