
I just wrote a question about caching and it got me thinking: is using a server process with a strict API any slower than a server library statically linked into the application (possibly running in its own thread)?

If it is slower, how much overhead is there? The target OS is Linux, but most of my development and testing is on Windows.

Community

2 Answers


Yes, it is slower, because it involves context switches and the extra work of copying data around. Avoiding that overhead is, for example, one reason why SQLite (an in-process library rather than a database server) is popular.

As for how much overhead: "it depends", but the answer is likely "not enough to be a problem for you". As always, if in doubt, the only thing to do is to try both ways and benchmark/profile them.
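
To make "benchmark it" concrete, here is a rough, hypothetical sketch (POSIX C, not from the original question): it times a plain in-process function call against a round-trip over a pipe to a forked child doing the same trivial work. The 64-byte message size, the iteration count, and the handle() stand-in are all arbitrary assumptions; substitute your real request/response shapes.

    /* Hypothetical micro-benchmark: in-process call vs. pipe round-trip to a
     * forked child. Compile with: cc -O2 bench.c   (sizes are illustrative). */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    enum { ITERS = 100000, MSG = 64 };

    static uint64_t now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    /* Stand-in for the statically linked "server library" call. */
    static void handle(char *buf) { buf[0] ^= 1; }

    int main(void) {
        char buf[MSG] = {0};

        /* 1. Direct in-process call. */
        uint64_t t0 = now_ns();
        for (int i = 0; i < ITERS; i++) handle(buf);
        uint64_t t1 = now_ns();

        /* 2. Same work behind a separate process, one pipe each way. */
        int req[2], rsp[2];
        if (pipe(req) != 0 || pipe(rsp) != 0) return 1;
        if (fork() == 0) {                      /* child: tiny echo "server" */
            close(req[1]); close(rsp[0]);
            char c[MSG];
            while (read(req[0], c, MSG) == MSG) {
                handle(c);
                write(rsp[1], c, MSG);
            }
            _exit(0);
        }
        close(req[0]); close(rsp[1]);

        uint64_t t2 = now_ns();
        for (int i = 0; i < ITERS; i++) {
            write(req[1], buf, MSG);
            read(rsp[0], buf, MSG);
        }
        uint64_t t3 = now_ns();

        close(req[1]);                          /* EOF lets the child exit */
        wait(NULL);

        printf("in-process: %8.1f ns/call\n", (double)(t1 - t0) / ITERS);
        printf("via pipes : %8.1f ns/call\n", (double)(t3 - t2) / ITERS);
        return 0;
    }

On a typical Linux box the pipe round-trip lands in the microsecond range per call versus nanoseconds for the direct call, which is exactly the context-switch-plus-copy overhead mentioned above; whether that matters depends on how chatty your API is.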

blueshift

It depends.

I don't think context-switching is really any different in Linux between threads and processes. However:

  • Starting a new process (e.g. with posix_spawn or fork/exec) is slower than starting a new thread, because the kernel has to load a new process image, which invokes the dynamic linker and does other setup work (a rough timing sketch follows this list).

  • Multiple-process systems can sometimes use a lot more memory than multithreaded ones, depending on which data structures you share. That, of course, can hurt performance by leaving less memory for everything else.
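
For the first bullet, here is what "measure it yourself" could look like; this is a hypothetical sketch, not part of the answer. /bin/true and the iteration count are assumptions about the target system, and you need to link with -pthread. It spawns and reaps a trivial process N times, then creates and joins a do-nothing thread N times.

    /* Hypothetical comparison of process start-up vs. thread start-up cost.
     * Compile with: cc -O2 startup.c -pthread */
    #include <pthread.h>
    #include <spawn.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/wait.h>
    #include <time.h>

    extern char **environ;

    static uint64_t now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    static void *noop(void *arg) { return arg; }

    int main(void) {
        enum { N = 200 };
        char *argv[] = { (char *)"true", NULL };

        /* Spawn and reap a minimal external program N times. */
        uint64_t t0 = now_ns();
        for (int i = 0; i < N; i++) {
            pid_t pid;
            if (posix_spawn(&pid, "/bin/true", NULL, NULL, argv, environ) == 0)
                waitpid(pid, NULL, 0);
        }
        uint64_t t1 = now_ns();

        /* Create and join a do-nothing thread N times. */
        uint64_t t2 = now_ns();
        for (int i = 0; i < N; i++) {
            pthread_t th;
            if (pthread_create(&th, NULL, noop, NULL) == 0)
                pthread_join(th, NULL);
        }
        uint64_t t3 = now_ns();

        printf("posix_spawn + wait : %8.1f us each\n", (t1 - t0) / 1000.0 / N);
        printf("pthread_create+join: %8.1f us each\n", (t3 - t2) / 1000.0 / N);
        return 0;
    }

Expect the process path to come out substantially slower, mostly because of the exec and dynamic-linker work the bullet describes.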

Processes which "fork but don't exec" are somewhere in between processes and threads. However, they come with their own problems (for example, many libraries become confused if they share their file descriptors with another process).
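
As a small illustration of the "libraries become confused" point (my example, not MarkR's): stdio buffers are duplicated by fork, so with stdout redirected to a file the line below typically shows up twice in the output, once flushed by each process.

    /* fork-without-exec pitfall: the unflushed stdio buffer is copied into
     * the child, so both processes flush it on exit.
     *   cc demo.c && ./a.out > out && cat out   -> line appears twice */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        printf("queued before fork\n");  /* buffered, not yet written, when
                                            stdout goes to a file */
        fork();                          /* both processes now own a copy of
                                            the unflushed buffer */
        return 0;                        /* each exit flushes its own copy */
    }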

MarkR
  • At least on ARM there is a big performance hit on process switches vs thread switches as they involve a cache flush due to virtual address space change. – blueshift Nov 09 '11 at 07:27
  • @blueshift: IIRC newer ARMs no longer have virtually-indexed virtually-tagged caches, so this doesn't apply to newer ARMs. OTOH, you may need to flush the TLB and the branch predictor. – ninjalj Nov 09 '11 at 10:01