I've just read that, in multiprocessing, when a waiting process is scheduled back in, the entire cache becomes invalid and we see a lot of cache misses. I'm wondering how long a process runs continuously before it goes into a wait state. Is it long enough for the newly updated cache to be used meaningfully? But then, won't the other processes be waiting too long? I'd appreciate any help. Thanks!
1 Answer
The simple answer is: yes, it is usually long enough. If it weren't, putting a cache on the chip would not be worthwhile. The length of the time slice is controlled by the operating system; you can see a post on it here: Linux Scheduler Time Slice.
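If you want to inspect a time slice programmatically on Linux, here is a minimal sketch using the POSIX sched_rr_get_interval() call. Note this reports the slice for the SCHED_RR real-time policy; the default scheduler (CFS) sizes its slices dynamically, so treat the value as illustrative rather than exact:

```c
#include <stdio.h>
#include <sched.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* pid 0 means "the calling process". For SCHED_RR tasks this
     * returns the round-robin quantum; for the default SCHED_OTHER
     * policy the kernel's CFS scheduler computes slices dynamically,
     * so this is only a rough illustration of the order of magnitude. */
    if (sched_rr_get_interval(0, &ts) == 0)
        printf("RR time slice: %ld.%09ld s\n",
               (long)ts.tv_sec, ts.tv_nsec);
    else
        perror("sched_rr_get_interval");

    return 0;
}
```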
However, there are cases where it might not be long enough, typically when you do a lot of I/O with little computation in between. A good example is a program like telnet that reads a character, outputs a character, and then goes back to sleep. That said, I don't think it is correct to say that the entire cache is invalidated. That would only happen if the new process touched enough memory to evict all of the previous process's cache entries.
To avoid issues like this, use buffered file I/O or database access, cache data in the application, and avoid character-by-character processing; see the sketch below.
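As an illustration of block-oriented rather than character-by-character I/O, here is a minimal sketch in C (the file name "input.txt" is just a placeholder). Reading a large block per call keeps the process on the CPU doing useful work between system calls, instead of potentially sleeping after every character:

```c
#include <stdio.h>
#include <stdlib.h>

#define BUF_SIZE (64 * 1024)  /* one large read instead of many tiny ones */

int main(void)
{
    /* "input.txt" is a placeholder name for this example. */
    FILE *fp = fopen("input.txt", "rb");
    if (!fp) { perror("fopen"); return EXIT_FAILURE; }

    char *buf = malloc(BUF_SIZE);
    if (!buf) { fclose(fp); return EXIT_FAILURE; }

    size_t n, total = 0;

    /* Each fread() pulls in a big block, so the process gets a long
     * stretch of computation per system call rather than blocking
     * (and risking a context switch) on every single character. */
    while ((n = fread(buf, 1, BUF_SIZE, fp)) > 0)
        total += n;  /* process the block here */

    printf("read %zu bytes\n", total);
    free(buf);
    fclose(fp);
    return 0;
}
```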
