A well-known article makes the following statement about the Linux disk cache:
there's absolutely no reason to disable it!
Also:
A healthy Linux system with more than enough memory will, after running for a while, show the following expected and harmless behavior:
free memory is close to 0
used memory is close to total
available memory (or "free + buffers/cache") has enough room (let's say, 20%+ of total)
swap used does not change
These conditions are met in my case, and yet there is a problem. I have production kernel-mode networking code which has to allocate memory in atomic context (kmalloc() with the GFP_ATOMIC flag set). Under high load, while "free memory is close to 0" as expected, my code fails to allocate memory, and this eventually turns into a denial of service.
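To make the failure mode concrete, here is a minimal sketch of the kind of allocation path I mean (the structure and function names are hypothetical placeholders, not my actual code). The point is that a GFP_ATOMIC allocation cannot sleep or trigger direct reclaim, so once free memory (beyond the small atomic reserves) is gone, it simply returns NULL:

    #include <linux/types.h>
    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/skbuff.h>

    /* Hypothetical per-packet context; stands in for whatever gets allocated. */
    struct my_pkt_ctx {
        u32 flow_id;
        void *payload;
    };

    /* Hypothetical receive-path handler: runs in softirq (atomic) context,
     * so it must not sleep and therefore cannot use GFP_KERNEL. */
    static int handle_packet(struct sk_buff *skb)
    {
        struct my_pkt_ctx *ctx;

        /* GFP_ATOMIC cannot wait for the page cache to be reclaimed or
         * written back; under memory pressure this returns NULL. */
        ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
        if (!ctx)
            return -ENOMEM;   /* the packet is effectively dropped */

        /* ... actual processing would go here ... */

        kfree(ctx);
        return 0;
    }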
Obviously, running sync && echo 3 > /proc/sys/vm/drop_caches from cron is not a solution, because of the disk performance penalty. It would also be possible to pick some set of files and turn off caching for them individually (as sketched below), but that does not look like a good or reliable solution either.
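To illustrate what I mean by turning off caching for specific files: as far as I understand, one way to bypass the page cache per file is to open it with O_DIRECT, roughly as in the user-space sketch below (the path and buffer size are just placeholders). This is an assumption about how per-file cache avoidance would look, not something I have deployed:

    #define _GNU_SOURCE          /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical file; O_DIRECT makes I/O bypass the page cache. */
        int fd = open("/var/data/bigfile.bin", O_RDONLY | O_DIRECT);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* O_DIRECT requires the buffer, offset and length to be aligned
         * to the logical block size; 4096 is a common safe value. */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096) != 0) {
            close(fd);
            return 1;
        }

        ssize_t n = read(fd, buf, 4096);   /* read goes straight to the device */
        if (n < 0)
            perror("read");

        free(buf);
        close(fd);
        return 0;
    }

The problem is that such an approach has to be applied to every heavy reader and writer separately, which is exactly why it does not feel reliable.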
The questions are:
- What is a proper and reliable solution in such a case (from the kernel-mode side, the user-mode side, or both)?
- Why is it considered that there can be no reason to disable (or reduce the intensity of) the disk cache?