
OK, in a comment on this question:

How to clean caches used by the Linux kernel

ypnos claims that:

"Applications will always be first citizens for memory and don't have to fight with cache for it."

Well, I think my cache is rebellious and does not want to accept its social class. I ran the experiment here:

http://www.linuxatemyram.com/play.html

step 1:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          3015       2901        113          0         15       2282
-/+ buffers/cache:        603       2411
Swap:         2406       2406          0

So 2282 MB is used by the cache and 113 MB is free.

Now:

$ ./munch
Allocated 1 MB
Allocated 2 MB
Allocated 3 MB
Allocated 4 MB
.
.
.
Allocated 265 MB
Allocated 266 MB
Allocated 267 MB
Allocated 268 MB
Allocated 269 MB
Killed

OK, Linux generously gave me another 156 MB and that's it! So, how can I tell Linux that my programs are more important than that 2282 MB of cache?

Extra info: my /home is encrypted.

More people with the same problem (these reports make the encryption hypothesis rather implausible):

https://serverfault.com/questions/171164/can-you-set-a-minimum-linux-disk-buffer-size

and

https://askubuntu.com/questions/41778/computer-freezing-on-almost-full-ram-possibly-disk-cache-problem

Syed Lavasani
  • What is your `overcommit_memory` setting (in `/proc/sys/vm/overcommit_memory`)? It is possible that the lion's share of the free memory is committed to other processes that haven't touched it yet. `/proc/meminfo` also gives far more detailed information than `free` does. – caf Nov 13 '11 at 11:51

3 Answers


The thing to know about caching in the kernel is that it's designed to be as efficient as possible. This often means that things put into the cache are left there as long as nothing else is asking for the memory.

This is the kernel betting that the cached data will be asked for again. If no one else needs the memory, there's little benefit in freeing it up.

bigendian
  • But "I" need the memory, and Linux doesn't give it to me because it has devoted it to the cache. Sometimes Linux closes my browsers, showing an "out of memory" error, while 2 GB is used by cache, and this is a serious problem. – Syed Lavasani Nov 10 '11 at 08:02
  • Sorry, I missed your original point. That said, your problem is even stranger than reported. The output of free actually shows that you have 2411 MB free (the -/+ buffers/cache line). Perhaps the actual free memory was consumed by another process while munch was running? You could make your munch program sleep for 1 second between memory allocations, then run "free -m -s 1" to monitor memory usage while munch runs. – bigendian Nov 10 '11 at 20:02
  • You may also be running into a limit on the amount of memory your process can allocate. Check the output of "ulimit -a" for any memory constraints on your shell. – bigendian Nov 10 '11 at 20:03
  • On a good day I can allocate as much as 2 GB or so, so I don't think there's a limit: for example, this is from just now: `Allocated 1519 MB Allocated 1520 MB Allocated 1521 MB ^C` (I stopped it; otherwise the OOM killer would go into effect, close my browser, and my comment would be lost). Furthermore, `max memory size (kbytes, -m) unlimited` from my ulimit -a. – Syed Lavasani Dec 12 '11 at 09:39
  • What should I look for in free -m -s 1? I have a system monitor widget; cache is light green and user memory is dark green. On a good day the dark green goes up and the light green shrinks; on a bad day, when the light green reaches the top, the computer freezes and then my program gets killed. There's a kind of cache that accumulates after a few days and doesn't go away. – Syed Lavasani Dec 12 '11 at 09:55

I am not sure about Linux-specific behaviour, but a good OS will keep track of how many times a memory page has been accessed, and how long ago. If a page wasn't accessed much lately, the OS can swap it out and use the RAM for caching. Also, allocated but unused memory can be sent to swap as well, because programs sometimes allocate more than they actually need, so many memory pages would otherwise just sit there filling your RAM.

Radu
  • But the problem is that now I need RAM and Linux does not give it to me, even though the RAM is technically available for programs (the cache should only use RAM if nobody else needs it). – Syed Lavasani Nov 10 '11 at 18:13
  • Well, did you actually use that RAM, or just allocate it? If you use it, the cache should shrink and your program should get all the RAM. – Radu Nov 10 '11 at 23:23
  • As you can see from the link, the program runs this line after allocating memory: `memset(buffer, 0, 1024*1024)`, which means that it does use the RAM. Yet the cache stays rock solid and my program dies. – Syed Lavasani Dec 12 '11 at 09:32

I found out that if I turn off swap with

# swapoff -a

the problem goes away. With swap enabled, when I ask for more memory, Linux tries to move the cache to the swap; the swap then gets full, and Linux halts the whole operation instead of dropping the cache. This results in "out of memory". But without swap, Linux knows that it has no hope but to drop the cache in the first place.

I think it's a bug in the Linux kernel.

One of the links added to the question suggests that

sysctl -w vm.min_free_kbytes=65536

helps. For me, with a 64 MB margin I can still easily get into trouble. I'm now working with a 128 MB margin, and when the greedy cache reaches it the machine becomes very slow, but unlike before it doesn't freeze. I'll try a 256 MB margin and see whether there is an improvement or not.
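If these two workarounds help, they can be made to survive a reboot; the paths below are the conventional ones on most distributions (root required, and the margin value is just the 128 MB example, adjust to taste):

```shell
# Turn swap off immediately; to keep it off across reboots, also
# comment out the swap entries in /etc/fstab by hand.
swapoff -a

# Reserve a 128 MB free-memory margin right now...
sysctl -w vm.min_free_kbytes=131072

# ...and persist the setting for the next boot.
echo 'vm.min_free_kbytes = 131072' >> /etc/sysctl.conf
```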

Syed Lavasani
  • Cache is never "moved to swap". Dirty pagecache pages are written back to their backing storage, clean pagecache pages are simply dropped. – caf Nov 13 '11 at 11:48