34

Just learned these 3 new techniques from https://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system:


To free pagecache:

# echo 1 > /proc/sys/vm/drop_caches

To free dentries and inodes:

# echo 2 > /proc/sys/vm/drop_caches

To free pagecache, dentries and inodes:

# echo 3 > /proc/sys/vm/drop_caches

I am trying to understand what exactly pagecache, dentries and inodes are.

Does freeing them up also remove the useful memcached and/or redis caches?

--

Why am I asking this question? My Amazon EC2 server's RAM was getting filled up over the days, from 6% up to 95% in a matter of 7 days. I am having to run a bi-weekly cronjob to remove these caches, after which memory usage drops to 6% again.
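
For reference, such a root crontab entry might look something like this (the schedule below is only an example; the sync first makes sure dirty pages reach disk before the caches are dropped):

# example root crontab entry: write dirty data back, then drop pagecache, dentries and inodes
0 3 * * 1,4  /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches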

Rakib
  • These approaches should not really have anything to do with memcached or redis. These two applications would be maintaining their own internal caching mechanisms to provide their functionality to the end user, and whether or not your 3 system operations impact them is an implementation detail of Memcached or redis. – jdi Apr 26 '15 at 04:21
  • I'm a bit late to this thread but it would be good to know how you determine that 95% of RAM is used in your VM. Often there is a misconception that all physical memory is used while it is exactly in the buffers+cache we are discussing here. See http://www.linuxatemyram.com/ for a good explanation of those columns. – bfloriang Sep 03 '15 at 15:13
  • Amazon EC2 detailed monitoring reports the memory (RAM) usage and it used to show 95% usage, sometimes even 98-99%. – Rakib Sep 03 '15 at 16:16
  • @syedrakib did you solve the memory issue? – Kassav' Aug 19 '16 at 09:42
  • @kassav Yes. Like I mentioned at the end of the question, I ran the 3rd command via a cron job at 1-hour intervals. – Rakib Aug 19 '16 at 10:15
  • I'm having the same issue on Debian running ruby 2.2.0 and am still considering your solution. Is it a reliable solution when the load increases? – Kassav' Aug 19 '16 at 10:30
  • It worked out for me. You could increase the frequency of the cron job in case you see that you're running out of your memory even faster. – Rakib Aug 19 '16 at 10:45

4 Answers

33

With some oversimplification, let me try to explain in what appears to be the context of your question, because there are multiple possible answers.

It appears you are working with memory caching of directory structures. An inode, in your context, is a data structure that represents a file. A dentry is a data structure that represents a directory entry. These structures could be used to build a memory cache that represents the file structure on disk. To get a directory listing, the OS can go to the dentry cache; if the directory is there, it lists its contents (a series of inodes). If it is not there, it goes to the disk and reads the directory into memory so that it can be used again.
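
To make that concrete, ls -i shows the inode number that each directory entry (dentry) points to; for example (paths and output will of course differ on your system):

# list a directory with inode numbers; each name shown is a directory entry pointing at an inode
ls -li /etc | head -n 5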

The page cache could contain any memory mappings to blocks on disk. That could conceivably be buffered I/O, memory-mapped files, paged areas of executables, anything that the OS could hold in memory from a file.

Your commands drop these caches. Note that only clean (unmodified) pages can be dropped; dirty pages have to be written back to disk first, which is why a sync is usually run beforehand.
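
If you want to see how much memory those caches are using before you flush anything, something like the following should work on a typical modern Linux system (field and column names can vary slightly between kernel and procps versions):

# page cache, buffers and reclaimable slab (where dentries and inodes live)
grep -E 'Cached|Buffers|SReclaimable' /proc/meminfo
# the same information summarised; "buff/cache" is roughly what drop_caches can release
free -m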

user3344003
  • So if I understand correctly, clearing these won't impact the useful in-memory cache of redis and/or memcache? – Rakib Apr 26 '15 at 05:34
  • Probably not. You really shouldn't need to clear caches in most cases anyway. – user3344003 Apr 26 '15 at 17:38
  • I actually do... My AWS EC2 RAM is going from 5% to 95% in about a week and never going down (still don't know why)... I am having to clear these caches on a bi-weekly basis using a cronjob. – Rakib Apr 27 '15 at 12:20
  • Sounds like something else is broken. It's cases like that where you have to do those tricks. – user3344003 Apr 27 '15 at 15:36
8

I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?

user3344003 already gave an exact answer to that specific question, but it's still important to note that those memory structures are dynamically allocated.

When there's no better use for "free memory", memory will be used for those caches, but automatically purged and freed when some other "more important" application wants to allocate memory.

No, those caches don't affect any caches maintained by any applications (including redis and memcached).

My Amazon EC2 server RAM was getting filled up over the days - from 6% to up to 95% in a matter of 7 days. I am having to run a bi-weekly cronjob to remove these cache. Then memory usage drops to 6% again.

Probably you're misinterpreting the situation: your system may just be making efficient use of its resources.

To simplify things a little bit: "free" memory can also be seen as "unused", or, even more dramatically, as a waste of resources: you paid for it, but don't make use of it. That's a very uneconomical situation, and the Linux kernel tries to make some "more useful" use of your "free" memory.

Part of its strategy involves using it to save various kinds of disk I/O by using various dynamically sized memory caches. A quick access to cache memory saves "slow" disk access, so that's often a useful idea.

As soon as a "more important" process wants to allocate memory, the Linux kernel voluntarily frees those caches and makes the memory available to the requesting process. So there's usually no need to "manually free" those caches.

The Linux kernel may even decide to swap out memory of an otherwise idle process to disk (swap space), freeing RAM to be used for "more important" tasks, probably also including to be used as some cache.

So as long as your system is not actively swapping in/out, there's little reason to manually flush caches.
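
An easy way to verify that is to watch swap activity for a while. In the vmstat output below, si and so are pages swapped in and out per second, and sustained non-zero values there are the sign of real memory pressure (a sketch; the exact column layout depends on your procps version):

# sample memory and swap statistics every 5 seconds, 10 samples
vmstat 5 10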

A common case to "manually flush" those caches is purely for benchmark comparison: your first benchmark run may run with "empty" caches and so give poor results, while a second run will show much "better" results (due to the pre-warmed caches). By flushing your caches before any benchmark run, you're removing the "warmed" caches and so your benchmark runs are more "fair" to be compared with each other.
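
For that benchmarking case the usual sequence, run as root, is the same third command from the question preceded by a sync, so that dirty pages are written back before the caches are dropped:

# write dirty pages to disk, then drop pagecache, dentries and inodes before the next run
sync
echo 3 > /proc/sys/vm/drop_caches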

2

A common misconception is that "free memory" is important. Memory is meant to be used.

So let's clear that up:

  • There's used memory, which is where important data is stored, and if that reaches 100% you're dead
  • Then there's cache/buffer memory, which is used as long as there is space to do so. It's optional memory, used mostly to access disk files faster. If you run out of free memory, it will simply free itself and let you access the disk directly.
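
You can see that split on any recent system with the command below; "used" is what applications actually hold, "buff/cache" is the reclaimable cache described above, and "available" estimates how much could still be allocated without swapping (column names are from a reasonably recent procps free):

# human-readable memory summary: used vs. buff/cache vs. available
free -h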

Clearing cached memory as you suggest is in most cases useless; it means you're deactivating an optimization, so you'll get a slowdown.

If you really run out of memory, that is if your "used memory" is high, and you begin to see swap usage, then you must do something.

HOWEVER: there's a known bug on AWS instances, with the dentry cache eating memory for no apparent reason. It's clearly described and solved in this blog.

My own experience with this bug is that "dentry" cache consumes both "used" and "cached" memory and does not seem to release it in time, eventually causing swap. The bug itself can consume resources anyway, so you need to look into it.
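
If you suspect you're hitting that dentry issue, you can watch the dentry slab directly; something like the following (slabtop ships with procps, and /proc/sys/fs/dentry-state is documented with the kernel's fs sysctls) shows whether it keeps growing:

# one-shot list of the largest slab caches by size; a huge "dentry" line at the top is the symptom
slabtop -o -s c | head -n 15
# the first field is the total number of allocated dentries
cat /proc/sys/fs/dentry-state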

Tristan
0

Hate to bring an old thread back from the dead, but I've been dealing with memory issues lately on my Linux Virtual Machines. Unfortunately, even with the virtualization of computing machines being great and the advancements of Linux memory and resource allocation being superb, conflicts occur when the hypervisor acts out what it calls "performance features".

VMware will actively send RAM that hasn't been "written or modified" recently to the disk. When your disk is on a SAN, that means reading from that RAM now runs at 1 Gbps to 10 Gbps at best, if you have a REALLY performant RAID and steady network access (ignoring the fact that the RAM of, say, 100 VMs is now all going through the same SAN). DDR3 RAM operates at 25 Gbps+ on modern systems, so I'll assume you can see the problem with systems running at anywhere from 1/25th to less than 1/2 of the speed anticipated.

The caches on my Linux systems are literally the same speed as the filesystem's disk I/O, meaning they do not help our performance, and the OS's RAM is actively being sent into swap instead of the caches being cleared. This is a huge problem thanks to VMware, not because of Linux, but be aware that cloud infrastructure often does stupid crap like this all the time, unfortunately. You can read more here: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/perf-vsphere-memory_management.pdf or, if you use VMware, you'll surely notice the "allocated memory" vs "active memory" distinction, which is why your VMs will always display a different amount than VMware does.

Tmanok
  • The question was about AWS EC2 which is not based on VMware. Instead, EC2 is using XEN (older instance types) or Linux KVM (newer instance types). And they do not overcommit memory on the virtualization level, so there is nothing like the VMware swapfile in EC2. – Juergen Sep 15 '22 at 13:33