
There are two other related posts:

NoSpamLogger.java Maximum memory usage reached Cassandra

in cassandra Maximum memory usage reached (536870912 bytes), cannot allocate chunk of 1048576 bytes
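For reference, the figures in that message decompose cleanly into power-of-two sizes; a quick sanity check (plain arithmetic, not tied to any Cassandra API):

```python
# The cache cap reported in the log message, in bytes.
cache_cap = 536870912
# The chunk Cassandra could not allocate, in bytes.
chunk = 1048576

print(cache_cap // (1024 * 1024))  # cap in MiB  -> 512
print(chunk // (1024 * 1024))      # chunk in MiB -> 1
```

So the message simply says a 512 MiB off-heap cache is full and a further 1 MiB chunk would not fit.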

But they aren't exactly asking the same thing. I am asking for a thorough understanding of what this message actually means. It doesn't seem to impact my latency at the moment.

I ran nodetool cfstats:

            SSTable count: 5
            Space used (live): 1182782029
            Space used (total): 1182782029
            Space used by snapshots (total): 0
            Off heap memory used (total): 802011
            SSTable Compression Ratio: 0.17875764458149868
            Number of keys (estimate): 34
            Memtable cell count: 33607
            Memtable data size: 5590408
            Memtable off heap memory used: 0
            Memtable switch count: 902
            Local read count: 4689
            Local read latency: NaN ms
            Local write count: 51592342
            Local write latency: 0.035 ms
            Pending flushes: 0
            Percent repaired: 0.0
            Bloom filter false positives: 0
            Bloom filter false ratio: 0.00000
            Bloom filter space used: 120
            Bloom filter off heap memory used: 80
            Index summary off heap memory used: 291
            Compression metadata off heap memory used: 801640
            Compacted partition minimum bytes: 447
            Compacted partition maximum bytes: 2874382626
            Compacted partition mean bytes: 164195240
            Average live cells per slice (last five minutes): NaN
            Maximum live cells per slice (last five minutes): 0
            Average tombstones per slice (last five minutes): NaN
            Maximum tombstones per slice (last five minutes): 0
            Dropped Mutations: 0
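As a side note on reading those stats: if the SSTable Compression Ratio is compressed size divided by uncompressed size (my assumption; this is how recent Cassandra versions report it), the on-disk figure implies a much larger logical data size. A rough back-of-the-envelope check:

```python
space_live = 1182782029          # "Space used (live)" in bytes
ratio = 0.17875764458149868      # "SSTable Compression Ratio"

# Assuming ratio = compressed / uncompressed, the uncompressed
# estimate is the on-disk size divided by the ratio.
uncompressed_estimate = space_live / ratio
print(f"{uncompressed_estimate / 1024**3:.1f} GiB")
```

That works out to roughly 6 GiB of logical data behind ~1.1 GiB on disk, which matches the very large compacted-partition sizes below.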

The latency looks fine to me.

I also ran a histogram:

            Percentile  SSTables  WriteLatency  ReadLatency  PartitionSize  CellCount
            50%             0.00         35.43         0.00        1629722      35425
            75%             0.00         42.51         0.00      129557750    2346799
            95%             0.00         61.21         0.00      668489532   14530764
            98%             0.00         73.46         0.00     2874382626   52066354
            99%             0.00         88.15         0.00     2874382626   52066354
            Min             0.00         11.87         0.00            447         11
            Max             0.00        785.94         0.00     2874382626   52066354

The stats look fine to me! So what is Cassandra complaining about?

Erick Ramirez
mofury

2 Answers


A comment on this JIRA ticket has an explanation: https://issues.apache.org/jira/browse/CASSANDRA-12221

Quote:

Wei Deng added a comment - 18/Jul/16 05:01

See CASSANDRA-5661. It's a cap to limit the amount of off-heap memory used by RandomAccessReader, and if there is a need, you can change the limit by file_cache_size_in_mb in cassandra.yaml.

Community
Gigi Li

The log message is relatively harmless. It indicates that the node's off-heap cache is full because the node is busy servicing reads.

The 134217728 bytes in the log message indicates that you have set file_cache_size_in_mb to 128 MB. You should consider raising it to the default of 512 MB.
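If you do want to change the cap, the setting lives in cassandra.yaml; a sketch of what that change might look like (the 512 MB value follows the suggestion above):

```yaml
# cassandra.yaml -- size of the off-heap buffer cache used when
# reading SSTables; the log message appears once this cap is hit.
file_cache_size_in_mb: 512
```

A restart of the node is needed for the change to take effect.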

It is fine to see occasional occurrences of this message in the logs, which is why it is logged at INFO level. But if it gets logged repeatedly, that is an indicator that the node is getting overloaded, and you should consider increasing the capacity of your cluster by adding more nodes.

For more info, see my post on DBA Stack Exchange, "What does 'Maximum memory usage reached' mean in the Cassandra logs?". Cheers!

Erick Ramirez