
Running DSE 4.8.10. I have 3 DSE Search nodes in my cluster with RF=3, and I'm seeing messages in system.log like those below. They always seem to come right after a compaction. Is there a problem with the Solr indexes, or is there at least an explanation for these messages?

    INFO [CompactionExecutor:12] 2016-11-14 23:09:31,243 CompactionTask.java:274 - Compacted 4 sstables to [/data/lib/cassandra/data/system/local/system-local-ka-13314,]. 1,564 bytes to 1,378 (~88% of original) in 17ms = 0.077304MB/s. 4 total partitions merged to 1. Partition merge counts were {4:1, }
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,008 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,053 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,144 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,187 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,230 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,270 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,311 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,353 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,395 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,436 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,478 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,519 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,559 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,600 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,640 AbstractSolrSecondaryIndex.java:1689 - Found 200 rows with expired columns.
    INFO [Solr TTL scheduler-0] 2016-11-14 23:09:36,681 AbstractSolrSecondaryIndex.java:1689 - Found 31 rows with expired columns.

LHWizard

1 Answer


I am assuming you have a TTL set on your data; you can verify this as shown below.
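If you're not sure whether a TTL is in play, CQL's TTL() function reports the remaining time-to-live for a column. A quick check (the keyspace, table, and column names here are placeholders for your own schema):

    -- Returns the remaining TTL in seconds, or null if the column never expires.
    SELECT TTL(some_column) FROM my_keyspace.my_table WHERE id = some_id;

    -- A table-level default may also be set in the schema;
    -- look for default_time_to_live in the output.
    DESCRIBE TABLE my_keyspace.my_table;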

If you want to expire data in Cassandra, you don't have much choice: you need a periodic task that somehow finds expired data and removes it. With lots of data, keeping this efficient can be a challenge. Cassandra already has a natural fit for that kind of job: compaction. Compaction periodically reads through your data and throws away old versions of it anyway, so it is easy and cheap to piggyback data expiration onto it.
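For reference, this is how a TTL is typically attached to data (a minimal sketch; the keyspace, table, and column names are placeholders). Rows written this way stop appearing in query results once the TTL elapses, but their physical removal happens later, during compaction:

    -- Write a row that expires 24 hours (86400 seconds) after insertion.
    INSERT INTO my_keyspace.my_table (id, some_column)
    VALUES (uuid(), 'example value')
    USING TTL 86400;

    -- Alternatively, set a default TTL applied to every write to the table.
    ALTER TABLE my_keyspace.my_table WITH default_time_to_live = 86400;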

This is likely the reason you see those messages only after compaction. Note that they are logged at INFO level by the Solr TTL scheduler, the DSE Search task that removes expired documents from the Solr index; they indicate routine expiration housekeeping rather than a problem with the indexes.

You can read more here: http://www.datastax.com/dev/blog/whats-new-cassandra-07-expiring-columns

root