
This is a follow-up to an answer to: How to start Solr automatically?

I have solr running as a daemon as suggested there. But I've realized it has used up more than 250 GB of disk for its log. I'm considering using logrotate to cap the log size. In the logrotate conf file posted in that answer there's a postrotate command that restarts solr.

I'm running some critical processes that are constantly reading from and writing to solr, so I'd rather not restart it on every rotation. Is the postrotate restart strictly necessary?

UPDATE:

I tried this with delaycompress and without restarting the daemon. A new log file was created, but solr kept writing to the old one (which had been renamed to solr.log.1).

I tried again, without delaycompress, and now both solr.log and solr.log.1 are empty. But df still reports the same disk usage as before the massive log disappeared!
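The "empty files but unchanged df" symptom can be reproduced with plain shell commands; on Unix filesystems, a deleted (or truncated-away) file keeps its blocks allocated as long as some process still holds it open. A minimal sketch, no Solr required:

```shell
#!/bin/sh
# Create a file and keep it open on file descriptor 3.
tmp=$(mktemp)
exec 3>"$tmp"
echo "log line" >&3

# Delete the directory entry. The inode and its blocks survive
# because fd 3 still references them, so df does not change.
rm "$tmp"

# Writes through the open descriptor still succeed.
echo "still writing" >&3

# Only when the last handle closes are the blocks actually freed.
exec 3>&-
```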

Joe

1 Answer


The disk usage remains high because your solr process still holds an open file handle to the rotated log file; the blocks aren't freed until the last handle closes. Try adding `copytruncate` to your logrotate.d config for solr instead of the postrotate restart. For the space that's already been consumed, you'll need to restart solr once so it releases that handle and the blocks are marked free on your disk.
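A sketch of what such a logrotate stanza might look like (the log path, size threshold, and retention count here are assumptions; adjust them to your install):

```
/var/solr/logs/solr.log {
    size 100M          # rotate once the log exceeds 100 MB (assumed threshold)
    rotate 5           # keep five rotated copies
    compress
    missingok
    notifempty
    copytruncate       # copy the log, then truncate it in place -- no restart needed
}
```

Note that `copytruncate` copies first and truncates afterwards, so lines written in that window can be lost; that trade-off is what buys you rotation without a restart.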

roktechie
  • With `copytruncate`, what happens if the copy is slow for some reason (huge file size / heavy disk I/O)? Will log lines written in the meantime be lost, or will they also be copied before the truncate? – Anbarasan Apr 10 '19 at 09:10