
I am in a situation where multiple threads (from the same JVM) are writing to the same file (logging via Logger). I need to delete this file at some point, and the next use of the Logger should then recreate the file and continue logging.

The logging library is synchronized, therefore I do not need to worry about concurrent logging to the same file.

But... I want to add an external operation that deletes this file, so I have to somehow synchronize the Logger with the delete operation, because I do not want to delete the file while the Logger is writing to it.

Things I thought of:

  1. Use FileChannel.lock to lock the file, and have the Logger do the same. I decided against this because of this note from the documentation:

File locks are held on behalf of the entire Java virtual machine. They are not suitable for controlling access to a file by multiple threads within the same virtual machine.

Which means that in my case (same JVM, multiple threads) it will not have the effect I want.
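The quoted behaviour is easy to demonstrate: within one JVM, a second lock attempt on the same file does not block the way a lock from another process would; it throws. A minimal sketch (using a temporary file, nothing here is from the original question):

```java
import java.io.File;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.StandardOpenOption;

public class FileLockDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("locktest", ".tmp");
        try (FileChannel ch1 = FileChannel.open(f.toPath(), StandardOpenOption.WRITE);
             FileChannel ch2 = FileChannel.open(f.toPath(), StandardOpenOption.WRITE)) {
            FileLock lock = ch1.lock(); // exclusive lock on the whole file
            try {
                // The JVM already holds an overlapping lock, so this does
                // not block -- it throws immediately.
                ch2.lock();
                System.out.println("acquired");
            } catch (OverlappingFileLockException e) {
                System.out.println("OverlappingFileLockException");
            } finally {
                lock.release();
            }
        }
        f.delete();
    }
}
```

So FileLock guards against other processes, not against other threads (or even the same thread) in the same JVM.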

What are my options?

Am I missing something vital here?

Perhaps there is a way to do this using the existing facilities of the Logger?

Hubbs

3 Answers


It seems you are looking for log rolling and log archiving functionality. Log rolling is a standard feature of Log4j and Logback (typically used through SLF4J).

You can configure the logging library to create a new log file based on the size of the current file or the time of day. You can configure the file-name format for the rolled files and then have an external process archive or delete old rolled log files.

You can refer to the Log4j 2 configuration given in this answer.
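As a rough sketch, a Log4j 2 rolling-file appender configuration might look like this (the file names, pattern, 10 MB threshold, and retention count are placeholders, not values from the question):

```xml
<RollingFile name="RollingFile"
             fileName="logs/app.log"
             filePattern="logs/app-%d{yyyy-MM-dd}-%i.log.gz">
  <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
  <Policies>
    <!-- roll at the start of each day, or when the file exceeds 10 MB -->
    <TimeBasedTriggeringPolicy/>
    <SizeBasedTriggeringPolicy size="10 MB"/>
  </Policies>
  <!-- keep at most 10 rolled files for the external process to pick up -->
  <DefaultRolloverStrategy max="10"/>
</RollingFile>
```

The external process then only ever touches the rolled (compressed) files, never the active `logs/app.log`, which sidesteps the synchronization problem.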

Saptarshi Basu
  • My desire is to recreate the file at some point, but that point has to be determined within the project. The bigger picture is that this log file is merged into a database and then deleted; the next time that point is reached, the same thing happens: merge and delete. In between, writing to the log file should remain possible. I do not think I can use what you suggested, since that configuration relies on fixed triggers and cannot be driven by a manual deletion from code. I am hiding the bigger picture to simplify the question. – Hubbs Oct 26 '18 at 15:01
  • @Hubbs I'm assuming your main objective is to store the logs in a DB at certain events created by your business logic. In such scenarios you might consider configuring an additional appender to post your log messages to a JMS queue, and have a separate Java app process those messages and decide when to insert the data into the DB. You may also want to explore tools like Logstash. At least this way you won't have to synchronize anything. Also, post to the JMS queue asynchronously for performance. – Saptarshi Basu Oct 26 '18 at 15:16

Filesystems are generally synchronized by the OS, so you can simply delete the file without having to worry about locks or anything. Depending on how log4j holds the file open, the delete might fail, though, so you should add a retry loop.

int attempts = 3;
final File logfile = new File(theLogFilePath);
// Retry a few times in case the logger still has the file open.
while ((attempts > 0) && logfile.exists() && !logfile.delete()) {
  --attempts;
  try {
    Thread.sleep(1000); // give the logger a chance to release the file
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag
    attempts = 0; // and stop retrying
  }
}

This isn't exactly clean code, but then what you are doing isn't clean anyway. ;)

You interfere with the logging process rather rudely, but since a user could also delete that file at any time, log4j should handle this gracefully. Worst case, my guess is that a message that was about to be logged gets lost, but that is probably not an issue considering that you are deleting the log file anyway.

For a cleaner implementation see this question.

TwoThe
  • The user cannot delete that file. This file is strictly server-side. – Hubbs Oct 26 '18 at 15:05
  • Probably not in this case, but in other cases users could do that, so log4j has to make sure it can run properly even if someone deletes the logfile. – TwoThe Oct 26 '18 at 15:27

A trick I've used in the past when there is no other option (see Saptarshi Basu's log-rolling suggestion https://stackoverflow.com/a/53011323/823393) is to just rename the current log file.

After the rename, any outstanding logging that is queued up for it continues into the renamed one. Usually, any new log requests will create a new file.

All that remains is to clean up the renamed one. You can usually manage this using some external process or just delete any old log files whenever this process triggers.
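A minimal sketch of the rename trick (the class, method, and file names are hypothetical; `File.renameTo` can fail on some platforms, e.g. Windows, while the file is open, so its return value must always be checked):

```java
import java.io.File;
import java.io.IOException;

public class LogRotate {
    // Rename the active log aside. Threads that still hold the file open
    // keep writing to the renamed file; the logger creates a fresh file
    // on its next open. renameTo returns false on failure, so check it.
    static File rotate(File logFile) throws IOException {
        File archived = new File(logFile.getPath() + "." + System.currentTimeMillis());
        if (!logFile.renameTo(archived)) {
            throw new IOException("could not rename " + logFile);
        }
        return archived;
    }

    public static void main(String[] args) throws Exception {
        File log = File.createTempFile("app", ".log");
        File archived = rotate(log);
        System.out.println(!log.exists() && archived.exists());
        // merge `archived` into the database here, then:
        archived.delete();
    }
}
```

The merge-and-delete of the renamed file can then happen at leisure, without racing against the logger.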

OldCurmudgeon