24

In log4j, when using a FileAppender with BufferedIO=true and BufferSize=xxx properties (i.e. buffering is enabled), I want to be able to flush the log during normal shutdown procedure. Any ideas on how to do this?

Amos
  • 1,403
  • 2
  • 13
  • 19
  • Doesn't Log4J flush the appender automatically during normal shutdown? I would at least expect it to do so. – Péter Török Jun 17 '10 at 09:44
  • 1
    As I understand the code - no flushing when you decide for BufferedIO. You gain performance but pay a price: you'll lose the last log entries... – Andreas Dolk Jun 17 '10 at 10:12
  • When I wrote my own appender (to DB, but doesn't really matter), I did buffered output while flushing automatically every few seconds. – ripper234 Nov 28 '11 at 15:23

8 Answers

54

When shutting down the LogManager:

LogManager.shutdown();

all buffered logs get flushed.
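A JVM shutdown hook is a convenient place for that call, so the flush also happens on Ctrl-C or SIGTERM, not just on an explicit shutdown path. A minimal sketch for log4j 1.x (the class name is illustrative, not part of log4j):

```java
import org.apache.log4j.LogManager;

public class Log4jShutdownHook {
    public static void install() {
        // LogManager.shutdown() closes every configured appender,
        // flushing any buffered output on the way out.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                LogManager.shutdown();
            }
        });
    }
}
```

Call `Log4jShutdownHook.install()` once during application startup.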

Kalle Richter
  • 8,008
  • 26
  • 77
  • 177
Amos
  • 1,403
  • 2
  • 13
  • 19
  • 10
    Please select this as the answer - it's clearly the nicest option. Don't feel bad about removing a green tick from someone if you've earned it yourself. – Duncan Jones Apr 09 '14 at 13:51
  • this looks like the best answer... but how do you access this "LogManager" object? (log4php novice) – mike rodent Aug 30 '14 at 19:53
7
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Set;

import org.apache.log4j.FileAppender;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

public static void flushAllLogs()
{
    try
    {
        Set<FileAppender> flushedFileAppenders = new HashSet<FileAppender>();
        Enumeration currentLoggers = LogManager.getLoggerRepository().getCurrentLoggers();
        while(currentLoggers.hasMoreElements())
        {
            Object nextLogger = currentLoggers.nextElement();
            if(nextLogger instanceof Logger)
            {
                Logger currentLogger = (Logger) nextLogger;
                Enumeration allAppenders = currentLogger.getAllAppenders();
                while(allAppenders.hasMoreElements())
                {
                    Object nextElement = allAppenders.nextElement();
                    if(nextElement instanceof FileAppender)
                    {
                        FileAppender fileAppender = (FileAppender) nextElement;
                        if(!flushedFileAppenders.contains(fileAppender) && !fileAppender.getImmediateFlush())
                        {
                            flushedFileAppenders.add(fileAppender);
                            // Briefly turn on immediateFlush, log one event to
                            // force the buffer out, then restore the setting.
                            fileAppender.setImmediateFlush(true);
                            currentLogger.info("FLUSH");
                            fileAppender.setImmediateFlush(false);
                        }
                    }
                }
            }
        }
    }
    catch(RuntimeException e)
    {
        log.error("Failed flushing logs", e); // 'log' is this class's own logger
    }
}
Niv
  • 86
  • 1
5
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender;

public static void flushAll() {
    final LoggerContext logCtx = ((LoggerContext) LogManager.getContext());
    for(final org.apache.logging.log4j.core.Logger logger : logCtx.getLoggers()) {
        for(final Appender appender : logger.getAppenders().values()) {
            if(appender instanceof AbstractOutputStreamAppender) {
                ((AbstractOutputStreamAppender<?>) appender).getManager().flush();
            }
        }
    }
}
  • While this code may answer the question, providing additional context regarding how and/or why it solves the problem would improve the answer's long-term value. – Donald Duck Jun 30 '17 at 22:56
  • This code just tries to flush all flushable appenders (every appender extending AbstractOutputStreamAppender, where the "flush" method is declared). Using this with Log4j2 v2.8.2 in my project. – Andrey Kurilov Jul 03 '17 at 19:59
1

Maybe you could override WriterAppender#shouldFlush(LoggingEvent), so that it returns true for a special logging category, like log4j.flush.now, and then call:

LoggerFactory.getLogger("log4j.flush.now").info("Flush");

http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/WriterAppender.html#shouldFlush%28org.apache.log4j.spi.LoggingEvent%29
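A minimal sketch of that override, assuming a log4j 1.2 version where shouldFlush(LoggingEvent) is protected and overridable (the appender class and category name are illustrative):

```java
import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class FlushOnDemandAppender extends FileAppender {
    @Override
    protected boolean shouldFlush(LoggingEvent event) {
        // Force a flush for the marker category; otherwise keep the
        // default behaviour (which honours the immediateFlush setting).
        return "log4j.flush.now".equals(event.getLoggerName())
                || super.shouldFlush(event);
    }
}
```

You would then configure this appender in place of FileAppender and log to the log4j.flush.now category whenever a flush is needed.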

Ondra Žižka
  • 43,948
  • 41
  • 217
  • 277
1

Sharing my experience with using "Andrey Kurilov"'s code example, or at least a similar one.

What I actually wanted to achieve was asynchronous log entries with manual flush (immediateFlush = false), to ensure that an idle buffer's content is flushed before the bufferSize is reached.

The initial performance results were actually comparable with those achieved with the AsyncAppender - so I think it is a good alternative to it.

The AsyncAppender uses a separate thread (and an additional dependency on the disruptor jar), which makes it more performant, but at the cost of more CPU and even more disk flushing (even under high load, flushes are made in batches).

So if you want to save disk IO operations and CPU load, but still want to ensure your buffers will be flushed asynchronously at some point, that is the way to go.
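One way to sketch that asynchronous flush is a single daemon thread that runs a flush task at a fixed interval; the task itself could be, say, the flushAllLogs() method from the answer above. PeriodicFlusher is an illustrative name, not part of log4j:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicFlusher {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "log-flusher");
                t.setDaemon(true); // a daemon thread won't keep the JVM alive
                return t;
            });

    /** Runs flushTask every periodSeconds seconds until stop() is called. */
    public void start(Runnable flushTask, long periodSeconds) {
        scheduler.scheduleWithFixedDelay(
                flushTask, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }
}
```

With this in place, something like `new PeriodicFlusher().start(MyLogUtil::flushAllLogs, 5)` would flush idle buffers every five seconds (MyLogUtil standing in for wherever flushAllLogs lives).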

Alex Ciocan
  • 2,272
  • 14
  • 20
1

The only solution that worked for me is waiting for a while:

private void flushAppender(Appender appender) {
    // this flush seems to be useless
    ((AbstractOutputStreamAppender<?>) appender).getManager().flush(); 
    try {
        Thread.sleep(500); // wait for log4j to flush logs
    } catch (InterruptedException ignore) {
    }
}
Ferran Maylinch
  • 10,919
  • 16
  • 85
  • 100
0

I have written an appender that fixes this, see GitHub or use name.wramner.log4j:FlushAppender in Maven. It can be configured to flush on events with high severity and it can make the appenders unbuffered when it receives a specific message, for example "Shutting down". Check the unit tests for configuration examples. It is free, of course.

ewramner
  • 5,810
  • 2
  • 17
  • 33
0

Try:

LogFactory.releaseAll();
rsp
  • 23,135
  • 6
  • 55
  • 69