
I am using Boost.Log in my application. With multiple threads logging heavily, a logging operation must not block, so I log through this sink frontend:

    boost::log::sinks::ordering_asynchronous_sink

with this file backend:

    boost::log::sinks::text_file_backend

The main purpose of logging is viewing critical errors to diagnose a crash. Yet I've noticed that records are written to the file only once in a while (probably when a certain number of records has accumulated), which means a sudden crash will leave no log records explaining it.

What can I do here? Can I force a file write on fatal-severity errors? Is there a better approach?

Leo

1 Answer


It sounds like your log entries are not being flushed to disk right away. This is a typical default, intended to improve disk performance by avoiding many small writes, but it has the downside you describe here. There is an `auto_flush` flag that you can set on your logging backend to make sure that every log entry is written to disk as soon as it is formatted. See the docs for more details.
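A minimal sketch of enabling `auto_flush` on a `text_file_backend` while keeping the asynchronous frontend, assuming Boost.Log v2; the file name `"app.log"` is purely illustrative:

```cpp
#include <boost/log/core.hpp>
#include <boost/log/keywords/file_name.hpp>
#include <boost/log/sinks/async_frontend.hpp>
#include <boost/log/sinks/text_file_backend.hpp>
#include <boost/smart_ptr/make_shared.hpp>

namespace logging  = boost::log;
namespace sinks    = boost::log::sinks;
namespace keywords = boost::log::keywords;

void init_logging()
{
    // Backend: text file; "app.log" is an illustrative name.
    boost::shared_ptr<sinks::text_file_backend> backend =
        boost::make_shared<sinks::text_file_backend>(
            keywords::file_name = "app.log");

    // Flush the stream to disk after every record instead of buffering.
    backend->auto_flush(true);

    // Keep the asynchronous frontend so logging calls don't block callers.
    typedef sinks::asynchronous_sink<sinks::text_file_backend> async_sink_t;
    boost::shared_ptr<async_sink_t> sink =
        boost::make_shared<async_sink_t>(backend);

    logging::core::get()->add_sink(sink);
}
```

One caveat: with an asynchronous frontend, records may still be sitting in the frontend's queue at the moment of a crash; `auto_flush` only guarantees that each record the backend has already dequeued reaches disk. The frontend also exposes a `flush()` method that can be called at known-fatal points to drain the queue.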

John Zwinck
zdan
  • Is it possible to control when auto_flush is applied? I want only error records to be flushed to the file immediately. – Leo Nov 14 '11 at 11:05
  • @Leo: Not sure. The only way I can think of is if you used multiple logging backends, one for each type of log. – zdan Nov 14 '11 at 16:00
  • @Leo. Again, I'm not sure, as I've only played with boost log a little. Your best bet would be to try posting a separate question. – zdan Nov 16 '11 at 18:28
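The multiple-backend idea from the comments can be sketched roughly as follows, assuming Boost.Log v2 and the trivial-logging severity attribute; both file names are illustrative. A buffered asynchronous sink takes all records for throughput, while a second sink with `auto_flush` and a severity filter writes only errors immediately:

```cpp
#include <boost/log/core.hpp>
#include <boost/log/expressions.hpp>
#include <boost/log/keywords/file_name.hpp>
#include <boost/log/sinks/async_frontend.hpp>
#include <boost/log/sinks/text_file_backend.hpp>
#include <boost/log/trivial.hpp>
#include <boost/smart_ptr/make_shared.hpp>

namespace logging  = boost::log;
namespace sinks    = boost::log::sinks;
namespace keywords = boost::log::keywords;

typedef sinks::asynchronous_sink<sinks::text_file_backend> async_sink_t;

void init_two_sinks()
{
    // Bulk sink: buffered writes for throughput; gets every record.
    boost::shared_ptr<sinks::text_file_backend> bulk_backend =
        boost::make_shared<sinks::text_file_backend>(
            keywords::file_name = "app.log");
    logging::core::get()->add_sink(
        boost::make_shared<async_sink_t>(bulk_backend));

    // Error sink: flushed after every record, restricted by a severity filter
    // so only error-and-above records are written here.
    boost::shared_ptr<sinks::text_file_backend> err_backend =
        boost::make_shared<sinks::text_file_backend>(
            keywords::file_name = "errors.log");
    err_backend->auto_flush(true);

    boost::shared_ptr<async_sink_t> err_sink =
        boost::make_shared<async_sink_t>(err_backend);
    err_sink->set_filter(
        logging::trivial::severity >= logging::trivial::error);
    logging::core::get()->add_sink(err_sink);
}
```

This keeps the fast path unchanged for ordinary records while ensuring that the records most relevant to a crash hit the disk promptly.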