I have a C# website which logs API calls; each log is inserted as a single row into a SQL Server database. When the site gets really busy this becomes the pinch point: the log table locks and we see timeouts waiting for database connections. I am trying to remedy this by using SqlBulkCopy. The original log method is replaced by one in a new BulkLogger class, and when the DataTable containing the logs reaches a limit (100 at the moment) a Flush() method is called and all the buffered logs are written to the database.
The original log method was called from all over the code; it's old code with no DI, so I have had to create a new BulkLogger in a couple of places. I'm worried that some logs won't be written, either because exceptions aren't handled well and the BulkLogger instance could be lost, or because some logs are still sitting in memory waiting to be bulk-copied when the page request completes.
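To make the lifetime concern concrete, this is roughly the pattern I'd need at each call site; the handler code and log message here are made up for illustration, not my real methods:

// Hypothetical call site inside an existing page/handler (no DI available).
// Wrapping the logger in a using block means Dispose() -> Flush() runs even
// if an exception is thrown inside the block.
using (var logger = new BulkLogger())
{
    logger.Log("GET /api/orders", DateTime.UtcNow);
    // ... the rest of the request handling, which may log more messages ...
}   // Dispose() runs here and flushes anything still buffered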
My BulkLogger class can be sealed and I don't have any unmanaged resources to clean up or dispose of. So if BulkLogger implements IDisposable and I implement Dispose as below, should this mean all the logs are written to the database in most circumstances?
public sealed class BulkLogger : IDisposable
{
    DataSet _logSet;
    DataTable _logTable;

    public BulkLogger()
    {
        // set up the dataset and datatable
    }

    public void Log(string message, DateTime logTime)
    {
        // add message and time to _logTable
        // if _logTable.Rows.Count > 100, call Flush()
    }

    public void Flush()
    {
        // use SqlBulkCopy to insert the buffered logs
    }

    public void Dispose()
    {
        Flush();
    }
}
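In case the details matter, Flush() is essentially the sketch below; _connectionString, the destination table name, and the column layout are placeholders rather than my real code:

// Requires: using System.Data; using System.Data.SqlClient;
public void Flush()
{
    if (_logTable.Rows.Count == 0)
        return;   // nothing buffered, nothing to write

    // _connectionString is assumed to be read from config in the constructor
    using (var connection = new SqlConnection(_connectionString))
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        connection.Open();
        bulkCopy.DestinationTableName = "dbo.ApiLog";   // placeholder table name
        bulkCopy.WriteToServer(_logTable);              // one round trip for the whole batch
    }

    _logTable.Clear();   // start a fresh batch after a successful copy
}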