
I have a C# website which logs API calls by inserting each log as a single row into a SQL Server database. When the site gets really busy this becomes the pinch point: the log table locks and we see timeouts waiting for connections to the database. I am trying to remedy this by using SqlBulkCopy. The original log method is replaced with one in a new BulkLogger class, and when the DataTable containing the logs reaches a limit (100 at the moment) a Flush() method is called and the logs are all written to the database.

The original log method was called from all over the code; it's old code with no DI, so I have to create a new BulkLogger in a couple of places. I'm worried that some logs won't be written, either because exceptions aren't handled well and the BulkLogger could be lost, or because logs still in memory waiting to be bulk copied are dropped when the page request completes.

My BulkLogger class can be sealed and I don't have any resources to clear up or dispose of. So if BulkLogger implements IDisposable and I implement Dispose as below, should this mean all the logs are written to the database in most circumstances?

using System;
using System.Data;
using System.Data.SqlClient;

public sealed class BulkLogger : IDisposable
{
    private const int FlushThreshold = 100;
    private readonly string _connectionString;
    private readonly DataTable _logTable;

    public BulkLogger(string connectionString)
    {
        _connectionString = connectionString;
        // Buffer for log rows; column names/types are illustrative (they must match the SQL log table)
        _logTable = new DataTable("Log");
        _logTable.Columns.Add("Message", typeof(string));
        _logTable.Columns.Add("LogTime", typeof(DateTime));
    }

    public void Log(string message, DateTime logTime)
    {
        // Add message and time to _logTable; flush once the buffer is full
        _logTable.Rows.Add(message, logTime);
        if (_logTable.Rows.Count >= FlushThreshold)
            Flush();
    }

    public void Flush()
    {
        if (_logTable.Rows.Count == 0) return;
        // Use SqlBulkCopy to insert all buffered logs in one round trip
        using (var connection = new SqlConnection(_connectionString))
        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            connection.Open();
            bulkCopy.DestinationTableName = _logTable.TableName;
            bulkCopy.WriteToServer(_logTable);
        }
        _logTable.Clear();
    }

    public void Dispose()
    {
        Flush();
    }
}
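For context, I was planning to wrap my calls in using statements, roughly like this (the connection string and messages here are illustrative):

using (var logger = new BulkLogger("Server=...;Database=...;"))
{
    logger.Log("GET /api/orders", DateTime.UtcNow);
    // ... further Log calls while the request executes ...
}   // Dispose() runs here and calls Flush()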
jonnarosey
  • IDisposable isn't magic. Someone needs to call its `Dispose` (typically as the result of a `using` statement). What you may want to do is make your normal calling pattern mandate a Dispose call, and have Dispose call Flush after N Log calls or X amount of time. Add an unhandled exception handler that calls your Flush. But making your class disposable just to call Flush buys you little. Conscientious users, on seeing IDisposable, will wrap their calls in a using, defeating your design – Flydog57 Sep 26 '19 at 13:00
  • @Flydog57 I was planning to wrap my calls in using statements. Reading your reply makes me realise my whole approach won't work. Big ask, but can you see a good alternative? – jonnarosey Sep 26 '19 at 13:05
  • If you wrap code in a using then the object is automatically disposed on exiting the using: using (SqlConnection con = new SqlConnection("yourConnection")) { } – jdweng Sep 26 '19 at 13:10
  • You may be getting an exception and the using might not be surfacing it. Did you add an exception handler inside the using? If the code in the using is calling other methods, the error could be happening in the lower-level methods and not being reported properly. – jdweng Sep 26 '19 at 13:15
  • @Flydog57 You suggest "Add an unhandled exception handler that calls your Flush" – this sounds like what I'm looking for; I just want to flush on exception or when the request completes. Could you expand on this please? Maybe even in an answer! (A rough sketch of the idea follows these comments.) – jonnarosey Sep 26 '19 at 13:23
  • Have you checked how the table is being locked, and what transaction isolation level are you using? A log table should have minimal indexes, to optimise for writes. Have you considered memory-optimised tables? Do you close your connection between calls (best practice)? – Steve Ford Sep 26 '19 at 15:31
  • @SteveFord there is a daily delete task which always causes the problem; the rest of the time it just seems to correspond with busy periods. Yes, there is an unnecessary index we're trying to get removed (good spot) but I don't think that will 100% fix our issue. We do close connections, and I will check out memory-optimised tables. Thanks! – jonnarosey Sep 26 '19 at 15:58
  • @jonaglon could the delete task either be moved to a less busy time of day, or batched, i.e. WHILE @@ROWCOUNT > 0 DELETE TOP (1000) FROM table WHERE x = y? Have you considered using read committed snapshot isolation or snapshot isolation? – Steve Ford Sep 27 '19 at 11:51
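
A minimal sketch of the flush-on-request-completion idea from the comments above, assuming a classic ASP.NET Global.asax; the context key and connection string are illustrative:

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // One BulkLogger per request, stashed in the per-request context
        HttpContext.Current.Items["BulkLogger"] = new BulkLogger("connection string");
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // EndRequest fires even when the request fails with an exception,
        // so Dispose() -> Flush() still writes any rows buffered in memory
        var logger = HttpContext.Current.Items["BulkLogger"] as BulkLogger;
        logger?.Dispose();
    }
}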

2 Answers


SqlBulkCopy just lets you insert a big load of data into a DB table in one go, as per the MSDN documentation (https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlbulkcopy?view=netframework-4.8). So going with this approach most likely means you first have to store the calls you want to log in memory, and once they reach a certain threshold write them to the DB.

I also don't think this is a great approach, as you will most likely see an increase in memory use while the logs sit in memory waiting to be flushed.

Consider creating a separate thread to write your log rows to the DB. You shouldn't need to change much existing code; just make the call from a thread that instantiates its own connection to the DB.

Thread t1 = new Thread(() =>
{
    // Each thread opens (and disposes) its own connection and command
    using (SqlConnection con = new SqlConnection("yourConnection"))
    using (SqlCommand command = new SqlCommand("INSERT INTO LOG TABLE", con))
    {
        con.Open();
        command.ExecuteNonQuery();
    }
});
t1.Start();

See how to create and start a thread: https://learn.microsoft.com/en-us/dotnet/api/system.threading.thread.start?view=netframework-4.8

Here is another link with an example project of how to implement DB logging. https://code.msdn.microsoft.com/How-to-implement-logging-4cbcfc64

Terblanche Daniel
  • The code currently works a little like this. Unfortunately we end up with loads of blocked threads sat waiting for a database table to unlock, and this brings the site down. – jonnarosey Sep 26 '19 at 13:10
  • Why do you think there is a deadlock? Are there UPDATEs or SELECTs bombarding this same table? – T McKeown Sep 26 '19 at 13:19
  • @jonaglon Please see Updated answer for a link to a project demonstrating how to implement DB logging. – Terblanche Daniel Sep 26 '19 at 13:24
  • @TMcKeown I may have used the wrong language when I wrote 'unlock'. What we see in the error logs when we try to do too many inserts is "Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding" – jonnarosey Sep 26 '19 at 13:46

If you are overwhelming the DB with too many threaded-off insert tasks, then perhaps you need to narrow the direct pipeline to the DB. Instead of kicking off a thread per insert, use one thread, or a small pool of threads, to consume the insert requests. You could use a ConcurrentQueue<T> to Enqueue (produce) the requests on the main app thread while the dedicated thread(s) monitor the queue and consume the requests. This way the inserts are processed sequentially instead of in a free-for-all.
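
A rough sketch of this pattern, using BlockingCollection<T> (which wraps a ConcurrentQueue<T> by default) so the consumer can block while the queue is empty; the class and method names here are hypothetical:

using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class LogQueue
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();

    public LogQueue()
    {
        // One dedicated background consumer: inserts reach the DB from a
        // single thread instead of from every request thread at once
        var consumer = new Thread(Consume) { IsBackground = true };
        consumer.Start();
    }

    // Producer side: request threads enqueue and return immediately
    public void Enqueue(string message) => _queue.Add(message);

    private void Consume()
    {
        // GetConsumingEnumerable blocks until an item is available
        foreach (var message in _queue.GetConsumingEnumerable())
        {
            // Insert the row here, or batch messages up and use SqlBulkCopy
        }
    }
}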

Here is a link to a Producer/Consumer example:

https://learn.microsoft.com/en-us/dotnet/standard/collections/thread-safe/blockingcollection-overview?redirectedfrom=MSDN

C# producer/consumer

T McKeown
  • I've not been given the go-ahead to implement this yet, but if and when I am, this is the pattern I'll try to use to solve my problem, so I am accepting this answer. Thanks. – jonnarosey Sep 30 '19 at 07:40