
I have a web application backed by an SQLite database. My SQLite version is the latest official Windows binary distribution: 3.7.13.

The problem is that under heavy database load, the SQLite API functions (such as sqlite3_step) return SQLITE_BUSY.

I pass the following pragmas when initializing a connection:

journal_mode = WAL
page_size = 4096
synchronous = FULL
foreign_keys = on
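
The question doesn't show the initialization code, but applying these pragmas with Mono.Data.Sqlite might look like the sketch below (the connection string, helper name, and database path are assumptions, not taken from the original code):

```csharp
using Mono.Data.Sqlite;

// Hypothetical helper; the actual initialization code is not shown above.
static SqliteConnection OpenConfiguredConnection(string dbPath)
{
    var conn = new SqliteConnection("Data Source=" + dbPath + ";Version=3;");
    conn.Open();

    // Apply the pragmas listed above on every new connection.
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText =
            "PRAGMA journal_mode = WAL;" +
            "PRAGMA page_size = 4096;" +
            "PRAGMA synchronous = FULL;" +
            "PRAGMA foreign_keys = on;";
        cmd.ExecuteNonQuery();
    }
    return conn;
}
```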

The database is a single-file database. I'm using Mono 2.10.8 and the Mono.Data.Sqlite assembly provided with it to access the database.

I'm testing with 50 parallel threads, each sending 50 sequential HTTP requests to my application. On every request some reading and writing is done on the database, and every set of I/O operations is executed inside a transaction.

Everything goes well until around the 400th - 700th request. At that (random) point the API functions start returning SQLITE_BUSY permanently (to be more exact, until the retry limit is reached).

As far as I know, WAL mode transparently supports parallel reads and writes. I guessed it could be caused by an attempt to read the database while a checkpoint operation is executing, but even after turning autocheckpointing off the situation remains the same.
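
One detail worth checking (an assumption on my part, not something established in the question): WAL allows readers to proceed concurrently with a writer, but writers still serialize, and without a busy timeout a second concurrent write attempt can fail with SQLITE_BUSY immediately instead of waiting. A minimal sketch, assuming `conn` is the open connection:

```csharp
// Sketch: set a busy timeout so write attempts wait for a competing
// writer to finish instead of failing immediately with SQLITE_BUSY.
using (var cmd = conn.CreateCommand())
{
    // Wait up to 5 seconds before giving up with SQLITE_BUSY.
    cmd.CommandText = "PRAGMA busy_timeout = 5000;";
    cmd.ExecuteNonQuery();
}
```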

What could be wrong here? How do I serve a large amount of parallel database I/O correctly?

P.S.

Only one connection per request is intended. I use NHibernate configured with WebSessionContext.

I initialize my NHibernate session like this:

ISession session = null;

//factory variable is session factory
if (CurrentSessionContext.HasBind(factory))
{
  session = factory.GetCurrentSession();
  if (session == null)
    CurrentSessionContext.Unbind(factory);
}

if (session == null)
{
  session = factory.OpenSession();
  CurrentSessionContext.Bind(session);
}

return session;

And on HttpApplication.EndRequest I release it like this:

//factory variable is session factory
if (CurrentSessionContext.HasBind(factory))
{
  try
  {
    CurrentSessionContext.Unbind(factory)
      .Dispose();
  }
  catch (Exception ee)
  {
    Logr.Error("Error uninitializing session", ee);
  }
}

So, as far as I know, there should be only one connection per request life cycle. While processing the request, code is executed sequentially (ASP.NET MVC 3), so it doesn't look like any concurrency is possible here. Can I conclude that no connections are shared in this case?

ILya
  • Is a distinct connection opened for each request, or do all requests share the same connection? Also, could you add some relevant bits of code, especially the part where the threads interact with the database? –  Aug 29 '12 at 10:37
  • 2
    Your assumption about WAL mode is wrong. http://www.sqlite.org/wal.html says: "since there is only one WAL file, there can only be one writer at a time." For a database with heavy write load, use something like PostgreSQL or MySQL. – CL. Sep 02 '12 at 12:50

1 Answer


It's not clear to me whether the request threads share the same connection or not. If they don't, you should not be having these issues.

Assuming that you are indeed sharing the connection object across multiple threads, you should use some locking mechanism, as SqliteConnection isn't thread-safe (an old post, but the SQLite library maintained as part of Mono evolved from the System.Data.SQLite found at http://sqlite.phxsoftware.com).

So, assuming you don't currently lock around the SqliteConnection object, can you please try it? A simple way to accomplish this could look like this:

// _locker guards the shared SqliteConnection instance (conn).
static readonly object _locker = new object();

public void ProcessRequest()
{
    lock (_locker) {
        using (IDbCommand dbcmd = conn.CreateCommand()) {
            string sql = "INSERT INTO foo VALUES ('bar')";
            dbcmd.CommandText = sql;
            dbcmd.ExecuteNonQuery();
        }
    }
}

You may, however, choose to open a distinct connection in each thread to ensure you don't have any more threading issues with the SQLite library.
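
A per-request variant of the example above could look like this (the connection string and table are assumptions for illustration):

```csharp
public void ProcessRequest()
{
    // Each request opens and disposes its own connection, so no
    // SqliteConnection object is ever shared between threads.
    using (var conn = new SqliteConnection("Data Source=app.db;Version=3;"))
    {
        conn.Open();
        using (IDbCommand dbcmd = conn.CreateCommand())
        {
            dbcmd.CommandText = "INSERT INTO foo VALUES ('bar')";
            dbcmd.ExecuteNonQuery();
        }
    }
}
```

With one connection per request there is no need for the `lock`, at the cost of paying the connection-open overhead on every request.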

EDIT

Following up on the code you posted: do you close the session after committing the transaction? If you don't use an ITransaction, do you flush and close the session? I'm asking because I don't see it in your code, and I see it mentioned in https://stackoverflow.com/a/43567/610650

It is also mentioned at http://nhibernate.info/doc/nh/en/index.html#session-configuration:

Also note that you may call NHibernateHelper.GetCurrentSession(); as many times as you like, you will always get the current ISession of this HTTP request. You have to make sure the ISession is closed after your unit-of-work completes, either in Application_EndRequest event handler in your application class or in a HttpModule before the HTTP response is sent.

  • Thanks for the answer. I've added some information to address your question. Please take a look. – ILya Aug 29 '12 at 14:21
  • I do exactly as in the quote from nhforge. Every conversation with the database is wrapped into a unit of work, i.e. every block of queries is executed in a transaction and no queries are executed without a transaction. My session is released at EndRequest. As CL. mentioned in the comments on the question, the SQLite documentation says that concurrent writes are impossible in WAL mode. So I think I'll give a bounty to you. But if you know how to achieve concurrent writes with SQLite (without monopoly locks), please share this knowledge. – ILya Sep 03 '12 at 08:30
  • @ILya: looking at your posted code, there is no session.Close(); in EndRequest, so my question stands: do you actually make that call? Also, yes, SQLite won't have more than one writer at a time; that was established by you from the start, and by no means am I trying to revisit it. To the best of my knowledge, SQLite will work fine with multiple writers until there are too many, and what "too many" means depends on many factors. But your question seems to imply that the problems arise not because there are more and more requests at a time, but rather because more and more time has passed. –  Sep 03 '12 at 08:44
  • I do Dispose() of my session, so this part is OK. Yep, the provider I use throws an exception after a timeout of retries calling sqlite3_step; it happens when the queue is too long. So thank you, it's clear to me now. – ILya Sep 03 '12 at 15:16
  • I gave a bounty to you, but please add to your answer that concurrent writes are impossible so it will be complete. Thank you. – ILya Sep 03 '12 at 15:17