243

Does SQLite3 safely handle concurrent access by multiple processes reading/writing from the same DB? Are there any platform exceptions to that?

anonymoose
anand
  • 10
    I forgot to mention the **bounty** goal: most answers say it's OK ("SQLite is fast enough", "SQLite handles concurrency well", etc.) but, imho, do not answer in detail / don't explain clearly what happens if two write operations arrive at exactly the same time (a theoretically very rare case). 1) Would it trigger an error and interrupt the program? or 2) Would the second write operation wait until the first one is finished? or 3) Would one of the write operations be discarded (data loss!)? 4) Something else? Knowing the limitations of concurrent writing might be useful in many situations. – Basj Jan 23 '18 at 20:49
  • 12
    @Basj In short: 2) it will wait and retry several times (configurable); 1) then trigger an error, SQLITE_BUSY; 3) you can register a callback to handle SQLITE_BUSY errors. – obgnaw Jan 30 '18 at 03:49
  • [Can multiple applications or multiple instances of the same application access a single database file at the same time?](https://www.sqlite.org/faq.html#q5) – 8ctopus Jun 30 '23 at 07:40

8 Answers

136

If most of those concurrent accesses are reads (e.g. SELECT), SQLite can handle them very well. But if you start writing concurrently, lock contention could become an issue. A lot would then depend on how fast your filesystem is, since the SQLite engine itself is extremely fast and has many clever optimizations to minimize contention, especially since SQLite 3.

For most desktop/laptop/tablet/phone applications, SQLite is fast enough as there's not enough concurrency. (Firefox uses SQLite extensively for bookmarks, history, etc.)

For server applications, somebody some time ago said that anything less than 100K page views a day could be handled perfectly by a SQLite database in typical scenarios (e.g. blogs, forums), and I have yet to see any evidence to the contrary. In fact, with modern disks and processors, 95% of web sites and web services would work just fine with SQLite.

If you want really fast read/write access, use an in-memory SQLite database. RAM is several orders of magnitude faster than disk.
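
As a caveat to the in-memory suggestion: a plain `:memory:` database is private to the connection that opened it, so it cannot be shared between processes. A minimal sketch with Python's built-in `sqlite3` module:

```python
import sqlite3

# A ':memory:' database lives entirely inside the connection that created it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
print(conn.execute("SELECT x FROM t").fetchone())  # (1,)

# A second ':memory:' connection gets a separate, empty database,
# so in-memory databases cannot be shared across processes.
other = sqlite3.connect(":memory:")
print(other.execute(
    "SELECT name FROM sqlite_master WHERE name = 't'").fetchall())  # []
```

So an in-memory database sidesteps filesystem locking, but only because a single connection owns it.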

Palec
kijin
  • 56
    OP does not ask about efficiency and speed, but about concurrent access. Web servers have nothing to do with it. Same about in memory database. – Jarekczek Jul 09 '17 at 10:17
  • 4
    You are right to an extent but efficiency/speed does play a role. Faster accesses means time spent waiting for locks is lower, thereby reducing the drawbacks of SQLite's concurrency performance. Particularly, if you have few and fast writes, the DB will not seem to have any concurrency issues at all to a user. – Caboose Sep 07 '17 at 16:02
  • 1
    how would you manage concurrent access to an in-memory sqlite database? – P-Gn Mar 03 '18 at 15:11
  • @kijin "SQLite is fast enough as there's **not enough concurrency.** " You mean there are **not many** readers and writer do their work **at the same time**? – John Oct 12 '21 at 08:25
  • Does not answer the question. – sstn Nov 05 '21 at 11:55
  • An in-memory SQLite database does not mean that reads/writes will be faster. It just means that the database is volatile (anonymous main memory vs file-backed main memory). Both of them work in RAM. – MrIo Apr 26 '23 at 01:12
85

Yes it does. Let's figure out why.

SQLite is transactional

All changes within a single transaction in SQLite either occur completely or not at all.

Such ACID support, as well as concurrent reads/writes, is provided in 2 ways: using the so-called journaling (let's call it the "old way") or write-ahead logging (let's call it the "new way").

Journaling (Old Way)

In this mode SQLite uses DATABASE-LEVEL locking. This is the crucial point to understand.

That means whenever it needs to read/write something it first acquires a lock on the ENTIRE database file. Multiple readers can co-exist and read something in parallel.

During writing it makes sure an exclusive lock is acquired and no other process is reading/writing simultaneously and hence writes are safe.

(This is known as a multiple-readers-single-writer or MRSW lock)

This is why here they're saying SQLite implements serializable transactions

Troubles

As it needs to lock the entire database every time, and everybody waits for the process handling the write, concurrency suffers and such concurrent writes/reads are of fairly low performance

Rollbacks/outages

Prior to writing something to the database file, SQLite first saves the chunk to be changed in a temporary file. If something crashes in the middle of writing into the database file, it picks up this temporary file and reverts the changes from it

Write-Ahead Logging or WAL (New Way)

In this case all writes are appended to a temporary file (write-ahead log) and this file is periodically merged with the original database. When SQLite is searching for something it would first check this temporary file and if nothing is found proceed with the main database file.

As a result, readers don’t compete with writers and performance is much better compared to the Old Way.

Caveats

SQLite heavily depends on the underlying filesystem's locking functionality, so it should be used with caution; more details here

You're also likely to run into the `database is locked` error, especially in journaled mode, so your app needs to be designed with this error in mind
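
A minimal Python sketch of those two points together: switching to WAL and being prepared for `database is locked` (the file name and table are illustrative):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path, timeout=5.0)   # wait up to 5 s on a busy lock
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # 'wal' if the filesystem supports it

try:
    with conn:  # transaction: commits on success, rolls back on error
        conn.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
        conn.execute("INSERT OR REPLACE INTO kv VALUES ('a', '1')")
except sqlite3.OperationalError as e:
    # surfaces as 'database is locked' once the timeout expires
    print("write failed:", e)
```

Note that WAL needs a real database file; it is not available for `:memory:` databases.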

Qqwy
Oleg Kuralenko
46

Yes, SQLite handles concurrency well, but it isn't the best from a performance angle. From what I can tell, there are no exceptions to that. The details are on SQLite's site: https://www.sqlite.org/lockingv3.html

This statement is of interest: "The pager module makes sure changes happen all at once, that either all changes occur or none of them do, that two or more processes do not try to access the database in incompatible ways at the same time"

Jarekczek
vcsjones
  • 2
    [Here are some comments about issues on different platforms](http://www.sqlite.org/faq.html#q5), namely NFS file systems, and Windows (although it might pertain only to old versions of Windows ...) – Nate Jun 15 '12 at 11:27
  • 2
    Is it possible to load an SQLite3 database into RAM to be used for all users in PHP? I'm guessing no with it being procedural – Anon343224user Jul 25 '13 at 19:00
  • 1
    @foxyfennec.. a starting point, though SQLite may not be the optimal db for this use case. https://www.sqlite.org/inmemorydb.html – kingPuppy Jun 16 '15 at 22:23
39

Nobody seems to have mentioned WAL (Write-Ahead Log) mode. Make sure the transactions are properly organised and, with WAL mode set on, there is no need to keep the database locked while an update is going on and people are reading things.

The only issue is that at some point the WAL needs to be re-incorporated into the main database, and it does this when the last connection to the database closes. With a very busy site you might find it takes a few seconds for all connections to close, but 100K hits per day should not be a problem.
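
The merge can also be forced without waiting for the last connection to close, via a manual checkpoint. A sketch (file and table names are illustrative):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "site.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE hits (n INTEGER)")
conn.execute("INSERT INTO hits VALUES (1)")
conn.commit()

# Fold the WAL file back into the main database now, instead of
# waiting for the last connection to close.
busy, wal_pages, moved = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
print(busy)  # 0 means the checkpoint completed without being blocked
```

`wal_checkpoint(TRUNCATE)` both checkpoints and resets the WAL file to zero length, which is useful before backing up the database file.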

akc42
  • Interesting, but it works only on a single machine, not in scenarios where the database is accessed over a network. – Bobík Jun 05 '17 at 14:00
  • 2
    It's worth mentioning that the default timeout for a writer to wait is 5 seconds, after which a `database is locked` error will be raised for the writer – mirhossein Jan 17 '19 at 16:30
24

SQLite has a readers-writer lock on the database level. Multiple connections (possibly owned by different processes) can read data from the same database at the same time, but only one can write to the database.

SQLite supports an unlimited number of simultaneous readers, but it will only allow one writer at any instant in time. For many situations, this is not a problem. Writers queue up. Each application does its database work quickly and moves on, and no lock lasts for more than a few dozen milliseconds. But there are some applications that require more concurrency, and those applications may need to seek a different solution. -- Appropriate Uses For SQLite @ SQLite.org

The readers-writer lock enables independent transaction processing and it is implemented using exclusive and shared locks on the database level.

An exclusive lock must be obtained before a connection performs a write operation on a database. After the exclusive lock is obtained, both read and write operations from other connections are blocked till the lock is released again.

Implementation details for the case of concurrent writes

SQLite has a lock table that helps it lock the database as late as possible during a write operation, to ensure maximum concurrency.

The initial state is UNLOCKED; in this state, the connection has not accessed the database yet. Even when a process has connected to a database and started a transaction with BEGIN, the connection is still in the UNLOCKED state.

After the UNLOCKED state, the next state is the SHARED state. In order to be able to read (not write) data from the database, the connection must first enter the SHARED state, by getting a SHARED lock. Multiple connections can obtain and maintain SHARED locks at the same time, so multiple connections can read data from the same database at the same time. But as long as even only one SHARED lock remains unreleased, no connection can successfully complete a write to the database.

If a connection wants to write to the database, it must first get a RESERVED lock.

Only a single RESERVED lock may be active at one time, though multiple SHARED locks can coexist with a single RESERVED lock. RESERVED differs from PENDING in that new SHARED locks can be acquired while there is a RESERVED lock. -- File Locking And Concurrency In SQLite Version 3 @ SQLite.org

Once a connection obtains a RESERVED lock, it can start processing database modification operations, though these modifications are made only in a memory buffer rather than actually written to disk. When the connection wants to commit the modification (or transaction), it is necessary to upgrade the RESERVED lock to an EXCLUSIVE lock. In order to get that lock, it must first upgrade the RESERVED lock to a PENDING lock.

A PENDING lock means that the process holding the lock wants to write to the database as soon as possible and is just waiting on all current SHARED locks to clear so that it can get an EXCLUSIVE lock. No new SHARED locks are permitted against the database if a PENDING lock is active, though existing SHARED locks are allowed to continue.

An EXCLUSIVE lock is needed in order to write to the database file. Only one EXCLUSIVE lock is allowed on the file and no other locks of any kind are allowed to coexist with an EXCLUSIVE lock. In order to maximize concurrency, SQLite works to minimize the amount of time that EXCLUSIVE locks are held. -- File Locking And Concurrency In SQLite Version 3 @ SQLite.org

So you might say SQLite safely handles concurrent access by multiple processes writing to the same DB simply because it doesn't support it! You will get SQLITE_BUSY or SQLITE_LOCKED for the second writer when it hits the retry limit.
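
To make the two-writer case concrete, here is a minimal Python sketch with two connections to the same file; `timeout=0` makes the second writer fail immediately instead of retrying:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
w1 = sqlite3.connect(path, timeout=0, isolation_level=None)
w2 = sqlite3.connect(path, timeout=0, isolation_level=None)
w1.execute("CREATE TABLE t (x INTEGER)")

w1.execute("BEGIN IMMEDIATE")          # w1 takes the RESERVED lock
w1.execute("INSERT INTO t VALUES (1)")

err = None
try:
    w2.execute("BEGIN IMMEDIATE")      # second writer cannot get RESERVED
except sqlite3.OperationalError as e:
    err = e
    print(e)                           # 'database is locked' (SQLITE_BUSY)

w1.execute("COMMIT")                   # locks released ...
w2.execute("BEGIN IMMEDIATE")          # ... now the second writer proceeds
w2.execute("INSERT INTO t VALUES (2)")
w2.execute("COMMIT")
```

With the default non-zero timeout, the second `BEGIN IMMEDIATE` would instead block and retry for up to the timeout before raising the same error, which is option 2) from the bounty comment followed by option 1).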

Palec
obgnaw
  • Thank you. An example of code with 2 writers would be super great to understand how it works. – Basj Jan 30 '18 at 09:07
  • 2
    @Basj In short, SQLite has a read-write lock on the database file; it's the same as concurrently writing to a file. And with WAL you still cannot write concurrently, but WAL can speed up writing, and reads and writes can be concurrent. WAL also introduces new locks like WAL_READ_LOCK and WAL_WRITE_LOCK. – obgnaw Jan 31 '18 at 02:21
  • 2
    Could you use a Queue and have multiple threads feed the Queue and just one thread writing to the DB using the SQL Statements in the Queue. Something like [this](https://gist.github.com/User001501/3053f26100ddf281600668fed347e518) – Gabriel Jan 31 '18 at 22:16
23

As of 2019, there are two new concurrent-write options, not yet released but available in separate branches.

"PRAGMA journal_mode = wal2"

The advantage of this journal mode over regular "wal" mode is that writers may continue writing to one wal file while the other is checkpointed.

BEGIN CONCURRENT - link to detailed doc

The BEGIN CONCURRENT enhancement allows multiple writers to process write transactions simultaneously if the database is in "wal" or "wal2" mode, although the system still serializes COMMIT commands.

When a write-transaction is opened with "BEGIN CONCURRENT", actual locking of the database is deferred until a COMMIT is executed. This means that any number of transactions started with BEGIN CONCURRENT may proceed concurrently. The system uses optimistic page-level locking to prevent conflicting concurrent transactions from being committed.

Together they are present in the begin-concurrent-wal2 branch, or each in its own branch.

Community
V.B.
  • 1
    Do we have any idea when those features will make their way into the release version? They could really come in handy for me. – Emperor Eto Apr 03 '19 at 22:49
  • 2
    No idea. You could build easily from the branches. For .NET I have a library with a low-level interface & WAL2 + begin concurrent + FTS5: https://github.com/Spreads/Spreads.SQLite – V.B. Apr 03 '19 at 23:00
  • Oh, sure thanks. I'm more wondering about stability. SQLite's pretty top notch when it comes to their releases but I don't know how risky it would be to use a branch in production code. – Emperor Eto Apr 04 '19 at 12:45
  • 2
    See this thread https://github.com/Expensify/Bedrock/issues/65 and Bedrock in general. They use it in production and pushed that `begin concurrent` stuff. – V.B. Apr 04 '19 at 13:11
  • sqlite3.OperationalError: near "CONCURRENT": syntax error – iperov Oct 18 '21 at 11:11
9

This thread is old, but I think it is worth sharing the results of my tests on SQLite: I ran 2 instances of a Python program (separate processes running the same program) executing SELECT and UPDATE statements within a transaction with an EXCLUSIVE lock and a 10-second timeout to get the lock, and the results were frustrating. Each instance ran a 10,000-step loop:

  • connect to db with exclusive lock
  • select on one row to read counter
  • update the row with new value equal to counter incremented by 1
  • close connection to db

Even though SQLite granted an exclusive lock on the transaction, the total number of cycles actually executed was not 20,000 but less (the total number of iterations over the single counter, counted across both processes). The Python program threw almost no exceptions (only once during a SELECT across 20 executions). The SQLite revision at the time of the test was 3.6.20, with Python 3.3 on CentOS 6.5. In MY opinion it is better to find a more reliable product for this kind of job, or to restrict writes to SQLite to a single unique process/thread.

Zain
asf las
  • 7
    It looks like you need to say some magic words in order to get a lock in python, as discussed here: http://stackoverflow.com/a/12848059/1048959 This despite the fact that the python sqlite documentation leads you to believe that `with con` is enough. – Dan Stahlke Mar 28 '14 at 12:16
0

It is natural to get this error when you specify a name for the db (or even an in-memory db) and have concurrent access (especially writes).
In my case, I am using SQLite for testing, and it happens because there are several tests in the same solution.
You can make two improvements:

  1. Delete the database before creating it: db.Database.EnsureDeletedAsync();
  2. Use an empty string for the connection string; in this case it will create a random name on each call:
{
  "ConnectionStrings": {
    "ConnectionType": "sqlite",
    "ConnectionString": ""
  }
}
Zach Jensz
sa.as