
Why is this simple test case inserting 100,000 rows 25 times slower with SQLAlchemy than it is using the sqlite3 driver directly? I have seen similar slowdowns in real-world applications. Am I doing something wrong?

#!/usr/bin/env python
# Why is SQLAlchemy with SQLite so slow?
# Output from this program:
# SqlAlchemy: Total time for 100000 records 10.74 secs
# sqlite3:    Total time for 100000 records  0.40 secs


import time
import sqlite3

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

Base = declarative_base()
DBSession = scoped_session(sessionmaker())

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

def init_sqlalchemy(dbname='sqlite:///sqlalchemy.db'):
    engine = create_engine(dbname, echo=False)
    DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)

def test_sqlalchemy(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer()
        customer.name = 'NAME ' + str(i)
        DBSession.add(customer)
    DBSession.commit()
    print "SqlAlchemy: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"

def init_sqlite3(dbname):
    conn = sqlite3.connect(dbname)
    c = conn.cursor()
    c.execute("DROP TABLE IF EXISTS customer")
    c.execute("CREATE TABLE customer (id INTEGER NOT NULL, name VARCHAR(255), PRIMARY KEY(id))")
    conn.commit()
    return conn

def test_sqlite3(n=100000, dbname='sqlite3.db'):
    conn = init_sqlite3(dbname)
    c = conn.cursor()
    t0 = time.time()
    for i in range(n):
        row = ('NAME ' + str(i),)
        c.execute("INSERT INTO customer (name) VALUES (?)", row)
    conn.commit()
    print "sqlite3: Total time for " + str(n) + " records " + str(time.time() - t0) + " sec"

if __name__ == '__main__':
    test_sqlalchemy(100000)
    test_sqlite3(100000)

I have tried numerous variations (see http://pastebin.com/zCmzDraU).

braddock

3 Answers


The SQLAlchemy ORM uses the unit of work pattern when synchronizing changes to the database. This pattern goes far beyond simple "inserts" of data: attributes assigned on objects are received through an attribute instrumentation system that tracks changes as they are made; all inserted rows are tracked in an identity map, which means that for each row SQLAlchemy must retrieve its "last inserted id" if not already given; and rows to be inserted are scanned and sorted for dependencies as needed. Objects are also subject to a fair degree of bookkeeping to keep all of this running, which for a very large number of rows at once can mean an inordinate amount of time spent on large data structures, so it's best to chunk them.

Basically, the unit of work automates the task of persisting a complex object graph into a relational database with no explicit persistence code, and that automation has a price.

So ORMs are basically not intended for high-performance bulk inserts. This is the whole reason SQLAlchemy offers two distinct components: if you look at http://docs.sqlalchemy.org/en/latest/index.html you'll see two halves to the index page, one for the ORM and one for the Core. You cannot use SQLAlchemy effectively without understanding both.

For the use case of fast bulk inserts, SQLAlchemy provides the Core, the SQL generation and execution system that the ORM builds on top of. Using this system effectively, we can produce an INSERT that is competitive with the raw SQLite version. The script below illustrates this, as well as an ORM version that pre-assigns primary key identifiers so that the ORM can use executemany() to insert the rows. Both ORM versions also chunk the flushes at 1000 records at a time, which has a significant performance impact.

Runtimes observed here are:

SqlAlchemy ORM: Total time for 100000 records 16.4133379459 secs
SqlAlchemy ORM pk given: Total time for 100000 records 9.77570986748 secs
SqlAlchemy Core: Total time for 100000 records 0.568737983704 secs
sqlite3: Total time for 100000 records 0.595796823502 secs

script:

import time
import sqlite3

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

Base = declarative_base()
DBSession = scoped_session(sessionmaker())

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

def init_sqlalchemy(dbname='sqlite:///sqlalchemy.db'):
    global engine
    engine = create_engine(dbname, echo=False)
    DBSession.remove()
    DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)

def test_sqlalchemy_orm(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer()
        customer.name = 'NAME ' + str(i)
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print "SqlAlchemy ORM: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"

def test_sqlalchemy_orm_pk_given(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer(id=i+1, name="NAME " + str(i))
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print "SqlAlchemy ORM pk given: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"

def test_sqlalchemy_core(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    engine.execute(
        Customer.__table__.insert(),
        [{"name":'NAME ' + str(i)} for i in range(n)]
    )
    print "SqlAlchemy Core: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"

def init_sqlite3(dbname):
    conn = sqlite3.connect(dbname)
    c = conn.cursor()
    c.execute("DROP TABLE IF EXISTS customer")
    c.execute("CREATE TABLE customer (id INTEGER NOT NULL, name VARCHAR(255), PRIMARY KEY(id))")
    conn.commit()
    return conn

def test_sqlite3(n=100000, dbname='sqlite3.db'):
    conn = init_sqlite3(dbname)
    c = conn.cursor()
    t0 = time.time()
    for i in range(n):
        row = ('NAME ' + str(i),)
        c.execute("INSERT INTO customer (name) VALUES (?)", row)
    conn.commit()
    print "sqlite3: Total time for " + str(n) + " records " + str(time.time() - t0) + " sec"

if __name__ == '__main__':
    test_sqlalchemy_orm(100000)
    test_sqlalchemy_orm_pk_given(100000)
    test_sqlalchemy_core(100000)
    test_sqlite3(100000)

See also: http://docs.sqlalchemy.org/en/latest/faq/performance.html

zzzeek
  • Thanks for the explanation. Is engine.execute() significantly different than DBSession.execute()? I had tried an insert expression using DBSession.execute() but it was not significantly faster than the full ORM version. – braddock Aug 02 '12 at 02:20
  • engine.execute() and DBSession.execute() are mostly the same, except DBSession.execute() will wrap a given plain SQL string in text(). It makes a *huge* difference if you are using execute/executemany syntax. pysqlite is written entirely in C and has almost no latency, so any Python overhead added to its execute() call will show up palpably in profiling. Even a single pure-Python function call is significantly slower than a pure C function call like pysqlite's execute(). You also need to consider that SQLAlchemy expression constructs go through a compilation step per execute() call. – zzzeek Aug 02 '12 at 02:47
  • Thanks for the great explanation. When you were building SQLAlchemy did you create the core first and then build the ORM on top, or did you create them in parallel? – culix Oct 21 '13 at 04:06
  • The core was created first, though after the first few weeks, once the core proof of concept worked (and it was *terrible*), the ORM and core were developed in parallel from that point forward. – zzzeek Oct 21 '13 at 22:12
  • I really don't know why anyone would choose the ORM model then. Most projects using a database will have 10,000+ rows. Maintaining two update methods (one for a single row and one for bulk) just doesn't sound smart. – Peter Moore Jan 25 '14 at 05:08
  • Will have... 10,000 rows they need to insert all at once in bulk, all the time? Not particularly. The vast majority of web applications, for example, probably exchange half a dozen rows per request. The ORM is pretty popular with some very famous and high-traffic websites. – zzzeek Jan 25 '14 at 16:35
  • Man, I wish the first paragraph were more readable... :( Regards, on behalf of all non-native English users. – kmonsoor Jun 30 '15 at 06:33
  • @PeterMoore I do wonder whether adding a parameter to session.add to say it should be a "fast insert" might be a cool API addition. But it might not work mixed with the standard ORM stuff happening in the same session commit. – Rob Grant Aug 04 '16 at 21:50
  • @zzzeek Mike, thanks for the awesome response and for SQLAlchemy. It's a beautiful piece of software. My question: if I use SQLAlchemy Core, later create a mapper, and then use sessionmaker etc. to create objects with a thread-local session, will it carry the same overhead as using declarative base? Or is Core + sessionmaker equal to Core alone? – doesnt_matter Jun 13 '17 at 06:47
  • @PeterMoore - agree. I am designing a basic Flask app that takes bulk updates from CSV files and persists them. Got really far with SQLAlchemy just to hit this wall. If it can't do bulk inserts well, it's not worth using IMO. – Adam Hughes Mar 23 '20 at 16:50

Excellent answer from @zzzeek. For those wondering about the same stats for queries, I've modified @zzzeek's code slightly to query those same records right after inserting them and then convert them to a list of dicts.

Here are the results:

SqlAlchemy ORM: Total time for 100000 records 11.9210000038 secs
SqlAlchemy ORM query: Total time for 100000 records 2.94099998474 secs
SqlAlchemy ORM pk given: Total time for 100000 records 7.51800012589 secs
SqlAlchemy ORM pk given query: Total time for 100000 records 3.07699990273 secs
SqlAlchemy Core: Total time for 100000 records 0.431999921799 secs
SqlAlchemy Core query: Total time for 100000 records 0.389000177383 secs
sqlite3: Total time for 100000 records 0.459000110626 secs
sqlite3 query: Total time for 100000 records 0.103999853134 secs

It's interesting to note that querying with bare sqlite3 is still about 3 times faster than with SQLAlchemy Core. I guess that's the price you pay for having a ResultProxy returned instead of a bare sqlite3 row.

SQLAlchemy Core is about 8 times faster than the ORM. So querying with the ORM is a lot slower no matter what.
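
If you need bare-sqlite3 fetch speed for a particular hot query but still want the engine around for everything else, one option (just a sketch on my part, not included in the timings above) is to drop down to the raw DBAPI connection that the engine manages:

conn = engine.raw_connection()      # the underlying pysqlite connection
try:
    cursor = conn.cursor()
    cursor.execute("SELECT id, name FROM customer")
    rows = cursor.fetchall()        # plain sqlite3 rows, no ResultProxy wrapping
finally:
    conn.close()                    # returns the connection to the pool

This bypasses SQLAlchemy's row processing entirely, so it should land in the same ballpark as the bare sqlite3 numbers.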

Here's the code I used:

import time
import sqlite3

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.sql import select

Base = declarative_base()
DBSession = scoped_session(sessionmaker())

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

def init_sqlalchemy(dbname='sqlite:///sqlalchemy.db'):
    global engine
    engine = create_engine(dbname, echo=False)
    DBSession.remove()
    DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)

def test_sqlalchemy_orm(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer()
        customer.name = 'NAME ' + str(i)
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print "SqlAlchemy ORM: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"
    t0 = time.time()
    q = DBSession.query(Customer)
    dict = [{'id':r.id, 'name':r.name} for r in q]
    print "SqlAlchemy ORM query: Total time for " + str(len(dict)) + " records " + str(time.time() - t0) + " secs"


def test_sqlalchemy_orm_pk_given(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer(id=i+1, name="NAME " + str(i))
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print "SqlAlchemy ORM pk given: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"
    t0 = time.time()
    q = DBSession.query(Customer)
    dict = [{'id':r.id, 'name':r.name} for r in q]
    print "SqlAlchemy ORM pk given query: Total time for " + str(len(dict)) + " records " + str(time.time() - t0) + " secs"

def test_sqlalchemy_core(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    engine.execute(
        Customer.__table__.insert(),
        [{"name":'NAME ' + str(i)} for i in range(n)]
    )
    print "SqlAlchemy Core: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"
    conn = engine.connect()
    t0 = time.time()
    sql = select([Customer.__table__])
    q = conn.execute(sql)
    dict = [{'id':r[0], 'name':r[0]} for r in q]
    print "SqlAlchemy Core query: Total time for " + str(len(dict)) + " records " + str(time.time() - t0) + " secs"

def init_sqlite3(dbname):
    conn = sqlite3.connect(dbname)
    c = conn.cursor()
    c.execute("DROP TABLE IF EXISTS customer")
    c.execute("CREATE TABLE customer (id INTEGER NOT NULL, name VARCHAR(255), PRIMARY KEY(id))")
    conn.commit()
    return conn

def test_sqlite3(n=100000, dbname='sqlite3.db'):
    conn = init_sqlite3(dbname)
    c = conn.cursor()
    t0 = time.time()
    for i in range(n):
        row = ('NAME ' + str(i),)
        c.execute("INSERT INTO customer (name) VALUES (?)", row)
    conn.commit()
    print "sqlite3: Total time for " + str(n) + " records " + str(time.time() - t0) + " sec"
    t0 = time.time()
    q = conn.execute("SELECT * FROM customer").fetchall()
    dict = [{'id':r[0], 'name':r[0]} for r in q]
    print "sqlite3 query: Total time for " + str(len(dict)) + " records " + str(time.time() - t0) + " secs"


if __name__ == '__main__':
    test_sqlalchemy_orm(100000)
    test_sqlalchemy_orm_pk_given(100000)
    test_sqlalchemy_core(100000)
    test_sqlite3(100000)

I also tested without converting the query result to dicts and the stats are similar:

SqlAlchemy ORM: Total time for 100000 records 11.9189999104 secs
SqlAlchemy ORM query: Total time for 100000 records 2.78500008583 secs
SqlAlchemy ORM pk given: Total time for 100000 records 7.67199993134 secs
SqlAlchemy ORM pk given query: Total time for 100000 records 2.94000005722 secs
SqlAlchemy Core: Total time for 100000 records 0.43700003624 secs
SqlAlchemy Core query: Total time for 100000 records 0.131000041962 secs
sqlite3: Total time for 100000 records 0.500999927521 secs
sqlite3 query: Total time for 100000 records 0.0859999656677 secs

Querying with SQLAlchemy Core is about 20 times faster than with the ORM.
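
If you want to stay on the ORM side but avoid most of that overhead, a middle ground worth trying (my own sketch, not part of the timings above) is to query individual columns instead of whole Customer objects; the session then returns lightweight tuples and skips the identity-map bookkeeping:

t0 = time.time()
# column-based query: no Customer instances are constructed
q = DBSession.query(Customer.id, Customer.name)
rows = [{'id': r.id, 'name': r.name} for r in q]
print("SqlAlchemy ORM column query: Total time for " + str(len(rows)) + " records " + str(time.time() - t0) + " secs")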

It's important to note that these tests are very superficial and shouldn't be taken too seriously. I might be missing some obvious tricks that would change the stats completely.

The best way to measure performance improvements is directly in your own application. Don't take my stats for granted.

Alex
  • Just wanted to let you know that in 2019 with the latest versions of everything, I don't observe significant relative deviations from your timings. Still, I am also curious if some "trick" is missed. – PascalVKooten Mar 02 '19 at 11:45

I would try a test with an insert expression and then benchmark.

It will probably still be slower because of the OR-mapper overhead, but I would hope not that much slower.

Would you mind trying it and posting the results? This is very interesting stuff.
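
Something along these lines, reusing the Customer table and init_sqlalchemy() setup from the question (untested on my end; just to show what I mean by an insert expression):

def test_sqlalchemy_insert_expression(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    ins = Customer.__table__.insert()
    for i in range(n):
        # a Core insert executed through the session: no ORM bookkeeping,
        # but still one statement compilation per execute() call
        DBSession.execute(ins, {"name": "NAME " + str(i)})
    DBSession.commit()
    print("SqlAlchemy Insert: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs")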

Edmon
  • Only **10% faster** using an insert expression. I wish I knew why: SqlAlchemy Insert: Total time for 100000 records 9.47 secs – braddock Aug 02 '12 at 00:52
  • Not to bug you with this, but if you are interested, maybe time the DB-session-related code after the inserts using timeit: http://docs.python.org/library/timeit.html – Edmon Aug 02 '12 at 01:07
  • I have the same problem with the insert expression; it's dead slow. See http://stackoverflow.com/questions/11887895/sqlalchemy-raw-sql-vs-expression-language-statements – dorvak Aug 09 '12 at 17:08