I have a Python script, run by multiple users across multiple projects, which (among other things) updates a SQLite database, and I see rare but occasional "database is locked" errors. The original insert commands checked for duplicate data, such as:
conn.execute("insert into j_table (jid, rid, rname, completed, elapstime, dir) select ?, ?, ?, ?, ?, ? where not exists ( select * from j_table where j_id = ?);",(i[0], rid, i[2], i[3], i[4], i[5], i[0]))
conn.commit()
but I changed the code to:
job_arr = []
for i in jvals:
    # only queue rows whose jid is not already in j_table
    z = conn2.execute("select count(*) from j_table where jid = ?;", (i[0],)).fetchone()[0]
    if z == 0:
        job_arr.append((i[0], rid, i[2], i[3], i[4], i[5]))
and then:
conn2.execute("BEGIN")
conn2.executemany("insert into j_table (jid, rid, rname, completed, elapstime, dir) values (?, ?, ?, ?, ?, ?);", (job_arr))
conn2.commit()
The code runs in about the same overall time (~20 seconds), but the insert/commit itself now takes less than a second. My question is: should shortening the window during which the insert/commit holds the database help with the "database is locked" problem? Or would I need to do something like reopening the database file in read-only vs. write mode for the different sets of commands? Does opening the file in write mode itself cause the "database is locked" error?
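(By read-only mode I mean something like SQLite's URI syntax for a separate query-only connection; the filename here is a placeholder for my actual database path:)

import sqlite3

# read-only connection via a SQLite URI filename; "jobs.db" stands in for the real path
conn_ro = sqlite3.connect("file:jobs.db?mode=ro", uri=True)
z = conn_ro.execute("select count(*) from j_table where jid = ?;", (i[0],)).fetchone()[0]
conn_ro.close()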
Also (less important): is that "BEGIN" statement needed, or is a transaction implied if it is omitted?
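If I understand the sqlite3 docs correctly, with the default isolation_level the module already opens a transaction implicitly before INSERT/UPDATE/DELETE statements, which is why I'm unsure the explicit BEGIN adds anything. A minimal sketch of what I mean (the path is a placeholder):

import sqlite3

conn2 = sqlite3.connect("jobs.db")  # placeholder path, default isolation_level
# no explicit BEGIN here: as I read the docs, the module starts a transaction
# implicitly before the first data-modifying statement, and commit() ends it
conn2.executemany("insert into j_table (jid, rid, rname, completed, elapstime, dir) values (?, ?, ?, ?, ?, ?);", job_arr)
conn2.commit()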