My issue refers to this old post about importing a data set into SQLite in chunks, wrapping each chunk in a transaction ("SQLite transaction for CSV importing"):
import csv, sqlite3, time

def chunks(data, rows=10000):
    # split the full list of rows into slices of `rows` rows each
    for i in range(0, len(data), rows):
        yield data[i:i+rows]

if __name__ == "__main__":
    t = time.time()
    con = sqlite3.connect('test.db')
    cur = con.cursor()
    cur.execute("DROP TABLE IF EXISTS sensor;")
    cur.execute("CREATE TABLE sensor(key INT, reading REAL);")

    filename = 'dummy.csv'
    reader = csv.reader(open(filename, "r"))
    divdata = chunks(list(reader))  # list(reader) loads the whole file into memory

    for chunk in divdata:
        cur.execute('BEGIN TRANSACTION')
        for col1, col2 in chunk:
            cur.execute('INSERT INTO sensor (key, reading) VALUES (?, ?)', (col1, col2))
        con.commit()
Here list(reader) reads the whole file into memory, and only then is it chopped up by the chunks function. I am looking for a solution that instead reads the file in chunks (of, say, 10k rows) and inserts each chunk into the SQLite table as above, until the whole file has been inserted. How do I modify the above?
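For context, below is a minimal sketch of the kind of streaming approach I have in mind, assuming itertools.islice can be used to pull 10k rows at a time from the open csv.reader (the chunked_reader helper name is just for illustration). I am not sure whether this is correct or the idiomatic way to do it:

import csv, itertools, sqlite3, time

def chunked_reader(reader, rows=10000):
    # lazily yield lists of up to `rows` rows from an open csv.reader,
    # so only one chunk is held in memory at a time
    while True:
        chunk = list(itertools.islice(reader, rows))
        if not chunk:
            break
        yield chunk

if __name__ == "__main__":
    t = time.time()
    con = sqlite3.connect('test.db')
    cur = con.cursor()
    cur.execute("DROP TABLE IF EXISTS sensor;")
    cur.execute("CREATE TABLE sensor(key INT, reading REAL);")

    with open('dummy.csv', newline='') as f:
        reader = csv.reader(f)
        for chunk in chunked_reader(reader):
            # each chunk is a list of [key, reading] rows; one commit per chunk
            cur.executemany('INSERT INTO sensor (key, reading) VALUES (?, ?)', chunk)
            con.commit()

    con.close()
    print(time.time() - t)

The intent is that only one 10k-row chunk ever sits in memory, instead of the whole file.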