I'm having trouble with SQLite insert performance, even with transactions. My Android app receives approximately 23,000 rows from a web service, and I have to insert them into a single table.

The web service is paginated, so I receive about 2000 rows per request and wrap those 2000 inserts in a single transaction. Once those inserts are done, I send the next request to the web service and again use a new transaction for the next 2000 rows.
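Roughly, each batch looks like this (a simplified sketch rather than the exact code; the items table, the Row class and its getters are placeholder names):

    import android.content.ContentValues;
    import android.database.sqlite.SQLiteDatabase;
    import java.util.List;

    // Simplified sketch of one ~2000-row batch; "items", Row and its getters are placeholders.
    void insertBatch(SQLiteDatabase db, List<Row> batch) {
        db.beginTransaction();
        try {
            for (Row row : batch) {
                ContentValues values = new ContentValues();
                values.put("code", row.getCode());
                values.put("name", row.getName());
                db.insert("items", null, values);  // one insert per row, inside the transaction
            }
            db.setTransactionSuccessful();         // mark the batch so endTransaction() commits it
        } finally {
            db.endTransaction();                   // commits here, or rolls back on an exception
        }
    }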

At the beginning it works fine and does a lot of inserts per second, but over time it gets slower and slower, ending up at 3 or 4 inserts per second until it completes the 23,000 rows.

Is the size of the table the problem? Do inserts get slower as the table grows? Is there any way to improve performance for this amount of data?

Thanks for your help.

Luis Mejia
  • Do you have any indexes which could be slowing it down? And are you sure you're only inserting 2000 at a time, and not (all data received so far) each time? – Graham Borland Jul 24 '12 at 22:41
  • This sounds like a transaction working set expanding in the absence of a COMMIT. – DaveE Jul 24 '12 at 22:46
  • Well, it all depends on the data you are inserting too; strings will slow your db down. What is the size of those 2000 rows in terms of bytes? What is the size difference in the DB file after inserting 2000 rows? I have a db with 388,000 rows and it's 64M in file size, but the data is all doubles. – JPM Jul 24 '12 at 22:48
  • Can you post the snippet of code that actually starts the transaction and does the insert? – twaddington Jul 24 '12 at 22:54
  • Why are you downloading 23,000 records and trying to insert them? Have a server generate a SQLite database that contains the 23,000 records, and download the database. – CommonsWare Jul 24 '12 at 22:58
  • Hello. Thanks for your replies. Graham, I don't have any indexes, and I'm pretty sure I'm committing the 2000 inserts every time the loop ends. JPM, the size of every request is 200KB, approx. 1800 rows, and most of the fields are strings. – Luis Mejia Jul 24 '12 at 22:59
  • I did a test: when about 15,000 inserts are done, I force-close the app to make sure it's not about the app and memory performance. When I reopen my app and start syncing again, it keeps the slowness it had accumulated... so that's why I think it's the size of the table. – Luis Mejia Jul 24 '12 at 23:07
  • Transactions may be slowing you down, by default it's one per INSERT. Try grouping many rows in a BEGIN,COMMIT. – Diego Torres Milano Jul 24 '12 at 23:32
  • dtmilano, thanks for your reply. That's what I'm doing. Like I said before, I'm using one transaction to wrap about 2000 inserts. – Luis Mejia Jul 25 '12 at 00:19
  • What is primary key in this table? Is it a composite key or a single column? What type? – pawelzieba Jul 25 '12 at 08:14
  • Hello pawelzieba, thanks for your reply. It is a table of products, and this is the PK: ItemCode VARCHAR( 20 ) PRIMARY KEY NOT NULL. Like I said before, we don't use indexes. – Luis Mejia Jul 25 '12 at 13:35
  • @LuisMejia Do you have existing items in the table prior to your bulk inserts? Please post your schema and some example data. – Tom Kerr Jul 25 '12 at 21:30
  • Tom Kerr, thanks for your help. Here is my table: CREATE TABLE OITM ( ItemCode VARCHAR( 20 ) PRIMARY KEY NOT NULL, ItemName VARCHAR( 100 ), ItmsGrpCod INT, PrchseItem VARCHAR(1), uri VARCHAR, tempId GUID, IsDeleted INTEGER DEFAULT '0', IsDirty INTEGER DEFAULT '0' ); – Luis Mejia Jul 26 '12 at 15:23
  • There's no data before the inserts. Here is an example of a row: 'ABRASIVO101','DISCO CORTE METALES 7X 1/8','138','Y','Server/Service/ArticuloScopeSyncService.svc/OITM(ItemCode='ABRASIVO101')',null,0,0 – Luis Mejia Jul 26 '12 at 15:33

2 Answers


There is an excellent thread covering SQLite performance in the question How Do I Improve The Performance of SQLite?. I would at least go for prepared statements if you are not using them already.
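On Android that means compiling the INSERT once and rebinding it for each row with SQLiteStatement, instead of building a new statement (or ContentValues) per insert. A sketch against the OITM schema posted in the comments, where Item and its getters are placeholder names:

    import android.database.sqlite.SQLiteDatabase;
    import android.database.sqlite.SQLiteStatement;
    import java.util.List;

    // Sketch: compile the INSERT once, rebind it per row. Item and its getters are placeholders.
    void insertBatch(SQLiteDatabase db, List<Item> batch) {
        SQLiteStatement stmt = db.compileStatement(
                "INSERT INTO OITM (ItemCode, ItemName, ItmsGrpCod, PrchseItem, uri, tempId, IsDeleted, IsDirty) "
              + "VALUES (?, ?, ?, ?, ?, ?, ?, ?)");
        db.beginTransaction();
        try {
            for (Item item : batch) {
                stmt.clearBindings();
                stmt.bindString(1, item.getItemCode());  // bind indexes are 1-based
                stmt.bindString(2, item.getItemName());
                stmt.bindLong(3, item.getItmsGrpCod());
                stmt.bindString(4, item.getPrchseItem());
                stmt.bindString(5, item.getUri());
                stmt.bindString(6, item.getTempId());
                stmt.bindLong(7, 0);                     // IsDeleted
                stmt.bindLong(8, 0);                     // IsDirty
                stmt.executeInsert();
            }
            db.setTransactionSuccessful();
        } finally {
            db.endTransaction();
            stmt.close();
        }
    }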

However, as you are on Android, I am guessing that you might be running into some I/O bottlenecks writing to flash memory. Try running the code on a couple of different devices from different manufacturers to see if there is any extreme variation.

David Pettersson

You should partition the data: insert it in small chunks of about 100 rows at a time, so the process stays smooth and you don't risk losing data.

Another thing: I'm assuming you are doing the inserts on an asynchronous (background) thread rather than the UI thread.

As a last resort, you could also use a Service to consume all the data at once.
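A minimal sketch of that chunking idea (rows, Item and insertRow() are placeholders for your own types and existing insert code):

    import android.database.sqlite.SQLiteDatabase;
    import java.util.List;

    // Sketch: commit in chunks of 100 rows; Item and insertRow() are placeholders.
    static final int CHUNK_SIZE = 100;

    void insertInChunks(SQLiteDatabase db, List<Item> rows) {
        for (int start = 0; start < rows.size(); start += CHUNK_SIZE) {
            List<Item> chunk = rows.subList(start, Math.min(start + CHUNK_SIZE, rows.size()));
            db.beginTransaction();
            try {
                for (Item item : chunk) {
                    insertRow(db, item);   // your existing single-row insert
                }
                db.setTransactionSuccessful();
            } finally {
                db.endTransaction();       // each chunk commits independently
            }
        }
    }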

Vivek Shah