
I am trying to import a table from an old database (MS Access) into MySQL Server using TCRBatchMove in Delphi 2007.

The program fetches data from the legacy database over an ODBC connection and stores it on the local hard drive using TADOTable.SaveToFile(). The second part of the program reads this file into another TADOTable and uses TCRBatchMove to transfer it to a MySQL server (via DevArt's TMyTable). In this process the batch move appears to be extremely slow for some reason.
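For reference, here is roughly how the two parts are wired up (simplified sketch; component, file, and table names are placeholders, and the TCRBatchMove property names are taken from DevArt's documentation, so check them against your version):

// Part 1: pull the legacy table over ODBC and persist it locally.
ADOTable1.Connection := ADOConnection1;   // ODBC connection to MS Access
ADOTable1.TableName := 'LegacyTable';     // placeholder table name
ADOTable1.Open;
ADOTable1.SaveToFile('C:\temp\legacy.adtg', pfADTG);
ADOTable1.Close;

// Part 2: reload the file and push it to MySQL with TCRBatchMove.
ADOTable2.LoadFromFile('C:\temp\legacy.adtg');
MyTable1.TableName := 'target_table';     // DevArt TMyTable
CRBatchMove1.Source := ADOTable2;
CRBatchMove1.Destination := MyTable1;
CRBatchMove1.Mode := bmAppend;            // append-only copy
CRBatchMove1.Execute;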

The data in the following trial is about 100,000 records, each with about 120 fields. Most of the fields are integers or VARCHARs (each VARCHAR shorter than 32 characters).

The performance figures I obtained are:

Time taken to bring the data to a local file over the ODBC connection: 17 seconds
Time taken to load the data from the local file into TADOTable: 3 seconds
Time taken by TCRBatchMove to move the data from TADOTable to TMyTable: > 30 minutes

The MySQL server is running locally on the development machine (an i7 at 2.8 GHz), and the database is otherwise very snappy.

Why is the batch move so slow at pushing data to the MySQL server? Is there a way to speed up this task, or is there a better way to accomplish this?

ssh
  • Hard to say without seeing the code. It might have something to do with the way you insert the data into TMyTable, and subsequently into MySQL; for example, if it is triggering an index update on MySQL for every row. – Guillem Vicens Dec 18 '12 at 21:13
  • Are you using a transaction? It may speed up the insert a lot. – Arnaud Bouchez Dec 18 '12 at 22:04
  • MySQL has a statement called `load data infile`, see: http://dev.mysql.com/doc/refman/5.1/en/load-data.html You can use that to measure the fastest possible insert time. This will give you a baseline for the insert time into MySQL and allow you to pinpoint whether the delay is in MySQL or Delphi. If you have the source for TMyTable, you can use a profiler as well. – Johan Dec 19 '12 at 15:32

1 Answer


Not really an answer, but I'm running out of space in the comments.

MySQL has a statement called LOAD DATA INFILE;
see: http://dev.mysql.com/doc/refman/5.1/en/load-data.html

You can use that to measure the fastest possible insert time. This will give you a baseline for the insert time into MySQL and allow you to pinpoint whether the delay is in MySQL or in Delphi. If you have the source for TMyTable, you can use a profiler as well.
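From Delphi, you could fire that off through the DevArt connection along these lines (untested sketch; assumes the data has been exported to CSV first and that local_infile is enabled on the server; file and table names are hypothetical):

// Baseline test: bulk-load a CSV dump directly, bypassing TCRBatchMove.
MyConnection1.ExecSQL(
  'LOAD DATA LOCAL INFILE ''C:/temp/legacy.csv'' ' +
  'INTO TABLE target_table ' +
  'FIELDS TERMINATED BY '','' ENCLOSED BY ''"'' ' +
  'LINES TERMINATED BY ''\r\n''', []);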

Another option is to download the ZEOS data access components at:
http://sourceforge.net/projects/zeoslib/
If there's some snafu in the component you're using, a change of toolset might fix the problem. (Devart's components are usually excellent, though.)

On the MySQL side you can disable index updates before the bulk insert and re-enable them afterwards. If you have a lot of inserts, that usually works out faster.
See: https://stackoverflow.com/a/9524988/650492
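For a MyISAM table the index toggle is explicit; from Delphi it could look like this (a sketch; the table name is hypothetical, and note that InnoDB ignores DISABLE KEYS):

// Suspend non-unique index maintenance during the bulk insert (MyISAM only).
MyConnection1.ExecSQL('ALTER TABLE target_table DISABLE KEYS', []);
// ... run the batch move / inserts here ...
MyConnection1.ExecSQL('ALTER TABLE target_table ENABLE KEYS', []); // rebuilds indexes in one pass

For InnoDB, the session settings below serve a similar purpose: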

SET autocommit=0; 
SET unique_checks=0; 
SET foreign_key_checks=0;

-- your inserts here

SET autocommit=1; 
SET unique_checks=1; 
SET foreign_key_checks=1;
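From Delphi, those settings can be applied around the batch move, which also covers the transaction suggestion from the comments. A rough sketch (assumes a DevArt TMyConnection named MyConnection1; the ExecSQL signature may differ between versions):

// Relax checks and batch everything into one transaction for the bulk load.
MyConnection1.ExecSQL('SET autocommit=0', []);
MyConnection1.ExecSQL('SET unique_checks=0', []);
MyConnection1.ExecSQL('SET foreign_key_checks=0', []);
try
  CRBatchMove1.Execute;                  // the slow bulk insert
  MyConnection1.ExecSQL('COMMIT', []);   // commit the single big transaction
finally
  // Restore normal behaviour even if the transfer fails.
  MyConnection1.ExecSQL('SET autocommit=1', []);
  MyConnection1.ExecSQL('SET unique_checks=1', []);
  MyConnection1.ExecSQL('SET foreign_key_checks=1', []);
end;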
Johan
  • You may be right that it could be because of index rebuilding. I could get much better performance by using batch inserts of 1000 rows in each insert statement. I will try to disable indexes and try that as well. – ssh Dec 19 '12 at 19:32
  • +1, since you give the OP various options on how to solve his problem. I would suggest elaborating on the Zeos part, though (why is it an option that can help?). – Guillem Vicens Dec 20 '12 at 07:29
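For later readers: the multi-row INSERT approach ssh mentions looks roughly like this in Delphi (a sketch; table, column, and array names are hypothetical, and in production you would want parameters rather than string concatenation):

var
  SQL: string;
  i: Integer;
begin
  // One INSERT per chunk of rows instead of one INSERT per record.
  // Format and QuotedStr are in SysUtils.
  SQL := 'INSERT INTO target_table (id, name) VALUES ';
  for i := 0 to High(Ids) do
  begin
    SQL := SQL + Format('(%d, %s)', [Ids[i], QuotedStr(Names[i])]);
    if i < High(Ids) then
      SQL := SQL + ',';
  end;
  MyConnection1.ExecSQL(SQL, []);
end;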