I'm doing some performance tests on a MySQL database. All of the tables use the InnoDB engine, and the MySQL config is almost the default one, with no special flags set.
I have a test.sql file with 64,000 SQL INSERT statements, each in the format INSERT INTO TableName(column1, column2) VALUES (value1, value2).
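For reference, a file of that shape can be generated with a short script. This is only a sketch; TableName, column1, and column2 are the placeholder names from the format above, and the values are synthetic:

```shell
# Generate 64,000 single-row INSERT statements into test.sql.
# Table and column names are placeholders; values are synthetic.
for i in $(seq 1 64000); do
  echo "INSERT INTO TableName(column1, column2) VALUES ($i, $((i * 2)));"
done > test.sql

wc -l test.sql   # sanity check: 64000 lines
```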
I run this SQL file through the mysql client by executing the following line in the command prompt:
mysql -u username -ppassword db_name < test.sql
With a single client, the whole file finishes in about a minute.
Next, I split the 64,000 statements into two files of 32,000 rows each. From a .bat file I launch two mysql clients simultaneously, one importing the first file and the other the second. Execution time drops to 34 seconds.
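The split-and-run-in-parallel step can be sketched as follows. The original setup used a Windows .bat file; this is a rough Unix-shell equivalent, with username, password, and db_name being the placeholders from the command above (the small generated test.sql here is just so the snippet is self-contained):

```shell
# Rough Unix-shell equivalent of the parallel import (original used a .bat file).
N=2                                   # number of parallel clients

# Stand-in input so the sketch is self-contained; in the real test this
# is the full 64,000-statement test.sql.
seq 1 100 | sed 's/.*/INSERT INTO TableName(column1, column2) VALUES (&, &);/' > test.sql

# Split into N chunks of whole lines: part_00, part_01, ...
split -n l/$N -d test.sql part_

# Launch one mysql client per chunk; & runs them concurrently.
for f in part_*; do
  mysql -u username -ppassword db_name < "$f" &
done
wait                                   # return once every import has finished
```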
I repeated the split a few more times, and the times drop drastically. The measurements are taken by adding a timestamp to each inserted row and subtracting the minimum timestamp from the maximum:
- 1 client - 58 sec
- 2 clients - 34 sec
- 4 clients - 18.5 sec
- 8 clients - 14.68 sec
- 16 clients - 9.79 sec
- 32 clients - 9.14 sec
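The timing method above boils down to max(timestamp) - min(timestamp) over the inserted rows. Here is an illustration with mock epoch-second values; against MySQL itself the equivalent would be a query along the lines of SELECT TIMESTAMPDIFF(SECOND, MIN(ts), MAX(ts)) FROM TableName; where ts is an assumed name for the timestamp column:

```shell
# Mock per-row insert timestamps (epoch seconds) standing in for the
# timestamp column each row receives during the import.
printf '%s\n' 1700000000 1700000012 1700000058 1700000034 > ts.txt

# Elapsed time = max(ts) - min(ts), the same calculation as in the test.
awk 'NR==1 {min=max=$1}
     {if ($1<min) min=$1; if ($1>max) max=$1}
     END {print max - min}' ts.txt   # prints 58
```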
Before starting the measurements, my guess was that, as long as the total number of statements stays the same, the overall time would be roughly the same no matter how many files it is split across. I'm not really interested in speeding up the whole import; I'm curious what makes the times drop so much.
The tests are run on a machine powered by this Xeon processor.