
So I am trying to insert 250,000 rows into my database. As this is just sample data, I only need the x and y coordinates, which change (hence the for loops), and the terrain, which is simply 'water'.

But this is taking forever. Anyone know how I can make this happen faster?

ini_set('max_execution_time', 70000); // allow the script to run long enough

$conn = mysql_connect('localhost', 'root', '') or die(mysql_error());
mysql_select_db('hol', $conn) or die(mysql_error());

// 500 x 500 grid = 250,000 individual INSERT queries
for ($y = 0; $y < 500; $y++) {
    for ($x = 0; $x < 500; $x++) {
        mysql_query("INSERT INTO map(x, y, terrain) VALUES($x, $y, 'water')");
    }
}
WojtekT
josmith
  • "quickly" and "250,000" don't go well hand in hand, especially not with 250k individual queries! Take @yes123's advice and insert all your values using 1 query, or fewer at least if the string gets too long. – Bojangles Jun 10 '12 at 21:11
  • Why do I get the feeling that what you really need is to import data from another database? – tereško Jun 10 '12 at 21:13
  • You are simply wrong, I am afraid. It's just about inserting some sample data, provided above. – josmith Jun 10 '12 at 21:19

4 Answers


You might do better using the MySQL LOAD DATA INFILE statement. Batch loading tends to be faster than constructing massive INSERT statement strings. Write your data to a file and do a single LOAD.

http://dev.mysql.com/doc/refman/5.1/en/load-data.html

Insert/load performance also depends on the number of indexes on the table. Create indexes only on the necessary columns. You'll also often get better performance by adding the indexes after the data is loaded into the table, so that you're not updating indexes while adding data.
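For example, a rough sketch using the same mysql_* functions as in the question (the /tmp/map.csv path and the LOCAL keyword are assumptions; LOCAL has to be allowed by your client and server, otherwise write the file somewhere the server itself can read):

// Write the 250,000 rows to a CSV file first...
$fh = fopen('/tmp/map.csv', 'w') or die('could not open /tmp/map.csv');
for ($y = 0; $y < 500; $y++) {
    for ($x = 0; $x < 500; $x++) {
        fwrite($fh, "$x,$y,water\n");
    }
}
fclose($fh);

// ...then bulk-load the whole file in one statement.
mysql_query("LOAD DATA LOCAL INFILE '/tmp/map.csv'
             INTO TABLE map
             FIELDS TERMINATED BY ','
             LINES TERMINATED BY '\\n'
             (x, y, terrain)") or die(mysql_error());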

Girish Rao
  • Interesting. The question is how long it would take to write all the data to a file before I can even begin to do that single load. – josmith Jun 10 '12 at 21:17
  • @josmith depends on disk IO activity already, the speed of the disk, whether you write the file to a RAMdisk, etc. – damianb Jun 10 '12 at 21:18
  • Agreed. I tend to have a problem with big single files, especially text files, actually. I'm not quite sure why, though. – josmith Jun 10 '12 at 21:20
  • @josmith probably disk IO. If you can, RAMdisk it for pure speed. – damianb Jun 10 '12 at 21:25
  • To increase performance for MyISAM tables, for both LOAD DATA INFILE and INSERT, enlarge the key cache by increasing the key_buffer_size system variable. – Adam Fili Jun 10 '12 at 21:39
INSERT INTO map (x, y, terrain) 
 VALUES ($x, $y, 'water'), ($x, $y, 'water') [etc]

So you will need only 1 query.

Of course, split the query into multiple queries once you have added around 400-500 value tuples, so that a single statement does not get too long.
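In PHP that could look roughly like this, still using the deprecated mysql_* calls from the question (the batch size of 500 rows is an assumption; tune it so each statement stays under max_allowed_packet):

$values = array();
for ($y = 0; $y < 500; $y++) {
    for ($x = 0; $x < 500; $x++) {
        $values[] = "($x, $y, 'water')";
        // Flush a multi-row INSERT every 500 tuples.
        if (count($values) >= 500) {
            mysql_query("INSERT INTO map (x, y, terrain) VALUES " . implode(', ', $values))
                or die(mysql_error());
            $values = array();
        }
    }
}
// Insert whatever is left from the last partial batch.
if (count($values) > 0) {
    mysql_query("INSERT INTO map (x, y, terrain) VALUES " . implode(', ', $values))
        or die(mysql_error());
}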

dynamic

You should make sure that you are not running those 250,000 inserts in auto-commit mode (that would mean 250,000 separate transactions).

By sending them all in one transaction, you will speed things up quite a bit, because the MySQL engine will not have to invoke its transactional mechanisms 250,000 times.

cf PHP + MySQL transactions examples

mysql_query("START TRANSACTION");
... your requests
mysql_query("COMMIT");

If you don't care about the state of the indexes during the load, you should also drop the indexes you have on the table before starting the transaction and recreate them afterwards. This will also speed things up a lot, since the engine will not need to check uniqueness, update indexes, etc. for each row.
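A rough sketch combining both ideas, still with the question's mysql_* calls (the index name idx_terrain is purely hypothetical; drop and recreate whatever secondary indexes your map table really has):

// Hypothetical secondary index; adjust the name and columns to your schema.
mysql_query("ALTER TABLE map DROP INDEX idx_terrain") or die(mysql_error());

mysql_query("START TRANSACTION");
for ($y = 0; $y < 500; $y++) {
    for ($x = 0; $x < 500; $x++) {
        mysql_query("INSERT INTO map(x, y, terrain) VALUES($x, $y, 'water')");
    }
}
mysql_query("COMMIT");

// Recreate the index once all the rows are in.
mysql_query("ALTER TABLE map ADD INDEX idx_terrain (terrain)") or die(mysql_error());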

Jerome WAGNER
  • So mysql_query("BEGIN"); would do it in one big transaction? – josmith Jun 10 '12 at 21:22
  • Use "START TRANSACTION" which is more standard. Please keep me updated with the difference in speed. It should make a difference if your are using a transactional engine like Mysql InnoDB. – Jerome WAGNER Jun 10 '12 at 21:30
  • How are transactions handled internally in MySQL? Will all 250,000 rows be held in memory until the transaction is committed? – Mike B Jun 10 '12 at 21:41
  • I am not an expert on the MySQL core, and the OP's specific configuration could change things, but during the execution of the transaction a lock will be put on the table. While the lock is held, MySQL manages two tables: one where the requests get executed and one that is a read-only snapshot of the original table. – Jerome WAGNER Jun 10 '12 at 21:47

Since there are only 500 values in the first statement, it's not so critical how you insert those; you could build that statement with a loop.

CREATE TABLE temp500 (value INT);  -- helper table holding the 500 coordinate values
INSERT INTO temp500 (value)
  VALUES (0), (1), (2), ... (499);
INSERT INTO map (x, y, terrain)
  SELECT x.value, y.value, 'water'
    FROM temp500 x, temp500 y;
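For instance, a sketch of filling temp500 with a PHP loop and then letting the cross join expand it into all 250,000 map rows (same mysql_* style as the question):

// Build the single 500-value INSERT for the helper table with a loop.
$values = array();
for ($i = 0; $i < 500; $i++) {
    $values[] = "($i)";
}
mysql_query("INSERT INTO temp500 (value) VALUES " . implode(', ', $values))
    or die(mysql_error());

// The cross join of temp500 with itself yields every (x, y) pair.
mysql_query("INSERT INTO map (x, y, terrain)
             SELECT x.value, y.value, 'water'
             FROM temp500 x, temp500 y") or die(mysql_error());
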
Neil