
I have a lot of data to INSERT LOW_PRIORITY into a table. As the index is rebuilt every time a row is inserted, this takes a long time. I know I could use transactions, but this is a case where I don't want the whole set to fail if just one row fails.

Is there any way to get MySQL to stop rebuilding indices on a specific table until I tell it that it can resume?

Ideally, I would like to insert 1,000 rows or so, let the index do its thing, and then insert the next 1,000 rows.

I cannot use INSERT DELAYED as my table type is InnoDB. Otherwise, INSERT DELAYED would be perfect for me.

Not that it matters, but I am using PHP/PDO to access MySQL. Any advice you could give would be appreciated. Thanks!

Brad
  • What you want would really mess with the index. Have you looked at bulk insertion to see if that gets around your index rebuilding issue? – OMG Ponies Apr 10 '11 at 18:33
  • I don't know of a fast way to do bulk inserts with PDO. I might just change my table type to MyISAM and use INSERT DELAYED. The main reason I strayed away from MyISAM was the excess table locking upon insert. Is there another bulk insert method you would recommend? – Brad Apr 10 '11 at 20:28
  • @Brad Hi Brad, did you find a good solution to this issue? Thanks – manash Jun 18 '14 at 09:19
  • @MickaelMarrache Unfortunately, no. – Brad Jun 18 '14 at 13:22

2 Answers

ALTER TABLE tableName DISABLE KEYS;
-- perform the inserts
ALTER TABLE tableName ENABLE KEYS;

This disables updating of all non-unique indexes. The disadvantage is that those indexes won't be used for SELECT queries either.

You can, however, use multi-row inserts (INSERT INTO table (...) VALUES (...),(...),(...)), which also update the indexes in batches.
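
Since the question mentions PHP/PDO, here is a minimal sketch of building such a multi-row insert with PDO. The table items(name, qty), the connection details, and the sample data are assumptions for illustration only, not part of the question.

<?php
// Minimal sketch: one multi-row INSERT built with PDO.
// The table `items(name, qty)` and the DSN/credentials are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$rows = [
    ['widget', 3],
    ['gadget', 5],
    ['gizmo',  1],
];

// Build "INSERT INTO items (name, qty) VALUES (?, ?), (?, ?), (?, ?)"
// so all rows go in with a single statement.
$placeholders = implode(', ', array_fill(0, count($rows), '(?, ?)'));
$stmt = $pdo->prepare("INSERT INTO items (name, qty) VALUES $placeholders");

// Flatten the rows into one flat parameter list for execute().
$params = [];
foreach ($rows as $row) {
    $params[] = $row[0];
    $params[] = $row[1];
}
$stmt->execute($params);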

Maxim Krizhanovsky
    Sadly, this works only for nonunique keys on MyISAM tables, see http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html#option_mysqldump_disable-keys – rfay Oct 22 '13 at 21:16

AFAIK, with InnoDB tables, if you don't want indexes to be rebuilt after each INSERT, you must use transactions.

For example, for inserting a batch of 1000 rows, use the following SQL:

SET autocommit=0;
-- insert the rows one after the other, or use multi-row inserts
COMMIT;

With autocommit disabled, a transaction is started at the first INSERT. The rows are then inserted one after the other, and at the end the transaction is committed and the indexes are rebuilt.

If an error occurs while executing one of the INSERTs, the transaction is not rolled back; instead, an error is reported to the client, which can choose to roll back or continue. Therefore, if you don't want the entire batch rolled back when one INSERT fails, you can log the failed INSERTs, keep inserting the remaining rows, and commit the transaction at the end.
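
Since the question uses PHP/PDO, a minimal sketch of that approach could look like the following, using PDO's beginTransaction()/commit() in place of SET autocommit=0. The connection details, the table items(name, qty), and the $rows data are assumptions for illustration only.

<?php
// Minimal sketch: insert in transactions of 1,000 rows, logging individual
// failures instead of aborting the whole batch. The connection, the table
// `items(name, qty)` and the $rows data are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$rows = [/* ... many [name, qty] pairs ... */];

$stmt   = $pdo->prepare('INSERT INTO items (name, qty) VALUES (?, ?)');
$failed = [];

foreach (array_chunk($rows, 1000) as $batch) {
    $pdo->beginTransaction();               // one transaction per 1,000 rows
    foreach ($batch as $row) {
        try {
            $stmt->execute($row);
        } catch (PDOException $e) {
            // A single failed INSERT normally does not roll back the whole
            // transaction; record it and keep going.
            $failed[] = [$row, $e->getMessage()];
        }
    }
    $pdo->commit();                         // this batch becomes visible here
}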

However, keep in mind that wrapping the INSERTs in a transaction means you will not be able to see the inserted rows until the transaction is committed. It is possible to set the transaction isolation level for the SELECT to READ UNCOMMITTED, but in my tests the rows were not visible when the SELECT happened very close to the INSERT. See my post.
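
For illustration, here is a minimal sketch of attempting such a dirty read from a second PDO connection; the connection details and the table items are hypothetical.

<?php
// Minimal sketch: a second connection reads with READ UNCOMMITTED while the
// inserting transaction is still open. Whether the uncommitted rows show up
// is not guaranteed, as noted above.
$reader = new PDO('mysql:host=localhost;dbname=test', 'user', 'password', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$reader->exec('SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED');
$count = $reader->query('SELECT COUNT(*) FROM items')->fetchColumn();
echo $count, "\n";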

manash