54

In our production database, we run the following pseudo-code SQL batch query every hour:

INSERT INTO TemporaryTable
    (SELECT * FROM HighlyContentiousTableInInnoDb
     WHERE allKindsOfComplexConditions are true)

Now this query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb even though it was only reading from it, which was making some other very simple queries take ~25 seconds (roughly the running time of the batch query itself).
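
One way to see the blocking is from a second connection while the batch statement runs (a diagnostic sketch; nothing here is specific to my schema):

SHOW FULL PROCESSLIST;         -- the simple queries show up stuck in a lock-wait state
SHOW ENGINE INNODB STATUS\G    -- the TRANSACTIONS section shows who is waiting on whom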

Then I discovered that InnoDB tables in such a case are actually locked by a SELECT! https://www.percona.com/blog/2006/07/12/insert-into-select-performance-with-innodb-tables/

But I don't really like the solution in the article of selecting into an OUTFILE; it seems like a hack (temporary files on the filesystem seem sucky). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it in this way during the copy? Then I could just copy the HighlyContentiousTable to another table and run the query there.

Artem
  • 6,420
  • 6
  • 26
  • 26
  • I didn't ask here, but I haven't found a way. I am using an outfile to prevent the 20 minutes of locking that my query takes :) – therealsix Apr 15 '10 at 03:34
  • 1
    Does anyone know if this issue is actually resolved in MySQL 5.1 as the article implies? – Artem Apr 16 '10 at 14:56
  • 1
    Nope, MySQL 5.1.44 — same problem – clops Mar 22 '12 at 14:09
  • Please provide `SHOW CREATE TABLE TemporaryTable`; there could be things in that that are unnecessarily lengthening the lock time. Also, let's see the conditions and the `SHOW CREATE TABLE HighlyContentiousTableInInnoDb`; there could be ways to significantly improve the `SELECT` speed. – Rick James Jan 18 '16 at 19:02
  • @Ryan, the fact that you had to type so much text in your bounty message is a strong indicator that you should create a new question instead (and perhaps link to this question for reference). A step-by-step procedure is available [right there in the manual](http://dev.mysql.com/doc/refman/5.7/en/replication-howto.html). – RandomSeed Jan 22 '16 at 22:33

8 Answers

31

The answer to this question is much easier now: use row-based replication and the READ COMMITTED isolation level.

The locking you were experiencing disappears.

Longer explanation: http://harrison-fisk.blogspot.com/2009/02/my-favorite-new-feature-of-mysql-51.html
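
In short (a sketch; assumes MySQL 5.1+ and a user with the SUPER privilege, and the my.cnf equivalents are shown in the next answer below):

SET GLOBAL binlog_format = 'ROW';
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;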

Morgan Tocker
  • 3,370
  • 25
  • 36
  • I have noticed a decrease of over 50% in execution time when used on an update query using sub-selects, nice bonus. – StrangeElement Apr 04 '13 at 14:47
  • 2
    Just added a +50 bounty on this question for a more detailed, step-by-step answer of the above. – Ryan Jan 18 '16 at 05:06
13

You can set the binlog format like this:

SET GLOBAL binlog_format = 'ROW';

Edit my.cnf if you want to make it permanent:

[mysqld]
binlog_format=ROW

Set the isolation level for the current session before you run your query:

SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
INSERT INTO t1 SELECT ....;

If this doesn't help, you should try setting the isolation level server-wide, not only for the current session:

SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

Edit my.cnf if you want to make it permanent:

[mysqld]
transaction-isolation = READ-UNCOMMITTED

You can change READ-UNCOMMITTED to READ-COMMITTED, which is a stricter (and generally safer) isolation level.

diamonddog
  • 131
  • 1
  • 5
  • 1
    To check your current isolation level, you can run `SELECT @@TX_ISOLATION;` – Leo Galleguillos May 31 '20 at 01:41
  • I've been fighting with this many times; copying tables terabytes large in MySQL can take weeks if you rely on its flawed internal performance. You need to do it with 20-50 threads. But I had deadlocks even when randomly accessing rows, so I had to go over the table 2-3 times. "SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" finally did it: no deadlocks anymore and no randomization required. – John Nov 18 '21 at 16:53
1

Disclaimer: I'm not very experienced with databases, and I'm not sure if this idea is workable. Please correct me if it's not.

How about setting up a secondary equivalent table, HighlyContentiousTableInInnoDb2, and creating AFTER INSERT etc. triggers on the first table which keep the new table updated with the same data? Now you should be able to lock HighlyContentiousTableInInnoDb2 and only slow down the triggers of the primary table, instead of all queries.
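
A rough sketch of the insert path (hedged: `id` and `payload` are hypothetical column names, and matching AFTER UPDATE / AFTER DELETE triggers would be needed as well):

CREATE TABLE HighlyContentiousTableInInnoDb2
    LIKE HighlyContentiousTableInInnoDb;

CREATE TRIGGER mirror_after_insert
AFTER INSERT ON HighlyContentiousTableInInnoDb
FOR EACH ROW
    -- id and payload stand in for the table's real column list
    INSERT INTO HighlyContentiousTableInInnoDb2 (id, payload)
    VALUES (NEW.id, NEW.payload);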

Potential problems:

  • 2 x data stored
  • Additional work for all inserts, updates and deletes
  • Might not be transactionally sound
Internet Friend
  • 1,082
  • 7
  • 10
1

The reason for the lock (read lock) is to ensure that your reading transaction does not read "dirty" data that a parallel transaction might currently be writing. Most DBMSs offer settings that let users set and revoke read and write locks manually. This might be interesting for you if reading dirty data is not a problem in your case.

I think there is no safe way to read from a table without any locks in a DBMS with multiple transactions.

But the following is some brainstorming: if space is no issue, you could think about running two instances of the same table: HighlyContentiousTableInInnoDb2 for your constant read/write transactions and a HighlyContentiousTableInInnoDb2_shadow for your batched access. Maybe you can fill the shadow table automatically via triggers/routines inside your DBMS, which is faster and smarter than an additional write transaction everywhere.

Another idea is the question: do all transactions need to access the whole table? If not, you could use views to lock only the necessary columns. If the continuous access and your batched access are disjoint regarding columns, it might be possible that they don't lock each other!

Philipp Andre
  • 997
  • 3
  • 11
  • 18
1

If you can allow some anomalies, you can change the ISOLATION LEVEL to the least strict one, READ UNCOMMITTED. But during this time others are allowed to read dirty (uncommitted) rows from your destination table. Or you can lock the destination table manually (I assume MySQL provides this functionality?).

Alternatively you can use READ COMMITTED, which should not lock the source table either. But it still locks the inserted rows in the destination table until commit.

I would choose the second one.
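
For example (a sketch; the table names are from the question and the WHERE clause is a placeholder):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
INSERT INTO TemporaryTable
    SELECT * FROM HighlyContentiousTableInInnoDb
    WHERE allKindsOfComplexConditions;                    -- placeholder for the real conditions
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- restore the default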

Azho KG
  • 1,161
  • 1
  • 13
  • 25
  • This is an interesting direction. http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html The destination table is a temporary (non-replicated) one anyways, so I think READ COMMITTED is the way to go. I'd like to try this out. – Artem Apr 26 '10 at 15:46
  • 1
    I have now tried it and it seems to work without problems! I now do: SET TRANSACTION ISOLATION LEVEL READ COMMITTED; INSERT INTO TemporaryTable SELECT ... FROM HighlyContentiousTableInInnoDb; And this does not lock HighlyContentiousTableInInnoDb. I don't know of any disadvantages to using this as opposed to the SELECT INTO OUTFILE method. I don't replicate this TemporaryTable, so I think I should not have issues. – Artem Jul 07 '10 at 15:33
0

Probably you could use the CREATE VIEW command (see the CREATE VIEW syntax in the manual). For example,

CREATE VIEW temp AS SELECT * FROM HighlyContentiousTableInInnoDb WHERE allKindsOfComplexConditions are true

After that you could use your insert statement with this view, something like this:

INSERT INTO TemporaryTable (SELECT * FROM temp)

This is only my proposal.

smg
  • 173
  • 1
  • 8
  • 1
    Does this actually work? I would think the View would do exactly the same work... – Artem Apr 16 '10 at 14:55
  • If you edit/read fields using a view, your DBMS has to lock the fields just as if you accessed them directly. The only difference is that it does not lock the whole row (with all columns) but only the columns used by the view. If your transactions use disjoint columns, then this could really help you. (Who the hell gave -1 to this answer?) – Philipp Andre May 18 '10 at 10:15
  • A `VIEW` is just syntactic sugar around a `SELECT`; no performance gain. – Rick James Jan 18 '16 at 19:04
0

I'm not familiar with MySQL, but hopefully there is an equivalent to SQL Server's Snapshot and Read Committed Snapshot transaction isolation levels. Using either of these should solve your problem.

MEMark
  • 1,493
  • 2
  • 22
  • 32
0

I was facing the same issue using CREATE TEMPORARY TABLE ... SELECT ... and getting SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction.

Based on your initial query, my problem was solved by locking HighlyContentiousTableInInnoDb before starting the query. Note that every table the session touches must appear in the LOCK TABLES list, so the destination table needs a WRITE lock too:

LOCK TABLES HighlyContentiousTableInInnoDb READ,
            TemporaryTable WRITE;
INSERT INTO TemporaryTable
    (SELECT * FROM HighlyContentiousTableInInnoDb
     WHERE allKindsOfComplexConditions are true);
UNLOCK TABLES;