
I have a table thus:

CREATE TABLE `DeviceGrants` (
    `DeviceId` BINARY(16) NOT NULL
        COMMENT "128-bit UID",
    `Grant` ENUM("AcceleratedRead") NOT NULL
        COMMENT "The kind of grant this is",
    `ExpiryTime` DATETIME NOT NULL
        COMMENT "The date/time at which this grant will expire",

    PRIMARY KEY(`DeviceId`, `Grant`),
    KEY (`ExpiryTime`),
    FOREIGN KEY (`DeviceId`) REFERENCES `Devices` (`DeviceId`)
      ON DELETE CASCADE
) ENGINE=InnoDB;

(`Grant` may only take one value right now, but the list is very likely to grow in future versions, so the column is in place for forward compatibility.)
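For what it's worth, appending a value to the end of an ENUM list is an in-place metadata change in MySQL 5.6+, so the forward-compatible column is cheap to extend later. A sketch (the `AcceleratedWrite` value is hypothetical):

```sql
-- Hypothetical future migration: append new grant kinds at the END of the
-- ENUM list so existing stored values keep their internal indexes and the
-- ALTER can run in place without a table rebuild.
ALTER TABLE `DeviceGrants`
    MODIFY `Grant` ENUM('AcceleratedRead', 'AcceleratedWrite') NOT NULL
        COMMENT "The kind of grant this is";
```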

I've witnessed a deadlock between this (I think):

INSERT INTO `DeviceGrants` (`DeviceId`, `Grant`, `ExpiryTime`)
VALUES(
    UNHEX('<x>'),
    'AcceleratedRead',
    NOW() + INTERVAL 60 SECOND
)
ON DUPLICATE KEY UPDATE
    `ExpiryTime` = NOW() + INTERVAL 60 SECOND

and this:

DELETE FROM `DeviceGrants` WHERE `ExpiryTime` <= NOW()

Now, following the advice in this excellent answer, I wanted to avoid the deadlock by rewriting the second statement. However, since the table has no single-column, auto-increment primary key (which is semantically needless), I'm not sure how to go about it.

What is my best bet here?

Lightness Races in Orbit
  • All you need to avoid deadlock is to maintain the __same__ operation order for all transactions. That means you may just `ORDER BY` two columns and that's it: both your statements will have the same order => impossible to get the deadlock (at least not because of opposite locking order of rows) => the DBMS is happy (as well as you) – Alma Do Jul 23 '14 at 12:16
  • @AlmaDo: Not sure I follow you. Can you expand on that in an answer? I thought the problem was not the order in which rows were processed, but the order in which individual indexes were locked whilst processing a specific row. How does `ORDER BY` affect that? The linked answer doesn't suggest it at all. – Lightness Races in Orbit Jul 23 '14 at 12:56
  • Not sure what's unclear. To avoid an order-dependent deadlock, all your transactions that work with the same rows should do so in the same order. That will prevent a cyclic lock wait (causing a timeout). And to maintain that order, all you need is to produce the row locks in the same order. So you always do "lock row X", "lock row Y", "lock row Z". And to do that, all you need is some __solid__ condition for ordering rows. In the answer you've linked, it's done by ordering by PK (which identifies a row exactly, so ordering by it will produce a constant order for all transactions) – Alma Do Jul 23 '14 at 13:28
  • @AlmaDo: What I don't understand yet is how producing row locks in the same order avoids this. – Lightness Races in Orbit Jul 23 '14 at 13:48
  • About how many rows are being deleted when you attempt to delete them? – Mic1780 Aug 29 '14 at 00:11
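A sketch of the technique the linked answer and the comments describe, adapted to this table's composite primary key (this is an illustration, not code from the thread; the application-side loop is assumed):

```sql
-- Sketch: 1) read the expired keys with a plain (non-locking) consistent
-- read; 2) delete by primary key, in primary-key order, so the DELETE
-- acquires PRIMARY-index record locks first -- the same order in which the
-- upsert acquires them.
START TRANSACTION;

SELECT `DeviceId`, `Grant`
FROM `DeviceGrants`
WHERE `ExpiryTime` <= NOW()
ORDER BY `DeviceId`, `Grant`;

-- For each (DeviceId, Grant) pair returned above, application-side:
-- DELETE FROM `DeviceGrants` WHERE `DeviceId` = ? AND `Grant` = ?;

COMMIT;
```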

2 Answers


I am not quite sure how the other solution guarantees that the keys are locked in the right order (the subquery could very well not use the primary key at all), but I suppose we can easily extend this solution to:

DELETE FROM DeviceGrants
WHERE (DeviceId, `Grant`) IN (
    -- The other solution did not mention: "You can't specify target table
    -- 'DeviceGrants' for update in FROM clause".
    -- I used the workaround suggested in https://stackoverflow.com/a/45498/1446005,
    -- hence the sub-subquery.
    SELECT * FROM (
      SELECT DeviceId, `Grant`
      FROM DeviceGrants
      WHERE ExpiryTime <= NOW()
      ORDER BY DeviceId, `Grant`
    ) AS subq);

Since I do not fully understand the original solution, I can't prove the above is correct, but it seems to be. Running the test below for 10 minutes has not raised any deadlock (remove the ORDER BY clause and deadlocks occur):

mysql> CREATE TABLE t (id INT, gr INT, expire DATETIME, PRIMARY KEY(id, gr), KEY(expire));

bash#1> while true; \
  do mysql test -e "insert into t values (2, 2, NOW()) on duplicate key update expire = NOW() + SLEEP(3);"; \
  done;

bash#2> while true; \
  do mysql test -e "delete from t where (id, gr) in (select * from (select id,gr from t where expire <=  now() order by id, gr ) as sub)" ; \
  done;
RandomSeed

From the MySQL documentation:

InnoDB uses automatic row-level locking. You can get deadlocks even in the case of transactions that just insert or delete a single row. That is because these operations are not really “atomic”; they automatically set locks on the (possibly several) index records of the row inserted or deleted.
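To see exactly which index records each transaction was holding and waiting for, the deadlock report in the InnoDB status output is the place to look:

```sql
-- Inspect the most recent deadlock: the "LATEST DETECTED DEADLOCK" section
-- shows, for each transaction involved, which index (PRIMARY vs the
-- ExpiryTime secondary index) the held and waited-for record locks belong to.
SHOW ENGINE INNODB STATUS\G
```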

If the problem is severe enough that data is getting corrupted, I would use table locks to ensure that no two transactions try to execute simultaneously. Based on the same page, I would do something like this:

SET autocommit=0;
LOCK TABLES DeviceGrants WRITE;
DELETE FROM DeviceGrants WHERE ExpiryTime <= NOW();
COMMIT;
UNLOCK TABLES;
Mic1780
  • Data is not "getting corrupt", because InnoDB is ACID-compliant and won't allow it (that's what the locks ensure). I don't want table locks because that's a blunt instrument and will reduce my performance significantly in this instance. – Lightness Races in Orbit Aug 28 '14 at 23:55
  • 1
    I have given this answer an upvote because there might be situations where this approach is useful. In a test program I am getting index gap deadlocks on a trivial index where I INSERT/DELETE independent records in 3 separate threads. LOCK TABLES gets rid of the deadlocks, surprisingly WITHOUT measurable loss of performance. Obviously you could see severe performance degradation in a different setup, but for some scenarios the table lock could work OK. – StaticNoiseLog Jun 20 '17 at 07:58