
This is a follow-up to a previous question of mine...

I have a table with two fields: pos (point of sale) and voucher_number. I need a unique sequence per pos.

CREATE TABLE `test_table` (
    `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
    `date_event` DATETIME NOT NULL,
    `pos` TINYINT NOT NULL,
    `voucher_number` INT NOT NULL,
    `origin` CHAR(1) NULL,
    PRIMARY KEY (`id`))
  ENGINE = InnoDB;

I performed the tests described in my answer and everything works fine. Basically, two or more scripts try to do the following at the same time:

// Notice the SELECT ... FOR UPDATE
$max_voucher_number = select max(voucher_number) as max_voucher_number from vouchers where pos = $pos for update;

$max_voucher_number = $max_voucher_number + 1;

insert into vouchers (pos, voucher_number) values ($pos, $max_voucher_number);

but the first script sleeps for 10 seconds before the INSERT, in order to test the lock on the sequence.
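The pattern above (lock the current maximum, increment, insert) can be sketched end to end. This is a minimal stand-in using Python's stdlib sqlite3 rather than MySQL, so there is no `FOR UPDATE`; SQLite serializes writers with its own database-level lock instead. The `vouchers` schema is reduced to the two columns the question cares about.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vouchers (pos INTEGER, voucher_number INTEGER)")

def next_voucher(conn, pos):
    # In MySQL this SELECT would end with FOR UPDATE so that concurrent
    # transactions block until the row range is released. SQLite instead
    # serializes writing transactions with a database-wide lock.
    with conn:  # one transaction: commits on success, rolls back on error
        row = conn.execute(
            "SELECT MAX(voucher_number) FROM vouchers WHERE pos = ?", (pos,)
        ).fetchone()
        n = (row[0] or 0) + 1  # MAX() is NULL for a fresh pos
        conn.execute(
            "INSERT INTO vouchers (pos, voucher_number) VALUES (?, ?)", (pos, n)
        )
    return n

print(next_voucher(conn, 1))  # 1
print(next_voucher(conn, 1))  # 2
print(next_voucher(conn, 2))  # 1  -- each pos has its own sequence
```

The per-pos sequences advance independently, which is the behavior the question wants from the `(pos, voucher_number)` pair.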

The problem arises when I add a UNIQUE INDEX:

ALTER TABLE `test_table` 
ADD UNIQUE INDEX `per_pos_sequence` (`pos` ASC, `voucher_number` ASC);

Then I get this error:

SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

  1. Why do I get that error if an index is present?
  2. Is it possible to maintain the index and get no errors?
Matías Cánepa

1 Answer


I'd guess you are running into the behavior described in this bug report: https://bugs.mysql.com/bug.php?id=98324

It matches an increase in deadlocks we have observed at my company since upgrading to MySQL 5.7.26 or later. Many different applications and tables have seen more frequent deadlocks, and the only thing they have in common is that they use tables with a PRIMARY KEY and also a secondary UNIQUE KEY.

The response to the bug report says that deadlocks are not a bug, but a natural consequence of concurrent clients requesting locks. If the locks cannot be granted atomically, then there is a chance of deadlocks. Application clients should not treat this as a bug or an error. Just follow the instructions in the deadlock message: retry the failed transaction.
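A retry wrapper for that advice can be sketched as follows. This is illustrative only: `DeadlockError` is a stand-in for whatever your driver raises for error 1213 / SQLSTATE 40001 (e.g. a PDOException in PHP), and the backoff values are arbitrary.

```python
import time

class DeadlockError(Exception):
    """Stand-in for the driver's 1213 / SQLSTATE 40001 deadlock error."""

def run_with_retry(txn, max_attempts=3):
    # Re-run the whole transaction when the server reports a deadlock,
    # as the error message itself suggests. The losing transaction was
    # rolled back by the server, so it is safe to start it over.
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            time.sleep(0.01 * attempt)  # small backoff before retrying

# Hypothetical transaction that deadlocks once, then succeeds:
attempts = {"n": 0}
def txn():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise DeadlockError("1213 Deadlock found when trying to get lock")
    return "inserted"

print(run_with_retry(txn))  # inserted, after one retry
```

The key point is to retry the entire transaction (the SELECT ... FOR UPDATE and the INSERT together), not just the statement that failed.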

The only ways I know of to avoid deadlocks are:

  • Avoid defining multiple unique keys on a given table.
  • Disallow concurrent clients requesting locks against the same table.
  • Use pessimistic table-locks to ensure clients access the table serially.
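For the last option, a pessimistic table-lock version of the question's sequence logic might look like this (a sketch only; note that LOCK TABLES commits any open transaction and serializes all access to the table, so throughput suffers):

```sql
LOCK TABLES vouchers WRITE;

-- No FOR UPDATE needed: no other client can touch the table now.
SELECT MAX(voucher_number) FROM vouchers WHERE pos = 1;
INSERT INTO vouchers (pos, voucher_number) VALUES (1, /* max + 1 */ 2);

UNLOCK TABLES;
```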
Bill Karwin