
We have this table in MySQL 5.5:

CREATE TABLE IF NOT EXISTS `invoices` (
  `id` varchar(36) NOT NULL,
  `client_id` smallint(4) NOT NULL,
  `invoice_number` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `client_id_2` (`client_id`,`invoice_number`),
  KEY `client_id` (`client_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

We insert data into that table like this:

INSERT INTO `invoices` (`id`, `client_id`, `invoice_number`)
VALUES (
    UUID(),
    10,
    (SELECT MAX(`invoice_number`) + 1 AS next_invoice_number FROM `invoices` WHERE `client_id` = 10)
);

"10" is client_id value.

It works, but it has bad concurrency. How can I get a working solution with good concurrency?

Composite-primary-key auto-increment is not a solution. We need an auto-increment per `client_id` value; composite-primary-key auto-increment gives an auto-increment over the whole table, not per `client_id` column value.

hekep
  • What do you mean by bad concurrency? Do you mean it is slow when running a lot of queries? Do you really need to have subsequent invoice numbers for each user? – dre-hh Dec 01 '16 at 12:28
  • Only one process can create invoices. It locks the table. When another application process tries to insert into the same table, it basically gets a timeout. Sometimes there can be 5 processes up, waiting for the table to get unlocked. – hekep Dec 01 '16 at 12:31
  • Oh, cool, I see there is a pure SQL solution. It's addressed in another question here. Will flag this as a duplicate of http://stackoverflow.com/questions/677542/auto-increment-by-group – dre-hh Dec 01 '16 at 15:40
  • @dre-hh composite-primary-key auto increment is not a solution. We need autoincrement per `client_id` value. Composite-primary-key auto increment gives autoincrement all over table not per `client_id` column value. – hekep Dec 02 '16 at 09:12
  • The primary key is not auto-increment in our case, so it is not a duplicate question. – hekep Dec 02 '16 at 10:46
  • @dre-hh Our PHP framework does not support composite keys at the moment. – hekep Dec 02 '16 at 11:34
  • Do you have _any_ outside need for the `UUID`? If not, get rid if it. – Rick James Dec 02 '16 at 19:24
  • @RickJames UUID gives us better debugging possibilities than using just some numbers. It is easy to `grep` all mysqlbinlogs (and other logs) for debugging purposes. – hekep Dec 14 '16 at 11:29

4 Answers


Not sure what you mean by bad concurrency here. Although every DML statement runs in an implicit transaction, you can also wrap several statements in an explicit transaction block using the `START TRANSACTION ... COMMIT` construct.
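In MySQL the explicit form is `START TRANSACTION ... COMMIT` (there is no `begin transaction ... end`); a sketch wrapping the insert from the question:

```sql
START TRANSACTION;

INSERT INTO `invoices` (`id`, `client_id`, `invoice_number`)
    SELECT UUID(), 10, COALESCE(MAX(`invoice_number`), 0) + 1
    FROM `invoices`
    WHERE `client_id` = 10;

COMMIT;
```

The `COALESCE` is an addition here: it handles a client's first invoice, where `MAX` would return `NULL`.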

Rahul
  • We have that! Bad concurrency means that only one process (session) can access the `invoices` table at a time. – hekep Dec 13 '16 at 13:13

It seems that MySQL really is locking the whole table on `INSERT INTO ... SELECT`.

Below is a strategy that is working for us (for a similar problem), in pseudocode:

function insert_invoice(){
  begin_transaction
  next_invoice_number = select_max_invoice_number() + 1
  insert_row(next_invoice_number)
  end_transaction
}

function perform_insert(){
  try
    insert_invoice()
  catch RecordNotUniqueError
    perform_insert()   // retry on duplicate-key collision
  end
}

This requires performing the queries from some high-level programming language. You basically start a transaction in which you first read the next invoice number for the client, then perform the insert with next_invoice_number and hope for the best. If a concurrent process tries to insert the same invoice number for the same client, the unique key makes the transaction fail for one of the processes, which can then simply retry. In the end there is no concurrent operation for the same invoice number and every transaction succeeds.
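In MySQL terms, one attempt of that loop could look like the sketch below; the `catch`/retry on error 1062 (duplicate key) still has to live in the application code:

```sql
START TRANSACTION;

-- read the next number for this client (COALESCE covers the first invoice)
SELECT COALESCE(MAX(`invoice_number`), 0) + 1 INTO @next
FROM `invoices`
WHERE `client_id` = 10;

-- may fail with error 1062 if a concurrent session took @next first;
-- in that case ROLLBACK and retry the whole block
INSERT INTO `invoices` (`id`, `client_id`, `invoice_number`)
VALUES (UUID(), 10, @next);

COMMIT;
```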

dre-hh
  • Ok, the idea here is to catch Error #1062 - Duplicate entry 'XXX-XXX' for key 'unique_columns' and on application level just retry to insert with another "auto incremented" value. – hekep Dec 02 '16 at 09:24
  • Yes, that's correct. You increment the invoice number on the application level, and in most cases this will just work. On concurrency errors you just retry. – dre-hh Dec 02 '16 at 09:51
  • There is one thing to worry about. We are also using transactions (`BEGIN` and `COMMIT` queries). Can the `Error #1062 - Duplicate entry 'XXX-XXX' for key 'unique_columns'` break the transaction? For example, might all the queries before the invoice INSERT query get forgotten? Does this solution work with transactions? – hekep Dec 02 '16 at 10:43
  • On most DB library implementations, an error will abort the whole transaction. That is the atomicity property of transactions, and it allows you to safely repeat all the operations of the failed transaction. – dre-hh Dec 02 '16 at 15:23
  • @hekep any update on the issue? plz select/upvote an answer working for you – dre-hh Dec 08 '16 at 16:45
  • We are trying this solution, so far unsuccessfully, but it is supposed to work. Somehow `nextAutoIncerment()` always returns the same value, which is not accepted later; the save fails even after 100 retries. – hekep Dec 13 '16 at 12:38
  • I would just fire off the nested query you posted, `SELECT MAX(invoice_number) + 1 AS next_invoice_number FROM invoices WHERE client_id = 10`, as a separate query. – dre-hh Dec 13 '16 at 20:00

I see a number of issues here.

First, for each invoice registration you are scanning the same table to find the next invoice number to use for this particular customer.

A far faster solution is to have a table with two columns: Customer_ID (key) and Last invoice ID.

Whenever you need to register a new invoice, you simply get-and-update the new invoice number from this new table and use it in the insert.

Second, what makes you think that the operation you are showing in your example should not lock the table?

Since this is happening only sometimes, the best solution is to minimize the probability of a collision, and the approach presented here will certainly do that.
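A sketch of that idea (the counter table name and columns here are made up for illustration): keep a `last_invoice_number` per client and get-and-update it atomically with the `LAST_INSERT_ID(expr)` trick, so only inserts for the same client serialize on the counter row:

```sql
CREATE TABLE IF NOT EXISTS `invoice_counters` (
  `client_id` smallint(4) NOT NULL,
  `last_invoice_number` int(11) NOT NULL,
  PRIMARY KEY (`client_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

START TRANSACTION;

-- the first invoice for a client creates the row; later ones bump the counter.
-- LAST_INSERT_ID(expr) stores expr so the next statement can read it back.
INSERT INTO `invoice_counters` (`client_id`, `last_invoice_number`)
VALUES (10, LAST_INSERT_ID(1))
ON DUPLICATE KEY UPDATE `last_invoice_number` = LAST_INSERT_ID(`last_invoice_number` + 1);

INSERT INTO `invoices` (`id`, `client_id`, `invoice_number`)
VALUES (UUID(), 10, LAST_INSERT_ID());

COMMIT;
```

The counter row stays locked until `COMMIT`, so concurrent inserts for different clients do not block each other.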

FDavidov
  • We are trying this approach. A trigger should update the "new table" `invoices_history` with the last used invoice_number... We get the error message: `Error: SQLSTATE[HY000]: General error: 1442 Can't update table 'invoices_history' in stored function/trigger because it is already used by statement which invoked this stored function/trigger.` We really want a foolproof solution. – hekep Dec 02 '16 at 08:49
  • I would need to see your code. I don't quite understand why you are getting that error (I mean, MySQL is NOT wrong of course, but your code might). Please edit your question and add the code for the insert as well as that of the trigger. – FDavidov Dec 02 '16 at 09:11
  • Getting a new invoice number should be an explicit step in your application, _not_ a side effect happening in a trigger. – Rick James Dec 02 '16 at 19:23
  • @RickJames, that is a BUSINESS decision and not a technical one. Besides, I don't quite see why a mechanism implemented within a trigger has less merit than one explicitly invoked (I mean, proper **documentation** of the system and the code would cover those aspects). In any case, the same code I suggested as part of the trigger could be made a common stored procedure. It is for the OP to select what fits him/her best. – FDavidov Dec 03 '16 at 10:05

Reformulate the query this way. This is probably simpler and faster.

INSERT INTO `invoices` (`id`, `client_id`, `invoice_number`)
    SELECT UUID(),
           10,
           MAX(`invoice_number`) + 1
        FROM `invoices`
        WHERE `client_id` = 10;

Is this in a transaction by itself? With autocommit=1?

Or is this a part of a much larger set of commands? And possibly they are part of what is leading to the error?

How will you subsequently get the UUID and/or invoice_number for the client? Doesn't the application need to display them and/or store them in some other table?

Rick James
  • We have `BEGIN` and `COMMIT`, as an invoice has many `invoice_rows`. Any failure to insert an `invoice_row` should abort the entire transaction. The other SQL queries in this transaction do not cause issues. – hekep Dec 05 '16 at 07:28