
I have a table called 'messages' (InnoDB) where users can insert their own posts. I want to put a limit on it: in my PHP script, once the table reaches 10 records, no more can be added. The logic of the program is more or less as follows.

Step 1. I run a query to count the rows in the table.

Step 2. Having retrieved that value, I decide whether to insert a new post.
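In SQL terms, the two steps look roughly like this (a sketch; the table and column names are illustrative):

-- Step 1: count the rows currently in the table.
SELECT COUNT(*) FROM messages;

-- Step 2: run only if the count retrieved above is below 10
-- (the check itself happens in the PHP script, between the two statements).
INSERT INTO messages (body) VALUES ('new post');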

My difficulty is in properly handling the possibility of two users doing the same thing simultaneously. If user A is at step 1 while user B has just finished inserting the tenth post, user A will insert the eleventh.

How can I avoid this?

Federkun

3 Answers


You can create a CHAR(1) NOT NULL field and cover it with a UNIQUE INDEX. This will prevent inserting more than 10 rows (provided the application only ever stores the digits 0-9 in that column; see the comments below).
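A minimal sketch of that idea, assuming a table named `messages` whose application only ever writes the digits 0-9 into the covered column (all names here are illustrative):

-- The UNIQUE index lets each one-character value appear only once,
-- so as long as only the digits '0'..'9' are stored, at most ten
-- rows can exist.
CREATE TABLE messages (
  id INT AUTO_INCREMENT PRIMARY KEY,
  body TEXT,
  slot CHAR(1) NOT NULL,
  UNIQUE KEY uq_slot (slot)
) ENGINE=InnoDB;

-- Each INSERT claims a free digit; once '0'..'9' are all taken,
-- any further INSERT fails with a duplicate-key error.
INSERT INTO messages (body, slot) VALUES ('first post', '0');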

Another solution that could work would be to create a BEFORE INSERT trigger that checks the number of rows and raises an error if there are already 10 (look here for a sample), but in this case you can run into race conditions.

zerkms
  • @Martin: it is a **char** field with length `1`. So it will contain `0..9` values (supposing you'll never try to write non-digits there) ;-) – zerkms Apr 23 '11 at 12:36
  • It's the bracketed part of your comment I'm referring to. Could a check constraint enforce that (in MySQL)? – Martin Smith Apr 23 '11 at 12:39
  • @Martin: uhm, no. Handling of possible values is completely a programmer's work here. And we're supposing that a developer will use `0..9` values. As you know there are no (?) "real" check constraints in mysql (like oracle has, for example), so I cannot think of any "correct" solution. – zerkms Apr 23 '11 at 12:43
  • The `BEFORE INSERT` trigger (used with locking) is the only real solution here - the index method only works for a value of 10, and while it answers this specific question, it gives you a solution that can't be altered - if they change the requirements to say, 15 records, you'd then have to go with the trigger method *anyway* - it's much more maintainable. – Simon Apr 23 '11 at 13:44

In order to allow you to change the threshold value for the table, you can use a trigger. Because MySQL triggers don't have a "prevent INSERT" option, you need a column in your table that is declared NOT NULL. The trigger can then set the inserted value for that column to NULL, which will make the INSERT fail when your condition check fails.

A trigger like this:

DELIMITER //

CREATE TRIGGER block_insert
BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
  DECLARE row_count INT;

  -- Count the rows already in the table.
  SELECT COUNT(*) INTO row_count FROM table_name;

  -- Force a NOT NULL column to NULL once the limit is reached,
  -- which makes MySQL reject the INSERT.
  IF row_count >= 10 THEN
    SET NEW.non_nullable_value = NULL;
  END IF;
END//

DELIMITER ;

would cause the insertion of an 11th row to fail, like this:

ERROR 1048 (23000): Column 'non_nullable_value' cannot be null

You may wish to give the non-nullable column a name that reflects its purpose. You could improve this further by having the trigger pull the limit value from a configuration table.

Update

To avoid having to use the non-nullable column, you could alternatively create an error procedure and CALL it from your trigger, similar to the example in the "Emulating Check Constraints" section of this page. They're referencing Oracle databases, where a check constraint achieves what you want, but MySQL doesn't support them.

The "error procedure" in the example performs an INSERT of a duplicate row into an error table, causing a unique key error and stops the parent transaction also.

Update 2

As pointed out in the comments below, multiple simultaneous transactions may get around the checks; you'll have to use LOCK TABLES <name> WRITE to ensure that they can't.

Simon
  • The trigger solution is handy but is affected by race conditions. – zerkms Apr 23 '11 at 13:05
  • The race conditions can be solved by proper use of locking. It's a shame MySQL lacks check constraints, but it means that using triggers is the only real workaround. – Simon Apr 23 '11 at 13:42

2/ You can also lock the MySQL table.

  • execute: LOCK TABLES my_table WRITE
  • Then apply your business rules.
  • execute: UNLOCK TABLES

This also ensures that each action is executed sequentially (but you have to deal with the performance overhead).
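A minimal sketch of that sequence, assuming the `messages` table from the question:

-- Take an exclusive lock; no other session can read or write
-- the table until it is released.
LOCK TABLES messages WRITE;

-- Business rule: count first, insert only if the count is still
-- below 10 (the conditional check itself happens in PHP).
SELECT COUNT(*) FROM messages;
INSERT INTO messages (body) VALUES ('new post');

-- Release the lock so other sessions can proceed.
UNLOCK TABLES;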

Hope this could be useful.

Updated since the comments below: transactions don't work in this case

1/ Since you are using InnoDB, you could also use a database transaction

  • Open a transaction
  • Then apply your business rules.
  • Commit or roll back the transaction

This would ensure that each action is executed one after another.

Riba
  • For resolving concurrency. @trickwallett said: _My difficulty is in properly handling the possibility of two users doing the same thing simultaneously. If user A is at step 1 while user B has just finished inserting the tenth post, user A will insert the eleventh. How can I avoid this?_ – Riba Apr 23 '11 at 13:51
  • I think the point that @zerkms is making is that the transaction element is not itself actually helping, it's the `LOCK TABLES` that prevents the concurrent (erroneous) `INSERT` succeeding. Even if both operations happen in their own transaction, both will be able to be committed without errors, leaving you with over 10 rows. **You are however bang on with the `LOCK TABLES` suggestion** – Simon Apr 23 '11 at 15:44
  • Thx for the explanation; I thought that reads were also locked during a transaction with InnoDB. (Actually I'm using the LOCK TABLES trick to emulate a sequence for the PK) – Riba Apr 23 '11 at 18:54