20

Note: I'm new to databases and PHP

I have an order column that is set to auto increment and unique.

In my PHP script I am using AJAX to get new data, but the problem is that the order column skips numbers and ends up substantially higher, forcing me to manually update the numbers after the data is inserted. In this case I would end up changing 782 to 38.

$SQL = "INSERT IGNORE INTO `read`(`title`,`url`) VALUES\n ".implode( "\n,",array_reverse( $sql_values ) );

How can I get it to increment by 1?

Glorfindel
Chris Burton

4 Answers

29

The default auto_increment behavior in MySQL 5.1 and later will "lose" auto-increment values if the INSERT fails. That is, it increments by 1 each time, but doesn't undo an increment if the INSERT fails. It's uncommon to lose ~750 values but not impossible (I consulted for a site that was skipping 1500 for every INSERT that succeeded).

You can change innodb_autoinc_lock_mode=0 to use MySQL 5.0 behavior and avoid losing values in some cases. See http://dev.mysql.com/doc/refman/5.1/en/innodb-auto-increment-handling.html for more details.
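
For example, a minimal sketch of that change (innodb_autoinc_lock_mode is not a dynamic variable, so it goes in the server configuration, e.g. my.cnf, and needs a mysqld restart):

    [mysqld]
    # not settable at runtime; restart the server for it to take effect
    innodb_autoinc_lock_mode = 0

After the restart you can confirm it from any client with SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';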

Another thing to check is the value of the auto_increment_increment config variable. It's 1 by default, but you may have changed this. Again, very uncommon to set it to something higher than 1 or 2, but possible.
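
A quick way to check both (this pattern also shows auto_increment_offset, which works together with it in replication setups):

    -- both should normally be 1 on a standalone server
    SHOW VARIABLES LIKE 'auto_increment%';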

I agree with other commenters: autoinc columns are intended to be unique, but not necessarily consecutive. You probably shouldn't worry about it so much unless you're advancing the autoinc value so rapidly that you could run out of the range of an INT (this has happened to me).


How exactly did you fix it skipping 1500 for every insert?

The cause of the INSERT failing was that there was another column with a UNIQUE constraint on it, and the INSERT was trying to insert duplicate values in that column. Read the manual page I linked to for details on why this matters.

The fix was to do a SELECT first to check for existence of the value before attempting to INSERT it. This goes against common wisdom, which is to just try the INSERT and handle any duplicate key exception. But in this case, the side-effect of the failed INSERT caused an auto-inc value to be lost. Doing a SELECT first eliminated almost all such exceptions.

But you also have to handle a possible exception, even if you SELECT first. You still have a race condition.
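
A minimal PHP sketch of that pattern, assuming a PDO connection in $pdo with ERRMODE_EXCEPTION enabled and a UNIQUE index on `url` (the variable names here are illustrative, not from the question):

    // check first, then INSERT, but still catch the duplicate-key error (race condition)
    $check = $pdo->prepare('SELECT 1 FROM `read` WHERE `url` = ? LIMIT 1');
    $check->execute([$url]);
    if ($check->fetchColumn() === false) {            // URL not stored yet
        try {
            $insert = $pdo->prepare('INSERT INTO `read` (`title`, `url`) VALUES (?, ?)');
            $insert->execute([$title, $url]);
        } catch (PDOException $e) {
            if ($e->errorInfo[1] != 1062) {           // 1062 = ER_DUP_ENTRY
                throw $e;                             // some other failure
            }
            // another request won the race between the SELECT and the INSERT; ignore it
        }
    }

Most of the time the SELECT answers the question and no failing INSERT is attempted, so no auto-increment value is burned.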

You're right! innodb_autoinc_lock_mode=0 worked like a charm.

In your case, I would want to know why so many inserts are failing. I suspect that like many SQL developers, you aren't checking for success status after you do your INSERTs in your AJAX handler, so you never know that so many of them are failing.

They're probably still failing; you just aren't losing auto-inc IDs as a side effect. You should really diagnose why so many failures occur. You could be either generating incomplete data, or running many more transactions than necessary.
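
As a hedged sketch of such a check with mysqli (assuming $mysqli is your connection and $SQL is the statement built in the question); note that INSERT IGNORE reports success even when it skips rows, so the affected-row count is worth logging as well:

    if ($mysqli->query($SQL) === false) {
        error_log('INSERT failed: ' . $mysqli->error);              // a real failure
    } elseif ($mysqli->affected_rows === 0) {
        error_log('INSERT IGNORE skipped every row (all duplicates?)');
    }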

Bill Karwin
  • `You probably shouldn't worry about it so much unless you're advancing the autoinc value so rapidly` - That's exactly why it's happening. I'm using AJAX to grab new data every 2 seconds and the increment goes up by the number of rows I have. – Chris Burton Jul 22 '13 at 23:24
  • Not sure it's 100% relevant here but interesting as well: InnoDB would also lose the auto-increment value after a server restart (and will restore it as `MAX(PK) + 1` on the next start) – zerkms Jul 22 '13 at 23:25
  • @zerkms, yes that's true, but that shouldn't make InnoDB skip a large span of values subsequently. – Bill Karwin Jul 22 '13 at 23:27
  • @Chris Burton: "every 2 seconds" isn't rapidly - you have 68 years to fill all the signed int32 space (assuming you have only one simultaneous user at the moment, thanks to Bill Karwin for keeping eyes on our mistakes) – zerkms Jul 22 '13 at 23:27
  • 1
    @Bill Karwin: Indeed, was just an irrelevant interesting (?) fact for further readers :-) – zerkms Jul 22 '13 at 23:28
  • 1
    @zerkms, depends on how many concurrent clients are doing it. :-) – Bill Karwin Jul 22 '13 at 23:28
  • @BillKarwin How exactly did you fix it skipping 1500 for every insert? – Chris Burton Jul 22 '13 at 23:30
  • @Chris Burton: the thing is - you shouldn't rely on the exact PK/autoincrement value. Just shouldn't. If you want guaranteed ordered enumeration - you have to maintain it manually (with proper synchronization between concurrent sessions) – zerkms Jul 22 '13 at 23:30
  • I'm just worried that because I am using AJAX that checks every 2 seconds and INSERTS if there is new data, that I will eventually run out and throw an error. I left my page up one night and it went from 32 to over 300,000. – Chris Burton Jul 22 '13 at 23:34
  • 2
    You're right! `innodb_autoinc_lock_mode=0` worked like a charm. – Chris Burton Jul 22 '13 at 23:42
  • @BillKarwin The reason I'm using `INSERT IGNORE` is because when I'm grabbing the data from an RSS Feed, it tries to reinsert everything thus causing the skipped numbers. See [here](http://stackoverflow.com/questions/17796427/trying-to-get-insert-to-only-insert-new-data). – Chris Burton Jul 23 '13 at 00:09
  • hi all, how and where would I set the innodb lock mode? – Kentot Sep 21 '15 at 01:28
  • 1
    @Kentot, Read the second paragraph in my answer above, and follow the link to the documentation I reference. – Bill Karwin Sep 21 '15 at 05:31
  • In this post you mentioned that you crossed the INT boundary; how did you handle that issue? Did you just convert the INT to a long INT, or did you run into any other issues? – Ujjaval Moradiya Jun 13 '17 at 13:06
  • @UjjavalMoradiya, yes, you have to upgrade to BIGINT. In the case I worked on, I first upgraded to BIGINT the columns in all other tables that referenced this auto-increment column. It would not be a good idea to upgrade the auto-increment first while the application is in use, because it would create entries with large values that other tables cannot reference. – Bill Karwin Jun 15 '17 at 01:17
3

After you change 782 to 38 you can reset the auto-increment with ALTER TABLE mytable AUTO_INCREMENT = 39. This way you continue at 39.

However, you should check why your gap is so high and change your design accordingly. Changing the auto-increment should not be "default" behaviour.
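
A short sketch of that reset, assuming the `read` table and `order` column from the question (InnoDB will not let you set the counter lower than the current maximum value in the column):

    SELECT MAX(`order`) FROM `read`;          -- confirm the current maximum, e.g. 38
    ALTER TABLE `read` AUTO_INCREMENT = 39;   -- the next inserted row gets 39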

Rik
0

Auto increment doesn't care whether you delete rows - every time you insert a row, the value is incremented.

If you want numbering without gaps, don't use auto increment and do it yourself. You could use something like this to achieve it when inserting:

INSERT INTO tablename SET
    `order` = (SELECT MAX(`order`) + 1 FROM (SELECT * FROM tablename) t),
    ...

and if you delete a row, you have to rearrange the order column manually.
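
A sketch of that manual renumbering, using a MySQL user variable (table and column names as in the example above):

    -- renumber `order` consecutively, in the existing order
    SET @n := 0;
    UPDATE tablename
    SET `order` = (@n := @n + 1)
    ORDER BY `order`;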

Philipp
  • I was actually thinking I could do it by using `count()` with PHP. – Chris Burton Jul 22 '13 at 23:09
  • You could, but I don't think this is very efficient – Philipp Jul 22 '13 at 23:12
  • And if you want to find the gaps look [here](http://stackoverflow.com/questions/4340793/how-to-find-gaps-in-sequential-numbering-in-mysql). – Rik Jul 22 '13 at 23:13
  • 2
    Did you try that INSERT? It results in an error: `You can't specify target table 'tablename' for update in FROM clause`. Even if it worked, it would be susceptible to race conditions, if two threads try to insert concurrently. – Bill Karwin Jul 22 '13 at 23:23
  • I did not @BillKarwin – Chris Burton Jul 22 '13 at 23:31
  • @ChrisBurton just tested the query and updated it - now works fine for me – Philipp Jul 22 '13 at 23:40
  • @BillKarwin Are you sure this isn't atomic if you use InnoDB? Even if not, you could wrap the code inside a transaction – Philipp Jul 22 '13 at 23:42
  • @BillKarwin suggested `innodb_autoinc_lock_mode=0` and it worked. I no longer have gaps. – Chris Burton Jul 22 '13 at 23:43
  • @ChrisBurton keep in mind, you'll get your gaps again if you delete something – Philipp Jul 22 '13 at 23:45
  • Ah, my mistake, it does work, since the second transaction blocks on the commit of the first transaction. So it doesn't generate conflicting values, but it *does* effectively serialize transactions. – Bill Karwin Jul 22 '13 at 23:49
  • @Philipp While that may be true, I don't ever plan on deleting articles I've read in the database. That's basically what this database does, saves the article title and URL of what I've read. – Chris Burton Jul 23 '13 at 00:21
0

I know the question has been answered already, but if you have deleted rows in the table before, MySQL will remember the used IDs/numbers because your auto-increment column is typically unique, and therefore it will not create duplicate values. To re-seed the increment from the current maximum ID you could perform:

SELECT MAX(`order`) + 1 FROM TableName;      -- find the next value, e.g. 39
ALTER TABLE TableName AUTO_INCREMENT = 39;   -- MySQL only accepts a literal number here, not a subquery
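
If you want to do the whole thing in pure SQL, you have to build the ALTER statement dynamically, since that clause only takes a literal; a possible sketch (same assumed TableName and `order` column):

    SET @next := (SELECT MAX(`order`) + 1 FROM TableName);
    SET @sql  := CONCAT('ALTER TABLE TableName AUTO_INCREMENT = ', @next);
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;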
Daryl Gill