When you are doing hundreds of thousands of queries on a db server, switching from
SELECT foo FROM bar WHERE rec_id = xyz;
if (!record_found) {
    INSERT INTO bar (x, y, z) VALUES (1, 2, 3);
}
to:
INSERT INTO bar (x, y, z) VALUES (1, 2, 3) ON DUPLICATE KEY UPDATE count = count + 1;
gives a HUGE performance gain. The CPU load on the server drops drastically. However, if the table has an auto-increment field, you lose contiguity and the value grows at a very rapid rate.
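For context, here is a minimal sketch of the kind of table involved (the names, types, and the rec_id natural key are illustrative, not my actual schema). If I understand InnoDB correctly, every upsert attempt that collides on the unique key still reserves an auto-increment value even though no new row is written, which is where the gaps come from:

-- Illustrative schema: surrogate auto-increment id plus a unique natural key;
-- every duplicate-key upsert attempt still burns an id.
CREATE TABLE bar (
    id      BIGINT(20) NOT NULL AUTO_INCREMENT,
    rec_id  VARCHAR(64) NOT NULL,
    x       INT NOT NULL,
    y       INT NOT NULL,
    z       INT NOT NULL,
    count   INT NOT NULL DEFAULT 1,
    PRIMARY KEY (id),
    UNIQUE KEY uk_rec (rec_id)
) ENGINE=InnoDB;

-- The upsert: inserts a new row, or bumps count when rec_id already exists.
INSERT INTO bar (rec_id, x, y, z)
VALUES ('xyz', 1, 2, 3)
ON DUPLICATE KEY UPDATE count = count + 1;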
I do not care about the contiguity of the auto-increment field, but I am a little concerned about the rate at which it is ballooning, since I encounter a ton of duplicate records. We are talking about the auto-increment value passing 10 million while the actual number of records in the table is closer to 0.5 million, in the space of a week.
The field is of type BIGINT(20), so I still have a long way to go. I also understand that once the value grows by a couple more orders of magnitude, the relative rate of change may not matter much. If I ever do breach that limit, can I simply make the field bigger? Is this the best way to deal with this situation?
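As a rough sanity check on "a long way to go", assuming the burn rate stays flat at roughly 10 million values per week (which is just an extrapolation from this one week):

-- Signed BIGINT tops out at 9,223,372,036,854,775,807; at ~10 million ids
-- consumed per week, this estimates the weeks left before overflow.
SELECT 9223372036854775807 DIV 10000000 AS weeks_until_overflow;
-- ~922,337,203,685 weeks, i.e. on the order of tens of billions of years.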