
Possible Duplicate:
is there a way to avoid calling nextval() if the insert fails in PostgreSQL?
Rails - Postgres does not re-use deleted ids but MySql does?

I have a very simple database on PostgreSQL 9.2.1. When I insert something with Doctrine2 using this method:

$em->persist($entity);
$em->flush();

If I get an error (meaning the record was NOT inserted), it still adds 1 to my primary key. So if I refresh the page 50 times, it says the auto-increment value is set to 50. How can I prevent this? If the record is not inserted, why does the primary key increase? This is very critical, because it consumes IDs for nothing and the number keeps getting bigger.

I have no idea where this comes from or why, so I have not tried anything yet. I'm very stuck at this point.

xangr
    See this earlier answer: http://stackoverflow.com/a/9985219/398670 – Craig Ringer Oct 21 '12 at 01:12
  • Oh, I thought it came from Doctrine. This is my first discovery of this issue; I have never seen anything like it before. Sorry guys. I'll accept an answer, and yes, this question is a duplicate. – xangr Oct 21 '12 at 13:21

2 Answers


Doctrine2 uses transactions to complete its tasks.

Each time Doctrine flushes its changes to the database, it opens a transaction; each record is prepared, and then the commit applies all of the changes at once.

During this process, the records that require an auto-increment ID are assigned one. If a record then fails during the commit phase, the records aren't added, but the sequence has already issued those IDs.

You can read more about Transactions here: http://www.postgresql.org/docs/current/static/tutorial-transactions.html

Note: This is also the case for MySQL, and I'm sure for many other RDBMSs as well.
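You can reproduce the effect directly in psql, independent of Doctrine. A minimal sketch (the `item` table and its column names are made up for illustration):

```sql
-- Hypothetical table; the id column is backed by a sequence via serial.
CREATE TABLE item (
    id   serial PRIMARY KEY,
    name text NOT NULL UNIQUE
);

BEGIN;
INSERT INTO item (name) VALUES ('a');  -- consumes id 1
COMMIT;

BEGIN;
INSERT INTO item (name) VALUES ('a');  -- unique violation: transaction fails
ROLLBACK;                              -- id 2 was consumed anyway

BEGIN;
INSERT INTO item (name) VALUES ('b');  -- gets id 3, not 2
COMMIT;
```

The rollback undoes the row, but not the sequence advance, which is exactly the gap the question observes.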

noetix
    It's actually the nature of sequence operation under concurrency more than the presence of transactional context that's key here. Even without transactions, it would be undesirable to reset a sequence's last value when an insert fails as another concurrent activity may have already advanced it. – dbenhur Oct 21 '12 at 01:12

There is no way to avoid the behaviour you observe without incurring a substantial performance penalty or incorrect concurrent operation.

Auto-incrementing fields are implemented by tying the field to a sequence via a default of nextval('sequence_name').

Sequences are designed to guarantee that they always emit a unique value, even under concurrent operation. They do not guarantee that they always emit sequential values. When nextval() is called as you attempt to insert a row, the value is consumed and will never be emitted again by that sequence unless you manually reset it with setval().
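In isolation, a sequence behaves like this (a sketch using a throwaway sequence name):

```sql
CREATE SEQUENCE demo_seq;

SELECT nextval('demo_seq');   -- 1
BEGIN;
SELECT nextval('demo_seq');   -- 2
ROLLBACK;                     -- the rollback does NOT return 2 to the sequence
SELECT nextval('demo_seq');   -- 3, not 2

-- The only way to reuse a value is an explicit (concurrency-unsafe) reset:
SELECT setval('demo_seq', 1); -- next nextval() will emit 2 again
```

Note that setval() is only safe when you can guarantee no other session is drawing from the sequence at the same time.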

Since incrementing a sequence safely necessarily incurs locking overhead, under highly concurrent operation some DBMSs will cache a set of sequence values for each transaction or session to avoid excessive locking -- this further exacerbates the skipped-value problem. In PostgreSQL, such caching is controlled via the CACHE clause of the sequence creation.
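In PostgreSQL the per-session cache size is fixed when the sequence is created (and can be changed later with ALTER SEQUENCE). A sketch, with a made-up sequence name:

```sql
-- Each session pre-allocates 20 values at a time; values cached by a
-- session that exits without using them are lost, widening the gaps.
CREATE SEQUENCE busy_seq CACHE 20;

-- The default is CACHE 1, i.e. no pre-allocation beyond the current value:
ALTER SEQUENCE busy_seq CACHE 1;
```

A larger CACHE trades bigger potential gaps for less contention on the sequence under heavy concurrent inserts.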

dbenhur