I have a Postgres 9.3 table with a column called `id` as its PKEY. `id` is `char(9)` and allows only lowercase a-z0-9. I use Python with psycopg to insert into this table.
When I need to insert into this table, I call a Python function `get_new_id()`. My question is: how can I make `get_new_id()` efficient?
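For reference, a minimal sketch of the id generation itself (the helper name `random_id` is just illustrative, not part of my code):

```python
import random
import string

# Allowed characters: lowercase a-z plus 0-9 (36 symbols).
ALPHABET = string.ascii_lowercase + string.digits

def random_id(length=9):
    # Build a 9-character string matching the char(9) column.
    return ''.join(random.choice(ALPHABET) for _ in range(length))
```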
I have the following solutions in mind, but none of them satisfies me.
a) Pre-generate a lot of ids and store them in a table. When I need a new id, I `SELECT` one from this table, delete it from the table, and return it (see the sketch below). The downside is that I have to maintain this table; each `get_new_id()` call also needs a `SELECT COUNT` to find out whether I should generate more ids to put into it.
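A minimal sketch of this approach, assuming psycopg2 and an illustrative pool table `id_pool(id char(9) PRIMARY KEY)`; the table names, the refill threshold, and the `refill_pool` helper are all hypothetical:

```python
import random
import string

import psycopg2  # assuming psycopg2; e.g. conn = psycopg2.connect(dbname="mydb")

ALPHABET = string.ascii_lowercase + string.digits


def refill_pool(cur, n=10000):
    # Top up id_pool with fresh random ids, skipping ids already in the
    # pool or already consumed by the main table ("mytable" is illustrative).
    for _ in range(n):
        cand = ''.join(random.choice(ALPHABET) for _ in range(9))
        cur.execute(
            "INSERT INTO id_pool (id) SELECT %(id)s WHERE NOT EXISTS "
            "(SELECT 1 FROM id_pool WHERE id = %(id)s) AND NOT EXISTS "
            "(SELECT 1 FROM mytable WHERE id = %(id)s)",
            {"id": cand},
        )


def get_new_id(conn):
    with conn.cursor() as cur:
        # Lock one pre-generated id, delete it, and return it in one
        # statement. Under heavy concurrency a caller can occasionally
        # come back empty after waiting on a lock and should retry.
        cur.execute(
            "DELETE FROM id_pool "
            "WHERE id = (SELECT id FROM id_pool LIMIT 1 FOR UPDATE) "
            "RETURNING id"
        )
        row = cur.fetchone()
        if row is None:
            raise RuntimeError("id_pool is empty")
        # The SELECT COUNT bookkeeping mentioned above.
        cur.execute("SELECT count(*) FROM id_pool")
        if cur.fetchone()[0] < 1000:  # illustrative refill threshold
            refill_pool(cur)
    conn.commit()
    return row[0]
```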
b) When `get_new_id()` is called, it generates a random id and passes it to a stored procedure that checks whether the id is already in use. If not, we are good; if it is, do b) again (see the sketch below). The downside is that as the table gets bigger, the failure rate may climb, and there is also a race condition: two `get_new_id()` calls in two processes can generate the same id, say `1234567`, which is not yet used as a PKEY, so when they insert, one process will fail.
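A sketch of this approach that also handles the race, again assuming psycopg2 and an illustrative target table `mytable(id char(9) PRIMARY KEY, payload text)`: instead of pre-checking (which is inherently racy), just attempt the `INSERT` and retry on a unique-key violation, letting the PKEY constraint arbitrate between concurrent processes. With 36^9 (about 10^14) possible ids, collisions stay rare until the table is very large, so the loop almost always succeeds on the first try.

```python
import random
import string

import psycopg2

ALPHABET = string.ascii_lowercase + string.digits


def insert_with_new_id(conn, payload):
    # Generate random ids until an INSERT succeeds; the PKEY constraint
    # itself rejects duplicates, so no separate existence check is needed.
    while True:
        new_id = ''.join(random.choice(ALPHABET) for _ in range(9))
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO mytable (id, payload) VALUES (%s, %s)",
                    (new_id, payload),
                )
            conn.commit()
            return new_id
        except psycopg2.IntegrityError:
            # The id was taken, or another process won the race:
            # roll back and try a fresh random id.
            conn.rollback()
```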
I think this is a pretty old problem; what's the perfect solution?
Edit
I think this has been answered; see Jon Clements' comment.