Your best option is to migrate MongoDB's `ObjectId` fields to PostgreSQL's `uuid` columns. Note that a UUID is 16 bytes while an ObjectId is only 12, so you'll need to pad the values.
If you really want to use `bigint`s, you have two options:
1. Create completely new values
- create your schema (with tables, constraints, etc.)
- in this schema, use `text` / `varchar` for your `ObjectId` values (for now)
- create foreign keys for all your relations, with `ON UPDATE CASCADE` for all `ObjectId` columns
- create sequences for all tables which have `ObjectId` columns
- update the `ObjectId` columns (while they are still `text` / `varchar`) with:

  ```sql
  UPDATE table_name
  SET    object_id_col = nextval('table_name_object_id_col_seq')::text
  ```

  (This will propagate the changes to referencing tables, because the foreign keys were set up earlier.)
- drop the foreign keys
- change the column type of these `ObjectId` columns to `bigint`
- alter your sequences to be `OWNED BY` the table columns
- alter the tables to use `nextval('table_name_object_id_col_seq')` as the default value
- re-add the foreign keys
This method is guaranteed never to cause duplicate values during the migration. And the sequences can be used to generate new values for the primary keys afterwards.
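For a hypothetical pair of tables (the `users` / `posts` names and `object_id` columns below are illustrative, not from any original schema), the steps above could be sketched as:

```sql
-- schema with ObjectId values still stored as text
CREATE TABLE users (
  object_id text PRIMARY KEY
);
CREATE TABLE posts (
  object_id      text PRIMARY KEY,
  user_object_id text REFERENCES users (object_id) ON UPDATE CASCADE
);

-- a sequence per table that has an ObjectId column
CREATE SEQUENCE users_object_id_seq;
CREATE SEQUENCE posts_object_id_seq;

-- renumber; ON UPDATE CASCADE rewrites posts.user_object_id automatically
UPDATE users SET object_id = nextval('users_object_id_seq')::text;
UPDATE posts SET object_id = nextval('posts_object_id_seq')::text;

-- drop the FK, switch the columns to bigint, then re-add the FK
ALTER TABLE posts DROP CONSTRAINT posts_user_object_id_fkey;
ALTER TABLE users ALTER COLUMN object_id TYPE bigint USING object_id::bigint;
ALTER TABLE posts ALTER COLUMN object_id TYPE bigint USING object_id::bigint,
                  ALTER COLUMN user_object_id TYPE bigint USING user_object_id::bigint;

ALTER SEQUENCE users_object_id_seq OWNED BY users.object_id;
ALTER SEQUENCE posts_object_id_seq OWNED BY posts.object_id;

ALTER TABLE users ALTER COLUMN object_id SET DEFAULT nextval('users_object_id_seq');
ALTER TABLE posts ALTER COLUMN object_id SET DEFAULT nextval('posts_object_id_seq');

ALTER TABLE posts ADD FOREIGN KEY (user_object_id) REFERENCES users (object_id);
```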
2. Use your original values in some way
Truncation will cause information loss, so you may end up with duplicate values no matter which method you try. However, you can reduce the chance of collisions by using f.ex. bitwise XOR (usually the `#` operator in PostgreSQL) instead of modulus.
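A quick illustration of the operator itself (the operand values are made up):

```sql
-- bitwise XOR on integers uses the # operator in PostgreSQL
SELECT 255 # 15;  -- 11111111 XOR 00001111 = 11110000 = 240
```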
Using XOR, f.ex., you can fold your original values like this:

1. start with `0` (or with some other, fixed starting value)
2. on each iteration, take the N least significant bits of the input
3. calculate the result as `<the_previous_result> # <value_from_step_2.>`
4. continue at step 2. while there are unused bits left (the next input is the old input without the N least significant bits)
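Walking through the steps above with a made-up 24-bit value (`0xABCDEF`) and N = 8:

```sql
-- fold 0xABCDEF into 8 bits: start with 0, then XOR in each
-- 8-bit chunk, from least to most significant
SELECT ((0 # x'EF'::int) # x'CD'::int) # x'AB'::int;  -- 137, i.e. 0x89
```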
Here is an SQL function that does that:

```sql
create or replace function hex_xor(p_hex text, p_bits int default 64, p_default bigint default 0)
  returns bigint
  language sql
  immutable
as $func$
  with recursive r as (
      -- start with the full bit string, the starting value & an iteration counter
      select ('x' || p_hex)::varbit h, p_default r, 0 i
    union all
      -- strip the p_bits least significant bits from the remaining input ...
      select case
               when bit_length(h) <= p_bits then varbit ''
               else substring(h for bit_length(h) - p_bits)
             end,
             -- ... and XOR them into the accumulator
             r # case
               when bit_length(h) <= p_bits then h::bit(64)::bigint
               else substring(h from bit_length(h) - p_bits + 1 for p_bits)::bigint
             end,
             i + 1
      from   r
      where  bit_length(h) > 0
  )
  -- the row from the last iteration holds the final result
  select r
  from   r
  order  by i desc
  limit  1
$func$;
```
This assumes that the `p_hex` parameter is really in hex format & the `p_bits` parameter is never greater than `64`.
But if you just use this as-is, you may still end up with conflicting values later, upon `INSERT`. What you can do, f.ex., is to use:

```sql
select -hex_xor('56c4100560b2d8308f4bde21', 63)
```

upon migration. This way migrated `ObjectId`s will always be negative values & later generated primary keys (f.ex. from a sequence) will always be positive.
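Applied to a whole column during migration, this could look like the following (the `table_name` / `object_id_col` / `id_bigint` names are placeholders):

```sql
-- hypothetical migration step: derive the new bigint key from the old
-- ObjectId text, using 63 bits & negation to keep migrated keys negative
ALTER TABLE table_name ADD COLUMN id_bigint bigint;

UPDATE table_name
SET    id_bigint = -hex_xor(object_id_col, 63);
```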
http://rextester.com/RCVFN77368