1

For the last few days I've been trying to implement working multi-master replication with PostgreSQL, unsuccessfully. I came across multiple tools, none of which has worked.

I think I'm getting close to a solution but I'm hitting a wall.

Right now I'm using pglogical. It seems that pglogical alone only allows master-slave replication, not multi-master. I saw people using the "BDR" plugin, but I can't find any way to install it, and from what I read it is no longer maintained and has moved to a paid cloud solution (which I cannot use in my scenario).

If anyone has a solution for multi-master replication across multiple servers (without internet access, except during installation), I'd appreciate it.

I've also tried Bucardo, but, once again, without success.

Thank you all

  • Give up and try a better architecture that does *not* involve multi-master replication. – Laurenz Albe Dec 21 '22 at 04:45
  • @LaurenzAlbe I don't think there is any other option here. I can't give full details, but I need a scenario where I have multiple instances of a software and each has its own database, because one will probably be lost, and I don't know which one. If I have 5 instances it is possible that 3/4 might get destroyed (security systems), and I cannot have internet access (client restriction) – João Ferreira Dec 21 '22 at 09:00
  • I should mention that I'm stuck with it being PostgreSQL or SQLite, because it is the database for a Headscale server, and it only supports these 2 atm – João Ferreira Dec 21 '22 at 09:25
  • Good luck; I don't think you will find a viable ready-made solution. Write your own data synchronization. – Laurenz Albe Dec 21 '22 at 09:30
  • Unfortunately that's what I thought. Thank you for your time! – João Ferreira Dec 21 '22 at 09:33
  • If Headscale connects to PostgreSQL the standard way, perhaps CockroachDB could be a solution for multi-master? – ateam Jun 02 '23 at 00:21

2 Answers

1

You could try logical replication row filtering to replicate table data bidirectionally.

https://www.postgresql.org/docs/15/logical-replication-row-filter.html

Give each node a unique id and create a publication that's limited to its unique id.

That is ideal for append-only tables where you only INSERT, e.g., audit logs, but it has limited use for other operations.
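Before running the steps below, both hosts need logical decoding enabled. A minimal sketch of that prerequisite (assuming superuser access and that a server restart is acceptable):

```sql
-- On each host, enable logical decoding (takes effect after a restart).
ALTER SYSTEM SET wal_level = logical;

-- After restarting, confirm the setting:
SHOW wal_level;
```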

--On host=192.168.2.2
--Set WAL level to logical
--Execute in # order,
--#1
CREATE TABLE service.message_log
(
    id bigint NOT NULL,
    server_id smallint NOT NULL,
    message bytea,
    CONSTRAINT message_log_pkey PRIMARY KEY(id, server_id)
);

--#2
CREATE PUBLICATION message_log_host2002 FOR TABLE service.message_log WHERE (server_id = 2002);

--#5
CREATE SUBSCRIPTION message_log_host1002 CONNECTION 'host=192.168.1.2 port=5435 user=repuser dbname=testdb' PUBLICATION message_log_host1002;

--#8
INSERT into service.message_log (id, server_id, message) VALUES (1, 2002, 'Hello from 192.168.2.2');
INSERT into service.message_log (id, server_id, message) VALUES (2, 2002, 'Test from 192.168.2.2');
--On host=192.168.1.2
--Set WAL level to logical
--#3
CREATE TABLE service.message_log
(
    id bigint NOT NULL,
    server_id smallint NOT NULL,
    message bytea,
    CONSTRAINT message_log_pkey PRIMARY KEY(id, server_id)
);

--#4
CREATE PUBLICATION message_log_host1002 FOR TABLE service.message_log WHERE (server_id = 1002);

--#6
CREATE SUBSCRIPTION message_log_host2002 CONNECTION 'host=192.168.2.2 port=5435 user=repuser dbname=testdb' PUBLICATION message_log_host2002;

--#7 
INSERT into service.message_log (id, server_id, message) VALUES (1, 1002, 'Hello from 192.168.1.2');
INSERT into service.message_log (id, server_id, message) VALUES (2, 1002, 'Test from 192.168.1.2');

To verify:

SELECT id, server_id, convert_from(message, 'UTF8') FROM service.message_log;

Output from 192.168.1.2: 
1   1002    "Hello from 192.168.1.2"
2   1002    "Test from 192.168.1.2"
1   2002    "Hello from 192.168.2.2"
2   2002    "Test from 192.168.2.2"

Output from 192.168.2.2:
1   1002    "Hello from 192.168.1.2"
2   1002    "Test from 192.168.1.2"
1   2002    "Hello from 192.168.2.2"
2   2002    "Test from 192.168.2.2"
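To confirm that each subscription is actually streaming, the standard catalog views can be queried on either host (the rowfilter column of pg_publication_tables is available from PostgreSQL 15):

```sql
-- Check subscription workers and how far they have received:
SELECT subname, received_lsn, latest_end_time
FROM pg_stat_subscription;

-- List publications, their tables, and the row filter each one applies:
SELECT pubname, tablename, rowfilter
FROM pg_publication_tables;
```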
trigg
  • I've tried that, it works for master-slave, not multi-master. As soon as you set up the 2nd computer it stops working. – João Ferreira Jan 09 '23 at 11:52
  • I'm running Postgres 15.1 with two separate hosts. Did you set the row filters in the CREATE PUBLICATION so you wouldn't see collisions? – trigg Jan 09 '23 at 12:00
  • Also make sure the publication and subscription names are unique per host. – trigg Jan 09 '23 at 12:10
  • I've tested this and it actually works, I believe thanks to the "Where server_id" on the publication. – João Ferreira Jan 09 '23 at 16:42
  • I'll mark this as an answer for anyone else trying to find a solution. Unfortunately, in my case I don't have control over the table, since I'm doing this replication for a Headscale database. Whenever I come across a solution I'll update the comments. Thank you for your solution, it will definitely be a good help getting us on the right track. – João Ferreira Jan 09 '23 at 16:43
  • Do you know, by any chance, if it is possible to do something like: create a publication for all tables, without a WHERE, on 2 machines and have them subscribe to one another? – João Ferreira Jan 10 '23 at 15:54
  • I don't think it’s possible without the WHERE using standard Postgres, as you need to use that to limit the publication to only replicate the data it produces, otherwise it will error out and break the replication. In my setup I'm using master->slave for most tables (which just pushes configuration) then use the above to replicate audit trail and other data produced by each machine. Though I'm still experimenting with the setup as I have new use cases. – trigg Jan 10 '23 at 22:43
0

You could try the PostgreSQL Spock extension. I've done multi-master logical replication with PostgreSQL using custom triggers and UUID primary keys.

If your database relies on sequences for primary keys, then the Spock extension above is a better fit.
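As a sketch of the UUID-key approach mentioned above (the table name is illustrative; gen_random_uuid() is built in since PostgreSQL 13):

```sql
-- UUID primary keys avoid cross-node collisions without having to
-- coordinate sequences between masters.
CREATE TABLE service.event_log
(
    id      uuid NOT NULL DEFAULT gen_random_uuid(),
    payload bytea,
    CONSTRAINT event_log_pkey PRIMARY KEY (id)
);
```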

Duy Vu