
I have a problem applying Citus rebalancing after distributing a PostgreSQL table and adding a new node to scale out my database.

You can take a look at this useful article if you would like to understand rebalancing in Citus before helping out.

In my case, I tried to spread my data across the newly added nodes using Citus rebalancing.

Assume I have several servers with the same credentials and the same databases created. I have assigned one of them as the coordinator node (represented as "192.168.1.100" in the configuration and queries below), and another as a worker node I would like to add to scale out my data (represented as "192.168.1.101" below).

First, I set the coordinator node by executing the following query:

SELECT citus_set_coordinator_host('192.168.1.100', 5432);

Then I distributed my table:

SELECT create_distributed_table('public."Table"', 'distributedField');

For Citus rebalancing to make sense, we must be able to redistribute our data after adding or removing nodes, so I first registered the new worker:

SELECT * from citus_add_node('192.168.1.101', 5432);
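After adding the node, it can help to confirm the coordinator actually sees the new worker before attempting a rebalance (a sanity check; `citus_get_active_worker_nodes` is a standard Citus UDF):

```sql
-- Run on the coordinator: list the workers the coordinator knows about.
-- The new node (192.168.1.101) should appear here before rebalancing.
SELECT * FROM citus_get_active_worker_nodes();
```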

Then I executed the rebalance itself:

SELECT * FROM rebalance_table_shards('public."Table"');
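As a side note, `rebalance_table_shards` also accepts a `shard_transfer_mode` argument; a sketch, with the mode name taken from the Citus documentation:

```sql
-- 'block_writes' copies shards with COPY instead of logical replication,
-- which avoids the primary-key and wal_level = logical requirements
-- at the cost of blocking writes to the table during the move.
SELECT rebalance_table_shards('public."Table"', shard_transfer_mode := 'block_writes');
```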

The following error occurred every time we tried to execute the query, with different configurations and attempted fixes:

connection to the remote node localhost:5432 failed with the following error: fe_sendauth: no password supplied
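Note that the error mentions `localhost` rather than one of the node IPs, which suggests the failing connection is between nodes as they are registered in the Citus metadata. One way to inspect how the nodes address each other (`pg_dist_node` is the Citus metadata table):

```sql
-- Run on the coordinator: show how each node is registered.
-- A node registered as "localhost" would match the host in the error above.
SELECT nodeid, nodename, nodeport, noderole FROM pg_dist_node;
```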

After hours of research and applying all the suggested solutions in this question, I decided to create a new question to discuss this.

The system details and configuration files are below.

OS: Ubuntu 20.04.4 LTS

Citus Version : 11.0-2

DB: PostgreSQL 14.4 (Ubuntu 14.4-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit

pg_hba.conf file content:

local   all             postgres                                peer
local   all             all                                     peer
host    all             all             127.0.0.1/32            scram-sha-256
host    all             all             ::1/128                 scram-sha-256
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            scram-sha-256
host    replication     all             ::1/128                 scram-sha-256
host    all             all             192.0.0.0/8             trust
host    all             all             127.0.0.1/32            trust
host    all             all             ::1/128                 trust
host    all             all             192.168.1.101/32        trust

Any help would be appreciated, thanks in advance.

Uncle Bent
    I also have exactly the same problem. – Kellad Jul 25 '22 at 11:22
    In pg_hba, the earlier line wins. So the 'trust' lines will be preempted by the earlier scram lines for the same IP addresses. If you look in the db log file (not the message sent to the client) it should tell you which pg_hba line was matched. – jjanes Jul 25 '22 at 14:37

1 Answer


Thank you jjanes.

  • I moved the trust lines above the scram lines in pg_hba.conf,
  • defined a primary key for the distributed table, and
  • set wal_level = logical in postgresql.conf,

and my table was rebalanced successfully.
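For reference, the reordering described in the first bullet would look roughly like this in pg_hba.conf (first match wins, so the trust lines must precede the scram lines for the same addresses; the IPs are the ones from the question):

```
host    all    all    127.0.0.1/32         trust
host    all    all    ::1/128              trust
host    all    all    192.168.1.101/32     trust
host    all    all    127.0.0.1/32         scram-sha-256
host    all    all    ::1/128              scram-sha-256
```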

Kellad