
We have a MongoDB replica set which has 3 nodes:

  1. Primary
  2. Secondary
  3. Arbiter

Somehow our replica set has ended up with both 1 & 2 set as secondary members. I'm not sure how this happened (we did have a migration of the server that one of the nodes runs on, but only that one).

Anyway, I've been trying to re-elect a new primary for the replica set, following the guide here.

I'm unable to just use

rs.reconfig(cfg)

as it will only work if directed at the primary (which I don't have).

Using the force parameter

rs.reconfig(cfg, { force: true })

would appear to work, but when I re-query the status of the replica set, both servers are still showing only as SECONDARY.
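For illustration, a forced reconfig of this sort typically looks something like the following in the mongo shell (the member indexes and priority values are placeholders, and bumping priority is just one way the cfg might be used to steer the election):

```
// Illustrative sketch only: member indexes and priorities are placeholders.
cfg = rs.conf()                    // fetch the current replica set configuration
cfg.members[0].priority = 2        // favour the member that should become primary
cfg.members[1].priority = 1
rs.reconfig(cfg, { force: true })  // forced reconfig run against a secondary
rs.status()                        // re-check member states afterwards
```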

Why hasn't the forced reconfig worked? At the moment the database is effectively locked, whatever I try.

obaylis
  • Can you post the output of rs.status()? This will provide current state of your replica set members. – James Wahlin Apr 07 '14 at 16:10
  • Not even close to relating to a programming question. Please submit to [dba.stackexchange.com](http://dba.stackexchange.com) instead. This is not the place for subjects not directly related to programming topics. – Neil Lunn Apr 07 '14 at 16:28
  • This will probably be locked soon as it isn't a programming question. Rather than try to force one, you can use the "votes" parameter to tune the primary http://docs.mongodb.org/manual/reference/replica-configuration/#local.system.replset.members[n].votes. A node will never vote for itself, so whichever you give more votes will be the secondary. You can simply tune this value and force an election to switch primary – bauman.space Apr 07 '14 at 16:52
  • Fine, I'll ask it on the DBA site. – obaylis Apr 11 '14 at 11:06

2 Answers


1. Convert all nodes to standalone.

Stop the mongod daemon and edit /etc/mongod.conf to comment out the replSet option.

Start the mongod daemon.
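For example, assuming a systemd-managed install and the YAML config format (older installs use a single `replSet = ...` line instead), step 1 looks roughly like this on each node:

```
sudo systemctl stop mongod

# /etc/mongod.conf -- comment out the replication block (or the replSet line)
#replication:
#  replSetName: rs0

sudo systemctl start mongod
```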

2. Use mongodump to back up the data on all nodes.

Reference from the MongoDB docs:

https://docs.mongodb.com/manual/reference/program/mongodump/
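A minimal dump per node might look like this (host, port and output path are just examples):

```
mongodump --host localhost --port 27017 --out /backup/mongodump-$(date +%F)
```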

3. Log into each node and drop the local database.

Doing this deletes the replica set configuration on the node.

Or you can just delete the record in the system.replset collection in the local database, as described here:

https://stackoverflow.com/a/31745150/4242454
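In the mongo shell on each standalone node, either option does the job; dropping local is the simpler one:

```
// Option A: drop the whole local database
use local
db.dropDatabase()

// Option B: remove only the stored replica set config
use local
db.system.replset.remove({})
```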

4. Start all nodes with the replSet option.
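That is, undo the edit from step 1 and restart, assuming the same YAML config and systemd setup as above:

```
# /etc/mongod.conf -- re-enable the replication block
replication:
  replSetName: rs0

sudo systemctl restart mongod
```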

5. On the previous data node (not the arbiter), initialize a new replica set.
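For example, in the mongo shell on that node (set name and hostname are placeholders):

```
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "data-node-1:27017" }
  ]
})
```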

6. Finally, reconfigure the replica set with rs.reconfig.
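For instance, to bring the other data node and the arbiter back in (rs.add and rs.addArb are convenience wrappers around the same reconfig; hostnames are placeholders):

```
rs.add("data-node-2:27017")
rs.addArb("arbiter-node:27017")
rs.conf()    // verify the resulting configuration
```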

Kevin_wyx
  • Saved us, thanks for the answer. For others - drop local db: `use local` `db.dropDatabase()`, initialize new replica set: `rs.initiate()` – Bilbo Nov 23 '21 at 16:37

I had the same situation: it happened because the arbiter reported that it had the most recent opTime timestamp.

I found it in the log: grep ELECTION /var/log/mongodb/mongod.log

"ARBITER-NODE:27017" ... "reason":"candidate's data is staler than mine. candidate's last applied OpTime: .."

The reason for this behavior is that the data nodes were restored from a backup snapshot, while the arbiter was not. If that is acceptable, the solution is to temporarily stop the arbiter node.
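Stopping it can be as simple as the following (assuming a systemd service named mongod on the arbiter host):

```
# on the arbiter host
sudo systemctl stop mongod
# wait for the remaining data nodes to elect a primary, then bring it back
sudo systemctl start mongod
```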