I have an AWS AMI backup of a single member of a replica set, and I would like to recover the full replica set from it. The replica set was configured with 3 hosts (db1.mydomain.com, db2.mydomain.com, db3.mydomain.com).
What I tried:
- Launch 3 instances from that one AMI
- Point the Route53 CNAMEs db[1,2,3].mydomain.com at the newly created instances (sketched below)
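Concretely, each record update looks something like this (a sketch; the hosted zone ID and the instance's public DNS name are placeholders, and the same change could equally be made in the console):
# repeated for db2 and db3 with their respective targets
aws route53 change-resource-record-sets \
  --hosted-zone-id <zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "db1.mydomain.com",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{"Value": "<new-instance-public-dns>"}]
      }
    }]
  }'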
My assumption was that, since they all start from the same point in time, are already configured as a replica set, and can see each other, they would simply find each other, elect a primary, and start working. That doesn't happen; what I get instead is:
mongo -u <login> -p <password> --authenticationDatabase admin
my-rs:OTHER> rs.status()
{
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
}
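If it helps, the config each member actually has on disk can be read straight out of the local database, even while the node is in this state (a sketch; same credentials as above):
mongo -u <login> -p <password> --authenticationDatabase admin
my-rs:OTHER> db.getSiblingDB("local").system.replset.findOne()
My understanding is that this error means the node cannot match itself against any members[].host entry in that stored config.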
What does work:
- Stop mongod
sudo service mongod stop
- On each host, start mongod standalone, connect directly, and drop the local database (the whole sequence is consolidated into one script below)
sudo mongod --dbpath /data &
echo -e "use local\ndb.dropDatabase()" | mongo
sudo killall -9 mongod
sudo chown mongod:mongod -R /data
sudo chown mongod:mongod /tmp/mongodb-27017.sock
sudo rm /var/run/mongodb/mongod.pid
sudo service mongod restart
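For reference, here is the per-host cleanup rolled into one script (a sketch of exactly the steps above; the sleep and --bind_ip 127.0.0.1 are additions for safety, not something the original commands had):
#!/bin/bash
# Run once on each of db1/db2/db3.
sudo service mongod stop
# Start a temporary standalone mongod (no --replSet), bound to localhost only.
sudo mongod --dbpath /data --bind_ip 127.0.0.1 &
sleep 5   # give it a moment to start accepting connections
# Drop the local database, which holds the replica set config and the oplog.
echo -e "use local\ndb.dropDatabase()" | mongo
sudo killall -9 mongod
# The standalone run leaves files owned by root; restore ownership.
sudo chown -R mongod:mongod /data
sudo chown mongod:mongod /tmp/mongodb-27017.sock
sudo rm -f /var/run/mongodb/mongod.pid
sudo service mongod restart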
- Connect to one of them and re-create the replica set
mongo -u <login> -p <password> db1.mydomain.com --authenticationDatabase admin
mongo> rs.initiate({_id: "my-rs", members: [{_id: 0, host: "db1.mydomain.com:27017"}, {_id: 1, host: "db2.mydomain.com:27017"}, {_id: 2, host: "db3.mydomain.com:27017"}]})
This works, but db2 and db3 then sit in STARTUP2 for a few hours; it looks like they are doing a full initial sync from db1.
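While they are in STARTUP2, progress can be watched from the primary using plain rs.status() fields (a sketch):
mongo -u <login> -p <password> db1.mydomain.com --authenticationDatabase admin
my-rs:PRIMARY> rs.status().members.forEach(function(m) { print(m.name, m.stateStr, m.optimeDate); })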
Is there any way to revive the replica set without dropping the local database, re-creating the replica set, and doing a full initial sync?
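Ideally something like the following would work against a member that still has its original data files, but I don't know whether a forced reconfig is valid from this state, so this is only a sketch:
mongo -u <login> -p <password> db1.mydomain.com --authenticationDatabase admin
my-rs:OTHER> cfg = db.getSiblingDB("local").system.replset.findOne()   // the config already on disk
my-rs:OTHER> rs.reconfig(cfg, {force: true})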