
I have 3 servers all running Mongo as a replica set, 1 Primary and 2 Replicas.

If one should fail, my Auto Scaling group will kill the instance and launch a replacement, using user data to reinstall the packages.

How can I notify the primary (new or old) that the instance has come back to life, given that it will have a new private IP address? (You can't assign static private IPs to an autoscale/launch config.)

I can't seem to find a way in either Mongoose or recent versions of the Mongo Node Driver to tell the primary there is a new instance that wants to join.

I can do this manually by opening the Mongo shell and running

rs0:PRIMARY> rs.add("10.23.229.17");

It will be added and the data will sync. But how can I do rs.add through the Node driver or a Bash script?

Update

I found this post pretty useful: Can I call rs.initiate() and rs.Add() from node.js using the MongoDb driver?

Basically, that points to the list of all database commands. Though I can't find anything that wraps them nicely, it's easy enough to run them using the command method:

var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://10.23.223.231', function (err, db) {
  if (err) {
    console.log('error', err);
  } else {
    var adminDb = db.admin();
    // use any command from https://goo.gl/0oh6H5
    adminDb.command({ replSetGetConfig: 1 }, function (err, info) {
      if (!err) { console.log(info); }
    });
  }
});
dmo
  • Well, you got one part of it; the rest of the commands are listed in the same place, [Replication Commands](https://docs.mongodb.org/manual/reference/command/#replication-commands). But it's not so much an "add" method as manipulating the array of members returned by `replSetGetConfig` and submitting the whole configuration with [`replSetReconfig`](https://docs.mongodb.org/manual/reference/command/replSetReconfig/) – Blakes Seven Feb 12 '16 at 03:13
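
Following that suggestion, here is a rough, untested sketch of the rs.add equivalent through the driver's command method (the primary address and the new member's IP are the example values from the question, the port and member `_id` handling are assumptions): fetch the current config with replSetGetConfig, append the new member, bump the version, and submit the whole thing with replSetReconfig:

var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://10.23.223.231', function (err, db) {
  if (err) { return console.log('error', err); }

  var adminDb = db.admin();
  adminDb.command({ replSetGetConfig: 1 }, function (err, result) {
    if (err) { db.close(); return console.log('error', err); }

    var config = result.config;

    // Append the new member; its _id must be unique within the config.
    var nextId = Math.max.apply(null, config.members.map(function (m) { return m._id; })) + 1;
    config.members.push({ _id: nextId, host: '10.23.229.17:27017' });

    // Every reconfiguration must carry a higher config version.
    config.version += 1;

    adminDb.command({ replSetReconfig: config }, function (err, info) {
      if (err) { console.log('error', err); } else { console.log(info); }
      db.close();
    });
  });
});

Note that, as the answer below points out, appending members this way grows the configured membership every time an instance is replaced.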

1 Answer


What you want to do is wrong in so many ways that I don't know where to start, so I'll stick to the technical ones.

First, if you simply add a new node to the replica set, the config will then list 4 nodes, which is fine as long as all three remaining servers are running. But the next time one fails, you immediately lose the quorum, and the replica set will revert to secondary state, making writes impossible.

You add another node, raising the number to 5 nodes in the configuration. The same rules as above apply. Once you add the sixth node, the real trouble begins: you can't possibly reach a quorum again. Ever. Not unless you remove the non-existent machines. So in a worst-case scenario, you will spawn machine after machine without ever reaching a quorum. Let this happen on a Friday night, and I am sure your company will be broke by Monday, unless it is Amazon itself or an F500.
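
To make the quorum arithmetic concrete: a replica set can only elect a primary while a strict majority of its configured voting members is reachable. A quick illustrative sketch (plain Node, no driver needed; the `majority` helper is just for this example), assuming three servers stay alive while dead members pile up in the config:

function majority(configuredMembers) {
  // MongoDB needs a strict majority of *configured* voting members to elect a primary.
  return Math.floor(configuredMembers / 2) + 1;
}

var live = 3; // three healthy servers, dead members never removed from the config
[3, 4, 5, 6].forEach(function (configured) {
  var ok = live >= majority(configured);
  console.log(configured + ' configured, ' + live + ' reachable, need ' +
    majority(configured) + ' -> ' + (ok ? 'primary possible' : 'no primary, writes blocked'));
});

With six configured members and only three reachable, the required four votes can never be collected, which is the "never reach a quorum again" point above.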

Let's say you reconfigure the replica set each time, scripted. This can trigger the primary to step down, causing yet another election and a corresponding service interruption (you remember, the thing you are trying to reduce). In a real worst-case scenario, it may even cause a rollback, and that machine will of course fail next, sending written data into Nirvana because you simply dump the machine.

All in all: that's a Very Bad Idea™.

If you need the security that more than one machine can fail, add another member to the replica set, if that makes sense monetarily. In my experience on AWS, 3 data-bearing nodes are good enough.

If you still really want to do it, you might want to have a look at the documentation for replica set administrative commands, namely replSetReconfig.
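
For completeness, a hedged sketch of what such a scripted reconfiguration might look like with the Node driver; both host addresses are placeholders, and unlike the append sketch in the question above, it swaps the dead host for the new one so the set stays at three members:

var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://10.23.223.231', function (err, db) {
  if (err) { return console.log('error', err); }

  var adminDb = db.admin();
  adminDb.command({ replSetGetConfig: 1 }, function (err, result) {
    if (err) { db.close(); return console.log('error', err); }

    var config = result.config;

    // Replace the unreachable member's host with the new instance's
    // private IP (both addresses are placeholders).
    config.members.forEach(function (member) {
      if (member.host === '10.23.229.16:27017') {
        member.host = '10.23.229.17:27017';
      }
    });

    // A reconfiguration must carry a higher config version.
    config.version += 1;

    adminDb.command({ replSetReconfig: config }, function (err, info) {
      if (err) { console.log('error', err); } else { console.log(info); }
      db.close();
    });
  });
});

Even so, the caveats above still apply: replSetReconfig has to be run against the current primary, and you should expect a possible election while the new configuration propagates.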

Markus W Mahlberg