We are managing two distinct MongoDB replica sets, one for staging and one for production, both running on Kubernetes. Networking is provided by Kubernetes' internal infrastructure, such that the Mongo servers on each cluster are available at:
- mongo-0.mongodb-service.default.svc.cluster.local:27017
- mongo-1.mongodb-service.default.svc.cluster.local:27017
- mongo-2.mongodb-service.default.svc.cluster.local:27017, etc.
We need to use mongoexport / mongoimport to copy data from the staging cluster to prod. Clients on the staging cluster can use an SSH bridge to access the private network on the production cluster, so we set up a bridge:
ssh -L 27020:mongo-0.mongodb-service.default.svc.cluster.local:27017 [..]
Then we run mongoexport locally on the staging side, and run mongoimport against localhost:27020.
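Concretely, the intended transfer looks something like the following sketch. The database, collection, and credentials here are placeholders, not our real values:

```shell
# Export from the staging replica set (reachable directly on the staging network)
mongoexport \
  --host mongo-0.mongodb-service.default.svc.cluster.local:27017 \
  --username stagingUser --password 'stagingPass' \
  --db mydb --collection mycoll \
  --out mycoll.json

# Import into production through the SSH tunnel on localhost:27020
mongoimport \
  --host localhost:27020 \
  --username prodUser --password 'prodPass' \
  --db mydb --collection mycoll \
  --file mycoll.json
```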
Problem: MongoDB routes writes to the primary member of the replica set. So when the client connects to mongo-0 through the tunnel and mongo-0 is not the primary, the server tells it to connect to mongo-1 instead, using mongo-1's cluster-internal hostname.
So, hey, we just set up three port forwards, right?
ssh -L 27020:mongo-0.mongodb-service.default.svc.cluster.local:27017 -L 27021:mongo-1.mongodb-service.default.svc.cluster.local:27017 -L 27022:mongo-2.mongodb-service.default.svc.cluster.local:27017 [...]
mongoimport --host rs0/localhost:27020,localhost:27021,localhost:27022 [..]
Well, no. At this point, mongo-0 on the remote cluster tells mongoimport to connect to mongo-1 by its internal hostname, mongo-1.mongodb-service.default.svc.cluster.local. Because the staging cluster uses identical service names, that hostname resolves to mongo-1 on the local cluster, and the connection fails at authentication (different username / password).
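The redirect can be seen by asking the server, through the tunnel, which hosts it advertises; the list contains cluster-internal names rather than our localhost:2702x forwards. A quick check, assuming mongosh is installed locally (rs.isMaster() is the legacy helper; rs.hello() is the newer equivalent):

```shell
# Connect to the forwarded port WITHOUT a replica set name, so the shell
# talks only to this one member instead of discovering the topology.
# The .hosts field shows the internal names the driver would try to reach.
mongosh --host localhost:27020 --eval 'rs.isMaster().hosts'
```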
Question: how can we transfer data from one replica set to another when the remote connection has to go through SSH tunnels?
Thank you!