
I have a 3-member replica set running, and I am also using the cluster module to fork 3 other processes (the number of replica set members has nothing to do with the number of processes forked). In mongoose's connect method I have the following options set:

"use strict";

const mongoose = require("mongoose");
const config   = require("../config.js");
// Set up mongoose connection

mongoose.connect( config.mongoURI, {

    useNewUrlParser: true,

    // silent deprecation warning
    useCreateIndex: true,

    // auto reconnect to db
    autoReconnect: true,

    // turn off buffering, and fail immediately when mongodb disconnects
    bufferMaxEntries: 0,
    bufferCommands: false,

    keepAlive: true, 
    keepAliveInitialDelay: 450000,

    // number of socket connections to keep open
    poolSize: 1000
}, error => {
    if (error) {
        console.log(error);
    }
});


module.exports = mongoose.connection;

The above code is in a file named db.js. In my server.js, which starts the express application, I require db.js.

Whenever I reload the webpage multiple times, it gets to a point where the app slows down drastically (all of this started happening when I decided to use a replica set). I connected to mongodb through the mongo shell and ran db.serverStatus().connections; every time I reloaded the page, the current field increased (which is expected whenever a new connection is made to mongodb). The problem is that once the current field reaches the specified poolSize, the application takes a lot of time to load.

I tried calling db.disconnect() whenever the end event is emitted on the express req object, which disconnects from mongodb. This worked as expected, but since I am using change streams, closing opened connections this way throws MongoError: Topology was destroyed. The error being thrown is not the problem; the problem is preventing the app from slowing down drastically when the number of open connections hits the specified poolSize.

I also tried setting maxIdleTimeMS in the mongodb connection string, but it is not working (maybe mongoose does not support it).
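For reference, maxIdleTimeMS goes in the query string of the connection URI; the hosts, database name, replica set name, and the 30000 ms value below are just placeholders, not my actual config:

    mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0&maxIdleTimeMS=30000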

Note: whenever I run db.currentOp(), all operations have active set to false.

0.sh
  • What is the average load on your website (e.g. users/hour)? – BenSower Jan 28 '19 at 17:01
  • we are just starting out, but am trying to prevent this issue if traffic increases – 0.sh Jan 28 '19 at 17:02
  • 1
    Ok, then you definitely don't need a poolsize of 1000 one way or another. Do you expect many queries that will take a lot of time to complete? If not, then you'll not need 1000 connections for a long time. If you are just starting, then this might even slow down other parts of your project, since the 1000 connections all take up some ram. For more info, here's a good article https://blog.mlab.com/2013/11/deep-dive-into-connection-pooling/ – BenSower Jan 28 '19 at 17:07
  • Also, something of note: A regular Linux system often limits the maximum possible amount of connections mongoDB can handle to something like 80% of 1024, which would explain why your requests might not be handled properly. To avoid this, you can update the ulimit settings in your os and check the https://docs.mongodb.com/manual/reference/configuration-options/#net.maxIncomingConnections. Also, check out the official recommendations: https://docs.mongodb.com/manual/administration/production-notes/#manage-connection-pool-sizes – BenSower Jan 28 '19 at 17:12
  • What happens when you set `poolSize` to `0`? – Márius Rak Sep 17 '20 at 17:16
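To illustrate BenSower's suggestion above: the incoming-connection cap can be raised in mongod.conf via net.maxIncomingConnections (the 2000 here is an arbitrary example value, and the OS ulimit must be raised to match):

    net:
      maxIncomingConnections: 2000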

1 Answer


I actually found out the cause of this issue. Since I am heavily using change streams in the application, the higher the number of change streams you create, the larger the poolSize you will need. This issue has also been reported on the CORE SERVER board on MongoDB's Jira platform:

DOCS-11270

NODE-1305

Severe Performance Drop with Mongodb change streams
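The relationship can be sketched with a back-of-the-envelope sizing helper (the function and the numbers are hypothetical, purely to illustrate the point that each open change stream pins one pooled socket for its lifetime, on top of ordinary queries):

```javascript
// Rough pool-sizing sketch (hypothetical helper, illustrative numbers):
// each open change stream holds one pooled socket for as long as it is
// open, so the pool must cover streams plus concurrent ordinary queries,
// with a little headroom on top.
function requiredPoolSize(changeStreams, concurrentQueries, headroom = 5) {
    return changeStreams + concurrentQueries + headroom;
}

// e.g. 20 change streams and ~10 simultaneous queries
console.log(requiredPoolSize(20, 10)); // 35
```

So with a handful of change streams a poolSize in the tens is plenty; the pool only needs to grow in step with the number of streams you keep open.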

0.sh