
Following is the code I use to fetch results from the database, given a collection, a filter query, a sort specification, and a limit.

func DBFetch(collection *mongo.Collection, filter interface{}, sort interface{}, limit int64) ([]bson.M, error) {
    findOptions := options.Find()
    findOptions.SetLimit(limit)
    findOptions.SetSort(sort)
    cursor, err := collection.Find(context.Background(), filter, findOptions)
    var result []bson.M
    if err != nil {
        logger.Client().Error(err.Error())
        sentry.CaptureException(err)
        // Find returned an error, so cursor is nil here; do not call Close on it.
        return nil, err
    }
    if err = cursor.All(context.Background(), &result); err != nil {
        logger.Client().Error(err.Error())
        sentry.CaptureMessage(err.Error())
        return nil, err
    }
    return result, nil
}
  1. I am using mongo-go driver version 1.8.2
  2. MongoDB Community 4.4.7, sharded with 2 shards
  3. Each shard runs in Kubernetes with 30 CPUs and 245 GB memory, with 1 replica
  4. ~200 rpm on the API
  5. The API fetches the data from MongoDB, formats it, and serves it
  6. We are reading and writing both on the primary.
  7. Heavy writes occur roughly every hour.
  8. Getting timeouts within milliseconds (10-20 ms approx.)
  • this seems to be already answered in details here: https://stackoverflow.com/questions/24199729/pymongo-errors-cursornotfound-cursor-id-not-valid-at-server – R2D2 Jan 25 '22 at 14:44

1 Answer


As pointed out by @R2D2 in the comments, the cursor-not-found error occurs when a cursor sits idle longer than the server's default timeout (10 minutes) because the Go client has not requested the next batch of data in that time.

There are a couple of workarounds to mitigate this error.

The first option is to set a batch size on your find query using the option below. By doing so, you instruct MongoDB to send the results in chunks of the specified size rather than in larger batches. Note that a smaller batch size usually means more round trips between MongoDB and the Go server.

findOptions := options.Find()
findOptions.SetBatchSize(10)  // <- Batch size is set to `10`

cursor, err := collection.Find(context.Background(), filter, findOptions)
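As a back-of-envelope illustration of that trade-off (the helper below is hypothetical, not part of the driver): streaming N documents with batch size B costs roughly ceil(N/B) getMore round trips, so a batch size of 10 trades memory for many more network hops.

```go
package main

import "fmt"

// roundTrips estimates how many server round trips are needed to
// stream n documents when the cursor returns batchSize docs per getMore.
func roundTrips(n, batchSize int) int {
	if n <= 0 {
		return 1 // the initial find alone
	}
	return (n + batchSize - 1) / batchSize // ceil(n / batchSize)
}

func main() {
	fmt.Println(roundTrips(1000, 10))  // batch size 10  -> 100 round trips
	fmt.Println(roundTrips(1000, 500)) // batch size 500 -> 2 round trips
}
```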

Additionally, you can set the NoCursorTimeout option, which keeps the server-side cursor of your find query alive until you close it manually. This option is a double-edged sword: you must close the cursor yourself once you no longer need it, or it will stay in server memory indefinitely.

findOptions := options.Find()
findOptions.SetNoCursorTimeout(true)  // <- Applies no cursor timeout option

cursor, err := collection.Find(context.Background(), filter, findOptions)

// VERY IMPORTANT
_ = cursor.Close(context.Background())  // <- Don't forget to close the cursor
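If you find manual closes easy to miss on early-return paths, defer is the idiomatic safety net in Go. The sketch below uses a mock cursor type (not the real *mongo.Cursor) purely to show that a deferred Close runs on every return path, which is exactly what you want when NoCursorTimeout means the server will never reap the cursor itself:

```go
package main

import "fmt"

// mockCursor stands in for *mongo.Cursor to illustrate the lifecycle only.
type mockCursor struct{ closed bool }

func (c *mockCursor) Close() {
	c.closed = true
	fmt.Println("cursor closed")
}

// fetch closes the cursor via defer, so it is released even on error returns.
func fetch(c *mockCursor) error {
	defer c.Close()
	// ... decode results here; any early return still triggers Close ...
	return nil
}

func main() {
	c := &mockCursor{}
	_ = fetch(c)
	fmt.Println(c.closed) // true
}
```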

Combining the two options above, the complete code would be:

func DBFetch(collection *mongo.Collection, filter interface{}, sort interface{}, limit int64) ([]bson.M, error) {
    findOptions := options.Find()
    findOptions.SetLimit(limit)
    findOptions.SetSort(sort)
    findOptions.SetBatchSize(10)  // <- Batch size is set to `10`
    findOptions.SetNoCursorTimeout(true)  // <- Applies no cursor timeout option
    cursor, err := collection.Find(context.Background(), filter, findOptions)
    var result []bson.M
    if err != nil {
        //logger.Client().Error(err.Error())
        //sentry.CaptureException(err)
        // Find returned an error, so cursor is nil here; do not call Close on it.
        return nil, err
    }
    if err = cursor.All(context.Background(), &result); err != nil {
        //logger.Client().Error(err.Error())
        //sentry.CaptureMessage(err.Error())
        return nil, err
    }
    // VERY IMPORTANT
    _ = cursor.Close(context.Background())  // <- Don't forget to close the cursor
    return result, nil
}
hhharsha36