
I have a capped collection that was created with this Java code:

this.collection = db.createCollection("stat", new BasicDBObject("capped", true).append("size", 200000000).append("max", 1000000));

Now the stats for this collection show:

/* 0 */
{
    "ns" : "myDatabase.stat",
    "count" : 12212,
    "size" : 2146416,
    "avgObjSize" : 175,
    "storageSize" : 200003584,
    "numExtents" : 1,
    "nindexes" : 4,
    "lastExtentSize" : 200003584,
    "paddingFactor" : 1,
    "systemFlags" : 1,
    "userFlags" : 0,
    "totalIndexSize" : 2272928,
    "indexSizes" : {
        "_id_" : 1259104,
        "downloaded_1" : 335216,
        "submitted_1" : 318864,
        "retries_1" : 359744
    },
    "capped" : true,
    "max" : 1000000,
    "ok" : 1
}

And when I try to insert a document into the collection with this code:

BasicDBObject doc = new BasicDBObject().
        append("downloaded", new Date(0)).
        append("sessionId", sessionId).
        append("group", group);
collection.update(new BasicDBObject("_id", request.getUrl()), new BasicDBObject("$set", doc), true, false);

I get this error:

com.mongodb.WriteConcernException: { "serverUsed" : "/127.0.0.1:27017" , "lastOp" : { "$ts" : 0 , "$inc" : 0} , "connectionId" : 16420 , "err" : "failing update: objects in a capped ns cannot grow" , "code" : 10003 , "n" : 0 , "ok" : 1.0}
        at com.mongodb.CommandResult.getException(CommandResult.java:77)
        at com.mongodb.CommandResult.throwOnError(CommandResult.java:110)
        at com.mongodb.DBTCPConnector._checkWriteError(DBTCPConnector.java:102)
        at com.mongodb.DBTCPConnector.say(DBTCPConnector.java:142)
        at com.mongodb.DBTCPConnector.say(DBTCPConnector.java:115)
        at com.mongodb.DBApiLayer$MyCollection.update(DBApiLayer.java:327)
        at com.mongodb.DBCollection.update(DBCollection.java:178)
        at com.mongodb.DBCollection.update(DBCollection.java:209)
        at com.srg.hydra.monitoring.HydraStatistics.insert(HydraStatistics.java:63)
        at com.srg.hydra.HydraSite.onSubmit(HydraSite.java:91)
        at ru.decipher.site.AbstractSite.submit(AbstractSite.java:198)
        at com.srg.hydra.Eip.start(Eip.java:48)
        at com.srg.hydra.runner.DefaultHydraRunner.doCrawling(DefaultHydraRunner.java:180)
        at com.srg.hydra.runner.DefaultHydraRunner$1.run(DefaultHydraRunner.java:155)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

What am I doing wrong, and how can I fix it (other than dropping and recreating the collection)? Thanks for any answers!


1 Answer


You cannot update a document in a capped collection in a way that makes it grow (there are workarounds; see the sketch after the quoted docs below). I have run into this issue in the past. Here are the relevant details (and the solution) from the MongoDB docs:

Recommendations and Restrictions

  • You can only make in-place updates of documents. If the update operation causes the document to grow beyond its original size, the update operation will fail.

  • If you plan to update documents in a capped collection, create an index so that these update operations do not require a table scan.

  • If you update a document in a capped collection to a size smaller than its original size, and then a secondary resyncs from the primary, the secondary will replicate and allocate space based on the current smaller document size. If the primary then receives an update which increases the document back to its original size, the primary will accept the update but the secondary will fail with a failing update: objects in a capped ns cannot grow error message.

  • To prevent this error, create your secondary from a snapshot of one of the other up-to-date members of the replica set. Follow our tutorial on filesystem snapshots to seed your new secondary.

  • Seeding the secondary with a filesystem snapshot is the only way to guarantee the primary and secondary binary files are compatible. MMS Backup snapshots are insufficient in this situation since you need more than the content of the secondary to match the primary.

  • You cannot delete documents from a capped collection. To remove all records from a capped collection, use the ‘emptycapped’ command. To remove the collection entirely, use the drop() method.

  • You cannot shard a capped collection.

  • Capped collections created after 2.2 have an _id field and an index on the _id field by default. Capped collections created before 2.2 do not have an index on the _id field by default. If you are using capped collections with replication prior to 2.2, you should explicitly create an index on the _id field.
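
In your case the upsert's $set apparently adds downloaded, sessionId and group to an existing, smaller document, which is exactly the "cannot grow" case. Below is a rough sketch of two ways around it, written against the legacy 2.x Java driver your stack trace shows; the extra fields, helper names and the 24-hour TTL value are only illustrative assumptions, not taken from your code:

import java.util.Date;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;

public class CappedStatWorkaround {

    // Option 1: keep the capped collection, but write the complete document once,
    // with every field (including the ones your indexes suggest exist: downloaded,
    // submitted, retries) already present. Later updates may then only overwrite
    // fields with values of the same or smaller BSON size, e.g. a Date with a Date
    // or an int with an int, so the document never grows.
    static void insertFullSize(DBCollection stat, String url, String sessionId, String group) {
        BasicDBObject full = new BasicDBObject("_id", url)
                .append("sessionId", sessionId)   // strings must not be replaced by longer ones later
                .append("group", group)
                .append("downloaded", new Date(0))
                .append("submitted", new Date(0))
                .append("retries", 0);
        stat.insert(full);
    }

    static void markDownloaded(DBCollection stat, String url) {
        // Date -> Date is a same-size, in-place update, so this is allowed.
        stat.update(new BasicDBObject("_id", url),
                new BasicDBObject("$set", new BasicDBObject("downloaded", new Date())),
                false, false);   // upsert is no longer needed
    }

    // Option 2: if documents really have to grow, use a regular collection and
    // bound its size with a TTL index instead (TTL indexes are not allowed on
    // capped collections). The 24-hour expiry here is just an example value.
    static DBCollection createRegularStat(DB db) {
        DBCollection stat = db.getCollection("stat");
        stat.createIndex(new BasicDBObject("downloaded", 1),
                new BasicDBObject("expireAfterSeconds", 60 * 60 * 24));
        return stat;
    }
}

If the collection is capped only to keep its size bounded, option 2 is usually the simpler fix; option 1 only works if every field's final size is known at insert time.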
