21

Known information: It is known that MongoDB stores data in BSON (Binary JSON) and that the maximum BSON document size is 16MB.

Question: Why 16MB in particular, and not 32MB, 64MB, or even more? Where exactly is the 16MB limit enforced, and what are the reasons for settling on exactly 16MB?

It is mentioned that the limit keeps a single document from consuming an excessive amount of bandwidth during transmission or an excessive amount of RAM on the server. But what if we can afford the network bandwidth and the RAM consumption? Even then, we are left with no option other than GridFS. Why?
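
For context on why GridFS is the usual way around the limit: it splits a file into many small chunks, each stored as an ordinary BSON document far below 16MB, plus one metadata document. A minimal shell sketch, assuming a file has already been stored through a driver or the mongofiles tool into the default fs bucket:

db.getCollection("fs.files").findOne()          // one metadata document per stored file
db.getCollection("fs.chunks").find().limit(1)   // the file's contents, split into small chunks, each far below 16MB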

It may sound like a silly question, but could anyone please shed some light on this?


Update: The limit was originally 4MB and is now 16MB (mongodb BSON size).

We can check it in the mongo shell by issuing the following command:

db.isMaster().maxBsonObjectSize/(1024*1024)
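
As a rough sketch of the limit in action (legacy shell syntax; the exact error message varies by server version, and the collection name test is made up for illustration), a document just over 16MB is rejected on insert:

var big = new Array(17 * 1024 * 1024).join("x")   // builds a string of roughly 17MB
db.test.insert({ payload: big })                  // rejected: the resulting BSON document exceeds maxBsonObjectSize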

Why is it not configurable by DBAs?

Amol M Kulkarni
  • What makes 32 MB so much more logical than 16 MB? – Matt Ball Mar 09 '13 at 06:02
  • @MattBall Updated my question. But I hope you can understand what I am actually asking and can give a hint, or an answer itself. – Amol M Kulkarni Mar 09 '13 at 06:05
  • @PrincessOftheUniverse: If you know why it was 4MB and is now 16MB, please answer. – Amol M Kulkarni Mar 09 '13 at 08:45
  • Here is the ticket that caused the increase from 4MB: [SERVER-431](https://jira.mongodb.org/browse/SERVER-431) – theon Mar 09 '13 at 09:29
  • It's 16MB because the developers decided to go from 4MB to 16MB. Why is the world not blue? – Mar 09 '13 at 14:57
  • Regarding your GridFS statement: GridFS is not your only option for storing large objects. You can easily store a URI to, say, cloud storage such as a blob in Windows Azure, or other cloud-scale storage systems. Even staying within the bounds of MongoDB, you can devise a schema where you use referenced documents in a separate collection instead of sub-documents (a good choice with an unbounded sub-document count, such as restaurant reviews); see the sketch after these comments. – David Makogon Mar 10 '13 at 01:58
  • MongoDB being schema-less makes it ideal for dumping (JSON) data, but the 16MB limit kills that concept as a whole, which is quite silly, and GridFS has no CRUD operations on file contents. Can't we just raise the 16MB limit? How people scale with MongoDB worries me; if you have to split the dump yourself, you might as well go for a traditional RDBMS. If GridFS had CRUD I would be good to go; please make GridFS CRUD happen. – Rizwan Patel Jul 18 '16 at 11:44
  • We need to store raw EEG data and it ranges from around 100KB to 22MB :-/ The limitation is very annoying. – Oliver Dixon Dec 16 '20 at 15:57
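
Following up on the referencing approach mentioned in the comments above: instead of embedding an unbounded array of sub-documents, each sub-item can live in its own collection and point back to its parent, so no single document grows toward the 16MB limit. A minimal shell sketch (the collection and field names restaurants, reviews, and restaurantId are made up for illustration):

db.restaurants.insert({ _id: 1, name: "Cafe Example" })                   // parent document stays small
db.reviews.insert({ restaurantId: 1, rating: 5, text: "Great coffee" })   // each review is its own document
db.reviews.insert({ restaurantId: 1, rating: 3, text: "Slow service" })
db.reviews.find({ restaurantId: 1 })                                      // fetch all reviews via the reference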

1 Answer

21

Check out the thread on the JIRA ticket that increased the value from 4MB to 16MB. There is a sizeable debate on the ticket: https://jira.mongodb.org/browse/SERVER-431

The choice of 16MB rather than, say, 32MB seems to have been fairly arbitrary. The limit was increased because many people needed to store documents larger than 4MB (and, I presume, smaller than 16MB). Some people in that thread asked for the limit to be made configurable (as you did), which makes sense to me; I'm not sure why they haven't decided to do this.

theon