Looks like you are hitting the `ulimit` for your user. This is likely a function of some or all of the following:
- Your user having the default `ulimit` (probably 256 or 1024, depending on the OS)
- The size of your DB: MongoDB's use of memory-mapped files can result in a large number of open files during the restore process
- The way in which you are running `mongorestore`, which can increase the concurrency and thereby the number of file handles that are open at once
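If you want to confirm that open file descriptors are the problem, you can count them per process. This is a sketch that assumes a Linux-style `/proc` filesystem, and `lsof` for the name-based variant:

```shell
# Count open file descriptors for a process by PID
# (here the current shell's PID, $$, as a stand-in;
# substitute the PID of your mongorestore process)
ls /proc/$$/fd | wc -l

# By process name, if lsof is installed (commented out since
# mongorestore may not be running on this machine):
# lsof -c mongorestore | wc -l
```

Watching this count climb toward your `ulimit` during a restore is a good sign you have found the culprit.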
You can address the number of open files allowed for your user by invoking `ulimit -n <some number>` to increase the limit for your current shell. The number you choose cannot exceed the hard limit configured on your host. You can also change the `ulimit` permanently, more details here. Raising the limit is the root-cause fix, but it is possible that your ability to change the `ulimit` is constrained by AWS, so you might want to look at reducing the concurrency of your `mongorestore` process by tweaking the following settings:

```
--numParallelCollections int
    Default: 4
    Number of collections mongorestore should restore in parallel.

--numInsertionWorkersPerCollection int
    Default: 1
    Specifies the number of insertion workers to run concurrently per collection.
```
If you have chosen values for these other than 1, you could reduce the concurrency (and hence the number of concurrently open file handles) by setting them as follows:

```
--numParallelCollections=1 --numInsertionWorkersPerCollection=1
```
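Putting those flags together, a reduced-concurrency invocation might look like this (host, port, and dump path are placeholders for your own values):

```shell
# Restore one collection at a time with a single insertion worker,
# minimizing the number of concurrently open file handles
mongorestore --host localhost --port 27017 \
  --numParallelCollections=1 \
  --numInsertionWorkersPerCollection=1 \
  /path/to/dump
```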
Naturally, this will increase the run time of the restore process, but it might allow you to sneak under the currently configured `ulimit`. Just to reiterate, though: the root-cause fix is to increase the `ulimit`.
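As a quick sketch of that shell-level fix (the value 4096 is arbitrary; pick anything at or below your hard limit):

```shell
# Show the current soft and hard limits on open file descriptors
ulimit -Sn
ulimit -Hn

# Raise the soft limit for this shell session only;
# this cannot exceed the hard limit printed above
ulimit -n 4096
```

Changes made with `ulimit -n` last only for the current shell; a permanent change usually involves `/etc/security/limits.conf` or the relevant systemd unit, depending on your distribution.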