I'm converting a secondary replica set member to WiredTiger. I upgraded it from MongoDB 2.6.3 to 3.0.4, changed the storage engine to WiredTiger, and it is now resyncing all data from the primary. At some point the following error occurs and the initial sync starts all over again:
2015-07-22T13:18:55.658+0000 I INDEX [rsSync] building index using bulk method
2015-07-22T13:18:55.664+0000 I INDEX [rsSync] build index done. scanned 1591 total records. 0 secs
2015-07-22T13:18:56.397+0000 E STORAGE [rsSync] WiredTiger (24) [1437571136:397083][20413:0x7f3d9ed29700], file:WiredTiger.wt, session.create: WiredTiger.turtle: fopen: Too many open files
2015-07-22T13:18:56.463+0000 E REPL [rsSync] 8 24: Too many open files
2015-07-22T13:18:56.463+0000 E REPL [rsSync] initial sync attempt failed, 9 attempts remaining
The same machine previously ran version 2.6.3 without any open-file-limit issues. I'm aware that WiredTiger creates many more files, so that is presumably the cause, but does it keep them all open simultaneously?
For reference:
cat /proc/sys/fs/file-max
10747371
In /etc/init.d/mongod the configuration is:
ulimit -n 64000
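For completeness, the limit that the running mongod process actually picks up can be verified with (assuming Linux and a single mongod process):

cat /proc/$(pidof mongod)/limits | grep 'Max open files'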
According to the documentation, MongoDB holds a file descriptor for every data file. Since WiredTiger creates a file for each collection plus a file for each index, a calculation for our use case adds up to over 700K files.
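To get a concrete number, the count can be estimated from the mongo shell against the primary. This is only a rough sketch: it assumes one WiredTiger file per collection and one per index, ignores internal files (oplog, metadata), and the host name is a placeholder:

mongo --host primary.example.com --quiet --eval '
  // rough estimate: one .wt file per collection plus one per index
  var files = 0;
  db.adminCommand("listDatabases").databases.forEach(function(d) {
    var sib = db.getSiblingDB(d.name);
    sib.getCollectionNames().forEach(function(c) {
      if (c.indexOf("system.") === 0) return;  // skip system collections
      files += 1 + sib.getCollection(c).getIndexes().length;
    });
  });
  print("estimated WiredTiger data files: " + files);
'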
So I could raise the ulimit to 700000 or higher, but I'm wondering whether that is the right solution and what alternatives exist.
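For concreteness, the change I'm considering is simply bumping the value in the init script (fs.file-max shown above is already far higher than this):

# in /etc/init.d/mongod
ulimit -n 700000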