
Prologue:

I'm experimenting with a "multi-tenant" flat file database system. On application start, before the server comes up, each flat file database (which I'm calling a journal) is deserialized into a large JavaScript object in memory. From there the application starts its service.

The application's runtime behavior will be to serve requests from many different databases (one db per domain). All reads come from the in-memory object alone, while any CRUD operation both modifies the in-memory object and streams the change to the journal.

Question: If I have N of these database objects in memory, each already loaded from a flat file (averaging around 1 MB each), what kind of limitations would I be dealing with by having N write streams open?

Kevin
  • What are the writable streams used for? You said you read your data into memory so I don't understand the role of the writable streams. Do these streams have a file handle behind them or are they just your own custom stream abstraction on your data in memory? – jfriend00 Feb 18 '17 at 07:35
  • It's no different from a log file. I'm streaming changes from my in-memory database to a flat file, like this: `fs.createWriteStream('log.txt', { flags: 'a' })`. I could potentially have 1000s of these streams open. I'm not using streams for the database that has been deserialized from the journal into memory. The journal contains the entire history of the database; the in-memory object only contains the latest state. – Kevin Feb 18 '17 at 16:13
    The reason I ask is that if you're using custom streams that just go to memory, then no system resources are consumed by those other than the memory for the JavaScript object that represents the stream, and your only limitation would be memory. If there's a file handle behind the stream, then you have system limits on open file handles that vary by OS and configuration. – jfriend00 Feb 18 '17 at 22:26
  • thanks @jfriend00 I think that's the answer I was looking for: "system limits on open file handles". – Kevin Feb 19 '17 at 04:39

1 Answer


If you are using streams that have an open file handle behind them, then your limit on how many you can have open will likely be governed by the process limit on open file handles, which varies by OS and (in some cases) by how you have that OS configured. Each open stream also consumes some memory, both for the stream object and for the read/write buffers associated with the stream.

If you are using some sort of custom stream that just reads/writes to memory, not to files, then there would be no file handle involved and you would just be limited by the memory consumed by the stream objects and their buffers. You could likely have thousands of these with no issues.

Some reference posts:

Node.js and open files limit in linux

How do I change the number of open files limit in Linux?

Check the open FD limit for a given process in Linux

jfriend00