
I have a large Node.js project that deals with the Red Hat Linux filesystem quite a bit and is managed by PM2. I'm using `fs` and `rimraf` for disk cleanup on a Red Hat AWS ephemeral drive.

When running through all of the disk cleanup operations, the files are deleted but are kept open by Node.js until the process is restarted. `df` shows the disk as full while `du` shows it as clean, which makes sense: a deleted file whose descriptor is still open keeps its blocks allocated, so `df` counts the space, but the file no longer appears in any directory for `du` to total up.
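As a sanity check, the process can inspect its own descriptor table for deleted-but-open files (a minimal sketch of my own; it assumes a Linux `/proc` filesystem, which Red Hat provides):

```js
const fs = require('fs');

// Walk this process's open descriptors; on Linux each one is a symlink
// under /proc/self/fd. A target ending in "(deleted)" is still pinning
// disk blocks even though the file is gone from the directory tree.
for (const fd of fs.readdirSync('/proc/self/fd')) {
  try {
    const target = fs.readlinkSync(`/proc/self/fd/${fd}`);
    if (target.includes('(deleted)')) {
      console.log(`fd ${fd} -> ${target}`);
    }
  } catch (e) {
    // the descriptor readdirSync itself used may already be closed; ignore
  }
}
```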

It's my understanding that when using `fs`, a call to `fs.open` must be paired with a call to `fs.close` (a sketch of that pairing follows the list below), but neither of those functions is in use here. The fs operations I'm using are:

`fs.writeFile`, `fs.readdirSync`, `fs.lstatSync`, `fs.existsSync`, `fs.mkdirSync`, `fs.chownSync`, `fs.unlink`, `fs.rmdirSync`, `fs.createReadStream`, `fs.createWriteStream`
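For reference, the `fs.open` / `fs.close` pairing I mean is the one below (a minimal sketch with a hypothetical path, not code from the project):

```js
const fs = require('fs');

fs.open('/tmp/example.txt', 'r', (err, fd) => {
  if (err) throw err;
  const buf = Buffer.alloc(1024);
  fs.read(fd, buf, 0, buf.length, 0, (err, bytesRead) => {
    if (err) throw err;
    console.log(buf.toString('utf8', 0, bytesRead));
    // Without this call the descriptor leaks for the life of the process.
    fs.close(fd, (err) => {
      if (err) throw err;
    });
  });
});
```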

`rimraf` is also being used, but having looked through its code, I don't think that's the problem.

According to the documentation, I believe all of these functions handle closing their descriptors themselves, but if somebody knows something I've missed, the help would be appreciated.

My other fear is that PM2 is somehow keeping files open. Is that possible?

pgm

1 Answer


`fs.createWriteStream` requires a manual `fs.close()`. `fs.createReadStream` also requires it, unless you set the `autoClose` option and then read the file all the way to the end or to an error.
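A minimal sketch of the cleanup I mean (hypothetical path; `end()` on the write side, `autoClose` plus `destroy()` on the read side):

```js
const fs = require('fs');

// Write side: end() flushes any buffered data and then closes the
// underlying descriptor once the stream finishes.
const out = fs.createWriteStream('/tmp/cleanup-example.log'); // hypothetical path
out.end('some data\n', () => {
  // Read side: with autoClose (the default), the descriptor is released
  // automatically once the stream reaches 'end' or emits 'error'.
  const input = fs.createReadStream('/tmp/cleanup-example.log', { autoClose: true });
  input.on('data', (chunk) => process.stdout.write(chunk));
  input.on('end', () => console.log('read complete'));

  // A read abandoned partway through never reaches 'end' or 'error',
  // so its descriptor has to be released explicitly:
  // input.destroy();
});
```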

Sergey Salnikov
  • Is `fs.close()` or `fs.destroy()` the preferred method for read streams? – pgm Dec 30 '15 at 06:20
  • Interesting. Seems that there is no documented way, but the folks who have read and discussed the source [suggest](http://stackoverflow.com/a/19277382/2063220) `stream.destroy()`. – Sergey Salnikov Dec 30 '15 at 06:51