I am using a named pipe as described in this post to run a command on the host from within a docker container.
To write to the named pipe from my node.js app (running in the container), I'm using the following:
const ws = fs.createWriteStream('/path/to/pipe/in/container');
const timeoutMs = 5000;
let finished = false;

ws.on('finish', () => {
  logger.info('finished writing command to mypipe');
  finished = true;
});

setTimeout(() => {
  if (!finished) {
    logger.warn('writing to mypipe timed out... restart pipe on host with:');
    logger.warn('# ~/execpipe &');
  }
}, timeoutMs);

ws.write(cmd);
ws.end(); // end(), rather than close(), flushes the stream and fires 'finish'
I'm trying to account for the possibility that the process reading from the pipe (PART 3 in the linked post above) has terminated, or was never started in the first place. If no reader is running, the write just blocks forever, so I've added a timeout of sorts.
What I want to avoid, though, is these writes backing up and then all being executed at once the moment the reading process is restarted.
So in my timeout handler, what I really want to do is "cancel" the write, so that it never happens even once the reading process comes back, rather than merely warning about it.
How might this be achieved?