
Somehow I can't find a sufficient answer to my problem, only partial hackarounds.

I'm calling a single "chained" shell command (from a Node app) that starts a long-running update process; its stdout and stderr should be handed over, as arguments, to the second part of the shell command (another Node app that logs into a DB).

I'd like to do something like this:

updateCommand 2>$err 1>$out ; logDBCommand --log arg err out
  • Can't use > as it is only for files or file descriptors.
  • Also, if I use shell variables (like error=$( { updateCommand | sed 's/Output/tmp/'; } 2>&1 ); logDBCommand --log arg \"${error}.\"), I can only get stdout, or both streams mixed together into one argument (see the sketch after this list).
  • And I don't want to pipe, as the second command (logCommand) should run whether the first one succeeded or failed.
  • And I don't want to cache to a file, because honestly that misses the point and introduces another asynchronous error vector.
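
The closest file-less version I can sketch (with updateCommand standing in for the real updater) does run the logger either way, but both streams arrive glued together as a single argument:

    both=$(updateCommand 2>&1)       # stdout and stderr end up interleaved in one variable
    logDBCommand --log arg "$both"   # still runs after a failure, but the two streams can no longer be told apart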
Tha Brad
    How would caching to file cause an asynchronous error? – 123 Jun 12 '17 at 16:59
  • With a pipe, both sides run simultaneously, though either side might exit on SIGPIPE if one end closes; your `logCommand` could handle that gracefully, though it would still be tough to disentangle stdout and stderr with a pipe – Eric Renouf Jun 12 '17 at 17:00
  • @123, obviously ending one program with a file write and picking it up later is asynchronous, and having an extra disk write misses the point (and could, not saying it will, cause a problem in the worst case) – Tha Brad Jun 12 '17 at 18:52
  • @EricRenouf, and that's exactly what I'd like to avoid, because (I should clear up that it's embedded hardware) it is a typical "one calls the other with arguments" setup, and having both run in parallel (with nothing to gain from it) is an anti-pattern and a waste of resources (especially as the update command could take from a few hours up to days, worst case). But how would you write that? Maybe that brings the solutions a little closer. – Tha Brad Jun 12 '17 at 18:53
  • If `logCommand` is a bash script you could add something like `trap "done_flag=1" SIGPIPE` then within the rest of the script, just see if `done_flag` is set, so you know it's time to exit. – Eric Renouf Jun 12 '17 at 19:22
  • @ThaBrad That is synchronous, see https://stackoverflow.com/questions/748175/asynchronous-vs-synchronous-execution-what-does-it-really-mean. Also you haven't explained how you think this will introduce errors. – 123 Jun 12 '17 at 19:34
  • @123, anyway, leave the synchronous/asynchronous question aside; it introduces another error vector with a disk read/write – Tha Brad Jun 13 '17 at 11:43
  • @ThaBrad how does it? – 123 Jun 13 '17 at 11:53
  • @EricRenouf, nope, a Nodejs app – Tha Brad Jun 13 '17 at 14:52
  • I'm not familiar with node.js, but perhaps it offers similar signal handling abilities – Eric Renouf Jun 13 '17 at 14:53

1 Answer


After a little chat in #!/bin/bash someone suggested just making use of tmpfs (a file system held in RAM), which is the second most elegant (but the only possible) way to do this. So I can make use of the > operator and have stdout and stderr in separate variables in memory.

command1 >/dev/shm/c1stdout 2>/dev/shm/c1stderr 
A=$(cat /dev/shm/c1stdout) 
B=$(cat /dev/shm/c1stderr) 
command2 $A $B

(or shorter):

A=$(command1 2>/dev/shm/c1stderr ) 
B=$(cat /dev/shm/c1stderr) 
command2 $A $B
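
If the fixed file names are a concern (two runs at the same time would overwrite each other's files), a small variation of the above (just a sketch, assuming GNU mktemp and a tmpfs mounted at /dev/shm) generates the names and quotes the expansions so each stream stays a single argument:

    out_file=$(mktemp -p /dev/shm) || exit 1
    err_file=$(mktemp -p /dev/shm) || exit 1

    command1 >"$out_file" 2>"$err_file"   # no pipe, so command2 below runs either way

    A=$(<"$out_file")                     # bash shorthand for $(cat "$out_file")
    B=$(<"$err_file")
    rm -f "$out_file" "$err_file"         # from here on the data lives only in the variables

    command2 "$A" "$B"                    # quoted, otherwise whitespace splits each stream into many arguments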
Tha Brad