
Recently, I've been doing the following:

ssh uid@somemachine -- someRelativelyLongCommand

This works fine; however:

  1. The number of ssh sessions per machine is limited (10 by default; I'm not sure what the upper bound is)
  2. It keeps the sessions open, meaning that if something happens to the host machine, the session dies, and the command dies with it

It would be interesting to know of an alternative that would, for example, spawn the command on the remote machine and hand back its PID, so that I could periodically check that process's state and, once it exits, mark it as finished.

It feels like there should be a bash way to achieve this without resorting to, e.g., some kind of distributed scheduler, which is most likely overkill for the use case considered (things work fine; they just seem sub-optimal).
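For concreteness, something along these lines is roughly what I'm picturing (just a sketch; it assumes `nohup` is available on the remote host, and note that `kill -0` only tests that the PID still exists, so it cannot report an exit status):

    # start the command detached on the remote host and capture its PID
    # (redirecting stdout/stderr lets ssh return immediately)
    pid=$(ssh uid@somemachine 'nohup someRelativelyLongCommand >/dev/null 2>&1 & echo $!')

    # later, from the local machine: kill -0 tests for process existence
    if ssh uid@somemachine "kill -0 $pid" 2>/dev/null; then
        echo "still running (pid $pid)"
    else
        echo "finished"
    fi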

Thanks!

sdgaw erzswer
  • Would running the command via `timeout` be an option? See _man timeout_. – user1934428 Jul 19 '23 at 13:48
  • that link (duplicate) has several suggestions; the accepted answer doesn't (directly) address your question, but several of the other answers should suffice (search the page for `ssh`), e.g., `ssh uid@somemachine -- someRelativelyLongCommand >/dev/null 2>&1 &`; this should address the issue of the command continuing to run regardless of what happens on the local host; as for the 'status' of the running job ... occasional `ssh/what-is-the-status` calls (grep ps or a log file?) would be 'easy' (see the sketch after these comments), while more involved solutions would look at server-to-server comms (e.g., email, sockets, etc.) – markp-fuso Jul 19 '23 at 14:04
  • 1
    Thanks! Wasn't aware of that question. – sdgaw erzswer Jul 19 '23 at 14:16
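
A minimal sketch of the backgrounded-ssh idea from the comment above, using a marker file for the status check (`remote.log` and `cmd.done` are illustrative names; both files land in the remote home directory):

    # fire and forget: a small wrapper writes the exit code to a marker file on completion
    ssh uid@somemachine "nohup sh -c 'someRelativelyLongCommand; echo \$? > cmd.done' >remote.log 2>&1 &"

    # status poll: the marker file exists only once the command has exited
    if ssh uid@somemachine 'test -e cmd.done'; then
        echo "finished, exit code: $(ssh uid@somemachine 'cat cmd.done')"
    else
        echo "still running"
    fi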

0 Answers