
I'm using the following command to open a temporary ssh tunnel for making a mysql connection:

exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 > /dev/null');
$connection = @new \mysqli('127.0.0.1', $username, $password, $database, 3400);

This works splendidly. However, once in a while there may be another process using that port in which case it fails.

bind [127.0.0.1]:3400: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 3401
Could not request local forwarding.

What I'd like to do is capture the error output of exec() so that I can retry using a different port. If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.

One solution I've come up with is to pipe output to a file instead of /dev/null:

exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 >temp.log 2>&1');
$output = file_get_contents('temp.log');

This works, but it feels messy. I'd prefer not to use the filesystem just to get the error response. Is there a way to capture the error output of this command without piping it to a file?

UPDATE: For the sake of clarity:

(a) Capturing the result code using the second argument of exec() does not work in this case. Don't ask me why, but it always returns 0 (success).

(b) stdout must be redirected somewhere or php will not treat it as a background process and script execution will stop until it completes. (https://www.php.net/manual/en/function.exec.php#refsect1-function.exec-notes)

If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.

  • According to the documentation for `exec`, the **second** argument should be the output (array). The third argument should be the result code. Another reason you may be getting `0` is that you are possibly capturing the output from `sleep`, not from `ssh`. –  Oct 19 '21 at 00:33
  • Instead of using sleep, try piping to `tee`. This may look like `ssh -f -L 3400:127.0.0.1:3306 user@example.com | tee temp.log`. Also, instead of using `exec`, you could capture the command into a variable with system(): `$command = system("ssh ...")`, which returns the LAST LINE only, in your case `Could not request local forwarding.` Just to catch a failure state, that can be enough. Lastly, if you expect high load and are on a UNIX machine, use the system tools: you would be better off running a separate script task and using POSIX signals for fine control, and for ssh specifically, just use a library. – NVRM Oct 25 '21 at 15:37
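As a hedged illustration of the exec() signature mentioned in the comments, using a local command (`ls` standing in for `ssh`) so it can actually be run:

```php
<?php
// exec()'s second argument collects the output lines; the third
// receives the command's exit code. stderr is merged with 2>&1 so
// the error text is captured too.
exec('ls no-such-file 2>&1', $output, $resultCode);

echo "exit code: {$resultCode}\n"; // nonzero, since the file is missing
print_r($output);                  // contains ls's error message
```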

2 Answers


As far as I can tell, exec is not the right tool here. For a more controlled approach, you may use proc_open. That might look something like this:

$process = proc_open(
   'ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1',
   [/*stdin*/ 0 => ["pipe", "r"], /*stdout*/ 1 => ["pipe", "w"], /*stderr*/2 => ["pipe", "w"]],
   $pipes
);

// Set the streams to non-blocking
// This is required since any unread output on the pipes may result in the process still marked as running
// Note that this does not work on windows due to restrictions in the windows api (https://bugs.php.net/bug.php?id=47918)
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);

// Wait a little bit - you would probably have to loop here and check regularly
// Also note that you may need to read from stdout and stderr from time to time to allow the process to finish
sleep(2);

// The process should now be running as background task
// You can check if the process has finished like this
$status = proc_get_status($process);
if (
    !$status["running"] ||
    $status["signaled"] ||
    $status["stopped"]
) {
   // Process should have stopped - read the output
   $stdout = stream_get_contents($pipes[1]) ?: "";
   $stderr = stream_get_contents($pipes[2]) ?: "";

   // Close everything
   @fclose($pipes[1]);
   @fclose($pipes[2]);
   proc_close($process);
}

You can find more details in the manual on proc_open.

D B
  • Although this approach could work, it doesn't really answer my question and I think it's a lot more complicated than just piping output to a file. At the end of the day I agree with you though - `exec()` is not the right tool. In fact, I would go so far as to say that opening an ssh tunnel from within my PHP script is the beginning of the problem here. – But those new buttons though.. Oct 26 '21 at 02:26

If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.

You can redirect stdout to null and stderr to stdout. That seems to me the simplest way of doing what you want (minimal modification).

So instead of

>temp.log 2>&1

do:

2>&1 1>/dev/null

Note that the order of the redirects is important.
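To see why the order matters, compare both orders with a local command (a sketch; `ls` stands in for `ssh`):

```php
<?php
// Wrong order: stdout goes to /dev/null first, then 2>&1 duplicates
// stderr onto that same /dev/null, so exec() captures nothing.
$wrong = exec('ls no-such-file 1>/dev/null 2>&1');

// Right order: 2>&1 first copies stderr onto the pipe exec() reads,
// then 1>/dev/null discards only the original stdout.
$right = exec('ls no-such-file 2>&1 1>/dev/null');

var_dump($wrong); // empty string
var_dump($right); // the "No such file or directory" message
```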

Test

First we exec without redirection, then we redirect as above to capture stderr.

<?php
    $me = $argv[0];

    $out = exec("ls -la no-such-file {$me}");

    print("The output is '{$out}'\n");

    print("\n----\n");

    $out = exec("ls -la no-such-file {$me} 2>&1 1>/dev/null");

    print("The output is '{$out}'\n");

    print("\n");

$ php -q tmp.php

ls: cannot access 'no-such-file': No such file or directory
The output is '-rw-r--r-- 1 lserni users 265 Oct 25 22:48 tmp.php'

----
The output is 'ls: cannot access 'no-such-file': No such file or directory'

Update

This requirement was not clear initially: "the process must detach" (as if it went into the background). Now, the fact is, whatever redirection you apply to the original stream via exec() will prevent the process from detaching, because at the time the detachment would happen, the process has not completed and its output has not been delivered.

That is also why exec() reports a zero error code: there was no error in spawning. If you want the final result, something must wait for the process to finish. So you have to redirect locally (that way it is the local file that does the waiting), then reconnect with whatever waited for the process to finish and read the results.
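You can verify the zero exit code with a command that always fails (`false` standing in for the ssh command):

```php
<?php
// Run in the foreground: exec() reports the command's real exit code.
exec('false', $out, $foreground);

// Backgrounded with output redirected: exec() only reports that the
// shell spawned the job successfully, so the code is always 0.
exec('false > /dev/null 2>&1 &', $out, $background);

echo "foreground: {$foreground}, background: {$background}\n"; // foreground: 1, background: 0
```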

For what you want, exec will never work. You ought to use the proc_* functions.

You might, however, force a detach even so using nohup (you have no control over the spawned PID, so this is less than optimal):

if (file_exists('nohup.out')) { unlink('nohup.out'); }
$out = shell_exec('nohup ssh ... 2>&1 1>/dev/null &');
...still have to wait for connection to be established...
...read nohup.out to verify...
...
...do your thing...

As I said, this is less than optimal. Using proc_*, while undoubtedly more complicated, would allow you to start the ssh connection in tunnel mode without a terminal, and terminate it as soon as you don't need it anymore.
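A minimal sketch of that extra control (using `sleep` as a stand-in for the ssh process, not the actual tunnel command):

```php
<?php
// proc_open() hands back a process handle, so the script can check on
// the child and terminate it when the tunnel is no longer needed -
// something exec() never allows.
$process = proc_open('sleep 60', [], $pipes);

$status = proc_get_status($process);
echo $status['running'] ? "running\n" : "stopped\n"; // running

// ...use the tunnel here...

proc_terminate($process); // sends SIGTERM as soon as we're done
proc_close($process);
```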

Actually, however, no offense intended, but this is an "X-Y problem". What you want to do is open an SSH tunnel for MySQL, so I'd look into doing just that.

LSerni
  • I like your idea but it won't work. Redirecting stderr to stdout causes exec not to put the process into the background. I tested this just now and was hopeful but since it doesn't go into the background, the ssh tunnel closes as soon as exec is finished. – But those new buttons though.. Oct 26 '21 at 02:21
  • Ah, I see now. Updating answer. – LSerni Oct 26 '21 at 07:01
  • Thanks again for your input. Please note that ssh2 is also incapable of supporting a remote mysql connection since to my knowledge it is *still* incapable of specifying a local port to forward. More on that here: https://stackoverflow.com/questions/464317/connect-to-a-mysql-server-over-ssh-in-php/46184003#46184003 - As mentioned above I'm now using systemd to manage this which is much better suited for the job. – But those new buttons though.. Oct 26 '21 at 10:25