
I have an array of server objects like this:

Array
(
    [0] => stdClass Object
        (
            [id] => 1
            [version] => 1
            [server_addr] => 192.168.5.210
            [server_name] => server1
        )

    [1] => stdClass Object
        (
            [id] => 2
            [server_addr] => 192.168.5.211
            [server_name] => server2
        )

)

By running the code below, I'm able to get the desired output:

foreach ($model as $server) {
        $cpu_usage = shell_exec('sudo path/to/total_cpu_usage.sh '.$server->server_addr);
        $memory_usage = shell_exec('sudo path/to/total_memory_usage.sh '.$server->server_addr);
        $disk_space = shell_exec('sudo path/to/disk_space.sh '.$server->server_addr);
        $inode_space = shell_exec('sudo path/to/inode_space.sh '.$server->server_addr);
        $network = shell_exec('sudo path/to/network.sh '.$server->server_addr);
        exec('sudo path/to/process.sh '.$server->server_addr, $processString);
        $processArray = array();
        foreach ($processString as $i) {
          $row = explode(" ", preg_replace('/\s+/', ' ', $i));
          array_push($processArray, $row);
        }
        $datetime = shell_exec('sudo path/to/datetime.sh '.$server->server_addr);
        echo $cpu_usage;
        echo $memory_usage;
        echo $disk_space;
        ......
}

My scripts all look similar to this:

#!/bin/bash
if [ "$1" == "" ]
then
        echo "To start monitor, please provide the server ip:"
        read IP
else
        IP=$1
fi

ssh root@$IP "date"

But the whole process takes about 10 seconds for 5 servers, compared to less than 2 seconds for a single server. Why is that? Is there any way to reduce the time? My guess is that the exec command waits for the output to be assigned to the variable before moving on to the next iteration. I tried to google a bit, but most of the answers are for commands that don't return any output at all... I need the output, though.

Lim SY
  • start_time=2016-12-23T17:42:50, end_time=2016-12-23T17:43:01: about 11 sec for 5 loops; the first loop ended at 42:51, so about 1+ sec for one – Lim SY Dec 23 '16 at 09:45

4 Answers


You can run your scripts simultaneously with popen() and grab the output later with fread().

//execute
foreach ($model as $server) {
    $server->handles = [
        popen('sudo path/to/total_cpu_usage.sh '.$server->server_addr, 'r'),
        popen('sudo path/to/total_memory_usage.sh '.$server->server_addr, 'r'),
        popen('sudo path/to/disk_space.sh '.$server->server_addr, 'r'),
        popen('sudo path/to/inode_space.sh '.$server->server_addr, 'r'),
        popen('sudo path/to/network.sh '.$server->server_addr, 'r'),
    ];
}

//grab and store the output, then close the handles
foreach ($model as $server) {
    $server->cpu_usage = fread($server->handles[0], 4096);
    $server->mem_usage = fread($server->handles[1], 4096);
    $server->disk_space = fread($server->handles[2], 4096);
    $server->inode_space = fread($server->handles[3], 4096);
    $server->network = fread($server->handles[4], 4096);

    foreach($server->handles as $h) pclose($h);
}

//print everything
print_r($model);

I tested similar code that executes 5 scripts, each sleeping for 2 seconds, and the whole thing took only 2.12 seconds instead of 10.49 seconds with shell_exec().
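
One caveat: fread($handle, 4096) returns at most one 4096-byte chunk, so if a script prints more than that, the tail of its output will be left unread. If that's a risk, stream_get_contents($server->handles[$i]) reads until EOF and can be dropped in as a replacement for each fread() call.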

Update 1: Big thanks to Markus AO for pointing out a potential optimization.

Update 2: Modified the code to remove the possibility of overwrite. The results are now inside $model.

This can also show which server refused the connection, in case that issue about sshd is affecting you.
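
If you want ssh connection errors to show up in the captured output, redirect stderr into stdout as well. Note the leading space before 2>&1; without it the redirection fuses with the IP address and the command breaks:

popen('sudo path/to/total_cpu_usage.sh '.$server->server_addr.' 2>&1', 'r')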

Rei
  • i realise it is still not running simultaneously though. but it is a bit faster – Lim SY Dec 27 '16 at 04:38
  • This is a neat and simple approach. You might be able to speed it up some more if you split it into two loops, the first one simply opening all the handles (e.g. as `$handles[$server]`) that make the calls run in the background, and the second one reading all the responses and closing the handles. Just so we get all requests crunching before expecting (and possibly waiting for) anything. I imagine that'll be simultaneous enough! – Markus AO Dec 27 '16 at 09:56
  • alright I will try it when I get to the office, thanks for your help – Lim SY Dec 27 '16 at 11:56
  • @LimSY Let me know how fast it goes. Use `microtime()` to measure. – Rei Dec 27 '16 at 12:02
  • @MarkusAO That is a great idea. Easy to implement, too. I integrated it in the updated code. – Rei Dec 27 '16 at 12:07
  • Looks good. @LimSY I'd recommend this approach, unless you feel like getting your hands dirty experimenting and learning with the other options mentioned in my answer. – Markus AO Dec 27 '16 at 12:11
  • it is indeed faster, but for some reason some data from different servers is missing? – Lim SY Dec 28 '16 at 02:38
  • each time I refresh the page I'm missing different data. but it is so much faster though. – Lim SY Dec 28 '16 at 02:45
  • I tried to use a dynamic variable for each loop and to sleep before closing the file. still the same. any idea? – Lim SY Dec 28 '16 at 03:23
  • Hi, btw I just put 2>&1 at the end of the command, like popen('sudo path/to/total_cpu_usage.sh '.$server->server_addr. '2>&1', 'r'), and now it's returning the error ssh_exchange_identification: Connection closed by remote host. I guess the sshd thinks I'm trying to ddos – Lim SY Dec 28 '16 at 06:41
  • I think I can relate to this issue: http://serverfault.com/questions/529812/intermittent-ssh-exchange-identification-connection-closed-by-remote-host/529813 – Lim SY Dec 28 '16 at 06:44
  • Maybe coroutines? – Matrix12 Jan 01 '17 at 18:33

All you need to do is add > /dev/null & at the end of the command on Linux. You won't get the output, though, but it will run as a background (async) process.

shell_exec('sudo path/to/datetime.sh '.$server->server_addr.' > /dev/null &');
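
Since the question does need the output, a variation on the same idea (just a sketch; the /tmp file names are made up for illustration) is to redirect each script to its own file in the background, then read the files back after the commands have finished:

//launch every script in the background, each writing its output to a file
foreach ($model as $server) {
    shell_exec('sudo path/to/datetime.sh '.$server->server_addr.' > /tmp/datetime_'.$server->id.'.txt 2>&1 &');
}

//later, once the scripts have had time to finish (this needs a wait or a poll),
//collect the results
foreach ($model as $server) {
    $server->datetime = file_get_contents('/tmp/datetime_'.$server->id.'.txt');
}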

See also this background-process script on my GitHub (it has Windows-compatible background processes):

https://github.com/ArtisticPhoenix/MISC/blob/master/BgProcess.php

Cheers!

ArtisticPhoenix

I don't know how to make your logic faster, but I can tell you how I track the running time of my scripts. At the beginning of the script put something like $start = date('c'); and at the end simply: echo ' start='.$start; echo ' end='.date('c');
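
For example, a minimal sketch of that pattern, using microtime(true) for sub-second resolution (as suggested in the comments above):

$start = microtime(true); //or date('c') for a human-readable timestamp

foreach ($model as $server) {
    //...the shell_exec() calls being measured...
}

echo 'elapsed: '.round(microtime(true) - $start, 2).' sec';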

Alexei

Yes you're correct: your PHP script is waiting for each response before moving onward.

I presume you're hoping to run the requests to all servers simultaneously, instead of waiting for each server to respond. In that case, assuming you're running a thread-safe version of PHP, look into pthreads. One option is to use cURL multi-exec for making asynchronous requests. Then there's also pcntl_fork that may help you out. Also see this & this thread for possible thread/async approaches.
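
For instance, a rough pcntl_fork() sketch (this assumes the pcntl extension is available, which generally means PHP's CLI SAPI rather than a web server; the /tmp result files are made up for illustration):

$pids = array();
foreach ($model as $server) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        //child: run one script and write the result where the parent can find it
        $out = shell_exec('sudo path/to/total_cpu_usage.sh '.$server->server_addr);
        file_put_contents('/tmp/cpu_'.$server->id.'.txt', $out);
        exit(0);
    }
    $pids[] = $pid; //parent: remember the child and keep looping
}

//wait for every child to finish, then read the result files
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}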

Aside from that, do test and benchmark the shell scripts individually to see where the bottlenecks are, and whether you can speed them up. That may be easier than thread/async setups in PHP. If you have issues with network latency, then write an aggregator shell script that executes the other scripts and returns the results in one request, and only call that in your PHP script.
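
On the PHP side, that aggregator idea could look something like this sketch (all_stats.sh is hypothetical: a script that runs the five metric scripts against $1 and prints each result followed by a marker line such as ===):

foreach ($model as $server) {
    //one shell_exec() per server instead of six
    $raw = shell_exec('sudo path/to/all_stats.sh '.$server->server_addr);
    //split on the marker line the aggregator prints between sections
    list($cpu, $mem, $disk, $inode, $net) = explode("===\n", $raw);
}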

Markus AO
  • how do I implement cURL multi-exec? – Lim SY Dec 27 '16 at 02:07
  • @LimSY you'd make SSH calls over cURL to your server(s), first opening a tunnel with PHP's `ssh_*` functions, then initiating multi-cURL and adding each request as a handle. The basics of cURL multi init/exec are here: http://php.net/manual/en/function.curl-multi-init.php and http://php.net/manual/en/function.curl-multi-exec.php ... and SSH usage example here: http://stackoverflow.com/questions/22765956/php-ssh2-tunnel-using-proxy-socks-throw-ssh-server ... this (rather complicated) approach to parallel calls will only make sense if you're connecting directly to remote servers though. – Markus AO Dec 27 '16 at 09:41
  • And to that I should add that I haven't played around with a cURL / SSH combo myself, nor does there seem to be a lot of info online. If you go down that route, prepare for a fair bit of hacking around. Another option for cURL, probably the easier way around, would be to set up a basic API that returns the required data over HTTP, and make multi-calls to that instead (see the sketch after this thread). – Markus AO Dec 27 '16 at 09:45
  • thanks for your input, I think I'm down to writing an aggregator script like you mentioned – Lim SY Dec 27 '16 at 11:34
  • Shouldn't be too hard to write a bash script that returns all the values in one shot. However, if this isn't an issue caused by network latency, you will simply transfer the lag into your bash script, unless you fork or otherwise async the requests there. – Markus AO Dec 27 '16 at 11:40
  • "if this isn't an issue caused by network latency, you will simply transfer the lag into your bash script" why is that? Wasn't it bottlenecked because the shell_exec function from PHP was waiting for the output therefore not moving forward? By doing it all on the script I will only have to run the shell_exec once. – Lim SY Dec 27 '16 at 11:47
  • You will have to find out where the lag is. I don't think `shell_exec` in itself causes a noteworthy lag, the lag is caused by `shell_exec` waiting for responses. Likewise, your aggregator shell script will have to wait for responses from your servers if you issue the calls with the same logic, ie. one by one, without threading. – Markus AO Dec 27 '16 at 12:05
  • **set up a basic API that returns the required data over HTTP, and make multi-calls to that instead** This could be faster than parallelizing with `popen()`. You should try this @LimSY. – Rei Dec 27 '16 at 12:23
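
As a rough sketch of that HTTP-API idea (the /stats endpoint is hypothetical; each server would need to expose something like it):

$mh = curl_multi_init();
$handles = array();
foreach ($model as $i => $server) {
    $ch = curl_init('http://'.$server->server_addr.'/stats'); //hypothetical endpoint
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$i] = $ch;
}

//run all requests in parallel
do {
    curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); //wait for activity instead of busy-looping
    }
} while ($running > 0);

//collect the responses and clean up
foreach ($handles as $i => $ch) {
    $results[$i] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);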