Call your remote server as normal, but take all the functionality out of the PHP script you normally call and put it in a third script. Then, from the old script, call the new one with (on Linux):

```php
exec('php -f "{path to new script}.php" $args > /dev/null &');
```
The `&` at the end makes this a background (non-blocking) call. Because you call it from the remote server, you don't have to change anything on the calling server. `php -f` runs a PHP file, and `> /dev/null` discards the output from that file.
On Windows you can use COM and `WScript.Shell` to do the same thing:

```php
$WshShell = new \COM('WScript.Shell');
$oExec = $WshShell->Run('cmd /C php {path to new script}.php', 0, false);
```
You may want to use `escapeshellarg()` on the filename and any arguments supplied.
So it will look like this:

- Server1 calls Server2
- The script that was called (on Server2) runs `exec`, kicks off a background job (on Server2), then exits
- Server1 continues as normal
- Server2 continues the background process
So, using your example, instead of calling:

```php
file_get_contents('Website-2/update.php');
```

you will call:

```php
file_get_contents('Website-2/update_kickstart.php');
```

In `update_kickstart.php`, put this code:

```php
<?php
exec('php -f "{path}update.php" > /dev/null &');
```
This will run `update.php` as a separate background (non-blocking) call. Because it's non-blocking, `update_kickstart.php` will finish and return to Server1, which can go about its business while `update.php` runs on Server2 independently.

Simple...
One last note: `file_get_contents` is a poor choice here. I would use SSH (probably via phpseclib 2.0) to connect to Server2 and run the `exec` command directly, as a user that has access only to that file (chroot it or something similar). As it is, anyone can call that file and run it. Behind an SSH login it's protected, and with the chroot that "special" user can only run that one file.
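As a sketch of that lockdown, one option is an SSH forced command in `authorized_keys` on Server2, so the trigger user can only ever run the update job; the user name, key, and path below are placeholders, not anything from your setup:

```
# ~deploy/.ssh/authorized_keys on Server2 (hypothetical "deploy" user):
# whatever Server1 asks to run over SSH, sshd executes only this command.
command="php -f /var/www/update.php > /dev/null 2>&1 &",no-port-forwarding,no-pty ssh-ed25519 AAAA... server1-trigger
```

phpseclib on Server1 then just authenticates with that key and issues any command; the forced command decides what actually runs.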