
I want to duplicate one of my web pages to a different URL, but I want it copied AFTER all of the PHP includes have finished running on the page. Here is what I have so far, but of course it just copies the source code of the page, not the resulting page after all the PHP includes have run.

The page I want to copy has 90 different data insertions using PHP includes, so of course it loads really slowly. I want to copy the FINISHED page after all 90 includes are done loading.

This is what I tried first:

    <?php
    $source = file_get_contents('web-page.htm');
    $destination = 'new-web-page.htm';

    $handle = fopen($destination, "w");
    fwrite($handle, $source);
    fclose($handle);
    ?>

Here is an example of one of the 90 different PHP includes on the original page:

       <?php include 'data.txt';?>

OK - I solved it this way:

    <?php
    $ch = curl_init();

    curl_setopt($ch, CURLOPT_URL, "https://example.com/example.htm");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

    $result = curl_exec($ch);
    if (curl_errno($ch)) {
        echo 'Error:' . curl_error($ch);
    }
    curl_close($ch);

    $destination = 'new-page-after-all-php-and-javascript-has-run.htm';
    $data = $result;

    $handle = fopen($destination, "w");
    fwrite($handle, $data);
    fclose($handle);
    ?>
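
Note that the cURL approach works because the request goes through the web server, which executes every PHP include before responding. If `allow_url_fopen` is enabled on the server, the same idea can be expressed even more simply with `file_get_contents` on the full URL instead of the local file path (a sketch; the URL is a placeholder for the real page address):

```php
<?php
// Requesting over HTTP makes the server run all PHP includes first,
// so we receive the rendered HTML rather than the raw source file.
$html = file_get_contents('https://example.com/example.htm');

if ($html === false) {
    die('Could not fetch the page');
}

// Save the rendered output as a static copy.
file_put_contents('new-web-page.htm', $html);
?>
```

The key difference from the first attempt is only the argument: a URL is fetched through the server's PHP processor, while a local filename is read byte-for-byte from disk.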
    To do this in code you'd need something called a "headless browser" to open the page and process the scripts/styling/etc. It's basically a web browser in code with no user interface. – David Jan 11 '18 at 17:30
  • If you're talking about the javascript scripts, those won't run via `file_get_contents` or `curl`. Javascript is executed by the browser, whereas `file_get_contents` and `curl` retrieve exactly what the server sends out. – aynber Jan 11 '18 at 17:31
  • Possible duplicate of [How to get webcontent that is loaded by JavaScript using cURL?](https://stackoverflow.com/questions/20554113/how-to-get-webcontent-that-is-loaded-by-javascript-using-curl) – aynber Jan 11 '18 at 17:31
  • PHP scripts run before the data is sent to the browser/calling script. If you're just doing `file_get_contents` on the file itself, it's just looking at the local file and not sending it through the PHP processor. You'll have to use the full URL to the file: `file_get_contents('http://......./web-page.htm');` Also make sure your web server is set to process htm files as php. – aynber Jan 11 '18 at 17:40
  • Thanks @aynber - I have my server running php on htm files. Check. I am using the full URL as you suggested, but the new file is created before my PHP includes have had a chance to run – Brent Truitt Jan 11 '18 at 17:44
  • Is it possible for you to just isolate the common code from the first page, put it in a separate .php file, and include this .php file in both the old and new pages? This would be one approach. – Scott C Wilson Jan 11 '18 at 18:19
  • What do you have in `file_get_contents` now? You can also try `curl` as a possibility to get the response from the server. – aynber Jan 11 '18 at 18:48
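
As David's comment notes, anything inserted by JavaScript will still be missing from the cURL or `file_get_contents` output, because those only capture what the server sends. Capturing the page after scripts run requires a headless browser. A sketch of one common approach, invoking headless Chrome from PHP (this assumes Chrome is installed and on the PATH; the binary name varies by platform, and the URL is a placeholder):

```php
<?php
// --headless runs Chrome without a UI; --dump-dom prints the DOM
// serialized as HTML after the page's JavaScript has executed.
$url = 'https://example.com/example.htm';
$cmd = 'google-chrome --headless --disable-gpu --dump-dom '
     . escapeshellarg($url);

$html = shell_exec($cmd);

if ($html === null) {
    die('Headless browser did not return any output');
}

file_put_contents('new-web-page.htm', $html);
?>
```

This is heavier than a plain HTTP fetch, so it is only worth it if the 90 insertions involve client-side scripting; for pure PHP includes, the server-side fetch above is enough.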

0 Answers