
I would like to create a PHP script that will go to another website (given a URL) and check the page source of that page for a certain string of data.

I actually have a way of doing it right now, but I'm looking for an alternative.

Right now I'm using the file_get_contents PHP function to read the page source of the URL into a variable.

$link = "www.example.com";
$linkcontents = file_get_contents($link);

Then I use the strpos PHP function to search the page for the string I'm looking for:

$needle = "<div>find me</div>";
if (strpos($linkcontents, $needle) == false) {
    echo "String not found";
} else {
    echo "String found";
}

I have heard that cURL is good for handling things that have to do with URLs, but I'm not sure how I would use it to do what I'm doing with the file_get_contents and strpos functions combined as above.

Or if there is another way to do it, I'm all ears :-)

Charlie

3 Answers


First we construct a cURL function like this:

function Visit($irc_server){
    // Re-use the browser's user agent if available (e.g. when run from the web),
    // otherwise fall back to a generic one for CLI runs
    $user_agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : 'PHP';
    $port = '80';

    // Open the connection
    $ch = curl_init();    // initialize curl handle
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
    curl_setopt($ch, CURLOPT_URL, $irc_server);
    curl_setopt($ch, CURLOPT_FAILONERROR, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 50);
    curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
    curl_setopt($ch, CURLOPT_PORT, $port);

    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    if ($curl_errno > 0) {
        $return = ("cURL Error ($curl_errno): $curl_error\n");
    } else {
        $return = $data;
    }
    curl_close($ch);
    /*if($httpcode >= 200 && $httpcode < 300){
        $return = 'OK';
    }else{
        $return = 'Nok';
    }*/

    return $return;
}
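
For example, to answer the original question with it, a minimal usage sketch (the URL and the needle below are just placeholders) would be:

$needle = "<div>find me</div>";                  // the string we are looking for
$linkcontents = Visit("http://www.example.com"); // fetch the page source over cURL

if (strpos($linkcontents, $needle) !== false) {
    echo "String found";
} else {
    echo "String not found";
}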

Then another function to process our URL:

function tenta($url){
    // Now, create an instance of your class, define the behaviour
    // of the crawler (see class-reference for more options and details)
    // and start the crawling-process.
    $crawler = new MyCrawler();

    // URL to crawl
    $crawler->setURL($url);

    // Only receive content of files with content-type "text/html"
    $crawler->addContentTypeReceiveRule("#text/html#");

    // Ignore links to pictures, don't even request pictures
    $crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");

    // Store and send cookie-data like a browser does
    $crawler->enableCookieHandling(true);

    // Set the traffic-limit to 1 MB (in bytes,
    // for testing we don't want to "suck" the whole site)
    $crawler->setTrafficLimit(1000 * 1024);

    // That's enough, now here we go
    $crawler->go();

    // At the end, after the process is finished, we return a short
    // report (see method getProcessReport() for more information)
    $report = $crawler->getProcessReport();

    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";

    return "Summary:".$lb
         ."Links followed: ".$report->links_followed.$lb
         ."Documents received: ".$report->files_received.$lb
         ."Bytes received: ".$report->bytes_received." bytes".$lb
         ."Process runtime: ".$report->process_runtime." sec".$lb;
}

Next we construct our crawler class:

// It may take a while to crawl a site ...
set_time_limit(110000);
// Include the phpcrawl main class
include("libs/PHPCrawler.class.php");

// Extend the class and override the handleDocumentInfo()-method
class MyCrawler extends PHPCrawler
{
  function handleDocumentInfo($DocInfo)
  {
    global $find;

    // Just detect linebreak for output ("\n" in CLI-mode, otherwise "<br>").
    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";

    // Print the URL and the HTTP-status-Code
    echo "Page requested: ".$DocInfo->url." (".$DocInfo->http_status_code.")".$lb;

    // We are looking for our keywords on this domain;
    // fetch the page source once and test every keyword against it
    $source = Visit($DocInfo->url);
    foreach ($find as $matche) {
      $word = $matche['word'];
      if (preg_match("/(".preg_quote($word, "/").")/i", $source)) {
        echo "<a href=".$DocInfo->url." target=_blank>".$DocInfo->url."</a><b style='color:red;'>".$word."</b>".$lb;
      }
    }

    // Print the referring URL
    echo "Referer-page: ".$DocInfo->referer_url.$lb;

    // Print whether the content of the document was received or not
    if ($DocInfo->received == true)
      echo "Content received: ".$DocInfo->bytes_received." bytes".$lb;
    else
      echo "Content not received".$lb;

    // Further processing of the received page or file ($DocInfo->source) could go here

    echo $lb;

    flush();
  }
}

Our variables: an array of the URLs we will be crawling:

$urls = array(
  array("id"=>7, "name"=>"soltechit", "url" => "soltechit.co.uk"),
  array("id"=>5, "name"=>"CNN", "url" => "cnn.com", "description" => "A social utility that connects people, to keep up with friends, upload photos, share links")
);

And the strings we are looking for:

$find = array(
  array("word" => "routers"),
  array("word" => "Moose"),
  array("word" => "worm"),
  array("word" => "kenya"),
  array("word" => "alshabaab"),
  array("word" => "ISIS"),
  array("word" => "security"),
  array("word" => "windows 10 release"),
  array("word" => "hacked")
);

Which we call like this:

foreach ($urls as $site) {
    echo '<h2>'.$site['name'].'</h2>';
    if (isset($site['description'])) {
        echo $site['description'].'<br>';
    }
    echo tenta($site['url']).'<br>';
}

philip

If file_get_contents works just fine for the task at hand, why change anything...? I say keep using it.

Note that you'll need to pass it a URL that starts with "http://", otherwise it'll try to open a local file called "www.example.com".

Also, it's good practice to use === false with strpos; otherwise a match at position 0 will not be recognized (since 0 == false, but not 0 === false).
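
Putting both points together, the snippet from the question would become something like this (example.com is still just a placeholder):

$link = "http://www.example.com";               // note the http:// prefix
$linkcontents = file_get_contents($link);

$needle = "<div>find me</div>";
if (strpos($linkcontents, $needle) === false) { // strict comparison, so a match at position 0 still counts
    echo "String not found";
} else {
    echo "String found";
}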

Matti Virkkunen
  • Good point Matti. Thank you. The reason I want an alternative is because I feel the way I'm currently doing it might be considered too intrusive a method of doing what I want to do. – Charlie May 02 '11 at 19:46
  • @Charles: What do you mean by "intrusive"? – Matti Virkkunen May 02 '11 at 19:47
  • @Matti Virkkunen Well if I have to loop through a bunch of URLs (let's say 100) using this method, the web server or firewall may think that I'm trying to spam/attack it and I want to prevent that from happening. – Charlie May 02 '11 at 19:55
  • @Charles: In that case, if you're going to be doing a lot of requests to the same webserver at once, it might help to use cURL because it supports persistent connections. You'll still want to implement a request limiter of some sort, though, if the server in question doesn't like getting requests too frequently (see the sketch after these comments). – Matti Virkkunen May 02 '11 at 20:00
  • @Matti Virkkunen Yeah, I agree, cURL seems to be the way to go; I'm just looking for an example of how this works in the way I need it to work. – Charlie May 02 '11 at 20:02
  • @Charles: The manual comes with an example AFAIK. Just don't close the session between requests if you want it to use persistent connections. – Matti Virkkunen May 02 '11 at 20:03
  • @Matti Virkkunen When I get home I'm going to try this - http://www.binarymoon.co.uk/2010/04/curl-read-content-web-page/ – Charlie May 02 '11 at 20:15
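
To illustrate the persistent-connection suggestion from the comments above, a minimal sketch (the URL list, the delay, and the needle are placeholders, not part of the original discussion) that reuses a single cURL handle and throttles the requests could look like this:

$needle = "<div>find me</div>";
$links  = array("http://www.example.com/a", "http://www.example.com/b"); // placeholder URLs

$ch = curl_init();                              // one handle, reused, so the connection can persist
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);

foreach ($links as $link) {
    curl_setopt($ch, CURLOPT_URL, $link);
    $linkcontents = curl_exec($ch);

    if ($linkcontents !== false && strpos($linkcontents, $needle) !== false) {
        echo "String found on $link\n";
    } else {
        echo "String not found on $link\n";
    }

    sleep(1);                                   // crude request limiter: pause between requests
}

curl_close($ch);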

Something better that I guess would be of help is PHPCrawl, which goes like this:

<?php 

// It may take a while to crawl a site ... 
set_time_limit(10000); 

// Include the phpcrawl main class 
include("libs/PHPCrawler.class.php"); 

// Extend the class and override the handleDocumentInfo()-method  
class MyCrawler extends PHPCrawler  
{ 
  function handleDocumentInfo($DocInfo)  
  { 
    // Just detect linebreak for output ("\n" in CLI-mode, otherwise "<br>"). 
    if (PHP_SAPI == "cli") $lb = "\n"; 
    else $lb = "<br />"; 

    // Print the URL and the HTTP-status-Code 
    echo "Page requested: ".$DocInfo->url." (".$DocInfo->http_status_code.")".$lb; 

    // Print the referring URL 
    echo "Referer-page: ".$DocInfo->referer_url.$lb; 

    // Print whether the content of the document was received or not 
    if ($DocInfo->received == true) 
      echo "Content received: ".$DocInfo->bytes_received." bytes".$lb; 
    else 
      echo "Content not received".$lb;  

    // Now you should do something with the content of the actual 
    // received page or file ($DocInfo->source), we skip it in this example  

    echo $lb; 

    flush(); 
  }  
} 

// Now, create an instance of your class, define the behaviour 
// of the crawler (see class-reference for more options and details) 
// and start the crawling-process.  

$crawler = new MyCrawler(); 

// URL to crawl 
$crawler->setURL("www.php.net"); 

// Only receive content of files with content-type "text/html" 
$crawler->addContentTypeReceiveRule("#text/html#"); 

// Ignore links to pictures, don't even request pictures 
$crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i"); 

// Store and send cookie-data like a browser does 
$crawler->enableCookieHandling(true); 

// Set the traffic-limit to 1 MB (in bytes, 
// for testing we don't want to "suck" the whole site) 
$crawler->setTrafficLimit(1000 * 1024); 

// That's enough, now here we go 
$crawler->go(); 

// At the end, after the process is finished, we print a short 
// report (see method getProcessReport() for more information) 
$report = $crawler->getProcessReport(); 

if (PHP_SAPI == "cli") $lb = "\n"; 
else $lb = "<br />"; 

echo "Summary:".$lb; 
echo "Links followed: ".$report->links_followed.$lb; 
echo "Documents received: ".$report->files_received.$lb; 
echo "Bytes received: ".$report->bytes_received." bytes".$lb; 
echo "Process runtime: ".$report->process_runtime." sec".$lb;  
?>
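
To tie this back to the original question, the string check could go where the example skips it, inside handleDocumentInfo(), using the page content that PHPCrawl already hands over in $DocInfo->source (the needle below is just the example string from the question):

// Inside handleDocumentInfo(), after checking $DocInfo->received:
$needle = "<div>find me</div>";   // the string we are searching for
if ($DocInfo->received == true && strpos($DocInfo->source, $needle) !== false) {
    echo "String found on ".$DocInfo->url.$lb;
} else {
    echo "String not found on ".$DocInfo->url.$lb;
}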
philip
  • This qualifies as a "link-only" answer - could you include a code snippet solving the problem, even if only copying it from the page you linked to? – bardzusny May 30 '15 at 12:35