
The code fetches a page, pulls the content of a specified table out of it, inserts that into my database, and echoes it.

It is doing this very slowly; I need ideas to streamline it so it runs faster.

<?php

// set up the loop

$pagenumber = 1001;

while ($pagenumber <= 5000) {

// get the page content

$url = "http://www.example.com/info.php?num=$pagenumber";
$raw = file_get_contents($url);

$newlines = array("\t","\n","\r","&nbsp;","\0","\x0B");
$content = str_replace($newlines, '', $raw);

$start = strpos($content,'>Details<');
$end = strpos($content,'</table>',$start);
$table1 = substr($content,$start,$end-$start);
// $table1 = strip_tags($table1);

// get the first name

$start = strpos($table1,'<td');
$end = strpos($table1,'<br />',$start);
$fnames = substr($table1,$start,$end-$start);
$fnames = strip_tags($fnames);
$fnames = preg_replace('/\s\s+/', '', $fnames);

// get the surname

$start = strpos($table1,'<br />');
$end = strpos($table1,'</td>',$start);
$lnames = substr($table1,$start,$end-$start);
$lnames = strip_tags($lnames);
$lnames = preg_replace('/\s\s+/', '', $lnames);

// get the phone

$start = strpos($table1,'Phone:');
$end = strpos($table1,'</td>              </tr>              <tr>',$start);
$phone = substr($table1,$start,$end-$start);
$phone = strip_tags($phone);
$phone = str_replace("Phone:", "" ,$phone);
$phone = preg_replace('/\s\s+/', '', $phone);

// get the address

$start = strpos($table1,'Address:');
$end = strpos($table1,'</td>              </tr>              <tr>',$start);
$ad = substr($table1,$start,$end-$start);
$ad = strip_tags($ad);
$ad = str_replace("Address:", "" ,$ad);
$ad = preg_replace('/\s\s+/', '', $ad);

// get the apartment number

$start = strpos($table1,'Apt:');
$end = strpos($table1,'</td>              </tr>              <tr>',$start);
$apt = substr($table1,$start,$end-$start);
$apt = strip_tags($apt);
$apt = str_replace("Apt:", "" ,$apt);
$apt = preg_replace('/\s\s+/', '', $apt);

// get the country

$start = strpos($table1,'Country:');
$end = strpos($table1,'</td>              </tr>              <tr>',$start);
$country = substr($table1,$start,$end-$start);
$country = strip_tags($country);
$country = str_replace("Country:", "" ,$country);
$country = preg_replace('/\s\s+/', '', $country);

// get the city

$start = strpos($table1,'City:<br />                 State/Province:');
$end = strpos($table1,'</td>              </tr>              <tr>',$start);
$city = substr($table1,$start,$end-$start);
$city = strip_tags($city);
$city = str_replace("City:                 State/Province:", "" ,$city);
$city = preg_replace('/\s\s+/', '', $city);

// get the zip

$start = strpos($table1,'Zip:');
$end = strpos($table1,'</td>              </tr>              <tr>',$start);
$zip = substr($table1,$start,$end-$start);
$zip = strip_tags($zip);
$zip = str_replace("Zip:", "" ,$zip);
$zip = preg_replace('/\s\s+/', '', $zip);

// get the email

$start = strpos($table1,'email:');
$end = strpos($table1,'</td>              </tr>',$start);
$email = substr($table1,$start,$end-$start);
$email = strip_tags($email);
$email = str_replace("email:", "" ,$email);
$email = preg_replace('/\s\s+/', '', $email);

// echo the row

echo "<tr>
<td><a href='http://www.example.com/info.php?num=$pagenumber'>link</a></td>
<td>$fnames</td>
<td>$lnames</td>
<td>$phone</td>
<td>$ad</td>
<td>$apt</td>
<td>$country</td>
<td>$city</td>
<td>$zip</td>
<td>$email</td>
</tr>";

// include db info

include("inf.php");
$tablename = 'list';

$fnames = mysql_real_escape_string($fnames);
$lnames = mysql_real_escape_string($lnames);
$phone = mysql_real_escape_string($phone);
$ad = mysql_real_escape_string($ad);
$apt = mysql_real_escape_string($apt);
$country = mysql_real_escape_string($country);
$city = mysql_real_escape_string($city);
$zip = mysql_real_escape_string($zip);
$email = mysql_real_escape_string($email);

// insert the row into the db

$query = "INSERT INTO $tablename VALUES('', '$pagenumber', '$fnames', '$lnames', '$phone', '$ad', 

'$apt','$country','$city','$zip', '$email')";
mysql_query($query) or die(mysql_error()); 

// advance the loop counter

$pagenumber = $pagenumber + 1;
}

?>
Dasa
  • Could you not try to build the data into say, an array, and then save this to db in one go? rather than scrape > save > scrape, you'd do scrape > scrape > scrape > one big save. Therefore saving DB roundtrips? – dougajmcdonald Oct 10 '11 at 10:06
  • How slow is "slow"? Reading a web page and saving it to the database 4,000 times is never going to be particularly fast. – Widor Oct 10 '11 at 10:07
  • I can't get my head around that, how would I do it? – Dasa Oct 10 '11 at 10:07
  • Please consider http://codereview.stackexchange.com/ for your question. About questions suitable for this site, please see the [FAQ](http://stackoverflow.com/faq). – hakre Oct 10 '11 at 10:08
  • I think there's nothing much to optimize in your code. It is full of regex elaboration and you're taking so many webpages source code (and so a lot of http requests). I think you should change the whole approach. – Aurelio De Rosa Oct 10 '11 at 10:08
  • Anyway, I think you should learn about xpath to just make your code doing things more easily. It looks like from the pre-xpath era, checkout http://stackoverflow.com/questions/34120/html-scraping-in-php – hakre Oct 10 '11 at 10:09
  • *(related)* [Best Methods to parse HTML](http://stackoverflow.com/questions/3577641/best-methods-to-parse-html/3577662#3577662) – Gordon Oct 10 '11 at 10:09
  • I've been asking coding questions here for ages, why a different site now? What's Stack Overflow for then? – Dasa Oct 10 '11 at 10:10
  • the real bottleneck is the network latency and not your code (although the code could be improved as well) – Gordon Oct 10 '11 at 10:12
  • @Gordon true but with half a million entries every millisecond counts – Dasa Oct 10 '11 at 10:13
  • I just remembered to use the @ character – Dasa Oct 10 '11 at 10:13
  • @AurelioDeRosa what approach do you suggest? – Dasa Oct 10 '11 at 10:28
  • @dougajmcdonald how would i do it? – Dasa Oct 10 '11 at 16:25
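A rough sketch of the batching idea raised in the comments above: collect each scraped row into an array inside the loop, then send one multi-row INSERT after the loop finishes. The $rows array, the column order, and the connection set up by inf.php are assumptions here, so adapt them to the real list table.

    // assumption: inside the scraping loop each page appends one row, e.g.
    // $rows[] = array($pagenumber, $fnames, $lnames, $phone, $ad, $apt, $country, $city, $zip, $email);

    include("inf.php");      // connect once, outside the loop
    $tablename = 'list';

    $values = array();
    foreach ($rows as $row) {
        $escaped  = array_map('mysql_real_escape_string', $row);
        $values[] = "('', '" . implode("', '", $escaped) . "')";
    }

    // one query (one round trip) instead of one INSERT per page
    $query = "INSERT INTO $tablename VALUES " . implode(", ", $values);
    mysql_query($query) or die(mysql_error());

In practice you would flush in chunks of a few hundred rows so a single statement never exceeds MySQL's max_allowed_packet.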

2 Answers


Don't use regex for HTML. You should use XPath, and for PHP specifically, DOMXPath.
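For example, a minimal DOMXPath sketch for a page like the one in the question could look like this; the XPath expression is only a guess at the real markup, and the 'Details' anchor is an assumption.

    $doc = new DOMDocument();
    libxml_use_internal_errors(true);   // scraped HTML is rarely valid, silence the warnings
    $doc->loadHTML(file_get_contents("http://www.example.com/info.php?num=$pagenumber"));
    libxml_clear_errors();

    $xpath = new DOMXPath($doc);

    // illustrative query: every cell of the table that contains the "Details" label
    foreach ($xpath->query("//table[contains(., 'Details')]//td") as $cell) {
        echo trim($cell->textContent), "\n";
    }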

pguardiario

You could take a look at cURL:

http://nl2.php.net/manual/en/book.curl.php

After grabbing the page(s), you could use a single pattern to grab all the required fields. Matches can be done with preg_match_all.
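As a rough illustration of that single-pattern idea, one preg_match_all call with named groups could pull all of the labelled fields out of the stripped table in one pass; the pattern below is only a guess at the markup described in the question and would need tuning against the real HTML.

    // capture "Label: value" pairs such as "Phone: ...", "Address: ...", "Zip: ..."
    $pattern = '/(?P<label>Phone|Address|Apt|Country|Zip|email):\s*(?P<value>[^<]+)/';

    $fields = array();
    if (preg_match_all($pattern, strip_tags($table1, '<br>'), $matches, PREG_SET_ORDER)) {
        foreach ($matches as $m) {
            $fields[strtolower($m['label'])] = trim($m['value']);
        }
    }
    // $fields now holds e.g. $fields['phone'], $fields['zip'], ...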

Also, is there no XML/RSS feed available for the data you are seeking? See if you can show more results per page on your example site; this would reduce the number of pages you need to crawl.

Edit: as requested, a simple example.

Make sure you have cURL enabled on your server:

echo 'cURL is ' . (function_exists('curl_init') ? '' : 'not ') . 'enabled';

    $ch = curl_init();

    curl_setopt($ch, CURLOPT_URL, 'http://example.com');
    curl_setopt($ch, CURLOPT_REFERER, 'http://example.com');
    curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate');
    curl_setopt($ch, CURLOPT_AUTOREFERER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

    $page = curl_exec($ch);
    curl_close($ch);
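Since the comments note that the real bottleneck is issuing ~4,000 sequential HTTP requests, the curl_multi_* functions can fetch a batch of pages in parallel. A rough sketch, with the batch size and URL pattern as assumptions:

    $mh      = curl_multi_init();
    $handles = array();

    // queue a small batch of pages (the batch size of 10 is arbitrary)
    for ($num = 1001; $num <= 1010; $num++) {
        $ch = curl_init("http://www.example.com/info.php?num=$num");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
        curl_multi_add_handle($mh, $ch);
        $handles[$num] = $ch;
    }

    // run every transfer in the batch at once
    $running = null;
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);

    // collect the responses and clean up
    $pages = array();
    foreach ($handles as $num => $ch) {
        $pages[$num] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);

Each entry of $pages can then be parsed and batched into the database exactly as in the sequential version.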
Paolo_Mulder
  • I would love to use cURL but I have no idea how it works, and there are no simple tutorials, so I've never gotten the hang of it. The people who designed the site I'm scraping are just as lousy at web design as I am – Dasa Oct 10 '11 at 10:19
  • Changed to cURL, set it up to scrape into a huge 2D array, and now I need to insert it into the MySQL db. Any ideas on how to go about it? – Dasa Oct 11 '11 at 11:19