I know the question regarding PHP web page scrapers has been asked time and time again, and through those threads I discovered SimpleHTMLDOM. After working seamlessly on my local server, I uploaded everything to my online server only to find out something wasn't working right. A quick look at the FAQ led me to this. I'm currently using a free hosting service, so I can't edit any php.ini settings. Following the FAQ's suggestion, I tried using cURL, only to find out that this too is turned off by my hosting service. Are there any other simple solutions to scrape the contents of another web page without using cURL or SimpleHTMLDOM?
4 Answers
If cURL and allow_url_fopen are not enabled, you can try to fetch the content via fsockopen — Open Internet or Unix domain socket connection.

In other words, you have to do the HTTP request manually. See the example in the manual for how to do a GET request. The returned content can then be further processed. If sockets are enabled, you can also use any third-party lib utilizing them, for instance Zend_Http_Client.
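For example, here is a minimal sketch of a manual GET request over fsockopen; the host and path are placeholders, and an HTTP/1.0 request is used so the body comes back without chunked encoding:

```php
<?php
// Minimal manual HTTP GET over fsockopen (placeholder host/path).
$host = 'www.example.com';
$path = '/';

$fp = fsockopen($host, 80, $errno, $errstr, 30);
if (!$fp) {
    die("Connection failed: $errstr ($errno)");
}

// Build the raw HTTP request by hand (HTTP/1.0 avoids chunked responses).
$request  = "GET $path HTTP/1.0\r\n";
$request .= "Host: $host\r\n";
$request .= "Connection: Close\r\n\r\n";
fwrite($fp, $request);

// Read the full response (headers + body).
$response = '';
while (!feof($fp)) {
    $response .= fgets($fp, 1024);
}
fclose($fp);

// Split off the headers; the HTML body is everything after the blank line.
list($headers, $body) = explode("\r\n\r\n", $response, 2);
echo $body;
```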
On a side note, check out Best Methods to Parse HTML for alternatives to SimpleHTMLDOM.
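For instance, once you have the HTML string (here called `$body`, as in the fsockopen sketch above), PHP's built-in DOM extension can do the parsing without SimpleHTMLDOM — a rough sketch:

```php
<?php
// Parse fetched HTML with the built-in DOM extension.
// $body is assumed to hold the HTML string fetched above.
$dom = new DOMDocument();
libxml_use_internal_errors(true);   // real-world HTML is rarely well-formed
$dom->loadHTML($body);
libxml_clear_errors();

// Example: print the href of every link on the page.
$xpath = new DOMXPath($dom);
foreach ($xpath->query('//a[@href]') as $link) {
    echo $link->getAttribute('href'), "\n";
}
```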
- +1 didn't know you could use fsockopen even if allow_url_fopen is disallowed. – NikiC Oct 20 '10 at 18:18
cURL is a specialty API. It's not the HTTP library it's often made out to be, but a generic data transfer library for FTP, SFTP, SCP, HTTP PUT, SMTP, TELNET, etc. If you just want HTTP, there is a corresponding PEAR library for that. Or check if your PHP version has the official http extension enabled. For scraping, try phpQuery or querypath. Both come with built-in HTTP support.
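As a rough sketch, assuming the PEAR HTTP_Request2 package is what you install, its socket adapter talks plain sockets, so neither cURL nor allow_url_fopen is required:

```php
<?php
// Sketch using PEAR's HTTP_Request2 with the socket adapter (no cURL).
require_once 'HTTP/Request2.php';

$request = new HTTP_Request2('http://www.example.com/', HTTP_Request2::METHOD_GET);
$request->setAdapter('socket');   // force the socket adapter instead of cURL

try {
    $response = $request->send();
    if ($response->getStatus() == 200) {
        $html = $response->getBody();
        echo strlen($html), " bytes fetched\n";
    }
} catch (HTTP_Request2_Exception $e) {
    echo 'Request failed: ', $e->getMessage(), "\n";
}
```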

- I think querypath uses DOM's loading facilities and AFAIK those depend on `allow_url_fopen`. phpQuery on the other hand uses `Zend_Http_Client`, so that might be an option. The PEAR library is a good call too. It's an implementation on top of `fsockopen`. – Gordon Oct 07 '10 at 11:01
Here's a simple way to grab images when allow_url_fopen is set to false, without studying up on esoteric tools.

Create a web page on your dev environment that loads all the images you're scraping. You can then use your browser to save the images: File -> "Save Page As".

This is handy if you need a one-time solution for downloading a bunch of images from a remote server that has allow_url_fopen set to 0. This worked for me after file_get_contents and curl failed.
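If it helps, here's a throwaway sketch of such a page, assuming you paste the remote image URLs into `$imageUrls` by hand:

```php
<?php
// Throwaway helper page: list the remote image URLs by hand, open this
// page in your browser, then use File -> "Save Page As" to grab them all.
$imageUrls = array(
    'http://www.example.com/images/photo1.jpg',
    'http://www.example.com/images/photo2.jpg',
);
?>
<!DOCTYPE html>
<html>
<body>
<?php foreach ($imageUrls as $url): ?>
    <img src="<?php echo htmlspecialchars($url); ?>" alt="">
<?php endforeach; ?>
</body>
</html>
```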
file_get_contents() is the simplest method to grab a page without installing extra libraries.
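A minimal sketch, with the caveat that remote URLs only work when allow_url_fopen is enabled (which is exactly what's disabled on the OP's host):

```php
<?php
// file_get_contents() needs allow_url_fopen for remote URLs.
if (ini_get('allow_url_fopen')) {
    $html = file_get_contents('http://www.example.com/');
    if ($html !== false) {
        echo strlen($html), " bytes fetched\n";
    }
} else {
    echo "allow_url_fopen is off; fall back to fsockopen or a socket-based client.\n";
}
```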

- That's the [same answer as Martin's above](http://stackoverflow.com/questions/3880628/how-to-scrape-websites-when-curl-and-allow-url-fopen-is-disabled/3880979#3880979). Unless your own answers do add something new, you are encouraged to upvote the original answer instead of repeating them (especially when they are not applicable for the OP's problem like in this case). – Gordon Oct 08 '10 at 17:42