
I feed my website with info taken from a table on another website. I used to fetch the needed info with:

$html = file_get_contents('http://www.example.ex');

and then work with it through regular expressions.

Unfortunately, the other website has changed, and now the source code is not an HTML table anymore.

But if I inspect the element containing the info (Chrome DevTools), it turns out to be a table, and I can copy the element's outer HTML and paste it into my files.

Is there a more "professional" way to capture that info (the outer HTML of an element, or of the whole page) than copy-paste? Thanks to everyone.
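For what it's worth, one DOM-based alternative to regular expressions is PHP's built-in `DOMDocument`/`DOMXPath`. The sketch below is a minimal example, not your exact case: the URL and XPath query are placeholders, and it only works if the table actually appears in the HTML that `file_get_contents()` returns. If the page now builds the table with JavaScript, the raw source will not contain it, and you would need to find the underlying data request instead.

```php
<?php
// Sketch: extract the outer HTML of an element with DOMDocument/DOMXPath
// instead of regular expressions.

function outerHtmlOf(string $html, string $xpathQuery): ?string
{
    $doc = new DOMDocument();
    // Real-world HTML is rarely valid; suppress libxml parse warnings.
    libxml_use_internal_errors(true);
    $doc->loadHTML($html);
    libxml_clear_errors();

    $xpath = new DOMXPath($doc);
    $node  = $xpath->query($xpathQuery)->item(0);

    // saveHTML($node) serializes the node itself, i.e. its outer HTML.
    return $node !== null ? $doc->saveHTML($node) : null;
}

// With a fetched page (hypothetical URL, as in the question):
// $html  = file_get_contents('http://www.example.ex');
// $table = outerHtmlOf($html, '//table[1]');

// Self-contained demo with a static snippet:
$demo = '<html><body><table id="data"><tr><td>42</td></tr></table></body></html>';
echo outerHtmlOf($demo, '//table[@id="data"]');
```

This keeps working even when attributes or whitespace in the source change, which tends to break regex-based extraction.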


1 Answer


Maybe this post is useful to you : Stackoverflow Post

If that doesn't work, someone over there suggests a PHP web-scraping framework called Goutte, which could be more useful to you if the website changes again.
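For reference, a minimal Goutte sketch (the URL and selector are placeholders; it requires `composer require fabpot/goutte`, and note that Goutte does not execute JavaScript either, so the table must be present in the HTML the server actually sends):

```php
<?php
require 'vendor/autoload.php';

use Goutte\Client;

$client  = new Client();
$crawler = $client->request('GET', 'http://www.example.ex');

// The outer HTML of the first table, via the underlying Symfony
// DomCrawler (outerHtml() is available in DomCrawler 4.4+).
$tableHtml = $crawler->filter('table')->outerHtml();

// Or walk the rows and cells directly instead of copying markup:
$rows = $crawler->filter('table tr')->each(function ($tr) {
    return $tr->filter('td')->each(function ($td) {
        return trim($td->text());
    });
});
```

Using CSS selectors like this is usually more robust than regexes when the site's markup shifts.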
