I was thinking about a script that would scan 10+ websites for specific content inside a specific div. Let's say it would be moderately used, around 400 searches a day. Which of the two approaches in the title would handle the load better, consume fewer resources, and deliver better speed:
1. creating a DOM from each website and then iterating it to find the specific div ID, or
2. loading each website into a string with file_get_contents and then extracting the needed string with a regex?
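A rough sketch of the two approaches as I picture them (the URL and div ID are just placeholders; the regex variant is illustrated at the end of the question):

    <?php
    // Approach 1: build a DOM from the fetched page and look the div up by its ID.
    $html = file_get_contents('http://example.com/page.html'); // placeholder URL
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);  // real-world HTML is often malformed
    $doc->loadHTML($html);
    libxml_clear_errors();
    $div = $doc->getElementById('myId');
    $needed = ($div !== null) ? $div->textContent : null;

    // Approach 2: keep $html as a plain string and run a regex with a
    // capture group over it (see the illustration further down).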
To be more specific about the kind of operation I would need to execute, consider the following.
Additional question: is a regex capable of matching the following occurrence of a given string:
<div id="myId"> needed string </div>
so as to identify the tag with the given ID and return ONLY what is between the tags?
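Purely to illustrate what I mean, and not as a claim that this exact pattern is right, I picture something along these lines:

    <?php
    $html = '<div id="myId"> needed string </div>';
    // Hypothetical pattern: capture group 1 should hold " needed string ".
    if (preg_match('/<div id="myId">(.*?)<\/div>/s', $html, $matches)) {
        echo $matches[1]; // prints the inner text, surrounding spaces included
    }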
Please answer only yes/no; if it's possible, I'll open a separate question about the syntax so it's not all bundled here.