I have a really simple Python script on ScraperWiki:
import scraperwiki
import lxml.html
# Fetch the page and print the raw HTML
html = scraperwiki.scrape("http://www.westphillytools.org/toolsListing.php")
print html
I haven't written anything to parse it yet; for now I just want the HTML.
When I run it in edit mode, it works perfectly.
When a scheduled scrape runs (or when I run it manually), the output omits dozens (or even hundreds) of lines.
It's a very small webpage, so data overload shouldn't be a problem. Any ideas?
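
To narrow down whether the page itself comes back truncated or only the printed output is cut off, I could log the length of the scraped HTML to the datastore, something like this (a rough sketch assuming scraperwiki.sqlite.save from the classic library; run_time and html_length are just column names I made up):

import datetime

import scraperwiki

# Fetch the page and record how much HTML actually came back,
# so a scheduled run can be compared against an edit-mode run.
html = scraperwiki.scrape("http://www.westphillytools.org/toolsListing.php")

scraperwiki.sqlite.save(
    unique_keys=["run_time"],
    data={"run_time": datetime.datetime.now().isoformat(),
          "html_length": len(html)},
)

If html_length is the same for both kinds of run, the scrape itself is fine and only the console output is being truncated.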