If I call this code in a method more than once, it fails silently: no error is displayed in the terminal, and the crawl only runs the first time. Is it not possible to recrawl with the same spider twice?
It fails at the line reactor.run(): the spider never runs the second time it is invoked, yet there is no error in the logs either.
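Concretely, the failing pattern is just calling the method below twice in a row:

self.crawlSite()   # first call: the crawl runs and completes normally
self.crawlSite()   # second call: nothing happens, no output and no log entries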
# imports used by this method
from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings

def crawlSite(self):
    self.mySpider = MySpider()
    self.mySpider.setCrawlFolder(self.website)

    settings = get_project_settings()
    settings.set('DEPTH_LIMIT', self.depth)

    crawler = Crawler(settings)
    crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(self.mySpider)
    crawler.start()

    log.start(logfile="results.log", loglevel=log.ERROR, crawler=crawler, logstdout=False)  # or log.DEBUG
    reactor.run()  # the script blocks here until the spider_closed signal is sent
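From what I've read, the underlying issue may be that the Twisted reactor can only be started once per process. This minimal sketch (independent of Scrapy; the try/except is only there to surface the exception) shows the behaviour I suspect I'm hitting:

from twisted.internet import reactor
from twisted.internet.error import ReactorNotRestartable

reactor.callWhenRunning(reactor.stop)  # schedule an immediate stop
reactor.run()                          # first run: starts, stops, returns

try:
    reactor.run()                      # second run in the same process
except ReactorNotRestartable:
    print("the reactor refuses to start a second time")

If something in Scrapy's logging setup is swallowing that exception, it would explain why I see nothing in the terminal or in results.log.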
This is the MySpider class:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = "mysite"

    crawlFolder = ""
    crawlFolder1 = ""
    crawlFolder2 = ""

    allowed_domains = ["mysite.ca"]
    start_urls = ["http://www.mysite.ca"]

    rules = [
        Rule(SgmlLinkExtractor(allow=(r'^http://www.mysite.ca/',), unique=True),
             callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        # store the scraped data in a website item object
        item = WebsiteClass()
        item['title'] = response.selector.xpath('//title/text()').extract()
        item['body'] = response.selector.xpath('//body').extract()
        item['url'] = response.url
        ...
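WebsiteClass is a plain Scrapy item; trimmed to the fields used in parse_item, it looks like this:

from scrapy.item import Item, Field

class WebsiteClass(Item):
    # fields populated in parse_item above
    title = Field()
    body = Field()
    url = Field()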
Then I have a SetupClass that calls crawlSite() on the CrawlerClass instance:

self.crawlerClass.crawlSite()
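I've seen suggestions that each crawl should run in its own process, so that every run gets a fresh reactor. Is that the standard way around this? Something like the following (runCrawl and the CrawlerClass wiring here are my rough sketch, not tested):

from multiprocessing import Process

def runCrawl(website, depth):
    # each child process gets its own Twisted reactor,
    # so reactor.run() can be called once per process
    c = CrawlerClass()
    c.website = website
    c.depth = depth
    c.crawlSite()

p = Process(target=runCrawl, args=("http://www.mysite.ca", 3))
p.start()
p.join()  # wait for the first crawl to finish

# a second crawl should now work, since it gets a brand new process
p2 = Process(target=runCrawl, args=("http://www.mysite.ca", 3))
p2.start()
p2.join()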