
If I call this code from a method more than once, the spider only runs the first time; every later invocation fails silently, with no error in the terminal or in the logs. Is it not possible to recrawl with the same spider twice?
It fails at the line reactor.run(): the spider never runs on the second invocation, yet nothing is logged.
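
From what I can tell, Twisted's global reactor is one-shot: once stopped, it cannot be started again in the same process. A minimal standalone sketch (plain Twisted, nothing Scrapy-specific; my own illustration, not from the project code) that reproduces the behavior:

    from twisted.internet import reactor
    from twisted.internet.error import ReactorNotRestartable

    reactor.callWhenRunning(reactor.stop)
    reactor.run()  # first call: starts the reactor, then stops it cleanly

    try:
        reactor.run()  # second call: the global reactor cannot be restarted
    except ReactorNotRestartable:
        print("reactor.run() can only be called once per process")

This is the method that does the crawling: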

# imports used by this method (added here for completeness)
from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings

def crawlSite(self):

    self.mySpider = MySpider()
    self.mySpider.setCrawlFolder(self.website)

    settings = get_project_settings()
    settings.set('DEPTH_LIMIT', self.depth)

    crawler = Crawler(settings)
    crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(self.mySpider)
    crawler.start()

    log.start(logfile="results.log", loglevel=log.ERROR, crawler=crawler, logstdout=False)  # or log.DEBUG

    reactor.run()  # the script blocks here until the spider_closed signal is sent
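
One workaround I've seen suggested (a sketch of my own, not something the original code does; the helper name _crawl_once is made up) is to run each crawl in its own child process, so every invocation gets a fresh reactor:

    from multiprocessing import Process

    def _crawl_once(website, depth):
        # All reactor-touching imports live in the child process, so the
        # parent process never starts (and so never exhausts) a reactor.
        from twisted.internet import reactor
        from scrapy import signals
        from scrapy.crawler import Crawler
        from scrapy.utils.project import get_project_settings

        spider = MySpider()
        spider.setCrawlFolder(website)

        settings = get_project_settings()
        settings.set('DEPTH_LIMIT', depth)

        crawler = Crawler(settings)
        crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
        crawler.configure()
        crawler.crawl(spider)
        crawler.start()
        reactor.run()  # fresh process, so this is always the first run

    def crawlSite(self):
        # Each call spawns a new process; join() blocks until the crawl ends.
        p = Process(target=_crawl_once, args=(self.website, self.depth))
        p.start()
        p.join()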

This is the MySpider class:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):

    name = "mysite"
    crawlFolder = ""
    crawlFolder1 = ""
    crawlFolder2 = ""
    allowed_domains = ["mysite.ca"]

    start_urls = ["http://www.mysite.ca"]

    rules = [
        Rule(SgmlLinkExtractor(allow=(r'^http://www.mysite.ca/',), unique=True),
             callback='parse_item', follow=True),
    ]

    def parse_item(self, response):

        # store the scraped data in a website item object
        item = WebsiteClass()
        item['title'] = response.selector.xpath('//title/text()').extract()
        item['body'] = response.selector.xpath('//body').extract()
        item['url'] = response.url

        ...
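
WebsiteClass isn't shown in the post; presumably it is a Scrapy Item along these lines (the field names are taken from parse_item above, the rest is assumed):

    from scrapy.item import Item, Field

    class WebsiteClass(Item):
        title = Field()
        body = Field()
        url = Field()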

Then I have a SetupClass that calls crawlSite() in CrawlerClass:

self.crawlerClass.crawlSite()
  • We need full code to see more – user1767754 Nov 28 '14 at 23:48
  • Is the close signal working? Maybe you should implement it via the dispatcher? Check [this answer](http://stackoverflow.com/questions/14777910/scrapy-crawl-from-script-always-blocks-script-execution-after-scraping/14802526#14802526)... – bosnjak Dec 05 '14 at 10:06
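
For reference, the dispatcher approach suggested in the last comment would look roughly like this (a sketch based on the pre-1.0 Scrapy API, where pydispatch ships as scrapy.xlib.pydispatch; not verified against this project):

    from twisted.internet import reactor
    from scrapy import signals
    from scrapy.xlib.pydispatch import dispatcher

    def spider_closed(spider):
        reactor.stop()

    # connect via the dispatcher instead of crawler.signals
    dispatcher.connect(spider_closed, signal=signals.spider_closed)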

0 Answers