
I'm trying to run Scrapy from a script as discussed here. It suggested using this snippet, but when I do, it hangs indefinitely. This was written back in version 0.10; is it still compatible with the current stable release?

ciferkey
  • This question and answer may be ready for update. Here is [a recent snippet from Scrapy](http://scrapy.readthedocs.org/en/0.16/topics/practices.html). It works, but the question, for me, becomes: how do you stop the Twisted reactor and move on when done? – scharfmn May 31 '13 at 17:28
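
On the reactor question in the comment above, a minimal sketch, assuming the 0.16-style API from the linked practices page: connect the spider_closed signal to a handler that stops the Twisted reactor, so the script can move on once the crawl is done. MySpider stands in for whatever spider is being run.

from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher

def stop_reactor():
    reactor.stop()  # lets the script continue past reactor.run()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)

crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(MySpider())  # MySpider: the spider you want to run
crawler.start()
log.start()
reactor.run()  # blocks here until spider_closed fires and stops the reactor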

1 Answer

from scrapy import signals, log
from scrapy.xlib.pydispatch import dispatcher
from scrapy.crawler import CrawlerProcess
from scrapy.conf import settings
from scrapy.http import Request
from scrapy.spider import BaseSpider  # needed for the MySpider class below

def handleSpiderIdle(spider):
    '''Handle spider idle event.''' # http://doc.scrapy.org/topics/signals.html#spider-idle
    print '\nSpider idle: %s. Restarting it... ' % spider.name
    for url in spider.start_urls: # reschedule start urls
        spider.crawler.engine.crawl(Request(url, dont_filter=True), spider)

mySettings = {'LOG_ENABLED': True, 'ITEM_PIPELINES': 'mybot.pipeline.validate.ValidateMyItem'} # global settings http://doc.scrapy.org/topics/settings.html

settings.overrides.update(mySettings)

crawlerProcess = CrawlerProcess(settings)
crawlerProcess.install()
crawlerProcess.configure()

class MySpider(BaseSpider):
    name = 'my_spider'  # every spider needs a name
    start_urls = ['http://site_to_scrape']

    def parse(self, response):
        # build and yield your items here ('item' is a placeholder)
        yield item

spider = MySpider() # create a spider ourselves
crawlerProcess.queue.append_spider(spider) # add it to spiders pool

dispatcher.connect(handleSpiderIdle, signals.spider_idle) # use this if you need to handle idle event (restart spider?)

log.start() # depends on LOG_ENABLED
print "Starting crawler."
crawlerProcess.start()
print "Crawler stopped."

UPDATE:

If you also need per-spider settings, see this example:

for spiderConfig in spiderConfigs:
    spiderConfig = spiderConfig.copy() # a dictionary similar to the one with global settings above
    spiderName = spiderConfig.pop('name') # the spider's name comes from the config - the same spider class can be used in several instances, each with a different name
    spiderModuleName = spiderConfig.pop('spiderClass') # the module containing the spider is also in the config
    spiderModule = __import__(spiderModuleName, {}, {}, ['']) # import that module
    SpiderClass = spiderModule.Spider # the spider class is named 'Spider'
    spider = SpiderClass(name=spiderName, **spiderConfig) # create the spider with its particular settings
    crawlerProcess.queue.append_spider(spider) # add the spider to the spider pool
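
For illustration, spiderConfigs above could simply be a list of dictionaries like the following; the names, modules, and URLs are hypothetical placeholders mirroring the settings file further below, not part of the original project:

spiderConfigs = [
    {
        'name': 'plunderhere_com',
        'spiderClass': 'scraper.spiders.plunderhere_com',
        'allowed_domains': ['plunderhere.com'],
        'start_urls': ['http://www.plunderhere.com/categories.php?'],
    },
    # ... one dictionary per spider instance
]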

An example of the settings file for a spider:

name = plunderhere_com
allowed_domains = plunderhere.com
spiderClass = scraper.spiders.plunderhere_com
start_urls = http://www.plunderhere.com/categories.php?
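
If those settings live in a plain key = value file as shown, one way to read such a file into the spiderConfig dictionary the loop expects (the parsing rules here are an assumption about the format, not something given in the answer):

def read_spider_config(path):
    '''Rough sketch: parse a "key = value" spider settings file into a dict.'''
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or '=' not in line:
                continue
            key, value = [part.strip() for part in line.split('=', 1)]
            if key in ('allowed_domains', 'start_urls'):
                value = [value]  # the spider expects these as lists
            config[key] = value
    return config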
warvariuc
  • I get [this](https://gist.github.com/1051117) traceback. My scrapy project is named scraper. Could that be the problem? – ciferkey Jun 28 '11 at 13:22
  • I think that is the issue. This is from a real project. You can remove references to scraper. You just need some settings for spiders. – warvariuc Jun 28 '11 at 15:45
  • So after I remove the references to scraper, how do I go about importing the settings for my project? – ciferkey Jun 28 '11 at 16:13
  • I made some comments. You need to make some changes to get it working: have a valid pipeline, a fully implemented MySpider class, and set all necessary settings. – warvariuc Jun 28 '11 at 17:24
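
On the question in the comments about importing the project's own settings: with the old API above, from scrapy.conf import settings reads the module named by the SCRAPY_SETTINGS_MODULE environment variable. In later Scrapy releases the usual way is get_project_settings(), which also falls back to scrapy.cfg; a hedged sketch:

from scrapy.utils.project import get_project_settings

settings = get_project_settings()  # loads the project's settings module
print settings.get('BOT_NAME')     # e.g. inspect a loaded setting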