9

After several readings of the Scrapy docs, I'm still not grasping the difference between using CrawlSpider rules and implementing my own link extraction mechanism in the callback method.

I'm about to write a new web crawler using the latter approach, but only because I had a bad experience in a past project using rules. I'd really like to know exactly what I'm doing and why.

Anyone familiar with this tool?

Thanks for your help!

romeroqj

2 Answers

11

CrawlSpider inherits from BaseSpider. It just adds rules to extract and follow links. If these rules are not flexible enough for you, use BaseSpider:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class USpider(BaseSpider):
    """My spider."""

    start_urls = ['http://www.amazon.com/s/?url=search-alias%3Dapparel&sort=relevance-fs-browse-rank']
    allowed_domains = ['amazon.com']

    def parse(self, response):
        '''Parse main category search page and extract subcategory search link.'''
        self.log('Downloaded category search page.', log.DEBUG)
        if response.meta['depth'] > 5:
            self.log('Categories depth limit reached (recursive links?). Stopping further following.', log.WARNING)
            return

        hxs = HtmlXPathSelector(response)
        subcategories = hxs.select("//div[@id='refinements']/*[starts-with(.,'Department')]/following-sibling::ul[1]/li/a[span[@class='refinementLink']]/@href").extract()
        for subcategory in subcategories:
            subcategorySearchLink = urlparse.urljoin(response.url, subcategory)
            yield Request(subcategorySearchLink, callback=self.parseSubcategory)

    def parseSubcategory(self, response):
        '''Parse subcategory search page and extract item links.'''
        hxs = HtmlXPathSelector(response)

        for itemLink in hxs.select('//a[@class="title"]/@href').extract():
            itemLink = urlparse.urljoin(response.url, itemLink)
            self.log('Requesting item page: ' + itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

        try:
            nextPageLink = hxs.select("//a[@id='pagnNextLink']/@href").extract()[0]
        except IndexError:
            # No "Next" link on this page: the subcategory is fully parsed.
            self.log('Whole category parsed: ' + response.url, log.DEBUG)
        else:
            nextPageLink = urlparse.urljoin(response.url, nextPageLink)
            self.log('\nGoing to next search page: ' + nextPageLink + '\n', log.DEBUG)
            yield Request(nextPageLink, callback=self.parseSubcategory)

    def parseItem(self, response):
        '''Parse item page and extract product info.'''

        hxs = HtmlXPathSelector(response)
        item = UItem()  # the project's Item subclass, defined elsewhere

        # extractText is a small helper method (defined elsewhere) that
        # selects a node by XPath and returns its text content.
        item['brand'] = self.extractText("//div[@class='buying']/span[1]/a[1]", hxs)
        item['title'] = self.extractText("//span[@id='btAsinTitle']", hxs)
        ...

Even if BaseSpider's start_urls is not flexible enough for you, override the start_requests method.
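
For example, here is a minimal sketch of overriding start_requests to generate the initial requests dynamically (the spider name and search terms are hypothetical, for illustration only):

from scrapy.http import Request
from scrapy.spider import BaseSpider


class UStartRequestsSpider(BaseSpider):
    name = 'ustartrequests'  # hypothetical spider name
    allowed_domains = ['amazon.com']

    def start_requests(self):
        # Build start requests programmatically instead of listing
        # them statically in start_urls.
        for term in ['apparel', 'shoes']:  # hypothetical search terms
            url = 'http://www.amazon.com/s/?url=search-alias%3D' + term
            yield Request(url, callback=self.parse)

    def parse(self, response):
        # Parse as in the example above.
        pass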

warvariuc
  • Thanks so much! I didn't mention I'm crawling Amazon, so you gave an incredibly useful resource :D. Amazon has some URLs that contain a hash character, and Scrapy is stripping the URL from that hash sign to the end. Do you know if there's a way to modify this behavior and keep the whole URL? T.I.A., appreciate your help. – romeroqj Jul 07 '11 at 00:29
  • Where is it stripping? In request.url, in the XPath selector, or somewhere else? – warvariuc Jul 07 '11 at 06:33
  • I created a new thread for this, if you don't mind checking: http://stackoverflow.com/questions/6604690/scrapy-hash-tag-on-urls – romeroqj Jul 07 '11 at 07:01
1

If you want selective crawling, like fetching "Next" links for pagination etc., it's better to write your own crawler. But for general crawling, you should use CrawlSpider and filter out the links that you don't need to follow using Rules and a process_links function.
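
For instance, here is a minimal sketch of a CrawlSpider that follows pagination links through a Rule and filters the extracted links with process_links (the spider name, domain, and filter condition are hypothetical, for illustration only):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MyCrawlSpider(CrawlSpider):
    name = 'mycrawlspider'  # hypothetical spider name
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/']

    rules = (
        # Extract "Next"-style pagination links, run them through
        # filter_links first, parse each page with parse_page, and
        # keep following links from the parsed pages.
        Rule(SgmlLinkExtractor(allow=(r'page=\d+',)),
             callback='parse_page', follow=True,
             process_links='filter_links'),
    )

    def filter_links(self, links):
        # Drop extracted links you don't want to follow.
        return [link for link in links if 'session' not in link.url]

    def parse_page(self, response):
        # Extract your items here.
        pass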

Take a look at the CrawlSpider code in scrapy/contrib/spiders/crawl.py; it isn't too complicated.

user
  • Right on the spot! Actually, I forgot to mention I intend to follow "Next" links! Thanks for the reference. – romeroqj Jul 06 '11 at 06:44