I've written a spider in Scrapy which is basically working fine and does exactly what it is supposed to do. The problem is that I need to make a small change to it, and I have tried several approaches without success (e.g. switching to an InitSpider). Here is what the script is supposed to do now:

  • crawl the start url http://www.example.de/index/search?method=simple
  • now proceed to the url http://www.example.de/index/search?filter=homepage
  • start the crawling from here with the pattern defined in the rules

So basically all that needs to change is that one extra URL is called in between. I would rather not rewrite the whole thing as a BaseSpider, so I'm hoping someone has an idea of how to achieve this :)
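Roughly, what I have in mind is something along these lines. This is only a sketch (the attributes start_page and filter_page are placeholders I made up, and I haven't managed to get a variation of this working), but it shows the idea of keeping the CrawlSpider and just chaining one extra request in front of the rule-based crawl:

class ExampleSpider(CrawlSpider):
    # ... name, allowed_domains and rules exactly as in the current script below ...

    start_page = "http://www.example.de/index/search?method=simple"
    filter_page = "http://www.example.de/index/search?filter=homepage"

    def start_requests(self):
        # request the simple-search page first instead of relying on start_urls
        return [Request(self.start_page, callback=self.visit_filter_page)]

    def visit_filter_page(self, response):
        # then move on to the filter page; without an explicit callback the
        # response goes to CrawlSpider.parse, so the link-extraction rules
        # should take over from here
        return [Request(self.filter_page)]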

If you need any additional info, please let me know. Below you can find the current script.

#!/usr/bin/python
# -*- coding: utf-8 -*-

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from example.items import ExampleItem
from scrapy.contrib.loader.processor import TakeFirst
import re
import urllib

take_first = TakeFirst()

class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.de"]

    start_url = "http://www.example.de/index/search?method=simple"
    start_urls = [start_url]

    rules = (
        # http://www.example.de/index/search?page=2
        # http://www.example.de/index/search?page=1&tab=direct
        Rule(SgmlLinkExtractor(allow=(r'\/index\/search\?page=\d*$', )), callback='parse_item', follow=True),
        Rule(SgmlLinkExtractor(allow=(r'\/index\/search\?page=\d*&tab=direct', )), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)

        # fetch all company entries
        companies = hxs.select("//ul[contains(@class, 'directresults')]/li[contains(@id, 'entry')]")
        items = []

        for company in companies:
            item = ExampleItem()
            item['name'] = take_first(company.select(".//span[@class='fn']/text()").extract())
            item['address'] = company.select(".//p[@class='data track']/text()").extract()
            item['website'] = take_first(company.select(".//p[@class='customurl track']/a/@href").extract())

            # we try to fetch the number directly from the page (only works for premium entries)
            item['telephone'] = take_first(company.select(".//p[@class='numericdata track']/a/text()").extract())

            if not item['telephone']:
                # if we cannot fetch the number, it has been encoded on the client and hidden in the rel=""
                item['telephone'] = take_first(company.select(".//p[@class='numericdata track']/a/@rel").extract())

            items.append(item)
        return items

Edit

Here is my attempt with InitSpider: https://gist.github.com/150b30eaa97e0518673a (I got the idea from here: Crawling with an authenticated session in Scrapy).

As you can see, it still inherits from CrawlSpider, but I had to make some changes to the core Scrapy files (not my favourite approach): I made CrawlSpider inherit from InitSpider instead of BaseSpider (source).
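For reference, in case the gist disappears: the basic init pattern from that answer looks roughly like this. It is only a sketch (the names ExampleInitSpider, init_page and after_init are made up by me); my actual gist keeps the CrawlSpider rules on top of it:

from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request

class ExampleInitSpider(InitSpider):
    name = "example_init"
    allowed_domains = ["example.de"]

    # the regular crawl (and, in the gist, the CrawlSpider rules) starts here
    start_urls = ["http://www.example.de/index/search?filter=homepage"]
    init_page = "http://www.example.de/index/search?method=simple"

    def init_request(self):
        # InitSpider runs this request before the regular crawl starts
        return Request(self.init_page, callback=self.after_init)

    def after_init(self, response):
        # initialized() signals that the warm-up is done, so the requests
        # for start_urls can now be scheduled
        return self.initialized()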

This works so far, but the spider just stops after the first page instead of picking up all the other ones.

Also, this approach seems to be absolutely unnecessary to me :)

1 Answer

Ok, I found the solution myself and it is actually much simpler than I initially thought :)

Here is the simplified script:

#!/usr/bin/python
# -*- coding: utf-8 -*-

from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy import log
from scrapy.selector import HtmlXPathSelector
from example.items import ExampleItem
from scrapy.contrib.loader.processor import TakeFirst
import re
import urllib

take_first = TakeFirst()

class ExampleSpider(BaseSpider):
    name = "ExampleNew"
    allowed_domains = ["www.example.de"]

    start_page = "http://www.example.de/index/search?method=simple"
    direct_page = "http://www.example.de/index/search?page=1&tab=direct"
    filter_page = "http://www.example.de/index/search?filter=homepage"

    def start_requests(self):
        """This function is called before crawling starts."""
        return [Request(url=self.start_page, callback=self.request_direct_tab)]

    def request_direct_tab(self, response):
        return [Request(url=self.direct_page, callback=self.request_filter)]

    def request_filter(self, response):
        return [Request(url=self.filter_page, callback=self.parse_item)]

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)

        # fetch the items you need and yield them like this:
        # yield item

        # fetch the next pages to scrape
        for url in hxs.select("//div[@class='limiter']/a/@href").extract():
            absolute_url = "http://www.example.de" + url
            yield Request(absolute_url, callback=self.parse_item)

As you can see, I'm now using a BaseSpider and simply generating the new Requests myself at the end of parse_item. At the beginning, start_requests walks through all the different requests that need to be made before the actual crawling can start.
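One small side note: instead of hard-coding the domain when building the absolute URL, urljoin can derive it from the response URL. This is just a minor variation, the script above works fine as it is:

from urlparse import urljoin  # Python 2; in Python 3 this lives in urllib.parse

# inside parse_item, instead of concatenating the domain by hand:
for url in hxs.select("//div[@class='limiter']/a/@href").extract():
    yield Request(urljoin(response.url, url), callback=self.parse_item)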

I hope this is helpful for someone :) If you have questions, I'll gladly answer them.
