I've written a spider in Scrapy which works fine and does exactly what it is supposed to do. The problem is that I need to make a small change to it, and I have tried several approaches without success (e.g. using an InitSpider). Here is what the script is supposed to do now:
- crawl the start URL http://www.example.de/index/search?method=simple
- then proceed to the URL http://www.example.de/index/search?filter=homepage
- start the crawling from there with the patterns defined in the rules
So basically all that needs to change is that one URL is called in between, before the rule-based crawling starts. I would rather not rewrite the whole thing as a BaseSpider, so I hope someone has an idea of how to achieve this :)
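The kind of thing I have in mind is roughly this (untested sketch; request_homepage_filter is just a name I made up, and Request is already imported in the script below):

    def start_requests(self):
        # start with the simple search page instead of letting Scrapy
        # request start_urls directly
        yield Request("http://www.example.de/index/search?method=simple",
                      callback=self.request_homepage_filter)

    def request_homepage_filter(self, response):
        # call the in-between URL and hand its response back to
        # CrawlSpider's default parse() so the rules take over from there
        yield Request("http://www.example.de/index/search?filter=homepage",
                      callback=self.parse)

I am not sure whether handing callback=self.parse back to CrawlSpider like this is the intended way to do it, which is part of why I am asking.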
If you need any additional info, please let me know. Below you can find the current script.
#!/usr/bin/python
# -*- coding: utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from example.items import ExampleItem
from scrapy.contrib.loader.processor import TakeFirst
import re
import urllib
take_first = TakeFirst()
class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.de"]
    start_url = "http://www.example.de/index/search?method=simple"
    start_urls = [start_url]

    rules = (
        # http://www.example.de/index/search?page=2
        # http://www.example.de/index/search?page=1&tab=direct
        Rule(SgmlLinkExtractor(allow=(r'\/index\/search\?page=\d*$', )), callback='parse_item', follow=True),
        Rule(SgmlLinkExtractor(allow=(r'\/index\/search\?page=\d*&tab=direct', )), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        # fetch all company entries
        companies = hxs.select("//ul[contains(@class, 'directresults')]/li[contains(@id, 'entry')]")
        items = []
        for company in companies:
            item = ExampleItem()
            item['name'] = take_first(company.select(".//span[@class='fn']/text()").extract())
            item['address'] = company.select(".//p[@class='data track']/text()").extract()
            item['website'] = take_first(company.select(".//p[@class='customurl track']/a/@href").extract())
            # we try to fetch the number directly from the page (only works for premium entries)
            item['telephone'] = take_first(company.select(".//p[@class='numericdata track']/a/text()").extract())
            if not item['telephone']:
                # if we cannot fetch the number, it has been encoded on the client side and hidden in the rel=""
                item['telephone'] = take_first(company.select(".//p[@class='numericdata track']/a/@rel").extract())
            items.append(item)
        return items
Edit
Here is my attempt with the InitSpider: https://gist.github.com/150b30eaa97e0518673a
I got the idea from here: Crawling with an authenticated session in Scrapy
As you can see, it still inherits from CrawlSpider, but I had to make some changes to the core Scrapy files (not my favourite approach): I made CrawlSpider inherit from InitSpider instead of BaseSpider (source).
This works so far, but the spider just stops after the first page instead of picking up all the other ones.
Also, this approach seems completely unnecessary to me :)
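For reference, the init-request pattern from that answer looks roughly like this (just a sketch of the pattern, not my actual code, which is in the gist):

    from scrapy.contrib.spiders.init import InitSpider
    from scrapy.http import Request

    class ExampleInitSpider(InitSpider):
        name = "example"
        allowed_domains = ["example.de"]
        start_urls = ["http://www.example.de/index/search?filter=homepage"]

        def init_request(self):
            # hit the simple search URL once before the normal crawl starts
            return Request("http://www.example.de/index/search?method=simple",
                           callback=self.after_simple_search)

        def after_simple_search(self, response):
            # hand control back to the spider so start_urls are requested
            return self.initialized()

Since InitSpider itself derives from BaseSpider, the rules are never applied, which is why I patched CrawlSpider to inherit from InitSpider instead, and that is exactly the part that feels wrong.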