
Since nothing so far is working, I started a new project with

python scrapy-ctl.py startproject Nu

I followed the tutorial exactly, created the folders, and wrote a new spider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from Nu.items import NuItem
from urls import u

class NuSpider(CrawlSpider):
    domain_name = "wcase"
    start_urls = ['http://www.whitecase.com/aabbas/']

    names = hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+')

    u = names.pop()

    rules = (Rule(SgmlLinkExtractor(allow=(u, )), callback='parse_item'),)

    def parse(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

        hxs = HtmlXPathSelector(response)
        item = Item()
        item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)')
        return item

SPIDER = NuSpider()

and when I run

C:\Python26\Scripts\Nu>python scrapy-ctl.py crawl wcase

I get

[Nu] ERROR: Could not find spider for domain: wcase

The other spiders are at least recognized by Scrapy; this one is not. What am I doing wrong?

Thanks for your help!

Zeynel
  • Can you provide a link to the tutorial (if it's online)? It would be an interesting read :) – RYFN Nov 27 '09 at 14:36
  • Yes, here's the link to the CrawlSpider example: http://doc.scrapy.org/topics/spiders.html#crawlspider-example – Zeynel Nov 27 '09 at 17:02

5 Answers


Please also check your version of Scrapy. The latest version uses a "name" attribute instead of "domain_name" to uniquely identify a spider.
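
For example, on a newer Scrapy version the spider from the question would declare itself like this (a minimal sketch; only the attribute changes, everything else stays as posted):

from scrapy.contrib.spiders import CrawlSpider

class NuSpider(CrawlSpider):
    name = "wcase"  # newer Scrapy: replaces the old domain_name attribute
    start_urls = ['http://www.whitecase.com/aabbas/']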

Gnu Engineer

Have you included the spider in the SPIDER_MODULES list in your scrapy_settings.py?

It's not written anywhere in the tutorial that you should do this, but you do have to.
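
As a sketch, the relevant line in scrapy_settings.py looks like this, using the Nu project name from the question:

SPIDER_MODULES = ['Nu.spiders']  # tells Scrapy which modules to scan for spider classes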

user137673
  • This is included when the project is created: SPIDER_MODULES = ['Nu.spiders']. But I don't know if I need to add domain_name = 'wcase' as well? The spider is now running, but it only scans the initial URL and doesn't follow the allowed links. See my other question: http://stackoverflow.com/questions/1809817/scrapy-sgmllinkextractor-question – Zeynel Nov 27 '09 at 17:20

These two lines look like they're causing trouble:

u = names.pop()

rules = (Rule(SgmlLinkExtractor(allow=(u, )), callback='parse_item'),)
  • Only one rule will be followed each time the script is run. Consider creating a rule for each URL.
  • You haven't created a parse_item callback, which means that the rule does nothing. The only callback you've defined is parse, which changes the default behaviour of the spider.

Also, here are some things worth looking into; a corrected sketch follows the list.

  • CrawlSpider doesn't like having its default parse method overridden. Search for parse_start_url in the documentation or the docstrings; you'll see that it is the preferred way to override the default parse method for your starting URLs.
  • NuSpider.hxs is referenced before it's defined.
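
Putting those points together, a minimal sketch of a corrected spider might look like the following. The allow pattern is a guess, and NuItem replaces the bare Item on the assumption that it defines a school field; the logging and regular expressions are taken from the question:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from Nu.items import NuItem

class NuSpider(CrawlSpider):
    domain_name = "wcase"  # name = "wcase" on newer Scrapy versions
    start_urls = ['http://www.whitecase.com/aabbas/']

    # A single rule whose callback actually exists; parse() itself is left alone.
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/aabbas/',)),  # guessed pattern for the profile links
             callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        hxs = HtmlXPathSelector(response)
        item = NuItem()  # assumes NuItem defines a 'school' field
        item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re(r'(?<=(JD,\s))(.*?)(\d+)')
        return item

SPIDER = NuSpider()
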
Tim McNamara

I believe you have errors there: the names = hxs... line will not work, because the hxs object is not defined at that point.

Try running python yourproject/spiders/domain.py to surface such errors.
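
For example, from the question's project directory that would be something like this (wcase.py is a guessed filename for the spider module):

C:\Python26\Scripts\Nu>python Nu\spiders\wcase.py

Any syntax or import-time error, including the hxs reference in the class body, will surface immediately.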

R. Max

You are overriding the parse method instead of implementing a new parse_item method.

Markos Fragkakis