
I am running the spider below, but it is not entering the parse method and I don't know why. Can someone please help?

My code is below

    from scrapy.item import Item, Field
    from scrapy.selector import Selector
    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector


    class MyItem(Item):
        reviewer_ranking = Field()
        print "asdadsa"


    class MySpider(BaseSpider):
        name = 'myspider'
        allowed_domains = ["amazon.com"]
        start_urls = ["http://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp"]
        print"sadasds"
        def parse(self, response):
            print"fggfggftgtr"
            sel = Selector(response)
            hxs = HtmlXPathSelector(response)
            item = MyItem()
            item["reviewer_ranking"] = hxs.select('//span[@class="a-size-small a-color-secondary"]/text()').extract()
            return item

The output I am getting is below:

    $ scrapy runspider crawler_reviewers_data.py
    asdadsa
    sadasds
    /home/raj/Documents/IIM A/Daily sales rank/Daily reviews/Reviews_scripts/Scripts_review/Reviews/Reviewer/crawler_reviewers_data.py:18: ScrapyDeprecationWarning: crawler_reviewers_data.MySpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
    class MySpider(BaseSpider):
    2014-06-24 19:21:35+0530 [scrapy] INFO: Scrapy 0.22.2 started (bot: scrapybot)
    2014-06-24 19:21:35+0530 [scrapy] INFO: Optional features available: ssl, http11
    2014-06-24 19:21:35+0530 [scrapy] INFO: Overridden settings: {}
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled item pipelines: 
    2014-06-24 19:21:35+0530 [myspider] INFO: Spider opened
    2014-06-24 19:21:35+0530 [myspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2014-06-24 19:21:35+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6027
    2014-06-24 19:21:35+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6084
    2014-06-24 19:21:36+0530 [myspider] DEBUG: Crawled (403) <GET http://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp> (referer: None) ['partial']
    2014-06-24 19:21:36+0530 [myspider] INFO: Closing spider (finished)
    2014-06-24 19:21:36+0530 [myspider] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 259,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 28487,
     'downloader/response_count': 1,
     'downloader/response_status_count/403': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 6, 24, 13, 51, 36, 631236),
     'log_count/DEBUG': 3,
     'log_count/INFO': 7,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2014, 6, 24, 13, 51, 35, 472849)}
    2014-06-24 19:21:36+0530 [myspider] INFO: Spider closed (finished)

Please help me; I am stuck at this point.

Raj

1 Answer


This is an anti-web-crawling measure used by Amazon: you are getting 403 Forbidden because Amazon requires a User-Agent header to be sent with the request.

One option would be to manually add it to the Request yielded from start_requests():

    from scrapy.http import Request
    from scrapy.spider import BaseSpider


    class MySpider(BaseSpider):
        name = 'myspider'
        allowed_domains = ["amazon.com"]

        def start_requests(self):
            # Send a browser-like User-Agent so Amazon does not respond with 403
            yield Request("https://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp",
                          headers={'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"})

        ...

Another option would be to set the DEFAULT_REQUEST_HEADERS setting project-wide.
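
For reference, a minimal sketch of what the project-wide approach could look like in the project's `settings.py`; the User-Agent string is just the example from above, and note that Scrapy also has a dedicated `USER_AGENT` setting for exactly this header:

    # settings.py
    # Project-wide user agent, applied to every request by UserAgentMiddleware
    USER_AGENT = ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
                  "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1")

    # Headers merged into every request that does not set them explicitly
    DEFAULT_REQUEST_HEADERS = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'en',
    }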

Also note that Amazon provides an API which has a Python wrapper; consider using it.

Hope that helps.

alecxe
  • Thanks a lot for your quick response. Manually adding the header is not working; I am getting the same 403 error. Can you please tell me how to set DEFAULT_REQUEST_HEADERS for a spider? – Raj Jun 24 '14 at 14:55
  • @user2019135 have you removed the `start_urls` property? Because I've tested the code before posting - it works for me. – alecxe Jun 24 '14 at 14:55
  • @user2019135 This is [how the spider looks now](https://gist.github.com/alecxe/46f95778072ce4b59e79). – alecxe Jun 24 '14 at 14:57
  • One more thing, if you don't mind: how can I pass in a file containing the list of URLs that I want to crawl? – Raj Jun 24 '14 at 15:19
  • @user2019135 sure, see for example: http://stackoverflow.com/questions/9322219/how-to-generate-the-start-urls-dynamiclly-in-crawling – alecxe Jun 24 '14 at 15:21
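
Following up on that last comment, here is a minimal sketch of reading the start URLs from a plain-text file (one URL per line), along the lines of the linked answer; the file name `urls.txt` is just an example, and the User-Agent string is the one from the answer above:

    from scrapy.http import Request
    from scrapy.spider import BaseSpider


    class MySpider(BaseSpider):
        name = 'myspider'
        allowed_domains = ["amazon.com"]

        def start_requests(self):
            # Read one URL per line from a plain-text file and request each of them,
            # still sending the browser-like User-Agent header to avoid the 403
            user_agent = ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
                          "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1")
            with open("urls.txt") as urls_file:  # hypothetical file name
                for line in urls_file:
                    url = line.strip()
                    if url:
                        yield Request(url, headers={'User-Agent': user_agent})

        def parse(self, response):
            # ... extract items as before ...
            pass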