
I have this spider that scrapes amazon for information.

The spider reads a .txt file that lists the products it must search for, then opens the Amazon search page for each one, for example:

https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=laptop

I change the field-keywords=laptop part of the URL to control which product is searched for.

The issue I'm having is that the spider simply doesn't work anymore, which is strange because a week ago it did its job just fine.

Also, no errors appear in the console: the spider starts, "crawls" the keyword URL, and then just stops.

Here is the full spider:

import scrapy
import re
import string
import random
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from genericScraper.items import GenericItem
from scrapy.exceptions import CloseSpider
from scrapy.http import Request

class GenericScraperSpider(CrawlSpider):

    name = "generic_spider"

    # Allowed domain (Scrapy expects the plural attribute name "allowed_domains")
    allowed_domains = ['www.amazon.com']

    search_url = 'https://www.amazon.com/s?field-keywords={}'

    custom_settings = {
        'FEED_FORMAT': 'csv',
        'FEED_URI': 'datosGenericos.csv'
    }

    # Rules must be a tuple or list, not a set
    rules = (
        # Gets all the product links on page 1 of the keyword search
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//*[contains(@class, "s-access-detail-page")]')),
             callback='parse_item', follow=False),
    )


    def start_requests(self):

        # One keyword per line; strip the trailing newline so it does not
        # end up embedded in the request URL
        with open('productosGenericosABuscar.txt', 'r') as txtfile:
            keywords = txtfile.readlines()

        for keyword in keywords:
            yield Request(self.search_url.format(keyword.strip()))



    def parse_item(self, response):

        genericAmz_item = GenericItem()

        # Product info
        categoria = response.xpath('normalize-space(//span[contains(@class, "a-list-item")]//a/text())').extract_first()

        genericAmz_item['nombreProducto'] = response.xpath('normalize-space(//span[contains(@id, "productTitle")]/text())').extract()
        genericAmz_item['precioProducto'] = response.xpath('//span[contains(@id, "priceblock")]/text()').extract()
        genericAmz_item['opinionesProducto'] = response.xpath('//div[contains(@id, "averageCustomerReviews_feature_div")]//i//span[contains(@class, "a-icon-alt")]/text()').extract()
        genericAmz_item['urlProducto'] = response.request.url
        genericAmz_item['categoriaProducto'] = re.sub('Back to search results for |"', '', categoria)

        yield genericAmz_item

Other spiders I made with a similar structure work fine; any idea what's going on?

Here's what I get in the console:

2019-01-31 22:49:26 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: genericScraper)
2019-01-31 22:49:26 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.7.0,                     Python 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2p  14 Aug 2018), cryptography 2.3.1, Platform Windows-10-10.0.17134-SP0
2019-01-31 22:49:26 [scrapy.crawler] INFO: Overridden settings:         {'AUTOTHROTTLE_ENABLED': True, 'BOT_NAME': 'genericScraper', 'DOWNLOAD_DELAY':     3, 'FEED_FORMAT': 'csv', 'FEED_URI': 'datosGenericos.csv', 'NEWSPIDER_MODULE':     'genericScraper.spiders', 'SPIDER_MODULES': ['genericScraper.spiders'],     'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36     (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36'}
2019-01-31 22:49:26 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.throttle.AutoThrottle']
2019-01-31 22:49:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-01-31 22:49:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-31 22:49:26 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-01-31 22:49:26 [scrapy.core.engine] INFO: Spider opened
2019-01-31 22:49:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-31 22:49:26 [scrapy.extensions.telnet] DEBUG: Telnet console listening on xxx.x.x.x:xxxx
2019-01-31 22:49:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/s?field-keywords=Laptop> (referer: None)
2019-01-31 22:49:27 [scrapy.core.engine] INFO: Closing spider (finished)
2019-01-31 22:49:27 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 315,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 2525,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 2, 1, 1, 49, 27, 375619),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 2, 1, 1, 49, 26, 478037)}
2019-01-31 22:49:27 [scrapy.core.engine] INFO: Spider closed (finished)

1 Answer

Interesting! It's possibly because the website isn't returning any data. Have you tried debugging with the Scrapy shell? If not, check whether response.body contains the data you want to crawl:

def parse_item(self, response):
    from scrapy.shell import inspect_response
    inspect_response(response, self)

For more details, please read the detailed info on the Scrapy shell.
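
You can also open the shell directly against the search URL and poke at the response. A minimal sketch (the XPath is the one from your spider's link extractor; zero matches would suggest the page layout changed or Amazon served a block/captcha page):

$ scrapy shell 'https://www.amazon.com/s?field-keywords=laptop'
>>> response.status     # can be 200 even for a captcha page, so check the body too
>>> len(response.body)  # a suspiciously small body often means a block page
>>> response.xpath('//*[contains(@class, "s-access-detail-page")]')  # should match product links

Note that your log shows downloader/response_bytes of only 2525, which is tiny for an Amazon results page; that alone suggests the site didn't return the normal listing.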


After debugging, if you still don't get the intended data, that means there is something more on the site obstructing the crawling process. That could be a dynamic script or a cookie/local-storage/session dependency.

For dynamic/JS content, you can use Selenium or Splash:
selenium-with-scrapy-for-dynamic-page
handling-javascript-in-scrapy-with-splash
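
For example, with the scrapy-splash plugin. This is a minimal sketch, assuming you have installed scrapy-splash, added its middlewares to your settings, and have a Splash instance running at the default localhost:8050:

from scrapy_splash import SplashRequest

# inside the spider class
def start_requests(self):
    # Render the page through Splash so JavaScript-inserted content is present
    yield SplashRequest(
        'https://www.amazon.com/s?field-keywords=laptop',
        callback=self.parse_item,
        args={'wait': 2},  # give scripts ~2 seconds to run before snapshotting
    )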

For a cookie/local-storage/session dependency, you have to look deeper into the browser's inspect window and find out which values are essential for getting the data.
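
Once you know which values matter, you can attach them to the request. A minimal sketch (the cookie and header values here are hypothetical placeholders; substitute whatever you find in the inspect window):

from scrapy.http import Request

# inside the spider class
def start_requests(self):
    yield Request(
        'https://www.amazon.com/s?field-keywords=laptop',
        cookies={'session-id': 'value-from-your-browser'},  # hypothetical cookie
        headers={'Accept-Language': 'en-US,en;q=0.9'},      # example header
        callback=self.parse_item,
    )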

    Thanks for the help! I tried debugging and nothing seemed wrong, so I just ran the spider again and... it worked. I'm going to restart my PC next time this happens and try to re-run the spider. – Manuel Feb 01 '19 at 11:13