
I'm trying to scrape details from a subsite and merge them with the details scraped from the main site. I've been researching through Stack Overflow as well as the documentation; however, I still can't get my code to work. It seems that my function to extract additional details from the subsite does not work. If anyone could take a look I would be very grateful.

# -*- coding: utf-8 -*-
from scrapy import Request
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapeInfo.items import infoItem, InfoItemSubSite
import pyodbc


class scrapeInfo(Spider):
    name = "info"
    allowed_domains = ["nevermind.com"]
    start_urls = []

    def start_requests(self):

        #Get infoID and Type from database
        self.conn = pyodbc.connect('DRIVER={SQL Server};SERVER=server;DATABASE=dbname;UID=user;PWD=password')
        self.cursor = self.conn.cursor()
        self.cursor.execute("SELECT InfoID, category FROM dbo.StageItem")

        rows = self.cursor.fetchall()

        for row in rows:
            url = 'http://www.nevermind.com/info/'
            InfoID = row[0]
            category = row[1]
            yield self.make_requests_from_url(url + str(InfoID), InfoID, category, self.parse)

    def make_requests_from_url(self, url, InfoID, category, callback):
        request = Request(url, callback)
        request.meta['InfoID'] = InfoID
        request.meta['category'] = category
        return request

    def parse(self, response):
        hxs = Selector(response)
        infodata = hxs.xpath('div[2]/div[2]')  # input item path

        itemPool = []

        InfoID = response.meta['InfoID']
        category = response.meta['category']

        for info in infodata:
            item = infoItem()
            item_cur, item_hist = InfoItemSubSite(), InfoItemSubSite()

            # Stem Details
            item['id'] = InfoID
            item['field'] = info.xpath('tr[1]/td[2]/p/b/text()').extract()
            item['field2'] = info.xpath('tr[2]/td[2]/p/b/text()').extract()
            item['field3'] = info.xpath('tr[3]/td[2]/p/b/text()').extract()
            item_cur['field4'] = info.xpath('tr[4]/td[2]/p/b/text()').extract()
            item_cur['field5'] = info.xpath('tr[5]/td[2]/p/b/text()').extract()
            item_cur['field6'] = info.xpath('tr[6]/td[2]/p/b/@href').extract()

            # Extract additional information about item_cur from the referring site
            # This part does not work
            if item_cur['field6']:
                url = 'http://www.nevermind.com/info/sub/' + item_cur['field6'][0]
                request = Request(url, self.parse_item_sub)
                request.meta['category'] = category
                yield self.parse_item_sub(url, category)
            item_hist['field5'] = info.xpath('tr[5]/td[2]/p/b/text()').extract()
            item_hist['field6'] = info.xpath('tr[6]/td[2]/p/b/text()').extract()
            item_hist['field7'] = info.xpath('tr[7]/td[2]/p/b/@href').extract()

            item['subsite_dic'] = [dict(item_cur), dict(item_hist)]

            itemPool.append(item)
            yield item
    # Function to extract additional info from the subsite, and return it to the original item.
    def parse_item_sub(self, response, category):
        hxs = Selector(response)
        subsite = hxs.xpath('div/div[2]')  # input base path

        category = response.meta['category']

        for i in subsite:
            item = InfoItemSubSite()
            if category == 'first':
                item['subsite_field1'] = i.xpath('/td[2]/span/@title').extract()
                item['subsite_field2'] = i.xpath('/tr[4]/td[2]/text()').extract()
                item['subsite_field3'] = i.xpath('/div[5]/a[1]/@href').extract()
            else:
                item['subsite_field1'] = i.xpath('/tr[10]/td[3]/span/@title').extract()
                item['subsite_field2'] = i.xpath('/tr[4]/td[1]/text()').extract()
                item['subsite_field3'] = i.xpath('/div[7]/a[1]/@href').extract()
            return item

I've been looking at these examples together with a lot of other examples (Stack Overflow is great for that!), as well as the Scrapy documentation, but I'm still unable to understand how to get details sent from one function and merged with the items scraped in the original function.

How do I merge results from the target page into the current page in Scrapy?
How can I use multiple requests and pass items between them in Scrapy (Python)?

– Philip

1 Answer


What you are looking for here is called request chaining. Your problem is yielding one item built from several requests. The solution is to chain the requests while carrying your item along in the request's meta attribute.
Example:

def parse(self, response):
    item = MyItem()
    item['name'] = response.xpath("//div[@id='name']/text()").extract()
    more_page = ...  # some page that offers more details
    # go to more page and take your item with you.
    yield Request(more_page, 
                  self.parse_more,
                  meta={'item':item})  


def parse_more(self, response):
    # get your item from the meta
    item = response.meta['item']
    # fill it in with more data and yield!
    item['last_name'] = response.xpath("//div[@id='lastname']/text()").extract()
    yield item 
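
Applied to the spider in your question, the same pattern would look roughly like this. This is only a sketch: infoItem, InfoItemSubSite, the field names, and the /info/sub/ URL prefix are taken from your question, and the XPaths are placeholders you would fill in yourself.

    def parse(self, response):
        item = infoItem()
        item['id'] = response.meta['InfoID']
        item['field'] = response.xpath('//tr[1]/td[2]/p/b/text()').extract()

        # build the subsite url from the href scraped on this page
        href = response.xpath('//tr[6]/td[2]/p/b/@href').extract()
        if href:
            url = 'http://www.nevermind.com/info/sub/' + href[0]
            # chain: carry the half-filled item (and category) along in meta
            yield Request(url,
                          self.parse_item_sub,
                          meta={'item': item,
                                'category': response.meta['category']})
        else:
            # no subsite to visit, so the item is already complete
            yield item

    def parse_item_sub(self, response):
        # get the half-filled item back and finish it here
        item = response.meta['item']
        sub = InfoItemSubSite()
        if response.meta['category'] == 'first':
            sub['subsite_field1'] = response.xpath('//td[2]/span/@title').extract()
        else:
            sub['subsite_field1'] = response.xpath('//tr[10]/td[3]/span/@title').extract()
        item['subsite_dic'] = dict(sub)
        # yield the merged item exactly once, from the last callback in the chain
        yield item

Note that parse yields a Request instead of calling the callback itself; Scrapy schedules the request, and the item simply travels along in meta until the last callback in the chain yields it.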
– Granitosaurus
  • Thanks! So in my case I have about 109 fields in the first request, are all of these carried in the meta={'item':item}, and how do I send different classes this way? – Philip Aug 03 '16 at 21:32
  • @PhilipHoyos You can carry any object or reference in the request meta, however some more complex object types and classes can cause some issues like memory leaks etc, so you probably want to stick with basic python types when possible (scrapy.Item is pretty much just python dict btw, so it's completely safe) – Granitosaurus Aug 04 '16 at 07:52
  • @Granitosaurus Gotta take a minute and 1+ you for the cleanest, simplest, direct and to the point explanation of request.meta I've ever seen. No gobbledygook, just 'here it is, here's what it does, here's how to make it work for you'. Bravo! – Malik A. Rumi Aug 01 '17 at 18:16