21

I need an example in Scrapy of how to get a link from one page, follow that link, extract more info from the linked page, and merge it back with data from the first page.

Jas

4 Answers

15

Partially fill your item on the first page, and then put it in your request's `meta`. When the callback for the next page is called, it can take the partially filled item, put more data into it, and then return it.

Acorn
  • thanks, i got the inner link to go to in my var `links[i]`. then, inside a loop (for each outer page), i tried this: `for i in range(0, len(categories)): print categories[i] + ' : ' + links[i]; item = LectscrapItem(); item['category'] = categories[i]; yield FormRequest(links[i], method='GET', callback=self.parseVideo, meta={'item': item})` and inside parseVideo i did `print 'im here'`. i don't see "im here" printed... anything i'm doing wrong, please? – Jas Dec 12 '11 at 05:34
  • 1
    @Jason, i haven't used FormRequest, but... `FormRequest(links[i], method='GET', callback=self.parseVideo, meta={'item': item})`: why do you need FormRequest without a `formdata` argument? why not a simple Request? – warvariuc Dec 12 '11 at 08:02
  • ok, so i updated to Request and now it looks like this: `print 'in item, going to video'; yield Request(links[i], callback=self.parseVideo)`, and my parseVideo method is: `def parseVideo(self, response): print 'inhere'`. however, while i do get the first print, i get no 'inhere' printed... i don't understand why it's not called. – Jas Dec 13 '11 at 19:30
8

An example from the Scrapy documentation:

def parse_page1(self, response):
    item = MyItem()
    item['main_url'] = response.url
    request = scrapy.Request("http://www.example.com/some_page.html",
                             callback=self.parse_page2)
    request.meta['item'] = item
    return request

def parse_page2(self, response):
    item = response.meta['item']
    item['other_url'] = response.url
    return item
Chitrasen
7

More information on passing meta data and Request objects is described in this part of the documentation:

http://readthedocs.org/docs/scrapy/en/latest/topics/request-response.html#passing-additional-data-to-callback-functions

This question is also related to: Scrapy: Follow link to get additional Item data?

Ryan White
4

A slightly annotated version of the Scrapy documentation code:

def start_requests(self):
    yield scrapy.Request("http://www.example.com/main_page.html",
                         callback=self.parse_page1)

def parse_page1(self, response):
    item = MyItem()
    item['main_url'] = response.url  # extracts http://www.example.com/main_page.html
    request = scrapy.Request("http://www.example.com/some_page.html",
                             callback=self.parse_page2)
    request.meta['my_meta_item'] = item  # passing the item in the meta dictionary
    # alternatively you can do it as below:
    # request = scrapy.Request("http://www.example.com/some_page.html",
    #                          meta={'my_meta_item': item},
    #                          callback=self.parse_page2)
    return request

def parse_page2(self, response):
    item = response.meta['my_meta_item']
    item['other_url'] = response.url  # extracts http://www.example.com/some_page.html
    return item
Learner