
I want to use Scrapy to crawl data from webpages, but the difference between pages isn't visible in the URL. For example:

http://epgd.biosino.org/EPGD/search/textsearch.jsp?textquery=man&submit=Feeling+Lucky

The URL above is the first page I want to crawl data from, and it's easy to get data from it.

Here is my code:

__author__ = 'Rabbit'
from scrapy.spiders import Spider
from scrapy.selector import Selector

from scrapy_Data.items import EPGD


class EPGD_spider(Spider):
    name = "EPGD"
    allowed_domains = ["epgd.biosino.org"]
    term = "man"
    url_base = "http://epgd.biosino.org/EPGD/search/textsearch.jsp?textquery=man&submit=Feeling+Lucky"

    start_urls = [url_base]

    def parse(self, response):
        sel = Selector(response)
        # each search result is a table row with class "odd" or "even"
        sites = sel.xpath('//tr[@class="odd"]|//tr[@class="even"]')

        for site in sites:
            item = EPGD()
            item['genID'] = map(unicode.strip, site.xpath('td[1]/a/text()').extract())
            item['taxID'] = map(unicode.strip, site.xpath('td[2]/a/text()').extract())
            item['familyID'] = map(unicode.strip, site.xpath('td[3]/a/text()').extract())
            item['chromosome'] = map(unicode.strip, site.xpath('td[4]/text()').extract())
            item['symbol'] = map(unicode.strip, site.xpath('td[5]/text()').extract())
            item['description'] = map(unicode.strip, site.xpath('td[6]/text()').extract())
            yield item

But the problem comes when I want to get data from page 2. I click next page, and the URL of the second page looks like this:

http://epgd.biosino.org/EPGD/search/textsearch.jsp?currentIndex=20

As you can see, it doesn't have a keyword in its URL, so I don't know how to get data from the other pages. Maybe I should use cookies, but I don't know how to handle this situation, so can anyone help me?
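One thing I could probably do to see whether the results are tied to a server-side session is to request the second page directly outside of Scrapy, for example with the requests library (just a diagnostic sketch, I haven't verified what it actually returns):

import requests

# fetch page 2 directly, without ever visiting page 1 first
r = requests.get("http://epgd.biosino.org/EPGD/search/textsearch.jsp",
                 params={"currentIndex": 20})
print r.status_code
print 'class="odd"' in r.text   # crude check for result rows in the response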

Thanks a lot!

Coding_Rabbit
  • So is my question not clear? If somebody wants to help but doesn't know what I'm asking for, please add a comment here. – Coding_Rabbit Mar 19 '16 at 00:39

1 Answer


When link parsing and Request yielding are added to your parse() function, your example just works for me. Maybe the page uses some server-side cookies. When using a proxy service like Scrapy's Crawlera (which downloads from multiple IPs), it fails, though.
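If you want to confirm whether cookies are what keeps the search term between pages, one thing you could try (I haven't verified this against the site) is turning on Scrapy's cookie debugging in settings.py:

# settings.py -- log the Cookie / Set-Cookie headers Scrapy sends and receives
COOKIES_ENABLED = True   # this is the default, shown only for clarity
COOKIES_DEBUG = True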

The solution is to add the 'textquery' parameter manually to the request URL:

import urlparse
from urllib import urlencode

from scrapy import Request
from scrapy.spiders import Spider
from scrapy.selector import Selector


class EPGD_spider(Spider):
    name = "EPGD"
    allowed_domains = ["epgd.biosino.org"]
    term = 'calb'
    base_url = "http://epgd.biosino.org/EPGD/search/textsearch.jsp?currentIndex=0&textquery=%s"
    start_urls = [base_url % term]

    def update_url(self, url, params):
        # merge params into the URL's existing query string
        url_parts = list(urlparse.urlparse(url))
        query = dict(urlparse.parse_qsl(url_parts[4]))
        query.update(params)
        url_parts[4] = urlencode(query)
        url = urlparse.urlunparse(url_parts)
        return url

    def parse(self, response):
        sel = Selector(response)
        genes = sel.xpath('//tr[@class="odd"]|//tr[@class="even"]')

        for gene in genes:
            item = {}
            item['genID'] = map(unicode.strip, gene.xpath('td[1]/a/text()').extract())
            # ...
            yield item

        # follow the pagination links, re-adding the search term to each URL
        urls = sel.xpath('//div[@id="nviRecords"]/span[@id="quickPage"]/a/@href').extract()
        for url in urls:
            url = response.urljoin(url)
            url = self.update_url(url, params={'textquery': self.term})
            yield Request(url)

The update_url() function is taken from Lukasz' solution:
Add params to given URL in Python
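
For illustration, here is a standalone sketch of what update_url() does to one of the pagination links (the parameter order in the rebuilt query string may differ, since it comes from a dict):

import urlparse
from urllib import urlencode

def update_url(url, params):
    # same logic as the spider method above
    url_parts = list(urlparse.urlparse(url))
    query = dict(urlparse.parse_qsl(url_parts[4]))
    query.update(params)
    url_parts[4] = urlencode(query)
    return urlparse.urlunparse(url_parts)

print update_url("http://epgd.biosino.org/EPGD/search/textsearch.jsp?currentIndex=20",
                 {'textquery': 'calb'})
# e.g. http://epgd.biosino.org/EPGD/search/textsearch.jsp?currentIndex=20&textquery=calb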

dron22