
How can Scrapy follow links on a website whose hrefs are JavaScript postbacks, i.e. "javascript:__doPostBack"? I have a CrawlSpider which otherwise works fine.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MySpider(CrawlSpider):
    name = 'myspider'
    allowed_domains = ['website']
    start_urls = ['website/Category/']

    rules = (
        # follow the category overview pages, scrape items from the detail pages
        Rule(SgmlLinkExtractor(allow='/Products/Overview/'), follow=True),
        Rule(SgmlLinkExtractor(allow=('/Products/Details/', )), callback='parse_item'),
    )

But the pagination links look like this:

<a id="MainContent_ProductsOverview1_rptPagesTop_btnPage_1" class="btnPage" href="javascript:__doPostBack('ctl00$MainContent$ProductsOverview1$rptPagesTop$ctl02$btnPage','')" >1</a>
<a id="MainContent_ProductsOverview1_rptPagesTop_btnPage_1" class="btnPage" href="javascript:__doPostBack('ctl00$MainContent$ProductsOverview1$rptPagesTop$ctl02$btnPage','')" >2</a>

etc etc

I know the FormRequest/formdata examples, but I don't know how to get the URL parameters for the postback out of those hrefs. Help would be awesome.
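To be concrete, something like the following is roughly what I mean. It is only a sketch: parse_overview is a made-up method name, the btnPage class comes from the HTML above, and __EVENTTARGET / __EVENTARGUMENT are the standard hidden fields that ASP.NET WebForms posts back.

import re

from scrapy.http import FormRequest

# sketch of a method inside MySpider (parse_overview is a hypothetical name)
def parse_overview(self, response):
    # pull the __doPostBack target out of each pagination link
    for href in response.xpath('//a[@class="btnPage"]/@href').extract():
        match = re.search(r"__doPostBack\('([^']+)'", href)
        if not match:
            continue
        # e.g. ctl00$MainContent$ProductsOverview1$rptPagesTop$ctl02$btnPage
        target = match.group(1)
        # replay the postback as a form submission; from_response should
        # carry over __VIEWSTATE and the other hidden ASP.NET fields
        yield FormRequest.from_response(
            response,
            formdata={'__EVENTTARGET': target, '__EVENTARGUMENT': ''},
            dont_click=True,
            callback=self.parse_overview,
        )

Is extracting the parameter from the href like this the right direction, and how do I wire it into the CrawlSpider rules?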

Thank you :D

anhocy