
I am a newbie to scraping and Selenium. The page I want to scrape uses a JS script on a button to get to the next page. I found part of the code (Click a Button in Scrapy) on SO but I can't seem to make it work.

from selenium import webdriver
import scrapy
from scrapy import Request
class chSpider(scrapy.Spider):
    name = 'spidypy'
    allowed_domains = ['117.145.177.252']
    start_urls = ['http://117.145.177.252/login.do?method=enterPdamccx']

    def __init__(self):
        self.driver = webdriver.Firefox()

    def parse(self,response):

        self.driver.get('http://117.145.177.252/login.do?method=enterPdamccx')

        while True:
            try:
                next = self.driver.find_element_by_xpath('/html/body/form/div[3]/div/div/a')
                url = 'http://117.145.177.252/login.do?method=enterPdamccx'
                yield Request(url,callback=self.parse2)
                next.click()
            except:
                break

        self.driver.close()

    def parse2(self,response):
        print('you are here!')

I receive the following error message several times:

selenium.common.exceptions.WebDriverException: Message: connection refused
I'm thinking perhaps you're not making it on the first driver.get() or the yield, skipping the click completely. Try clicking before yield. – David Silveiro Mar 24 '19 at 19:08
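A minimal sketch of that reordering, using only the XPath and URL from the question (the bare except is narrowed to NoSuchElementException, and the loop sits inside parse() as before):

from selenium.common.exceptions import NoSuchElementException
from scrapy import Request

    # inside parse(): click the next link first, then yield the follow-up request
    while True:
        try:
            next_link = self.driver.find_element_by_xpath('/html/body/form/div[3]/div/div/a')
            next_link.click()
        except NoSuchElementException:
            break
        yield Request('http://117.145.177.252/login.do?method=enterPdamccx',
                      callback=self.parse2)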

1 Answer


That a element has an onclick handler, so you would just do:

driver.execute_script('doMccx()')
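
Worked into the question's spider, that could look roughly like this (a sketch, not a tested fix: doMccx() is assumed from the answer above to be the anchor's onclick handler, and dont_filter=True is added so Scrapy's duplicate filter does not drop repeated requests to the same URL):

from scrapy import Request
from selenium.common.exceptions import NoSuchElementException

    # inside chSpider
    def parse(self, response):
        self.driver.get('http://117.145.177.252/login.do?method=enterPdamccx')
        while True:
            try:
                # stop once the "next" link from the question is no longer present
                self.driver.find_element_by_xpath('/html/body/form/div[3]/div/div/a')
            except NoSuchElementException:
                break
            # run the anchor's onclick handler directly, as the answer suggests
            self.driver.execute_script('doMccx()')
            # yield the follow-up request as in the question; dont_filter=True keeps
            # Scrapy from discarding the repeated URL as a duplicate
            yield Request('http://117.145.177.252/login.do?method=enterPdamccx',
                          callback=self.parse2, dont_filter=True)
        self.driver.close()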