
I need to scrape a page that uses JavaScript, which is why I'm using Selenium. The problem is that Selenium alone can't extract the required data.

I want to use HtmlXPathSelector to try to fetch the data.

How can I pass the HTML that Selenium produced to HtmlXPathSelector?

DjangoPy

3 Answers


This is my solution: just create an `HtmlXPathSelector` from Selenium's `page_source`:

hxs = HtmlXPathSelector(text=sel.page_source)
warvariuc

Try creating a `Response` manually:

from scrapy.http import TextResponse
from scrapy.selector import HtmlXPathSelector

body = '''<html></html>'''

response = TextResponse(url='', body=body, encoding='utf-8')

hxs = HtmlXPathSelector(response)
hxs.select("/html")
warvariuc
  • How does Selenium come into play? I did `selenium.get(url)`. How do I proceed? – DjangoPy Jul 27 '12 at 11:39
  • I haven't used Selenium, but I guess you can get the [page html source](http://stackoverflow.com/q/7861775/248296) from it. Having the page body, you create a `response` and then you can use `HtmlXPathSelector` on it. – warvariuc Jul 27 '12 at 12:06

Manual response with Selenium:

from scrapy.spider import BaseSpider
from scrapy.http import TextResponse
from scrapy.selector import HtmlXPathSelector
import time
from selenium import selenium

class DemoSpider(BaseSpider):
    name = "Demo"
    allowed_domains = ['www.example.com']  # domain names only, not URLs
    start_urls = ["http://www.example.com/demo"]

    def __init__(self):
        BaseSpider.__init__(self)
        # Selenium RC server must be running on 127.0.0.1:4444
        self.selenium = selenium("127.0.0.1", 4444, "*chrome", self.start_urls[0])
        self.selenium.start()

    def __del__(self):
        self.selenium.stop()

    def parse(self, response):
        sel = self.selenium
        sel.open(response.url)
        time.sleep(2.0)  # wait for JavaScript execution

        # build the response object from the Selenium page source
        body = sel.get_html_source()
        sel_response = TextResponse(url=response.url, body=body, encoding='utf-8')
        hxs = HtmlXPathSelector(sel_response)
        return hxs.select("//table").extract()
samuel5
  • How do I make use of `sel` before the line ```body = sel.get_html_source()```? I need to make an XPath query, and based on the returned elements I need to `click()` them one by one and then download `get_html_source()`. Any idea how to do that? `sel` does not seem to have methods for XPath queries on content. – Mo J. Mughrabi Nov 22 '14 at 14:07
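
Indeed, the Selenium RC `selenium` object used in this answer has no XPath query methods; the newer WebDriver API does. A hedged sketch of what the commenter describes (`find_elements_by_xpath` and `page_source` are WebDriver APIs, and the function name is hypothetical):

```python
def click_all_and_collect(driver, xpath):
    """Click every element matching `xpath` and collect the page
    source after each click.

    `driver` is assumed to be a Selenium WebDriver instance (e.g.
    `webdriver.Chrome()`), not the RC `selenium` object used above.
    """
    sources = []
    for element in driver.find_elements_by_xpath(xpath):
        element.click()
        sources.append(driver.page_source)
    return sources
```

Each collected source string can then be wrapped in a `TextResponse` and queried with `HtmlXPathSelector` exactly as in the answer above.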