I'm writing a crawler in Python. Given a single web page, I extract its HTML content in the following manner:

import urllib2
response = urllib2.urlopen('http://www.example.com/')
html = response.read()

But some text components don't appear in the HTML page source. For example, on this page (it redirects to the index; please open one of the dates and view a specific mail), if you view the page source you will see that the mail text doesn't appear in the source: it seems to be loaded by JavaScript.

How can I programmatically download this text?
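For reference, urllib2 exists only in Python 2; on Python 3 the same fetch is written with urllib.request. A minimal sketch:

```python
# Python 3 equivalent of the urllib2 snippet above (urllib2 was merged
# into urllib.request in Python 3).
import urllib.request


def fetch_html(url):
    """Download a page and return its raw HTML as a string."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")


# html = fetch_html("http://www.example.com/")
```

Note that, just like urllib2, this only returns the static HTML as served; it does not run any JavaScript on the page.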

alecxe
zvisofer

3 Answers


The easiest option here would be to make a POST request to the URL responsible for the email search and parse the JSON results (credit to @recursive, who suggested the idea first). Example using the requests package:

import requests

data = {
    'year': '1999',
    'month': '05',
    'day': '20',
    'locale': 'en-us'
}
response = requests.post('http://jebbushemails.com/api/email.py', data=data)

results = response.json()
for email in results['emails']:
    print(email['dateCentral'], email['subject'])

Prints:

1999-05-20T00:48:23-05:00 Re: FW: The Reason Study of Rail Transportation in Hillsborough
1999-05-20T04:07:26-05:00 Escambia County School Board
1999-05-20T06:29:23-05:00 RE: Escambia County School Board
...
1999-05-20T22:56:16-05:00 RE: School Board
1999-05-20T22:56:19-05:00 RE: Emergency Supplemental just passed 64-36
1999-05-20T22:59:32-05:00 RE:
1999-05-20T22:59:33-05:00 RE: (no subject)

A different approach here would be to let a real browser handle the dynamic JavaScript part of the page load, with the help of the Selenium browser automation framework:

# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


driver = webdriver.Chrome()  # can also be, for example, webdriver.Firefox()
driver.get('http://jebbushemails.com/email/search')

# click 1999-2000 (Selenium 4 locator API; the old find_element_by_xpath
# helpers were removed in Selenium 4)
button = driver.find_element(By.XPATH, '//button[contains(., "1999 – 2000")]')
button.click()

# click 20
cell = driver.find_element(By.XPATH, '//table[@role="grid"]//span[. = "20"]')
cell.click()

# click Submit
submit = driver.find_element(By.XPATH, '//button[span[1]/text() = "Submit"]')
submit.click()

# wait for the results to appear
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, '//tr[@analytics-event]')))

# get the results
for row in driver.find_elements(By.XPATH, '//tr[@analytics-event]'):
    date, subject = row.find_elements(By.TAG_NAME, 'td')
    print(date.text, subject.text)

Prints:

6:24:27am Fw: Support Coordination
6:26:18am Last nights meeting
6:52:16am RE: Support Coordination
7:09:54am St. Pete Times article
8:05:35am semis on the interstate
...
6:07:25pm Re: Appointment
6:18:07pm Re: Mayor Hood
8:13:05pm Re: Support Coordination

Note that the browser here can also run headless (PhantomJS was a common choice at the time, though it has since been deprecated). And if there is no display for the browser to work in, you can fire up a virtual one, for example with Xvfb.

alecxe

You can make a request to the actual AJAX service instead of trying to use the web interface.

For example, a POST request to http://jebbushemails.com/api/email.py with this form data will yield about 80 KB of easily parseable JSON.

year: 1999
month: 05
day: 20
locale: en-us
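To make that concrete, here is a sketch of the same POST using only the standard library (urllib.request), so no third-party package is needed; the endpoint and field names are taken from this answer, and the actual network call is left commented out:

```python
# Sketch: build the POST request for the email search endpoint using
# only the standard library (endpoint and field names from the answer).
import json
import urllib.parse
import urllib.request


def build_request(year, month, day, locale="en-us"):
    """Build a POST request carrying the url-encoded search form data."""
    data = urllib.parse.urlencode({
        "year": year,
        "month": month,
        "day": day,
        "locale": locale,
    }).encode("ascii")
    return urllib.request.Request(
        "http://jebbushemails.com/api/email.py", data=data, method="POST")


req = build_request("1999", "05", "20")

# Performing the request (network access required):
# with urllib.request.urlopen(req) as response:
#     results = json.loads(response.read().decode("utf-8"))
#     for email in results["emails"]:
#         print(email["dateCentral"], email["subject"])
```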
recursive

I'm not a Python expert, but any function such as urlopen only fetches the static HTML; it does not execute the JavaScript in it. What you need is a browser engine of some kind to actually parse and execute the JavaScript. This seems to be answered here:

How to Parse Java-script contains[dynamic] on web-page[html] using Python?

Gil Cohen