
Basically, I want to scrape some options data from Yahoo! Finance every day. I have been kicking the tires using (1) as an example, but it hasn't quite worked out, since I am unfamiliar with HTML.

(1) Scraping html tables into R data frames using the XML package

As an example I want to scrape and collect the following options chain http://finance.yahoo.com/q/op?s=MNTA&m=2011-05

Here is what I have tried so far. The last 2 lines don't work since I am unclear what class I should be looking for. Any help would be great. Thanks.

library(RCurl)
library(XML)

# download the options chain page
theurl <- "http://finance.yahoo.com/q/op?s=MNTA&m=2011-05"
webpage <- getURL(theurl)
webpage <- readLines(tc <- textConnection(webpage)); close(tc)

# parse the raw HTML into a tree
pagetree <- htmlTreeParse(webpage, error=function(...){}, useInternalNodes = TRUE)

# these are the two lines that don't work; I don't know which table class to target
tablehead <- xpathSApply(pagetree, "//*/table[@class='yfnc_datamodoutline1']/tr/th", xmlValue)

results <- xpathSApply(pagetree, "//*/table[@class='wikitable sortable']/tr/td", xmlValue)


James Smith

1 Answer


I presume that you want to get the information in the two tables, Call Options and Put Options. Here is one simple way to do it using the XML package:

url  = "http://finance.yahoo.com/q/op?s=MNTA&m=2011-05"
# extract all tables on the page
tabs = readHTMLTable(url, stringsAsFactors = F)

# locate tables containing call and put information
call_tab = tabs[[11]]
put_tab  = tabs[[15]]

I figured out the position of the two tables by manual inspection. If the position is going to vary across the pages you are parsing, then you might want to locate the tables programmatically, either by the length of the table or by some other text-based criterion, as in the sketch after this paragraph.
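For example, here is a minimal sketch of that idea (not part of the original answer); it assumes the Yahoo! option tables carry column headers such as Strike, Symbol, Bid and Ask, and keeps only the tables whose headers match:

# read every table on the page, as before
tabs = readHTMLTable(url, stringsAsFactors = F)

# keep only tables whose headers look like an option chain
# (the exact header names are an assumption about the Yahoo! page layout)
is_chain = sapply(tabs, function(tab) {
  !is.null(tab) && all(c("Strike", "Symbol", "Bid", "Ask") %in% names(tab))
})
chain_tabs = tabs[is_chain]

# if exactly two tables match, the first should be the calls and the second the puts
names(chain_tabs) = c("calls", "puts")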

EDIT: The two tables you are presumably interested in both have `cellpadding = 3`. You can use this information to extract the two tables directly with the following code:

# parse url into html tree
doc = htmlTreeParse(url, useInternalNodes = T)

# find all table nodes with attribute cellpadding = 3
tab_nodes = xpathApply(doc, "//table[@cellpadding = '3']")

# parse the two nodes into tables
tabs = lapply(tab_nodes, readHTMLTable)
names(tabs) = c("calls", "puts")

This is a list that contains both tables.
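As a quick check (just a usage sketch), you can peek at each parsed table:

# first few rows of each option table
head(tabs$calls)
head(tabs$puts)

# dimensions of both tables at once
sapply(tabs, dim)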

Ramnath
  • Thanks so much! This works much better than my other attempts at fixing the getOptionChain() command in the quantmod package. – James Smith Apr 25 '11 at 02:48
  • @James What problems are you having with `getOptionChain`? I tried `getOptionChain('MNTA')` and it returns the same results as the parser defined here. – Ramnath Apr 25 '11 at 03:41
  • `getOptionChain()` fails for option chains that have only one strike, and I couldn't find a clean solution. Try `getOptionChain('ACUR')` and you'll see an error about incorrect dimensions. – James Smith Apr 25 '11 at 03:58
  • One follow-up: how would you go about storing this data daily in R? I'm trying to manipulate a bunch of these. – James Smith May 17 '11 at 01:05
  • @James are you talking about pulling this information every day, or pulling it for a number of days historically? – Ramnath May 17 '11 at 15:44
  • Ideally I would scan Yahoo once or twice a day and store the results. This way I could have a time series of the option prices with corresponding volume. I just haven't thought of an easy way of storing the data. Sort of like this site: [Optionistics](http://www.optionistics.com/f/option_prices) – James Smith Jul 09 '11 at 16:20
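One minimal sketch of the storage side (not from this thread; the directory and helper name are illustrative only): write each day's parsed chain to date-stamped CSV files, so the files accumulate into a time series.

# assumes `tabs` is the named list of calls/puts produced by the answer above
save_chain = function(tabs, symbol, dir = "option_data") {
  dir.create(dir, showWarnings = FALSE)
  stamp = format(Sys.time(), "%Y-%m-%d_%H%M")
  for (side in names(tabs)) {
    out = file.path(dir, paste(symbol, side, stamp, "csv", sep = "."))
    write.csv(tabs[[side]], out, row.names = FALSE)
  }
}

# run once or twice a day, e.g. from a cron job or the Windows Task Scheduler
save_chain(tabs, "MNTA")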