
I am looking to create a data.frame in R from a table found at http://netflixcanadavsusa.blogspot.ca/2013/11/alphabetical-list-k-4-am-fri-nov-22-2013.html#more

It consists of three columns. The first two columns may or may not contain a flag image, and the third is text. An extract is:

<span class="listings">
  <table>
    <tr>
     <td><img class="flag" src="http://bit.ly/Y9CbVZ" /></td>
     <td></td>
     <td><b><a target="_blank" href="http://movies.netflix.com/WiMovie/70187567">1000         Ways to Die - Season 3</a> (2010)</b>&nbsp;&nbsp;<i style="font-size:small"> 3.6 stars, 1 Season&nbsp;&nbsp;<a target="_blank" href="http://www.imdb.com/search/title?title=1000 Ways to Die - Season 3">imdb</a></i>
     </td>
    </tr> 
    <tr>
      <td><img class="flag" src="http://bit.ly/Y9CbVZ" /></td>
      <td><img class="flag" src="http://bit.ly/WXvnLp" /></td>
      <td><b><a target="_blank" href="http://movies.netflix.com/WiMovie/100_Below_Zero/70273426?trkid=1889703">100 Below Zero</a> (2013)</b>&nbsp;&nbsp;<i style="font-size:small"> 2.8 stars, 1hr 28m&nbsp;&nbsp;<a target="_blank" href="http://www.imdb.com/search/title?title=100 Below Zero">imdb</a></i></td>
    </tr>    
 </table>
</span>

So here the first row has an image in the first column only, while the second row has images in both. I can extract the text and the image URLs but cannot match them up to take account of the missing data. Here is what I have done to date - myURL refers to the above site and I have only shown a couple of rows from each extract:

library(XML)
myURL <- "http://netflixcanadavsusa.blogspot.ca/2013/11/alphabetical-list-k-4-am-fri-nov-22-2013.html#more"


basicInfo <- htmlParse(myURL, isURL = TRUE)

### text
 df <- readHTMLTable(myURL,header=c("flag1","flag2","movie"),  stringsAsFactors = FALSE)[[1]]
head(df,2)
# V1 V2                                                             V3
# 1       1000 Ways to Die - Season 3 (2010)   3.6 stars, 1 Season  imdb
# 2                     100 Below Zero (2013)   2.8 stars, 1hr 28m  imdb    

### images
xpathSApply(basicInfo, "//*/span[@class='listings']/table/tr/td/img/@src")
#                   src                    src                    src                    
#"http://bit.ly/Y9CbVZ" "http://bit.ly/Y9CbVZ" "http://bit.ly/WXvnLp" 

So I have the images but do not know which row/column they apply to. In this problem, each column can only have one specific image, so it is sufficient to know whether it occurs; a more general case might have different srcs by row.
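To make it concrete, for my case a per-row TRUE/FALSE would be enough - something like this untested sketch of the logic I am after (flag1/flag2 are just names I made up):

## untested sketch: query each row separately so the position of any
## (possibly missing) flag image is preserved
rows <- getNodeSet(basicInfo, "//*/span[@class='listings']/table/tr")
flag1 <- sapply(rows, function(r) length(getNodeSet(r, "./td[1]/img")) > 0)
flag2 <- sapply(rows, function(r) length(getNodeSet(r, "./td[2]/img")) > 0)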

TIA

pssguy
  • As always, when someone asks about parsing HTML: http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454 – Carl Witthoft Nov 26 '13 at 19:36
  • Thanks. That link is the length of a novella! Any chance of being more specific? – pssguy Nov 26 '13 at 19:42
  • Just a warning to be really REALLY careful about trying to parse an HTML file :-) – Carl Witthoft Nov 26 '13 at 20:02
  • OK. But I have used the above functions without problem on several occasions before. Just have not encountered this specific issue – pssguy Nov 26 '13 at 20:20

1 Answer


Here is how I do this. It is a little bit long, but it does the job.

library(XML)
basicInfo <- htmlParse(myURL, isURL = TRUE,encoding='UTF-8')

## for some reason the data is divided between two HTML tags
rows1 <- xpathSApply(basicInfo, "//*/span[@class='listings']/table/tr")
rows2 <- xpathSApply(basicInfo, "//*/span[@id='listings']/*/tr")
## for each row node I create a small XML document containing
## all of its tds
ll <- lapply(c(rows1,rows2),function(x)xpathSApply(xmlDoc(x),'//*/td'))
ull <- unlist(ll)
## function to parse the img tag from the XML document;
## if the td doesn't contain an img it returns NA
parse.img <- function(x){
  res <- xpathSApply(xmlDoc(x),'//img',xmlGetAttr,'src')
  if (length(res) == 0) NA else res[1]
}

## columns 1 and 2: every third td, offset by the column position
col1 <- unlist(lapply(ull[c(TRUE,FALSE,FALSE)],parse.img))
col2 <- unlist(lapply(ull[c(FALSE,TRUE,FALSE)],parse.img))
## the third column contains text, so I use xmlValue to extract it
col3 <- unlist(lapply(ull[c(FALSE,FALSE,TRUE)], 
               function(x)xpathSApply(xmlDoc(x),'//td',xmlValue)))

res <- data.frame(col1,col2,col3)

head(res)

                  col1                 col2                                                                              col3
1 http://bit.ly/Y9CbVZ                 <NA>                1000 Ways to Die - Season 3 (2010)   3.6 stars, 1 Season  imdb
2 http://bit.ly/Y9CbVZ                 <NA>                1000 Ways to Die - Season 3 (2010)   3.6 stars, 1 Season  imdb
3 http://bit.ly/Y9CbVZ http://bit.ly/WXvnLp                              100 Below Zero (2013)   2.8 stars, 1hr 28m  imdb
4 http://bit.ly/Y9CbVZ http://bit.ly/WXvnLp 100 Ghost Street: The Return of Richard Speck (2012)   3 stars, 1hr 23m  imdb
5                 <NA> http://bit.ly/WXvnLp                              100 Million BC (2008)   2.8 stars, 1hr 25m  imdb
6                 <NA> http://bit.ly/WXvnLp                           100 Years Of Evil (2012)   2.7 stars, 1hr 19m  imdb
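
Since in your case each column corresponds to one specific flag image, you could then (just a suggestion, untested against the full page) reduce the src columns to simple presence indicators:

## optional: presence/absence is enough here, so turn the src columns
## into logicals (the flag1/flag2 names are just my choice)
res$flag1 <- !is.na(res$col1)
res$flag2 <- !is.na(res$col2)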
agstudy
  • +1 Thanks for taking the time out to solve this. It looks good, but I will just do a bit more work to check there is no need for follow-up before accepting. I think the data was divided because this is a follow-up URL to another page containing a subset of the data - hence the #more – pssguy Nov 26 '13 at 21:02
  • @pssguy I don't get your point here. What's wrong with this solution? – agstudy Nov 26 '13 at 22:41
  • Nothing - just had to get round to using it. Thanks again – pssguy Nov 27 '13 at 06:17