
I am trying to use the RCurl package to get data from the GeneCards database:

http://www-bimas.cit.nih.gov/cards//

I read a wonderful solution in a previously posted question:

How can I use R (Rcurl/XML packages ?!) to scrape this webpage?

However, my problem is different enough that I need further support from the experts. Instead of extracting all the links from the webpage, I have a list of ~1000 genes in mind. They are in the form of gene symbols (some of the symbols can be found in the database, some are new to it). Here is part of my list of genes:

TP53 SOD1 EGFR C2d AKT2 NFKB1

C2d is not in the database, so when I do the search manually I see "Sorry, there is no GeneCard for C2d".

When I use the solution posted in the previous question for my analysis:

How can I use R (Rcurl/XML packages ?!) to scrape this webpage?

(1) I first read in the list.

(2) I then use the get_structs function from the previous solution to substitute each gene symbol in the list into the following URL: http://www-bimas.cit.nih.gov/cgi-bin/cards/carddisp.pl?gene=genesymbol.

(3) Scrape the information that I need for each gene in the list, using the get_data_url function from the previous answer.
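For illustration, a minimal sketch of how I build the per-gene URLs in steps (1)-(2); the vectors below are just an example, not the code from the linked question:

# step (1): read in (part of) the gene list
genes <- c("TP53", "SOD1", "EGFR", "C2d", "AKT2", "NFKB1")

# step (2): substitute each gene symbol into the carddisp.pl URL
card_urls <- paste("http://www-bimas.cit.nih.gov/cgi-bin/cards/carddisp.pl?gene=",
                   genes, sep = "")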

It works for TP53, SOD1, and EGFR, but when the search comes to C2d, the process stops.

As I have ~1000 genes, I am sure some of them are missing from the database.

How can I automatically get a modified gene list telling me which of the ~1000 genes are missing, so that I can use the same approach as in the previous question to get all the data I need, based on a new gene list containing only the symbols that EXIST in the database?

Or is there a method to ask R to skip the missing items and continue scraping to the end of the list, but mark those missing items in the final results?
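For illustration only, a sketch of the kind of check I have in mind, assuming (as above) that missing symbols return a page containing "there is no GeneCard":

library(RCurl)

genes     <- c("TP53", "SOD1", "EGFR", "C2d", "AKT2", "NFKB1")
card_urls <- paste("http://www-bimas.cit.nih.gov/cgi-bin/cards/carddisp.pl?gene=",
                   genes, sep = "")

# TRUE where the page does NOT carry the "no GeneCard" message
exists_in_db <- sapply(card_urls, function(u) {
  page <- getURLContent(u)
  !grepl("there is no GeneCard", page, fixed = TRUE)
})

genes[exists_in_db]    # symbols that exist: keep these for scraping
genes[!exists_in_db]   # symbols to mark as missing in the final results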

To facilitate the discussion, I have made a pseudo input list using the scripts from the previous question, for the same webpage they used:

u <- c("Aero_pern", "Ppate", "didnotexist", "Sbico")

library(RCurl)

base_url  <- "http://gtrnadb.ucsc.edu/"
base_html <- getURLContent(base_url)[[1]]
links     <- strsplit(base_html, "a href=")[[1]]

get_structs <- function(u) {
  struct_url <- paste(base_url, u, "/", u, "-structs.html", sep = "")
  raw_data   <- getURLContent(struct_url)
  s_split1   <- strsplit(raw_data, "<PRE>")[[1]]
  all_data   <- s_split1[seq(3, length(s_split1))]
  # parse_genomes() is defined in the linked question
  data_list  <- lapply(all_data, parse_genomes)
  for (d in 1:length(data_list)) {
    data_list[[d]] <- append(data_list[[d]], u)
  }
  return(data_list)
}

I guess the problem can be solved by modifying the get_structs script above, or the ifelse function may help, but I cannot figure out how to modify it further. Please comment.

a83
  • Your code doesn't work for me. Make sure it runs as-is so we have a reproducible example. parse_genomes() not found. – Vincent May 20 '11 at 03:13
  • The parse_genomes function is in the link he offered, but the question still does not show the code that is producing an error. – IRTFM May 20 '11 at 03:37
  • Right. Definitely laziness on my part, so +1 for answering my question. But I still don't think we should be expected to follow links and read other text in order to know what the code is supposed to do. In any case, try() is probably good enough for him anyway, so hopefully my answer is sufficient. – Vincent May 20 '11 at 03:47

1 Answer


You can enclose your function call inside try() so that the process won't break when you get errors. This lets you loop over the problematic cases: try() returns an error message instead of stopping your process, e.g.

dat <- list()
for (i in 1:length(u)) {
  # failed lookups become "try-error" objects instead of stopping the loop
  dat[[i]] <- try(get_structs(u[i]))
}
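For example, you could afterwards mark which entries failed, since try() returns an object of class "try-error" on failure (a small sketch):

# identify the entries that errored (missing genes etc.)
failed <- sapply(dat, inherits, what = "try-error")
names(dat) <- u

u[failed]      # the gene symbols that could not be scraped
dat[!failed]   # the successfully scraped results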
Vincent