
I am doing some basic web scraping with rvest and results are being returned, but the data doesn't line up. That is, I am getting the items, but they come back out of order relative to the site, so the two data elements I am scraping can't be joined into a data.frame.

library(rvest)
library(tidyverse)

base_url <- "https://www.uchealth.com/providers"
loc <- read_html(base_url) %>%
  html_nodes('[class=locations]') %>%
  html_text() 
dept <- read_html(base_url) %>%
  html_nodes('[class=department last]') %>%
  html_text()

I was expecting to be able to create a data frame of:

Location  Department

Any suggestions? I was wondering if there is an index that would keep these items together, but I didn't see anything.

EDIT: I tried this also and did not have any luck. It seems the location is getting an erroneous starting value:

scraping <- function(base_url = "https://www.uchealth.com/providers") {

  loc <- read_html(base_url) %>%
    html_nodes('[class=locations]') %>%
    html_text()

  dept <- read_html(base_url) %>%
    html_nodes('[class=specialties]') %>%
    html_text()

  data.frame(
    loc  = ifelse(length(loc) == 0, NA, loc),
    dept = ifelse(length(dept) == 0, NA, dept),
    stringsAsFactors = FALSE
  )
}
cowboy
  • Not sure I follow your question, but maybe https://stackoverflow.com/questions/41708685/equivalent-of-which-in-scraping – MichaelChirico Jun 19 '19 at 18:59
  • This [https://stackoverflow.com/questions/33250826/scraping-with-rvest-complete-with-nas-when-tag-is-not-present] seems to be similar to what I am trying to do. I edited above but am not having any luck – cowboy Jun 19 '19 at 19:43
  • Small tip: Be kind to websites by not scraping the same page more than once. Instead, assign your `read_html` output to a variable, and then pull your data from that variable. –  Jun 20 '19 at 14:47
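In code, that tip amounts to something like the following (a minimal sketch using the selectors from the question; it does not by itself fix the alignment problem, it just avoids requesting the page twice):

library(rvest)

# Fetch and parse the page a single time, then reuse the parsed document.
page <- read_html("https://www.uchealth.com/providers")

loc  <- page %>% html_nodes('[class=locations]')   %>% html_text()
dept <- page %>% html_nodes('[class=specialties]') %>% html_text()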

2 Answers


One far more involved option would be to first turn all of the available data in each `.searchresult` node into a data frame, and then stack those data frames with `dplyr::bind_rows`. I think this goes beyond your basic requirements, but it still answers your question in a roundabout way, and it might be useful for the more general case:

library(rvest)
library(tidyverse)

base_url <- "https://www.uchealth.com/providers"

html <- read_html(base_url)

# Extract `.searchresult` nodes.
res_list <- html %>% 
    html_nodes(".searchresult") %>% 
    unclass()

# Turn each node into a one-row data frame: every ".propertylist li" entry
# reads like "Locations: ...", so split each entry on the first ":" into a
# header/value pair, use the headers as column names, and keep the values.
df_list <- res_list %>% 
    map(~ {html_nodes(., ".propertylist li") %>% 
            html_text(T) %>% 
            str_split(":", 2) %>%
            map(~ str_trim(.) %>% cbind() %>% as_tibble()) %>%
            bind_cols() %>%
            set_names(.[1,]) %>%   # first row holds the header names
            .[-1,]                 # drop the header row, keeping the values
    })

# Stack the dataframes, add the person names, and reorder the columns.
ucdf <- bind_rows(df_list) %>% 
    mutate(Name = map_chr(res_list, ~ html_node(., "h4") %>% html_text(T))) %>% 
    select(Name, 1:(ncol(.)-1))

Which returns:

# A tibble: 1,137 x 5
   Name         Title                       Locations                                      Specialties              Department        
   <chr>        <chr>                       <chr>                                          <chr>                    <chr>             
 1 Adrian Abre… Assistant Professor of Med… UC Health Physicians Office South (West Chest… nephrology               Internal Medicine 
 2 Bassam G. A… Associate Professor of Cli… University of Cincinnati Medical Center: (513… nephrology, organ trans… Internal Medicine 
 3 Brian Adams… Professor, Director of Res… UC Health Physicians Office (Clifton - Piedmo… dermatology              Dermatology       
 4 Opeolu M. A… Associate Professor of Eme… University of Cincinnati Medical Center: (513… emergency medicine, neu… Emergency Medicine
 5 Caleb Adler… Professor in the Departmen… UC Health Psychiatry (Stetson Building): (513… psychiatrypsychology, m… Psychiatry & Beha…
 6 John Adler,… Assistant Professor of Obs… UC Health Women's Center: (513) 475-8248, UC … gynecology, robotic sur… OB/GYN            
 7 Steven S. A… Assistant Professor         UC Health Physicians Office (Clifton - Piedmo… orthopaedics, spine sur… Orthopaedics & Sp…
 8 Surabhi Aga… Assistant Professor of Med… Hoxworth Center: (513) 475-8524, UC Health Ph… rheumatology, connectiv… Internal Medicine 
 9 Saad S. Ahm… Assistant Professor of Med… Hoxworth Center: (513) 584-7217                cardiovascular disease,… Internal Medicine 
10 Syed Ahmad,… Professor of Surgery; Dire… UC Health Barrett Cancer Center: (513) 584-89… surgical oncology, canc… Surgery           
# … with 1,127 more rows
  • Thanks, this also looks like a viable solution – cowboy Jun 20 '19 at 15:43
  • @Brad_J glad it helps. A good way to say thanks on SO is to upvote, even if you don't accept the answer. Others see the votes more than the comments, and it lets them know that someone found the answer useful. –  Jun 20 '19 at 15:51

The problem you are facing is that not every child node is present in every parent node. The best way to handle these situations is to collect all of the parent nodes in a list/vector and then extract the desired information from each parent using the `html_node` function. `html_node` always returns exactly one result per node, even if that result is NA.
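To see why that matters, here is a tiny self-contained illustration (the markup below is made up for demonstration, not taken from the site): `html_nodes` silently drops parents with no match, while `html_node` keeps an NA placeholder so the rows stay aligned.

library(rvest)

# Toy markup (assumed for illustration): the second provider has no location.
toy <- read_html('<ul id="providerlist">
  <li><span class="locations">Clifton</span><span class="department">Surgery</span></li>
  <li><span class="department">Dermatology</span></li>
</ul>')

items <- toy %>% html_nodes("#providerlist li")

items %>% html_nodes(".locations") %>% html_text()  # length 1: the missing value simply disappears
items %>% html_node(".locations")  %>% html_text()  # length 2: "Clifton", NA; rows stay aligned

Applied to the actual page: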

library(rvest)

# Read the page just once.
base_url <- "https://www.uchealth.com/providers"
page <- read_html(base_url)

# Parse out the parent node for each provider.
providers <- page %>% html_nodes('ul[id=providerlist]') %>% html_children()

# Parse out the requested information from each child.
dept     <- providers %>% html_node("[class ^= 'department']") %>% html_text()
location <- providers %>% html_node('[class=locations]') %>% html_text()

The lengths of `providers`, `dept`, and `location` should all be equal.
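Since `html_node` yields exactly one value per provider, the vectors line up row by row and can be combined into the data frame the question asked for (a minimal sketch; the `result` name is arbitrary):

# Each provider contributes exactly one row, so the columns stay aligned.
result <- data.frame(
  Location   = location,
  Department = dept,
  stringsAsFactors = FALSE
)
head(result)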

Dave2e
  • I haven't seen the `^` in `[class ^= 'department']` before. What does it do? –  Jun 20 '19 at 13:04
  • @gersht, the `^` means "starts with": `class ^= 'department'` matches any class attribute that starts with "department". I have trouble with rvest understanding whitespace within attribute or class names, which is why I use it here (see the sketch after these comments). – Dave2e Jun 20 '19 at 13:12
  • This solution worked exactly as i expected. Thanks for your time! – cowboy Jun 20 '19 at 15:42
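A quick demonstration of that selector (a minimal sketch with made-up markup, not taken from the site): an exact `[class=...]` match must spell out the full attribute value, while `^=` only needs the prefix.

library(rvest)

# Assumed markup mirroring the site's "department last" class value.
snippet <- read_html('<ul><li class="department last">Internal Medicine</li></ul>')

snippet %>% html_node("[class='department']")      %>% html_text()  # NA: exact match fails
snippet %>% html_node("[class='department last']") %>% html_text()  # "Internal Medicine"
snippet %>% html_node("[class^='department']")     %>% html_text()  # "Internal Medicine" via prefix match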