
I am trying to scrape a page on booking.com with rvest. The problem is that I need the code to return NA when a hotel does not have ratings, for example, so that the dataframe has the exact same number of rows for each parameter I'm trying to scrape.

The code that I am using, which runs fine but does not return NA for missing values, is this:

```r

# Necessary packages
  library(rvest)
  library(dplyr)
  library(httr)
  
# Base URL of the search results page
  base_url <- "https://www.booking.com/searchresults.it.html"
  
  
# Parameters we add to the search to get the specific results
  params <- list(
    ss = "Firenze%2C+Toscana%2C+Italia",
    efdco = 1,
    label = "booking-name-L*Xf2U1sq4*GEkIwcLOALQS267777916051%3Apl%3Ata%3Ap1%3Ap22%2C563%2C000%3Aac%3Aap%3Aneg%3Afi%3Atikwd-65526620%3Alp9069992%3Ali%3Adec%3Adm%3Appccp",
    aid = 376363,
    lang = "it",
    sb = 1,
    src_elem = "sb",
    src = "index",
    dest_id = -117543,
    dest_type = "city",
    ac_position = 0,
    ac_click_type = "b",
    ac_langcode = "it",
    ac_suggestion_list_length = 5,
    search_selected = "true",
    search_pageview_id = "2e375b14ad810329",
    ac_meta = "GhAyZTM3NWIxNGFkODEwMzI5IAAoATICaXQ6BGZpcmVAAEoAUAA%3D",
    checkin = "2023-06-11",
    checkout = "2023-06-18",
    group_adults = 2,
    no_rooms = 1,
    group_children = 0,
    sb_travel_purpose = "leisure"
  )
  
  
# Create empty vectors to store the titles, rating, price
  titles <- c()
  ratings <- c()
  prices <- c()

### Loop through each page of the search results
  for (page_num in 1:35) {
    
# Build the URL for the current page
    url <- modify_url(base_url, query = c(params, page = page_num))
    
# Read the HTML of the specified page
    page <- read_html(url)
    
# Extract the titles, rating, price from the current page
# Got the elements from Inspect code of the page
    titles_page <- page %>% html_elements("div[data-testid='title']") %>% html_text()
    
    prices_page <- titles_page %>% html_element("span[data-testid='price-and-discounted-price']") %>% html_text()
    ratings_page <- titles_page %>% html_element("div[aria-label^='Punteggio di']") %>% html_text()
    
# Append the titles, ratings, prices from the current page to the vector
    titles <- c(titles, titles_page)
    prices <- c(prices, prices_page)
    ratings <- c(ratings, ratings_page)
  }
  
  hotel = data.frame(titles, prices, ratings)
  
  print(hotel)
```

I have seen it suggested to add a parent and child node, and I have tried this, but it does not work:

```r
titles_page <- page %>% html_elements("div[data-testid='title']") %>% html_text()

prices_page <- titles_page %>% html_element("span[data-testid='price-and-discounted-price']") %>% html_text()
ratings_page <- titles_page %>% html_element("div[aria-label^='Punteggio di']") %>% html_text()
```
Anisa

1 Answer


`titles_page <- page %>% html_elements("div[data-testid='title']") %>% html_text()` creates a vector of character strings.
You cannot parse `titles_page` in the next line of code — it is no longer a set of nodes.
You are skipping the step of creating a vector of parent nodes. Review your previous question/answer "How to report NA when scraping a web with R and it does not have value?" and look at the line `properties <- html_elements(page, xpath=".//div[@data-testid='property-card']")` in that answer. It returns a vector of xml nodes. Now parse this vector of nodes to obtain the desired information.
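The reason the parent-first approach keeps the vectors aligned is that `html_element()` (singular), when applied to a set of nodes, returns exactly one result per parent, substituting a missing node — and hence `NA` from `html_text()` — wherever there is no match. A minimal self-contained sketch of that behavior (the HTML and class names here are made up for illustration):

```r
library(rvest)

# Two "cards": the second one has no price element
html <- minimal_html('
  <div class="card"><span class="name">A</span><span class="price">10</span></div>
  <div class="card"><span class="name">B</span></div>
')

cards <- html_elements(html, ".card")

# One result per parent; the card without a price yields NA
html_element(cards, ".price") %>% html_text()
# [1] "10" NA
```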

The error was not having these lines correct:

```r
# Find the parents
properties <- html_elements(page, xpath = ".//div[@data-testid='property-card']")

# Get the information from each parent
titles_page  <- properties %>% html_element("div[data-testid='title']") %>% html_text()
prices_page  <- properties %>% html_element("span[data-testid='price-and-discounted-price']") %>% html_text()
ratings_page <- properties %>% html_element("div[aria-label^='Punteggio di']") %>% html_text()
```

The full corrected loop is now:

```r
for (page_num in 1:35) {
   # Build the URL for the current page
   url <- modify_url(base_url, query = c(params, page = page_num))

   # Read the HTML of the specified page
   page <- read_html(url)

   # Parse out the parent node for each property card
   properties <- html_elements(page, xpath = ".//div[@data-testid='property-card']")

   # Now find the information within each parent
   titles_page  <- properties %>% html_element("div[data-testid='title']") %>% html_text()
   prices_page  <- properties %>% html_element("span[data-testid='price-and-discounted-price']") %>% html_text()
   ratings_page <- properties %>% html_element("div[aria-label^='Punteggio di']") %>% html_text()

   # Append the titles, ratings, prices from the current page to the vectors
   titles  <- c(titles, titles_page)
   prices  <- c(prices, prices_page)
   ratings <- c(ratings, ratings_page)
}
```
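Because all three vectors are now built from the same `properties` nodeset, they always have equal length, so `data.frame(titles, prices, ratings)` from the question lines up row by row, with NA marking the missing values. If numeric columns are wanted afterwards, here is a hedged sketch run after the loop — the cleaning regexes are an assumption on my part, not part of the answer, and would need adjusting for thousands separators:

```r
# Assemble the scraped vectors; NA marks a missing price or rating
hotel <- data.frame(titles, prices, ratings)

# Optional cleaning (assumption): strip currency symbols and text,
# and convert the Italian comma decimal in ratings to a dot
hotel$price_num  <- as.numeric(gsub("[^0-9.]", "", hotel$prices))
hotel$rating_num <- as.numeric(gsub(",", ".", gsub("[^0-9,.]", "", hotel$ratings)))
```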
Dave2e