
I want to download all the files named "listings.csv.gz" that refer to US cities from http://insideairbnb.com/get-the-data.html. I could write out each link by hand, but is it possible to do this in a loop?

In the end I'll keep only a few columns from each file and merge them into one file.

Since the problem was solved thanks to @CodeNoob, I'd like to share how it all worked out:

library(rvest)
library(dplyr)
library(purrr)

page <- read_html("http://insideairbnb.com/get-the-data.html")

# Get all hrefs (i.e. all links present on the website)
links <- page %>%
  html_nodes("a") %>%
  html_attr("href")

# Filter for listings.csv.gz, USA cities, data for March 2019
wanted <- grep('listings.csv.gz', links)
USA <- grep('united-states', links)
wanted.USA <- wanted[wanted %in% USA]
wanted.links <- links[wanted.USA]
wanted.links <- grep('2019-03', wanted.links, value = TRUE)

wanted.cols <- c("host_is_superhost", "summary", "host_identity_verified", "street", 
                "city", "property_type", "room_type", "bathrooms", 
                "bedrooms", "beds", "price", "security_deposit", "cleaning_fee", 
                "guests_included", "number_of_reviews", "instant_bookable", 
                "host_response_rate", "host_neighbourhood", 
                "review_scores_rating", "review_scores_accuracy","review_scores_cleanliness",
                "review_scores_checkin" ,"review_scores_communication", 
                "review_scores_location", "review_scores_value", "space", 
                "description", "host_id", "state", "latitude", "longitude")


# Read one gzipped CSV straight from its URL and keep only the wanted columns
read.gz.url <- function(link) {
  con <- gzcon(url(link))
  df  <- read.csv(textConnection(readLines(con)))
  close(con)
  df  <- df %>% select(all_of(wanted.cols)) %>%
    mutate(source.url = link)
  df
}

all.df <- list()
for (i in seq_along(wanted.links)) {
  all.df[[i]] <- read.gz.url(wanted.links[i])
}

all.df <- map(all.df, as_tibble)
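
To finish the merge into one file that I mentioned above, the list of tibbles just needs to be bound together and written out (a small extra step; the output file name below is only an example):

# Combine the per-city tibbles into a single data frame and save it
# (the file name is only a placeholder)
merged <- bind_rows(all.df)
write.csv(merged, "usa_listings_2019-03.csv", row.names = FALSE)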
    What code have you tried yourself? Have you tried using https://stackoverflow.com/questions/50164561/download-files-with-specific-extension-from-a-website – Hector Haffenden Apr 25 '19 at 22:07

1 Answer


You can extract all the links on the page, filter for the ones containing listings.csv.gz, and then download them in a loop:

library(rvest)
library(dplyr)

# Get all download links

page <- read_html("http://insideairbnb.com/get-the-data.html")

# Get all hrefs (i.e. all links present on the website)
links <- page %>%
  html_nodes("a") %>%
  html_attr("href")

# Filter for listings.csv.gz
wanted <- grep('listings.csv.gz', links)
wanted.links <- links[wanted]

for (link in wanted.links) {
  con <- gzcon(url(link))
  txt <- readLines(con)
  close(con)  # close the gz connection once the file has been read
  df <- read.csv(textConnection(txt))
  # Do what you want with df here
}
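
If you would rather keep the raw .gz files on disk instead of reading them straight into memory, base R's download.file() works too. A sketch, where the destination folder and the index-based file names are just assumptions (every URL ends in the same listings.csv.gz, so basename() alone would not give unique names):

# Save each archive locally; the folder and naming scheme are only examples
dir.create("airbnb-data", showWarnings = FALSE)
for (i in seq_along(wanted.links)) {
  destfile <- file.path("airbnb-data", paste0("listings-", i, ".csv.gz"))
  download.file(wanted.links[i], destfile, mode = "wb")
}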

Example: Download and combine the files
To get the result you want, I would suggest writing a download function that keeps only the columns you want, and then combining the results into a single dataframe, for example like this:

read.gz.url <- function(link) {
  con <- gzcon(url(link))
  df  <- read.csv(textConnection(readLines(con)))
  close(con)
  df  <- df %>% select(c('calculated_host_listings_count_shared_rooms', 'cancellation_policy')) %>% # random columns I chose
    mutate(source.url = link) # You may need to remember the origin of each row
  df
}

all.df <- do.call('rbind', lapply(head(wanted.links,2), read.gz.url)) 

Note: I only tested this on the first two files, since they are pretty large.
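
The same combination can also be done with dplyr's bind_rows(), which fills any column that is missing from one of the files with NA instead of raising an error (a small variation on the line above):

# bind_rows() is more forgiving than rbind() when columns differ between files
all.df <- bind_rows(lapply(head(wanted.links, 2), read.gz.url))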

– CodeNoob
  • Thank you, it was helpful. I also filtered them by country and date, and ended up with a nice list of tibbles for each city. The only problem is that R is so slow at this kind of thing. – Aliya Davletshina Apr 30 '19 at 21:23