I have written an rvest scraper for a job listing site. Unfortunately, it takes forever to loop through just 100 pages. Is there a quick fix to make this faster? Below is the basic structure I'm using:
library(rvest)
library(stringi)

for (i in beginning:end) {
  # Parse each listing page once, then reuse the parsed document
  page <- read_html(paste0("https://www.jobsite.com", links[[1]][i]))

  address[[i]] <- html_nodes(page, css = selector_name) %>%
    html_text()
  employer[[i]] <- address[[i]][3]  # the third text node holds the employer

  rating[[i]] <- html_nodes(page, css = selector_rating) %>%
    html_attr("data-jobsite") %>%
    as.numeric()
  rating[[i]] <- round(rating[[i]] * (10 / 6))  # rescale from a 6- to a 10-point scale
  rating[[i]] <- ifelse(length(rating[[i]]) == 0, 1, rating[[i]])

  title[[i]] <- html_nodes(page, css = ".xsmall-10") %>%
    html_text()
  title[[i]] <- stri_replace_all_regex(title[[i]], "\\s", "")  # strip spaces, \r\n, etc.

  dd[[i]] <- html_nodes(page, css = ".item-price") %>%
    html_text()
}
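For context, most of the time here is spent waiting on the network, not in R: each iteration blocks on one HTTP round trip before starting the next. One common fix is to fetch the pages concurrently. Below is a minimal sketch of that idea, assuming the same `links`, `beginning`/`end`, and selector variables as above, plus the `future.apply` package (not in the original code). The per-page work is wrapped in a function and mapped in parallel, and the result vectors are assembled once at the end instead of grown inside a loop.

```r
library(rvest)
library(stringi)
library(future.apply)  # assumed available; attaches `future` and provides future_lapply()

plan(multisession)  # spin up one background R worker per core

# Scrape one listing page and return all fields as a named list
scrape_page <- function(link) {
  page <- read_html(paste0("https://www.jobsite.com", link))

  address <- html_text(html_nodes(page, selector_name))
  rating  <- as.numeric(html_attr(html_nodes(page, selector_rating), "data-jobsite"))
  rating  <- if (length(rating) == 0) 1 else round(rating * (10 / 6))

  list(
    address  = address,
    employer = address[3],  # assumption carried over from the loop above
    rating   = rating,
    title    = stri_replace_all_regex(
      html_text(html_nodes(page, ".xsmall-10")), "\\s", ""),
    dd       = html_text(html_nodes(page, ".item-price"))
  )
}

# Fetch all pages concurrently instead of one at a time
results <- future_lapply(links[[1]][beginning:end], scrape_page)

# Collect each field across pages
address  <- lapply(results, `[[`, "address")
employer <- lapply(results, `[[`, "employer")
rating   <- lapply(results, `[[`, "rating")
title    <- lapply(results, `[[`, "title")
dd       <- lapply(results, `[[`, "dd")
```

`plan(multisession)` works on every OS; with, say, 8 workers you can expect roughly an 8x reduction in wall-clock time for I/O-bound scraping. Do check the site's terms and robots.txt first, though, and consider throttling (e.g. a short `Sys.sleep()` inside `scrape_page`) so the parallel requests don't hammer the server.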