I am trying to download several URLs through R using PhantomJS, as the websites contain JavaScript. I am able to download a single webpage using the code in the last code box below, but I need it to work for multiple websites. To find that code I used this tutorial: http://flovv.github.io/Scrape-JS-Sites/ I have a character vector of URLs, urls[i], and another of destinations where the pages should be saved, destinations[i] (example values after the loop below). Previously I tried:
for (i in seq_along(urls)) {
  download.file(urls[i], destinations[i], mode = "wb")
}
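For concreteness, the two vectors hold entries along these lines (made-up placeholder values):

urls <- c("http://example.com/page1", "http://example.com/page2")
destinations <- c("page1.html", "page2.html")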
However, this didn't work, as the websites contain JavaScript.
I tried to follow the answers in this post but was quite confused: Scraping multiple URLs by looping in PhantomJS
writeLines("var url = 'url link';
var page = new WebPage();
var fs = require('fs');
page.open(url, function (status) {
just_wait();
});
function just_wait() {
setTimeout(function() {
fs.write('1.html', page.content, 'w');
phantom.exit();
}, 2500);
}
", con = "scrape.js")
js_scrape <- function(url = "url link",
                      js_path = "scrape.js",
                      phantompath = "phantomjs") {
  # overwrite the first line of scrape.js with the URL to fetch
  lines <- readLines(js_path)
  lines[1] <- paste0("var url = '", url, "';")
  writeLines(lines, js_path)
  # run PhantomJS on the patched script
  command <- paste(phantompath, js_path)
  system(command)
}
js_scrape()
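What I am aiming for is roughly the sketch below. The js_scrape_to function, its outfile argument, and the sub() patching of the fs.write line are my own guesses at how to generalise the script (scrape.js hard-codes '1.html' as the output file), so I'm not sure this is the right approach:

# rough sketch; js_scrape_to and outfile are hypothetical
js_scrape_to <- function(url, outfile,
                         js_path = "scrape.js",
                         phantompath = "phantomjs") {
  lines <- readLines(js_path)
  lines[1] <- paste0("var url = '", url, "';")
  # swap whatever filename fs.write currently uses for this iteration's target
  lines <- sub("fs\\.write\\('[^']*'",
               paste0("fs.write('", outfile, "'"),
               lines)
  writeLines(lines, js_path)
  system(paste(phantompath, js_path))
}

for (i in seq_along(urls)) {
  js_scrape_to(urls[i], destinations[i])
}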
How can I adapt this code to loop over all of urls and destinations? Please help me!