I want to import a txt file that has a list of URLs, extract something from each one, and save the results in a CSV file, but I get stuck.
First I import the txt with no problem, but when I want to iterate over each row, I only extract from the first one.
library(rvest)
library(tidyr)
library(dplyr)
for (i in seq(list_url)) {
  text <- read_html(list_url$url[i]) %>%
    html_nodes("tr~ tr+ tr strong") %>%
    html_text()
}
I just get the result from the first URL, as a single value. I want a data frame with the extracts from all of the URLs.
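
To be clear about the output I am after: one row per extracted value, tagged with the URL it came from, all bound into one data frame and then written to a CSV. The following is only a sketch of that idea, reusing the same selector as in my loop (extracted.csv is just a placeholder name):

# rvest and dplyr already loaded above
results <- vector("list", nrow(list_url))

for (i in seq_len(nrow(list_url))) {
  # pull the text of the matching cells from this page
  page_text <- read_html(list_url$url[i]) %>%
    html_nodes("tr~ tr+ tr strong") %>%
    html_text()

  # keep the source URL next to every extracted value
  # (rep() also copes with pages that return zero matches)
  results[[i]] <- data.frame(
    url = rep(list_url$url[i], length(page_text)),
    value = page_text,
    stringsAsFactors = FALSE
  )
}

all_extracts <- bind_rows(results)
write.csv(all_extracts, "extracted.csv", row.names = FALSE)

Is accumulating into a list like this a reasonable direction, or is there a more idiomatic dplyr/tidyr way to do it?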
Edit: the list_url file is full of URLs like these:
http://consultas.pjn.gov.ar/cuantificacion/civil/vida_po_detalle_caso.php?numcas=_b8I7G9olKAukGNlsRE6RHSYaYPu8YLjhTEW15HEuj4.
http://consultas.pjn.gov.ar/cuantificacion/civil/vida_po_detalle_caso.php?numcas=ewwF4WmHAnOkCg8Y_XIFH705H_O5hJL9uB5hztOhrsE.
http://consultas.pjn.gov.ar/cuantificacion/civil/vida_po_detalle_caso.php?numcas=Z9BDo7ACNDbsUwTiVFTe9aKFfcLAxxnU2AtL6DCloX4.
http://consultas.pjn.gov.ar/cuantificacion/civil/vida_po_detalle_caso.php?numcas=NZPRa9SoKHVJQcZ64_4zVgcLSNKmHZ4MtorPu23MUPg.
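
In case it matters, a minimal way to get that file into the list_url data frame referenced above would be something like this, assuming one URL per line in the txt (urls.txt is just a placeholder name):

# read the txt into a data frame with the url column the loop expects,
# assuming one URL per line
list_url <- data.frame(
  url = readLines("urls.txt"),
  stringsAsFactors = FALSE
)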