
This is the program I've written:

    library(rvest)
    library(RCurl)
    library(XML)
    library(stringr)


    #Getting the number of Page
    getPageNumber <- function(URL){
      parsedDocument = read_html(URL)
      Sort1 <- html_nodes(parsedDocument, 'div')
      Sort2 <- Sort1[which(html_attr(Sort1, "class") == "pageNumbers al-pageNumbers")] 
      P <- str_count(html_text(Sort2), pattern = " \\d+\r\n")
      return(ifelse(length(P) == 0, 0, max(P)))
    }


    #Getting all articles based off of their DOI
    getAllArticles <-function(URL){
      parsedDocument = read_html(URL)
      Sort1 <- html_nodes(parsedDocument,'div')
      Sort2 <-  Sort1[which(html_attr(Sort1, "class") == "al-citation-list")]
      ArticleDOInumber = trimws(gsub(".*10.1093/dnares/","",html_text(Sort2)))
      URL3 <- "https://doi.org/10.1093/dnares/"
      URL4 <- paste(URL3, ArticleDOInumber, sep = "")
      return(URL4)
    }


    Title <- function(parsedDocument){
      Sort1 <- html_nodes(parsedDocument, 'h1')
      Title <- gsub("<h1>\\n|\\n</h1>","",Sort1)
      return(Title)
    }


    #main function with input as parameter year
    findURL <- function(year_chosen){
      if(year_chosen >= 1994){
      noYearURL = glue::glue("https://academic.oup.com/dnaresearch/search-results?rg_IssuePublicationDate=01%2F01%2F{year_chosen}%20TO%2012%2F31%2F{year_chosen}")
      pagesURl = "&fl_SiteID=5275&startpage="
      URL = paste(noYearURL, pagesURl, sep = "")
      #URL is working with parameter year_chosen
      Page <- getPageNumber(URL)
      

      Page2 <- 0
      while(Page < Page2 | Page != Page2){
        Page <- Page2
        URL3 <- paste(URL, Page-1, sep = "")
        Page2 <- getPageNumber(URL3)    
      }
      R_Data <- data.frame()
      for(i in 1:Page){ #0:Page-1
        URL2 <- getAllArticles(paste(URL, i, sep = ""))
        for(j in 1:(length(URL2))){
          parsedDocument <- read_html(URL2[j])
          print(URL2[j])
          R <- data.frame("Title" = Title(parsedDocument),stringsAsFactors = FALSE)
          #R <- data.frame("Title" = Title(parsedDocument), stringsAsFactors = FALSE)
          R_Data <- rbind(R_Data, R)
        } 
      }
      paste(URL2)
      suppressWarnings(write.csv(R_Data, "DNAresearch.csv", row.names = FALSE, sep = "\t"))
      #return(R_Data)
      } else {
        print("The Year you provide is out of range, this journal only contain articles from 2005 to present")
      }
    }

    findURL(2003)

The output of my code is as follows:

[1] "https://doi.org/10.1093/dnares/10.6.249"
[1] "https://doi.org/10.1093/dnares/10.6.263"
[1] "https://doi.org/10.1093/dnares/10.6.277"
[1] "https://doi.org/10.1093/dnares/10.6.229"
[1] "https://doi.org/10.1093/dnares/10.6.239"
[1] "https://doi.org/10.1093/dnares/10.6.287"
[1] "https://doi.org/10.1093/dnares/10.5.221"
[1] "https://doi.org/10.1093/dnares/10.5.203"
[1] "https://doi.org/10.1093/dnares/10.5.213"
[1] "https://doi.org/10.1093/dnares/10.4.137"
[1] "https://doi.org/10.1093/dnares/10.4.147"
[1] "https://doi.org/10.1093/dnares/10.4.167"
[1] "https://doi.org/10.1093/dnares/10.4.181"
[1] "https://doi.org/10.1093/dnares/10.4.155"
[1] "https://doi.org/10.1093/dnares/10.3.115"
[1] "https://doi.org/10.1093/dnares/10.3.85"
[1] "https://doi.org/10.1093/dnares/10.3.123"
[1] "https://doi.org/10.1093/dnares/10.3.129"
[1] "https://doi.org/10.1093/dnares/10.3.97"
[1] "https://doi.org/10.1093/dnares/10.2.59"
[1] "https://doi.org/10.1093/dnares/10.6.249"
[1] "https://doi.org/10.1093/dnares/10.6.263"

I'm trying to scrape a journal, using the year as a parameter. I've scraped one page, but when the loop is supposed to move to the next page it just goes back to the top of the same page and loops over the same data. As far as I can tell my code should be right, and I don't understand why this is happening. Thank you in advance.

bkush98

1 Answer


It is not that it is reading the same URL; it is that you are selecting the wrong node, which happens to yield repeating info. As I mentioned in your last question, you need to re-work your Title function. The Title re-write below extracts the actual article title based on its class name and a single-node match.

Please also note the removal of your sep argument from write.csv (write.csv ignores sep and only throws a warning). There are some other areas of the code that look like they could probably be simplified in terms of logic; see the note after the full script.


Title function:

Title <- function(parsedDocument) {
  # single-node match on the class that carries the article title,
  # then strip the embedded line breaks/indentation from its text
  Title <- parsedDocument %>%
    html_node(".article-title-main") %>%
    html_text() %>%
    gsub("\\r\\n\\s+", "", .) %>%
    trimws(.)
  return(Title)
}
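
For example, applied to the first DOI from your output, it returns the article title as a single clean string (assuming the page is reachable):

doc <- read_html("https://doi.org/10.1093/dnares/10.6.249")
Title(doc)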

R:

library(rvest)
library(XML)
library(stringr)


# Getting the number of Page
getPageNumber <- function(URL) {
  # print(URL)
  parsedDocument <- read_html(URL)
  Sort1 <- html_nodes(parsedDocument, "div")
  Sort2 <- Sort1[which(html_attr(Sort1, "class") == "pagination al-pagination")]
  P <- str_count(html_text(Sort2), pattern = " \\d+\r\n")
  return(ifelse(length(P) == 0, 0, max(P)))
}

# Getting all articles based off of their DOI
getAllArticles <- function(URL) {
  print(URL)
  parsedDocument <- read_html(URL)
  Sort1 <- html_nodes(parsedDocument, "div")
  Sort2 <- Sort1[which(html_attr(Sort1, "class") == "al-citation-list")]
  ArticleDOInumber <- trimws(gsub(".*10.1093/dnares/", "", html_text(Sort2)))
  URL3 <- "https://doi.org/10.1093/dnares/"
  URL4 <- paste(URL3, ArticleDOInumber, sep = "")
  return(URL4)
}


Title <- function(parsedDocument) {
  Title <- parsedDocument %>%
    html_node(".article-title-main") %>%
    html_text() %>%
    gsub("\\r\\n\\s+", "", .) %>%
    trimws(.)
  return(Title)
}


# main function with input as parameter year
findURL <- function(year_chosen) {
  if (year_chosen >= 1994) {
    noYearURL <- glue::glue("https://academic.oup.com/dnaresearch/search-results?rg_IssuePublicationDate=01%2F01%2F{year_chosen}%20TO%2012%2F31%2F{year_chosen}")
    pagesURl <- "&fl_SiteID=5275&page="
    URL <- paste(noYearURL, pagesURl, sep = "")
    # URL is working with parameter year_chosen
    Page <- getPageNumber(URL)


    if (Page == 5) {
      Page2 <- 0
      while (Page < Page2 | Page != Page2) {
        Page <- Page2
        URL3 <- paste(URL, Page - 1, sep = "")
        Page2 <- getPageNumber(URL3)
      }
    }
    R_Data <- data.frame()
    for (i in 1:Page) {
      URL2 <- getAllArticles(paste(URL, i, sep = ""))
      for (j in 1:(length(URL2))) {
        parsedDocument <- read_html(URL2[j])
        #print(URL2[j])
        #print(Title(parsedDocument))
        R <- data.frame("Title" = Title(parsedDocument), stringsAsFactors = FALSE)
        #print(R)
        R_Data <- rbind(R_Data, R)
      }
    }
    write.csv(R_Data, "Group4.csv", row.names = FALSE)
  } else {
    print("The Year you provide is out of range, this journal only contain articles from 2005 to present")
  }
}

findURL(2003)
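
On the simplification point mentioned above: the while loop (wrapped in if (Page == 5)) only ever runs when getPageNumber initially reports exactly five pages, and it re-derives the page count page by page. A minimal sketch of how that whole section could collapse, assuming getPageNumber() already reports the total page count when called on the first results page:

Page <- getPageNumber(URL)
R_Data <- data.frame()
for (i in seq_len(Page)) { # seq_len() skips the loop cleanly when Page is 0
  URL2 <- getAllArticles(paste0(URL, i))
  for (j in seq_along(URL2)) {
    parsedDocument <- read_html(URL2[j])
    R <- data.frame("Title" = Title(parsedDocument), stringsAsFactors = FALSE)
    R_Data <- rbind(R_Data, R)
  }
}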
QHarr
  • Yes, thank you for the clear explanation I see what you meant by the title function! Best – bkush98 Mar 23 '21 at 15:01
  • I have another question, and this one is about using a function similar to Title to get the entire text of an article. The function would be called FullText. The only issue is that in this journal the articles' full text is in a pdf file, so I don't think it can be scraped. Thank you for all the help. – bkush98 Mar 23 '21 at 17:14
  • https://stackoverflow.com/questions/38592600/how-to-read-pdf-file-in-r so you just need to extract the pdf link and pass it to the function from that package. – QHarr Mar 23 '21 at 17:16
  • It would be getting every link for a full text. I know there's a function to read pdfs in R, but from the journal pages I believe there's no way of getting the link for the pdf needed to scrape. – bkush98 Mar 23 '21 at 17:21
  • Where do you find them then? You mentioned pdf files. – QHarr Mar 23 '21 at 17:39
  • An example is in the journal entry https://academic.oup.com/dnaresearch/article/1/6/297/541225?searchresult=1 the full text is in a pdf link. – bkush98 Mar 23 '21 at 17:43
  • you can extract the pdf link with `html_node('.PdfOnlyLink .article-pdfLink') %>% html_attr('href')` on the parsed document node. – QHarr Mar 23 '21 at 17:58
  • I keep getting NA as an output when I use that. How did you write the function for it to work? – bkush98 Mar 23 '21 at 18:11
  • Also, if I were to change the NA in my csv output into something like "Not Able to Find", would you recommend writing a separate function? Or would it be easy to add it into the main one? – bkush98 Mar 23 '21 at 18:13
  • If you have the time, I moved my questions to a different post; I figured this discussion thread was getting too long. It is https://stackoverflow.com/questions/66769349/r-program-is-not-outputting-the-correct-scraped-journal-entries Thank you again for all the help – bkush98 Mar 23 '21 at 18:42
  • Oops. I was sleeping – QHarr Mar 23 '21 at 22:05
  • No Problem at all! – bkush98 Mar 23 '21 at 22:49
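
Following up on the pdf discussion in the comments: a minimal sketch of a FullText helper, combining the .PdfOnlyLink .article-pdfLink selector suggested in the comments with pdftools::pdf_text from the linked question. It also substitutes the "Not Able to Find" placeholder mentioned above instead of NA. This is untested against the site; in particular, the assumption that the href is site-relative is not verified:

library(rvest)
library(pdftools) # provides pdf_text(); see the question linked in the comments

FullText <- function(parsedDocument) {
  # selector from the comment thread; html_attr() yields NA if no node matches
  pdfLink <- parsedDocument %>%
    html_node(".PdfOnlyLink .article-pdfLink") %>%
    html_attr("href")
  if (is.na(pdfLink)) return("Not Able to Find")
  # assumption: the href is site-relative, so prepend the journal host
  pdfURL <- paste0("https://academic.oup.com", pdfLink)
  paste(pdf_text(pdfURL), collapse = "\n")
}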