I think the problem is related to pagination: the API only returns 10 results per call, and the number of pages varies depending on what term I'm scraping for. I believe the error originates in the bolded line of code, second from last.
library(jsonlite)  # needed for fromJSON()

term <- "somalia+united+states"  # need to use + to string together separate words
begin_date <- "19900101"
end_date <- "20200101"
baseurl <- paste0("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=", term,
                  "&begin_date=", begin_date, "&end_date=", end_date,
                  "&facet_filter=true&api-key=", NYTIMES_KEY)
# note: paste0() has no sep argument; the stray sep="" was being pasted into the URL
initialQuery <- fromJSON(baseurl)
maxPages <- round((initialQuery$response$meta$hits[1] / 10) - 1)  # <-- error seems to originate here
View(initialQuery)
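For context, here is roughly how I planned to collect the results once `maxPages` works. This is only a sketch, assuming the Article Search API's standard `page` query parameter and a pause between calls to stay under the rate limit; `baseurl` and `maxPages` are the objects built above.

```r
library(jsonlite)

# Fetch each page of 10 results and keep the article metadata.
pages <- vector("list", maxPages + 1)
for (i in 0:maxPages) {
  pageurl <- paste0(baseurl, "&page=", i)  # pages are numbered from 0
  pages[[i + 1]] <- fromJSON(pageurl, flatten = TRUE)$response$docs
  Sys.sleep(6)  # avoid hitting the per-minute rate limit
}

# Pool all pages into one data frame of articles.
articles <- do.call(rbind, pages)
```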
I tried altering the search terms, and the maxPages value does change in the global environment window. The tutorial I used computes `round((hits[1] / 10) - 1)` because the author had 97 hits: at 10 results per page, that is 9 full pages plus one 70%-full page, 10 pages in all. I'm not sure where he finds out that he has 97 hits, and knowing that would help me.
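To check my understanding of the arithmetic, here is a small worked example with a hypothetical hit count of 97 (my assumption, matching the tutorial's number); since pages are numbered from 0, the index of the last page can also be written with `ceiling()`:

```r
hits <- 97  # hypothetical total from initialQuery$response$meta$hits[1]

# The API serves 10 results per page, numbered from page 0,
# so 97 hits span pages 0..9 and the last page index is 9.
maxPages <- ceiling(hits / 10) - 1
maxPages  # 9
```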
If there is an easier way to build a corpus of NYT articles that mention Somalia from 1990-2010, that would also be incredibly helpful. Please let me know what you think, and thanks for the help.