I am using the RedditExtractoR package to obtain information about threads, users and comments on Reddit. I have a vector of author names, and I am using RedditExtractoR::get_user_content, which retrieves each user's information from the official Reddit API. The problem is that some users are apparently no longer available, so the code stops because the API cannot obtain their info (they no longer exist). This is the code I am using (obviously the real vector is much bigger, and it was not typed by hand: the authors were pulled from the API with find_thread_urls and get_thread_content):
list <- c("User1", "User2", ... "UserN")
users_about <- get_user_content(list)
This is the error message that I receive:
Error in page[[content_type]] : subscript out of bounds
In addition: Warning messages:
1: In file(con, "r") :
cannot open URL 'https://www.reddit.com/user/User1.json?limit=100': HTTP status was '403 Forbidden'
2: In h(simpleError(msg, call)) :
error in evaluating the argument 'content' in selecting a method for function 'fromJSON': cannot open the connection to 'https://www.reddit.com/user/User1.json?limit=100'
Since the list contains two types of users, 1) those that are still available and 2) those that ARE NOT, I want the code to keep running when that error appears (it means the API no longer recognizes that author).
The problem is that I cannot know in advance which users are available (the only way would be to check each one manually, and I can't do that for 500 users). So is there a way to keep the code running despite the error? In other words, if User2's information is not available from the API, I want the code to skip that user and pull User3, and so on. Is there any way to do that?
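One common way to do this in base R is to wrap each API call in `tryCatch`, so an error for one user returns `NULL` instead of aborting the whole loop. A minimal sketch under that assumption (`safe_get_user` is a name I made up; only `tryCatch`, `lapply`, and `Filter` are standard base R, and `get_user_content` is the RedditExtractoR function from the question):

```r
# Wrapper: returns NULL for a user the API no longer recognizes,
# instead of stopping the whole run with an error.
safe_get_user <- function(author) {
  tryCatch(
    RedditExtractoR::get_user_content(author),
    error = function(e) {
      message("Skipping unavailable user: ", author)
      NULL
    }
  )
}

authors <- c("User1", "User2", "UserN")    # your full author vector

results <- lapply(authors, safe_get_user)  # errors no longer abort the loop
names(results) <- authors

# Keep only the users whose lookup succeeded
users_about <- Filter(Negate(is.null), results)
```

The warnings in your output (`403 Forbidden`, `cannot open the connection`) do not stop execution by themselves; it is the subsequent `subscript out of bounds` error that does, and the `error` handler above catches it. `names(results) <- authors` also lets you see afterwards which users failed, by comparing `names(users_about)` with `authors`.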