I am trying to perform text analytics on a text file. The code I have written to import and tokenize the text is:
library(tokenizers)
library(SnowballC)
my_data <- read.delim("5KjlUO.txt")
tokenize_words(my_data$ACT.I)  # first attempt: tokenize a single column
tokenize_words(my_data)        # second attempt: tokenize the whole data frame
When I run this, I get the following error:
Error in check_input(x) : Input must be a character vector of any length or a list of character vectors, each of which has a length of 1.
Can someone help me resolve this issue?