
I have a library of words and punctuation that I am trying to turn into a data frame for later use. The original data set has 2,000,000 rows including punctuation, but it is a list, and I am having trouble separating the punctuation from the rest of the words. I would like a space between each punctuation character and the surrounding words. I can easily do this in Excel with find and replace, but I want to do it in R. Below is an example called `df`, the output I want called `output`, and the code I have so far. I tried `str_split` on "How", but it deleted "How " and returned an empty string "".

#--------Upload 1st dataset and edit-------#
library("stringr")
sent1<-c("How did Quebec? 1 2 3")
sent2<-c("Why does valve = .245? .66")
sent3<-c("How do I use a period (.) comma [,] and hyphen {-} to columns?")
df <- data.frame(text = c(sent1,sent2,sent3))
df <- as.matrix(df)
str_split(df, " ")#spaces

#-------------output-------------#
words1<-c("How", "did" ,"Quebec"," ? ","1", "2" ,"3")
words2<-c('Why', "does", "valve"," = ",".245","?" ,".66")
words3<-c("How" ,"do", "I", "use", "a", "period", '(',".",')', "comma" ,'[',",","]" ,"and" ,"hyphen" ,"{","-",'}' ,"to" ,"columns",'?')
output<-data.frame(words1,words2,words3)
FrosyFeet456

2 Answers


Here is a rough concept that gets the job done:

First split on all characters that are not word characters (inspired by another answer). Then get the maximum length and fill in the others to have the same length.

dfsplt <- strsplit( gsub("([^\\w])","~\\1~", df, perl = TRUE), "~")
dfsplt <- lapply(dfsplt, function(x) x[!x %in% c("", " ")])
n <- max(lengths(dfsplt))
sapply(dfsplt, function(x) {x <- rep(x, ceiling(n / length(x))); x[1:n]})
# or
sapply(dfsplt, function(x) x[(1:n - 1) %% length(x) + 1])

      [,1]     [,2]    [,3]     
 [1,] "How"    "Why"   "How"    
 [2,] "did"    "does"  "do"     
 [3,] "Quebec" "valve" "I"      
 [4,] "?"      "="     "use"    
 [5,] "1"      "."     "a"      
 [6,] "2"      "245"   "period" 
 [7,] "3"      "?"     "("      
 [8,] "How"    "."     "."      
 [9,] "did"    "66"    ")"      
[10,] "Quebec" "Why"   "comma"  
[11,] "?"      "does"  "["      
[12,] "1"      "valve" ","      
[13,] "2"      "="     "]"      
[14,] "3"      "."     "and"    
[15,] "How"    "245"   "hyphen" 
[16,] "did"    "?"     "{"      
[17,] "Quebec" "."     "-"      
[18,] "?"      "66"    "}"      
[19,] "1"      "Why"   "to"     
[20,] "2"      "does"  "columns"
[21,] "3"      "valve" "?"  
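
If the goal is a data.frame shaped like the desired `output`, the recycled matrix can be wrapped with `as.data.frame()`. A minimal self-contained sketch (the column names `words1`..`words3` are an assumption matching the question's `output`):

```r
sent1 <- "How did Quebec? 1 2 3"
sent2 <- "Why does valve = .245? .66"
sent3 <- "How do I use a period (.) comma [,] and hyphen {-} to columns?"
txt <- c(sent1, sent2, sent3)

# Split on non-word characters, as above
dfsplt <- strsplit(gsub("([^\\w])", "~\\1~", txt, perl = TRUE), "~")
dfsplt <- lapply(dfsplt, function(x) x[!x %in% c("", " ")])

# Recycle each token vector to the maximum length and bind as columns
n <- max(lengths(dfsplt))
m <- sapply(dfsplt, function(x) x[(1:n - 1) %% length(x) + 1])

# Wrap the matrix as a data.frame with columns named like `output`
out <- as.data.frame(m, stringsAsFactors = FALSE)
names(out) <- paste0("words", seq_len(ncol(out)))
```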
s_baldur

Here is an option where we create a space around each punctuation character and then `scan` each string separately:

do.call(cbind, lapply(gsub("([[:punct:]])", " \\1 ", 
       df$text), function(x) scan(text = x, what = "", quiet = TRUE)))
#      [,1]     [,2]    [,3]     
# [1,] "How"    "Why"   "How"    
# [2,] "did"    "does"  "do"     
# [3,] "Quebec" "valve" "I"      
# [4,] "?"      "="     "use"    
# [5,] "1"      "."     "a"      
# [6,] "2"      "245"   "period" 
# [7,] "3"      "?"     "("      
# [8,] "How"    "."     "."      
# [9,] "did"    "66"    ")"      
#[10,] "Quebec" "Why"   "comma"  
#[11,] "?"      "does"  "["      
#[12,] "1"      "valve" ","      
#[13,] "2"      "="     "]"      
#[14,] "3"      "."     "and"    
#[15,] "How"    "245"   "hyphen" 
#[16,] "did"    "?"     "{"      
#[17,] "Quebec" "."     "-"      
#[18,] "?"      "66"    "}"      
#[19,] "1"      "Why"   "to"     
#[20,] "2"      "does"  "columns"
#[21,] "3"      "valve" "?"    
akrun
    Nice. So `cbind()` does recycling! Do you have a comment on the cons/pros of `scan()` versus `strsplit()`? – s_baldur Dec 21 '18 at 10:04
    @snoram `cbind` does the recycling with a warning. I think `strsplit` would be faster, but here I am using `scan` as it returns a vector instead of a `list` (which may need to be `unlist`ed) – akrun Dec 21 '18 at 10:06
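
As the comment notes, `scan()` yields a character vector per string while `strsplit()` yields a list. A small sketch (not from the answers) illustrating the difference on the first example sentence:

```r
s <- "How did Quebec? 1 2 3"
spaced <- gsub("([[:punct:]])", " \\1 ", s)

# scan() reads whitespace-separated fields straight into a character vector
v <- scan(text = spaced, what = "", quiet = TRUE)

# strsplit() returns a list of vectors, so a single string needs unlist()/[[1]]
u <- unlist(strsplit(spaced, "\\s+"))

identical(v, u)  # both give c("How", "did", "Quebec", "?", "1", "2", "3")
```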