
I have a data frame data1 with cleaned strings of text matched to their ids

    # A tibble: 2,000 x 2
         id text                                                                    
      <int> <chr>                                                                   
    1     1 decent scene guys visit spanish lady hilarious flamenco music background re…
    2     3 movie beautiful plot depth kolossal scenes battles moral rationale br br conclusion wond…
    3     4 fan scream killing astonishment story summarized don time move ii won regret plot ironical
    4     5 mistake film guess minutes clunker fought hard stay seat lose hours life feeling br his…
    5     6 phoned awful bed dog ranstuck br br positive grooming eldest daughter beeeatch br ous…
    # … with 1,990 more rows

And I have created a new table freq that, for every word, gives the n, tf, idf and tf_idf. In order, the columns of freq indicate id, word, n, tf, idf, tf_idf:

# A tibble: 112,709 x 6
      id word           n    tf   idf tf_idf
   <int> <chr>      <int> <dbl> <dbl>  <dbl>
 1   335 starcrash      1 0.5    7.60   3.80
 2  2974 carly          1 0.5    6.50   3.25
 3  1796 phillips       1 0.5    5.81   2.90
 4  1796 eric           1 0.5    5.40   2.70
 5  1398 wilson         1 0.5    5.20   2.60
 6   684 apolitical     1 0.333  7.60   2.53
 7  1485 saimin         1 0.333  7.60   2.53
 8  1398 charlie        1 0.5    4.77   2.38
 9  2733 shouldn        1 0.5    4.71   2.36
10  2974 jones          1 0.5    4.47   2.23
# … with 112,699 more rows

I am trying to create a loop that goes through this second table and, for every word whose tf is lower than the mean tf of all the other words, substitutes that word in data1 with its closest word2vec match. I have tried the function

    replace_word <- function(x) {
      x <- hunspell_suggest(x)
      x <- mutate(x)
      p <- system.file(package = "word2vec", "models", "example.bin")
      m <- read.word2vec(p)
      s <- predict(m, x, type = 'nearest', top_n = 1)
      paste0(s)
    }
  

But when I run it, it goes into an infinite loop. I originally wanted to check first whether the spelling of each word was correct, but because some words are not in the dictionary I kept getting errors. Since I have never done anything like this before, I really don't know how to make it work. Could someone please help?

Thank you

Mary

2 Answers


Going by the text of your question, I think you are looking for a way to selectively update the value of the column named word in a data frame called freq, using a specialized function to find a replacement value, but only for rows where the value of tf is below a set threshold. Here's an example using a tidyverse approach, with some simplifications with regard to your word replacement algorithm.

library(tidyverse)

# a placeholder for your word replacement function
replace_word <- function(x) {
    paste0(x, "*")
}

# Creating some simplified example data to work with
freq <- tibble(
    id = c(1, 2, 3, 4, 5),
    word = c("aa", "bb", "cc", "dd", "ee"),
    tf = c(0.001, 0.003, 0.005, 0.007, 0.009)
) 

print(freq)

# A tibble: 5 x 3
     id word     tf
  <dbl> <chr> <dbl>
1     1 aa    0.001
2     2 bb    0.003
3     3 cc    0.005
4     4 dd    0.007
5     5 ee    0.009

# Making changes to a column using `mutate()` and `if_else()` to do so conditionally.
freq <- freq %>%
    mutate(
        word = if_else(tf < 0.007, replace_word(word), word)
    )
    
print(freq)

# A tibble: 5 x 3
     id word     tf
  <dbl> <chr> <dbl>
1     1 aa*   0.001
2     2 bb*   0.003
3     3 cc*   0.005
4     4 dd    0.007
5     5 ee    0.009

The first 3 values of word are updated with stars. Does that help?
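If the cutoff should be the mean of all tf values rather than a fixed number, as your question suggests, you could compute it once before the `mutate()` call. A small variation on the code above, using the same example data (with mean tf = 0.005, only the first two rows change):

```r
library(tidyverse)

# placeholder for the real word replacement function
replace_word <- function(x) {
    paste0(x, "*")
}

freq <- tibble(
    id = c(1, 2, 3, 4, 5),
    word = c("aa", "bb", "cc", "dd", "ee"),
    tf = c(0.001, 0.003, 0.005, 0.007, 0.009)
)

# compute the mean tf once, then use it as the conditional threshold
threshold <- mean(freq$tf)

freq <- freq %>%
    mutate(
        word = if_else(tf < threshold, replace_word(word), word)
    )
```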

Damian
  • Thank you for your reply. The idea is correct and the code works, but I would like to substitute the word with frequency lower than the mean with a synonym for it. Could you please help me understand how I can do it in the initial function you wrote? – Mary Mar 29 '21 at 21:22
  • Could you explain the process for identifying a synonym for a particular word? Is that where word2vec fits in? – Damian Mar 30 '21 at 14:06
  • Perhaps this R package can do that part https://cran.r-project.org/web/packages/word2vec/index.html – Damian Mar 30 '21 at 14:11
  • Yes, I wanted to use word2vec to find synonyms, using the closest match to the word, but I get errors with words that are not in the dictionary (i.e. personal names or words in other languages). I have updated the question with the function I made, could you please check it? – Mary Mar 30 '21 at 14:33

Maybe this code is what you are looking for. You can also use a pretrained word2vec model; in the example below the word2vec model is trained on your data (more info at https://www.bnosac.be/index.php/blog/100-word2vec-in-r).

library(word2vec)
library(udpipe)
data(brussels_reviews, package = "udpipe")
x <- subset(brussels_reviews, language == "nl")

data1 <- data.frame(id = x$id, text = tolower(x$feedback), stringsAsFactors = FALSE) 
str(data1)
#> 'data.frame':    500 obs. of  2 variables:
#>  $ id  : int  19991431 21054450 22581571 23542577 40676307 46755068 23831365 23016812 46958471 28687866 ...
#>  $ text: chr  "zeer leuke plek om te vertoeven , rustig en toch erg centraal gelegen in het centrum van brussel , leuk adres o"| __truncated__ "het appartement ligt op een goede locatie: op loopafstand van de europese wijk en vlakbij verschilende metrosta"| __truncated__ "bedankt bettina en collin. ik ben heel blij dat ik bij jullie heb verbleven, in zo'n prachtige stille omgeving "| __truncated__ "ondanks dat het, zoals verhuurder joffrey zei, geen last minute maar een last seconde boeking was, is alles per"| __truncated__ ...
freq <- strsplit.data.frame(data1, term = "text", group = "id", split = "[[:space:][:punct:][:digit:]]+")
freq <- document_term_frequencies(freq)
freq <- document_term_frequencies_statistics(freq)
freq <- freq[, c("doc_id", "term", "freq", "tf", "idf", "tf_idf")]
head(freq)
#>      doc_id      term freq      tf       idf     tf_idf
#> 1: 19991431      zeer    1 0.03125 1.5702172 0.04906929
#> 2: 19991431     leuke    1 0.03125 1.9519282 0.06099776
#> 3: 19991431      plek    1 0.03125 2.5770219 0.08053194
#> 4: 19991431        om    2 0.06250 1.4105871 0.08816169
#> 5: 19991431        te    2 0.06250 0.9728611 0.06080382
#> 6: 19991431 vertoeven    1 0.03125 4.6051702 0.14391157

## Build word2vec model
set.seed(123456789)
w2v <- word2vec(x = data1$text, dim = 15, iter = 20, min_count = 0, lr = 0.05, type = "cbow")
vocabulary <- summary(w2v, type = "vocabulary")
## For each word, find the most similar one if it is part of the word2vec vocabulary
freq$similar_word <- ifelse(freq$term %in% vocabulary, freq$term, NA)
freq$similar_word <- lapply(freq$similar_word, FUN = function(x){
    if(!is.na(x)){
        x <- predict(w2v, x, type = 'nearest', top_n = 1)
        x <- x[[1]]$term2
    }
    x
})
head(freq)
#>      doc_id      term freq      tf       idf     tf_idf  similar_word
#> 1: 19991431      zeer    1 0.03125 1.5702172 0.04906929     plezierig
#> 2: 19991431     leuke    1 0.03125 1.9519282 0.06099776         cafes
#> 3: 19991431      plek    1 0.03125 2.5770219 0.08053194 opportuniteit
#> 4: 19991431        om    2 0.06250 1.4105871 0.08816169    verblijven
#> 5: 19991431        te    2 0.06250 0.9728611 0.06080382   overnachten
#> 6: 19991431 vertoeven    1 0.03125 4.6051702 0.14391157  comfortabele

As for your threshold of 0.5 (or the mean of all tf values, as in your question): that's up to you to define.
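To tie this back to your original goal, here is a minimal sketch, assuming the freq table built above (with its list column similar_word) and using the mean tf as the cutoff: replace a term with its nearest word2vec neighbour only when its tf falls below the mean, and keep the original term otherwise.

```r
## Flatten the list column of nearest neighbours (NA where the term
## was not in the word2vec vocabulary)
sim <- unlist(freq$similar_word)

## Use the mean tf as the cutoff, as described in the question
threshold <- mean(freq$tf)

## Keep the original term unless tf is below the mean AND a neighbour exists
freq$new_term <- ifelse(freq$tf < threshold & !is.na(sim), sim, freq$term)
```

From there you can rebuild the text per doc_id from new_term if you need the substitutions back in data1.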