
I can't figure out how to map the single most similar document back to each document in my original list.

I run some preprocessing (n-grams, lemmatization) and TF-IDF, then use scikit-learn's linear kernel. I tried extracting the features, but I'm not sure how to work with them in the CSR matrix...

I've tried various approaches (e.g. "Using csr_matrix of items similarities to get most similar items to item X without having to transform csr_matrix to dense matrix").

import string, nltk
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.stem import WordNetLemmatizer 
from sklearn.metrics.pairwise import cosine_similarity
import sparse_dot_topn.sparse_dot_topn as ct
import re

documents = ('the cat in the hat', 'the catty ate the hat', 'the cat wants the cats hat')

def ngrams(string, n=2):
    string = re.sub(r'[,-./]|\sBD', r'', string)
    # overlapping character n-grams, e.g. ngrams('the cat') -> ['th', 'he', 'e ', ' c', 'ca', 'at']
    ngrams = zip(*[string[i:] for i in range(n)])
    return [''.join(ngram) for ngram in ngrams]
lemmer = nltk.stem.WordNetLemmatizer()

def LemTokens(tokens):
    return [lemmer.lemmatize(token) for token in tokens]
remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)
def LemNormalize(text):
    return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict)))

# note: because analyzer is a callable, the tokenizer and stop_words arguments are ignored
TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, analyzer=ngrams, stop_words='english')
tfidf_matrix = TfidfVec.fit_transform(documents)

from sklearn.metrics.pairwise import linear_kernel
cosine_similarities = linear_kernel(tfidf_matrix[0:1], tfidf_matrix).flatten()

related_docs_indices = cosine_similarities.argsort()[:-5:-1]

cosine_similarities
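For what it's worth, the slice in `argsort()[:-5:-1]` just walks the ascending sort backwards, giving up to the top 4 indices in descending similarity; index 0 here is the query document itself. A minimal illustration with made-up similarity values:

```python
import numpy as np

# hypothetical similarities of doc 0 against all three docs
cosine_similarities = np.array([1.0, 0.66, 0.58])

# argsort is ascending; [:-5:-1] reverses it, keeping at most the top 4
related_docs_indices = cosine_similarities.argsort()[:-5:-1]
print(related_docs_indices)  # [0 1 2] -- position 0 is the document itself
```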

My current example only computes similarities for the first document against all docs. How do I get an output like the following into a dataframe (note the original documents come from a dataframe)?

original df col              most similar doc         similarity%
'the cat in the hat'         'the catty ate the hat'  80%
'the catty ate the hat'      'the cat in the hat'     80%
'the cat wants the cats hat' 'the catty ate the hat'  20%
June

1 Answer

import pandas as pd

df = pd.DataFrame(columns=["original df col", "most similar doc", "similarity%"])
for i in range(len(documents)):
    cosine_similarities = linear_kernel(tfidf_matrix[i:i+1], tfidf_matrix).flatten()
    # make pairs of (index, similarity)
    cosine_similarities = list(enumerate(cosine_similarities))
    # delete the cosine similarity with itself
    cosine_similarities.pop(i)
    # get the tuple with max similarity
    most_similar, similarity = max(cosine_similarities, key=lambda t:t[1])
    df.loc[len(df)] = [documents[i], documents[most_similar], similarity]

The result:

              original df col       most similar doc  similarity%
0          the cat in the hat  the catty ate the hat     0.664119
1       the catty ate the hat     the cat in the hat     0.664119
2  the cat wants the cats hat     the cat in the hat     0.577967
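As an aside (not part of the loop above), the same result can be computed without a Python loop: build the full n×n similarity matrix once, blank out the diagonal so a document can't match itself, and take a row-wise argmax. A sketch, assuming the same `documents`; a plain word-level `TfidfVectorizer` is used here only to keep the snippet self-contained:

```python
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

documents = ['the cat in the hat', 'the catty ate the hat', 'the cat wants the cats hat']
tfidf_matrix = TfidfVectorizer().fit_transform(documents)  # swap in your own TfidfVec

sim = linear_kernel(tfidf_matrix)   # dense n x n cosine-similarity matrix
np.fill_diagonal(sim, -1)           # exclude self-matches from the argmax
best = sim.argmax(axis=1)           # index of the most similar other doc, per row

df = pd.DataFrame({
    "original df col": documents,
    "most similar doc": [documents[j] for j in best],
    "similarity%": sim[np.arange(len(documents)), best],
})
```

Because the loop over documents is replaced by one matrix product, this also scales better as the document list grows.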
keineahnung2345