I am trying to retrieve "feature importances" for Naive Bayes models, or ideally, which features the Naive Bayes models rely on to make their predictions (i.e. which ones yield the highest probability).

I have found many posts that ask the same question, and I based my current work on a Stack Overflow answer.

My current code is below (I have tried to make this as minimal and reproducible as possible):

Dataset:

| stack | overflow | is | a | great | resource | for | programmers | to | use | classification |
|:-----:|:--------:|:--:|:-:|:-----:|:--------:|:---:|:-----------:|:--:|:---:|:--------------:|
|   2   |     2    |  2 | 0 |   1   |     0    |  0  |      1      |  1 |  0  |       -1       |
|   2   |     1    |  2 | 0 |   1   |     0    |  1  |      1      |  3 |  3  |       -1       |
|   2   |     1    |  1 | 0 |   2   |     0    |  3  |      0      |  2 |  1  |       -1       |
|   1   |     3    |  2 | 2 |   1   |     3    |  2  |      3      |  3 |  0  |       -1       |
|   2   |     2    |  3 | 2 |   1   |     2    |  0  |      2      |  2 |  3  |       -1       |
|   2   |     2    |  5 | 3 |   1   |     2    |  6  |      7      |  7 |  10 |        1       |
|   0   |     2    |  4 | 2 |   1   |     0    |  6  |      10     |  6 |  6  |        1       |
|   5   |     3    |  1 | 3 |   5   |     2    |  9  |      9      |  8 |  10 |        1       |
|   2   |     0    |  2 | 0 |   2   |     2    |  9  |      6      |  7 |  9  |        1       |
|   1   |     3    |  3 | 5 |   4   |     3    |  10 |      8      |  9 |  6  |        1       |
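
For reference, the same toy table can be built inline rather than read from Trimmed_Count.csv (a sketch to make the example self-contained; the column names follow the header above):

import pandas as pd

# Rebuild the toy dataset from the table above, so the example runs
# without the original CSV file.
columns = ['stack', 'overflow', 'is', 'a', 'great', 'resource',
           'for', 'programmers', 'to', 'use', 'classification']
rows = [
    [2, 2, 2, 0, 1, 0, 0, 1, 1, 0, -1],
    [2, 1, 2, 0, 1, 0, 1, 1, 3, 3, -1],
    [2, 1, 1, 0, 2, 0, 3, 0, 2, 1, -1],
    [1, 3, 2, 2, 1, 3, 2, 3, 3, 0, -1],
    [2, 2, 3, 2, 1, 2, 0, 2, 2, 3, -1],
    [2, 2, 5, 3, 1, 2, 6, 7, 7, 10, 1],
    [0, 2, 4, 2, 1, 0, 6, 10, 6, 6, 1],
    [5, 3, 1, 3, 5, 2, 9, 9, 8, 10, 1],
    [2, 0, 2, 0, 2, 2, 9, 6, 7, 9, 1],
    [1, 3, 3, 5, 4, 3, 10, 8, 9, 6, 1],
]
df = pd.DataFrame(rows, columns=columns)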

The code is below:

from sklearn.model_selection import train_test_split
from joblib import load
from sklearn.naive_bayes import MultinomialNB, GaussianNB, ComplementNB, BernoulliNB

import pandas as pd
import numpy as np

def get_salient_words(nb_clf, vect, class_ind):
    # Return (word, log-probability) pairs for one class, sorted from
    # highest to lowest log-probability.
    # From https://stackoverflow.com/questions/50526898/how-to-get-feature-importance-in-naive-bayes?rq=1
    # Note: get_feature_names() was renamed get_feature_names_out() in newer scikit-learn versions.
    words = vect.get_feature_names()
    zipped = list(zip(words, nb_clf.feature_log_prob_[class_ind]))
    sorted_zip = sorted(zipped, key=lambda t: t[1], reverse=True)
    return sorted_zip

# Separate the labels from the count features.
df = pd.read_csv('Trimmed_Count.csv')
classes = df['classification']
df.drop(['classification'], inplace=True, axis=1)

# The CountVectorizer that produced the counts, saved earlier with joblib.
vec = load('imdb_count_vectorizer.joblib')

X_train, X_test, y_train, y_test = train_test_split(df, classes, stratify=classes, random_state=8)

clfs = [MultinomialNB(), GaussianNB(), ComplementNB(), BernoulliNB()]

for clf in clfs:
    name = str(clf.__class__.__name__)
    try:
        clf.fit(X_train, y_train)
        print("{} | Score: {}".format(name, clf.score(X_test, y_test)))
        pos = get_salient_words(clf, vec, 1)[:10]
        print(pos)
        neg = get_salient_words(clf, vec, -1)[:10]
        print(neg)
    except Exception:
        # GaussianNB has no feature_log_prob_, so it fails here after scoring.
        print("Couldn't test: {}".format(name))

The output looks like:

MultinomialNB | Score: 0.81192
[('br', -3.2526110546751053), ('film', -3.3810118723988154), ('movi', -3.505443120655009), ('one', -3.9644855550689293), ('like', -4.27222948451632), ('time', -4.530651371310233), ('see', -4.579184102089133), ('good', -4.588233937609051), ('charact', -4.650109341327138), ('stori', -4.656123521837383)]
[('br', -3.2526110546751053), ('film', -3.3810118723988154), ('movi', -3.505443120655009), ('one', -3.9644855550689293), ('like', -4.27222948451632), ('time', -4.530651371310233), ('see', -4.579184102089133), ('good', -4.588233937609051), ('charact', -4.650109341327138), ('stori', -4.656123521837383)]
GaussianNB | Score: 0.79672
Couldn't test: GaussianNB
ComplementNB | Score: 0.81192
[('excel', 7.505813902996572), ('amaz', 7.466220449576814), ('perfect', 7.284934625656391), ('impress', 7.1851596281894885), ('emot', 7.179315076193205), ('experi', 7.141574748210359), ('definit', 7.096586360995577), ('style', 7.092303699203576), ('meet', 7.08485289304771), ('often', 7.08167660662929)]
[('excel', 7.505813902996572), ('amaz', 7.466220449576814), ('perfect', 7.284934625656391), ('impress', 7.1851596281894885), ('emot', 7.179315076193205), ('experi', 7.141574748210359), ('definit', 7.096586360995577), ('style', 7.092303699203576), ('meet', 7.08485289304771), ('often', 7.08167660662929)]
BernoulliNB | Score: 0.79832
[('film', -0.5236348439500489), ('movi', -0.5284175990851931), ('one', -0.5608697735054893), ('br', -0.5615240708602336), ('like', -0.7782599576805733), ('time', -0.9267171252591915), ('see', -0.9447968673740359), ('good', -0.9934224360365533), ('make', -1.039544791007872), ('great', -1.0672202825869217)]
[('film', -0.5236348439500489), ('movi', -0.5284175990851931), ('one', -0.5608697735054893), ('br', -0.5615240708602336), ('like', -0.7782599576805733), ('time', -0.9267171252591915), ('see', -0.9447968673740359), ('good', -0.9934224360365533), ('make', -1.039544791007872), ('great', -1.0672202825869217)]

My question is: why are the positive and negative sentiments reporting back exactly the same features, with exactly the same values? Additionally, why are those values negative? How can the highest probability be negative?
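
In case it is relevant, here is a quick way to inspect how the rows of feature_log_prob_ line up with the fitted class labels (a small check, reusing X_train and y_train from the code above; "check" is just an illustrative variable name):

check = MultinomialNB().fit(X_train, y_train)

# feature_log_prob_ has one row per class, ordered to match check.classes_.
print(check.classes_)                 # the sorted class labels, e.g. [-1  1]
print(check.feature_log_prob_.shape)  # (n_classes, n_features)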

This question is similar to an unanswered post here.

artemis
  • Well, if you're looking primarily at feature importance for naive Bayes, remember that each feature only enters the calculation via something which is a function of only that one feature, not involving the others; it's something like p(x|C)/p(x|not C) -- I don't remember off-hand, anyway I'm sure you can look it up. Anyway, that calculation might be simpler to handle than constructing the whole naive Bayes model. Something to consider. – Robert Dodier Aug 05 '20 at 21:04
  • @RobertDodier You're quoting (or, attempting to) Bayes' theorem; I would probably be looking for, on average, which features the model is relying on the most (i.e. the highest feature log-probability) to make those predictions. – artemis Aug 06 '20 at 01:59
  • Well, I was thinking that the mutual information or cross entropy for one variable, say x[n], and the class label, would have some relatively simple form by cancelling out common terms in p(C | x[1], ..., x[n])/p(C | x[1], ..., x[n - 1]). However, the normalizing term brings all the other x[1], ..., x[n - 1] back into the picture, so I don't see at this point that anything is gained. That's assuming the question is essentially whether adding x[n] to x[1], ..., x[n - 1] gains anything. Another question might be how much just x[n] vs no x's will gain you. It seems simpler but I didn't try it. – Robert Dodier Aug 08 '20 at 16:54
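
Following up on the likelihood-ratio idea in the comments above, a per-word score of the form log p(x|C=1) - log p(x|C=-1) can be sketched directly from a fitted multinomial model's feature_log_prob_ (an illustration only, not code from the original post; it reuses vec, X_train, and y_train from the question and assumes the rows of feature_log_prob_ follow clf.classes_ order):

nb = MultinomialNB().fit(X_train, y_train)

# Per-word log-likelihood ratio between the two classes, as a rough
# single-feature importance score: log p(word | pos) - log p(word | neg).
# Row 0 of feature_log_prob_ is class -1 and row 1 is class +1 here,
# because classes_ holds the sorted labels.
log_ratio = nb.feature_log_prob_[1] - nb.feature_log_prob_[0]
ranked = sorted(zip(vec.get_feature_names(), log_ratio),
                key=lambda t: t[1], reverse=True)
print(ranked[:10])   # words most indicative of the positive class
print(ranked[-10:])  # words most indicative of the negative class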
