
I am using the TfidfVectorizer from the sklearn package in Python 2.7.

As I was getting comfortable with the arguments, I became a bit confused about use_idf, as in:

TfidfVectorizer(use_idf=False).fit_transform(<corpus goes here>)

What exactly does use_idf do when false or true?

Since we are generating a sparse Tfidf matrix either way, it doesn't make sense to have an argument for choosing a sparse Tfidf matrix; that seems redundant.

This post was interesting but didn't seem to nail it.

The documentation says only, "Enable inverse-document-frequency reweighting," which isn't very illuminating.

Any comments appreciated.

EDIT I think I figured it out. It's really simple:
Text --> counts
Counts --> TF, meaning we just have (normalized) raw counts, or Counts --> TFIDF, meaning we have IDF-weighted counts.

What was confusing me was that, since they called it TfidfVectorizer, I didn't realize the result is a TFIDF only if you choose it to be. You could also use it to create just a TF.
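Here is a minimal sketch of that pipeline on a made-up two-document corpus, using sklearn's CountVectorizer and TfidfTransformer to make each step explicit:

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

corpus = ["the cat sat", "the cat sat on the mat"]  # made-up corpus

counts = CountVectorizer().fit_transform(corpus)              # text --> counts
tf = TfidfTransformer(use_idf=False).fit_transform(counts)    # counts --> TF (normalized counts only)
tfidf = TfidfTransformer(use_idf=True).fit_transform(counts)  # counts --> TFIDF (IDF-weighted counts)

print(tf.toarray())
print(tfidf.toarray())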

Monica Heddneck
  • Perhaps this old answer could help explain what TF without IDF means: http://stackoverflow.com/questions/27497528/calculating-tf-idf-among-documents-using-python-2-7/27504795#27504795 – tripleee Jan 18 '16 at 04:46

2 Answers


Typically, the tf-idf weight is composed of two terms: the first computes the normalized Term Frequency (TF), i.e. the number of times a word appears in a document divided by the total number of words in that document; the second is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents in which the specific term appears.

TF: Term Frequency, which measures how frequently a term occurs in a document.

TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document)
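As a minimal sketch of this formula (a hypothetical plain-Python helper, not sklearn's implementation; sklearn's default TF uses raw counts with L2 normalization rather than dividing by document length):

def tf(term, document):
    # number of times `term` appears, divided by the total number of terms
    words = document.split()
    return words.count(term) / float(len(words))

print(tf("cat", "the cat sat on the cat mat"))  # 2 / 7 = 0.2857...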

IDF: Inverse Document Frequency, which measures how important a term is. While computing TF, all terms are considered equally important. However, it is known that certain terms, such as "is", "of", and "that", may appear many times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:

IDF(t) = log_e(Total number of documents / Number of documents with term t in it).
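And a matching sketch of the IDF formula (again a hypothetical helper; note that sklearn's default smooth_idf=True variant computes log((1 + n) / (1 + df)) + 1 instead, so its numbers differ slightly):

import math

def idf(term, corpus):
    # log_e(total documents / documents containing the term)
    n_containing = sum(1 for doc in corpus if term in doc.split())
    return math.log(len(corpus) / float(n_containing))

docs = ["the cat sat", "the dog ran", "the cat ran"]  # made-up corpus
print(idf("the", docs))  # log(3/3) = 0.0 -- appears everywhere, no boost
print(idf("dog", docs))  # log(3/1) = 1.0986... -- rare, boosted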

If you pass use_idf=False, you will score using only TF.
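A minimal sketch of that difference on a hypothetical two-document corpus: with use_idf=False every term in a row gets the same normalized frequency, while with use_idf=True the term appearing in both documents ("apple") is down-weighted relative to the rarer ones:

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["apple banana", "apple cherry"]  # made-up corpus

tf_only = TfidfVectorizer(use_idf=False).fit_transform(corpus)
tf_idf = TfidfVectorizer(use_idf=True).fit_transform(corpus)

print(tf_only.toarray())  # rows like [0.7071, 0.7071, 0]: pure (L2-normalized) TF
print(tf_idf.toarray())   # "apple" gets a lower weight than "banana"/"cherry"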

Harsha Reddy

In the term frequency (TF) calculation, all terms are considered equally important. Even terms that have no importance in determining relevance are treated the same in the calculation.

Scaling down the weights of terms with high collection frequency improves the scores. Inverse document frequency reduces the TF weight of a term by a factor that grows with its collection frequency, so the document frequency (DF) of the term is used to scale down its weight.
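A minimal sketch of that scaling on a hypothetical three-document corpus, showing the document frequency (DF) of each term and the IDF factor it produces:

import math

docs = ["the cat sat", "the dog ran", "the cat ran"]  # made-up corpus

def df(term):
    # number of documents containing the term
    return sum(1 for d in docs if term in d.split())

for term in ("the", "cat", "dog"):
    print("%s: df=%d, idf=%.4f" % (term, df(term), math.log(len(docs) / float(df(term)))))
# the: df=3, idf=0.0000  -- high collection frequency, weight scaled to zero
# cat: df=2, idf=0.4055
# dog: df=1, idf=1.0986  -- rare term keeps the largest weight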

Pranav Waila
  • The TF matrix is for the whole document collection; here in scikit-learn the object is for a single document. Internally these are the same calculations. – Pranav Waila Jan 18 '16 at 06:41
  • What? I thought `use_idf` refers to the idf, which is a matrix of weights by frequency across all documents. – Monica Heddneck Jan 18 '16 at 06:44