
I know that the computation in scikit-learn is based on NumPy, so everything is an array or matrix.

How does this package handle mixed data (numerical and nominal values)?

For example, a product could have the attributes 'color' and 'price', where color is nominal and price is numerical. I notice there is a transformer called 'DictVectorizer' to encode the nominal data numerically. For example, two products are:

```python
products = [{'color': 'black', 'price': 10}, {'color': 'green', 'price': 5}]
```

And the result from 'DictVectorizer' could be:

```
[[1, 0, 10],
 [0, 1, 5]]
```
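The encoding above can be reproduced directly; a minimal sketch, assuming scikit-learn is installed (`sparse=False` is used here only so the output prints as a dense array — by default `DictVectorizer` returns a sparse matrix):

```python
# Sketch of the DictVectorizer encoding described above.
from sklearn.feature_extraction import DictVectorizer

products = [{'color': 'black', 'price': 10},
            {'color': 'green', 'price': 5}]

# sparse=False gives a dense NumPy array instead of a scipy sparse matrix
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(products)

# Each distinct nominal value becomes its own column; numeric values
# are passed through unchanged.
print(vec.feature_names_)  # ['color=black', 'color=green', 'price']
print(X)
```

Note that each distinct value of 'color' gets a one-hot column of its own, while 'price' stays a single column.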

If the attribute 'color' has many distinct values, the resulting matrix will be very sparse, and such high-dimensional feature vectors degrade the performance of some algorithms, such as decision trees.

Is there any way to use the nominal values without needing to create dummy codes?

kdopen
xueliang liu
    It's worth noting that Weka [Instances](http://weka.sourceforge.net/doc/weka/core/Instance.html) store nominal values as floating point numbers corresponding to the index of the nominal in the attribute's definition. You could simply follow this same strategy to generate a numeric dataset for use with scikit-learn. – Wesley Tansey Nov 06 '12 at 00:31
  • Thanks a lot for enlarging my knowledge. – xueliang liu Nov 06 '12 at 14:31
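The index-based strategy described in the comment above (storing each nominal value as the integer index of its position in the attribute's definition, as Weka does) can be sketched with scikit-learn's `LabelEncoder`. A caveat worth stating: most scikit-learn estimators will treat these integers as ordered magnitudes, which may not be appropriate for truly nominal data.

```python
# Sketch of the Weka-style index encoding suggested in the comment above.
from sklearn.preprocessing import LabelEncoder

colors = ['black', 'green', 'black', 'red']

enc = LabelEncoder()
codes = enc.fit_transform(colors)  # each color mapped to an integer index

# The learned vocabulary is sorted; codes index into it.
print(list(enc.classes_))  # ['black', 'green', 'red']
print(list(codes))         # [0, 1, 0, 2]
```

This keeps the feature as a single column instead of one column per value, at the cost of imposing an artificial ordering on the categories.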

1 Answer


The DecisionTree class in scikit-learn will need some refactoring to deal efficiently with high-cardinality categorical features (and maybe even with naturally sparse data such as text TF-IDF vectors).

Nobody is working on that yet AFAIK.

ogrisel
  • Thanks a lot. In scikit-learn, is there any smarter way to do this refactoring than doing it manually? – xueliang liu Jul 30 '12 at 15:59
  • My answer states that the current state of affairs is a limitation of the current implementation of the decision tree in scikit-learn. There is no easy fix I know of to remove that limitation. I don't understand what you mean by "manual operation". – ogrisel Jul 30 '12 at 16:44