
I've been trying to split my dataset randomly into train and test sets, train a decision tree of depth 5 on it, and plot the resulting tree.

P.S. I'm not allowed to use pandas for this.

Here is what I tried to do:

import numpy
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn import tree
from sklearn.model_selection import train_test_split
filename = 'diabetes.csv'
raw_data = open(filename, 'rt')
data = numpy.loadtxt(raw_data, delimiter=",", skiprows=1)
print(data.shape)

X = data[:, 0:8]  # use the first eight columns as features
Y = data[:, 9]    # the last column is the target
print(X)
print(Y)
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.25)
treeClassifier = DecisionTreeClassifier(max_depth=5)
treeClassifier.fit(X_train, Y_train)
with open("treeClassifier.txt", "w") as f:
    tree.export_graphviz(treeClassifier, out_file=f)

My output is:

(768, 10)
[[  6.    148.     72.    ...  33.6     0.627  50.   ]
[  1.     85.     66.    ...  26.6     0.351  31.   ]
[  8.    183.     64.    ...  23.3     0.672  32.   ]
 ...
[  5.    121.     72.    ...  26.2     0.245  30.   ]
[  1.    126.     60.    ...  30.1     0.349  47.   ]
[  1.     93.     70.    ...  30.4     0.315  23.   ]]
[1. 0. 1. 0. 1. 0. 1. 0. 1. 1. 0. 1. 0. 1. 1. 1. 1. 1. 0. 1. 0. 0. 1. 1.
 ...
 1. 0. 1. 0. 1. 1. 1. 0. 0. 1. 1. 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 0. 1. 0.]

Here is an example of what I want the resulting tree to look like:

[image: example graphviz decision tree with filled nodes and class labels]

The problem I'm having is that my tree doesn't show the 'class = 0' / 'class = 1' attribute. I thought the problem might be in the Y = data[:, 9] part -- the last column classifies each row as 0 or 1, so it is the class attribute -- but I don't see how to make it appear in the tree. Maybe something in the tree.export_graphviz function? Am I missing a parameter? Any help would be appreciated.

fuglede
Tali

2 Answers


If you replace

tree.export_graphviz(treeClassifier, out_file=f)

with

tree.export_graphviz(treeClassifier, class_names=['0', '1'], out_file=f)

you should be good.

For example,

import graphviz
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.model_selection import train_test_split

np.random.seed(42)
X = np.random.random((100, 8))
Y = np.random.randint(2, size=100)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)
tree_classifier = DecisionTreeClassifier(max_depth=5)
tree_classifier.fit(X_train, Y_train)

dot_data = tree.export_graphviz(tree_classifier, class_names=['0', '1'], out_file=None)
graph = graphviz.Source(dot_data)
graph

[image: rendered tree with class = 0 / class = 1 shown in each node]

To make it look even more like the example you refer to, you can use

tree.export_graphviz(treeClassifier, class_names=['0', '1'],
                     filled=True, rounded=True, out_file=f)

[image: the same tree with filled, rounded nodes]
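If you'd rather not hard-code the label strings, the fitted classifier's classes_ attribute holds the sorted class labels, so the list can be built from it. A small sketch along the lines of the synthetic-data example above (variable names are illustrative):

```python
import numpy as np
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 8))
y = rng.integers(2, size=100)

clf = DecisionTreeClassifier(max_depth=5).fit(X, y)

# classes_ holds the sorted class labels seen during fit (here: 0 and 1)
class_names = [str(c) for c in clf.classes_]
dot_data = tree.export_graphviz(clf, class_names=class_names,
                                filled=True, rounded=True, out_file=None)
```

This way the labels stay correct even if the target column uses different values.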

fuglede

There are four methods I'm aware of for plotting a scikit-learn decision tree:

  • print text representation of the tree with sklearn.tree.export_text method
  • plot with sklearn.tree.plot_tree method (matplotlib needed)
  • plot with sklearn.tree.export_graphviz method (graphviz needed)
  • plot with dtreeviz package (dtreeviz and graphviz needed)

The simplest is to export the text representation. An example decision tree looks like this:

|--- feature_2 <= 2.45
|   |--- class: 0
|--- feature_2 >  2.45
|   |--- feature_3 <= 1.75
|   |   |--- feature_2 <= 4.95
|   |   |   |--- feature_3 <= 1.65
|   |   |   |   |--- class: 1
|   |   |   |--- feature_3 >  1.65
|   |   |   |   |--- class: 2
|   |   |--- feature_2 >  4.95
|   |   |   |--- feature_3 <= 1.55
|   |   |   |   |--- class: 2
|   |   |   |--- feature_3 >  1.55
|   |   |   |   |--- feature_0 <= 6.95
|   |   |   |   |   |--- class: 1
|   |   |   |   |--- feature_0 >  6.95
|   |   |   |   |   |--- class: 2
|   |--- feature_3 >  1.75
|   |   |--- feature_2 <= 4.85
|   |   |   |--- feature_1 <= 3.10
|   |   |   |   |--- class: 2
|   |   |   |--- feature_1 >  3.10
|   |   |   |   |--- class: 1
|   |   |--- feature_2 >  4.85
|   |   |   |--- class: 2
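For reference, a text tree like the one above can be produced with sklearn.tree.export_text; this sketch assumes the iris dataset, which appears to match the feature indices shown:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a small tree on the iris data
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(iris.data, iris.target)

# export_text needs no extra dependencies (no graphviz, no matplotlib)
print(export_text(clf))
```

Pass feature_names=list(iris.feature_names) if you want real column names instead of feature_0, feature_1, and so on.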

Then if you have matplotlib installed, you can plot with sklearn.tree.plot_tree:

tree.plot_tree(clf) # the clf is your decision tree model

The example output is very similar to what you will get with export_graphviz:

[image: plot_tree rendering of a decision tree]
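A minimal, self-contained sketch of the plot_tree route (the Agg backend, figure size, and output filename are illustrative choices for a script; interactively you would call plt.show() instead):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this to view the plot on screen
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(iris.data, iris.target)

fig, ax = plt.subplots(figsize=(12, 8))
# plot_tree returns one matplotlib Annotation per drawn node
annotations = plot_tree(clf, class_names=list(iris.target_names),
                        filled=True, ax=ax)
fig.savefig("tree.png")
```

Note that plot_tree needs only matplotlib, not graphviz.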

You can also try the dtreeviz package. It gives you much more information. An example:

[image: dtreeviz rendering showing the class distribution at each split node]

You can find a comparison of the different visualizations of a scikit-learn decision tree, with code snippets, in this blog post: link.

pplonski