I am trying to detect anomalies in a breast cancer dataset using Isolation Forest in sklearn. The dataset mixes categorical and numerical columns, and I get a ValueError when I fit the model.

This is my dataset: https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer/

This is my code:

import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)

# data_cancer is the UCI breast-cancer DataFrame, loaded beforehand
X = data_cancer.drop(['Class'], axis=1)
y = data_cancer['Class']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=20)
X_outliers = rng.uniform(low=-4, high=4, size=(X.shape[0], X.shape[1]))

clf = IsolationForest()
clf.fit(X_train)

This is the error I get:

ValueError: could not convert string to float: '30-39'

Is it possible to use Isolation Forest on categorical data? If yes, how do I do so?

1 Answer

You should encode your categorical data into a numerical representation.

There are many ways to encode categorical data, but I suggest starting with sklearn.preprocessing.LabelEncoder if the cardinality is high and sklearn.preprocessing.OneHotEncoder if the cardinality is low.

Here is a usage example:

import numpy as np
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder

# define example
data = ['cold', 'cold', 'warm', 'cold', 'hot', 'hot', 'warm', 'cold', 'warm', 'hot']
values = np.array(data)
print(values)

# integer encode
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(values)
print(integer_encoded)

# binary (one-hot) encode; on scikit-learn < 1.2 use sparse=False instead
onehot_encoder = OneHotEncoder(sparse_output=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
print(onehot_encoded)

# invert first example
inverted = label_encoder.inverse_transform([argmax(onehot_encoded[0, :])])
print(inverted)

Output:

['cold' 'cold' 'warm' 'cold' 'hot' 'hot' 'warm' 'cold' 'warm' 'hot']
 
[0 0 2 0 1 1 2 0 2 1]
 
[[ 1.  0.  0.]
 [ 1.  0.  0.]
 [ 0.  0.  1.]
 [ 1.  0.  0.]
 [ 0.  1.  0.]
 [ 0.  1.  0.]
 [ 0.  0.  1.]
 [ 1.  0.  0.]
 [ 0.  0.  1.]
 [ 0.  1.  0.]]
 
['cold']
Farseer
  • Ok but what do I do if I want to predict with my own input. I wrote `input_par = encoder.transform(['string value 1', 'string value 2'...])` but I get an error: `Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.` – taga Oct 02 '19 at 12:19
  • @Farseer forgot to add: `from array import array` Also, your toy example didn't work for me. I get an error: `TypeError: array() argument 1 or typecode must be char (string or ascii-unicode with length 1), not list ` (using Python 2). – user2205916 Sep 22 '20 at 03:13
  • @user2205916 just replace `values = np.array(data)` instead of `values = array(data)` then it works. – Mario Mar 17 '21 at 13:28
  • LabelEncoder is for your target labels. It's not intended to be used for features. If your categories aren't ordered, LabelEncoder makes no sense. – Paul Coccoli Jul 12 '22 at 17:21
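As the last comment points out, LabelEncoder is intended for target labels. A sketch of the feature-side alternative, OrdinalEncoder (available since scikit-learn 0.20); because it expects a 2-D array, it also sidesteps the reshape error mentioned in the first comment:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

# OrdinalEncoder works column-wise on 2-D feature matrices,
# unlike LabelEncoder, which expects a 1-D target array.
X = np.array([['cold'], ['warm'], ['hot'], ['cold']])

encoder = OrdinalEncoder()
X_encoded = encoder.fit_transform(X)

# categories are sorted alphabetically: cold=0.0, hot=1.0, warm=2.0
print(X_encoded.ravel())  # → [0. 2. 1. 0.]

# new samples must also be 2-D, e.g. reshape(-1, 1) for a single column
print(encoder.transform(np.array(['hot']).reshape(-1, 1)))  # → [[1.]]
```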