My goal is to identify clusters in my dataset, which contains around 10 categorical and/or numerical columns and 3 textual description columns. After some research, I came up with a 3-step process:
- Pre-processing my data: normalizing the 10 columns and applying TF-IDF to the text data (the resulting shape is something like (89000, 41206)). After some cleaning, I use a ColumnTransformer as follows:
import numpy as np
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer

column_trans = ColumnTransformer([
    ('scale', StandardScaler(), make_column_selector(dtype_include=np.number)),
    ('res_vec', TfidfVectorizer(), "Résumé de l'incident"),
    ('desc_vec', TfidfVectorizer(), "Description de l'incident")],
    remainder='drop')
# Apply the transformer to our dataframe
all_features = column_trans.fit_transform(df_incidents_sample)
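(If I check the output, it is, if I understand correctly, a scipy sparse matrix because of the TF-IDF parts:

# all_features comes out as a scipy sparse matrix (mostly TF-IDF columns), not a dense array
print(type(all_features), all_features.shape)

I suspect this matters for the error described further down.)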
(I also tried to use PCA:
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# First, normalize the data
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)

# Keep enough components to explain 70% of the variance
pca = PCA(0.70)
pca.fit(X_scaled)
principalComponents = pca.components_  # loading vectors, one row per component

print("Percentage of variance explained:")
print(pca.explained_variance_ratio_)
print("Main components:")
print(principalComponents)
Percentage of variance explained:
[0.18618277 0.17050933 0.10841001 0.09733908 0.09186758 0.08251782]
Main components:
[[ 0.14725228 0.37825793 0.36558713 0.11637642 -0.22776482 0.46478375
0.26814039 0.37555349 0.39590524 0.22463055]
[-0.46043277 0.39805237 0.37268412 0.22276568 0.49565864 -0.02403753
0.14180977 0.07271966 -0.33350997 -0.24115478]
[-0.30192161 0.18580638 -0.12840671 -0.71123187 -0.02576491 0.10946048
0.47718378 -0.31007677 0.02038784 0.12274863]
[ 0.26901203 0.09679569 -0.30329614 0.41158977 0.11026846 -0.24897028
0.62929629 -0.23384344 0.2611964 -0.2525925 ]
[ 0.1235864 0.12176666 0.0547025 0.12728051 0.27585949 -0.33158646
0.02475187 -0.12885138 -0.08494957 0.86036434]
[-0.30114986 -0.2197743 -0.24955475 -0.09226451 0.00559164 -0.35950503
0.24902454 0.76731762 0.06424171 0.07762742]]
But the results didn't seem really relevant or usable.)
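(For the clustering step, my assumption is that I would feed the projected samples rather than the loading vectors, i.e. something like the following, with X_scaled as above:

# Assumption on my part: clustering would run on the projected data,
# not on pca.components_ (which are the loading vectors)
X_projected = pca.transform(X_scaled)
print(X_projected.shape)  # (n_samples, number of components kept for 70% variance)

but I'm not sure this is the right way to look at it.)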
- Building an autoencoder to reduce the dimensionality of my dataset. First I split my data in two, then I create the autoencoder:
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

x_train, x_test = train_test_split(all_features, test_size=0.2)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

input_size = 41206   # number of features after the ColumnTransformer
hidden_size = 1280
code_size = 32

input_data = Input(shape=(input_size,))
hidden_1 = Dense(hidden_size, activation='relu')(input_data)
code = Dense(code_size, activation='relu')(hidden_1)
hidden_2 = Dense(hidden_size, activation='relu')(code)
output_data = Dense(input_size, activation='sigmoid')(hidden_2)

autoencoder = Model(input_data, output_data)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, epochs=3)
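Once training works, my plan is to keep only the encoder half to get the 32-dimensional codes, roughly like this (untested so far, since training fails):

# Reuse the layers up to the bottleneck as a standalone encoder (my plan, not tested yet)
encoder = Model(input_data, code)
encoded_features = encoder.predict(x_test)  # expected shape: (n_samples, 32)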
- Using classical clustering algorithms on the reduced features (k-means, DBSCAN or others), as sketched below.
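What I have in mind for this last step is roughly the following, where encoded_features is the encoder output from the sketch above and the parameters are placeholders to be tuned:

from sklearn.cluster import KMeans, DBSCAN

# Placeholder parameters, to be tuned once the rest of the pipeline works
kmeans = KMeans(n_clusters=8, random_state=0)
labels_km = kmeans.fit_predict(encoded_features)

dbscan = DBSCAN(eps=0.5, min_samples=10)
labels_db = dbscan.fit_predict(encoded_features)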
So I've got 2 major questions:
- Based on this information, how confident are you that this approach will work?
- I have trouble training my autoencoder. When I try to fit it on my data...
# Train the model
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True)
autoencoder.summary()
... I get an error:
TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("DeserializeSparse_1:0", shape=(None, 2), dtype=int64), values=Tensor("DeserializeSparse_1:1", shape=(None,), dtype=float32), dense_shape=Tensor("stack_1:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.
I did some research on this error and found a GitHub issue that suggests creating a SparseToDense layer as a solution, but I'm having trouble adapting that solution to my code.
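Here is roughly how I understand that suggestion would apply to my model (assuming TF 2.x; I'm not at all sure this is the right adaptation):

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

# My attempt at the SparseToDense idea (untested): accept a SparseTensor
# as input and densify it inside the graph before the Dense layers
input_data = Input(shape=(input_size,), sparse=True)
dense_input = Lambda(lambda x: tf.sparse.to_dense(x))(input_data)
hidden_1 = Dense(hidden_size, activation='relu')(dense_input)
code = Dense(code_size, activation='relu')(hidden_1)
hidden_2 = Dense(hidden_size, activation='relu')(code)
output_data = Dense(input_size, activation='sigmoid')(hidden_2)
autoencoder = Model(input_data, output_data)

(The other option I see is calling all_features.toarray() before the train/test split, but with a (89000, 41206) matrix I'm worried about memory.)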
Thank you in advance to everyone taking the time to read this ;)
Médéric