I'm currently trying to do some text classification using a neural network on my own input data.
Because my dataset is very small (around 85 positively and 85 negatively classified texts of ~1500 words per file), I was told to use cross-validation for my neural network to avoid overfitting.
I started building a neural network with the help of some YouTube videos and guides, and my problem now is how to do the cross-validation.
My current code looks like this:
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing import text
from keras import utils
from sklearn.preprocessing import LabelEncoder

data = pd.read_excel('dataset3.xlsx')
max_words = 1000
tokenize = text.Tokenizer(num_words=max_words, lower=True, char_level=False)

# 80/20 train/test split
train_size = int(len(data) * .8)
train_posts = data['Content'][:train_size]
train_tags = data['Value'][:train_size]
test_posts = data['Content'][train_size:]
test_tags = data['Value'][train_size:]

# Bag-of-words features, fit on the training texts only
tokenize.fit_on_texts(train_posts)
x_train = tokenize.texts_to_matrix(train_posts)
x_test = tokenize.texts_to_matrix(test_posts)

# Encode the labels and one-hot them
encoder = LabelEncoder()
encoder.fit(train_tags)
y_train = encoder.transform(train_tags)
y_test = encoder.transform(test_tags)
num_classes = np.max(y_train) + 1
y_train = utils.to_categorical(y_train, num_classes)
y_test = utils.to_categorical(y_test, num_classes)

batch_size = 1
epochs = 20

model = Sequential()
model.add(Dense(750, input_shape=(max_words,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes))
model.add(Activation('sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_split=0.2)
I played around with

KFold(n_splits=k, shuffle=True, random_state=1).split(x_train, y_train)

but I have no idea how to apply it to the neural network itself. I hope you can help me with my problem.
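For what it's worth, here is my best guess at what the loop might look like, using dummy data in place of my real x_train/y_train and a much smaller layer so it runs quickly. I'm not sure whether rebuilding the model inside each fold like this is the right approach:

```python
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation

max_words = 100   # reduced from 1000 just for this sketch
num_classes = 2

# Dummy stand-ins for my real x_train / y_train (one-hot labels)
rng = np.random.default_rng(1)
x = rng.random((40, max_words))
y = np.eye(num_classes)[rng.integers(0, num_classes, 40)]

def build_model():
    # Build a fresh model per fold so no weights leak between folds
    model = Sequential()
    model.add(Dense(16, input_shape=(max_words,)))
    model.add(Activation('relu'))
    model.add(Dropout(0.2))
    model.add(Dense(num_classes))
    model.add(Activation('sigmoid'))
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    return model

scores = []
kfold = KFold(n_splits=5, shuffle=True, random_state=1)
for train_idx, val_idx in kfold.split(x):
    model = build_model()
    model.fit(x[train_idx], y[train_idx],
              batch_size=8, epochs=2, verbose=0)
    loss, acc = model.evaluate(x[val_idx], y[val_idx], verbose=0)
    scores.append(acc)

print(f"mean CV accuracy: {np.mean(scores):.3f}")
```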
Thanks and regards,
Jason