
I am new to neural networks. To study, I created a simple neural network model using Keras. On every rerun the accuracy changes by 10-30 percentage points: sometimes I get 94%, but on the next run it drops to 60%. I am using the same data set for every run.

import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

df = pd.read_csv("../Datasets/error_pred/mulclass.csv")
df.columns = ["var1", "var2", "result", "outcome"]
scaled_train_samples = df[['var1', 'var2', 'result']].values
train_labels = df.outcome.values

model_m = Sequential([
    Dense(units=8, input_shape=(3,), activation='relu'),
    Dense(units=16, activation='relu'),
    Dense(units=2, activation='softmax')
])
model_m.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])
model_m.fit(x=scaled_train_samples, y=train_labels, batch_size=10,
            epochs=100, validation_split=0.1, shuffle=True, verbose=2)


from numpy import loadtxt
test_dataset = loadtxt('../Datasets/error_pred/mulTest.csv', delimiter=',')
X_test = test_dataset[:,0:3]
y_test = test_dataset[:,3:]
_, accuracy = model_m.evaluate(X_test, y_test)
print(accuracy * 100)
Ashok v
  • There may be different reasons. Did you try to train model with longer epochs? – Ugurcan Feb 15 '22 at 06:57
  • I increased the epochs to 10000, and now the accuracy is no longer changing much; I am getting accuracies like 100, 94, 100, ... – Ashok v Feb 15 '22 at 10:17
  • You should check your loss values. When the loss starts to stabilize at low values on every training run, you can stop training and test your model; it will then likely give similar results for each run. Your network is initialized with different weights on each run, and you kept your trainings short, so they stopped before the model had learned enough; when you tested the trained models, you saw huge accuracy fluctuations. Another possible reason is that your data is not balanced or standardized, so your model struggles to learn it. – Ugurcan Feb 15 '22 at 11:03
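The advice in that comment (stop once the loss has stabilized, and standardize the inputs) could be sketched as follows. The data here is a synthetic stand-in for the real dataset, and the `EarlyStopping` thresholds are illustrative assumptions, not tuned values:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for the real dataset: 3 features, 2 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)).astype("float32")
y = (X[:, 0] > 0).astype("int64")

# Standardize each feature to zero mean / unit variance.
X = (X - X.mean(axis=0)) / X.std(axis=0)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stop once the training loss has stabilized instead of at a fixed epoch count.
stop = tf.keras.callbacks.EarlyStopping(
    monitor="loss", patience=20, min_delta=1e-4, restore_best_weights=True
)
history = model.fit(X, y, epochs=200, batch_size=32,
                    callbacks=[stop], verbose=0)
```

With `restore_best_weights=True` the model keeps the weights from the epoch with the lowest monitored loss, so the run that is actually evaluated is the stabilized one.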

1 Answer


You can try the default learning rate (lr=0.001) and use the Keras callback ReduceLROnPlateau. You should also set a random seed for NumPy and TensorFlow for repeatability: without a fixed seed, outliers can "jump" between the training and validation sets on each run, which can cause accuracy instability. Finally, check whether there are outliers in the data; since there are only 3 input features, you can make a 3D plot.
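A minimal sketch of the seeding and the callback, assuming TensorFlow 2.x; the seed value and the `ReduceLROnPlateau` settings are illustrative, not tuned:

```python
import random

import numpy as np
import tensorflow as tf

def set_seeds(seed=42):
    """Fix all relevant RNGs so reruns start from the same initial weights."""
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)

# Re-seeding makes TensorFlow's random draws (and weight inits) repeatable.
set_seeds(42)
a = tf.random.uniform((3,))
set_seeds(42)
b = tf.random.uniform((3,))
assert np.allclose(a.numpy(), b.numpy())  # identical draws after re-seeding

# Halve the learning rate when val_loss stops improving (illustrative settings).
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=10, min_lr=1e-5
)
# Then pass it to training: model_m.fit(..., callbacks=[reduce_lr])
```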

Python 3d plot - axis centered
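For the 3D inspection, a matplotlib sketch along these lines could work; the data below is a synthetic stand-in for `df[['var1', 'var2', 'result']]`:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-in for the three input features and the class label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=labels)
ax.set_xlabel("var1")
ax.set_ylabel("var2")
ax.set_zlabel("result")
# Points far from either class cluster are outlier candidates.
fig.savefig("features_3d.png")
```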

Peter Pirog