Edit: Results differ mostly because of weight initialization and batch shuffling. But fixing the seed is not enough for full reproducibility, see:
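A fuller seeding setup looks roughly like this (a sketch, not a guarantee; with a TensorFlow backend you would also need to seed TensorFlow itself, e.g. tf.random.set_seed in TF 2.x):

```python
import os
import random
import numpy as np

def fix_seeds(seed=1):
    # Note: PYTHONHASHSEED only affects hash randomization when it is set
    # before the interpreter starts; shown here for completeness.
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)      # Python's built-in RNG
    np.random.seed(seed)   # NumPy RNG, used by Keras weight initializers
    # With a TensorFlow backend, additionally seed TensorFlow itself
    # (e.g. tf.random.set_seed(seed) in TF 2.x); full determinism on GPU
    # may also require single-threaded / deterministic ops.

fix_seeds(1)
a = np.random.rand(3)
fix_seeds(1)
b = np.random.rand(3)
print(np.allclose(a, b))  # True
```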
Previous answer:
Neural network training gives random results due to
- random weight initialization
- random batch splitting/shuffling in SGD-style algorithms such as Adam
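Both sources draw from the global NumPy RNG, which is why two runs diverge unless the seed is fixed. A toy NumPy-only sketch (an illustration, not actual Keras internals):

```python
import numpy as np

def init_and_shuffle(n_samples=100, n_features=784, n_units=10):
    # 'uniform' kernel initializer: weights drawn from the global RNG
    weights = np.random.uniform(-0.05, 0.05, size=(n_features, n_units))
    # batch shuffling consumes the same RNG stream
    order = np.random.permutation(n_samples)
    return weights, order

# Unseeded: two runs give different weights and batch orders
w1, _ = init_and_shuffle()
w2, _ = init_and_shuffle()
print(np.allclose(w1, w2))  # False

# Seeded: the runs become identical
np.random.seed(1)
w3, o3 = init_and_shuffle()
np.random.seed(1)
w4, o4 = init_and_shuffle()
print(np.allclose(w3, w4), np.array_equal(o3, o4))  # True True
```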
For example, this code
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten

def run():
    classifier = Sequential()
    classifier.add(Flatten(input_shape=(28, 28)))
    classifier.add(Dense(10, kernel_initializer='uniform', activation='relu'))
    classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    X_train, y_train = keras.datasets.mnist.load_data()[0]
    X_train = X_train[:100]  # for example
    y_train = keras.utils.to_categorical(y_train)[:100]
    classifier.fit(X_train, y_train, batch_size=10, epochs=100)
gives a different result on each run.
>>> run()
Epoch 1/100
100/100 [==============================] - 0s 4ms/step - loss: 10.1763 - acc: 0.1700
...
Epoch 100/100
100/100 [==============================] - 0s 2ms/step - loss: 4.5131 - acc: 0.4700
>>> run()
Epoch 1/100
100/100 [==============================] - 0s 5ms/step - loss: 7.2993 - acc: 0.2000
...
Epoch 100/100
100/100 [==============================] - 0s 2ms/step - loss: 0.8059 - acc: 0.7000
You can fix the seed of the Keras random generator (which is NumPy's) to get reproducible results.
>>> np.random.seed(1)
>>> run()
Epoch 1/100
100/100 [==============================] - 0s 5ms/step - loss: 7.6193 - acc: 0.1500
...
Epoch 100/100
100/100 [==============================] - 0s 2ms/step - loss: 0.3224 - acc: 0.6400
>>> np.random.seed(1)
>>> run()
Epoch 1/100
100/100 [==============================] - 0s 5ms/step - loss: 7.6193 - acc: 0.1500
...
Epoch 100/100
100/100 [==============================] - 0s 2ms/step - loss: 0.3224 - acc: 0.6400
https://github.com/keras-team/keras/issues/2743#issuecomment-219777627
P.S. The code may give very different results if there are problems with the data or the model (as in this MNIST example, with too little data and too simple a model). 90% could be just overfitting. Check the classifier on independent test data.
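A toy illustration of why a held-out test set matters (a hypothetical "memorizer" model, nothing Keras-specific): a model that simply memorizes its training examples can score near-perfectly on them while performing at chance on new data.

```python
import numpy as np

# Random binary features with labels that are independent of the
# features, so there is nothing real to learn.
rng = np.random.RandomState(0)
X = rng.randint(0, 2, size=(200, 20))
y = rng.randint(0, 2, size=200)

X_train, y_train = X[:100], y[:100]
X_test, y_test = X[100:], y[100:]

# "Model": a lookup table of training rows; majority class otherwise.
lookup = {tuple(x): int(label) for x, label in zip(X_train, y_train)}
majority = int(y_train.mean() >= 0.5)

def predict(x):
    return lookup.get(tuple(x), majority)

train_acc = np.mean([predict(x) == t for x, t in zip(X_train, y_train)])
test_acc = np.mean([predict(x) == t for x, t in zip(X_test, y_test)])
# train_acc is (near-)perfect, test_acc is around chance level
print(train_acc, test_acc)
```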