I thought my neural network would be reproducible, but it is not! The results are not dramatically different, but the loss, for example, differs by about 0.1 from one run to the next. So here is my code!
# Make the code reproducible
from numpy.random import seed
seed(0)
from tensorflow import set_random_seed
set_random_seed(0)
# Import datasets (training and test)
import pandas as pd
poker_train = pd.read_csv("C:/Users/elihe/Documents/Studium Master/WS 19 und 20/Softwareprojekt/poker-hand-training-true.data")
poker_test = pd.read_csv("C:/Users/elihe/Documents/Studium Master/WS 19 und 20/Softwareprojekt/poker-hand-testing.data")
from sklearn.preprocessing import OneHotEncoder
# Split the training and test sets into inputs and outputs
X_tr = poker_train.iloc[:, 0:10].values
y_tr = poker_train.iloc[:, 10:11].values
X_te = poker_test.iloc[:, 0:10].values
y_te = poker_test.iloc[:, 10:11].values
# Turn the output into 0-1 vectors (one-hot encoding)
encode = OneHotEncoder(categories = 'auto')
y_train = encode.fit_transform(y_tr).toarray()
y_test = encode.fit_transform(y_te).toarray()
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_tr)
X_test = sc.transform(X_te)
# Build the NN with Keras
import keras
from keras.models import Sequential
from keras.layers import Dense
nen = Sequential()
nen.add(Dense(400, input_dim = 10, activation = 'sigmoid'))
nen.add(Dense(400, activation = 'sigmoid'))
nen.add(Dense(10, activation = 'softmax'))
from keras.optimizers import RMSprop
nen.compile(loss='binary_crossentropy', optimizer=RMSprop(0.001), metrics=['accuracy'])
nen_fit = nen.fit(X_train, y_train, epochs=30, batch_size=15, verbose=1, validation_split=0.2, shuffle=False)
I thought I had made it reproducible with the first few lines... can someone help? I googled a lot, but nothing helped. Is a small difference like this just normal? I would like to make it exactly (!) reproducible.
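The closest thing I found while googling was a fuller seeding setup along the lines below. I'm not sure whether it is correct or complete for my case, so please treat it as a sketch; it assumes TF 1.x with the TensorFlow backend of Keras, and all of it would have to run before building the model.

import os
os.environ['PYTHONHASHSEED'] = '0'  # fix Python's hash seed
import random
import numpy as np
import tensorflow as tf
random.seed(0)          # seed Python's built-in RNG
np.random.seed(0)       # seed numpy's RNG
tf.set_random_seed(0)   # seed TensorFlow's RNG (TF 1.x API)
# run single-threaded so the order of floating-point operations is fixed
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)

Would something like this be the right direction?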
Btw, I am German :) and you have to know that I am new to neural networks!