I was playing around with tf.keras and called the predict() method on two Model objects that I initialized with the same weights.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import LSTM, Masking, Input, Embedding, Dense
from tensorflow.keras.models import Model
tf.enable_eager_execution()  # TF 1.x; eager execution is on by default in TF 2.x
np.random.seed(10)
# Toy integer sequences fed to the Embedding layer.
X = np.asarray([
    [0, 1, 2, 3, 3],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
])
y = [0, 1, 1]  # labels are defined but never used; no training happens below
seq_len = X.shape[1]
# First model: Embedding -> LSTM -> Dense, with default (random) initialization.
inp = Input(shape=[seq_len])
emb = Embedding(4, 10, name='embedding')(inp)
x = emb
x = LSTM(5, return_sequences=False, name='lstm')(x)
out = Dense(1, activation='sigmoid', name='out')(x)
model = Model(inputs=inp, outputs=out)
model.summary()
preds = model.predict(X)
# Second model: each layer is given get_weights()[0] of the corresponding layer
# of the first model through the `weights` argument.
inp = Input(shape=[seq_len])
emb = Embedding(4, 10, name='embedding', weights=model.get_layer('embedding').get_weights()[0])(inp)
x = emb
x = LSTM(5, return_sequences=False, weights=model.get_layer('lstm').get_weights()[0])(x)
out = Dense(1, activation='sigmoid', weights=model.get_layer('out').get_weights()[0])(x)
model_2 = Model(inputs=inp, outputs=out)
model_2.summary()
preds_2 = model_2.predict(X)
print(preds, preds_2)
I am not sure why, but the two predictions are different. This is the output of the print call (you might get slightly different numbers):
[[0.5027414 ]
[0.5019673 ]
[0.50134844]] [[0.5007331]
[0.5002397]
[0.4996575]]
I am trying to understand how Keras works, so any explanation would be appreciated. Thank you.

NOTE: there is no training involved here, so I don't see where the randomness comes from.
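If it helps, here is a quick sketch of how one could compare the weights of the two models layer by layer with NumPy (just a diagnostic, not part of the experiment above; it assumes the model and model_2 objects built earlier):

for layer_1, layer_2 in zip(model.layers, model_2.layers):
    # Input layers have no weights, so they compare as equal trivially.
    w_1 = layer_1.get_weights()
    w_2 = layer_2.get_weights()
    same = len(w_1) == len(w_2) and all(
        np.array_equal(a, b) for a, b in zip(w_1, w_2)
    )
    print(layer_1.name, '<->', layer_2.name, 'match' if same else 'differ')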