Note: I already read "keras forward pass with tensorflow variable as input" but it did not help.
I'm training an unsupervised auto-encoder neural network with Keras on the MNIST dataset:
import keras, cv2
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense

(x_train, y_train), (x_test, y_test) = mnist.load_data()
# flatten the 28x28 images into 784-dimensional vectors and scale to [0, 1]
x_train = x_train.reshape(60000, 784).astype('float32') / 255.0
x_test = x_test.reshape(10000, 784).astype('float32') / 255.0

# auto-encoder: 784 -> 100 -> 10 (bottleneck) -> 100 -> 784
model = Sequential()
model.add(Dense(100, activation='sigmoid', input_shape=(784,)))
model.add(Dense(10, activation='sigmoid'))
model.add(Dense(100, activation='sigmoid'))
model.add(Dense(784, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer='sgd')

history = model.fit(x_train, x_train, batch_size=1, epochs=1, verbose=0)
Then I would like to get the output vector when the input vector is x_test[i]:
for i in range(100):
    x = x_test[i]
    a = model(x)
    cv2.imshow('img', a.reshape(28, 28))
    cv2.waitKey(0)
but I get this error:
All inputs to the layer should be tensors.
How should I modify this code to do a forward pass of an input vector through the network and get a vector in return?
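From what I understand, model.predict expects a batch rather than a single 1-D NumPy array, so I guess something like the following is what I need (an untested sketch of my guess):

for i in range(100):
    # add a leading batch dimension: shape (1, 784)
    x = x_test[i].reshape(1, 784)
    a = model.predict(x)  # a should have shape (1, 784)
    cv2.imshow('img', a[0].reshape(28, 28))
    cv2.waitKey(0)

Is that the intended way to run a single sample through the model, or is there a more direct call for a single vector?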
Also, how can I get the activation after, say, the 2nd layer? That is, don't propagate all the way to the last layer, but return the output of the 2nd layer instead.
Example: input: a vector of size 784, output: a vector of size 10.
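For that part, I imagine I could build a second model that reuses the trained layers and stops at the size-10 bottleneck, something like this (again just a sketch of my guess, not verified; "encoder" is my own naming):

from keras.models import Model

# reuse the trained layers but stop at the 2nd Dense layer (the size-10 bottleneck)
encoder = Model(inputs=model.input, outputs=model.layers[1].output)
code = encoder.predict(x_test[0].reshape(1, 784))  # expected shape: (1, 10)
print(code)

Is wrapping the trained layers in a second Model like this the standard approach, or is there a cleaner way?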