from keras.models import load_model
from keras import backend as K

history = model.fit(x_spectro_train, y_train_onehot, batch_size=batch_size, epochs=training_epochs,
                    validation_data=(x_spectro_test, y_test_onehot), shuffle=True,
                    callbacks=callbacks_list, class_weight=class_weights, verbose=1)

# reload the best checkpoint saved during training
model = load_model(model_name)
predict_prob_train = model.predict(x_spectro_train, batch_size=batch_size)

# build one evaluation function per layer and run them all in test mode
inp = model.input                                              # input placeholder
outputs = [layer.output for layer in model.layers]             # all layer outputs
functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]   # evaluation functions
layer_outs = [func([x_spectro_train, 0.]) for func in functors]                # test mode (0.), train mode (1.)
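For reference, callbacks_list is built roughly like this (the monitored quantity and the checkpoint settings below are placeholders, not necessarily my exact configuration):

from keras.callbacks import ModelCheckpoint

# keep only the weights that achieve the best validation accuracy
checkpoint = ModelCheckpoint(model_name, monitor='val_acc', mode='max',
                             save_best_only=True, verbose=1)
callbacks_list = [checkpoint]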

I want to save the CNN layer outputs and train an SVM model on them (on the extracted features, not on the predicted probabilities).
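Concretely, what I have in mind is something like the following (the layer name 'dense_1' and the integer labels y_train are placeholders for illustration):

from keras.models import Model
from sklearn.svm import SVC

# run the trained model up to a chosen intermediate layer and use its
# activations as features for the SVM
feature_model = Model(inputs=model.input, outputs=model.get_layer('dense_1').output)
features_train = feature_model.predict(x_spectro_train, batch_size=batch_size)

svm = SVC()
svm.fit(features_train, y_train)   # y_train: integer class labels, not one-hot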

So I used the code from "Keras, How to get the output of each layer?" (shown above) and inspected the result.

But the result from the CNN layers is different from the result of model.predict. During training I monitor the validation accuracy, save the best model, and load it afterwards. This is the structure of my model (image below).

[image: model structure / layer summary]

I expected the result of layer_outs[13] (the last layer) to be the same as predict_prob_train. However, the results are different (image below).

[image: layer_outs[13] values compared with predict_prob_train]

Why are the results different?

Jeonghwa Yoo

1 Answer


You have 7 layers after the Conv layer (2 of which are Dense). They also learn things, and they are 'making the decision' about the model's output.

Think about it like this: the Conv layer outputs something, and that output is the input to Dense1 -> Dense2. All of those layers are learning simultaneously, so the goal of the Dense1 layer is to learn what the Conv layer is 'trying to tell it', i.e. how to interpret the Conv layer's results. If you fed the image to Dense1 and then to Dense2 on their own, you would not get the same result (nor a correct one). All of the layers work together to produce the correct prediction.

You cannot isolate one layer and expect the correct result.
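If you do want the intermediate activations (for example to feed an SVM), take them from the trained model so that the whole chain up to each layer still runs. A minimal sketch, assuming the model has already been loaded from the checkpoint:

from keras.models import Model

# one forward pass through the full chain, returning every layer's output
activation_model = Model(inputs=model.input,
                         outputs=[layer.output for layer in model.layers])
all_activations = activation_model.predict(x_spectro_train, batch_size=batch_size)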

Novak
  • Thank you. Then what does layer_outs mean? I think these are the results of each layer when all layers are being trained simultaneously, so I thought layer_outs[13] was the result of the last dense layer from the model that was trained with all the layers together. If not, what do the results mean? – Jeonghwa Yoo Nov 07 '18 at 15:04
  • Layer_out means the output of the model, i.e. the output of the model's last layer. That is what you have to interpret; that is the result. – Novak Nov 07 '18 at 15:29
  • I mean the list named 'layer_outs', which has 13 elements; you can see it in my first picture. There are 13 NumPy arrays, and I thought the NumPy array at index 13 in 'layer_outs' was the output of the model trained with all the layers. However, that result is different from model.predict. – Jeonghwa Yoo Nov 07 '18 at 15:42
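To see which layer each element of layer_outs actually comes from, the layer names and output shapes can be printed side by side (a small sketch that assumes the code from the question has already been run):

# map each element of layer_outs back to the layer that produced it;
# each func([...]) returns a one-element list, hence out[0]
for layer, out in zip(model.layers, layer_outs):
    print(layer.name, out[0].shape)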