I have a Keras MLP with a single hidden layer (a multilayer perceptron with a configurable number of nodes in that layer). When a batch is passed through the network, I want to extract the activation values of every neuron in the hidden layer, do that at every epoch, and store the results in a list to explore later. My model is defined as follows.
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

class myNetwork:
    # Architecture of our neural network: one hidden layer, softmax output.
    def multilayerPerceptron(self, Num_Nodes_hidden, input_features, output_dims,
                             activation_function='relu', learning_rate=0.001,
                             momentum_val=0.00):
        model = Sequential()
        model.add(Dense(Num_Nodes_hidden, input_dim=input_features,
                        activation=activation_function))
        model.add(Dense(output_dims, activation='softmax'))
        model.compile(loss="categorical_crossentropy",
                      optimizer=SGD(lr=learning_rate, momentum=momentum_val),
                      metrics=['accuracy'])
        return model
Below is the call from another part of my code, where I use LambdaCallback to save the weights after each epoch. I want something similar, but this time saving the actual activation values of the hidden layer.
from keras.callbacks import LambdaCallback
from keras.callbacks import ModelCheckpoint
from keras.callbacks import CSVLogger
import pickle

# Setting parameters and building the model.
val = myNetwork()
vals = val.multilayerPerceptron(8, 4, 3, 'relu', 0.01)

batch_size_val = 20
number_iters = 200
weights_ih = []
weights_ho = []
activation_vals = []

csv_logger = CSVLogger('training.log')  # defined here so the fit() call below runs
get_activation = LambdaCallback(on_epoch_end=lambda epoch, logs: activation_vals.append("What should I put Here"))
print_weights = LambdaCallback(on_epoch_end=lambda epoch, logs: weights_ih.append(vals.layers[0].get_weights()))
print_weights_1 = LambdaCallback(on_epoch_end=lambda epoch, logs: weights_ho.append(vals.layers[1].get_weights()))

history_callback = vals.fit(X_train, Y_train,
                            batch_size=batch_size_val,
                            epochs=number_iters,
                            verbose=0,
                            validation_data=(X_test, Y_test),
                            callbacks=[csv_logger, print_weights, print_weights_1, get_activation])
I am stuck on what to put inside the get_activation callback. What should go there so that, for each epoch, I get the activation values of the hidden layer for all the samples, evaluated with that epoch's weights?
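One possible sketch of an answer (this is my assumption about the setup, not code from the post): since there is only one hidden layer, you can call the hidden Dense layer object directly on the input array inside the callback. The layer is the same object being trained, so each call uses the current epoch's weights. The data shapes below (4 features, 3 one-hot classes) and the 3-epoch run are placeholders standing in for the real X_train / Y_train and number_iters.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import LambdaCallback

# Stand-in data with the shapes from the post (4 features, 3 classes);
# the real X_train / Y_train would replace these.
rng = np.random.default_rng(0)
X_train = rng.random((40, 4)).astype("float32")
Y_train = np.eye(3, dtype="float32")[rng.integers(0, 3, 40)]

# Keep a handle on the hidden layer so the callback can evaluate it later.
hidden = Dense(8, activation="relu")
model = Sequential([
    Input(shape=(4,)),
    hidden,
    Dense(3, activation="softmax"),
])
model.compile(loss="categorical_crossentropy",
              optimizer=SGD(learning_rate=0.01, momentum=0.0))

activation_vals = []

# Calling the hidden layer directly applies the *current* weights, so
# appending its output on_epoch_end records one activation array per epoch.
get_activation = LambdaCallback(
    on_epoch_end=lambda epoch, logs: activation_vals.append(
        np.asarray(hidden(X_train))
    )
)

model.fit(X_train, Y_train, epochs=3, batch_size=20, verbose=0,
          callbacks=[get_activation])

# activation_vals[e] has shape (num_samples, 8): one row per sample,
# one column per hidden neuron, captured at the end of epoch e.
```

With deeper networks this direct-call trick no longer works (the hidden layer would need the previous layers' outputs as input); there you would instead build a second Model that maps the network's input to the intermediate layer's output and call its predict inside the same callback.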