I am trying to use the dropout layers of my model at inference time to measure model uncertainty, as described in the Monte Carlo dropout method outlined by Yarin Gal.
A solution is described in this post: How to calculate prediction uncertainty using Keras?, which defines a new Keras function:
self.f = K.function([self.graph.layers[0].input, K.learning_phase()], [self.graph.layers[-1].output])
However, this method does not hold if the model contains batch normalisation layers, because in the learning phase these layers ignore the mean and variance learned during training and instead compute new statistics from the current batch.
Hence, I am looking for a way to set the batch normalisation layers' training parameter to false while keeping the dropout layers in training mode.
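To illustrate the behaviour I am after, here is a sketch (assuming tf.keras 2.x; the `MCDropout` name and the toy model are mine, not part of my actual code): subclassing `Dropout` so it always runs in training mode means the whole model can be called with `training=False`, keeping batch normalisation in inference mode while dropout keeps sampling.

```python
import numpy as np
import tensorflow as tf

# Hypothetical subclass: dropout that stays active even when the model
# is called with training=False, so batch normalisation can keep using
# its learned moving mean/variance.
class MCDropout(tf.keras.layers.Dropout):
    def call(self, inputs, training=None):
        return super().call(inputs, training=True)

# Tiny stand-in model; the real one would be the EfficientNetB0 graph.
inp = tf.keras.Input(shape=(8,))
h = tf.keras.layers.Dense(16, activation='relu')(inp)
h = tf.keras.layers.BatchNormalization()(h)
h = MCDropout(0.5)(h)
out = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model(inp, out)

# T stochastic forward passes with training=False: batch norm uses its
# moving statistics, dropout still samples. The spread across passes
# gives the uncertainty estimate from Gal's method.
x = np.ones((1, 8), dtype='float32')
preds = np.stack([model(x, training=False).numpy() for _ in range(20)])
mean, std = preds.mean(axis=0), preds.std(axis=0)
```

The drawback for my case is that this requires rebuilding the model with the subclassed layer, rather than toggling flags on the already-trained EfficientNet graph.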
I am using the Keras EfficientNetB0 as my model, trained on custom data: keras_efficientNet
I already tried to change the layer settings myself:
```
for layer in self.graph.layers[4].layers:
    if 'batch_norm' in layer.name:
        layer._trainable = False
        layer._inbound_nodes[0].output_tensors[0]._uses_learning_phase = False
        layer._inbound_nodes[0].input_tensors[0]._uses_learning_phase = False
    if 'dropout' in layer.name:
        layer._inbound_nodes[0].output_tensors[0]._uses_learning_phase = True

for weight in self.graph.layers[4].weights:
    if 'batch_norm' in weight.name:
        weight._trainable = False
```
Nonetheless, none of this worked.
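One alternative I am considering, instead of mutating private attributes (a sketch, assuming tf.keras 2.x; `force_dropout` and the toy model are my own names, and for a nested submodel like `self.graph.layers[4]` the cloning may need to be applied to that inner model): rebuild the model with `tf.keras.models.clone_model`, swapping every `Dropout` layer for one that always drops, while batch normalisation and all other layers are cloned unchanged.

```python
import tensorflow as tf

def force_dropout(layer):
    """Clone helper: swap Dropout for a layer that always drops."""
    if isinstance(layer, tf.keras.layers.Dropout):
        rate = layer.rate
        # tf.nn.dropout applies dropout unconditionally (it has no
        # training flag), so the clone drops even with training=False.
        return tf.keras.layers.Lambda(
            lambda x, rate=rate: tf.nn.dropout(x, rate=rate))
    # Default behaviour: clone the layer from its config (BatchNormalization
    # included), so it still respects training=False at call time.
    return layer.__class__.from_config(layer.get_config())

# Tiny stand-in model; the real one would be the trained EfficientNetB0.
inp = tf.keras.Input(shape=(10,))
h = tf.keras.layers.Dense(32, activation='relu')(inp)
h = tf.keras.layers.Dropout(0.5)(h)
out = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model(inp, out)

mc_model = tf.keras.models.clone_model(model, clone_function=force_dropout)
# Lambda has no weights, so the weight lists line up one-to-one.
mc_model.set_weights(model.get_weights())
```

Calling `mc_model(x, training=False)` repeatedly should then give stochastic predictions with batch normalisation kept in inference mode, though I have not yet verified this on the nested EfficientNet graph.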