
Is it possible to access pre-activation tensors in a Keras Model? For example, given this model:

import tensorflow as tf
image_ = tf.keras.Input(shape=[224, 224, 3], batch_size=1)
vgg19 = tf.keras.applications.VGG19(
    include_top=False,
    weights='imagenet',
    input_tensor=image_,
    input_shape=image_.shape[1:],
    pooling=None
)

the usual way to access layers is:

intermediate_layer_model = tf.keras.models.Model(inputs=image_, outputs=[vgg19.get_layer('block1_conv2').output])
intermediate_layer_model.summary()

This gives the ReLU outputs for a layer, while I would like the ReLU inputs. I tried doing this:

graph = tf.function(vgg19, [tf.TensorSpec.from_tensor(image_)]).get_concrete_function().graph
outputs = [graph.get_tensor_by_name(tname) for tname in [
    'vgg19/block4_conv3/BiasAdd:0',
    'vgg19/block4_conv4/BiasAdd:0',
    'vgg19/block5_conv1/BiasAdd:0'
]]
intermediate_layer_model = tf.keras.models.Model(inputs=image_, outputs=outputs)
intermediate_layer_model.summary()

but I get the error

ValueError: Unknown graph. Aborting.

The only workaround I've found is to edit the model file to manually expose the intermediates, turning every layer like this:

x = layers.Conv2D(256, (3, 3), activation="relu", padding="same", name="block3_conv1")(x)

into two layers, where the first one can be accessed before the activation:

x = layers.Conv2D(256, (3, 3), activation=None, padding="same", name="block3_conv1")(x)
x = layers.ReLU(name="block3_conv1_relu")(x)

Is there a way to access pre-activation tensors in a Model without essentially editing the TensorFlow 2 source code, or reverting to TensorFlow 1, which had full flexibility in accessing intermediates?

Szabolcs
  • Does [this](https://github.com/tensorflow/tensorflow/issues/33129#issuecomment-653138256) answer your question? –  Oct 19 '20 at 15:19
  • That solution is exactly what's not working for me. It doesn't work for the person responding to that post either but they get a different error. – Szabolcs Oct 19 '20 at 20:50

2 Answers


There is a way to access the pre-activation tensors of pretrained Keras models; the following works with TF version 2.7.0. Here's how to access two intermediate pre-activation outputs from VGG19 in a single forward pass.

Initialize the VGG19 model. We can omit the top layers to avoid loading unnecessary parameters into memory.

import tensorflow as tf

vgg19 = tf.keras.applications.VGG19(
    include_top=False,
    weights="imagenet"
)

This is the important part: create a deep copy of the intermediate layer from which you'd like the pre-activation features, change the activation of the conv layer to linear (i.e. no activation), rename the layer (otherwise two layers in the model would share the same name, which raises errors), and finally pass the output of the previous layer through the copied conv layer.

from copy import deepcopy

# for more intermediate features, wrap a loop around this (see the sketch below)
# copy the conv layer, remove its ReLU, and rewire it to the previous layer's output
b5c4_layer = deepcopy(vgg19.get_layer("block5_conv4"))
b5c4_layer.activation = tf.keras.activations.linear
b5c4_layer._name = b5c4_layer.name + "_preact"
b5c4_preact_output = b5c4_layer(vgg19.get_layer("block5_conv3").output)

b2c2_layer = deepcopy(vgg19.get_layer("block2_conv2"))
b2c2_layer.activation = tf.keras.activations.linear
b2c2_layer._name = b2c2_layer.name + "_preact"
b2c2_preact_output = b2c2_layer(vgg19.get_layer("block2_conv1").output)
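For several intermediate layers, the same copy-and-rewire steps can be wrapped in a loop. Here is a minimal sketch, assuming each target conv layer takes its input from the layer immediately preceding it in vgg19.layers (true for VGG19's plain sequential topology); the two layer names are just an example selection:

layer_names = [layer.name for layer in vgg19.layers]
preact_outputs = []
for name in ["block2_conv2", "block5_conv4"]:
    layer_copy = deepcopy(vgg19.get_layer(name))
    layer_copy.activation = tf.keras.activations.linear  # drop the ReLU
    layer_copy._name = name + "_preact"                  # avoid duplicate layer names
    prev_output = vgg19.layers[layer_names.index(name) - 1].output
    preact_outputs.append(layer_copy(prev_output))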

Finally, get the outputs and check that applying the ReLU activation to them reproduces the post-activation outputs.

from tensorflow.keras.models import Model
import numpy as np

# `img` is an input image batch of shape (1, height, width, 3)
vgg19_features = Model(vgg19.input, [b2c2_preact_output, b5c4_preact_output])
vgg19_features_control = Model(
    vgg19.input,
    [vgg19.get_layer("block2_conv2").output, vgg19.get_layer("block5_conv4").output]
)

b2c2_preact, b5c4_preact = vgg19_features(tf.keras.applications.vgg19.preprocess_input(img))
b2c2, b5c4 = vgg19_features_control(tf.keras.applications.vgg19.preprocess_input(img))

print(np.allclose(tf.keras.activations.relu(b2c2_preact).numpy(), b2c2.numpy()))
print(np.allclose(tf.keras.activations.relu(b5c4_preact).numpy(), b5c4.numpy()))
# True
# True
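Since both pre-activation tensors are outputs of the same Model, a single forward pass through vgg19_features yields all of them at once; there is no need to run the network once per intermediate output.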

Here's a visualization similar to Fig. 6 of Wang et al. showing the effect in feature space:

[figure: VGG19 intermediate features]

[figure: input image]

Tinu
  • Technically answers the question so I accepted it, but you still need to run a network N times to get N intermediate outputs out, while in TF1 you could run it just once. – Szabolcs Feb 04 '22 at 17:55
  • Thanks. I totally agree with the limitation you pointed out. I'm pretty sure there is a more efficient way to do it in a single forward pass. – Tinu Feb 04 '22 at 19:28
  • @Szabolcs I edited my answer and improved the solution to get the intermediate outputs in a single forward pass. – Tinu Feb 05 '22 at 11:03
  • Very very nice solution! – Szabolcs Feb 05 '22 at 23:12

To get the output of each layer, you have to define a Keras function and evaluate it for each layer.

Please refer to the code shown below:

from tensorflow.keras import backend as K

# `model` is an existing Keras model, e.g. the VGG19 from the question
inp = model.input                                           # input tensor
outputs = [layer.output for layer in model.layers]          # all layer outputs
functors = [K.function([inp], [out]) for out in outputs]    # evaluation functions
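A minimal usage sketch, assuming the model and functors from the snippet above and a 224x224 RGB input; the random test input is just a hypothetical stand-in for a real image batch:

import numpy as np

test_input = np.random.random((1, 224, 224, 3)).astype("float32")  # hypothetical input
layer_outs = [func([test_input]) for func in functors]             # one evaluation per layer
print([out[0].shape for out in layer_outs])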

For more details, please refer to this SO answer.