
I have built a Sequential Keras model with three layers: a GaussianNoise layer, a hidden Dense layer, and an output layer with the same dimension as the input. For this, I'm using the Keras package that ships with TensorFlow 2.0.0-beta1. I'd like to get the output of the hidden layer while bypassing the GaussianNoise layer, since the noise is only needed during the training phase.

To achieve this, I followed the instructions in https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer, which are also described in the answers to "Keras, How to get the output of each layer?".

I have tried the following, building the functor as described in the official Keras documentation:

from tensorflow import keras
from tensorflow.keras import backend as K

dae = keras.Sequential([
    keras.layers.GaussianNoise(0.001, input_shape=(10,)),
    keras.layers.Dense(80, name="hidden", activation="relu"),
    keras.layers.Dense(10)
])

optimizer = keras.optimizers.Adam()
dae.compile(loss="mse", optimizer=optimizer, metrics=["mae"])

# Here the fitting process...
# dae.fit( · )

# Attempting to retrieve an encoder functor.
encoder = K.function([dae.input, K.learning_phase()],
                     [dae.get_layer("hidden").output])

However, when K.learning_phase() is used to create the Keras backend functor, I get the error:

Traceback (most recent call last):
  File "/anaconda3/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 534, in _scratch_graph
    yield graph
  File "/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 3670, in __init__
    base_graph=source_graph)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/eager/lift_to_graph.py", line 249, in lift_to_graph
    visited_ops = set([x.op for x in sources])
  File "/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/eager/lift_to_graph.py", line 249, in <listcomp>
    visited_ops = set([x.op for x in sources])
AttributeError: 'int' object has no attribute 'op'

The code works fine if I don't include K.learning_phase(), but I need to make sure that the output of my hidden layer is evaluated over an input that is not polluted with noise (i.e., in "test" mode, not "training" mode).

I know my other option is to create a new model from the original denoising autoencoder, but can anyone point out why my approach of creating the functor as officially documented fails?
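For reference, the "other option" mentioned above can be sketched as follows. This is a minimal sketch, assuming the same architecture and layer names as in the snippet: a second `keras.Model` shares the trained layers but cuts off at `"hidden"`, and calling it with `training=False` keeps the GaussianNoise layer inactive.

```python
import numpy as np
from tensorflow import keras

# Same architecture as in the question.
dae = keras.Sequential([
    keras.layers.GaussianNoise(0.001, input_shape=(10,)),
    keras.layers.Dense(80, name="hidden", activation="relu"),
    keras.layers.Dense(10)
])

# A second model sharing the (trained) layers, cut off at "hidden".
encoder = keras.Model(inputs=dae.input, outputs=dae.get_layer("hidden").output)

x = np.random.randn(32, 10)  # toy data
hidden_out = encoder(x, training=False)  # training=False bypasses GaussianNoise
print(hidden_out.shape)  # (32, 80)
```

Because the sub-model reuses the same layer objects, any weights learned by fitting `dae` are visible through `encoder` without copying.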

  • It'd help if you shared your full model code - or its smallest version for a minimal reproducible example. Also, if using `tensorflow.keras.backend`, make sure all your layers come from `tensorflow.keras`, rather than `keras`, for compatibility reasons – OverLordGoldDragon Sep 28 '19 at 21:45
  • @OverLordGoldDragon I have added a simple code snippet that in my case fails when building the `encoder` functor. – 영민 카이 앤절 Sep 28 '19 at 21:56
  • Strange, no errors for me - are your packages up-to-date? Also, `encoder` won't get you the outputs - but I included a complete script in my answer that does. Let me know if it doesn't work. (Also, if not using already, I'd strongly recommend [Anaconda](https://anaconda.org/) for your python packages, as it ensures there are no conflicts) – OverLordGoldDragon Sep 28 '19 at 22:05

1 Answer


First, ensure your packages are up-to-date, as your script works fine for me. Second, `encoder` alone won't get you the outputs - continuing from your snippet after `# Here the fitting process...`:

import numpy as np

x = np.random.randn(32, 10)  # toy data
y = np.random.randn(32, 10)  # toy labels
dae.fit(x, y)  # run one iteration

encoder = K.function([dae.input, K.learning_phase()],
                     [dae.get_layer("hidden").output])
outputs = encoder([x, int(False)])[0]  # K.function returns a list of length 1
print(outputs.shape)
# (32, 80)

However, as of TensorFlow 2.0.0-rc2, this will not work with eager execution enabled - disable it via:

tf.compat.v1.disable_eager_execution()
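Putting the pieces together - a sketch of the full script in the order it must run, with eager disabled *before* the model is built (verified against TF 2.0.x-era behavior only; later releases may differ):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K

# Eager must be disabled before any model or K.function is created.
tf.compat.v1.disable_eager_execution()

dae = keras.Sequential([
    keras.layers.GaussianNoise(0.001, input_shape=(10,)),
    keras.layers.Dense(80, name="hidden", activation="relu"),
    keras.layers.Dense(10)
])
dae.compile(loss="mse", optimizer="adam", metrics=["mae"])

encoder = K.function([dae.input, K.learning_phase()],
                     [dae.get_layer("hidden").output])

x = np.random.randn(32, 10)  # toy data
outputs = encoder([x, 0])[0]  # learning phase 0 = test mode, GaussianNoise off
print(outputs.shape)
```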
OverLordGoldDragon
  • Well, it's unfortunate, but this works only for an environment with tensorflow 1.14 (the stable release version). I upgraded all of my packages via Anaconda, and I did not find a way to use `conda install` for **tensorflow 2.0.0-rc2**. So, I had to use `pip install` for TF 2. – 영민 카이 앤절 Sep 28 '19 at 23:01
  • @YoungMin It's why I pass on betas, especially for something as massive as TF that's not nearly bug-free even out of beta. One way you can improve compatibility is, put your TF2 install where your TF1 was, in Anaconda folders - then run `conda update --all` - did this once myself, worked (but no promises; back up your working conda environment just in case). Lastly, try passing in `int(0)` instead of `K.learning_phase()` – OverLordGoldDragon Sep 28 '19 at 23:26
  • @YoungMin Also, I just looked through the source code, and a relevant function was changed: `Function()`, which was used to evaluate `function()`; instead, it's now being imported from `tensorflow.python.keras.backend` as `tf_keras_backend`, and ultimately running via the `EagerExecutionFunction` class [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/backend.py#L3577), which _is_ pointed to by your error trace; it seems `int(0)` may actually be worse, as `.op` must evaluate to something valid – OverLordGoldDragon Sep 28 '19 at 23:30
  • @YoungMin The _line numbers_, however, don't agree - and the link is from the master branch of TensorFlow. I'm unsure if rc2 is currently in master, but if it isn't, that's a strong reason to not use it yet - but if it is, your package's not up to date – OverLordGoldDragon Sep 28 '19 at 23:35
  • Thanks for your help on this. I upgraded TF from beta to RC2, so I have updated the traceback log in my original post. I think I'll use the solution that involves creating a new model per the instructions given in the Keras webpage. Moving from TF 1 to 2 will require them to greatly update their documentation online. – 영민 카이 앤절 Sep 28 '19 at 23:46
  • @YoungMin The lines are closer now, but still off - note that your line 3670 is master's [3652](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/backend.py#L3652). Now, did you try running it without eager? Eager won't make it any easier to see the outputs with the code you're using - and it may be the source of the bug. If disabling it doesn't work, open up [this file on this line](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/eager/lift_to_graph.py#L248) locally, and add `print(sources)` there - then rerun everything and see output – OverLordGoldDragon Sep 28 '19 at 23:52
  • Yeah, it worked without eager execution! Thanks @OverLordGoldDragon! – 영민 카이 앤절 Sep 29 '19 at 01:14