
I'm trying to build a Convolutional Variational Autoencoder (CVAE), and therefore I have to build the vae_loss() function, which is a combination of an MSE and a KL-divergence loss. It looks as follows:

def vae_loss(y_true, y_pred):
    # mse loss
    reconstruction_loss = K.sum(K.square(y_true - y_pred), axis=-1)
    # kl loss
    kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
    kl_loss = K.sum(kl_loss, axis=-1)
    kl_loss *= -0.5
    weight = 0.
    return reconstruction_loss + (weight * kl_loss)

My model looks like this:

input_img = Input(shape=(image_resolution(), image_resolution(), 1))
latent_dim = 64     #bottleneck

# ENCODER
e = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
e = Conv2D(32, (3, 3), activation='relu', padding='same')(e)
e = MaxPooling2D((2, 2))(e)
e = Conv2D(64, (3, 3), activation='relu', padding='same')(e)
e = MaxPooling2D((2, 2))(e)
e = Conv2D(64, (3, 3), activation='relu', padding='same')(e)
e = MaxPooling2D((2, 2))(e)
e = Conv2D(128, (3, 3), activation='relu', padding='same')(e)
l = Flatten()(e)
#l = Dense(200, activation='relu')(l)                             #transition linear layer
l = Dense(latent_dim, activation='softmax')(l)                    #latent dimension: maximal compression (bottleneck)

#Stochastic latent space
z_mean = Dense(latent_dim, name = 'z_mean')(l)
z_log_var = Dense(latent_dim, name = 'z_log_var')(l)
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])

# DECODER
d = Reshape((8, 8, 1))(l)
d = Conv2DTranspose(128, (3, 3), strides=2, activation='relu', padding='same')(d)
d = BatchNormalization()(d)
d = Conv2DTranspose(64, (3, 3), strides=2, activation='relu', padding='same')(d)
d = BatchNormalization()(d)
d = Conv2DTranspose(64, (3, 3), strides=2, activation='relu', padding='same')(d)
d = BatchNormalization()(d)
d = Conv2DTranspose(32, (3, 3), activation='relu', padding='same')(d)
decoded = Conv2D(1, (3, 3), activation='linear', padding='same')(d)

autoencoder = Model(input_img, decoded)
autoencoder.summary()
autoencoder.compile(optimizer='Nadam', loss=vae_loss, metrics=[coeff_determination])
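
(The sampling function used in the Lambda layer is not shown here; it is assumed to be the usual reparameterization trick from the standard Keras VAE example. A minimal sketch of what it presumably looks like:)

def sampling(args):
    # Reparameterization trick: z = z_mean + exp(0.5 * z_log_var) * eps, with eps ~ N(0, I)
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon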

When I switch from a built-in loss to my custom loss function, I get the following error:

Traceback (most recent call last):
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: z_log_var/Identity:0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/user/PycharmProjects/user1/Try/VAE.py", line 108, in <module>
    validation_data=(test_input, test_label)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 848, in fit
    tmp_logs = train_function(iterator)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 644, in _call
    return self._stateless_fn(*args, **kwds)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
    self.captured_inputs)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 598, in call
    ctx=ctx)
  File "/Users/user/opt/anaconda3/envs/try/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 74, in quick_execute
    "tensors, but found {}".format(keras_symbolic_tensors))
tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'z_log_var/Identity:0' shape=(None, 64) dtype=float32>, <tf.Tensor 'z_mean/Identity:0' shape=(None, 64) dtype=float32>]

I don't know what I'm doing wrong or what I'm missing. I'm currently using TensorFlow 2.2 with Keras 2.3.1.

Desperate Morty

1 Answer


Can you try the custom_loss implementation mentioned in this example?

Try this

def vae_loss(z_mean, z_log_var):
  def loss(y_true, y_pred):
    # mse loss
    reconstruction_loss = K.sum(K.square(y_true - y_pred), axis=-1)
    # kl loss
    kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
    kl_loss = K.sum(kl_loss, axis=-1)
    kl_loss *= -0.5
    weight = 0.
    return reconstruction_loss + (weight * kl_loss)
  return loss

Update the model.compile line as follows:

autoencoder.compile(optimizer='Nadam', loss=vae_loss(z_mean, z_log_var), metrics=[coeff_determination])
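
If the closure still raises the same symbolic-tensor error (see the comments below), another commonly used workaround in TF 2.x is to attach the KL term to the model with add_loss, so that the compiled loss function only depends on y_true and y_pred and no longer captures external Keras symbolic tensors. A minimal sketch, assuming the z_mean and z_log_var layers from the question and a hand-picked kl_weight:

kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
kl_weight = 0.  # hypothetical hyperparameter, mirroring `weight` in the question
autoencoder.add_loss(kl_weight * K.mean(kl_loss))
# The reconstruction term can then be an ordinary built-in loss:
autoencoder.compile(optimizer='Nadam', loss='mse', metrics=[coeff_determination])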
Vishnuvardhan Janapati
  • With this I get (after a lot of traceback) the following error: TypeError: Failed to convert object of type <class 'function'> to Tensor. Contents: <function vae_loss.<locals>.loss at 0x15ea52e18>. Consider casting elements to a supported type. – Desperate Morty May 22 '20 at 10:02
  • Same error traceback as published above: (...) tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'z_log_var/Identity:0' shape=(None, 64) dtype=float32>, <tf.Tensor 'z_mean/Identity:0' shape=(None, 64) dtype=float32>] – Desperate Morty May 22 '20 at 14:10
  • In the linked example (but not in this answer) there is a parameter run_eagerly=True; it's probably important. – Viktoriya Malyasova May 29 '20 at 21:56
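
(For reference, the flag mentioned in the last comment is passed to compile. With the answer's loss it would look like the sketch below; running the train step eagerly instead of inside a compiled tf.function may avoid the symbolic-tensor capture that triggers the error, though at a performance cost.)

autoencoder.compile(optimizer='Nadam',
                    loss=vae_loss(z_mean, z_log_var),
                    metrics=[coeff_determination],
                    run_eagerly=True)  # execute the train step eagerly rather than as a tf.function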