
I'm using TensorFlow to write a NN model that approximates the sine function, and I'd like to use the second derivative w.r.t. the input in the loss function for my model.

My code doesn't include the derivative yet; as a first step, I just added the input tensor to my loss function, using this answer as a first approach.
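
For context, this is roughly the quantity I eventually want to feed into the loss (a sketch with nested tf.GradientTape, applied to the model defined below; wiring it into the loss is exactly the part I haven't solved yet):

x = tf.constant([[1.0], [2.0]])               # example inputs of shape (batch, 1)
with tf.GradientTape() as outer_tape:
    outer_tape.watch(x)
    with tf.GradientTape() as inner_tape:
        inner_tape.watch(x)
        y = model(x)                          # forward pass
    dy_dx = inner_tape.gradient(y, x)         # first derivative w.r.t. the input
d2y_dx2 = outer_tape.gradient(dy_dx, x)       # second derivative w.r.t. the input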

My code currently looks like this:

import tensorflow as tf
import numpy as np

from tensorflow import keras
from numpy import random

# --- Settings
x_min = 0
x_max = 2*np.pi

n_train = 64
n_test = 64

# --- Generate dataset
x_train = random.uniform(x_min, x_max, n_train)
y_train = np.sin(x_train)

x_test = random.uniform(x_min, x_max, n_test)
y_test = np.sin(x_test)

# --- Create model
model = keras.Sequential()

model.add(keras.layers.Dense(64, activation="tanh", input_dim=1))
model.add(keras.layers.Dense(64, activation="tanh"))

model.add(keras.layers.Dense(1, activation="tanh"))

def custom_loss_wrapper(input_tensor):

    def custom_loss(y_true, y_pred):
        return keras.losses.mean_squared_error(y_true, y_pred) + keras.backend.mean(input_tensor)

    return custom_loss

# --- Configure learning process
model.compile(
        optimizer=keras.optimizers.Adam(0.01),
        loss=custom_loss_wrapper(model.input),
        metrics=['MeanSquaredError'])

# --- Train from dataset
model.fit(x_train, y_train, epochs=5, batch_size=32, validation_data=(x_test, y_test))

model.evaluate(x_test, y_test)

My custom loss function just computes the mean squared error and adds the mean of the input batch. This shouldn't be a problem, but I receive the error

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: dense_input:0

Does anybody know why this occurs?


1 Answer


Since TensorFlow 2.0 and higher runs in eager mode by default, TensorFlow ops check that their inputs are of type "tensorflow.python.framework.ops.EagerTensor". Keras, however, hands the loss function a symbolic "tensorflow.python.framework.ops.Tensor" (here, the model input), and this throws the error

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
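
A minimal sketch showing the two tensor types involved (assuming TF 2.x defaults; not part of the original error message):

import tensorflow as tf

eager_t = tf.constant(1.0)
print(type(eager_t))    # <class 'tensorflow.python.framework.ops.EagerTensor'>

@tf.function
def double(x):
    print(type(x))      # printed once during tracing: a symbolic ops.Tensor
    return x * 2

double(tf.constant(1.0))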

You can change the input type to EagerTensor by explicitly telling TensorFlow to run functions eagerly for Keras. Setting this to True will solve the issue:

tf.config.experimental_run_functions_eagerly(True)
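
Note that in TF 2.3 and later this experimental API is deprecated; the equivalent call is:

tf.config.run_functions_eagerly(True)

Keep in mind that forcing eager execution bypasses graph compilation, so training will generally run slower.
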
OK, but is it possible to pass a "tensorflow.python.framework.ops.Tensor" to "custom_loss_wrapper" without running in eager mode and losing performance? I need a keras.Input to be passed to my loss function. It is similar to the issue here, and I wish to avoid concatenation approaches: https://stackoverflow.com/questions/57704771/inputs-to-eager-execution-function-cannot-be-keras-symbolic-tensors – Phoenix666 Oct 12 '21 at 22:36
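
One way to keep graph mode would be to restructure the model with the functional API and attach the input-dependent term via model.add_loss, which accepts symbolic tensors (a sketch under that assumption, not from the original thread):

import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(1,))
h = keras.layers.Dense(64, activation="tanh")(inputs)
h = keras.layers.Dense(64, activation="tanh")(h)
outputs = keras.layers.Dense(1, activation="tanh")(h)
model = keras.Model(inputs, outputs)

# add_loss takes symbolic tensors, so an input-dependent penalty
# works without running functions eagerly
model.add_loss(tf.reduce_mean(inputs))
model.compile(optimizer=keras.optimizers.Adam(0.01), loss="mse")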