
I am trying to use a custom loss function to prevent false positives. I found this post, Custom loss function in Keras to penalize false negatives, which is very similar to my case, but when I implement those functions the model raises the following error:

    model.compile(loss = loss, optimizer = 'Adadelta', metrics = [auroc])
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\keras\engine\training.py", line 860, in compile
    sample_weight, mask)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\keras\engine\training.py", line 460, in weighted
    score_array = fn(y_true, y_pred)
  File "EmbeddingConcatenate.py", line 268, in recall_spec_loss
    return binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight)
  File "EmbeddingConcatenate.py", line 248, in binary_recall_specificity
    TN = np.logical_and(K.eval(y_true) == 0, K.eval(y_pred) == 0)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\keras\backend\tensorflow_backend.py", line 644, in eval
    return to_dense(x).eval(session=get_session())
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 648, in eval
    return _eval_using_default_session(self, feed_dict, self.graph, session)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 4758, in _eval_using_default_session
    return session.run(tensors, feed_dict)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
    run_metadata_ptr)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\client\session.py", line 1128, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\client\session.py", line 1344, in _do_run
    options, run_metadata)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\client\session.py", line 1363, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'dense_1_target' with dtype float and shape [?,?]
         [[Node: dense_1_target = Placeholder[dtype=DT_FLOAT, shape=[?,?], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
         [[Node: dense_1_target/_29 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4_dense_1_target", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Caused by op 'dense_1_target', defined at:
  File "EmbeddingConcatenate.py", line 398, in <module>
    model = generate_model()
  File "EmbeddingConcatenate.py", line 296, in generate_model
    model.compile(loss = loss, optimizer = 'Adadelta', metrics = [auroc])
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\keras\engine\training.py", line 755, in compile
    dtype=K.dtype(self.outputs[i]))
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\keras\backend\tensorflow_backend.py", line 488, in placeholder
    x = tf.placeholder(dtype, shape=shape, name=name)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1680, in placeholder
    return gen_array_ops._placeholder(dtype=dtype, shape=shape, name=name)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 4105, in _placeholder
    "Placeholder", dtype=dtype, shape=shape, name=name)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 3160, in create_op
    op_def=op_def)
  File "C:\Users\X\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1625, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dense_1_target' with dtype float and shape [?,?]
         [[Node: dense_1_target = Placeholder[dtype=DT_FLOAT, shape=[?,?], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
         [[Node: dense_1_target/_29 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4_dense_1_target", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Any ideas on how I can fix this? Is there any other approach I could use to prevent false positives?

Edit

Here is the code I am using (exactly the same as in the other post):

import numpy as np
import keras.backend as K

def binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight):

    TN = np.logical_and(K.eval(y_true) == 0, K.eval(y_pred) == 0)
    TP = np.logical_and(K.eval(y_true) == 1, K.eval(y_pred) == 1)

    FP = np.logical_and(K.eval(y_true) == 0, K.eval(y_pred) == 1)
    FN = np.logical_and(K.eval(y_true) == 1, K.eval(y_pred) == 0)

    # Converted to Keras tensors
    TN = K.sum(K.variable(TN))
    FP = K.sum(K.variable(FP))

    specificity = TN / (TN + FP + K.epsilon())
    recall = TP / (TP + FN + K.epsilon())

    return 1.0 - (recall_weight*recall + spec_weight*specificity)

# Our custom loss' wrapper
def custom_loss(recall_weight, spec_weight):

    def recall_spec_loss(y_true, y_pred):
        return binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight)

    # Returns the (y_true, y_pred) loss function
    return recall_spec_loss

loss_custom = custom_loss(recall_weight = 0.9, spec_weight = 0.1)
model.compile(loss = loss_custom, optimizer = 'Adadelta', metrics = [auroc])
    short comment: not showing the code -> nobody can help you – UninformedUser Nov 27 '19 at 16:29
  • Sorry, I didn't include it because it is in the other post. But now it is included! Thanks! – Cristina V Nov 27 '19 at 16:33
  • As far as I can tell, in `x = tf.placeholder(dtype, shape=shape, name=name)` you need to define the datatype and shape. Make sure the parameters you pass are correct. – Hayat Nov 27 '19 at 17:11
  • I think in your binary_recall_specificity function you need to work only with tensors – learningthemachine Nov 27 '19 at 18:24
  • @learningthemachine Yes, I've been reading this in the last hour. It seems that the function is called before the y_pred and y_true are filled, so I need to work only with tensors, but I don't know how to transform this code... Any idea? – Cristina V Nov 27 '19 at 18:34
  • A lot of np functions have TF equivalents. I see that your np conditional can be replaced with https://www.tensorflow.org/api_docs/python/tf/math/logical_and – learningthemachine Nov 27 '19 at 18:36
  • Thank you for your quick response! I tried two things: 1) replacing the np function with its TensorFlow equivalent, which gives me the same error; 2) replacing the np function AND K.eval with tf.equal(), which gives me a different error: `ValueError: Tensor conversion requested dtype float32 for Tensor with dtype bool: 'Tensor("loss/dense_1_loss/LogicalAnd:0", shape=(?, ?), dtype=bool)` I don't understand this, because logical_and receives two booleans, and tf.equal returns booleans too. – Cristina V Nov 27 '19 at 19:32
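
For reference, a tensor-only rewrite of the loss along the lines suggested in the comments might look like the sketch below. It is only an illustration, not a confirmed fix: the function name is made up here, it assumes binary labels in {0, 1}, and it stays entirely in the Keras backend (no K.eval or NumPy), casting the 0/1 comparisons to floats before summing:

import keras.backend as K

# Hypothetical tensor-only sketch of the same recall/specificity loss
def binary_recall_specificity_tensors(y_true, y_pred, recall_weight, spec_weight):
    # Round predictions to hard 0/1 labels and make sure both tensors are float
    y_pred_bin = K.round(K.clip(y_pred, 0, 1))
    y_true = K.cast(y_true, 'float32')

    # Confusion-matrix counts computed purely with tensor ops (no K.eval)
    TP = K.sum(y_true * y_pred_bin)
    TN = K.sum((1. - y_true) * (1. - y_pred_bin))
    FP = K.sum((1. - y_true) * y_pred_bin)
    FN = K.sum(y_true * (1. - y_pred_bin))

    specificity = TN / (TN + FP + K.epsilon())
    recall = TP / (TP + FN + K.epsilon())

    return 1.0 - (recall_weight * recall + spec_weight * specificity)

Note that K.round has a zero gradient almost everywhere, so a loss built this way may not train well; it is just a direct tensor translation of the NumPy version above.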
