
I am trying to build my own loss function, as follows:

    import numpy as np
    from keras import backend as K

    def MyLoss(self, x_input, x_reconstruct):
        a = np.copy(x_reconstruct)
        a = np.asarray(a, dtype='float16')
        a = np.floor(4*a)/4                           # quantize to multiples of 0.25
        return K.mean(K.square(a - x_input), axis=-1)

During compilation, it raises `ValueError: setting an array element with a sequence`.

Both `x_input` and `x_reconstruct` are `[m, n, 1]` NumPy arrays. The last line is copied directly from Keras' built-in MSE loss function.

Also, I suppose the loss is calculated per sample. If the dimensions of the input and the reconstructed input are both `[m, n, 1]`, the result of Keras' built-in loss will also be a matrix sized `[m, n]`. So why does it work properly?
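
To illustrate what I mean about the shape, here is a quick sketch (made-up sizes, not my real data):

    import numpy as np
    from keras import backend as K

    y_true = K.constant(np.zeros((5, 4, 1)))   # [m, n, 1] with m=5, n=4
    y_pred = K.constant(np.ones((5, 4, 1)))
    mse = K.mean(K.square(y_pred - y_true), axis=-1)
    print(K.int_shape(mse))                    # (5, 4): a matrix, not a scalar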

I then tried to use np's functions directly:

    def MyLoss(self, x_input, x_reconstruct):        
        a = np.copy(x_reconstruct)
        a = np.asarray(a, dtype=self.precision)       
        a = np.floor(4*a)/4
        Diff = a - x_input
        xx = np.mean(np.square(Diff), axis=-1)
        yy = np.sum(xx)
        return yy

yet the error persists. What mistake did I make? How should I write the code?

Borrowing the suggestion from "Make a Custom loss function in Keras in detail", I then tried the following:

    def MyLoss(self, x_input, x_reconstruct):    
        if self.precision == 'float16':
            K.set_floatx('float16')
            K.set_epsilon(1e-4)
        a = K.cast_to_floatx(x_input)
        a = K.round(a*4.-0.5)/4.0
        return K.sum(K.mean(K.square(x_input-a), axis=-1))

But the same error occurs.

Theron
  • Two suggestions to help debug this: 1) Insert a line just before the return statement, `print("got here")`, that way you can verify the problem is with a computation on the last line; 2) output x_input.shape and a.shape to see if they are compatible. – weirdev Jun 14 '19 at 04:16
  • I am afraid it does not work. The error occurs even before the code reaches there, when I call `model.compile(loss=self.MyLoss, optimizer= CurOptimizer, metrics=CurMetrics)` – Theron Jun 14 '19 at 05:33

2 Answers


You cannot use NumPy operations inside your loss: the loss function is called with symbolic tensors when the model is compiled, so you have to use TensorFlow or Keras backend operations. Try something like this:

    import tensorflow as tf
    import keras.backend as K

    def MyLoss(x_input, x_reconstruct):
        # quantize the reconstruction to multiples of 0.25 in float16
        a = tf.cast(x_reconstruct, tf.float16)
        a = tf.floor(4*a)/4
        # compare in the same dtype, then cast back to the default float type
        # so the loss matches the rest of the graph
        err = K.square(a - tf.cast(x_input, tf.float16))
        return K.cast(K.mean(err, axis=-1), K.floatx())
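
For completeness, here is a minimal sketch of how such a custom loss would be wired into `compile`, using the `MyLoss` defined above; the toy autoencoder below is entirely made up, just to show the plumbing:

    from keras.models import Sequential
    from keras.layers import Dense

    # hypothetical toy autoencoder: 16 inputs squeezed through an 8-unit bottleneck
    model = Sequential([
        Dense(8, activation='relu', input_shape=(16,)),
        Dense(16, activation='linear'),
    ])
    # the custom function is passed where a built-in loss name would normally go
    model.compile(optimizer='adam', loss=MyLoss)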

Anakin

I found the answer myself; let me share it here.

If I write the code like this:

    def MyLoss(self, y_true, y_pred):
        if self.precision == 'float16':
            K.set_floatx('float16')
            K.set_epsilon(1e-4)
        # quantize the prediction to multiples of 0.25, then take the MSE
        return K.mean(K.square(y_true - K.round(y_pred*4. - 0.5)/4.0), axis=-1)

It works. The trick, I think, is that I cannot use `K.cast_to_floatx(y_true)`; instead, I simply use `y_true` directly. I still do not understand why...
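
My best guess (I have not verified this): the Keras docs describe `K.cast_to_floatx` as casting a NumPy array to the default float type, whereas a symbolic tensor such as `y_true` should be cast with `K.cast`. A small sketch of the distinction:

    import numpy as np
    from keras import backend as K

    np_array = np.array([0.1, 0.9])
    a = K.cast_to_floatx(np_array)     # NumPy array in, NumPy array out

    tensor = K.constant([0.1, 0.9])
    b = K.cast(tensor, K.floatx())     # symbolic tensor in, symbolic tensor out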

Theron