I'm using this custom loss function for the CCC (concordance correlation coefficient):

from tensorflow.keras import backend as K

def ccc(y_true, y_pred):
  # Combined loss: 1 minus the mean CCC over the two outputs
  ccc = (ccc_v(y_true, y_pred) + ccc_a(y_true, y_pred)) / 2
  return 1 - ccc

def ccc_v(y_true, y_pred):
  # CCC for the first output column (valence)
  x = y_true[:,0]
  y = y_pred[:,0]

  x_mean = K.mean(x, axis=0)
  y_mean = K.mean(y, axis=0)

  covar = K.mean((x - x_mean) * (y - y_mean))

  x_var = K.var(x)
  y_var = K.var(y)

  # Lin's CCC: the last denominator term is the squared *difference* of the means
  ccc = (2.0 * covar) / (x_var + y_var + (x_mean - y_mean)**2)

  return ccc

def ccc_a(y_true, y_pred):
  # CCC for the second output column (arousal)
  x = y_true[:,1]
  y = y_pred[:,1]

  x_mean = K.mean(x, axis=0)
  y_mean = K.mean(y, axis=0)

  covar = K.mean((x - x_mean) * (y - y_mean))

  x_var = K.var(x)
  y_var = K.var(y)

  ccc = (2.0 * covar) / (x_var + y_var + (x_mean - y_mean)**2)

  return ccc
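
For reference, these functions implement Lin's concordance correlation coefficient, which for two series x and y is

\rho_c = \frac{2\,\mathrm{cov}(x, y)}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2}

so the loss 1 - ccc is minimized when predictions and targets agree perfectly (\rho_c = 1).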

Currently the loss function ccc returns a scalar. It is split into two helper functions (ccc_v and ccc_a) because I also use them as metrics.

I've read in the Keras docs and in this question that a custom loss function should return a list of losses, one for each sample.

First question: my model trains even though the loss function returns a scalar. Is that bad? How does training differ when the loss function outputs a scalar instead of a list of scalars?

Second question: how can I rewrite my loss function to return a list of losses? I know I should avoid means and sums, but in my case I don't think that's possible: there isn't one global mean but several different ones, one in the numerator for the covariance and two in the denominator for the variances.
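
One possible workaround, sketched here as an assumption rather than taken from the Keras docs: since the CCC is inherently a batch-level statistic, the scalar loss can be broadcast to a per-sample tensor; the mean reduction Keras applies afterwards then leaves the value unchanged.

def ccc_loss_per_sample(y_true, y_pred):
  # Sketch: broadcast the batch-level scalar so the loss has shape (batch_size,).
  # Keras' subsequent mean reduction over samples is then a no-op.
  scalar_loss = 1 - (ccc_v(y_true, y_pred) + ccc_a(y_true, y_pred)) / 2
  return scalar_loss * K.ones_like(y_true[:, 0])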

  • Will you post a portion of the training set? The loss function could then be determined automatically. – Golden Lion Feb 07 '21 at 18:43
  • @GoldenLion The training set is made of images with shape (96,96,3). The images show faces of different people. The objective of the model is to recognize emotions from faces. Should I post some images? – zcb Feb 07 '21 at 19:02
  • Are you using a Keras CNN network? – Golden Lion Feb 07 '21 at 19:03
  • @GoldenLion If you mean a pretrained model, then no, I'm not using one. I'm using a model based on VGG16, but it's not the pretrained one from Keras. – zcb Feb 07 '21 at 20:23

1 Answer


If you're using TensorFlow, there are built-in APIs for calculating loss:

tf.keras.losses.mse
tf.keras.losses.mae
tf.keras.losses.Huber()
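
A minimal usage sketch (the model variable is assumed to be an already-built Keras model):

import tensorflow as tf

# Compile with a built-in loss object instead of a custom function;
# Huber() uses its default delta=1.0
model.compile(optimizer='adam', loss=tf.keras.losses.Huber(), metrics=['mae'])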


# Define the loss function (model, borrower_features, and default are
# assumed to be defined elsewhere)
def loss_function(w1, b1, w2, b2, features=borrower_features, targets=default):
    predictions = model(w1, b1, w2, b2)
    # Pass targets and predictions to the cross-entropy loss
    return keras.losses.binary_crossentropy(targets, predictions)

# If you're using categorical_crossentropy, then return the losses for it.


# Convert your image into a single np.array for input
# Build your softmax model


# Imports assumed by this example
from tensorflow import keras

# Define a sequential model
model = keras.Sequential()

# Define a hidden layer
model.add(keras.layers.Dense(16, activation='relu', input_shape=(784,)))

# Define the output layer
model.add(keras.layers.Dense(4, activation='softmax'))

# Compile the model
model.compile('SGD', loss='categorical_crossentropy', metrics=['accuracy'])

# Reshape the training data (50 samples, 784 features each)
train_data = train_data.reshape((50, 784))

# Fit the model
model.fit(train_data, train_labels, validation_split=0.2, epochs=3)

# Reshape the test data
test_data = test_data.reshape(10, 784)

# Evaluate the model
model.evaluate(test_data, test_labels)