
I am trying to customize the loss function of my autoencoder. The loss must take into account the result of another dimensionality-reduction method (LLE), and the data passed to the function must be updated on each computation of the loss, but the variables that should change do not change. Here is my code; thanks in advance for your answers.

loss function:

import numpy as np
from keras import backend as K

i = 0  # global counter used to step through x_train

def increment():
  global i
  i = i + 1
  return i

def call_loss_lle():
  def loss_lle(y_true, y_pred):
    global i  # i does not increment
    i = increment()
    X = x_train[i-1:i, ]
    lamda = 0.3
    encoded = encoder.predict(X)
    z = encoded.reshape((28, 3))
    y, W = LLE_(z, 10)
    produit = np.dot(W, z)
    diff = z - produit
    loss_lle = lamda * np.linalg.norm(diff)
    cross = K.binary_crossentropy(y_true, y_pred)
    return cross + loss_lle
  return loss_lle
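As a side note, the LLE penalty term itself can be checked in isolation with plain NumPy. Below is a minimal sketch of the lamda * ||z - W z|| term computed above, using stand-in values for z and W (in the real code they come from encoder.predict and LLE_, which are not shown here):

```python
import numpy as np

def lle_penalty(z, W, lamda=0.3):
    """Frobenius norm of the LLE reconstruction error z - W z, scaled by lamda."""
    diff = z - np.dot(W, z)
    return lamda * np.linalg.norm(diff)

# stand-in values; in the question z comes from the encoder and W from LLE_()
rng = np.random.default_rng(0)
z = rng.standard_normal((28, 3))
W = np.eye(28)  # a perfect reconstruction weight matrix gives a zero penalty
print(lle_penalty(z, W))  # 0.0
```

With W equal to the identity the reconstruction is exact and the penalty vanishes; any deviation of W z from z makes the penalty strictly positive.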

autoencoder:

from keras.layers import Input, Dense
from keras.models import Model

# this is the size of our encoded representations
encoding_dim = 84  

# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (84-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss=call_loss_lle())
  • Change the last line: call_loss_lle() to call_loss_lle first; you are passing the function as a parameter, not its result. What is the reason for the i iterator? – Konrad Mar 09 '19 at 22:28
  • Calling call_loss_lle() returns the function loss_lle. The role of the iterator is to pass the data one by one to LLE_() to compute a weight matrix that I later use in the computation of my loss function. Thank you. – Adel Ali Taleb Mar 09 '19 at 22:38
  • Ah ok, wouldn't it be better to first use model.fit on the training set and use y_true, y_pred, and not use the 'i' variable at all? – Konrad Mar 09 '19 at 23:54
  • I tried this, but y_true and y_pred are tensors and I can't convert them to numpy.ndarray for LLE_() – Adel Ali Taleb Mar 10 '19 at 00:23
  • Maybe this can help you: https://stackoverflow.com/questions/39779710/setting-up-a-learningratescheduler-in-keras/39807000#39807000 – razimbres Mar 10 '19 at 10:22
  • Thank you @Rubens, but it is not the same problem – Adel Ali Taleb Mar 10 '19 at 15:49
