
I've been implementing an autoencoder that takes as input vectors consisting only of 0s and 1s, such as [1, 0, 1, 0, 1, 0, ...].

Likewise, I've implemented another autoencoder that takes as input vectors whose values lie between 0 and 1, such as [0.123, 1, 0.9, 0.01, 0.9, ...]. In both cases each vector element is fed to one input node. The activation function of the hidden layers is relu, and that of the output layer is sigmoid.

I've seen examples of autoencoders that use adam or adadelta as the optimizer and binary_crossentropy as the loss function. For that reason I used adadelta and binary_crossentropy in both of my autoencoders, but I'm not sure whether that configuration is correct for both cases.
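
For reference, here is a minimal sketch of the setup described above, assuming the Keras functional API (via `tensorflow.keras`). The layer sizes (`input_dim=100`, `encoding_dim=32`) are placeholders, not values from the question.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 100      # length of each 0/1 (or [0, 1]) input vector -- placeholder
encoding_dim = 32    # size of the bottleneck -- placeholder

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(encoding_dim, activation="relu")(inputs)    # relu hidden layer
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)   # sigmoid output in [0, 1]

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adadelta", loss="binary_crossentropy")

# Train on random binary data just to check that the model runs end to end.
x = np.random.randint(0, 2, size=(256, input_dim)).astype("float32")
autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)
```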

    Both binary crossentropy and mean squared error can be used as the loss function here. [This answer](https://stackoverflow.com/a/52443301/2099607) on SO and [this answer](https://stats.stackexchange.com/a/370180/114422) on CV explain further about this. – today Oct 19 '18 at 21:08
  • Possible duplicate of [How does binary cross entropy loss work on autoencoders?](https://stackoverflow.com/questions/52441877/how-does-binary-cross-entropy-loss-work-on-autoencoders) – today Oct 19 '18 at 21:09
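
To illustrate the first comment above: with the model sketched in the question body, switching between the two losses is just a change to the compile call (loss identifiers as in Keras). Both pair naturally with a sigmoid output and targets in [0, 1].

```python
autoencoder.compile(optimizer="adadelta", loss="binary_crossentropy")  # per-unit Bernoulli-style loss
# or
autoencoder.compile(optimizer="adadelta", loss="mse")                  # plain reconstruction error
```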
