
I'm trying to extract the logits of a trained model when the softmax activation is built into the final Dense layer.

Something along these lines:

```
outputs = keras.layers.Dense(num_labels, activation='softmax')(x)
```

I've tried recompiling the model with `loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)`, but this didn't work...
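For context, here is a minimal sketch of the setup described above; the layer sizes, input shape, and variable names are illustrative assumptions. Recompiling with `from_logits=True` only changes how the loss interprets the outputs during training, so `model.predict()` still returns softmax probabilities:

```
import tensorflow as tf
from tensorflow import keras

num_labels = 10  # assumed number of classes

inputs = keras.Input(shape=(784,))  # assumed input shape
x = keras.layers.Dense(128, activation='relu')(inputs)
outputs = keras.layers.Dense(num_labels, activation='softmax')(x)
model = keras.Model(inputs, outputs)

# Recompiling like this does not remove the softmax from the graph;
# it only tells the loss to expect logits (which these outputs are not).
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```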

  • Dear Stack Overflow users and mods, first of all, I would like to express my gratitude for the continuous support this website has given me over the years. Recently, I've been seeing this platform lose its reputation, mainly by effectively canceling newbies like me over questions that aren't actually duplicates. Again, I would like to thank the anonymous user who tried to help by linking a solution that did nothing for my question; if it weren't for him, I wouldn't have found the REAL solution. Simply put, if you don't want the softmax activation of a trained model, all you have to do is – mCalado Dec 21 '20 at 16:54
  • ```model.layers[-1].activation = None```; that way you can get the logits of your model. No need to try to invert the softmax when it wasn't even half related to this question. Sincerely, TOMMYFOOKINSHELBY. – mCalado Dec 21 '20 at 16:55
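
A minimal sketch of the fix from the comments above, applied to a model like the one in the question. Calling the model directly (rather than relying on a possibly cached `predict` function) is an assumption added here so the change takes effect immediately; the dummy input shape is also assumed:

```
import numpy as np
import tensorflow as tf

# `model` is the trained model from the question, with softmax
# baked into its final Dense layer.

# Drop the activation so the last Dense layer returns raw logits.
model.layers[-1].activation = None

# Calling the model directly re-runs the layers with the updated
# attribute, so this now yields logits instead of probabilities.
x_sample = np.random.rand(1, 784).astype('float32')  # dummy input (shape assumed)
logits = model(x_sample)

# Sanity check: re-applying softmax to the logits recovers the
# probabilities the model produced before the change.
probs = tf.nn.softmax(logits)
```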

0 Answers