I took this example from the Keras repository:
https://github.com/keras-team/keras/blob/master/examples/pretrained_word_embeddings.py
from keras.layers import Input, Dense, Conv1D, MaxPooling1D, GlobalMaxPooling1D
from keras.models import Model

# MAX_SEQUENCE_LENGTH, embedding_layer and labels_index are defined
# earlier in the linked example script
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = GlobalMaxPooling1D()(x)
x = Dense(128, activation='relu')(x)

# softmax over len(labels_index) classes, so the outputs sum to 1
preds = Dense(len(labels_index), activation='softmax')(x)

model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])
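For reference, this is roughly how I generate a prediction and inspect it (x_test is a placeholder name for my padded input sequences; it is not part of the linked example):

import numpy as np

# probabilities for a single sample; shape (len(labels_index),)
pre = model.predict(x_test)[0]
print(pre)             # prints the array shown below
print(np.argmax(pre))  # index of the most likely class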
The predicted class probabilities are printed in scientific notation, so most of them look vanishingly small (e.g. 2.8300792e-06). I know softmax makes them sum to 1. The only output I can really interpret is the index of the top class, which I get with np.argmax(pre).
I want the probabilities of the other classes to be readable as well.
Prediction output:
[2.8300792e-06 4.5637703e-03 7.2316222e-02 6.7710824e-02 5.2243233e-01
3.7763064e-04 1.2326813e-02 2.9277834e-01 4.1662962e-03 1.0876421e-05
2.3830748e-06 1.3590348e-04 2.3074823e-02 3.3520879e-05 4.0551484e-05
1.9896568e-06 1.0994432e-05 4.7518920e-06 2.3408763e-06 6.7659844e-06]
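As a sanity check, these values do sum to 1, as expected from softmax:

print(pre.sum())  # ≈ 1.0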
All of the values are displayed in this hard-to-read exponential form. When I use np.argmax I get 4. How do I get probability results in plain, readable numbers? If softmax is not the right activation for that, which one should I use instead?
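To make the ask concrete, here are the same numbers written as plain decimals (rounded to six places); this is the kind of readable output I am after:

[0.000003 0.004564 0.072316 0.067711 0.522432
 0.000378 0.012327 0.292778 0.004166 0.000011
 0.000002 0.000136 0.023075 0.000034 0.000041
 0.000002 0.000011 0.000005 0.000002 0.000007]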