I am calling sklearn's accuracy_score function with the following two inputs:
accuracy_score(y_test, y_pred_class)
y_test is of type pandas.core.series.Series and y_pred_class is of type numpy.ndarray. Do these two different input types produce a wrong accuracy? The call actually gives no error and produces some score. If my procedure is not correct, what should I do to compute the accuracy correctly?
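For reference, here is a small toy reproduction of the call I am making; the data values are made up, but the type combination (a pandas Series of true labels and a 2-D NumPy array of rounded predictions) matches my real setup:

import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score

# Made-up stand-ins with the same types as my real variables:
# y_test is a pandas Series of 0/1 labels,
# y_pred_class is a 2-D NumPy array of rounded probabilities
y_test = pd.Series([1, 0, 0, 1, 0], index=[34793, 60761, 58442, 56299, 89501])
y_pred_class = np.round(np.array([[0.501], [0.501], [0.209], [0.501], [0.501]]))

print(type(y_test))        # <class 'pandas.core.series.Series'>
print(type(y_pred_class))  # <class 'numpy.ndarray'>
print(accuracy_score(y_test, y_pred_class))  # no error, prints a score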
Edit
It's a binary classification problem and the labels are not one-hot encoded, so model.predict produces one probability value per sample, which I convert to a label using np.round.
The output of model.predict looks like this:
[[0.50104564]
[0.50104564]
[0.20969158]
...
[0.5010457 ]
[0.5010457 ]
[0.5010457 ]]
My y_pred_class after rounding looks like this:
[[1.]
[1.]
[0.]
...
[1.]
[1.]
[1.]]
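To make the structure concrete, here is a small stand-in for that step (p is just a made-up probability array shaped like my real model.predict output, and X_test in the comment is only a placeholder name for my test features):

import numpy as np

# Stand-in for model.predict(X_test): one probability per sample, shape (n_samples, 1)
p = np.array([[0.50104564], [0.50104564], [0.20969158], [0.5010457]])

y_pred_class = np.round(p)   # probabilities above 0.5 become 1.0, below 0.5 become 0.0
print(y_pred_class)          # a column of 1.0 / 0.0 values, as shown above
print(y_pred_class.shape)    # (4, 1) -- still a 2-D column, not a flat 1-D array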
And y_test, which is a pandas Series, looks like this (as expected):
34793 1
60761 0
58442 0
56299 1
89501 0
..
91507 1
25467 1
79635 0
22230 1
22919 1
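Comparing the two objects directly (again with made-up stand-ins, since the real ones are long), the shapes and dtypes differ like this:

import numpy as np
import pandas as pd

# Toy stand-ins with the same structure as my real y_test and y_pred_class
y_test = pd.Series([1, 0, 0, 1], index=[34793, 60761, 58442, 56299])
y_pred_class = np.array([[1.0], [1.0], [0.0], [1.0]])

print(y_test.shape, y_test.dtype)              # (4,) int64  -- a flat 1-D Series of ints
print(y_pred_class.shape, y_pred_class.dtype)  # (4, 1) float64 -- a 2-D column of floats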
Are y_pred_class and y_test compatible with each other for accuracy_score()?