I have a list of true labels and a list of predictions, each containing 4000 elements, like in this example:
test_list = [1, 0, 0, 1, 0, ...]
prediction_list = [1, 1, 0, 1, 0, ...]
How can I compute the binary cross entropy between these two lists in Python? I tried using the log_loss
function from sklearn:
log_loss(test_list, prediction_list)
but the output was around 10.5, which seemed off to me. Am I using the function the wrong way, or should I use another implementation?
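
Here is a minimal, self-contained version of what I'm running (the lists are shortened placeholders, not my real data):

from sklearn.metrics import log_loss

# Shortened placeholder data; my real lists each have 4000 elements.
test_list = [1, 0, 0, 1, 0, 1, 1, 0]
prediction_list = [1, 1, 0, 1, 0, 0, 1, 0]

# Note: I'm passing hard 0/1 labels as the second argument, even though
# the sklearn docs describe y_pred as predicted probabilities.
print(log_loss(test_list, prediction_list))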