I have a training dataset of 8670 trials, each with a length of 125 time samples, while my test set consists of 578 trials. When I apply the SVM classifier from scikit-learn, I get pretty good results.
However, when I apply logistic regression, this error occurs:
"ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 1.0" .
My question is: why is SVM able to give predictions while logistic regression raises this error?
Could something be wrong with the dataset, or is it simply that logistic regression cannot separate the classes because the training samples look too similar?