I'd like to ask your opinion about my problem. I have a two-class classification problem. Here is my procedure:
- Z-score normalization was applied to the data set
- Dimensionality reduction was done with PCA (the preprocessing is sketched right after this list)
- Leave-one-out cross-validation is used
- I was trying to use libsvm with a precomputed kernel, based on the example in
precomputed kernels with libsvm
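For context, the preprocessing looks roughly like this (a minimal sketch, assuming the raw data sits in a 32x2967 matrix called data with one sample per row; the variable names are just illustrative):

data_z = zscore(data);              % z-score each feature (column-wise)
[coeff, score] = pca(data_z);       % PCA on the normalized data
data_reduced = score(:, 1:3);       % keep the first 3 principal components (32x3)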
I'm getting 0% accuracy. What could be the reason? As far as I know, accuracy should not fall below 50% for a two-class problem. My data set is a 32x2967 matrix; the first 16 rows belong to the first class and the remaining 16 to the second class.
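In case it matters, the label vector is built to match that layout (a sketch; I'm assuming labels 1 and 2 here, the exact values are not important for the question):

group = [ones(16,1); 2*ones(16,1)];   % first 16 rows = class 1, last 16 rows = class 2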
Here is the code that worries me:
sigma = 2e-3;
rbfKernel = @(X,Y) exp(-sigma .* pdist2(X,Y,'euclidean').^2);

len = size(data_reduced, 1);   % 32 samples; data_reduced is 32x3 after PCA
class = [];                    % collected predictions, one per fold

for i = 1:len                  % leave-one-out cross-validation
    % Hold out sample i, train on the remaining samples
    data_train = data_reduced;
    data_test  = data_reduced(i,:);
    data_train(i,:) = [];

    group_Train = group;
    group_test  = group(i);
    group_Train(i) = [];

    numTrain = size(data_train, 1);
    numTest  = size(data_test, 1);

    % Precomputed kernel format for libsvm: first column is the sample index
    K  = [(1:numTrain)', rbfKernel(data_train, data_train)];
    KK = [(1:numTest)',  rbfKernel(data_test,  data_train)];

    % -t 4 tells libsvm that the kernel is precomputed
    SVMClass  = svmtrain(group_Train, K, '-t 4');
    predClass = svmpredict(group_test, KK, SVMClass);

    class = [class predClass];   % append this fold's prediction
end
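For reference, one way to summarize the folds after the loop would be to compare the collected predictions against the true labels (a sketch, not part of the snippet above; class holds one prediction per fold, in the same order as group):

looAccuracy = 100 * mean(class(:) == group(:));   % percent of held-out samples classified correctly
fprintf('Overall LOO accuracy: %.1f%%\n', looAccuracy);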