I'd like to ask your opinion about my problem. I have a two-class classification problem. Here is my procedure:

  1. Z-score normalization was applied to the data set.
  2. Dimensionality reduction was performed with PCA.
  3. Leave-one-out cross-validation is used.
  4. I tried to use libsvm, based on the example in
     "precomputed kernels with libsvm".

I'm getting 0 accuracy. What could be the reason? As far as I know, accuracy should not be lower than 50% for a two-class problem. My data set is a 32x2967 matrix. The first 16 rows belong to the first class, the rest belong to the second class.
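For reference, steps 1 and 2 look roughly like the sketch below. This is only a minimal illustration using MATLAB's zscore and pca; the variable names data and data_norm stand in for my actual script:

% z-score normalization: each of the 2967 features gets zero mean and unit variance
data_norm = zscore(data);               % data is the 32x2967 matrix

% PCA: keep the first 3 principal components, giving the 32x3 matrix used below
[coeff, score] = pca(data_norm);
data_reduced = score(:, 1:3);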

Here is the code that worries me:

sigma = 2e-3;
rbfKernel = @(X, Y) exp(-sigma .* pdist2(X, Y, 'euclidean').^2);

len = size(data_reduced, 1);   % 32 samples; data_reduced is 32x3 after PCA
predicted = [];                % collects one prediction per left-out sample

for i = 1:len                  % leave-one-out cross-validation
    data_train = data_reduced;
    data_test  = data_reduced(i, :);
    data_train(i, :) = [];

    group_Train = group;
    group_test  = group(i);
    group_Train(i) = [];

    numTrain = size(data_train, 1);
    numTest  = size(data_test, 1);

    % precomputed kernel format for libsvm: first column is the sample index
    K  = [(1:numTrain)', rbfKernel(data_train, data_train)];
    KK = [(1:numTest)',  rbfKernel(data_test,  data_train)];

    SVMClass  = svmtrain(group_Train, K, '-t 4');    % -t 4: precomputed kernel
    predClass = svmpredict(group_test, KK, SVMClass);

    predicted = [predicted; predClass];
end
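After the loop, the overall leave-one-out accuracy can be computed by comparing the collected predictions with the true labels. A minimal sketch, assuming predicted is the vector built up inside the loop above and group holds the 32 true labels:

% fraction of the 32 left-out samples that were predicted correctly
loo_accuracy = mean(predicted(:) == group(:));
fprintf('LOO accuracy: %.2f%%\n', 100 * loo_accuracy);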
  • Welcome to Stack Overflow. First, you need to read http://stackoverflow.com/help/how-to-ask . We need more details to help you: your code and, if possible, metadata about your dataset. – Rachid Ait Abdesselam Jan 16 '17 at 11:15
  • Thank you so much for letting me know. I was mainly hoping to find out whether anybody has had the same problem. I have updated my question; I hope it is more understandable now. – Büsra Tugce Jan 16 '17 at 18:14
  • If you tag your question with the programming language you're writing in, you'll likely get more attention on your question. – Tchotchke Jan 16 '17 at 18:18

0 Answers