I am using Spark MLlib for my project. I have used SVM, Decision Tree, and Random Forest. I split the dataset into training and testing sets (60% training, 40% testing) and got my results.
I want to repeat my work, but splitting the data with cross-validation instead of a percentage split, for SVM, DT, and RF. How can I do that in Spark? I have found several code examples that do the split with logistic regression and a Pipeline, but they do not work for SVM.
For now, I need to split the data into 10 folds and apply SVM to each. I also want to print the accuracy for each fold.
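To show what I have in mind, here is a sketch using the RDD-based MLlib API, which seems to offer `MLUtils.kFold` for producing (training, validation) pairs and `SVMWithSGD` for training. The file path, `numIterations` value, and seed are placeholders I made up, and I am not sure this is the right approach:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.util.MLUtils

object SvmCrossValidation {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SvmCV").setMaster("local[*]"))

    // Placeholder path: assumes data in LIBSVM format with 0/1 labels,
    // loaded as an RDD[LabeledPoint].
    val data = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt")

    // Split into 10 (training, validation) fold pairs.
    val folds = MLUtils.kFold(data, numFolds = 10, seed = 42)

    val accuracies = folds.zipWithIndex.map { case ((training, validation), i) =>
      // 100 iterations is an arbitrary placeholder value.
      val model = SVMWithSGD.train(training.cache(), 100)

      // Count correct predictions on the held-out fold.
      val correct = validation
        .map(p => if (model.predict(p.features) == p.label) 1.0 else 0.0)
        .sum()
      val accuracy = correct / validation.count()
      println(s"Fold ${i + 1} accuracy: $accuracy")
      accuracy
    }

    println(s"Mean accuracy over 10 folds: ${accuracies.sum / accuracies.length}")
    sc.stop()
  }
}
```

Is something like this the intended way to do k-fold cross-validation with `SVMWithSGD`, given that `CrossValidator` expects a Pipeline-based estimator?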