I faced the same issue for over a month until I realized that I had to use the ML API instead of the MLlib API (more about the differences between the two here). In that case, the SVM equivalent in the new API is LinearSVC:
from pyspark.ml.classification import RandomForestClassifier, LinearSVC
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder, CrossValidatorModel
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# SVM
crossval = CrossValidator(estimator=LinearSVC(),
                          estimatorParamMaps=ParamGridBuilder().build(),
                          evaluator=MulticlassClassificationEvaluator(metricName='f1'),
                          numFolds=5,
                          parallelism=4)

# Random Forest
crossval = CrossValidator(estimator=RandomForestClassifier(),
                          estimatorParamMaps=ParamGridBuilder().build(),
                          evaluator=MulticlassClassificationEvaluator(metricName='f1'),
                          numFolds=5,
                          parallelism=4)
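Note that ParamGridBuilder().build() here produces an empty grid, so cross-validation only evaluates the estimator's default hyperparameters. If you want it to actually search, you can add parameters to the grid. A minimal sketch for the SVM case (the regParam/maxIter values are arbitrary picks for illustration, not recommendations):

svc = LinearSVC()
param_grid = (ParamGridBuilder()
              .addGrid(svc.regParam, [0.01, 0.1, 1.0])   # regularization strength candidates
              .addGrid(svc.maxIter, [50, 100])           # iteration budget candidates
              .build())

crossval = CrossValidator(estimator=svc,
                          estimatorParamMaps=param_grid,
                          evaluator=MulticlassClassificationEvaluator(metricName='f1'),
                          numFolds=5,
                          parallelism=4)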
In both cases you can then just fit the model (assuming train_df is your training DataFrame with a features vector column and a label column):
cross_model: CrossValidatorModel = crossval.fit(train_df)
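From there, a rough sketch of inspecting the result (test_df is a hypothetical DataFrame assembled the same way as the training data):

print(cross_model.avgMetrics)            # average f1 for each parameter combination
best_model = cross_model.bestModel       # best estimator, refit on the full training set
predictions = best_model.transform(test_df)
predictions.select('label', 'prediction').show(5)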