I built an SVM model with scikit-learn from my training data. The dataset has about 1,000 samples, each with 50 features.
What I'd like to know is: when I predict on new data with this model, which of the 50 features matters most for that particular prediction? For example, a per-feature probability or contribution, or how much each feature moves the sample's distance from the model's hyperplane. Is this possible?
To be clear, I don't mean the global feature importance that is commonly computed when constructing a model; I mean the importance of each feature for a single new prediction.
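To illustrate the kind of per-prediction breakdown I have in mind, here is a rough sketch. It assumes a linear kernel, where the decision function is f(x) = w·x + b, so each feature's contribution to the distance from the hyperplane can be read off as w_i * x_i. The synthetic data and variable names are only placeholders, not my real setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Placeholder data shaped like my problem: ~1000 samples, 50 features.
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)
model = SVC(kernel="linear")
model.fit(X, y)

x_new = X[0]                    # pretend this is one new sample
w = model.coef_[0]              # hyperplane normal vector (binary case)
contributions = w * x_new       # per-feature contribution to f(x)

# Features with the largest absolute contribution for this one sample.
top = np.argsort(np.abs(contributions))[::-1][:5]
print("decision value:", w @ x_new + model.intercept_[0])
print("top features for this prediction:", top)
```

Something like this works only for a linear kernel, though. With an RBF or polynomial kernel there is no explicit `coef_`, which is why I'm asking whether there is a general way to get this kind of per-prediction importance.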