
I built an SVM model with scikit-learn on my training data.

The data consists of about 1000 samples, and each sample has 50 features.

What I'm curious about is this: when the model predicts on new data, which of the new sample's 50 features is most important for that particular prediction?

For example, some kind of score for each of the 50 features, such as its contribution to the sample's distance from the model's hyperplane. Is this possible?

I don't mean the feature importance that is commonly computed when constructing a model; I mean the importance per individual prediction.

  • Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. – Community May 12 '22 at 06:44

1 Answer


You can use `coef_` to do what you need. Check these links for more info and examples...

link1 link2

– Redox
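Not from the linked pages, but a minimal sketch of what this looks like in practice for a linear kernel: since the decision value is w·x + b, each feature's contribution to one prediction is simply w_i * x_i. The dataset below is a hypothetical stand-in for the question's ~1000 samples × 50 features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Hypothetical stand-in for the question's ~1000 samples x 50 features.
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

model = SVC(kernel="linear").fit(X, y)

x_new = X[0]                      # one sample to explain
w = model.coef_[0]                # weight vector, shape (50,)
contrib = w * x_new               # per-feature contribution to w.x + b
decision = contrib.sum() + model.intercept_[0]

# The contributions (plus the intercept) sum to the decision value.
assert np.isclose(decision, model.decision_function(x_new.reshape(1, -1))[0])

# Features ranked by influence on this particular prediction.
print(np.argsort(np.abs(contrib))[::-1][:5])
```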
  • Thanks, but I use the RBF kernel. Is there a way in this case? – 황준석 May 12 '22 at 04:36
  • No, I don't believe it is possible for non-linear kernels like RBF. In the first link (link1) above and [here](https://stats.stackexchange.com/questions/265656/is-there-a-way-to-determine-the-important-features-weight-for-an-svm-that-uses), there is some explanation of why it is not possible for RBF. – Redox May 12 '22 at 06:17
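Following up on the comments above: the weight vector is indeed not recoverable for the RBF kernel, but one model-agnostic workaround (not suggested in the thread, so treat it as a sketch) is to estimate how sensitive the decision value is to each feature of a single sample via finite differences on `decision_function`:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=50, random_state=0)
model = SVC(kernel="rbf").fit(X, y)

def per_sample_sensitivity(model, x, eps=1e-4):
    """Finite-difference sensitivity of decision_function at one sample x."""
    base = model.decision_function(x.reshape(1, -1))[0]
    scores = np.empty(x.shape[0])
    for i in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[i] += eps
        scores[i] = (model.decision_function(x_pert.reshape(1, -1))[0] - base) / eps
    return scores

sens = per_sample_sensitivity(model, X[0])
print("features with largest local influence:", np.argsort(np.abs(sens))[::-1][:5])
```

This only measures local sensitivity around one sample; libraries like SHAP implement more principled per-prediction attributions for arbitrary models.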