In general, what are the steps you follow when the accuracy of a supervised learning classifier you have obtained after training is not up to your expectations? Example steps: Feature Re-Engineering, Removing Noise, Dimensionality Reduction, handling Overfitting, and so on. What tests (carried out after you have obtained the % accuracy of your classifier) lead you to a conclusion (say, that there is a lot of noise, which is why accuracy is low) that in turn makes you perform an action (remove noisy words/features, etc.)? After performing the action you re-train the classifier, and the cycle goes on until you have achieved good results.

I have read this question on SO - Feature Selection and Reduction for Text Classification - which has a great accepted answer, but it doesn't talk about the steps that lead you to such a conclusion (as described above).

2 Answers


There are various metrics you can use depending on the classifier you have. Is it a binary classifier? A multi-class classifier? Or a multi-label, multi-class classifier? The most common metrics are Precision, Recall, F-Score and Accuracy, but there is a host of other, more detailed metrics, especially when it comes to multi-label classifiers.
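
As a minimal sketch (assuming scikit-learn, which this answer doesn't name, and made-up labels), the common metrics can be computed like this:

    # Hypothetical gold labels and predictions for a binary classifier.
    from sklearn.metrics import (accuracy_score, precision_score,
                                 recall_score, f1_score)

    y_true = [0, 1, 1, 0, 1, 0, 1, 1]
    y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

    print("Accuracy :", accuracy_score(y_true, y_pred))
    print("Precision:", precision_score(y_true, y_pred))
    print("Recall   :", recall_score(y_true, y_pred))
    print("F1-score :", f1_score(y_true, y_pred))
    # For multi-class problems, pass average='macro' (or 'micro'/'weighted')
    # to precision_score, recall_score and f1_score.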

Most machine learning toolkits implement the standard evaluation metrics (Precision, Recall, etc.), but I have found that metrics for multi-label classifiers aren't implemented in many of them.
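
If your toolkit does support them (scikit-learn is one example; my assumption, not something this answer specifies), a couple of multi-label metrics look like this, using hypothetical label-indicator matrices:

    import numpy as np
    from sklearn.metrics import hamming_loss, f1_score

    # Rows = instances, columns = labels (hypothetical data).
    y_true = np.array([[1, 0, 1],
                       [0, 1, 0],
                       [1, 1, 0]])
    y_pred = np.array([[1, 0, 0],
                       [0, 1, 1],
                       [1, 1, 0]])

    print("Hamming loss    :", hamming_loss(y_true, y_pred))
    print("Example-based F1:", f1_score(y_true, y_pred, average="samples"))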

The paper A systematic analysis of performance measures for classification tasks is a comprehensive listing of metrics for classifiers.

A good paper on multi-label classifier metrics is: A literature survey of algorithms for multi-label learning

Depending on what your metrics show, you may want to address overfitting or underfitting, get more data (or more accurate data), or, in extreme situations, switch machine learning algorithms or approaches. See Domingos' A few useful things to know about Machine Learning.
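
One quick diagnostic along these lines (a sketch only; the data, classifier and hyperparameter values are stand-ins, and scikit-learn is assumed) is to compare training accuracy against cross-validated accuracy:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Stand-in data and classifier; substitute your own.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    clf = SVC(C=10.0, gamma=0.1)

    train_acc = clf.fit(X, y).score(X, y)
    cv_acc = cross_val_score(clf, X, y, cv=5).mean()

    print("train accuracy:", train_acc)
    print("cv accuracy   :", cv_acc)
    # High train accuracy but low cv accuracy -> likely overfitting
    #   (regularize, simplify the model, get more data).
    # Both low -> likely underfitting (richer features, more flexible model).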

You don't say what you're trying to do, and overall it really depends on whether you're a practitioner (a specialist in another area) or an expert in machine learning. Regardless, there are all kinds of things you can look at:

One dimension is depth or difficulty:

-Basics: Handling simple methodological and programming bugs: scaling features to between 0 and 1 (or -1 and 1), cross-validation to get good values of the hyperparameters (C and gamma in the case of an SVM), and many other details; this question covers them well: Supprt Vector Machine works in matlab, doesn't work in c++. A small sketch of this step is shown after this list.

-Intermediate: Handling deeper conceptual bugs: revisiting the quality and quantity of your data, reviewing the type of classifier you're using (for example linear vs. non-linear, generative vs. discriminative), and checking the literature for results others have obtained using methods similar to yours on the same data. Consider the possibility that you're training on one type of data and testing on another (source-target problems). Keywords: domain adaptation, multi-task learning, regularization, etc.

-Advanced: You've exhausted all the possibilities, and you need to advance the state of the art to solve your problem. You need faster algorithms, more robust results with less data, or to handle a massively larger scale. Study state-of-the-art solutions and push them ahead. Also, progress is not always evolutionary/incremental; sometimes you need to take another route, eliminate assumptions, etc.
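
To make the "Basics" item above concrete, here is a hedged sketch (scikit-learn, the synthetic data and the particular grid values are my assumptions) of scaling features to [0, 1] and cross-validating C and gamma for an RBF SVM:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    pipe = Pipeline([
        ("scale", MinMaxScaler()),      # map every feature to [0, 1]
        ("svm", SVC(kernel="rbf")),
    ])
    grid = GridSearchCV(
        pipe,
        {"svm__C": [0.1, 1, 10, 100], "svm__gamma": [0.01, 0.1, 1]},
        cv=5,
    )
    grid.fit(X, y)
    print("best params     :", grid.best_params_)
    print("best cv accuracy:", grid.best_score_)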

This categorization is mostly orthogonal, but also useful:

-Expert Knowledge: Sometimes (as in the case you link) problems that are very difficult to handle (NLP, vision) can be approached using expert knowledge. For example, in face recognition people use certain regions of the face (around the eyes), based on results from neuroscience showing that this is what humans focus on when recognizing individuals. Most if not all useful representation methods, such as SIFT, SURF and LBP, have some basis in human vision. Also, in the example you linked, linguists have proposed the representations used in ML approaches to NLP: Feature Selection and Reduction for Text Classification.
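
As an illustration of such expert-driven representations (purely a sketch: OpenCV is my assumption, and "face.jpg" is a hypothetical input image), SIFT descriptors can be extracted and then fed to a classifier:

    import cv2  # OpenCV >= 4.4, where SIFT lives in the main module

    # "face.jpg" is a hypothetical image path used only for illustration.
    img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError("face.jpg not found")

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    # descriptors is an (n_keypoints, 128) array; pooled or quantized
    # (e.g. bag of visual words), it becomes the feature vector for a classifier.
    print(len(keypoints), "keypoints detected")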
