3

I need a simulation model that generates an artificial classification data set with a binary response variable. I then want to check the performance of various classifiers on this data set. The data set may have any number of features (predictors).

Z KHAN
2 Answers

3

This is a bad idea, and will tell you nothing about the relative merits of the classifiers.

First I'll explain how to generate data, then why you won't learn anything by doing it. You want a vector of binary features: there are lots of ways to do this, but let's take the simplest, a vector of independent Bernoulli variables. Here's the recipe to generate as many instances as you like (a code sketch follows the list):

  1. For each feature i, generate a parameter theta_i, where 0 < theta_i < 1, from a uniform distribution
  2. For each desired instance j, generate the i-th feature f_ji by sampling again from a uniform distribution. If the number you sampled is less than theta_i, set f_ji = 1, else set it equal to 0
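
As a minimal sketch of this recipe (assuming Python with NumPy; the function name and seed are illustrative, not from the original answer):

    import numpy as np

    rng = np.random.default_rng(0)  # fixed seed, assumed for reproducibility

    def simulate_binary_features(n_instances, n_features):
        # Step 1: one theta_i per feature, drawn from Uniform(0, 1)
        theta = rng.uniform(size=n_features)
        # Step 2: f_ji = 1 if a fresh uniform draw falls below theta_i, else 0
        u = rng.uniform(size=(n_instances, n_features))
        return (u < theta).astype(int)

    X = simulate_binary_features(1000, 10)  # 1000 instances, 10 binary features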

This will allow you to generate as many instances as you like. However, the problem is that you know the true distribution of the data, so you can get the Bayes Optimal decision rule: this is the theoretically optimal classifier. Under the generation scheme I gave you above, the Naive Bayes classifier is close to optimal (if you used an actual Bayesian version where you integrated out the parameters, it would be the optimal classifier).

Does this mean that Naive Bayes is the best classifier? No, of course not: in practice, we are interested in the performance of classifiers on datasets where we don't know the true distribution of the data. Indeed, the whole notion of discriminative modelling is based on the idea that when the true distribution is unknown, trying to estimate it is solving a harder problem than is required for classification.

In summary, then: think very carefully about whether this is what you want to do. You can't simulate data and use that to decide which classifier is 'best', because which is best will depend on the recipe you use for simulation. If you wanted to look at kinds of data where certain classifiers perform poorly or strangely, you could simulate this sort of data to confirm your supposition, but I don't think that's what you're trying to do.

EDIT:

I realise you actually want a binary outcome, not binary features. You can ignore some of what I said.

A standard way to model binary responses is the logistic regression model:

log( p/(1-p) ) = w.x

where w is your weight vector and x is your feature vector. To simulate from this model given an observed x, take the dot product w.x and apply the inverse logit function:

p = logit^-1(w.x) = 1 / (1 + exp(-w.x))

This gives you a probability p in the range (0, 1). Then sample a response as a Bernoulli variable with parameter p, i.e. draw a uniform number in [0, 1] and return 1 if it's less than p, else return 0.
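
A minimal sketch of this sampling step (again assuming Python with NumPy; the feature matrix and weight vector below are made up purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)  # fixed seed, assumed for reproducibility

    def simulate_responses(X, w):
        # p = inverse logit of w.x for each instance (row of X)
        p = 1.0 / (1.0 + np.exp(-X @ w))
        # Bernoulli draw: 1 if a uniform sample falls below p, else 0
        return (rng.uniform(size=len(p)) < p).astype(int)

    # illustrative inputs: 1000 instances, 5 features, arbitrary weights
    X = rng.normal(size=(1000, 5))
    w = rng.normal(size=5)
    y = simulate_responses(X, w)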

If you want to simulate the xs too, you can, but then you're back in the realms of my discussion above. Also, note that since the responses are sampled from a logistic regression model, the logistic regression classifier will have an obvious advantage here, as I describe above...

Ben Allison
  • Yes, you are right. I want a data set that is not biased towards any classifier. One regression example could be the model Y = 2sin(X1) * 2sin(X2) + e, where X1 and X2 are uniform and e is Gaussian, whereas I need a binary response. Thanks! – Z KHAN Feb 07 '13 at 11:36
  • Ahh wait - a binary response? Then you want the logistic regression model. Let me edit my answer. – Ben Allison Feb 07 '13 at 15:55
0

You need to know from what distribution you want to generate the data. Most likely it is a normal distribution. Then you need to label the data points with their classes.

normal distribution: example algorithm for generating random value in dataset with normal distribution?

gaussian distribution: C++: generate gaussian distribution

data generation in Excel: http://www.databison.com/index.php/how-to-generate-normal-distribution-sample-set-in-excel/
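
A minimal sketch of this approach (assuming Python with NumPy, and two classes whose Gaussian means are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)  # fixed seed, assumed for reproducibility

    n_per_class = 500
    # each class is drawn from its own normal distribution (means are illustrative)
    X0 = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 2))  # class 0
    X1 = rng.normal(loc=2.0, scale=1.0, size=(n_per_class, 2))  # class 1

    X = np.vstack([X0, X1])
    y = np.concatenate([np.zeros(n_per_class, dtype=int),
                        np.ones(n_per_class, dtype=int)])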

xhudik