
We can also form different decision trees from the same data by randomly selecting the features, without creating so many samples.

  • What do you mean "creating so many samples"? No samples are *created* in RF – desertnaut Jul 24 '18 at 12:54
  • Actually, in RF we do both (i.e. randomly select both data & features); the answer here may be useful: [Why is Random Forest with a single tree much better than a Decision Tree classifier?](https://stackoverflow.com/questions/48239242/why-is-random-forest-with-a-single-tree-much-better-than-a-decision-tree-classif/48239653#48239653) – desertnaut Jul 24 '18 at 12:57
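To make the last comment concrete, here is a minimal sketch (assuming scikit-learn's `RandomForestClassifier` and a toy dataset) showing that a random forest draws on both kinds of randomness at once:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data for illustration; any (X, y) classification set would do.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,
    bootstrap=True,       # each tree is trained on a random bootstrap sample of the rows
    max_features="sqrt",  # each split considers only a random subset of the features
    random_state=0,
)
clf.fit(X, y)
```

Setting `bootstrap=False` would disable the row sampling and leave only the per-split feature sampling, which is essentially the setup the question describes.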

1 Answer


Selecting random subsets of the data is a way to make sure that no single tree overfits the underlying data.
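For illustration, here is a small experiment (again assuming scikit-learn; the exact scores depend on the data) contrasting a single unpruned tree with a forest of randomized trees. The single tree typically fits the training split perfectly but generalizes worse than the forest:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unpruned single tree can memorise the training split...
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# ...while the forest averages many trees, each fit on a random subset.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

for name, model in [("single tree", tree), ("random forest", forest)]:
    print(name, model.score(X_train, y_train), model.score(X_test, y_test))
```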

Eric Yang