The documentation for RandomForestClassifier in scikit-learn says:
> A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default)
If the training set X has n instances, then it seems that the sub-sample drawn for each decision tree will also be of size n. Now if `bootstrap=True`, the sample is drawn with replacement, and there is a clear statistical benefit to drawing a number of such samples: each tree sees a different sample, so the trees differ.
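To illustrate the point about sampling with replacement, here is a minimal sketch (using numpy directly, not scikit-learn's internals) showing that a size-n bootstrap sample typically contains only about 63% of the distinct instances, so each tree trains on a different subset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
idx = np.arange(n)  # stand-in for the indices of the n training instances

# Draw a bootstrap sample: size n, WITH replacement.
sample = rng.choice(idx, size=n, replace=True)

# Fraction of distinct instances included; expected value is 1 - (1 - 1/n)^n,
# which approaches 1 - 1/e ≈ 0.632 for large n.
unique_frac = np.unique(sample).size / n
print(f"unique fraction: {unique_frac:.3f}")
```

Because each tree's sample omits a different ~37% of the data, the fitted trees disagree, which is what averaging exploits.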
However, if `bootstrap=False` (sample drawn without replacement), doesn't that mean every sample is identical to the training set? Is that a correct interpretation? If so, every tree gets exactly the same sample, so why would this be considered an ensemble?
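My interpretation can be sketched as follows (again with plain numpy, just to make the question concrete): drawing n items without replacement from n items necessarily returns the whole training set, only reordered.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
idx = np.arange(n)  # stand-in for the indices of the n training instances

# "Sample" of size n WITHOUT replacement: every instance appears exactly once,
# so this is just a permutation of the full training set.
sample = rng.choice(idx, size=n, replace=False)

print(np.array_equal(np.sort(sample), idx))  # the same set of instances
```

If that is right, then with `bootstrap=False` each tree would be fit on the identical set of instances.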