In probabilistic classifiers, such as Random Forests, no threshold is involved in fitting the model, nor is any threshold associated with a fitted model; hence, there is actually nothing to change. As correctly pointed out in the CV thread Reduce Classification Probability Threshold:
Choosing a threshold beyond which you classify a new observation as 1 vs. 0 is not part of the statistics any more. It is part of the decision component.
Quoting from my own answer in Change threshold value for Random Forest classifier:
There is simply no threshold during model training; Random Forest is a probabilistic classifier, and it only outputs class probabilities. "Hard" classes (i.e. 0/1), which indeed require a threshold, are neither produced nor used in any stage of the model training - only during prediction, and even then only in the cases we indeed require a hard classification (not always the case). Please see Predict classes or class probabilities? for more details.
So, if you produce predictions from a fitted model, say rf, with the argument type = "prob", as shown in the CV thread you have linked to:

pred <- predict(rf, mydata, type = "prob")

these predictions will be probability values in [0, 1], and not hard classes 0/1. From here, you are free to choose the threshold as shown in the answer there, i.e.:
thresh <- 0.6  # any desired value in [0, 1]
prob_pos <- pred[, 2]  # predict() with type = "prob" returns a matrix with one column per class; take the positive-class column
class_pred <- c()
class_pred[prob_pos <= thresh] <- 0
class_pred[prob_pos > thresh] <- 1
or, of course, experiment with different threshold values without needing to change anything in the model itself.
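To make that concrete, here is a minimal, self-contained sketch of sweeping several thresholds over a vector of predicted probabilities; the vectors prob_pos and actual are illustrative stand-ins for the positive-class column of the predict() output and the true labels, not output from any real model:

```r
# Illustrative positive-class probabilities and true labels (made-up values)
prob_pos <- c(0.15, 0.40, 0.55, 0.62, 0.80, 0.95)
actual   <- c(0,    0,    1,    0,    1,    1)

# Sweep a few candidate thresholds; the fitted model never changes
for (thresh in c(0.4, 0.5, 0.6)) {
  class_pred <- as.integer(prob_pos > thresh)  # hard classes from probabilities
  acc <- mean(class_pred == actual)            # accuracy at this threshold
  cat(sprintf("threshold = %.1f  accuracy = %.2f\n", thresh, acc))
}
```

The same loop works with any decision metric (precision, recall, expected cost, etc.) in place of accuracy, which is exactly why thresholding belongs to the decision step rather than to model training.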