I'm not used to LOOCV yet, and I've been curious about the problem in the title. I ran leave-one-out cross-validation for my random forest model. My code looks like this:
```python
import numpy as np
from sklearn import metrics
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut

loo = LeaveOneOut()
y_tests, y_preds = [], []

for train_index, test_index in loo.split(x):
    x_train, x_test = x.iloc[train_index], x.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    model = RandomForestRegressor()
    model.fit(x_train, y_train.values.ravel())
    y_pred = model.predict(x_test)
    # y_pred = [np.round(p) for p in y_pred]
    y_tests += y_test.values.tolist()[0]  # the single held-out target
    y_preds += list(y_pred)

rr = metrics.r2_score(y_tests, y_preds)
ms_error = metrics.mean_squared_error(y_tests, y_preds) ** 0.5  # RMSE
```
After that, I wanted to get the feature importances of my model like this:
```python
features = x.columns
sorted_idx = model.feature_importances_.argsort()
```
The result was pretty different from what I expected. During the LOOCV process, my computer built many different models using different train/test splits of my original data, one model per row, since each test set contains exactly one sample. So I was expecting as many feature-importance arrays as there are rows in the original data. Instead I got only a single feature-importance array, calculated as if for just one model (as if LOOCV weren't in my code at all).
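In other words, I expected to be able to collect one importance array per fold, something like this sketch on tiny synthetic data (`all_importances` is just a name I made up for this illustration):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut

# tiny synthetic data just to illustrate: 10 samples, 3 features
X, y = make_regression(n_samples=10, n_features=3, random_state=0)

all_importances = []  # hypothetical: one importance vector per LOOCV fold
for train_index, test_index in LeaveOneOut().split(X):
    fold_model = RandomForestRegressor(n_estimators=10, random_state=0)
    fold_model.fit(X[train_index], y[train_index])
    all_importances.append(fold_model.feature_importances_)

print(len(all_importances))       # 10 -> one array per sample/fold
print(all_importances[0].shape)   # (3,) -> one value per feature
```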
So why did I get only one set of importances? I want to understand the reason. Thank you for reading my question.