For ordinal features it makes sense to use label encoding, while for nominal categorical features one-hot encoding is the usual choice. But these are the conventions for input features. For output variables, is it necessary to use one-hot encoding if the labels are categorical, or may I use label encoding as well? Which one is preferable?
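To make the distinction concrete, here is a toy example of the two encodings (illustrative data, not my actual labels):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

labels = np.array(["apple", "banana", "cherry", "apple"])

# Label encoding: one integer per class
le = LabelEncoder()
print(le.fit_transform(labels))  # [0 1 2 0]

# One-hot encoding: one binary column per class
ohe = OneHotEncoder()
print(ohe.fit_transform(labels.reshape(-1, 1)).toarray())
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]
```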
I am training a fruit classifier with 120 classes. I use a ResNet50 model pre-trained on ImageNet as a feature extractor, and on those features I train a Logistic Regression classifier (transfer learning). With 120 classes, label encoding maps the labels to the range 0 to 119. Is it okay to train the model with the labels kept label-encoded? I ask because the following sklearn documentation appears to allow exactly that:
sklearn.preprocessing.LabelEncoder
There they say:
..."This transformer should be used to encode target values, i.e. y, and not the input X."
But I am confused about why this is acceptable, since with label encoding the output classes are not all treated equally: the integer codes impose an artificial ordering that one-hot encoding would avoid.
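For reference, here is a minimal sketch of my training setup (random placeholder data standing in for the real ResNet50 features and fruit labels):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression

# Placeholders: in my real code, `features` holds the ResNet50 activations
# extracted per image, and `fruit_names` holds the 120 string class labels.
features = np.random.rand(600, 2048)
fruit_names = np.random.choice([f"fruit_{i}" for i in range(120)], size=600)

# Encode the string labels as integers 0..119
le = LabelEncoder()
y = le.fit_transform(fruit_names)

# Fit directly on the integer-encoded targets
clf = LogisticRegression(max_iter=1000)
clf.fit(features, y)

# Predictions come back as integers; invert them to recover class names
pred_names = le.inverse_transform(clf.predict(features[:5]))
print(pred_names)
```

This runs without complaint, which is what prompts my question about whether the integer encoding of `y` is actually safe here.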