When training deep learning models for different image tasks, such as image classification and semantic segmentation, what kind of pre-processing needs to be performed?
For instance, if I want to train a network for semantic segmentation, do I need to scale the image values (typically represented as an nd-array) to the [0, 1] range, or keep them in the [0, 255] range? Thanks.
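
For concreteness, here is roughly what I mean by the two options (a minimal NumPy sketch; the array shape and variable names are just placeholders):

```python
import numpy as np

# Image loaded as an nd-array of unsigned 8-bit integers in [0, 255]
img = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)

# Option 1: keep the raw [0, 255] range (cast to float for training)
img_raw = img.astype(np.float32)

# Option 2: scale to the [0, 1] range
img_scaled = img.astype(np.float32) / 255.0
```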