For semantic image segmentation, I understand that you typically have a folder with your images and a folder with the corresponding masks. In my case, I have grayscale images of shape (32, 32, 32), and the masks naturally have the same shape. The labels are stored as intensity values (value 1 = label 1, value 2 = label 2, etc.), with 4 classes in total. Suppose I have found a model that was built with the Keras model API. How do I know how to prepare my label data so that the model accepts it? Does it depend on the loss function? Is it defined by the model (its Input parameters)? Do I just add another dimension, so the masks become (4, 32, 32, 32), where the 4 represents the 4 classes, and one-hot encode them?
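To make my question concrete, here is a minimal sketch of what I mean by one-hot coding the masks. I am assuming the classes are stored as 0-3 and Keras's default channels-last layout; `image` and `mask` are just dummy arrays standing in for my real data:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

# Dummy grayscale volume and integer mask, both of shape (32, 32, 32).
# In practice these would be loaded from the image and mask folders.
image = np.random.rand(32, 32, 32).astype("float32")
mask = np.random.randint(0, 4, size=(32, 32, 32))  # class ids 0..3

# If the mask actually stores classes as 1..4, shift it first:
# mask = mask - 1

# One-hot encode along a new last axis -> shape (32, 32, 32, 4),
# i.e. channels-last rather than (4, 32, 32, 32).
mask_onehot = to_categorical(mask, num_classes=4)

# Add a channel axis to the image and a batch axis to both,
# giving (1, 32, 32, 32, 1) for the input and (1, 32, 32, 32, 4) for the label.
x = image[np.newaxis, ..., np.newaxis]
y = mask_onehot[np.newaxis, ...]
print(x.shape, y.shape)
```

Is this the right general approach, or does the required label shape change with the loss function (e.g. one-hot masks for `categorical_crossentropy` versus integer masks for `sparse_categorical_crossentropy`)?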
I want to build a 3D convolutional neural network for semantic segmentation, but I don't understand how to feed the data into Keras correctly. The predicted output is supposed to be a 4-channel 3D volume, where each channel holds the probability of each voxel belonging to one of the classes.
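For the output side, this toy model is roughly what I picture (just to show the shapes I have in mind, not my actual architecture):

```python
from tensorflow.keras import layers, models

# Minimal 3D model only to illustrate the expected input/output shapes;
# a real segmentation network would be much deeper (e.g. a 3D U-Net).
inputs = layers.Input(shape=(32, 32, 32, 1))  # grayscale volume, channels-last
h = layers.Conv3D(16, 3, padding="same", activation="relu")(inputs)
outputs = layers.Conv3D(4, 1, activation="softmax")(h)  # per-voxel probabilities over 4 classes

model = models.Model(inputs, outputs)

# With categorical_crossentropy the labels would need to be one-hot masks of
# shape (batch, 32, 32, 32, 4); with sparse_categorical_crossentropy they could
# stay as integer masks of shape (batch, 32, 32, 32).
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()

# model.fit(x, y, batch_size=2, epochs=1)
# where x has shape (N, 32, 32, 32, 1) and y has shape (N, 32, 32, 32, 4)
```

Is this how the label data is supposed to be fed in, or am I misunderstanding how the output shape, the loss function, and the label format fit together?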