I have to train a neural net for semantic segmentation of the kidney and its tumor, starting from the dataset available from the KiTS19 Challenge.
In this dataset I have 100 CT scans for the training set, and they vary widely in size and pixel spacing.
Studying several approaches online, I found that it is good practice to pick a single pixel spacing and make it the same for all volumes (e.g. new_spacing = [2., 1.5, 1.5]). When a volume is resampled to this new spacing, its dimensions change according to the formula: new_size = original_size * (original_spacing / new_spacing).
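For example, with purely illustrative values (the real spacings and sizes come from each scan's header):

```python
import numpy as np

original_spacing = np.array([5.0, 0.78, 0.78])   # (z, y, x) in mm, illustrative
original_size = np.array([128, 512, 512])        # voxels per axis, illustrative
new_spacing = np.array([2.0, 1.5, 1.5])

new_size = np.round(original_size * (original_spacing / new_spacing)).astype(int)
print(new_size)   # -> [320 266 266]
```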
What I have done so far is use scipy.ndimage.zoom to resample each volume to the desired new_spacing and the computed new_size, then pad or crop the result to the input dimensions of the network, which in my case are (n_slice, 512, 512). The problem is that this approach is really time-consuming. Is there a faster way to do this?
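Here is a simplified sketch of my current pipeline (the helper name, the interpolation order, and the padding mode are just my current choices for illustration):

```python
import numpy as np
from scipy import ndimage

def resample_and_fit(volume, original_spacing, new_spacing=(2.0, 1.5, 1.5),
                     target_hw=512, order=1):
    # Resample to the target spacing: zoom factor per axis = old spacing / new spacing.
    zoom_factors = np.asarray(original_spacing) / np.asarray(new_spacing)
    out = ndimage.zoom(volume, zoom_factors, order=order)

    # Pad or crop the two in-plane axes to target_hw; the slice axis is left as is.
    for axis in (1, 2):
        diff = target_hw - out.shape[axis]
        if diff > 0:                              # too small -> symmetric zero padding
            pad = [(0, 0)] * 3
            pad[axis] = (diff // 2, diff - diff // 2)
            out = np.pad(out, pad, mode="constant")
        elif diff < 0:                            # too large -> centre crop
            start = (-diff) // 2
            out = np.take(out, range(start, start + target_hw), axis=axis)
    return out
```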