I'm working on image segmentation with large satellite .JP2 images.
Image shape: (10000, 10000, 13), i.e. 13 bands (13 different wavelength observations of the same area), dtype uint32.
I want to build the most efficient TensorFlow pipeline, but I don't have much experience.
I want easy tuning of the number of bands used for training (RGB for the first training, then I'll add more bands to see if they improve performance).
I imagined two different pipelines:
I transform my .JP2 into a (10000, 10000, 13) NumPy array. The pipeline is then fed the desired slices (e.g. 128x128x3 if I want an RGB image).
Or, I preprocess my large image into 13 different folders (one per band); the input pipeline then uses the desired datasets to build the 128 x 128 x (1-13) input image.
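For the second option, here is a minimal sketch of what I have in mind (the folder layout, filenames, and the `load_tile` helper are my own assumptions, not an existing API): each band folder holds one .npy file per 128x128 tile, and only the requested bands are stacked.

```python
import tempfile
from pathlib import Path

import numpy as np

def load_tile(tile_id, band_dirs):
    """Stack one tile from per-band folders (one .npy file per band)
    into a (H, W, n_bands) array; band_dirs selects which bands to use."""
    return np.stack(
        [np.load(Path(d) / f"{tile_id}.npy") for d in band_dirs], axis=-1
    )

# Demo with throwaway files standing in for the 13 band folders.
root = Path(tempfile.mkdtemp())
for band in range(13):
    d = root / f"band_{band:02d}"
    d.mkdir()
    np.save(d / "tile_0000.npy", np.zeros((128, 128), dtype=np.uint32))

# Hypothetical band order for RGB; the real mapping depends on the sensor.
rgb_dirs = [root / f"band_{b:02d}" for b in (3, 2, 1)]
tile = load_tile("tile_0000", rgb_dirs)
print(tile.shape)  # (128, 128, 3)
```

Adding or removing bands is then just a matter of changing the list of folders passed in.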
Taking the big image and slicing it as I want, directly inside the TensorFlow pipeline, is more convenient because I just need a 10000x10000x13 NumPy array as the training set. But I don't know whether that is relevant/optimized/even possible...
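For the first option, assuming the full array fits in RAM (roughly 5.2 GB at uint32 for 10000x10000x13), what I imagine is a plain generator yielding random crops, which something like `tf.data.Dataset.from_generator` could then wrap; all names below are mine:

```python
import numpy as np

def patch_generator(image, patch_size=128, bands=(0, 1, 2), seed=0):
    """Yield random (patch_size, patch_size, len(bands)) crops from a
    (H, W, C) array, keeping only the requested band indices."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    while True:
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        # Slice spatially first so only the small patch is copied,
        # never the full 10000x10000x13 array.
        yield image[y:y + patch_size, x:x + patch_size][..., list(bands)].astype(np.float32)

# Small stand-in for the real (10000, 10000, 13) uint32 array.
image = np.zeros((512, 512, 13), dtype=np.uint32)
patch = next(patch_generator(image, bands=(3, 2, 1)))
print(patch.shape)  # (128, 128, 3)
```

Switching from RGB to more bands would then only change the `bands` argument, without touching the data on disk.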
What is the most optimized way to solve my problem? (I have an 11 GB 1080 GPU.)