I'm doing hyperspectral image classification at the moment (Python + TensorFlow), and with one specific test I ran into a problem: I don't have enough RAM.
General idea: I have a hyperspectral image (ndarray, dtype = float64) of shape W x H x D, and for every classified pixel I need to extract a smaller cube/window of shape WS x WS x D. To feed these into the ANN I have to reshape them into S x WS x WS x D x 1, where S is the number of samples and 1 is the channel dimension, and that's where I don't have enough memory.
As it is right now, my code extracts all classified pixels into similarly shaped containers per class, which fails given window size WS = 27, depth D = 103, and around 42k classified pixels:
42776 * 27 * 27 * 103 * 8 / 1024^2 ≈ 24505 MB

That is a total of about 24.5 GB, which I don't have.
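For reference, a stripped-down sketch of what my current approach boils down to (`image`, `labels`, and the W/H dimensions here are stand-ins for my real data):

```python
import numpy as np

W, H, D = 610, 340, 103        # stand-in scene dimensions; D = 103 bands
WS = 27                        # window size
pad = WS // 2

image = np.random.rand(W, H, D)                  # stand-in for the real cube
labels = np.random.randint(0, 10, size=(W, H))   # stand-in labels, 0 = unclassified

padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
coords = np.argwhere(labels > 0)                 # ~42k classified pixels in my data

# the part that fails: materialising all S x WS x WS x D x 1 samples at once
samples = np.empty((len(coords), WS, WS, D, 1), dtype=np.float64)  # ~24.5 GB
for i, (x, y) in enumerate(coords):
    samples[i, ..., 0] = padded[x:x + WS, y:y + WS, :]
```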
I looked into whether I can concatenate NumPy views/slices such that the result would still be a sort of view, but here people say that's not possible, because the result wouldn't be contiguous in memory, which is fair enough.
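A quick sanity check confirms it: stacking views always allocates a fresh contiguous array, so it makes no difference that the inputs were views:

```python
import numpy as np

base = np.arange(100, dtype=np.float64).reshape(10, 10)
views = [base[i:i + 3, i:i + 3] for i in range(3)]   # overlapping views, no copies yet

stacked = np.stack(views)                  # new (3, 3, 3) array
print(np.shares_memory(stacked, base))     # False: stack copied the data
```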
So the only option I'm seeing right now is to extract the cubes as a Python list of views/slices and then have a buffer that is repeatedly refilled and fed into the ANN, which would be a bit annoying because this is probably the only test with such a big window + depth combination.
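Concretely, I mean something along these lines (a rough sketch reusing the stand-in names from above; `B` is an arbitrary batch size):

```python
# per-pixel cubes as a list of views -- negligible extra memory, nothing copied yet
cube_views = [padded[x:x + WS, y:y + WS, :] for x, y in coords]

def batches(cube_views, WS, D, B=256):
    """Refill one preallocated buffer and yield the samples in portions."""
    buf = np.empty((B, WS, WS, D, 1), dtype=np.float64)   # ~150 MB for B = 256
    for start in range(0, len(cube_views), B):
        chunk = cube_views[start:start + B]
        for i, view in enumerate(chunk):
            buf[i, ..., 0] = view          # the only copying happens here, B cubes at a time
        yield buf[:len(chunk)]

# then feed the network portion by portion, e.g.:
# for batch in batches(cube_views, WS, D):
#     model.predict_on_batch(batch)
```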
My question is whether there is some better way to force numpy stack/concat to produce a view-like object without inflating memory (or maybe there is some special sacred technology I'm not aware of that can help), or is "buffer + feeding in portions" the most rational solution?