Context:
I've been processing scientific satellite images and currently keep the individual end result at each timestamp as a cv::Mat_<double>; these per-timestamp results can then be stored in a standard container of images, such as a std::vector<cv::Mat_<double>>.
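For reference, the current layout looks roughly like this (frames is just an illustrative name for this sketch):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// One cv::Mat_<double> per timestamp, all frames with identical dimensions,
// kept in chronological order: frames[t](row, col) is pixel (row, col) at time t.
std::vector<cv::Mat_<double>> frames;
```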
The issue:
I would now like to study the physical properties of each individual pixel over time. For that, it would be far preferable if I could look at the data along the time dimension and work with a 2D table of vectors instead. In other words: to have a std::vector<double> associated with each pixel on the 2D grid that is common to all images.
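A copy-based version of what I'm after would look something like this sketch (pixel_series and frames are just placeholder names):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Gather one pixel's values over time into a contiguous std::vector<double>;
// this is exactly the copy I'm wondering whether I can/should avoid.
std::vector<double> pixel_series(const std::vector<cv::Mat_<double>>& frames,
                                 int row, int col)
{
    std::vector<double> series;
    series.reserve(frames.size());
    for (const auto& frame : frames)
        series.push_back(frame(row, col));   // one double copied per timestamp
    return series;
}
```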
A reason for wanting this layout is that the calculations involved (computing percentiles, curve fitting, etc.) will rely on standard library algorithms and on libraries that expect to be fed a std::vector or the like. For a given pixel, however, the data is definitely not contiguous in memory along the time dimension.
Can/should I really avoid copying the data in such a case? If so, what would be the best approach? By 'best' I mean efficient, yet as clean/clear as possible.
I thought of storing std::reference_wrapper entries (i.e. the addresses of the pixel values) in a std::vector; it's simple and it works, but each entry takes as much memory as if I had simply duplicated the data in a std::vector<double>, since each data point is just a double after all.
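In code, what I tried looks roughly like this (a sketch; pixel_refs is a placeholder name):

```cpp
#include <opencv2/core.hpp>
#include <functional>
#include <vector>

// One std::reference_wrapper<const double> per timestamp, each referring to the
// pixel's value inside the corresponding cv::Mat_<double>. Nothing is copied,
// but each wrapper is pointer-sized, i.e. as large as the double it refers to
// on a 64-bit platform.
std::vector<std::reference_wrapper<const double>>
pixel_refs(const std::vector<cv::Mat_<double>>& frames, int row, int col)
{
    std::vector<std::reference_wrapper<const double>> refs;
    refs.reserve(frames.size());
    for (const auto& frame : frames)
        refs.push_back(std::cref(frame(row, col)));
    return refs;
}
```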
NB: I've stumbled upon Boost MultiArray, but I'd like to avoid having to add a Boost dependency.
Many thanks in advance for your time/input.