I know it doesn't make much sense to profile code that was compiled without optimizations, but when I compile with -Ofast and profile it with gprof, I get useless data, for example many functions with the same time% and no calls count:
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
81.35 0.74 0.74 void cv::Mat::forEach_impl<cv::Vec<unsigned char, 3>, A_estimation(cv::Mat&, std::vector<cv::Mat, std::allocator<cv::Mat> >, int, int)::{lambda(cv::Vec<unsigned char, 3>&, int const*)#1}>(A_estimation(cv::Mat&, std::vector<cv::Mat, std::allocator<cv::Mat> >, int, int)::{lambda(cv::Vec<unsigned char, 3>&, int const*)#1} const&)::PixelOperationWrapper::operator()(cv::Range const&) const
10.99 0.84 0.10 void cv::Mat::forEach_impl<cv::Vec<float, 3>, Parallel_process::operator()(cv::Range const&) const::{lambda(cv::Vec<float, 3>&, int const*)#1}>(Parallel_process::operator()(cv::Range const&) const::{lambda(cv::Vec<float, 3>&, int const*)#1} const&)::PixelOperationWrapper::operator()(cv::Range const&) const
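Both profiles were produced with -pg added to the usual compile flags (gprof needs -pg at compile and link time); only the optimization level differs between the two runs. For the extracted test program shown further down, the instrumented -Ofast build is roughly:
g++ -std=c++1z -Wall -Weffc++ -Ofast -pg test.cpp -o test -fopenmp `pkg-config --cflags --libs opencv`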
When I profile the code built without optimization, gprof gives me straightforward, useful information:
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ms/call ms/call name
53.86 0.07 0.07 42236 0.00 0.00 A_estimation(cv::Mat&, std::vector<cv::Mat, std::allocator<cv::Mat> >, int, int)::{lambda(cv::Vec<unsigned char, 3>&, int const*)#1}::operator()(cv::Vec<unsigned char, 3>&, int const*) const
23.08 0.10 0.03 8259774 0.00 0.00 float const& cv::Mat::at<float>(int, int) const
7.69 0.11 0.01 2812992 0.00 0.00 float& cv::Mat::at<float>(int, int)
Here is an example of the code whose hot spots I want to find. According to the profile, 53.86% of the time is spent in the part shown below, which is called 46945 times. I extracted that function from my code so you can compile it:
#include <opencv2/highgui.hpp>
#include <iostream>
#include <vector>

typedef std::vector<std::vector<int> > Matrix;

std::vector<int> A_estimation(cv::Mat& src_temp, std::vector<cv::Mat> rgb, int cols, int rows)
{
    //////////////////////////////
    //cv::Mat histSum = cv::Mat::zeros( 256, 1, CV_8UC3 );
    // One 256-bin histogram per channel
    Matrix histSum(3, std::vector<int>(256, 0));
    //cv::Mat src_temp = src.clone();
    //src_temp.convertTo(src_temp, CV_8UC3);
    // Accumulate the per-channel histograms over every pixel
    src_temp.forEach<cv::Vec3b>
    (
        [&histSum](cv::Vec3b &pixel, const int* po) -> void
        {
            ++histSum[0][pixel[0]];
            ++histSum[1][pixel[1]];
            ++histSum[2][pixel[2]];
        }
    );
    // For each channel, walk down from intensity 255 until about 0.8% of the pixels are skipped
    std::vector<int> A(3, 255);
    [&A, rows, cols, &histSum]{
        for (auto index = 8*rows*cols/1000; index > histSum[0][A[0]]; --A[0])
            index -= histSum[0][A[0]];
        for (auto index = 8*rows*cols/1000; index > histSum[1][A[1]]; --A[1])
            index -= histSum[1][A[1]];
        for (auto index = 8*rows*cols/1000; index > histSum[2][A[2]]; --A[2])
            index -= histSum[2][A[2]];
        return A;
    }();
    return A;
    //auto AA=A_estim_lambda();
}

int main(int argc, char* argv[])
{
    cv::Mat src_temp = cv::imread(argv[1]);
    auto rows = src_temp.rows,
         cols = src_temp.cols;
    std::vector<cv::Mat> rgb;
    cv::split(src_temp, rgb);
    auto A = A_estimation(src_temp, rgb, cols, rows);
    //Do sth with A
}
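For readers not familiar with cv::Mat::forEach: the histogram accumulation above computes the same per-channel histograms as a plain row loop, roughly like this sketch (untested, assumes src_temp is a CV_8UC3 image):
// Sketch: same per-channel histograms as the forEach call above,
// written as a plain row loop (assumes src_temp.type() == CV_8UC3)
Matrix histSum(3, std::vector<int>(256, 0));
for (int r = 0; r < src_temp.rows; ++r)
{
    const cv::Vec3b* row = src_temp.ptr<cv::Vec3b>(r);
    for (int c = 0; c < src_temp.cols; ++c)
    {
        ++histSum[0][row[c][0]];
        ++histSum[1][row[c][1]];
        ++histSum[2][row[c][2]];
    }
}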
Compilation:
g++ -std=c++1z -Wall -Weffc++ -Ofast test.cpp -o test -fopenmp `pkg-config --cflags --libs opencv`
Execution:
./test frame.jpg
I have two questions:
Is this profiling information reliable given that it comes from non-optimized code, and if not, how can I compile optimized code and still get useful profiling output?
Any tips on how to speed up those loops?
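To clarify what those trailing loops compute: for each channel they walk down from intensity 255 until roughly 0.8% of the pixels (8*rows*cols/1000) have been skipped. The sketch below (untested) expresses the same logic with a single loop over the channels:
// Sketch: the same percentile search as the three loops in A_estimation,
// collapsed into one loop over the channels
std::vector<int> A(3, 255);
for (int ch = 0; ch < 3; ++ch)
{
    int budget = 8 * rows * cols / 1000;   // about 0.8% of all pixels
    while (budget > histSum[ch][A[ch]])
    {
        budget -= histSum[ch][A[ch]];
        --A[ch];
    }
}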