I have a vector of more than 5 million elements. Each time, I want to pick the element with the smallest key from the vector and process it. However, processing that element affects all of the remaining elements in the vector, so their keys are updated. So the next time I want to pick the element with the smallest key, I must sort the vector again. The problem is that I pick the smallest element as many as 0.5 million times, so the program runs very slowly. To make this clearer, here is code to illustrate:
#include <algorithm>
#include <vector>
using namespace std;

void function(vector<MyObj*>& A)
{   // A.size() is near 5 million, maybe even more, such as 50 million.
    make_heap(A.begin(), A.end(), compare); // compare function is self-defined
    for (int i = 0; i < 500000; i++)
    {
        MyObj* smallest_elem = A.front();
        pop_heap(A.begin(), A.end(), compare); // must use the same compare
                                               // the heap was built with
        A.pop_back();
        Process_MyObj(smallest_elem); // here all of the elements in A
                                      // are affected, causing their
                                      // keys to change
        make_heap(A.begin(), A.end(), compare); // since all keys in A changed,
                                                // rebuilding the heap is
                                                // necessary in my view
    }
}
Is there any way to make this code run as efficiently as possible? Any idea is welcome, not limited to algorithmic improvements — for example, parallelism or anything else. Thank you so much!
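For comparison, I also sketched the same loop with a single `std::min_element` scan per round instead of re-heapifying: since every key changes after each `Process_MyObj` call, rebuilding the heap is O(n) anyway, so one linear scan seems to do the same asymptotic work with a smaller constant. (The `MyObj` struct and the `Process_MyObj` body below are simplified placeholders, not my real code.) Is this the right direction?

```cpp
#include <algorithm>
#include <vector>

// Placeholder: "key" stands in for the real MyObj key.
struct MyObj { int key; };

// Placeholder for the real Process_MyObj: here it just bumps every
// remaining key, so the "all keys change" effect is visible.
void Process_MyObj(MyObj* obj, std::vector<MyObj*>& rest) {
    for (MyObj* p : rest) p->key += obj->key % 3; // arbitrary update
}

void function(std::vector<MyObj*>& A, int rounds) {
    for (int i = 0; i < rounds && !A.empty(); ++i) {
        // One O(n) scan replaces make_heap + pop_heap + make_heap.
        auto it = std::min_element(A.begin(), A.end(),
            [](const MyObj* a, const MyObj* b) { return a->key < b->key; });
        MyObj* smallest = *it;
        // Swap-and-pop removes the element in O(1) without shifting.
        std::iter_swap(it, A.end() - 1);
        A.pop_back();
        Process_MyObj(smallest, A); // updates all remaining keys
        // (ownership of *smallest is left to the caller)
    }
}
```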