I'm trying to split a massive QByteArray which contains UTF-8 encoded plain text (using whitespace as the delimiter) with the best performance possible. I found that I get much better results if I convert the array to a QString first. I tried QString::split with a regexp, but the performance was horrendous. This code turned out to be far faster:
#include <QByteArray>
#include <QMutex>
#include <QSet>
#include <QString>
#include <QTextCodec>

QMutex mutex;

QSet<QString> split(QByteArray body)
{
    QSet<QString> slova;
    // Decode the UTF-8 bytes into a QString (MIB 106 = UTF-8)
    QString s_body = QTextCodec::codecForMib(106)->toUnicode(body);
    QString current;
    // Iterate over the decoded string, not the raw byte array:
    // the two can differ in length for multi-byte characters
    for (int i = 0; i < s_body.size(); i++) {
        if (s_body[i] == '\r' || s_body[i] == '\n' || s_body[i] == '\t' || s_body[i] == ' ') {
            if (!current.isEmpty()) {            // ignore runs of whitespace
                mutex.lock();
                slova.insert(current);
                mutex.unlock();
                current.clear();
                current.reserve(40);
            }
        } else {
            current.push_back(s_body[i]);
        }
    }
    if (!current.isEmpty()) {                    // don't drop the last word
        mutex.lock();
        slova.insert(current);
        mutex.unlock();
    }
    return slova;
}
"Slova" is a QSet<QString>
currently, but I could use a std::set
or any other format. This code is supposed to find how many unique words there are in the array, with the best performance possible.
Unfortunately, this code runs far from fast enough. I'm looking to squeeze the absolute maximum out of this.
Using callgrind, I found that the most gluttonous internal functions were:
QString::reallocData (18% absolute cost)
QString::append (10% absolute cost)
QString::operator= (8% absolute cost)
QTextCodec::toUnicode (8% absolute cost)
Obviously, this comes down to the memory allocations triggered by the character-by-character push_back calls. What is the best way to get rid of them? The answer doesn't necessarily have to be a Qt solution; plain C or C++ is also acceptable.
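For reference, this is the direction I'm currently considering, though I haven't measured whether it actually helps: since every delimiter (' ', '\t', '\r', '\n') is a single ASCII byte and UTF-8 never reuses those byte values inside a multi-byte sequence, the tokenization could run directly over the raw bytes, extracting each word with a single QByteArray copy instead of appending character by character, and skipping the QTextCodec conversion entirely. The names split_bytes and isDelimiter are just placeholders of mine, not anything from Qt.

#include <QByteArray>
#include <QSet>

// Rough sketch: scan the raw UTF-8 bytes and cut out whole words.
// The whitespace delimiters are single ASCII bytes, and UTF-8 guarantees
// those byte values never occur inside a multi-byte sequence, so no
// decoding to UTF-16 is needed for the split itself.
static inline bool isDelimiter(char c)
{
    return c == ' ' || c == '\t' || c == '\r' || c == '\n';
}

QSet<QByteArray> split_bytes(const QByteArray &body)    // placeholder name
{
    QSet<QByteArray> words;
    const char *data = body.constData();
    const int size = body.size();

    int start = 0;                               // start of the current word
    for (int i = 0; i < size; ++i) {
        if (isDelimiter(data[i])) {
            if (i > start)                       // skip runs of whitespace
                words.insert(QByteArray(data + start, i - start));
            start = i + 1;
        }
    }
    if (start < size)                            // don't drop the last word
        words.insert(QByteArray(data + start, size - start));

    return words;
}

Each inserted word still costs one QByteArray allocation, but the per-character reallocations and the toUnicode pass are gone, and the unique entries could be converted to QString afterwards if needed. I don't know whether that is the best I can do, or whether something like a std::unordered_set keyed on views into the buffer would beat it, which is exactly what I'm hoping to find out.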