
Hey, so I have written a program that makes use of parallel programming to crack hashes. The first thing the program does is read data from a txt file (this is the word list the program compares hashes against) and split it up into smaller chunks: it appends x lines to a vector, adds that vector to a vector of vectors, and repeats until all lines have been read. This obviously takes a long time with large word lists, so I was wondering if there is any way I could make this process a lot faster, whether by making use of parallel programming or any other technique.

Current code being used:

#include <fstream>
#include <string>
#include <vector>

std::ifstream file(fileName);
std::string line;
std::vector<std::vector<std::string>> FileSectionVec;

// Filling vector with chunks of x lines from the file
while (file)
{
    std::vector<std::string> temp;
    // Add up to x lines from the file to a temporary vector
    for (int j = 0; j < x && std::getline(file, line); j++)
    {
        temp.push_back(line);
    }
    // Add temp vector storing up to x lines of the file to the vector of vectors
    if (!temp.empty())
    {
        FileSectionVec.push_back(temp);
    }
}
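
For illustration, here is a rough sketch of one possible variant (not from my actual program, and not benchmarked): read the whole file into memory with a single bulk read, then split the in-memory copy into chunks, so there are far fewer small stream reads and vector reallocations. readChunks and chunkSize are placeholder names.

#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical helper (readChunks, chunkSize are placeholder names): pull the
// whole file into memory with one bulk read, then split it into chunks of
// chunkSize lines, moving strings instead of copying them.
std::vector<std::vector<std::string>> readChunks(const std::string& fileName,
                                                 std::size_t chunkSize)
{
    std::ifstream file(fileName, std::ios::binary);
    std::ostringstream buffer;
    buffer << file.rdbuf();                    // single bulk read from disk
    std::istringstream contents(buffer.str()); // split the in-memory copy

    std::vector<std::vector<std::string>> sections;
    std::vector<std::string> chunk;
    chunk.reserve(chunkSize);                  // avoid repeated reallocations

    std::string line;
    while (std::getline(contents, line))
    {
        chunk.push_back(std::move(line));
        if (chunk.size() == chunkSize)
        {
            sections.push_back(std::move(chunk));
            chunk.clear();
            chunk.reserve(chunkSize);
        }
    }
    if (!chunk.empty())
        sections.push_back(std::move(chunk));
    return sections;
}
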
  • when reading from a file, typically the bottleneck is your hard drive, not the CPU. Also related: https://stackoverflow.com/questions/22469461/can-you-open-the-same-file-multiple-times-for-writing – 463035818_is_not_an_ai Apr 27 '21 at 12:13
  • There are no special tricks that can make a hard drive spin faster, or transfer data from an SSD into RAM any faster than the I/O bus can support. – Sam Varshavchik Apr 27 '21 at 12:15
  • Did you profile? How much time is spent reading, parsing the lines, pushing into vector? If your file reading is not a bottleneck yet, you can optimize the other parts to make sure that it is :) – Vlad Feinstein Apr 27 '21 at 16:36
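
Following up on the profiling suggestion in the comment above, here is a minimal timing sketch (placeholder file name and chunk size, not from the actual program) that times a read-only pass separately from the read-and-chunk pass, to show how much of the total is stream I/O versus vector work:

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical timing harness: "wordlist.txt" and the chunk size of 1000 are
// placeholders for whatever the real program uses.
int main()
{
    using clock = std::chrono::steady_clock;
    const std::string fileName = "wordlist.txt";
    const std::size_t chunkSize = 1000;
    std::string line;

    // Pass 1: read every line, store nothing (measures stream/disk I/O).
    auto t0 = clock::now();
    {
        std::ifstream file(fileName);
        while (std::getline(file, line)) { /* discard */ }
    }
    auto t1 = clock::now();

    // Pass 2: read every line and build the chunked vector of vectors.
    std::vector<std::vector<std::string>> sections;
    {
        std::ifstream file(fileName);
        std::vector<std::string> chunk;
        while (std::getline(file, line))
        {
            chunk.push_back(line);
            if (chunk.size() == chunkSize)
            {
                sections.push_back(chunk);
                chunk.clear();
            }
        }
        if (!chunk.empty())
            sections.push_back(chunk);
    }
    auto t2 = clock::now();

    std::chrono::duration<double> readOnly  = t1 - t0;
    std::chrono::duration<double> readChunk = t2 - t1;
    std::printf("read only: %.3f s, read + chunk (%zu chunks): %.3f s\n",
                readOnly.count(), sections.size(), readChunk.count());
}

Note that the second pass re-reads a file that the first pass has just pulled into the OS cache, so run the program a couple of times (or swap the order of the passes) before drawing conclusions.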

0 Answers