
I've written a Wavefront OBJ parser class to import OBJ models into my OpenGL projects. I tested the class in debug mode and found that it was unbearably slow.

The code works, and I made the obvious tweaks to ensure it was as efficient as reasonably practical.

Still, loading my test file, a 12 MB OBJ file that runs to about 330,000 lines of text, took over a minute to parse.

Frustrated, I had a Google, and sure enough, I wasn't the first person to run into this problem.

One poster on gamedev.net simply ran his algorithm in release mode, outside the Visual Studio IDE, and got acceptable performance. This also worked for me: my ~70 seconds was reduced to ~3 seconds.

I did some profiling of the algorithm, and the bottlenecks are in the calls to std::getline, and in the following:

sstream >> sToken;

where sstream is a std::stringstream and sToken is a std::string (with space pre-reserved).

Question

Why is the IDE so unbelievably slow at running my parsing algorithm (even in a release build), and is there anything I can do to speed this up when running the code through the IDE (F5, run the project)? This is making debugging impossibly slow. Is the IDE injecting code or hooks into the executable when it launches it, or could this be put down to cache misses or something else?

Optimizations

I do two passes through the file. On pass one, I just count the token types, so that I can reserve space rather than iteratively growing the vectors that store vertices, normals, texcoords, faces, etc.:

sLineBuffer.reserve( 100 );
sToken.reserve(10);

while( sstream.good() )
{
    sstream >> sToken;
    getline( sstream, sLineBuffer );

    if( sToken.compare("f") == 0 )
        nFaces ++;

    else if( sToken.compare("v") == 0 )
        nVertices ++;

    else if( sToken.compare("vn") == 0 )
        nNormals ++;

    else if( sToken.compare("vt") == 0 )
        nTextures ++;

    else if( sToken.compare("g") == 0 )
        nGroups ++;
}

m_Vertices.reserve( nVertices );
m_Normals.reserve( nNormals );
m_TexCoords.reserve( nTextures );
m_Faces.reserve( nFaces );
m_Groups.reserve( nGroups );

This first pass costs little (~8 seconds in debug mode, or ~0.3 seconds in release mode outside the IDE) and the efficiency saving is huge (reduces parse time from ~180 seconds in debug mode to ~60 seconds).

I also read the entire file into a stringstream, so as to take disk access out of the equation:

// Read entire file from disk into memory
fstream stream;
stringstream sstream;
stream.open( m_sFilename.c_str(), std::ios::in );
sstream << stream.rdbuf();
stream.close();

Also, where possible throughout the algorithm, I try to reserve space for std::strings ahead of time, so that they're not being resized on a per-character basis:

sLineBuffer.reserve( 100 );
sToken.reserve(10);  // etc
  • Maybe mmapping the file could help. – mfontanini May 04 '12 at 22:43
  • By "running it in the IDE" do you mean under the debugger, or does starting it with Ctrl-F5 have the same problem? Also, when you run outside the IDE, are you using the same binary or is it a different build where there's a possibility of some difference in compiler options even if both are "Release" builds? – Michael Burr May 04 '12 at 23:04
  • This probably has nothing to do with the slowdown you're reporting, but the code `while(sstream.good())` is not a good idea. An I/O loop that tests for `.good()` or `.eof()` that way is a [bad practice](http://stackoverflow.com/questions/4324441); a sketch of the alternative loop appears after these comments. – Blastfurnace May 04 '12 at 23:08
  • Of course it is slow, it is a debug build. It makes no sense to dump a big dataset into a debug build; test with a small one and rely on the release build to crunch through a big one. – Hans Passant May 04 '12 at 23:17
  • Blastfurnace - .good() doesn't show up as a bottleneck when profiling the algorithm with GlowCode. Can you explain why you consider it a bad idea? Oh, just found your link, never mind. –  May 04 '12 at 23:28
  • Hans: If I run the code through the IDE in release mode, it's still slow. Surely a release build shouldn't have the speed problems of a debug build? –  May 04 '12 at 23:30
  • Michael Burr: Start without debugging gives the same performance issues for a debug build. Start without debugging gives acceptable performance for a release build. But what's the IDE doing to a release mode build that slows it down? –  May 04 '12 at 23:36
  • Hans: easier said than done. Some of my models only exist as high-poly versions. –  May 04 '12 at 23:37
  • mfontanini: em, I read the file into a stringstream in order to take disk access out of the equation. –  May 04 '12 at 23:38
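
Following up on Blastfurnace's comment above, the first-pass counting loop could test the reads directly instead of checking .good() before reading. A minimal sketch, reusing the same variables and counters as the question's code:

// Loop only while both the token extraction and the getline succeed,
// so the body never runs on a stale token after the final read fails.
while( (sstream >> sToken) && getline( sstream, sLineBuffer ) )
{
    if( sToken == "f" )
        nFaces++;
    else if( sToken == "v" )
        nVertices++;
    else if( sToken == "vn" )
        nNormals++;
    else if( sToken == "vt" )
        nTextures++;
    else if( sToken == "g" )
        nGroups++;
}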

2 Answers


This issue turned out to be a misunderstanding of the way the Visual Studio IDE operates.

Pressing F5 runs your program under the debugger, no matter whether the build configuration is Debug or Release.

I learned that Ctrl+F5 takes the debugger out of the equation (but you'll only see a speed increase from doing this if you're running a release build).

I also learned that stdio might be a better solution in this instance. I'll have to rewrite my algorithm to use fscanf as suggested, and report my findings back here, although I cringe at the idea.


The STL is written in such a way as to expect the compiler to do heavy inlining of its many small functions. A debug build, though, leaves all of those wonderful layers of abstraction in place so that you can step into them, and you pay dearly for it, because nothing gets inlined.

Normally I wouldn't give the following advice, but in the context of parsing OBJ files I suggest just throwing out the STL and relying on good old-fashioned fscanf calls. You'll see a significant gain while debugging, and even a noticeable improvement in speed in release builds.
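
A minimal sketch of what this approach might look like (the struct and function names here are made up for illustration, not taken from the asker's parser, and only the "v" and "vn" records are handled):

#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical first-cut reader: fgets pulls one line at a time and
// sscanf picks the records apart; unrecognised lines are simply skipped.
bool LoadPositionsAndNormals( const char* path,
                              std::vector<Vec3>& vertices,
                              std::vector<Vec3>& normals )
{
    std::FILE* fp = std::fopen( path, "r" );
    if( !fp )
        return false;

    char line[256];
    while( std::fgets( line, sizeof(line), fp ) )
    {
        Vec3 v;
        if( std::sscanf( line, "v %f %f %f", &v.x, &v.y, &v.z ) == 3 )
            vertices.push_back( v );
        else if( std::sscanf( line, "vn %f %f %f", &v.x, &v.y, &v.z ) == 3 )
            normals.push_back( v );
        // "vt", "f", "g", comments, etc. would be handled the same way.
    }

    std::fclose( fp );
    return true;
}

Faces, texture coordinates and groups would follow the same pattern; the per-line buffer is fixed at 256 bytes here, which is plenty for vertex records but would need to grow for long face lines.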

cdiggins
  • The STL tip is interesting, although having to rewrite using the stdio library made me groan! Counting tokens to reserve vector sizes buys me a significant speed increase; it means the vectors don't have to be resized as they grow. The results speak for themselves; I have to disagree with you there. If I remove the first pass, the second pass takes three times as long. –  May 05 '12 at 00:23
  • If you remove the first pass does it take three times as long during a release build? You said it took 3x as long during a debug build. – cdiggins May 05 '12 at 13:03
  • Yes, still takes much longer in release build. –  May 06 '12 at 18:37
  • I'll edit my answer then, thanks for teaching me something new! – cdiggins May 06 '12 at 19:03