To reproduce, use the following code with your own timing code around the call to delete msg in main() (a simplified sketch of the timing I use is shown after the listing). When run in debug mode, that delete takes, on average, 473 times as long as when run without debugging. Does anyone know why this happens? If so, is there a way to make this code run much faster in debug mode?
Note: I am using Visual Studio 2008 SP1 on a Windows 7 machine.
// PropMsg.pb.h is generated by the Google Protocol Buffers compiler
// from the PropMsg.proto file whose contents are listed further down
#include "PropMsg.pb.h"

#include <vector>

void serialize(int i_val, PropMsg * o_msg)
{
    o_msg->set_v_int32(i_val);
}

void serialize(std::vector<int> const & i_val, PropMsg * o_msg)
{
    for (std::vector<int>::const_iterator it = i_val.begin(); it != i_val.end(); ++it) {
        PropMsg * objMsg = o_msg->add_v_var_repeated();
        serialize(*it, objMsg);
    }
}

int main()
{
    std::vector<int> testVec(100000);
    PropMsg * msg = new PropMsg;
    serialize(testVec, msg);
    delete msg; // Time this guy
}
PropMsg was created with the following .proto file definition:
option optimize_for = SPEED;

message PropMsg
{
    optional int32 v_int32 = 7;
    repeated PropMsg v_var_repeated = 101;
}
Here's some sample test output that I got:
datatype: class std::vector<int,class std::allocator<int> >
num runs: 10
num items: 100000
deserializing from PropMsg time: 0.0046
serializing to PropMsg time: 0.0426
reading from disk time: 0.7195
writing to disk time: 0.0298
deallocating PropMsg time: 8.99
Notice how this is NOT I/O-bound: the deallocation time dwarfs both the disk read and disk write times.
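For a sense of scale: every call to add_v_var_repeated() heap-allocates a child PropMsg, so delete msg ends up walking and freeing on the order of 100,000 individually allocated messages. Structurally, the ownership pattern is roughly analogous to this standalone toy (an illustration only, not protobuf code):

// Standalone analogue of the ownership pattern (illustration only, no protobuf):
// one parent owning ~100,000 heap-allocated children, all freed by a single delete.
#include <cstddef>
#include <vector>

struct ToyMsg
{
    int v_int32;
    std::vector<ToyMsg *> children;    // plays the role of the repeated PropMsg field

    ToyMsg() : v_int32(0) {}
    ~ToyMsg()
    {
        for (std::size_t i = 0; i < children.size(); ++i)
            delete children[i];        // one heap free per child message
    }
};

int main()
{
    ToyMsg * parent = new ToyMsg;
    for (int i = 0; i < 100000; ++i)
        parent->children.push_back(new ToyMsg);
    delete parent;                     // ~100,001 individual frees happen here
    return 0;
}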