I have a fairly large amount of data (>1 MB) encoded in Base64. I found a nice library that can help me deal with this as fast as possible. Its decode signature is very basic; it needs an output buffer:
int base64_decode(const char *src, size_t srclen,
                  char *out, size_t *outlen, int flags);
I could take the output buffer from their sample:
char out[1*1024*1024];
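For context, a typical call with that buffer would look something like this (a minimal sketch; I'm assuming a nonzero return value means success, that outlen receives the decoded byte count, and that 0 is a valid default for flags, since none of that is spelled out above):

#include <cstddef>
#include <cstdio>

// Declaration of the decode routine quoted above; link against the library.
extern "C" int base64_decode(const char *src, size_t srclen,
                             char *out, size_t *outlen, int flags);

int main() {
    const char src[] = "aGVsbG8gd29ybGQ=";  // "hello world" in Base64
    char out[1 * 1024 * 1024];              // the fixed 1 MiB buffer from the sample
    size_t outlen = 0;

    // Assumed convention: nonzero return means success, outlen is set to
    // the number of bytes written into out, and flags 0 picks the defaults.
    if (base64_decode(src, sizeof src - 1, out, &outlen, 0))
        std::fwrite(out, 1, outlen, stdout);
}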
But the input size is not constant, and it somehow looks and feels wrong to demand that much memory at compile time. On the other hand, a nice large buffer on the stack should give some speed advantage over data stored and accessed on the heap (source).
But I thought of using a std::vector<char> instead. I could define it as

std::vector<char> out;

Then, once I know the input size, I can resize it:

out.resize(input_size);

(Since decoded Base64 is at most 3/4 of the encoded size, input_size is a safe upper bound for the output.)
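Putting the pieces together, the approach I'm considering would look roughly like this (a sketch under the same assumptions about base64_decode as above):

#include <cstddef>
#include <vector>

extern "C" int base64_decode(const char *src, size_t srclen,
                             char *out, size_t *outlen, int flags);

std::vector<char> decode(const char *src, size_t srclen) {
    std::vector<char> out;
    out.resize(srclen);        // zero-initializes every element first
    size_t outlen = 0;
    if (base64_decode(src, srclen, out.data(), &outlen, 0))
        out.resize(outlen);    // trim to the actual decoded length
    else
        out.clear();           // assumed: zero return means invalid input
    return out;
}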
Resizing would zero-initialize all the elements, which seems impractical and unnecessary to me, since base64_decode will overwrite those bytes in the very next step anyway. So resize might not be the best choice, but calling reserve won't help either, as it doesn't modify the vector's size (although it doesn't initialize the elements either).
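To make the reserve problem concrete (a sketch of what not to do):

#include <cstddef>
#include <vector>

extern "C" int base64_decode(const char *src, size_t srclen,
                             char *out, size_t *outlen, int flags);

void broken_attempt(const char *src, size_t srclen) {
    std::vector<char> out;
    out.reserve(srclen);   // grows capacity(), but size() stays 0

    size_t outlen = 0;
    // Formally undefined behavior: this writes through out.data() into
    // memory that is within capacity() but beyond size().
    base64_decode(src, srclen, out.data(), &outlen, 0);

    // out.size() is still 0 here, and calling out.resize(outlen) now would
    // value-initialize those elements, zeroing the freshly decoded bytes.
}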
As I have no prior information about the data size, I either need to use some runtime-resizable buffer or take a big guess and allocate a huge one.
As the library is able to decode really fast, I would like to use the fastest solution for the output buffer too, and both a plain char array and a vector seem inappropriate. What would be my fastest option then?