
I'm new to C++ and am writing an app that makes many `putc` calls to write data to an output file. The high number of writes is slowing it down. I used to code in Delphi, so I know how I would solve it there: write into a memory stream each time, and once the memory stream grows past the buffer size I want, write its contents to the output file and clear the memory stream. How should I do this in C++, or is there a better solution?

Joruun
    Is there a specific reason for using ``putc`` and not an already buffered I/O function (function in a broad way: STL stream, printf, ...) ? – nefas May 22 '17 at 13:05
  • It's a bit-wise processor, so `putc` seemed like the fastest option to me – Joruun May 22 '17 at 13:08
  • You might want to check this [fwrite already buffered](https://stackoverflow.com/questions/2806104/does-fwrite-buffer-the-output) – Sniper May 22 '17 at 13:09
  • 1
    I question the premises of the question: why do you believe/how do you know your application is I/O-constrained? And what performance do you actually expect and see? – spectras May 22 '17 at 13:15

3 Answers


`putc` is already buffered; 4 KB is the default buffer size. You can use `setvbuf` to change that value.

setvbuf

Zeref

Writing to a file should be very quick; it is usually flushing the buffer that takes time. Consider using the character `'\n'` instead of `std::endl`.

Adam Hunyadi
  • Here's how many writes it's doing: IO Write: 497 MB (in 127406 writes), and it's writing binary data. – Joruun May 22 '17 at 13:06
  • That should not take too long. If you don't manually flush your buffer, the speed should be close to your maximum disk writing speed. If you really get stuck, try putting everything into a `std::stringstream` (which avoids intermediate flushes to disk), then flush the stream into a file. – Adam Hunyadi May 22 '17 at 13:09
  • Here is a good explanation, and some micro-benchmarking: https://www.youtube.com/watch?v=GMqQOEZYVJQ – Adam Hunyadi May 22 '17 at 13:10

I think a good answer to your question is here: Writing a binary file in C++ very fast

Where the answer is:

```cpp
#include <stdio.h>

const unsigned long long size = 8ULL*1024ULL*1024ULL;
unsigned long long a[size];

int main()
{
    FILE* pFile = fopen("file.binary", "wb");
    if (pFile == NULL)
        return 1;
    for (unsigned long long j = 0; j < 1024; ++j){
        // Some calculations to fill a[]
        fwrite(a, 1, size*sizeof(unsigned long long), pFile);
    }
    fclose(pFile);
    return 0;
}
```

The most important thing in your case is to write as much data as you can with the fewest possible I/O requests.

Anoroah
  • What's with the `unsigned long long` thing? – spectras May 22 '17 at 13:13
  • You need the biggest positive integer type (not int) – Anoroah May 22 '17 at 13:16
  • 1
    What for? If you're talking about the write size, there is a type specifically for that purpose, `size_t`. It is defined by C99 standard in C (don't remember which for C++), and guaranteed to be able to hold the size of any object that fits in memory. You get it either as `::size_t` from `stddef.h` (legacy) or `std::size_t` from `cstddef` header (current). They are not necessarily the same. – spectras May 22 '17 at 13:18
  • BTW, the size of the integer does not matter. Regardless of the integer size, you are still writing bytes to the file. The important concept is to write buffers to the file. Although the underlying code does buffer, fewer function calls will speed up the code. – Thomas Matthews May 22 '17 at 13:57