23

I tried to print Hello World 200,000 times and it took forever, so I had to stop it. But right after I added a char array to act as a buffer, it took less than 10 seconds. Why?

Before adding a buffer:

#include <iostream> 
using namespace std;

int main() {
        int count = 0;
        std::ios_base::sync_with_stdio(false);
        for(int i = 1; i < 200000; i++)
        {       
                cout << "Hello world!\n";
                count++;
        }
                cout<<"Count:%d\n"<<count;
return 0;
}

And this is after adding a buffer:

#include <iostream> 
using namespace std;

int main() {
        int count = 0;
        std::ios_base::sync_with_stdio(false);
        char buffer[1024];
        cout.rdbuf()->pubsetbuf(buffer, 1024);
        for(int i = 1; i < 200000; i++)
        {       
                cout << "Hello world!\n";
                count++;
        }
                cout<<"Count:%d\n"<<count;
return 0;
}

This makes me think about Java. What are the advantages of using a BufferedReader to read a file?

roschach
Amumu

5 Answers

31

For file operations, writing to memory (RAM) is always faster than writing directly to a file on the disk.

For illustration, let's define:

  • each write IO operation to a file on the disk costs 1 ms
  • each write IO operation to a file on the disk over a network costs 5 ms
  • each write IO operation to the memory costs 0.5 ms

Let's say we have to write some data to a file 100 times.

Case 1: Directly Writing to File On Disk

100 times x 1 ms = 100 ms

Case 2: Directly Writing to File On Disk Over Network

100 times x 5 ms = 500 ms

Case 3: Buffering in Memory before Writing to File on Disk

(100 times x 0.5 ms) + 1 ms = 51 ms

Case 4: Buffering in Memory before Writing to File on Disk Over Network

(100 times x 0.5 ms) + 5 ms = 55 ms

Conclusion

Buffering in memory is always faster than direct operation. However if your system is low on memory and has to swap with page file, it'll be slow again. Thus you have to balance your IO operations between memory and disk/network.

mauris
  • I see. So writing directly to the file is much slower than doing it all at once with buffering. But wouldn't writing all the buffered data at once take longer than writing part of the data to the file many times? I guess the time to write all the data at once is less than writing to the file many times with a little bit of data each. – Amumu Feb 25 '11 at 02:49
  • sorry Amumu, I'm not getting your question. Mind rephrasing? – mauris Feb 25 '11 at 02:51
  • Sorry, it's a bit messy there. So writing data all at once > writing a bit of data each time, because it involves fewer complex IO calls. – Amumu Feb 25 '11 at 02:55
  • In my second example with buffer, it will fill until the buffer is full and then output to the console, then it discards the old data from buffer to receive the new data. Is that correct? – Amumu Feb 25 '11 at 02:57
  • @Amumu - Your first comment: yes. Basically, as you reduce expensive IO calls to disk, your code gets faster. Internally the disk needs to find the file, open it, make sure no other programs are writing to it, and all that. – mauris Feb 25 '11 at 03:21
  • @Amumu - Your second comment: Since you're using C++ there should be GC. I'm quite a rookie at C++ compared to other big guys on Stack Overflow. The buffer data will be filled, and once it is full, the output occurs (what we call flushing). The buffer should be cleared for more buffering. – mauris Feb 25 '11 at 03:23
  • It's like moving stuff by car from one city to another. If you move boxes one at a time, you're going to be spending a lot of time driving. You want to be moving at least a carload at a time. As the size in each call increases, at first performance and efficiency increase dramatically. Then you reach a point of diminishing returns. Where this point is depends on a lot of factors, but it's usually around 2KB or so. "Hello world!" is a lot less than 2KB. – David Schwartz Sep 13 '11 at 16:19
5

The main issue with writing to the disk is that the time taken to write is not a linear function of the number of bytes, but an affine one with a huge constant.

In computing terms, it means that, for IO, you have a good throughput (less than memory, but quite good still), however you have poor latency (a tad better than network normally).

If you look at evaluation articles of HDD or SSD, you'll notice that the read/write tests are separated in two categories:

  • throughput in random reads
  • throughput in contiguous reads

The latter is normally significantly greater than the former.

Normally, the OS and the IO library should abstract this for you, but as you noticed, if your routine is IO intensive, you might gain by increasing the buffer size. This is normal, the library is generally tailored for all kinds of uses and thus offers a good middle-ground for average applications. If your application is not "average", then it might not perform as fast as it could.

Matthieu M.
3

What compiler/platform are you using? I see no significant difference here (RedHat, gcc 4.1.2); both programs take 5-6 seconds to finish (but "user" time is about 150 ms). If I redirect output to a file (through the shell), total time is about 300 ms (so most of the 6 seconds is spent waiting for my console to catch up to the program).

In other words, output should be buffered by default, so I'm curious why you're seeing such a huge speedup.

3 tangentially-related notes:

  1. Your program has an off-by-one error in that you only print 199999 times instead of the stated 200000 (either start with i = 0 or end with i <= 200000)
  2. You're mixing printf syntax with cout syntax when outputting count...the fix for that is obvious enough.
  3. Disabling sync_with_stdio produces a small speedup (about 5%) for me when outputting to console, but the impact is negligible when redirecting to file. This is a micro-optimization which you probably wouldn't need in most cases (IMHO).
Sumudu Fernando
  • I ran the two code examples in Code::Blocks using gcc on Windows XP. The code without a buffer had an execution time of 27 ms, whereas the code with a buffer had an execution time of 17 ms. – Aditya P Feb 25 '11 at 04:50
  • I ran it on Visual Studio 2010 Professional, Windows Server 2008 R2. The first example without buffering took a very long time, but not with buffering. – Amumu Feb 25 '11 at 07:39
2

The cout stream involves a lot of hidden and complex logic going all the way down to the kernel so that your text can reach the screen; when you use a buffer that way, you're essentially making one batched request instead of repeating the complex I/O calls.

Istinra
1

If you have a buffer, you make fewer actual I/O calls, which are the slow part. First the buffer is filled, then one I/O call is made to flush it. This will be equally helpful in Java or any other system where I/O is slow.

Lou Franco