Your average program does not require frequent flushing. Flushing is closer to a special case, needed in only a few situations:
- Interacting with a human or another system: flushing output before waiting for input is sensible (see the sketch after this list).
- Going dormant for a while: flushing before an extended sleep or wait simplifies examination of logfiles, keeps databases consistent most of the time, etc.
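
A minimal sketch of the first case, assuming C stdio: the prompt has no trailing newline, so depending on how stdout is buffered it may not reach the terminal until it is flushed explicitly before the program blocks on input.

```c
#include <stdio.h>

int main(void)
{
    char name[64];

    /* No trailing newline, so the prompt may still sit in the stdio
     * buffer. Flush before blocking on input so the user sees it. */
    printf("Enter your name: ");
    fflush(stdout);

    if (fgets(name, sizeof name, stdin) != NULL)
        printf("Hello, %s", name);

    return 0;
}
```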
If buffering is not needed, it would be better to disable buffering in the first place instead of throwing in a lot of flushes.
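Disabling buffering up front might look like the following (a sketch using `setvbuf`, which must be called before any other operation on the stream):

```c
#include <stdio.h>

int main(void)
{
    /* Turn buffering off entirely: every write goes straight to the
     * underlying descriptor, so no flush calls are ever needed. */
    setvbuf(stdout, NULL, _IONBF, 0);

    printf("this line is written immediately\n");
    return 0;
}
```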
Most of the time, programs benefit from having buffering enabled. Sometimes they generate a few characters here and there; other times they output a blast of lines.
In all my decades of engineering, my most dramatic performance increases have often been realized simply by improving buffering: sometimes by increasing the FILE buffer size from the 512-byte default to 4K or 32K (sometimes higher), other times by adding a layer of buffering or caching. Each trip through the operating system's I/O system carries high overhead, so reducing the total number of system calls is (usually) an easy and highly effective way to improve performance.
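
A sketch of enlarging the stdio buffer with `setvbuf`. The filename, buffer size, and record loop are illustrative only; the point is that stdio now issues one `write` per 32K of output instead of one per default-sized chunk, shrinking the system-call count.

```c
#include <stdio.h>

#define BIG_BUF (32 * 1024)   /* 32K instead of the small stdio default */

int main(void)
{
    static char buf[BIG_BUF];

    FILE *out = fopen("out.log", "w");   /* hypothetical output file */
    if (out == NULL) {
        perror("fopen");
        return 1;
    }

    /* Give the stream a 32K fully buffered buffer before any I/O on it. */
    if (setvbuf(out, buf, _IOFBF, sizeof buf) != 0) {
        perror("setvbuf");
        return 1;
    }

    for (int i = 0; i < 100000; i++)
        fprintf(out, "record %d\n", i);

    fclose(out);   /* flushes whatever remains in the buffer */
    return 0;
}
```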