
I'm programming in C. Sometimes we have to read large amounts of data from files, for which we normally use fread or the read system call, i.e. either stream I/O or system-call I/O.

My question: when reading such large data, does finding out the file system's block size and reading in units of that size actually help us read more efficiently, or not?

I know that issuing many small system calls can be slow, and that there are other considerations; for example, when dealing with network sockets we should use the system calls directly, since stream-based I/O will not give optimized results there. Likewise, I would appreciate tips and tricks for reading large data from files, and the things to take care of.

Also, if mmap can be more advantageous than this conventional I/O, please elaborate on the situations when it would be.

Platform: Linux, gcc compiler

Devolus
john
  • Have you consulted [this question](http://stackoverflow.com/q/258091/1025391), [this question](http://stackoverflow.com/q/10380745/1025391), and [this question](http://stackoverflow.com/q/8056984/1025391)? – moooeeeep May 10 '12 at 07:59

2 Answers


Have you considered memory-mapping the file using mmap?

Oliver Charlesworth

I think it is always a good idea to read in blocks. For huge files we obviously don't want to allocate a huge amount of memory on the heap. But if the file is on the order of a few MBs, we can read the whole file at once into a char buffer and process the data from there. That is faster than reading from the file again and again.