In short: cross-operating-system large-file support in C is horrendous.
Goal: I am trying to have "one way" (most likely macro based) for both 32-bit and 64-bit builds to get large file support. Ideally, with typedefs, #ifdef/#ifndef checks, and the like, a macro wrapper could provide basic large file support as a single #include header or a set of defined macros — something like the sketch below.
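Here's the kind of shim I have in mind so far — just a sketch, and the `lfs_*` names are my own placeholders. On Windows it assumes MSVC or a reasonably recent MinGW-w64 runtime (which provide `_fseeki64`/`_ftelli64`); on POSIX it assumes you compile with `-D_FILE_OFFSET_BITS=64` so `off_t` is 64-bit even on 32-bit targets:

```c
/* lfs.h -- sketch of a large-file stdio shim; lfs_* names are placeholders. */
#ifndef LFS_H
#define LFS_H

#include <stdio.h>
#include <stdint.h>

#if defined(_WIN32)
  /* MSVC and modern MinGW-w64 ship 64-bit stdio seek/tell variants. */
  typedef int64_t lfs_off_t;
  #define lfs_fseek(fp, off, whence) _fseeki64((fp), (off), (whence))
  #define lfs_ftell(fp)              _ftelli64(fp)
#else
  /* POSIX: build with -D_FILE_OFFSET_BITS=64 so off_t, fseeko(), and
     ftello() are all 64-bit even when compiling for a 32-bit target. */
  #include <sys/types.h>
  typedef off_t lfs_off_t;
  #define lfs_fseek(fp, off, whence) fseeko((fp), (off), (whence))
  #define lfs_ftell(fp)              ftello(fp)
#endif

#endif /* LFS_H */
```

Callers would then open with plain `fopen()` and do `lfs_fseek(fp, (lfs_off_t)5 << 30, SEEK_SET)` to land past the 2 GiB mark.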
Research: POSIX's file operations have been performing great across BSD/Mac/Linux for 32-bit and 64-bit I/O with files larger than the typical 2^31-byte limit, but even with Clang or MinGW on Windows I cannot leverage these calls, due to Microsoft's silly implementation of POSIX (if that's what we want to call it...). I am leaning towards using CreateFile(), ReadFile(), and WriteFile() on Windows, but these are COMPLETELY DIFFERENT from POSIX's open()/read()/write()/close()/etc. in terms of methodology and data types used — see the comparison sketch below.
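To make the divergence concrete, here is roughly what a single 64-bit seek looks like on each side (a sketch; `big_seek` is a name I made up, and the POSIX branch again assumes `-D_FILE_OFFSET_BITS=64`):

```c
#include <stdint.h>

#ifdef _WIN32
  #include <windows.h>

  /* Windows: files are HANDLEs and offsets travel as LARGE_INTEGER. */
  static int64_t big_seek(HANDLE h, int64_t offset)
  {
      LARGE_INTEGER want, got;
      want.QuadPart = offset;
      if (!SetFilePointerEx(h, want, &got, FILE_BEGIN))
          return -1;
      return got.QuadPart;
  }
#else
  #include <sys/types.h>
  #include <unistd.h>

  /* POSIX: files are int descriptors; with -D_FILE_OFFSET_BITS=64,
     off_t (and therefore lseek()) is 64-bit even on 32-bit builds. */
  static int64_t big_seek(int fd, int64_t offset)
  {
      return (int64_t)lseek(fd, (off_t)offset, SEEK_SET);
  }
#endif
```

Same one-line operation, yet different handle types, different offset types, and a different error convention — which is exactly why I want a wrapper layer.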
Question: After banging my head against my keyboard, and several textbooks, I've decided to poll all of you: how do you accomplish cross-OS file I/O that supports large files?
P.S. Links from my research so far:
- http://msdn.microsoft.com/en-us/library/windows/desktop/bb540534(v=vs.85).aspx
- Portable way to get file size in C/C++
- How can I portably turn on large file support?
- http://mingw-users.1079350.n2.nabble.com/not-obvious-how-to-compile-programs-for-large-files-td5699144.html
- http://www.viva64.com/en/l/full/
- https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/64bitPorting/HighLevelAPIs/HighLevelAPIs.html
- https://www.securecoding.cert.org/confluence/display/c/FIO19-C.+Do+not+use+fseek%28%29+and+ftell%28%29+to+compute+the+size+of+a+regular+file
- https://www.securecoding.cert.org/confluence/display/c/FIO03-C.+Do+not+make+assumptions+about+fopen%28%29+and+file+creation
- https://en.wikipedia.org/wiki/Large_file_support