I tried to use the write() function to write a large piece of memory (more than 2GB) to a file, but I never succeeded. Can somebody be nice and tell me what to do?
-
What file system and what operating system are we talking about? – Joachim Isaksson Apr 06 '12 at 10:24
-
[What have you tried](http://mattgemmell.com/2008/12/08/what-have-you-tried/)? – jpm Apr 06 '12 at 10:25
-
How did you try to write it? A code snippet would be nice. – ArjunShankar Apr 06 '12 at 10:28
-
Related: https://stackoverflow.com/questions/560238/how-to-create-a-file-of-size-more-than-2gb-in-linux-unix/45574824#45574824 – Ciro Santilli OurBigBook.com Aug 08 '17 at 17:43
3 Answers
Assuming Linux :)
https://users.suse.com/~aj/linux_lfs.html
- Define `_FILE_OFFSET_BITS` to 64 (`gcc -D_FILE_OFFSET_BITS=64`).
- Define `_LARGEFILE_SOURCE` and `_LARGEFILE64_SOURCE`.
- Use the `O_LARGEFILE` flag with `open()` to operate on large files.
There is also some information here: http://www.gnu.org/software/libc/manual/html_node/Opening-Streams.html#index-fopen64-931
These days the file systems you have on your system will support large files out of the box. A minimal sketch of the approach above follows (the file name `bigfile.dat` is just a placeholder); it creates a file a bit over 3GB by seeking past the 2GB boundary and writing a single byte.
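```c
/* Sketch only: compile with the macro from step 1, e.g.
 *   gcc -D_FILE_OFFSET_BITS=64 bigfile.c
 * With _FILE_OFFSET_BITS=64, off_t is 64 bits even on a 32-bit
 * system, so offsets past 2GB work with the plain open()/lseek(). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Seek 3GB into the file; this overflows a 32-bit off_t,
     * which is exactly what the large-file macros fix. */
    if (lseek(fd, (off_t)3 * 1024 * 1024 * 1024, SEEK_SET) == (off_t)-1) {
        perror("lseek");
        return 1;
    }
    if (write(fd, "x", 1) != 1) { perror("write"); return 1; }

    close(fd);
    return 0;
}
```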

-
Only step 1 is necessary or beneficial; the others are bogus/outdated/harmful to portability. – R.. GitHub STOP HELPING ICE Apr 06 '12 at 12:59
-
Hey, I tried that. In fact, by using fopen64 I can now create a file with a maximum size of 4GB, but I need bigger. Anyway, thanks for your advice. – tzcoolman Apr 09 '12 at 14:44
Add `-D_FILE_OFFSET_BITS=64` to your compiler command line (a.k.a. `CFLAGS`) and it will work. This is not necessary on any 64-bit system, and it's also unnecessary on some (but not all) 32-bit systems these days.
Avoid advice to go writing `O_LARGEFILE` or `open64` etc. all over your source. This is non-portable and, quite simply, ugly.
Edit: Actually, I think we've all misread your issue. If you're on a 32-bit system, the largest possible single object in memory is 2GB-1 byte (i.e. `SSIZE_MAX`). Per POSIX, the behavior of `write` for size arguments that cannot fit in `ssize_t` is undefined; this is because `write` returns type `ssize_t` and could not represent the number of bytes written if it were larger. Perhaps more importantly, it's dangerous for an implementation to allow objects so large that their size does not fit in `ptrdiff_t`, since that would create a situation where pointer arithmetic could invoke integer overflow and thus undefined behavior. You may have been able to create such a large object with `mmap` (I would consider this a bug in your system's kernel or libc), but it's a very bad idea and will lead to all sorts of bugs. If you need single objects larger than 2GB, you really need to run on a 64-bit machine for it to be safe and bug-free.
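To make the `SSIZE_MAX` point concrete, here's a sketch of a helper that never passes `write` more than `SSIZE_MAX` bytes per call and handles short writes; the name `write_all` is mine, not a standard function:

```c
#include <errno.h>
#include <limits.h>
#include <stddef.h>
#include <unistd.h>

/* Write all of buf to fd; returns 0 on success, -1 on error.
 * Feeding write() chunks no larger than SSIZE_MAX sidesteps the
 * undefined POSIX behavior for oversized size arguments. */
int write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;

    while (len > 0) {
        size_t chunk = len > SSIZE_MAX ? (size_t)SSIZE_MAX : len;
        ssize_t n = write(fd, p, chunk);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted by a signal: retry */
            return -1;
        }
        p += n;             /* advance past the short write */
        len -= (size_t)n;
    }
    return 0;
}
```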

It depends upon the operating system, the processor, and the file system. On Linux/x86-64 systems (with ext3 file systems) it is trivial to do. Just use the usual library functions (in C++ `std::ofstream`, in C `<stdio.h>`, `fopen` and `fprintf`, etc.) or the underlying system calls (`open`, `write`).
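For example, a minimal stdio sketch that writes 3GiB (on Linux/x86-64 this just works; on 32-bit, compile with `-D_FILE_OFFSET_BITS=64` as in the other answers). The file name `big.log` is made up:

```c
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("big.log", "wb");
    if (!f) { perror("fopen"); return 1; }

    static char block[1 << 20];          /* 1 MiB of zeros */
    for (int i = 0; i < 3 * 1024; i++) { /* 3 GiB total */
        if (fwrite(block, 1, sizeof block, f) != sizeof block) {
            perror("fwrite");
            return 1;
        }
    }
    fclose(f);
    return 0;
}
```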
