8

I have this homework where I have to transfer a very big file from one source to multiple machines using a BitTorrent-like algorithm. Initially I cut the file into chunks, and I transfer the chunks to all the targets. The targets have the intelligence to share the chunks they have with other targets. It works fine. I wanted to transfer a 4GB file, so I tarred together four 1GB files. Creating the 4GB tar file didn't error out, but at the other end, while assembling all the chunks back into the original file, it errors out saying the file size limit was exceeded. How can I go about solving this 2GB limitation problem?

Bill the Lizard
Ram

5 Answers

11

I can think of two possible reasons:

  • You don't have Large File Support enabled in your Linux kernel
  • Your application isn't compiled with large file support (you may need to pass gcc extra flags telling it to use the 64-bit versions of certain file I/O functions, e.g. `gcc -D_FILE_OFFSET_BITS=64`; see the sketch after this list)
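
A minimal sketch of the second point (the file name and the 3 GB offset are illustrative, not from the question): with `_FILE_OFFSET_BITS=64`, `off_t` becomes 64-bit even on 32-bit Linux, so stdio can seek and write past the 2GB mark.

#define _FILE_OFFSET_BITS 64   /* must come before any #include; same as -D on the command line */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* "big.tar" is a placeholder for the reassembled file */
    FILE *f = fopen("big.tar", "wb");
    if (!f) { perror("fopen"); return 1; }

    /* A 3 GB offset is representable only when off_t is 64-bit;
       with a 32-bit off_t this fseeko would fail with EOVERFLOW */
    off_t target = (off_t)3 * 1024 * 1024 * 1024;
    if (fseeko(f, target, SEEK_SET) != 0) { perror("fseeko"); return 1; }

    fputc('x', f);   /* one byte at the 3 GB mark -> a sparse 3 GB file */
    fclose(f);
    return 0;
}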
codelogic
  • Don't forget to at least mention the filesystem (driver). Some filesystems have arbitrary limits. Only Btrfs, ZFS and perhaps a few others have 128-bit capabilities (resulting in e.g. [16 exabytes (16×10^18 bytes) — maximum size of a single file](http://en.wikipedia.org/wiki/ZFS#Capacity)) – sehe Apr 28 '11 at 00:02
  • Note: defining `_FILE_OFFSET_BITS=64` makes glibc pass `O_LARGEFILE` to the `open` syscall on the arches that need it, and `_FILE_OFFSET_BITS` is preferred over passing `O_LARGEFILE` yourself for portability: https://stackoverflow.com/questions/2888425/is-o-largefile-needed-just-to-write-a-large-file – Ciro Santilli OurBigBook.com Aug 17 '17 at 09:46
4

This depends on the filesystem type. When using ext3, I have no such problems with files that are significantly larger.

If the underlying filesystem is FAT, NTFS or CIFS (SMB), you must also make sure you use the latest version of the appropriate driver. Some older drivers have file-size limits like the one you are experiencing.

krosenvold
3

Could this be related to a system limit configuration?

$ ulimit -a

The limit may be set in /etc/security/limits.conf with an entry like:

vivek       hard  fsize  1024000

If you do not want any limit, remove the fsize entry from /etc/security/limits.conf.
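
For completeness, here is a small sketch (my addition, assuming POSIX) that reads the same limit from inside a process with `getrlimit`. Note the units differ: `fsize` in limits.conf is in KB, while `RLIMIT_FSIZE` is in bytes.

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_FSIZE is the largest file this process may create, in bytes;
       a write past it fails with EFBIG (and raises SIGXFSZ) */
    if (getrlimit(RLIMIT_FSIZE, &rl) != 0) { perror("getrlimit"); return 1; }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("fsize: unlimited\n");
    else
        printf("fsize: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    return 0;
}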

VonC
1

If your system supports it, you can get hints with `man largefile`.

mouviciel
1

You should use `fseeko` and `ftello`; see fseeko(3). Note that you should define `_FILE_OFFSET_BITS` to 64 before including stdio.h:

#define _FILE_OFFSET_BITS 64
#include <stdio.h>
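
A quick sanity check (my addition, not from the answer) that the macro actually took effect in your build:

#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* Prints 8 when large file support is in effect; a 32-bit build
       without the macro prints 4, i.e. offsets cap at 2^31-1 (2 GB) */
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));
    return 0;
}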
Artyom