
I'm trying to write 1 billion files in one folder using multiple threads, but after my program wrote 20 million files I got "No space left on device". I did not close my program because it keeps trying to write the same files.

  • I don't have any problem with inodes; only 7% are used.
  • No problem with /tmp or /var/tmp; they are empty.
  • I increased fs.inotify.max_user_watches to 1048576.

I use Debian with EXT4 as the filesystem. Has anyone else met this problem? Thank you so much for your help.

Running `tune2fs -l /path/to/drive` gives:

```
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              260276224
Block count:              195197952
Reserved block count:     9759897
Free blocks:              178861356
Free inodes:              260276213
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1024
Blocks per group:         24576
Fragments per group:      24576
Inodes per group:         32768
Inode blocks per group:   2048
Flex block group size:    16
Filesystem created:       ---
Last mount time:          ---
Last write time:          ---
Mount count:              2
Maximum mount count:      -1
Last checked:             ---
Check interval:           0 ()
Lifetime writes:          62 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   ---
Directory Hash Seed:      ---
Journal backup:           inode blocks
```
Gricey
Mourad Karim
    There is a limit to the number of files per directory, which depends on parameters given when the file system was created. See [this question](http://stackoverflow.com/questions/17537471/what-is-the-max-files-per-directory-in-ext4), too. – unwind Nov 25 '14 at 15:17
  • Yes, there is a limitation when I work with ext3, but EXT4 is unlimited. – Mourad Karim Nov 25 '14 at 15:20
    What is your EXT4 block size and bytes/inode (see [here](http://stackoverflow.com/questions/6154841/))? What does `df -h` and `df -i` tell you? – uesp Nov 25 '14 at 15:37
  • – When I run `df -i` or `df -h`, no partition is more than 20% used. – I have not run out of inodes; I set the count to 200M. – The file size is 2 bytes. – Mourad Karim Nov 25 '14 at 15:43
    What about output from `tune2fs -l /path/to/drive`? Note that actually posting the output from these commands in your question may be helpful (it may look fine to you but someone could see something you don't). – uesp Nov 25 '14 at 16:02

1 Answer


Check this question: How to store one billion files on ext4?

You have fewer blocks than inodes, which is not going to work, though I think that is the least of your problems. If you really want to do this (would a database be better?), you may need to look into filesystems other than ext4; ZFS springs to mind as an option that allows 2^48 entries per directory and should do what you want.
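To make the block math concrete (a sketch using only the `tune2fs` figures above and the 2-byte file size mentioned in the comments; it ignores directory and filesystem metadata overhead, which only makes things worse):

```python
block_size = 4096          # "Block size: 4096" from tune2fs
block_count = 195_197_952  # "Block count" from tune2fs
inode_count = 260_276_224  # "Inode count" from tune2fs

# Even a 2-byte file occupies at least one full 4 KiB data block on
# ext4, so every tiny file costs one block plus one inode.
max_files_by_blocks = block_count  # one data block per file
max_files_by_inodes = inode_count

# The binding limit is whichever runs out first.
hard_ceiling = min(max_files_by_blocks, max_files_by_inodes)
print(hard_ceiling)  # 195197952 -- far short of 1 billion
```

So even before any per-directory limit, the data blocks cap this volume well below a billion tiny files.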

If this question https://serverfault.com/questions/506465/is-there-a-hard-limit-to-the-number-of-files-a-directory-can-have is anything to go by, there is a limit on the number of files per directory with ext4, which you are likely hitting.
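If a per-directory limit is indeed what is being hit, one common workaround (a sketch of my own, not something the asker's program does; the 256×256 two-level fan-out and the helper names are arbitrary choices) is to shard files across subdirectories by a hash of the file name:

```python
import hashlib
import os


def sharded_path(root: str, name: str) -> str:
    """Map a file name to root/ab/cd/name using the first two bytes of
    its MD5 hex digest, spreading files over 256 * 256 = 65536
    directories (~15k files each for 1 billion files)."""
    h = hashlib.md5(name.encode("utf-8")).hexdigest()
    return os.path.join(root, h[:2], h[2:4], name)


def write_sharded(root: str, name: str, data: bytes) -> str:
    """Write data under its sharded path, creating directories as needed."""
    path = sharded_path(root, name)
    # exist_ok=True makes this safe when multiple threads race to
    # create the same shard directory.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

The mapping is deterministic, so a reader can recompute the path from the name alone; no index of file locations is needed.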

camelccc