
First of all, I'd like to explain why I need this. I need to create disk image files for virtual machines. These will be very large files (100 GB to 1.5 TB) filled with zeroes, and they should be created as quickly as possible; at the very least, faster than the "dd" command reading from "/dev/zero". The required size varies, so I cannot create the files in advance. The easy solution (so far) is to create them as sparse files, but the reason I want to change my approach is that sparse files have some disadvantages.
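For context, the sparse-file approach described above can be sketched like this (the path and size are illustrative; a real image would be 100 GB or more). The file is created instantly, but its blocks are only allocated lazily when written, which is the source of the disadvantages mentioned (possible ENOSPC at write time, fragmentation):

```shell
# Create a sparse file: the size is set in metadata only,
# no blocks are allocated yet.
truncate -s 512M /tmp/sparse.img

# Apparent size is 512 MiB...
ls -l /tmp/sparse.img

# ...but almost no disk blocks are actually in use.
du -h /tmp/sparse.img
```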

In my view, this could be solved by editing inodes directly. My idea is to create one really large zero-filled file and then divide it at the inode level. Alternatively, I could create many 100 GB files in advance and concatenate them into a file of the desired size. I know that "debugfs" can edit the direct blocks of an inode; since "debugfs" is used for file recovery, perhaps I can also use it for creating files. However, I haven't found a way to edit the indirect blocks of an inode so far, and beyond that, I'm not sure about the side effects of editing inodes directly.

Anyway, is there any clever solution or tool (if one exists) for "sewing" huge files together or for creating zero-filled files? "cat" or other plain read-and-write approaches probably can't solve my problem.

jinhwan
    You say that you'd like to explain _why_ but you never do explain it... However, if you need files filled with zeroes, why don't you read whatever amount you need from /dev/zero? – Kimvais Feb 06 '12 at 07:59
  • /dev/zero is exactly what I was about to suggest :) – paulsm4 Feb 06 '12 at 08:03
  • //kimvais Thank you for the reply. Those files are disk images; I will mount them in virtual machines. – jinhwan Feb 07 '12 at 00:19
  • //paulsm4 I used the dd command like this: "dd if=/dev/zero of=/file bs=??k count=xx", and I tested reasonable block sizes to find the best speed, but none of them met my requirement. That is why I started looking for another solution. – jinhwan Feb 07 '12 at 00:45
  • downvoted because this question spends most of its time talking about concatenating large files, but then apparently only wants to allocate them: see http://stackoverflow.com/questions/257844/quickly-create-a-large-file-on-a-linux-system for that. fallocate, or a sparse file, is the way to go. You can't safely use `debugfs` on a filesystem that's mounted read-write, so that idea isn't very useful. – Peter Cordes Mar 09 '16 at 18:57

1 Answer


If you only want to create zero-filled files, then fallocate might be useful. From the man page:

fallocate is used to preallocate blocks to a file. For filesystems which support the fallocate system call, this is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeros.

As of Linux kernel v2.6.31, the fallocate system call is supported by the btrfs, ext4, ocfs2, and xfs filesystems.
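A minimal sketch of the suggestion above (path and size are illustrative; for a VM disk image you would use something like `-l 1500G`). Unlike a sparse file, the extents are actually reserved, just marked uninitialized, so no zeroes need to be written and later writes to the image cannot fail with ENOSPC:

```shell
# Preallocate a zero-filled file in a single fallocate(2) call.
# Requires a filesystem with fallocate support (e.g. ext4, xfs).
fallocate -l 512M /tmp/vmdisk.img

# The blocks really are allocated, not just holes:
du -h /tmp/vmdisk.img
```

Reads from the file return zeroes, and the allocation itself is nearly instantaneous regardless of size, which should satisfy the "faster than dd from /dev/zero" requirement.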

ghostkadost