First of all, I'd like to explain why I need this. I need to create disk image files for virtual machines. These files will be really huge (100 GB to 1.5 TB) and filled entirely with zeroes. They have to be created in a very short time — as fast as possible, and at least faster than the "dd" command reading from "/dev/zero". The required size varies, which means I cannot create the files in advance. The easy solution (so far) has been to create them as sparse files, but sparse files come with some disadvantages, which is why I want to change my approach.
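For context, the sparse-file approach I mean is roughly the following (the file name "disk.img" and the 100G size are just placeholders):

```shell
# Create a 100 GB sparse file instantly: the file is extended without
# writing any data blocks, and unwritten regions read back as zeroes.
truncate -s 100G disk.img

# The same effect with "dd": write zero blocks of data, just seek
# past the desired end of the file.
dd if=/dev/zero of=disk2.img bs=1 count=0 seek=100G

# Apparent size vs. actually allocated space:
ls -lh disk.img    # reports the full 100G apparent size
du -h  disk.img    # reports almost nothing, since no blocks are allocated
```

This is instantaneous, but the blocks are allocated lazily on first write, which is the source of the disadvantages mentioned above.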
In my view, this could be solved by directly editing inodes. My idea is to create one really large zero-filled file in advance and then divide it at the inode level. Alternatively, I could pre-create a lot of 100 GB files and later concatenate them into a file of the desired size. I know that "debugfs" can edit the direct blocks of an inode, and since "debugfs" is used for file recovery, maybe I can use it for creating files as well. However, I haven't found a way to edit the indirect blocks of an inode so far. And beyond the indirect-block problem, I'm not sure about the side effects of editing inodes directly.
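To make the "debugfs" part concrete, here is a sketch of at least inspecting an inode's block map on a small throw-away ext2 image (no root needed; "fs.img", "host.txt", and "testfile" are made-up names, and this only reads the inode — actually editing blocks with "debugfs -w" risks corrupting the filesystem):

```shell
# Build a tiny ext2 filesystem inside a regular file (-F forces
# mke2fs to accept a non-block-device target).
dd if=/dev/zero of=fs.img bs=1M count=8
mkfs.ext2 -q -F fs.img

# Copy a host file into the image, then dump its inode: the "stat"
# output lists the direct block pointers and, for larger files, the
# indirect / double-indirect block pointers as well.
echo "hello" > host.txt
debugfs -w -R "write host.txt testfile" fs.img
debugfs -R "stat testfile" fs.img
```

This shows where the direct and indirect pointers live, but the splitting/concatenation I describe would still require rewriting those pointers by hand, which "debugfs" does not obviously support for indirect blocks.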
Anyway, is there any cool solution or tool (if one exists) for "sewing" huge files together, or for creating a zero-filled file quickly? "cat" or any other plain read-and-write approach probably can't solve my problem.