
On a Linux system, I need to create a large (about 10 GB), uncompressible file.

This file is supposed to reside in a Docker image, which is needed to test the performance of transferring and storing large Docker images on a local registry. Therefore, I need the image to be "intrinsically" large (that is, uncompressible), in order to bypass optimization mechanisms.

fallocate (described at Quickly create a large file on a Linux system) works great for creating large files very quickly, but the result is a large zero-entropy file that is highly compressible. When pushing the large image to the registry, it takes only a few MB.
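The problem is easy to demonstrate. In this sketch a zero-filled file stands in for what fallocate gives you when read back, and the size is scaled down to 100 MiB so it runs quickly:

```shell
# A zero-filled file collapses under gzip, which is why the
# pushed image layer ends up only a few MB in the registry.
dd if=/dev/zero of=zeros bs=1M count=100
gzip -c zeros | wc -c    # roughly 100 KB for 100 MiB of input
```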

So, how can a large, uncompressible file be created?

Starnuto di topo

2 Answers


You can try using /dev/urandom or /dev/random to fill your file, for example:

@debian-10:~$ SECONDS=0; dd if=/dev/urandom of=testfile bs=10M count=1000 ;echo $SECONDS
1000+0 records in
1000+0 records out
10485760000 bytes (10 GB, 9,8 GiB) copied, 171,516 s, 61,1 MB/s
171

Using a bigger bs, slightly less time is needed:

@debian-10:~$ SECONDS=0; dd if=/dev/urandom of=testfile bs=30M count=320 ;echo $SECONDS
320+0 records in
320+0 records out
10066329600 bytes (10 GB, 9,4 GiB) copied, 164,498 s, 61,2 MB/s
165

171 seconds vs. 165 seconds.
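To confirm that a file produced this way really is uncompressible, you can gzip it and compare sizes. A quick sketch, scaled down to 100 MiB (the file name is the same as above; the size is illustrative):

```shell
# Random data gains nothing from compression: the gzip output is
# slightly LARGER than the input because of container overhead.
dd if=/dev/urandom of=testfile bs=1M count=100
ls -l testfile              # 104857600 bytes
gzip -c testfile | wc -c    # a little more than 104857600
```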

Starnuto di topo
Zio-Seppio

Is less than 3 minutes for 4 GiB of real data an acceptable speed?

A "random" data set can be obtained by feeding dd existing data instead of generating it. The easiest way is to use a disk that is filled beyond the required file size; I used a disk holding assorted binaries and video. If you are concerned about data leakage, you can process the data with something first.

Everything goes to /dev/shm, because writing to RAM is much faster than writing to disk. Naturally, there must be enough free space; I had 4 GB, so the file in the example is 4 GB. My processor is an aged first-generation i7.

% time dd if=/dev/sdb count=40 bs=100M >/dev/shm/zerofil
40+0 records in
40+0 records out
4194304000 bytes (4,2 GB, 3,9 GiB) copied, 163,211 s, 25,7 MB/s

real    2m43.313s
user    0m0.000s
sys     0m6.032s

% ls -lh /dev/shm/zerofil
-rw-r--r-- 1 root root 4,0G mar  5 13:11 zerofil

% more /dev/shm/zerofil
3��؎м
f`f1һ���r�f�F����fa�v
f�fFf��0�r'f�>
^���>b��<
...
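One way to "process your data with something", if you are worried about disk contents leaking into the test file, is to pipe the copy through openssl with a throwaway key. A sketch: /dev/sdb and the 4 GB size are from the answer above, while the cipher choice and password are illustrative.

```shell
# Scramble the data while copying it. CTR mode keeps the output
# essentially the same size, and it stays just as uncompressible.
dd if=/dev/sdb bs=100M count=40 \
  | openssl enc -aes-128-ctr -pbkdf2 -pass pass:throwaway \
  > /dev/shm/zerofil
```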
Slawomir Dziuba