
I executed this:

$ dd if=/dev/random of=foo bs=1G count=1
0+1 records in
0+1 records out
6 bytes (6 B) copied, 0.00016958 s, 35.4 kB/s

$ stat -c "%s" foo
6

This does not work either; the command just hangs:

$ head -c 500 /dev/random > foo

What is my mistake?

I'm on Linux Mint.

nowox
  • Possible duplicate of [Generate a random filename in unix shell](http://stackoverflow.com/questions/2793812/generate-a-random-filename-in-unix-shell) – Alexander Guz May 02 '16 at 19:51
  • 1
    This question was not about the filename but the content. Not a duplicate. – Florian May 03 '16 at 01:42

1 Answer


/dev/random will only deliver random data while the kernel's entropy pool lasts. Once the pool is exhausted, reads block until more entropy is gathered, which is why your `head` command hangs; a single read can also return fewer bytes than requested, which is why `dd` reported `0+1 records` and wrote only a 6-byte file. If you want to read bigger chunks of (pseudo-)random data, read from /dev/urandom instead; it never blocks.
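
For example, either of these writes a full 1 GiB without blocking (a minimal sketch; the file name and sizes are illustrative). Note that on Linux a single read from /dev/urandom is capped at roughly 32 MiB, so either keep `bs` modest or use GNU dd's `iflag=fullblock` to retry short reads:

$ dd if=/dev/urandom of=foo bs=1M count=1024                 # many modest reads, each completes fully
$ dd if=/dev/urandom of=foo bs=1G count=1 iflag=fullblock    # GNU dd keeps reading until the block is full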

Florian
  • `/dev/urandom` is very, very slow. It takes about 1 minute to generate about 30 MB. Is that normal? – nowox May 02 '16 at 19:42
  • Is it possible to generate a `bad random` file quickly? I just want a dummy file to benchmark a file transfer over NFS. – nowox May 02 '16 at 19:47
  • 1
    For that purpose, it should be sufficient to create a smaller file and concatenat that a couple of times to create a larger one: dd if=/dev/urandom bs=1024 count=1024 >1m; cat 1m 1m 1m 1m 1m >5m; cat 5m 5m 5m 5m 5m >25m; cat 25m 25m 25m 25m>100m – Florian May 02 '16 at 19:48
  • 2
    That should indeed work for NFS. To generate test data for tools that are harder to fool like `rsync` or `xz`, you can use `head -c 100m /dev/zero | openssl enc -aes-128-cbc -pass pass:"$(head -c 20 /dev/urandom | base64)" > my100MBfile`. – that other guy May 02 '16 at 20:39
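
Expanding the concatenation approach from the comments into readable steps (a sketch; file names like `1m` are illustrative):

$ dd if=/dev/urandom of=1m bs=1024 count=1024   # 1 MiB of pseudo-random data
$ cat 1m 1m 1m 1m 1m >5m                        # 5 MiB
$ cat 5m 5m 5m 5m 5m >25m                       # 25 MiB
$ cat 25m 25m 25m 25m >100m                     # 100 MiB

Only the first megabyte comes from the RNG, so this is fast; the repetition doesn't matter for an NFS benchmark, though, as noted above, compression-aware tools like `rsync` or `xz` will see through it.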