ls
is not fast at all, and for your purpose it isn't even useful: ls
prints an alphabetically sorted list of items, so it has to wait for the OS to return the whole list of entries, sort them, and print them to standard output before you can filter the result looking for newline characters.
That's a lot of work for a simple task, and even worse: if some of your files have a newline in the name, you'll count them more than once.
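To see the newline problem in action, here's a small demo (the file names are just for illustration, and the `$'...'` quoting assumes bash or a similar shell):

```shell
# Demo: a newline embedded in a filename breaks line-based counting
cd "$(mktemp -d)"                    # work in a throwaway directory
touch "normal.txt" $'bad\nname.txt'  # two files, one with a newline in its name
ls -1 | wc -l                                      # prints 3: the bad name spans two lines
find . -mindepth 1 -maxdepth 1 -printf x | wc -m   # prints 2: one "x" per entry
```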
find
, on the other hand, doesn't sort. It also has the advantage of executing its actions as soon as each buffer is returned from the file system, so you'll start seeing results immediately, and it will consume far less memory.
So prefer this approach instead:
find . -mindepth 1 -maxdepth 1 -ignore_readdir_race -printf x | wc -m
It prints an "x" to standard output for every item found in the current directory (excluding the current directory itself, thanks to -mindepth 1
), without recursing (-maxdepth 1
), then counts the characters.
Since the folder is very full, -ignore_readdir_race
makes find ignore errors for files deleted while it is counting.
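As a quick sanity check of the pipeline above, run it in a scratch directory (file names here are arbitrary; -ignore_readdir_race assumes GNU find):

```shell
# Sanity check: count three freshly created entries
cd "$(mktemp -d)"     # throwaway directory
touch a b c           # three entries
find . -mindepth 1 -maxdepth 1 -ignore_readdir_race -printf x | wc -m  # prints 3
```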
If you want to follow the count as it grows, redirect the output to a file (ideally on a tmpfs, so everything stays in memory and you don't create an I/O bottleneck), then detach the process. Whenever you want the counter's current value, simply run wc -m /tmp/count.txt
:
nohup find . -mindepth 1 -maxdepth 1 -ignore_readdir_race -printf x > /tmp/count.txt &
Then when you want to see the actual count:
wc -m /tmp/count.txt
Or just keep watching it increase...
watch wc -m /tmp/count.txt
Have fun