EDITED to include information discussed in comments
The original answer to this question was
for /r "c:\startingPoint" %%a in (*.txt) do echo %%~fa
which works as the OP intended: it recursively processes the files as they are found on disk, with no unnecessary wait or pause (of course, the first file must be located before processing can start).
What is the difference between this answer and the original code
FOR /F "delims=*" %%i IN ('dir /s /b *.txt') do echo "test"
in the question?
In general, `for /f` is used to iterate over a set of lines instead of a set of files, executing the code in the body of the `for` command once for each line. The `in` clause of the command defines "where" the set of lines comes from: either a file on disk to be read, or a command (or set of commands) to execute, whose output will be processed. In both cases, all the data is fully retrieved before processing starts; the code in the body of the `for` command is not executed until all the data is in a memory buffer.
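As a quick illustration of the two sources, here is a minimal sketch (the file path and mask are placeholders, not anything from the question):

```batch
@echo off

rem Source 1: a file on disk - for /f reads the whole file into one buffer
for /f "usebackq delims=" %%a in ("c:\data\list.txt") do echo from file: %%a

rem Source 2: the output of a command - for /f buffers the full stdout
rem of the command before the body runs even once
for /f "delims=" %%a in ('dir /b *.txt') do echo from command: %%a
```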
And this is where a difference appears.
When a file on disk is read, `for /f` gets the size of the file, allocates a memory buffer big enough to accommodate the whole file, reads the file into the buffer, and starts processing the buffer (and, of course, this means you cannot use `for /f` to process a file bigger than the available free memory).
But when `for /f` processes a command, it allocates an initial buffer and appends data to it from the stdout stream of the executed command. When the buffer is full, a larger buffer is allocated, the data is copied from the old buffer into the new one, the old buffer is discarded, and new data continues to be appended at the appropriate point in the new buffer. This process repeats every time the buffer fills up, and it is exacerbated by the fact that the buffer grows in small increments.
So, when the data generated by the command is very large, a lot of memory allocation, copying, and freeing is done, and this takes time. For large data, a lot of time.
Summarizing: if `for /f` is used to process the output of a command and the data to process is large, the time needed to do it grows much faster than the data itself (roughly quadratically, because of the repeated grow-and-copy cycles).
How to avoid it? The problem (in these cases) is retrieving the data from the command, not processing it. So, when the volume of data is really big, instead of the usual `for /f %%a in ('command') do ...` syntax, it is better to execute the command redirecting its output to a temporary file, and then use `for /f` to process that file. Generating the data takes the same amount of time either way, but the difference in the data-handling delay can go from hours down to seconds or minutes.
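A minimal sketch of this workaround (the starting folder and the temporary file name are placeholders; `usebackq` is used so the quoted name is treated as a file to read):

```batch
@echo off
setlocal

rem Run the command once, redirecting its output to a temporary file
dir /s /b "c:\startingPoint\*.txt" > "%temp%\filelist.tmp"

rem for /f now reads the file into a single buffer sized in advance,
rem avoiding the repeated grow-and-copy of the in ('command') form
for /f "usebackq delims=" %%a in ("%temp%\filelist.tmp") do (
    echo %%a
)

del "%temp%\filelist.tmp"
```

The data generation step (the `dir`) still takes as long as before; only the buffering overhead on the `for /f` side is removed.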