cmd.exe supports a for loop, like:
for %c in ( file*.txt ) do process %c
It supports a fair number of options for things like getting only the base name of the file in question, so if (for example) you wanted to process the .txt files and produce files with the same base names but the extension changed to, say, .dat, that's pretty easy to do.
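As a sketch of that .txt-to-.dat case: the ~n modifier on the loop variable expands to just the base name, so (using copy here purely as a stand-in for whatever processing you actually need) you could do something like:

for %c in (file*.txt) do copy %c "%~nc.dat"

For each file*.txt this produces a file with the same base name and a .dat extension.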
In this case, the apparent intent (gleaned from multiple comments) is to step through some files named tweetNNN.txt, where NNN is some number from 1 to 100. The content (not just the name) of each file is then to be passed on a cURL command line as data in a request.
The easiest way to do this is probably to use the @ character on the cURL command line, something like this:
for /l %c in (1, 1, 100) do echo "language=english&text=" > stage.txt
&& copy stage.txt + tweet%c.txt
&& curl -d@stage.txt text-processing.com/api/sentiment/ >> results.txt
(Note: I've formatted this with line breaks for readability, but it needs to be entered as a single line.)
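If you'd rather run this from a batch file than type it interactively, the only change needed is doubling the percent signs on the loop variable. A sketch of the same loop (same assumptions about file names and the API endpoint as above) in a .bat file:

for /l %%c in (1, 1, 100) do echo "language=english&text=" > stage.txt && copy stage.txt + tweet%%c.txt && curl -d@stage.txt text-processing.com/api/sentiment/ >> results.txt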
I put together a quick test, using a couple of files:
tweet1.txt: "what complete crap. hated every minute"
tweet2.txt: "Best movie of the years. Loved it"
Running the previous command in a directory containing those two files produced a results.txt containing:
{"probability": {"neg": 0.82811145456964252, "neutral": 0.18962854533013332, "pos": 0.17188854543035748}, "label": "neg"}
{"probability": {"neg": 0.10467714372495518, "neutral": 0.080508941181180751, "pos": 0.89532285627504482}, "label": "pos"}
That seems a close enough fit with the content of the files that I think we can safely conclude that the text was analyzed as desired.