3

I need to automate the submission of a lot of requests to a cloud-based database interface (Intelex). There isn't any way to submit certain actions in bulk, but all that is necessary to submit an individual request is for an authenticated user to attempt to open the web link. So, to restore a record with a given GUID all I need to do is open the page "https://.../restore/[GUID]". The webpage will load and display the now-restored record, but I don't actually need it to even finish loading - simply requesting the page is enough for the server to perform the action. I expect the server will either kick me out or drop some of the requests if I send them too quickly, but it has no issues with processing them as fast as I can navigate and open links in a new browser tab (about 1 per second).

What is the easiest way to submit these requests for a few thousand web addresses?

AnAdverb
  • 33
  • 5

2 Answers

0

I can see a couple of ways here. Each of them is fairly easy.

  1. Store the variable part of the URL in a file, then loop through the file's contents, calling a tool like curl with the URL you have built on each iteration. If you're using Linux you can do something like Looping through the content of a file in Bash; for Windows, check this post: How do you loop through each line in a text file using a windows batch file?

  2. Use the JMeter tool (jMeter loop through all values in CSV). This will even allow you to parallelize your queries.
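For option 1, a minimal bash sketch might look like the following. This is an illustration, not the exact command to run: `example.com` stands in for the real Intelex host, and the sample GUIDs are placeholders for your actual list.

```shell
# Hypothetical sketch; BASE stands in for the real Intelex restore endpoint.
BASE="https://example.com/restore"

# Stand-in GUIDs for illustration -- in practice guid.txt already exists.
printf '%s\n' 1111-aaaa 2222-bbbb > guid.txt

while IFS= read -r guid; do
  echo "requesting $BASE/$guid"   # dry run; swap echo for: curl -s -o /dev/null
  sleep 1                         # pace requests at roughly 1 per second
done < guid.txt
```

Once the printed URLs look right, replace the `echo` with the commented `curl` call; `-s -o /dev/null` silences curl and discards the page body, since merely making the request is enough for the server to act.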

Alexey R.
  • 8,057
  • 2
  • 11
  • 27
  • Thanks Alexey! Installing JMeter looks like it would be a substantial research project for me given my current knowledge level of: "_usually_ able to jury-rig code from Stack-Exchange"; "https means the web service magically knows I'm me... or something"; and "Linux is something smarter people use". I'm thinking the best I'll get would be getting a batch file to open and close Chrome 1000 times while I eat lunch, a la combining your link with https://stackoverflow.com/questions/41331970/open-and-close-websites-in-batch. Do you think that's a fair assessment? – AnAdverb Jan 27 '21 at 18:04
  • That is a possible solution, but it is more resource-consuming and hence takes longer to complete. It may happen that you come back from your lunch and the script is still working. – Alexey R. Jan 27 '21 at 18:22
0

Assume guid.txt contains:

[GUID]
[GUID]
[GUID]
:
[GUID]

Then you can run:

cat guid.txt | parallel --delay 1s wget https://.../restore/{}

To automatically find the optimal --delay use:

cat guid.txt |
  parallel --retries 5 --delay 1sauto wget https://.../restore/{}

(Requires GNU Parallel version 20210122 or later.)

Ole Tange
  • 31,768
  • 5
  • 86
  • 104