In my script, I have two HTTP requests and would like to reuse the connection, so for example I do:
curl -v 'http://example.com?id=1&key1=value1' 'http://example.com?id=1&key2=value2'
Is there any way to store the output of each HTTP request in a separate variable? I have been searching but haven't found a solution yet.
I understand I can store the output in two different files like this:
curl -v 'http://example.com?id=1&key1=value1' -o output1 'http://example.com?id=1&key2=value2' -o output2
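One way to skip the files (a sketch, not tested against your endpoint): let curl print an ASCII record separator after each transfer with `-w`, capture the whole stdout in one variable, and split it with parameter expansion. The curl call is shown in a comment and the combined output is simulated with printf so the snippet runs offline; the real response bodies are whatever example.com returns.

```shell
# Separator trick: curl prints the -w string after EACH transfer,
# so the combined stdout is body1 + SEP + body2 + SEP.
# Real call (single curl process, connection reused):
#   resp=$(curl -s 'http://example.com?id=1&key1=value1' \
#               'http://example.com?id=1&key2=value2' \
#               -w "$sep")
sep=$(printf '\036')                       # ASCII record separator (0x1e)
resp=$(printf 'body-one\036body-two\036')  # simulated combined output

out1=${resp%%"$sep"*}   # everything before the first separator
rest=${resp#*"$sep"}
out2=${rest%%"$sep"*}   # everything before the second separator
echo "out1=$out1 out2=$out2"
```

This only works cleanly if the separator byte cannot appear in the response bodies; 0x1e is a safe bet for text responses but not for arbitrary binary data.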
Edit: here is my use case
I have a cron job that runs the GNU parallel command below every few minutes. get_data.sh runs 2000 times, once per row of input.csv. I would like to avoid temp files to get the best performance.
parallel \
-a input.csv \
--jobs 0 \
--timeout $parallel_timeout \
"get_data.sh {}"
In get_data.sh:
id=$1
curl -v "http://example.com?id=${id}&key1=value1" -o output1 \
"http://example.com?id=${id}&key2=value2" -o output2
stat1=$(cat output1 | sed '' | cut ..)
stat2=$(cat output2 | awk '')
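Note that with 2000 concurrent jobs, every instance writing to the same fixed names output1/output2 will clobber the others unless each job runs in its own directory. Once the bodies are in variables (e.g. via the separator trick above, or per-job mktemp files), you can pipe them straight into the text tools with printf and also drop the extra cat. The bodies and the cut/awk arguments below are hypothetical stand-ins, since the real commands are elided in the question:

```shell
# Hypothetical bodies; in get_data.sh these would come from the curl call.
out1='id,value
42,hello'
out2='count 7'

# printf feeds the variable straight into the pipeline: no temp file, no cat.
stat1=$(printf '%s\n' "$out1" | cut -d, -f2 | sed -n '2p')
stat2=$(printf '%s\n' "$out2" | awk '{print $2}')
echo "stat1=$stat1 stat2=$stat2"
```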