Unless there are NUL characters in the strings, there should be no need to call jq more than once, or to serialize the data.
In the simplest case, you could proceed along the following lines:
{ IFS= read -r certificate
  IFS= read -r key
  echo "certificate=$certificate"
  echo "key=$key"
} < <(API call | jq -r '.certificate, .key')
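As a quick sanity check, here is a self-contained variant of the same snippet in which a hypothetical inline sample stands in for the API call:

# Hypothetical sample standing in for the real API response
json='{"certificate": "CERT-DATA", "key": "KEY-DATA"}'

{ IFS= read -r certificate
  IFS= read -r key
  echo "certificate=$certificate"
  echo "key=$key"
} < <(printf '%s\n' "$json" | jq -r '.certificate, .key')

# Prints:
#   certificate=CERT-DATA
#   key=KEY-DATA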
If the values do not contain NUL but might contain newline characters, then you could use NUL as the delimiter. For the sake of variety, we could also use a while loop:
while IFS= read -r -d $'\0' certificate
do
  IFS= read -r -d $'\0' key
  echo "certificate=$certificate"
  echo "key=$key"
done < <(API call | jq -rj '[.certificate, .key] | join("\u0000")')
Conversely, if any of the values of interest might contain literal NUL values ("\u0000"), the question becomes problematic, since bash variables in effect cannot contain literal NULs. In that case, here are two strategies for extracting the "raw" string equivalents into separate files:
1. Save the JSON output in a (temporary) file, and invoke jq -r once per value of interest, redirecting each result to its own file (see the first sketch below).
2. Set up a bash pipeline starting with:

API call | jq -r '.certificate, .key | @base64'

and continuing with a loop in which each line is decoded, e.g. using base64 --decode or jq's @base64d (see the second sketch below).
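Here is a minimal sketch of the first strategy, with illustrative file names (response.json, certificate.out, key.out); add jq's -j flag as well if the trailing newline that -r appends is unwanted:

# Save the response once, then query it once per value of interest.
API call > response.json
jq -r '.certificate' response.json > certificate.out
jq -r '.key' response.json > key.out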
The second strategy might make sense if the API call produces a very large JSON document.
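And a minimal sketch of the second strategy along those lines, again with illustrative output file names; the || break simply stops the loop if an expected line is missing:

API call | jq -r '.certificate, .key | @base64' |
for name in certificate key
do
  # Each line holds one base64-encoded value; decoding restores any NULs or newlines.
  IFS= read -r encoded || break
  printf '%s' "$encoded" | base64 --decode > "$name.out"
done

Since the decoded values go to files rather than to shell variables, it does not matter that the loop runs in a pipeline subshell.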