
I have been using `curl -i http://website.com` for a long time. It's great: it reports the response headers along with the body.

I also use a tool, jq, which parses and pretty-prints JSON. I'd like to do the following:

curl -i http://website.com | jq -r .

The problem with this is that the HTTP headers are forwarded to jq.

Is there a way to redirect -i to standard error?

– Naftuli Kay

2 Answers


Try this command:

curl -s -D /dev/stderr http://www.website.com/... | jq

– sɐunıɔןɐqɐp, Zmey
    This does exactly what the OP wants. `-D /dev/stderr` writes the headers to stderr, `-s` removes the progress bars. I often add `-S` so I still get errors if curl fails (which `-s` would otherwise hide). – Quoting Eddie Apr 04 '20 at 13:11
    `curl -s -D CON http://www.website.com/... | jq .` works on windows. tyvm for the tip in the right direction. – davenpcj May 17 '21 at 15:00
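To see why this keeps jq's input clean, here is a minimal sketch of the stream separation that `-D /dev/stderr` gives you. `fake_curl` is a made-up stand-in for `curl -s -D /dev/stderr URL` (no network needed): the "headers" go to stderr and the JSON body to stdout, so anything piped downstream sees only the body.

```shell
#!/bin/sh
# Stand-in for `curl -s -D /dev/stderr URL`: headers on stderr, body on stdout.
fake_curl() {
  printf 'HTTP/1.1 200 OK\nContent-Type: application/json\n' >&2
  printf '{"name":"demo"}\n'
}

# The pipe only carries stdout, so a downstream jq would see pure JSON;
# stderr can be viewed on the terminal or captured separately:
fake_curl 2>headers.txt
cat headers.txt
```

Running `fake_curl 2>headers.txt` prints `{"name":"demo"}` on stdout while the header lines land in `headers.txt`, which is exactly the split the accepted command performs against a real server.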

Since I faced the same problem today, I came up with:

curl -i http://some-server/get.json | awk '{ sub("\r$", ""); print }' | awk -v RS= 'NR==1{print > "/dev/stderr"; next} 1' | jq .

Most likely not the best solution, but it works for me.

Explanation: the first awk program just converts Windows line endings (CRLF) to Unix line endings (LF) by stripping the trailing carriage return from each line.

In the second program, `-v RS=` instructs awk to treat one or more blank lines as the record separator (paragraph mode)[1], so the header block and the body become separate records. `NR==1{print > "/dev/stderr"; next}` prints the first record (the headers) to stderr; the `next` statement makes awk stop processing the current record and move on to the next one[2]. The trailing `1` is shorthand for `{print $0}`[3], so every remaining record (the body) goes to stdout.

[1] https://stackoverflow.com/a/33297878
[2] https://www.gnu.org/software/gawk/manual/html_node/Next-Statement.html
[3] https://stackoverflow.com/a/20263611
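The pipeline can be exercised without a server by feeding it a canned `curl -i`-style response (the headers and body below are made up for illustration):

```shell
#!/bin/sh
# Simulate `curl -i` output: CRLF headers, a blank line, then a JSON body,
# run through the two awk stages from the answer above.
printf 'HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n{"ok":true}\n' |
  awk '{ sub("\r$", ""); print }' |
  awk -v RS= 'NR==1{print > "/dev/stderr"; next} 1'
# stdout: {"ok":true}   (headers appear on stderr)
```

Since only `{"ok":true}` reaches stdout, appending `| jq .` works as intended.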

– Brian Tompsett - 汤莱恩, jakrol