242

It works OK as a single command:

curl "someURL"
curl -o - "someURL"

but it doesn't work in a pipeline:

curl "someURL" | tr -d '\n'
curl -o - "someURL" | tr -d '\n'

it returns:

(23) Failed writing body

What is the problem with piping the cURL output? How can I buffer the whole cURL output and then handle it?

Matthias Braun
static

19 Answers

166

This happens when a piped program (e.g. grep) closes the read pipe before the previous program is finished writing the whole page.

In curl "url" | grep -qs foo, as soon as grep has what it wants it will close the read stream from curl. cURL doesn't expect this and emits the "Failed writing body" error.

A workaround is to pipe the stream through an intermediary program that always reads the whole page before feeding it to the next program.

E.g.

curl "url" | tac | tac | grep -qs foo

tac is a simple Unix program that reads the entire input page and reverses the line order (hence we run it twice). Because it has to read the whole input to find the last line, it will not output anything to grep until cURL is finished. Grep will still close the read stream when it has what it's looking for, but it will only affect tac, which doesn't emit an error.
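The early-close behavior can be reproduced locally without curl; in this sketch, seq stands in for the downloading program:

```shell
# seq stands in for curl: a writer producing more output than the pipe
# buffer holds. head closes the read end after one line, so the writer
# receives SIGPIPE/EPIPE -- the same condition behind curl's error (23).
seq 1 100000 | head -n 1

# The workaround from this answer: tac|tac buffers the whole stream,
# so the early close only affects tac, which stays silent about it.
seq 1 100000 | tac | tac | head -n 1
```

Both pipelines print `1`; the difference is only in which program sees the closed pipe.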

Kaworu
  • 8
    Could you not simply pipe it through `cat` once? Solves the issue for me, at least. – benvd Jun 10 '16 at 14:26
  • 5
    No. It might help with small documents but when it is too large to fit in the buffer cat uses, the error will reappear. You could use `-s` to silence all error messages (and progress) if you don't need them. – Kaworu Jun 13 '16 at 10:37
  • 12
    `tac|tac` changes the input if input does not end with a linefeed, or for example `printf a\\nb\\nc|tac|tac` prints `a\ncb` where `\n` is a linefeed. You can use `sponge /dev/stdout` instead. Another option is `printf %s\\n "$(cat)"`, but when the input contains null bytes in shells other than Zsh, that either skips the null bytes or stops reading after the first null byte. – nisetama Sep 24 '16 at 18:20
  • From the docs: CURLE_WRITE_ERROR (23) An error occurred when writing received data to a local file, or an error was returned to libcurl from a write callback. https://curl.haxx.se/libcurl/c/libcurl-errors.html – Jordan Stewart Jan 20 '17 at 00:47
  • 4
    This should be the accepted answer because it explains the problem, although it does not provide a workable solution, as there is no `tac` command on macOS – Dominik Bucher Sep 02 '18 at 14:47
  • 1
    I simply ignore stderr and send it to null: ``curl "url" 2>/dev/null | grep`` – PLG Feb 24 '20 at 09:42
  • This looks ugly, but it's good for investigation; it lets you see the real error, like the command you're piping to not being installed. Then you can remove the `tac|tac`. – Noumenon Dec 17 '21 at 15:37
79

For completeness and future searches:

It's a matter of how cURL manages its buffer; you can disable buffering of the output stream with the -N option.

Example: curl -s -N "URL" | grep -q Welcome

Matthias Braun
user5968839
    It did work for `curl -s https://raw.githubusercontent.com/hermitdave/FrequencyWords/master/content/2016/ro/ro_50k.txt | head -20` (without `-s` I get the same error). – Dan Dascalescu Sep 14 '17 at 08:46
  • `-s` just makes curl silent so it doesn't emit the error you'd otherwise see. The underlying issue is still happening, but for most situations that's fine. However if you were doing something like `curl ... | tee /tmp/full_output | head -20` then you need to actually resolve the error if you want `/tmp/full_output` to have everything. – Chris Sep 20 '21 at 23:12
67

Another possibility: if you are using the -o (output file) option, the destination directory may not exist.

e.g. if you have -o /tmp/download/abc.txt and /tmp/download does not exist.

Hence, ensure any required directories are created/exist beforehand, or use the --create-dirs option together with -o.
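This can be tried locally with a file:// URL so no network is needed (the paths below are made up for the demo); --create-dirs lets curl create the missing directory chain itself:

```shell
# Demo with a local file:// URL and hypothetical paths, no network needed.
echo hello > /tmp/cdirs_src.txt

# Without --create-dirs this fails if /tmp/cdirs_demo/dl does not exist;
# with it, curl creates the directory chain before writing the file.
curl -s --create-dirs -o /tmp/cdirs_demo/dl/abc.txt "file:///tmp/cdirs_src.txt"
cat /tmp/cdirs_demo/dl/abc.txt
```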

MikeW
  • 4
    Thanks, --create-dirs solved this for me in the most unusual situation, couldn't ever figure out what was wrong, but this was the ticket! – rfay Jul 22 '19 at 18:03
  • 1
    It just happened to me in similar case. I forgot to declare the variable $out for the output. Thanks, Mike. – Mincong Huang Sep 24 '19 at 11:36
14

You can do this instead of using -o option:

curl [url] > [file]

  • 1
    so, not using the pipe and instead doing all the work over the file system? I wanted to use curl's output with pipes. – static Aug 20 '13 at 14:27
14

The server ran out of disk space, in my case.

Check for it with df -k .

I was alerted to the lack of disk space when I tried piping through tac twice, as described in one of the other answers: https://stackoverflow.com/a/28879552/336694. It showed me the error message write error: No space left on device.
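A quick way to check free space in the download directory before blaming curl; this assumes the POSIX df -k output format, where column 4 is the available space in kilobytes:

```shell
# Print available kilobytes in /tmp (assumed download target);
# df -k column 4 is "Available" in the POSIX output format.
avail_kb=$(df -k /tmp | awk 'NR==2 {print $4}')
echo "$avail_kb"
[ "$avail_kb" -gt 102400 ] || echo "warning: less than 100 MB free in /tmp"
```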

HostedMetrics.com
  • I received the same error due to running out of disk space within a container, for anyone else also hitting the same issue can clean up space within their containers with `docker system prune` – Dave Nov 12 '19 at 15:10
11

So it was a problem of encoding. iconv solves the problem:

curl 'http://www.multitran.ru/c/m.exe?CL=1&s=hello&l1=1' | iconv -f windows-1251 | tr -dc '[:print:]' | ...
static
11

If you are trying something like source <( curl -sS $url ) and getting the (23) Failed writing body error, it is because sourcing from a process substitution doesn't work in bash 3.2 (the default on macOS).

Instead, you can use this workaround.

source /dev/stdin <<<"$( curl -sS $url )"
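The same here-string trick can be tried locally without any network, substituting a printf for the curl call (the variable name here is made up for the demo):

```shell
# Stand-in for curl output: a variable assignment delivered as a script.
# Sourcing from a here-string works even on bash 3.2, where
# `source <(...)` does not.
source /dev/stdin <<<"$(printf 'greeting=hello\n')"
echo "$greeting"
```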
wisbucky
8

Trying the command with sudo worked for me. For example:

sudo curl -O -k 'https url here'

Note: -O (this is a capital O, not zero) and -k for an https URL.

user33192
  • explaination please? – markroxor Jul 15 '22 at 11:04
  • it's already mentioned in the answer as note. – user33192 Jul 23 '22 at 04:29
  • The crux of the answer lies in the usage of `sudo`. Can you please explain why it works with escalated privileges? – markroxor Jul 23 '22 at 18:13
  • @markroxor one reason using sudo would resolve the issue is if you are trying to download a file using curl and the output path is restricted. Using sudo in the command would allow for the creation of the file. Another work around would be to change the permissions on the folder you are outputting the file to so sudo permissions would not be required. – Daniel Black Jul 26 '22 at 19:14
  • If that is the reason, I believe that it does more bad than good. If user is aware that he is writing files in the directory where he doesn't have enough permissions he should switch user before proceeding. Downloading files from the internet as a root user is never a good idea. @DanielBlack – markroxor Jul 27 '22 at 11:20
6

I had the same error but for a different reason. In my case, I had a (tmpfs) partition with only 1 GB of space, and I was downloading a big file that eventually filled all the memory on that partition; I then got the same error as you.

LLL
6

I encountered the same problem when doing:

curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | apt-key add -

The above query needs to be executed using root privileges.

Writing it in following way solved the issue for me:

curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | sudo apt-key add -

If you put sudo before curl instead of before apt-key, you will still get the Failed writing body error.

Gerhardh
neel229
5

In my case, I was doing: curl <blabla> | jq | grep <blibli>

With jq . it worked: curl <blabla> | jq . | grep <blibli>

damio
4

For me, it was a permission issue. Docker run is called with a user profile, but root is the user inside the container. The solution was to make curl write to /tmp, since that has write permission for all users, not just root.

I used the -o option.

-o /tmp/file_to_download

lallolu
  • 2
    Yep, this might come as a surprise when you usually run scripts that create tmp files as non-privileged users, and then run that script as root just once to test sth. Later again, the non-privileged users won't be able to use/cleanup the temp files left by root. I try to always put a "chown user:" into my scripts just after creating the tmp file. – Marki Dec 31 '20 at 22:28
2

I encountered this error message while trying to install Varnish Cache on Ubuntu. A Google search for the error (23) Failed writing body landed me here, hence I am posting a solution that worked for me.

The bug is encountered while running the command as root:

curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | apt-key add -

The solution is to run apt-key add as a non-root user:

curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | apt-key add -
All Іѕ Vаиітy
1

The explanation here by @Kaworu is great: https://stackoverflow.com/a/28879552/198219

This happens when a piped program (e.g. grep) closes the read pipe before the previous program is finished writing the whole page. cURL doesn't expect this and emits the "Failed writing body" error.

A workaround is to pipe the stream through an intermediary program that always reads the whole page before feeding it to the next program.

I believe the more correct implementation would be to use sponge, as already suggested by @nisetama in the comments:

curl "url" | sponge | grep -qs foo

famzah
1

I got this error trying to use jq when I didn't have jq installed. So... make sure jq is installed if you're trying to use it.

titusfortner
1

I ran into the same error because of my own typo:

# fails because of reasons mentioned above
curl -I -fail https://www.google.com | echo $? 
curl: (23) Failed writing body

# success
curl -I -fail https://www.google.com || echo $?
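The difference between the two operators, for anyone skimming: | connects stdout to the next command's stdin, while || runs the next command only when the previous one fails:

```shell
# `|` pipes output; `||` is a conditional: run the right side
# only if the left side exits with a nonzero status.
false || echo "left side failed, so this runs"
true  || echo "this line is never printed"
```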
rustyMagnet
0

In Bash and zsh (and perhaps other shells), you can use process substitution to create a file on the fly, and then use that as input to the next process in the pipeline chain.

For example, I was trying to parse JSON output from cURL using jq and less, but was getting the Failed writing body error.

# Note: this does NOT work
curl https://gitlab.com/api/v4/projects/ | jq | less

When I rewrote it using process substitution, it worked!

# this works!
jq "" <(curl https://gitlab.com/api/v4/projects/) | less

Note: jq uses its 2nd argument to specify an input file

Bonus: If you're using jq like me and want to keep the colorized output in less, use the following command line instead:

jq -C "" <(curl https://gitlab.com/api/v4/projects/) | less -r

(Thanks to Kaworu for their explanation of why Failed writing body was occurring. However, their solution of using tac twice didn't work for me. I also wanted to find a solution that would scale better for large files and avoid the other issues noted in the comments to that answer.)

Robert
0

I was getting curl: (23) Failed writing body. Later I noticed that I did not have sufficient space for downloading an rpm package via curl, and that was the reason for the issue. I freed up some space and the issue was resolved.

-2

I added the -s flag and it did the job. E.g.: curl -o- -s https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash

Afshin Ghazi