When the curl executable writes its output to stdout, is its memory use proportional to the total size of the downloaded content?

If I'm downloading a 1GB tgz with curl that I pipe to tar -- as in the example below -- is the memory used proportional to that 1GB?

curl -sSL https://path/to/1GB_file.tar.gz | tar -xvzf - -C /

If the answer is 'yes', is there a way to "throttle" or limit the maximum memory used by curl without outright causing the download itself to fail?

Is there a way to measure how much memory curl uses -- either dynamically, or a maximum value -- while downloading a given file?

StoneThrow

  • I very much doubt it’s related - both cURL and tar will stream. I would imagine memory use is just the buffers. Dumping it to your stdout though might be an issue - as your terminal might buffer the whole lot. In terms of measurement, maybe just use `top` in another window in the first instance? – Boris the Spider Apr 20 '21 at 22:19
  • How about [`time -v`](https://stackoverflow.com/a/774601/2071828)? – Boris the Spider Apr 20 '21 at 22:21
  • @BoristheSpider - don't seem to have `time` on my linux box, and I'm not allowed to `apt install` anything. >:( – StoneThrow Apr 20 '21 at 22:27

1 Answer

I'm going to offer this as an answer, but I've cobbled it together from new reading/discovery, so it's likely not ideal.

After spawning my curl process, I quickly determine its pid while the process is still running, and in another shell I run grep VmPeak /proc/<pid>/status. Since VmPeak is a high-water mark that only grows, do that a few times, ideally shortly before the curl process ends, so the reading covers the whole download.
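
A minimal sketch of that polling loop, assuming curl is run standalone (output is discarded to /dev/null so that $! is curl's own pid; in the tar pipeline from the question you'd have to find the pid with pgrep instead):

curl -sSL https://path/to/1GB_file.tar.gz -o /dev/null &
CURL_PID=$!

# VmPeak is the process's high-water mark for virtual memory,
# so the last reading before curl exits is the overall peak.
while kill -0 "$CURL_PID" 2>/dev/null; do
    grep VmPeak "/proc/$CURL_PID/status"
    sleep 1
done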

For a 1.7GB file, curl seems to use a peak of 223016 kB.

I think this means curl does not use memory proportional to the target file size: 223016 kB is roughly 218 MB, well below the 1.7 GB downloaded, so curl appears to stream the data rather than buffer the whole file in memory.
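
As an aside on the time -v suggestion from the comments: that flag belongs to GNU time, which is a standalone binary (commonly installed at /usr/bin/time), not the bash builtin time -- so it may work even when plain time -v fails. If it is installed, it reports the peak directly:

/usr/bin/time -v curl -sSL https://path/to/1GB_file.tar.gz -o /dev/null

The report includes a "Maximum resident set size (kbytes)" line, which is the figure of interest.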

StoneThrow