Let's say I'm building my own download accelerator.
To simplify, assume that:
- my code runs on a third-party host whose network parameters I cannot control
- an item is downloaded from a single IP
- number of parallel range transfers is adjustable
- there will be many transfers, so ideal parameters can be learned over time
- client runs Linux
- server is outside my control
- path is over WAN and download is over HTTPS
- downloaded segments are large
How do I measure if enough connections are used to saturate the path between client and server?
Which fields from getsockopt(..., TCP_INFO) are actually useful here?
How fast can I adjust to varying network conditions?
It's possible to measure CPU and memory pressure on a client system; is there an equivalent signal for network pressure?