5

I have a problem that requires me to calculate the maximum upload and download available, then limit my program's usage to a percentage of it. However, I can't think of a good way to find the maximums.

At the moment, the only solution I can come up with is transferring a few megabytes between the client and server, then measuring how long the transfer took. This solution is very undesirable, however, because with 100,000 clients it could result in too large an increase in our server's bandwidth usage (which is already too high).

Does anyone have any solutions to this problem?

Note that I'm mostly interested in limiting the data transferred up to the point where it leaves the ISP's network; I think that's the most likely location of a bottleneck that would degrade other programs' communication. Correct me if I'm wrong, though.

EDIT: After further investigation, I don't think this is possible; there are too many variables involved to accurately measure the maximum transfer rate when leaving the ISP's network. I'll leave the question open, though, in case someone comes up with an accurate solution.

Collin Dauphinee
  • What OS are you writing the code for? You can probably retrieve at least the theoretical maximum for a particular interface, but the method to do so will vary with the OS. – Jerry Coffin May 07 '10 at 15:36
  • Windows. I'm not interested in the interface maximum; I'm interested in the maximum that can be transferred through the client's ISP. If allowed, our program will use everything it's given, which degrades the performance of other applications. Having the user select their own limits is unacceptable from a usability standpoint. – Collin Dauphinee May 07 '10 at 15:44
  • Don't think you have much of a choice other than sampling the actual transfer rate during a transfer. If you don't want to smash your server, you could look into using one of the existing services for measuring it, such as speakeasy.net. Someone must have an API for you. – AlG May 07 '10 at 15:53
  • Don't have time to type out a whole answer, but take a look at TCP-friendly rate control: http://www.faqs.org/rfcs/rfc5348.html – KillianDS May 07 '10 at 16:48
  • Which is more important, the amount of data you have to receive or the delay/jitter in the communication? Have you tried using end-to-end QoS? – rsarro Jun 01 '10 at 23:23

5 Answers

2

If you can restrict the code to Windows Vista or newer (not likely, but who knows?) you can use SetPerTcpConnectionEStats and GetPerTcpConnectionEStats along with TCP_ESTATS_BANDWIDTH_RW_v0 to have Windows estimate the bandwidth for a connection, and later retrieve that estimate. Then, based on that estimate, you can throttle the bandwidth you use.
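A rough sketch of what that could look like (function names are mine, error handling is trimmed; it assumes Winsock is already initialized, an already-connected IPv4 socket, and, for the Set call, administrator rights):

    // Minimal sketch (not production code). Bandwidth estimates come back
    // in bits per second. Compile against iphlpapi.lib and ws2_32.lib.
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <iphlpapi.h>
    #include <tcpestats.h>
    #pragma comment(lib, "iphlpapi.lib")
    #pragma comment(lib, "ws2_32.lib")

    // Build the MIB_TCPROW that identifies this connection to the EStats APIs.
    static MIB_TCPROW RowForSocket(SOCKET s)
    {
        sockaddr_in local = {}, remote = {};
        int len = sizeof(local);
        getsockname(s, reinterpret_cast<sockaddr*>(&local), &len);
        len = sizeof(remote);
        getpeername(s, reinterpret_cast<sockaddr*>(&remote), &len);

        MIB_TCPROW row = {};
        row.dwState      = MIB_TCP_STATE_ESTAB;
        row.dwLocalAddr  = local.sin_addr.s_addr;
        row.dwLocalPort  = local.sin_port;     // already in network byte order
        row.dwRemoteAddr = remote.sin_addr.s_addr;
        row.dwRemotePort = remote.sin_port;
        return row;
    }

    // Turn on bandwidth estimation for the connection (requires admin).
    bool EnableBandwidthEstimation(SOCKET s)
    {
        MIB_TCPROW row = RowForSocket(s);
        TCP_ESTATS_BANDWIDTH_RW_v0 rw = {};
        rw.EnableCollectionOutbound = TcpBoolOptEnabled;
        rw.EnableCollectionInbound  = TcpBoolOptEnabled;
        return SetPerTcpConnectionEStats(&row, TcpConnectionEstatsBandwidth,
                                         reinterpret_cast<PUCHAR>(&rw),
                                         0, sizeof(rw), 0) == NO_ERROR;
    }

    // Later, read back Windows' estimate of the connection's bandwidth.
    bool ReadBandwidthEstimate(SOCKET s, ULONG64* outBps, ULONG64* inBps)
    {
        MIB_TCPROW row = RowForSocket(s);
        TCP_ESTATS_BANDWIDTH_ROD_v0 rod = {};
        ULONG err = GetPerTcpConnectionEStats(&row, TcpConnectionEstatsBandwidth,
                                              NULL, 0, 0,  // no RW block needed
                                              NULL, 0, 0,  // bandwidth has no ROS block
                                              reinterpret_cast<PUCHAR>(&rod),
                                              0, sizeof(rod));
        if (err != NO_ERROR)
            return false;
        *outBps = rod.OutboundBandwidth;   // bits per second
        *inBps  = rod.InboundBandwidth;    // bits per second
        return true;
    }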

So what would happen is that you'd start running the application about as you do now, collect statistics for a while, then impose throttling based on what you measure during that initial time period.

This has the advantage that it avoids sending extra data only to collect bandwidth information -- it simply collects statistics on the data you're sending anyway. It has the disadvantage (which I suspect is nearly unavoidable) that it still uses something approaching full bandwidth until you get an estimate of the bandwidth that's available (and, as mentioned above, this was added in Windows Vista, so it's not even close to universally available yet).
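Once you have an estimate, the throttling itself can be as simple as a token bucket sized to a fraction of it. Here is a hypothetical helper, not tied to any particular Windows API:

    // Token bucket that caps the send rate to `fraction` of a measured
    // bandwidth figure (given in bits per second).
    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <thread>

    class SendThrottle {
    public:
        SendThrottle(unsigned long long bandwidthBitsPerSec, double fraction)
            : bytesPerSec_(bandwidthBitsPerSec / 8.0 * fraction),
              tokens_(0.0),
              last_(std::chrono::steady_clock::now()) {}

        // Block until it is OK to put `bytes` more bytes on the wire.
        void Acquire(std::size_t bytes) {
            for (;;) {
                auto now = std::chrono::steady_clock::now();
                std::chrono::duration<double> dt = now - last_;
                last_ = now;
                // Refill with elapsed time, but never bank more than one
                // second's worth of budget (keeps bursts bounded).
                tokens_ = std::min(tokens_ + dt.count() * bytesPerSec_,
                                   bytesPerSec_);
                if (tokens_ >= bytes) {
                    tokens_ -= bytes;
                    return;
                }
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
            }
        }

    private:
        double bytesPerSec_;   // allowed rate in bytes per second
        double tokens_;        // bytes we may send right now
        std::chrono::steady_clock::time_point last_;
    };

Calling Acquire(n) before each send of n bytes keeps the long-run rate at roughly the configured fraction of the estimate.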

Jerry Coffin
1

If you have Windows devices on both ends of the connections, you could use the Background Intelligent Transfer Service (BITS) to move the information and cop out of the entire bandwidth question. The (nearly) always installed component is described at http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx.
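For illustration, a minimal sketch of queuing a download through the BITS COM API; the job name, URL, and local path are placeholders, error handling is mostly omitted, and a real application would use job notifications instead of polling (and also handle BG_JOB_STATE_TRANSIENT_ERROR):

    #include <windows.h>
    #include <bits.h>
    #pragma comment(lib, "ole32.lib")

    int wmain()
    {
        if (FAILED(CoInitializeEx(NULL, COINIT_APARTMENTTHREADED)))
            return 1;

        IBackgroundCopyManager* mgr = NULL;
        HRESULT hr = CoCreateInstance(__uuidof(BackgroundCopyManager), NULL,
                                      CLSCTX_LOCAL_SERVER,
                                      __uuidof(IBackgroundCopyManager),
                                      (void**)&mgr);
        if (SUCCEEDED(hr)) {
            GUID jobId;
            IBackgroundCopyJob* job = NULL;
            hr = mgr->CreateJob(L"MyTransfer", BG_JOB_TYPE_DOWNLOAD,
                                &jobId, &job);
            if (SUCCEEDED(hr)) {
                // Placeholder endpoints -- replace with your own.
                job->AddFile(L"http://example.com/payload.bin",
                             L"C:\\Temp\\payload.bin");
                job->Resume();                // queue it; BITS does the rest

                BG_JOB_STATE state;
                do {
                    Sleep(1000);
                    job->GetState(&state);
                } while (state != BG_JOB_STATE_TRANSFERRED &&
                         state != BG_JOB_STATE_ERROR);

                if (state == BG_JOB_STATE_TRANSFERRED)
                    job->Complete();          // commit the downloaded file
                else
                    job->Cancel();
                job->Release();
            }
            mgr->Release();
        }
        CoUninitialize();
        return 0;
    }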

You don't say whether bandwidth friendliness is a hard requirement or just a cost issue, so this may not be appropriate.

Pekka
0

The only answers I see are:

  1. Use a small sample to time the transfer rate.
  2. Time the actual data in chunks (say 1k) and report the average.

Some of the issues complicating the matter:

  • The processor bandwidth of the sending machine (i.e. other tasks running).
  • Traffic density on the network.
  • Tasks running on the client machine.
  • Architecture of all machines.

Since the client may be running other tasks, and the host (sending machine) will be running different tasks, the transfer rate will vary.

I vote for sending a chunk of data and timing it, then sending another chunk and timing that. Accumulate these durations and average over the number of chunks. This gives a dynamic measurement, which will be more accurate than any precalculated figure.
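A sketch of what that could look like with Winsock and std::chrono; the 1 KB chunk size and the function name are just illustrative, and note that send() returning only means the data was accepted into the socket's buffer, so this over-estimates throughput until that buffer fills up:

    #include <winsock2.h>
    #include <algorithm>
    #include <chrono>
    #pragma comment(lib, "ws2_32.lib")

    // Send `len` bytes over an already-connected socket in fixed-size chunks,
    // timing each chunk, and return the average throughput in bytes/second.
    double SendAndMeasure(SOCKET s, const char* data, size_t len,
                          size_t chunk = 1024)
    {
        using clock = std::chrono::steady_clock;
        double totalSeconds = 0.0;
        size_t totalSent = 0;

        while (totalSent < len) {
            size_t n = std::min(chunk, len - totalSent);
            auto start = clock::now();
            int sent = send(s, data + totalSent, static_cast<int>(n), 0);
            if (sent <= 0)
                break;                          // error or connection closed
            std::chrono::duration<double> dt = clock::now() - start;
            totalSeconds += dt.count();
            totalSent += static_cast<size_t>(sent);
        }
        return totalSeconds > 0.0 ? totalSent / totalSeconds : 0.0;
    }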

Thomas Matthews
0

If the problem is raw bandwidth, then a feedback mechanism could work here. When the session starts, the server tells the client at which rate it will send data. The client monitors the rate at which it actually receives data. If the receive rate is less than the advertised send rate (you could use a threshold here, say 90% of the advertised rate or less), the client notifies the server to throttle down the data rate, and the process starts again. This would serve as a basic QoS mechanism.
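A sketch of the client-side half of that feedback loop; the advertised rate, the 90% threshold, and the one-second window come straight from the description above, while how the "slow down" message gets back to the server is application-specific and left out:

    #include <chrono>
    #include <cstddef>

    class ReceiveRateMonitor {
    public:
        explicit ReceiveRateMonitor(double advertisedBytesPerSec)
            : advertised_(advertisedBytesPerSec), received_(0),
              windowStart_(std::chrono::steady_clock::now()) {}

        // Call whenever a block of data arrives. Returns true when the
        // server should be asked to throttle down.
        bool OnDataReceived(std::size_t bytes) {
            received_ += bytes;
            auto now = std::chrono::steady_clock::now();
            std::chrono::duration<double> elapsed = now - windowStart_;
            if (elapsed.count() < 1.0)           // evaluate once per second
                return false;

            double actual = received_ / elapsed.count();
            received_    = 0;
            windowStart_ = now;
            return actual < 0.9 * advertised_;   // the 90% threshold
        }

    private:
        double advertised_;
        std::size_t received_;
        std::chrono::steady_clock::time_point windowStart_;
    };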

If the problem is that the connection has high latency and/or jitter, try sending the information in smaller packets (actual IP/TCP packets). Normally the system will try to use the maximum packet size, but packet fragmentation on the Internet can and will delay the traffic. If this still does not improve the latency, you could fall back to using UDP instead of TCP, but that will not guarantee data delivery.

rsarro
0

One option would be to implement something like uTorrent's UDP transport protocol between the client and server to keep latency down. Just measuring raw throughput won't help when some other process starts using bandwidth as well, cutting into the amount of bandwidth you have free.
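For a very rough idea of the delay-based approach those protocols use (uTP's congestion control is based on LEDBAT, RFC 6817), here is an illustrative rate controller; the constants are only ballpark figures and nothing here is tied to the actual uTP wire format:

    #include <algorithm>

    // Nudge the send rate up while the measured queuing delay stays under a
    // target (~100 ms), and back off once the delay grows -- i.e. before the
    // link is fully saturated and other traffic starts to suffer.
    class DelayBasedRate {
    public:
        explicit DelayBasedRate(double initialBytesPerSec,
                                double targetDelayMs = 100.0)
            : rate_(initialBytesPerSec), target_(targetDelayMs),
              baseDelay_(1e9) {}

        // Feed in a one-way delay sample (ms) derived from packet timestamps.
        void OnDelaySample(double oneWayDelayMs) {
            baseDelay_ = std::min(baseDelay_, oneWayDelayMs); // best-case path delay
            double queuing   = oneWayDelayMs - baseDelay_;    // delay we are adding
            double offTarget = (target_ - queuing) / target_; // >0 speed up, <0 back off
            rate_ = std::max(1000.0, rate_ * (1.0 + 0.1 * offTarget));
        }

        double BytesPerSec() const { return rate_; }

    private:
        double rate_;       // current allowed send rate, bytes/sec
        double target_;     // target queuing delay, ms
        double baseDelay_;  // lowest one-way delay seen so far, ms
    };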

bdonlan