I'm a beginner to Go.

Is there any way to limit golang's http.Get() bandwidth usage? I found this: http://godoc.org/code.google.com/p/mxk/go1/flowcontrol, but I'm not sure how to piece the two together. How would I get access to the http Reader?

John

3 Answers


Third-party packages offer convenient wrappers, but if you're interested in how things work under the hood, it's quite easy to do yourself.

package main

import (
    "io"
    "log"
    "net/http"
    "os"
    "time"
)

var datachunk int64 = 500       // bytes per tick
var timelapse time.Duration = 1 // seconds between ticks

func main() {
    response, err := http.Get("http://google.com")
    if err != nil {
        log.Fatalf("Get failed: %v", err)
    }
    defer response.Body.Close()

    // Copy at most datachunk bytes per tick, i.e. roughly 500 B/s.
    for range time.Tick(timelapse * time.Second) {
        _, err := io.CopyN(os.Stdout, response.Body, datachunk)
        if err != nil { // io.EOF once the body is exhausted
            break
        }
    }
}

Nothing magic.

Uvelichitel

There is an updated version of the package on GitHub.

You use it by wrapping an io.Reader.

Here is a complete example which will show the homepage of Google veeeery sloooowly.

Wrapping an interface like this to add new functionality is very good Go style, and you'll see a lot of it on your journey into Go.

package main

import (
    "io"
    "log"
    "net/http"
    "os"

    "github.com/mxk/go-flowrate/flowrate"
)

func main() {
    resp, err := http.Get("http://google.com")
    if err != nil {
        log.Fatalf("Get failed: %v", err)
    }
    defer resp.Body.Close()

    // Limit to 10 bytes per second
    wrappedIn := flowrate.NewReader(resp.Body, 10)

    // Copy to stdout
    _, err = io.Copy(os.Stdout, wrappedIn)
    if err != nil {
        log.Fatalf("Copy failed: %v", err)
    }
}
Nick Craig-Wood
  • Looks great. What about checking if a limit has been reached, such as 5MB? Would I just use b := make([]byte, 5000000); wrappedIn.Read(b)? – John Jan 10 '15 at 03:01
  • No, you would wrap the `io.Reader` in an [io.LimitedReader](http://golang.org/pkg/io/#LimitedReader). – Nick Craig-Wood Jan 10 '15 at 13:48
  • This limits the speed at which you read from the kernel's TCP buffer, not directly the speed at which you read from the remote. The local TCP buffer will fill, the network will be idle while you read (slowly) from the buffer, then it will burst again to refill the buffer. For a large download you're averaging 10 bytes/sec from the remote, but for a small download you're just consuming it slowly, with no difference in network activity. To limit network activity see https://unix.stackexchange.com/questions/28198/how-to-limit-network-bandwidth – Graham King Nov 14 '17 at 18:25
  • @GrahamKing is correct. However, I am looking for a simple solution that is portable so I am ok, at least for now, on relying on the TCP buffers filling up. However, it would be nice to detect when all the data has downloaded so that I can write the rest of the data without needlessly throttling when there's no network activity. Any idea how to accomplish that? Thanks! – stefansundin May 20 '22 at 19:51

You can use https://github.com/ConduitIO/bwlimit to limit the bandwidth of requests on the server and the client. It differs from other libraries because it respects read/write deadlines (timeouts) and limits the bandwidth of the whole request, including headers, not only the request body.

Here's how to use it on the client:

package main

import (
    "io"
    "net"
    "net/http"
    "time"

    "github.com/conduitio/bwlimit"
)

const (
    writeLimit = 1 * bwlimit.Mebibyte // write limit is 1048576 B/s
    readLimit  = 4 * bwlimit.KB       // read limit is 4000 B/s
)

func main() {
    // change dialer in the default transport to use a bandwidth limit
    dialer := bwlimit.NewDialer(&net.Dialer{
        Timeout:   30 * time.Second,
        KeepAlive: 30 * time.Second,
    }, writeLimit, readLimit)
    http.DefaultTransport.(*http.Transport).DialContext = dialer.DialContext

    // requests through the default client respect the bandwidth limit now
    resp, err := http.DefaultClient.Get("http://google.com")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // drain the body; reads are throttled to readLimit
    _, _ = io.Copy(io.Discard, resp.Body)
}

However, as Graham King pointed out, keep in mind that reads from the remote will still fill the TCP buffer as fast as possible; this library only throttles how quickly you read from that buffer. Limiting the bandwidth of writes produces the expected result, though.

lmazgon