
I wrote a small utility in Go that pipes the output of the mysqldump command to an S3 bucket, which I intend to use to snapshot and restore our database in an end-to-end test environment. When I run this locally, everything works as expected: the utility correctly pipes the mysqldump output to the S3 bucket. However, when running in the Alpine Linux Docker image, I get a broken pipe error, and the utility then sends an empty file to the S3 bucket.

To execute mysqldump, I simply use the exec package. Code for mysqldump:

func (conf *MySQLConfig) DumpDatabases(stdout io.Writer) error {
    args := []string{"-u", conf.User, fmt.Sprintf("--password=%s", conf.Password), "-h", conf.Host, "--databases"}
    args = append(args, conf.Databases...)
    cmd := exec.Command("mysqldump", args...)

    cmd.Stdout = stdout
    cmd.Stderr = os.Stderr

    if err := cmd.Start(); err != nil {
        return err
    }
    // The error from Wait (non-zero exit status, signal) must not be
    // dropped, otherwise a failed dump looks like a successful empty one.
    return cmd.Wait()
}

The upload to the S3 bucket utilises the Go CDK (gocloud.dev). Upload takes a function that accepts an io.Writer and returns an error. In this case the following snippet pipes the output of mysqldump to S3:

bucket.Upload(func(w io.Writer) error { return mysqlConf.DumpDatabases(w) })

Code for the upload:

func (b *Bucket) Upload(writeFunc func(io.Writer) error) error {
    // Open a connection to the bucket.
    ctx := context.Background()

    bucket, err := blob.OpenBucket(ctx, b.Url)
    if err != nil {
        return err
    }
    defer bucket.Close()

    // Add encryption header
    beforeWrite := func(asFunc func(interface{}) bool) error {
        var input *s3manager.UploadInput
        if asFunc(&input) {
            input.ServerSideEncryption = aws.String("AES256")
        }
        return nil
    }

    opts := &blob.WriterOptions{}
    opts.BeforeWrite = beforeWrite

    w, err := bucket.NewWriter(ctx, generateKey(), opts)
    if err != nil {
        return err
    }

    if err := writeFunc(w); err != nil {
        // Close the writer even when the write fails, so the
        // underlying upload is not left dangling.
        w.Close()
        return err
    }

    // Close commits the upload; its error must be checked.
    return w.Close()
}

Any clue why I get a broken pipe error when running this code on Alpine Linux, while it works as expected locally? Any suggestions on how to debug this issue to figure out what exactly is happening? Should I utilise buffered IO?

  • Does the container have access to your aws account, eg [aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)? – Mark May 27 '19 at 05:32
  • Yes, it is able to send a file to the s3 bucket, so that is not the problem. – Blokje5 May 27 '19 at 07:13
  • Well it's good it's partly working :-) Some random suggestions: 1) upgrade/update to the latest Alpine Linux image, as possibly the Alpine Linux image version has a problem, 2) switch from the Alpine Linux image to Go:stretch, 3) in your code wrap all errors in fmt.Errorf(...) to pinpoint exactly where the error is coming from, 4) can you dump to a file, then upload the file? 5) reduce [latency](https://github.com/fog/fog/issues/824): is the mysqldump in the container slower? Can you increase container resources? Configure more frequent writes in smaller chunks to prevent aws thinking the socket is idle? – Mark May 27 '19 at 07:55
  • Yeah I ended up writing it to local disk and moving that to s3. Still strange that it breaks, haven't figured out why. – Blokje5 May 28 '19 at 07:43
  • I guess if the mysqldump is slower in the container than on the host (eg, the host might have the advantage of a unix socket), that might delay the writes to aws, increasing latency to the point that aws closes the connection. Glad you got it working. – Mark May 28 '19 at 21:36

0 Answers