I'm using the aws-sdk to download a file from an S3 bucket. The S3 download function wants something that implements io.WriterAt, but bytes.Buffer doesn't implement that. Right now I'm creating a file, which implements io.WriterAt, but I'd like something in memory.
5 Answers
For cases involving the AWS SDK, use aws.WriteAtBuffer to download S3 objects into memory.
requestInput := s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
}

buf := aws.NewWriteAtBuffer([]byte{})

if _, err := downloader.Download(buf, &requestInput); err != nil {
    // handle the download error
}

fmt.Printf("Downloaded %v bytes", len(buf.Bytes()))
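The snippet assumes a downloader already exists. A minimal sketch of that setup for the v1 SDK (the sess and downloader names here are just illustrative, not part of the answer):

// imports: "github.com/aws/aws-sdk-go/aws/session"
//          "github.com/aws/aws-sdk-go/service/s3/s3manager"
sess := session.Must(session.NewSession())
downloader := s3manager.NewDownloader(sess)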

It's worth noting that this buffer will grow unboundedly (up to the file size) unless concurrently drained (read from) – Mike Graf Sep 11 '19 at 23:02
Use `manager.NewWriteAtBuffer([]byte{})` for AWS SDK V2 – Andres Oct 26 '22 at 16:04
Faking AWS WriterAt with an io.Writer
This isn't a direct answer to the original question but rather the solution I actually used after landing here. It's a similar use case that I figure may help others.
The AWS documentation defines the contract such that if you set downloader.Concurrency to 1, you get guaranteed sequential writes.
downloader.Concurrency = 1

downloader.Download(FakeWriterAt{w}, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
})
Therefore you can take an io.Writer and wrap it to fulfill the io.WriterAt interface, throwing away the offset that you no longer need:
type FakeWriterAt struct {
    w io.Writer
}

func (fw FakeWriterAt) WriteAt(p []byte, offset int64) (n int, err error) {
    // ignore 'offset' because we forced sequential downloads
    return fw.w.Write(p)
}
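To tie this back to the original question, here is a rough sketch (untested; downloader, bucket, and key are assumed to exist) that wraps a plain bytes.Buffer so the object ends up in memory:

downloader.Concurrency = 1 // required for FakeWriterAt to be safe

var buf bytes.Buffer
_, err := downloader.Download(FakeWriterAt{&buf}, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
})
if err != nil {
    // handle err
}
data := buf.Bytes() // the whole object, now in memory

The same wrapper works for any io.Writer, for example streaming straight into an http.ResponseWriter, as long as Concurrency stays at 1.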

I don't know of any way to do this in the standard library, but you can write your own buffer.
It really wouldn't be all that hard...
EDIT: I couldn't stop thinking about this, and I ended up accidentally the whole thing, enjoy :)
package main

import (
    "errors"
    "fmt"
)

func main() {
    buff := NewWriteBuffer(0, 10)

    buff.WriteAt([]byte("abc"), 5)

    fmt.Printf("%#v\n", buff)
}
// WriteBuffer is a simple type that implements io.WriterAt on an in-memory buffer.
// The zero value of this type is an empty buffer ready to use.
type WriteBuffer struct {
    d []byte
    m int
}

// NewWriteBuffer creates and returns a new WriteBuffer with the given initial size and
// maximum. If maximum is <= 0 it is unlimited.
func NewWriteBuffer(size, max int) *WriteBuffer {
    if max < size && max > 0 {
        max = size
    }
    return &WriteBuffer{make([]byte, size), max}
}

// SetMax sets the maximum capacity of the WriteBuffer. If the provided maximum is lower
// than the current size but greater than 0, it is set to the current size; if it is
// less than or equal to zero, the buffer is unlimited.
func (wb *WriteBuffer) SetMax(max int) {
    if max < len(wb.d) && max > 0 {
        max = len(wb.d)
    }
    wb.m = max
}

// Bytes returns the WriteBuffer's underlying data. This value will remain valid so long
// as no other methods are called on the WriteBuffer.
func (wb *WriteBuffer) Bytes() []byte {
    return wb.d
}

// Shape returns the current WriteBuffer size and its maximum if one was provided.
func (wb *WriteBuffer) Shape() (int, int) {
    return len(wb.d), wb.m
}
func (wb *WriteBuffer) WriteAt(dat []byte, off int64) (int, error) {
    // Range/sanity checks.
    if int(off) < 0 {
        return 0, errors.New("offset out of range (too small)")
    }
    if wb.m > 0 && int(off)+len(dat) > wb.m {
        return 0, errors.New("offset+data length out of range (too large)")
    }

    // Check fast path extension
    if int(off) == len(wb.d) {
        wb.d = append(wb.d, dat...)
        return len(dat), nil
    }

    // Check slower path extension
    if int(off)+len(dat) > len(wb.d) {
        nd := make([]byte, int(off)+len(dat))
        copy(nd, wb.d)
        wb.d = nd
    }

    // Once no extension is needed just copy bytes into place.
    copy(wb.d[int(off):], dat)
    return len(dat), nil
}
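As a rough sketch of how this custom buffer could be plugged into the downloader from the question (downloader, bucket, and key are assumed to exist; note the type has no locking, so concurrent part writes should be disabled):

buf := NewWriteBuffer(0, 0) // max <= 0 means unlimited
downloader.Concurrency = 1  // WriteBuffer is not synchronized

_, err := downloader.Download(buf, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
})
if err != nil {
    // handle err
}
data := buf.Bytes()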

Prefer using `aws.WriteAtBuffer` as answered by @sam rather than rolling a custom solution. – Urjit Jul 30 '18 at 00:51
If the AWS API has that baked in, then of course you should use it. – Milo Christiansen Jul 31 '18 at 02:09
I was looking for a simple way to get an io.ReadCloser directly from an S3 object. There is no need to buffer the response or reduce concurrency.
import "github.com/aws/aws-sdk-go/service/s3"

[...]

obj, err := c.s3.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("my-bucket"),
    Key:    aws.String("path/to/the/object"),
})
if err != nil {
    return nil, err
}

// obj.Body is an io.ReadCloser
return obj.Body, nil
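If the goal really is bytes in memory rather than a stream, the returned Body can simply be drained and closed; a small sketch using only the standard library:

defer obj.Body.Close()

data, err := io.ReadAll(obj.Body) // use ioutil.ReadAll on Go < 1.16
if err != nil {
    return nil, err
}
// data now holds the whole object in memory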

With aws-sdk-go-v2, the example provided in the codebase shows:
// Example:
// pre-allocate an in-memory buffer, where headObject is of type *s3.HeadObjectOutput
buf := make([]byte, int(headObject.ContentLength))

// wrap it in a manager.WriteAtBuffer
w := manager.NewWriteAtBuffer(buf)

// download the file into memory
numBytesDownloaded, err := downloader.Download(ctx, w, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(item),
})
Then use w.Bytes() as the result.
Import "github.com/aws/aws-sdk-go-v2/feature/s3/manager" and the other packages you need.
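For completeness, a sketch of the surrounding v2 setup that the snippet assumes (cfg, client, and downloader are placeholder names, error handling elided):

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

ctx := context.TODO()
cfg, _ := config.LoadDefaultConfig(ctx)
client := s3.NewFromConfig(cfg)
downloader := manager.NewDownloader(client)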
