
I noticed that when appending to a slice from goroutines inside a for loop, I sometimes get missing or blank data:

destSlice := make([]myClass, 0)

var wg sync.WaitGroup
for _, myObject := range sourceSlice {
    wg.Add(1)
    go func(closureMyObject myClass) {
        defer wg.Done()
        var tmpObj myClass
        tmpObj.AttributeName = closureMyObject.AttributeName
        destSlice = append(destSlice, tmpObj)
    }(myObject)
}
wg.Wait()

Sometimes, when I print all AttributeNames from destSlice, some elements are empty strings (""), and other times, some elements from sourceSlice are not present in destSlice.

Does my code have a data race, and does this mean that append is not thread-safe for concurrent use by multiple goroutines?


4 Answers


In Go, no value is safe for concurrent read/write; slices (which are slice headers) are no exception.

Yes, your code has data races. Run it with the -race option to verify.

Example:

type myClass struct {
    AttributeName string
}
sourceSlice := make([]myClass, 100)

destSlice := make([]myClass, 0)

var wg sync.WaitGroup
for _, myObject := range sourceSlice {
    wg.Add(1)
    go func(closureMyObject myClass) {
        defer wg.Done()
        var tmpObj myClass
        tmpObj.AttributeName = closureMyObject.AttributeName
        destSlice = append(destSlice, tmpObj)
    }(myObject)
}
wg.Wait()

Running it with

go run -race play.go

Output is:

==================
WARNING: DATA RACE
Read at 0x00c420074000 by goroutine 6:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0x69

Previous write at 0x00c420074000 by goroutine 5:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0x106

Goroutine 6 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb

Goroutine 5 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
==================
==================
WARNING: DATA RACE
Read at 0x00c42007e000 by goroutine 6:
  runtime.growslice()
      /usr/local/go/src/runtime/slice.go:82 +0x0
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0x1a7

Previous write at 0x00c42007e000 by goroutine 5:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0xc4

Goroutine 6 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb

Goroutine 5 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
==================
==================
WARNING: DATA RACE
Write at 0x00c420098120 by goroutine 80:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0xc4

Previous write at 0x00c420098120 by goroutine 70:
  main.main.func1()
      /home/icza/gows/src/play/play.go:20 +0xc4

Goroutine 80 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb

Goroutine 70 (running) created at:
  main.main()
      /home/icza/gows/src/play/play.go:21 +0x1cb
==================
Found 3 data race(s)
exit status 66

The solution is simple: use a sync.Mutex to protect writes to the destSlice value:

var (
    mu        = &sync.Mutex{}
    destSlice = make([]myClass, 0)
)

var wg sync.WaitGroup
for _, myObject := range sourceSlice {
    wg.Add(1)
    go func(closureMyObject myClass) {
        defer wg.Done()
        var tmpObj myClass
        tmpObj.AttributeName = closureMyObject.AttributeName
        mu.Lock()
        destSlice = append(destSlice, tmpObj)
        mu.Unlock()
    }(myObject)
}
wg.Wait()

You could also solve it in other ways, e.g. you could use a channel: send each value to be appended on the channel, and have a single designated goroutine receive from it and perform the appends.
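A minimal sketch of the channel approach (the function and channel names here are illustrative, not from the original code): the worker goroutines only send, and one collector goroutine is the sole writer of destSlice, so no mutex is needed.

```go
package main

import (
	"fmt"
	"sync"
)

type myClass struct {
	AttributeName string
}

// collectViaChannel gathers concurrently produced values through a channel.
// Only the collector goroutine ever appends to destSlice, so access to the
// slice header is never shared.
func collectViaChannel(sourceSlice []myClass) []myClass {
	ch := make(chan myClass)
	done := make(chan struct{})

	destSlice := make([]myClass, 0, len(sourceSlice))
	go func() {
		for obj := range ch {
			destSlice = append(destSlice, obj)
		}
		close(done)
	}()

	var wg sync.WaitGroup
	for _, myObject := range sourceSlice {
		wg.Add(1)
		go func(closureMyObject myClass) {
			defer wg.Done()
			ch <- closureMyObject // send instead of appending directly
		}(myObject)
	}
	wg.Wait()
	close(ch) // no more sends; the collector drains and exits
	<-done    // wait until the collector has appended everything

	return destSlice
}

func main() {
	dest := collectViaChannel(make([]myClass, 100))
	fmt.Println(len(dest)) // 100
}
```

Note the close(ch) / <-done handshake: closing the channel tells the collector there is nothing more to receive, and the done channel ensures all appends have finished before destSlice is read.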

Also note that while slice headers are not safe for concurrent writes, slice elements act as distinct variables, so different slice elements can be written concurrently without synchronization. See Can I concurrently write different slice elements

  • > In Go no value is safe for concurrent read/write, slices (which are slice headers) are no exception. False. See other answer that uses a slice to collect results from goroutines. – raine Mar 24 '22 at 21:41
  • @raine No, it is true. Slice _elements_ and slice _header_ are **not** the same. Slice elements act as distinct variables, so they can be written concurrently without synchronization, but the slice header cannot. So a slice value (header) _is not_ safe for concurrent read and write! See [Can I concurrently write different slice elements](https://stackoverflow.com/questions/49879322/can-i-concurrently-write-different-slice-elements/49879469#49879469) – icza Mar 25 '22 at 06:47

It's quite an old question, but there is another small improvement that gets rid of the mutex: write by index. Each goroutine uses its own index into a pre-allocated slice, so no synchronization is necessary.

destSlice := make([]myClass, len(sourceSlice))

var wg sync.WaitGroup
for i, myObject := range sourceSlice {
    wg.Add(1)
    go func(idx int, closureMyObject myClass) {
        defer wg.Done()
        var tmpObj myClass
        tmpObj.AttributeName = closureMyObject.AttributeName

        destSlice[idx] = tmpObj
    }(i, myObject)
}
wg.Wait()
  • Indeed, this is a handy solution for when you know the size of your collections beforehand! Many thanks for sharing! – Floating Sunfish Apr 24 '20 at 15:43
  • I like this solution a lot, and used it to fix my own problem. Thank you! I like it, that it doesn't require an import, and it also optimises a little bit, by allocating a fixed array instead of dynamically increasing it on appends. – Teemu Aug 24 '21 at 07:47

To give a more recent solution to this problem: Go has released a map type designed for concurrent use:

https://godoc.org/golang.org/x/sync/syncmap
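This type has since been promoted into the standard library as sync.Map (Go 1.9). A minimal sketch of concurrent writes without extra locking (the helper name storeSquares is illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// storeSquares writes n key/value pairs into a sync.Map from n goroutines.
// sync.Map is safe for concurrent use by multiple goroutines without
// additional locking.
func storeSquares(n int) *sync.Map {
	var m sync.Map
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			m.Store(i, i*i) // concurrent Store calls are safe
		}(i)
	}
	wg.Wait()
	return &m
}

func main() {
	m := storeSquares(10)
	v, ok := m.Load(3)
	fmt.Println(v, ok) // 9 true
}
```

Note that a map keyed by index gives you concurrent-safe storage, but unlike a slice it carries no ordering, so it is a fit only when order doesn't matter.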


The question has been answered, but my favorite way to solve this problem is with errgroup. One of the examples in the docs addresses this exact problem, plus one nice addition: the handling of errors.

Below is the meat of the example from the docs:

g, ctx := errgroup.WithContext(ctx)

searches := []Search{Web, Image, Video}
results := make([]Result, len(searches))
for i, search := range searches {
    i, search := i, search // https://golang.org/doc/faq#closures_and_goroutines
    g.Go(func() error {
        result, err := search(ctx, query)
        if err == nil {
            results[i] = result
        }
        return err
    })
}
if err := g.Wait(); err != nil {
    return nil, err
}
return results, nil

Hope this is helpful to those who are not aware of the errgroup package.
