
Another question made me wonder if the seqlock can be efficiently implemented with a volatile version counter in Java.

Here's a prototypical implementation, for the case where there will only ever be a single writer thread:

class Seqlock {
  private volatile long version = 0;
  private final byte[] data = new byte[10];

  void write(byte[] newData) {
    version++;  // 1
    System.arraycopy(newData, 0, data, 0, data.length);  // 2
    version++;  // 3
  }

  byte[] read() {
    long v1, v2;
    byte[] ret = new byte[data.length];
    do {
      v1 = version; // 4
      System.arraycopy(data, 0, ret, 0, data.length);  // 5
      v2 = version; // 6
    } while (v1 != v2 || (v1 & 1) == 1);
    return ret;
  }
}

The basic idea is to increment the version number before and after writing, and for readers to check they got a "consistent" read by verifying that the version numbers were the same and even, as odd indicates a "write in progress".

There are all sorts of happens-before relationships between the key actions in the writer thread and the reader threads, since version is volatile.

I can't, however, see what prevents the write at (2) from moving up above (1) and thus causing a reader to see a write in progress.

For example, consider the following synchronization order of the volatile reads and writes, using the labels in the comments next to each line (the non-volatile data reads and writes, which are not part of the synchronization order, are shown indented):

1a (version is now 1)
  2a (not part of the synchronization order)
3 (version is now 2)
4 (read version == 2, so 3 happens-before 4)
  5 (not part of the synchronization order)
6 (read version == 2, so 4 happens-before 6 and hence 3 happens-before 6)
1b (second write, version is now 3)
  2b (not part of the synchronization order)

It seems to me that there is no happens-before between 5 (the read of the data) and 2b (the second write of the data), so it is possible for 2b to occur before the read at 5 and for wrong data to be read.

If that's true, does declaring write() as synchronized help?

BeeOnRope
  • I can answer the last question easily: declaring the write synchronized is meaningless if read is not synchronized. As for whether or not volatile works on its own, I'm still pondering that one. – jtahlborn Feb 07 '13 at 04:29
  • May not be what you are looking for, but you can solve this problem very simply in java (with the same guarantees) using a simple volatile reference to a byte[] which is assigned in the `write()` method. – jtahlborn Feb 07 '13 at 04:38
  • I cannot understand your "synchronization order". How many threads do you have there? In the fourth line, what exactly do you mean by `4 (read version == 2, happens before 3)`? What is happening first? I'd suggest that you draw a "table", each column is a thread, each line is "an instant". – Bruno Reis Feb 07 '13 at 05:38
  • Also, have you actually compared the performance of this with a similar class using a trivial `synchronized {}` block? The JVM is very, very good at handling synchronized blocks, in that it will have almost no impact in performance if the lock is not acquired (compared to a volatile read/write pair), and if it is acquired, then it will happily spin the thread a little bit before actually giving up and stopping the thread (ie, it will loop, something similar to what your code is doing, but without the extra work of pointlessly calling System.arraycopy and discarding the result) – Bruno Reis Feb 07 '13 at 05:44
  • Assuming you have multiple readers, they may suffer in performance with a synchronized block. In this case, compare your code with a locking version that uses ReentrantReadWriteLock instead of synchronized. – Bruno Reis Feb 07 '13 at 05:47
  • [JSR-166e](http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/) contains an implementation, as well as a [potential alternative](http://cs.oswego.edu/pipermail/concurrency-interest/2012-June/009530.html). – Ben Manes Feb 07 '13 at 05:52
  • @jtahlborn - good enough, not sure why I didn't see that answer which was staring me in the face. I guess seqlock is mostly useful in environments without gc, where you can't rely on gc to safely clean up the dangling array. Sure, writing is slower in this approach, but seqlock only makes sense in read heavy scenarios anyway. Make that an answer and I'll accept it. – BeeOnRope Feb 12 '13 at 06:44
  • @BeeOnRope Actually, as my answer mentions, if you have a Read/Write lock and you do far more reads than writes, a seqlock or StampedLock helps tremendously with cache invalidation bottlenecks (a StampedLock sketch follows these comments). – John Vint Feb 12 '13 at 14:16
  • @JohnVint - yeah but I was comparing it to jtahlborn's suggestion of using an array reference swapped out by the writer. This case has no contention between readers either. – BeeOnRope Feb 12 '13 at 17:56
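
For reference, the StampedLock mentioned in the comments (part of JSR-166e, standardized in Java 8) exposes exactly this seqlock-style optimistic read. Below is a minimal sketch for a fixed-size buffer like the one in the question; the class name is just for illustration:

import java.util.concurrent.locks.StampedLock;

class StampedBuffer {
  private final StampedLock lock = new StampedLock();
  private final byte[] data = new byte[10];

  void write(byte[] newData) {
    long stamp = lock.writeLock();            // exclusive lock, analogous to an odd version
    try {
      System.arraycopy(newData, 0, data, 0, data.length);
    } finally {
      lock.unlockWrite(stamp);
    }
  }

  byte[] read() {
    byte[] ret = new byte[data.length];
    long stamp = lock.tryOptimisticRead();    // like reading an even version; no lock taken
    System.arraycopy(data, 0, ret, 0, data.length);
    if (!lock.validate(stamp)) {              // a write intervened during the copy
      stamp = lock.readLock();                // fall back to a real (pessimistic) read lock
      try {
        System.arraycopy(data, 0, ret, 0, data.length);
      } finally {
        lock.unlockRead(stamp);
      }
    }
    return ret;
  }
}

tryOptimisticRead() plays the role of reading the version counter and validate(stamp) plays the role of re-checking it after the copy; only on conflict does the reader touch shared lock state.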

2 Answers


In Java you can implement a shared buffer (or other object) very simply:

public class SharedBuffer {

  private volatile byte[] _buf;

  public void write(byte[] buf) {
    _buf = buf;
  }

  public byte[] read() {
    // maybe copy here if you are worried about passing out the internal reference
    return _buf;
  }
}

Obviously, this is not a "seqlock".
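
Note that this only gives consistent snapshots if the writer hands in a fresh array on every write and never mutates it after publication. A minimal usage sketch under that assumption (the demo class name is just for illustration):

import java.util.Arrays;

class SharedBufferDemo {
  public static void main(String[] args) {
    SharedBuffer shared = new SharedBuffer();

    // Writer side: publish a defensive copy so the array reachable through
    // the volatile field is never mutated after it becomes visible.
    byte[] produced = {1, 2, 3, 4};
    shared.write(Arrays.copyOf(produced, produced.length));

    // Reader side: the volatile read guarantees that everything written to
    // the array before publication is visible, so no torn read is possible.
    byte[] snapshot = shared.read();
    System.out.println(Arrays.toString(snapshot));
  }
}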

jtahlborn

A bit late, but it's a topic I am interested in. The original code, for reference:


class Seqlock {
  private volatile long version = 0;
  private final byte[] data = new byte[10];

  void write(byte[] newData) {
    version++;  // 1
    System.arraycopy(newData, 0, data, 0, data.length);  // 2
    version++;  // 3
  }

  byte[] read() {
    long v1, v2;
    byte[] ret = new byte[data.length];
    do {
      v1 = version; // 4
      System.arraycopy(data, 0, ret, 0, data.length);  // 5
      v2 = version; // 6
    } while (v1 != v2 || (v1 & 1) == 1);
    return ret;
  }
}

A volatile write is effectively a full fence at the hardware level on x86, and that is what prevents the write at (2) from moving up there: for volatile writes the JIT issues a LOCK-prefixed instruction, which makes all preceding writes visible to other cores before the write at (2) proceeds. However, the JIT compiler itself is still free to reorder.

In the write method above, version++ is a volatile read followed by a volatile write of version. It is effectively a load followed by a store, as shown below:

//version++ 
tmp = version;
version = tmp + 1;

It is possible for the System.arraycopy at (2) to be reordered such that the copy takes place just before version is written and made visible to other threads. The snippet below shows what could happen:

tmp = version;
System.arraycopy(newData, 0, data, 0, data.length);
version = tmp + 1;

If you can use sun.misc.Unsafe, then one way to make sure we get the correct ordering is an unsafe.storeFence(): it ensures that stores before the fence are not reordered with loads and stores after it. In the snippet below, (2) cannot move above the storeFence, nor can it move past the volatile write at (3) (a volatile write has release semantics in Java).

void write(byte[] newData) {
    version++;  // 1
    unsafe.storeFence();
    System.arraycopy(newData, 0, data, 0, data.length);  // 2
    version++;  // 3
}

There is a similar issue in read, too, where (5) can happen after (6). See the snippet below for an ordering that could occur:

byte[] read() {
    long v1, v2;
    byte[] ret = new byte[data.length];
    do {
      v1 = version; // 4
      v2 = version; // 6
      System.arraycopy(data, 0, ret, 0, data.length);  // 5
    } while (v1 != v2 || (v1 & 1) == 1);
    return ret;
}

An unsafe.loadFence() can ensure this is avoided; see the snippet below. The loadFence ensures the data read at (5) cannot move below the fence.

byte[] read() {
    long v1, v2;
    byte[] ret = new byte[data.length];
    do {
      v1 = version; // 4
      System.arraycopy(data, 0, ret, 0, data.length);  // 5
      unsafe.loadFence();
      v2 = version; // 6
    } while (v1 != v2 || (v1 & 1) == 1);
    return ret;
}
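
The snippets above call storeFence()/loadFence() on an unsafe instance that is never shown. sun.misc.Unsafe has no supported public accessor, so the usual (unsupported) way to get hold of it is the reflective hack below; the UnsafeHolder name is just for illustration. On Java 9+ equivalent static fence methods are also available on java.lang.invoke.VarHandle, without Unsafe.

import java.lang.reflect.Field;
import sun.misc.Unsafe;

class UnsafeHolder {
  static final Unsafe unsafe;

  static {
    try {
      // Unsafe.getUnsafe() throws a SecurityException for application code,
      // so read the private static "theUnsafe" field reflectively instead.
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      unsafe = (Unsafe) f.get(null);
    } catch (ReflectiveOperationException e) {
      throw new ExceptionInInitializerError(e);
    }
  }
}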

0x1saiah