0

Let's say I have a shared object with a field data. Multiple threads will share a reference to this object in order to access the field. The threads never access the object concurrently, though. Do I need to declare data as volatile?

Such a situation would be the following:

  • A class Counter defines a single field value and one method increment.
  • A thread increments the counter, then spawns another thread that increments the counter, and so on.

Given the very logic of the program, there is no concurrent access to the counter. The counter is, however, shared across multiple threads. Must the counter be volatile?
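
A rough sketch of what I mean (the class names are only illustrative):

```
// Illustrative sketch: each thread increments once, then hands off to a new thread,
// so accesses to the counter never overlap in time.
class Counter {
    int value;                      // deliberately not volatile (that is the question)

    void increment() {
        value++;
    }
}

class ChainedIncrement implements Runnable {
    final Counter counter;
    final int remaining;

    ChainedIncrement(Counter counter, int remaining) {
        this.counter = counter;
        this.remaining = remaining;
    }

    @Override
    public void run() {
        counter.increment();        // only this thread touches the counter right now
        if (remaining > 0) {
            new Thread(new ChainedIncrement(counter, remaining - 1)).start();
        } else {
            System.out.println(counter.value);
        }
    }
}

public class Main {
    public static void main(String[] args) {
        new Thread(new ChainedIncrement(new Counter(), 9)).start();
    }
}
```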

Another variant of the situation is when multiple threads manipulate an object X that is plain data, but take turns in time (so that X is never accessed concurrently) via another object Y that relies on concurrency control (wait, notify, synchronized). Should the fields of object X be volatile?
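
And a sketch of this second variant (again with names that are only illustrative; Y's monitor and wait/notify are used purely to make the threads take turns):

```
// Illustrative sketch: plain data X, turn-taking coordinated through Y's monitor.
class X {
    int data;                       // plain field, no volatile
}

class Y {
    private boolean firstThreadsTurn = true;

    synchronized void waitForMyTurn(boolean iAmFirst) throws InterruptedException {
        while (firstThreadsTurn != iAmFirst) {
            wait();                 // block until the other thread passes the turn
        }
    }

    synchronized void passTurn() {
        firstThreadsTurn = !firstThreadsTurn;
        notifyAll();
    }
}

public class Alternating {
    static Runnable worker(X x, Y y, boolean first) {
        return () -> {
            try {
                for (int i = 0; i < 5; i++) {
                    y.waitForMyTurn(first);   // X is only ever touched by one thread at a time
                    x.data++;
                    y.passTurn();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
    }

    public static void main(String[] args) {
        X x = new X();
        Y y = new Y();
        new Thread(worker(x, y, true)).start();
        new Thread(worker(x, y, false)).start();
    }
}
```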

ewernli

5 Answers

3

Studying the entire JLS chapter on the Java Memory Model is highly recommended – mandatory, in fact – for anyone doing concurrency in Java. Your case, specifically, is covered in JLS, 17.4.4:

"An action that starts a thread synchronizes-with the first action in the thread it starts."

This means that for your first scenario you don't need volatile. However, it would be good practice to have it anyway to be robust to future changes to the code. You should really have a good reason not to have volatile, which would be only in the case of an incredibly high read rate (millions per second at the very least).
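
For instance, a minimal sketch of the hand-off (my own illustrative code, not taken from the question): the child thread is guaranteed to see the parent's write even without volatile, because of that synchronizes-with edge.

```
// Sketch: a write made before Thread.start() is visible to the started thread,
// even without volatile, because start() synchronizes-with the thread's first action.
class Counter {
    int value;                      // deliberately not volatile
}

public class StartHandOff {
    public static void main(String[] args) {
        Counter counter = new Counter();
        counter.value++;                          // happens-before the child's first action
        new Thread(() -> {
            System.out.println(counter.value);    // guaranteed to print 1
            counter.value++;                      // the chain can continue the same way
        }).start();
    }
}
```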

Marko Topolnik
1

In general, the Java Memory Model does not guarantee that a subsequent thread will see the incremented value of the counter. So if you work with a single thread you don't need to do anything with volatile, but if several threads may read the variable you need to ensure visibility of the changes to the other threads, either with volatile or with synchronization/locks.

The Thread.start method, however, imposes the barrier, so visibility is assured and you may not need volatile here at all. But I would add it anyway.
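
For example (a rough sketch, not the code from the question): if a separate, already-running thread polls a plain field, it may never observe the update; declaring the field volatile guarantees that the change becomes visible.

```
// Sketch: the reader may spin forever on a stale value if 'done' is not volatile;
// with volatile, the write is guaranteed to become visible to the reader.
public class VisibilityDemo {
    static volatile boolean done;   // try removing volatile: the loop may never terminate

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!done) {
                // busy-wait until the writer's update becomes visible
            }
            System.out.println("saw the update");
        });
        reader.start();

        Thread.sleep(100);
        done = true;                // volatile write
        reader.join();
    }
}
```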

jdevelop
  • 2
    The reordering doesn't happen at the bytecode level---these issues arise only with JIT-compiled code. Also, your use of the term "memory barrier" indicates that you misunderstand it somewhat. `volatile` is there to **impose** the barrier, not **break** it. – Marko Topolnik Oct 11 '12 at 12:01
  • I am not agreeing that it imposes a memory barrier. A volatile read / volatile write forces the write operations to happen before the reads, **AND** flushes the changes from the write into **ALL** threads' "local" copies of not just that variable, but all variables from main memory. – jdevelop Oct 11 '12 at 13:13
  • As I said, you need to read up on the true meaning of that term. What you have just described is called "memory barrier", so you basically disagree with yourself :) – Marko Topolnik Oct 11 '12 at 13:15
  • it's not about memory barrier itself, but about the fact that using volatile breaks that barrier - not imposes it. – jdevelop Oct 11 '12 at 13:17
  • What can I say, you are wrong. I suggest reading up on it to improve your knowledge. – Marko Topolnik Oct 11 '12 at 13:29
  • Would you mind providing a quote with a reference? Just saying "you are wrong" is nothing. – jdevelop Oct 11 '12 at 13:58
  • I'm not trying to argue, just help you out and advise. If you don't care about your own education, I won't mind. – Marko Topolnik Oct 11 '12 at 14:01
  • I'm sorry, do you really want me to google it for you? Maybe the [wikipedia entry on the subject](http://en.wikipedia.org/wiki/Memory_barrier) can help? The web is chock-full of related material, why do you think you need me for your education? My contribution was pointing out where you are wrong and giving you the correct definition. If someone did that for me, I'd thank him/her and do the googling myself. Try asking "what is a memory barrier" over here at SO and count the seconds it will take it to get closed. – Marko Topolnik Oct 11 '12 at 14:12
  • I (think that I) know what the memory barrier is, and I know that it is always there unless you break it either with volatile or with synchronization. I read "Java Concurrency in Practice" and the JLS carefully, especially chapter 17, and nothing prevents me from thinking this way. If you have an opposite definition, that volatile actually **raises** a memory barrier - please share the source of this. I can't find anything about that on Google. – jdevelop Oct 11 '12 at 14:27
  • The JLS never even mentions the term as it is an implementation detail at the CPU level. The only context you'll ever encounter the phrase "break the memory barrier" will be when talking about the limit of addressable memory, a completely unrelated concept. – Marko Topolnik Oct 11 '12 at 14:33
  • Memory barrier == synchronization barrier, isn't it (in the scope of the JVM)? – jdevelop Oct 11 '12 at 14:37
  • You can also help yourself to [this article by Martin Thompson](http://mechanical-sympathy.blogspot.co.uk/2011/07/memory-barriersfences.html), which opens with "... memory barriers, or fences, that make the memory state within a processor visible to other processors." – Marko Topolnik Oct 11 '12 at 14:38
  • Memory barrier is one way of implementing the semantics of the synchronization barrier, on a specific set of CPU architectures popular today. Keeping in mind, of course, that the acquisition of a lock is a mechanism entirely separate from the memory barrier. – Marko Topolnik Oct 11 '12 at 14:39
  • OK, that's fair enough, so I was talking about the synchronization barrier rather than the memory barrier. – jdevelop Oct 11 '12 at 14:44
  • Do you then think it should be said that `volatile` *breaks* the synchronization barrier, or that it imposes it? – Marko Topolnik Oct 11 '12 at 14:45
  • From this point of view I would say that **volatile** breaks the barrier, because if it **imposed** it - then no other thread would be able to read its most recent value without synchronizing on the monitor holder for that variable. – jdevelop Oct 11 '12 at 15:26
  • I can only hope you as an answerer on SO will take responsibility and not promulgate this wrong point of view any further. – Marko Topolnik Oct 11 '12 at 15:29
  • The term "breaking the synchronization barrier" is as meaningless as your original "breaking the memory barrier". Nobody uses it and, even if someone did, that would just mean they invented it on the spot and would need to define it. – Marko Topolnik Oct 11 '12 at 15:42
  • Okay, what I found so far in JCiP: "The visibility guarantees provided by synchronized and volatile may entail using special instructions called memory barriers that can flush or invalidate caches". So this may mean that volatile "raises" a memory barrier. But it's kind of counter-intuitive. – jdevelop Oct 11 '12 at 15:43
  • 1
    It is counterintuitive if you make the wrong picture in your mind. The picture that phrase is meant to conjure is of write and read actions sort of floating freely around their "semantic" point of occurrence, until they encounter the barrier they cannot cross: every action that happens before the barrier in program order stays before the barrier in actual execution. – Marko Topolnik Oct 11 '12 at 15:46
  • ok, will narrow down my answer to visibility of changes. Any objections? – jdevelop Oct 11 '12 at 15:50
  • 1
    Your last sentence is in essence correct and its only problem is that it uses the phrase "breaks the barrier" to mean "imposes the barrier". It would be a loss if you removed that sentence since it gives important information. – Marko Topolnik Oct 11 '12 at 15:53
  • Well, from the point of view of CPU caches the barrier is imposed. Still, it's all about the visibility stuff, which hides the barriers. – jdevelop Oct 11 '12 at 15:57
  • Talking about memory barriers when discussing Java semantics is colloquial usage anyway, since Java doesn't define that term. You can always rephrase the sentence in official JLS terms, which would be the most precise way to do it in any case. – Marko Topolnik Oct 11 '12 at 16:00
1

Regarding the second part of your question: if you do not use volatile on your variable X, it is possible that a given thread will keep using a locally cached version of the variable's value. Your use of variable Y as a lock works very well as a means to ensure that the two threads do not write to X concurrently, but it can't guarantee that one of the threads won't be looking at stale data.

From the JLS: "A write to a volatile variable v synchronizes-with all subsequent reads of v by any thread". The way I read this is that the spec offers no guarantees about reads of variables other than v.

mircealungu
0

You've only told part of the story with the counter. The incrementing part seems fine: as Marko points out, there is a HB edge at Thread.start. But who's reading this counter? If it's anybody other than these spawned threads, and you care at all about seeing an up-to-date value, then the field needs to be volatile. And if the counter is a long (or double), you need it to be volatile even if you don't care about stale values, because otherwise a reader could see a torn value: the JLS does not guarantee that reads and writes of non-volatile long and double fields are atomic.
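
A minimal sketch of that point, assuming some separate monitoring thread reads the value (the names here are mine):

```
// Sketch: a 64-bit counter read by another thread. Without volatile, the JLS allows
// a read of a non-volatile long to observe two halves from different writes;
// volatile makes the reads and writes atomic and keeps the value visible.
class StatCounter {
    private volatile long value;

    void increment() { value++; }       // note: ++ itself is still not atomic across threads
    long current()   { return value; }  // safe to call from a monitoring thread
}
```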

yshavit
0

Mutations made by a thread are guaranteed to become visible to other threads only when a happens-before relationship between the threads is established. When the relationship is established, all previous mutations become visible.

An object that isn't correctly synchronized when taken in isolation can still be safe to use if another object correctly synchronizes accesses to it (see piggybacking in Java Concurrency in Practice).

In the two cases described in the question, I think no additional synchronization is needed:

  • Thread.start establishes a happens-before relationship, so all mutations from previous threads are visible.
  • Accesses to object X are synchronized by object Y, which establishes happens-before relationships and makes the changes to X visible (I expanded a bit more in a blog post).

If you know that an object X is never accessed concurrently, chances are there is an object Y that indirectly synchronizes accesses to X, so it's fine. The only unsafe case I see is if the threads rely on time itself (e.g. with Thread.sleep or by looping until some time has elapsed) to guarantee mutual exclusion: in that case no happens-before relationship is established.
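
As an illustration of that last unsafe case (a sketch with made-up code): nothing below establishes a happens-before edge between the two threads, so the second thread is allowed to read a stale value.

```
// Sketch of the unsafe pattern: the threads take turns purely by sleeping,
// so no happens-before relationship is established between them.
public class TimeBasedExclusion {
    static int data;                        // plain field, no volatile, no locks

    public static void main(String[] args) {
        new Thread(() -> data = 42).start();

        new Thread(() -> {
            try {
                Thread.sleep(1000);         // "surely the other thread is done by now"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println(data);       // may legally print 0
        }).start();
    }
}
```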

ewernli