
The legacy code is as follows:

private static final ReentrantLock RE_ENTRANT_LOCK = new ReentrantLock(true);

private void newRunTransaction(final OrderPayment payment, final Address billingAddress, final String command) {

    TransactionResponse response = null;
    RE_ENTRANT_LOCK.lock();
    try {
        SoapServerPortType client = getClient();
        // ...

We consider the lock at the beginning of the method overkill, because we should be able to run transactions in multiple threads. On the other hand, if the OrderPayments in two parallel threads relate to the same order, then those transactions cannot run in parallel.

Is there an elegant and efficient way to ensure that transactions related to a single order do not run in parallel, while all other transactions remain multithreaded?

Alex
  • In the most basic version `synchronized(payment)` wherever you access it. That excludes concurrent access to just that one object instead of to every transaction. – zapl Jan 14 '15 at 15:53
  • @zapl, this will still lock every transaction regardless of order – Alex Jan 14 '15 at 16:04
    Re, "if OrderPayment is related to the same order in 2 parallel threads then we cannot run transactions in parallel:" I don't know your application, and I don't know much about databases, but isn't that the whole point of transactions? You run two or more at the same time, and if they don't interfere with one another, then they all succeed. If they do interfere, then at least one of them fails, and it's up to the application to figure out how to handle the failure. – Solomon Slow Jan 14 '15 at 17:16
  • @James, a good point; the only drawback is that in MySQL failing transactions can lock the DB itself. The other drawback is that in this particular case we also send payments to a third-party server, so we do not have control over the transactions. – Alex Jan 14 '15 at 17:20
  • It is impossible for multiple threads to alter an object without some loss of performance, in that a thread blocking or stalling (because of a cache miss) is unavoidable. – Raedwald Jan 14 '15 at 21:46

2 Answers


[Update]: Added solutions using a WeakHashMap cache so that cleaning up unused locks is left to the garbage collector. They were developed here: Iterating a WeakHashMap

If the payment has a reference to its order and equal orders are the same objects (order1.equals(order2) <=> order1 == order2), you can use a synchronized block:

synchronized (payment.getOrder()) {
    // ...
}

Caution: you should ensure that payment.getOrder() does not yield null, or use a dummy object for that case.
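A minimal sketch of that guard, with a shared dummy monitor for the null case (the Order/OrderPayment classes here are stand-ins for the question's real ones, and the method names are illustrative):

```java
class Order {}

class OrderPayment {
    private final Order order;
    OrderPayment(Order order) { this.order = order; }
    Order getOrder() { return order; }
}

class PaymentGuard {
    // Shared dummy monitor used whenever a payment has no order.
    private static final Object NULL_ORDER_LOCK = new Object();

    static Object monitorFor(OrderPayment payment) {
        Order order = payment.getOrder();
        return (order != null) ? order : NULL_ORDER_LOCK;
    }

    static void runTransaction(OrderPayment payment, Runnable work) {
        // Payments for the same order instance are serialized;
        // payments for different orders run concurrently.
        synchronized (monitorFor(payment)) {
            work.run();
        }
    }
}
```

Note that all payments with a null order then serialize on the one fallback monitor, which is usually acceptable if that case is rare.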

Edit: a possible solution if order1 == order2 does not hold:

You could try holding a unique lock per order identifier:

static Map<Long, Object> lockCache = new ConcurrentHashMap<>();

and in the method

Object lock = new Object();
Object oldlock = lockCache.putIfAbsent(payment.getOrder().getUid(), lock);
if (oldlock != null) {
    lock = oldlock;
}

synchronized(lock) {
    // ...
}

Don't forget to remove the key when the work is done.
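On Java 8+, the put-if-absent dance above collapses into a single computeIfAbsent call; a minimal sketch, assuming the order UID is a Long (the class and method names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

class OrderLocks {
    private static final ConcurrentHashMap<Long, Object> LOCKS = new ConcurrentHashMap<>();

    // Atomically returns the existing monitor for this UID, or installs a fresh one.
    static Object lockFor(Long orderUid) {
        return LOCKS.computeIfAbsent(orderUid, k -> new Object());
    }

    // Caution: removal races with a thread that has already fetched,
    // but not yet entered, this monitor.
    static void release(Long orderUid) {
        LOCKS.remove(orderUid);
    }
}
```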

To let the garbage collector remove your unused keys, you could use a WeakHashMap structure:

private static Map<Long, Reference<Long>> lockCache = new WeakHashMap<>();

public static Object getLock(Long i)
{
    Long monitor = null;
    synchronized(lockCache) {
        Reference<Long> old = lockCache.get(i);
        if (old != null)
            monitor = old.get();

        // if no monitor exists yet
        if (monitor == null) {
            /* Clone i to avoid strong references
               to the map's key besides the object
               returned by this method.
            */
            monitor = new Long(i);
            lockCache.remove(monitor); //just to be sure
            lockCache.put(monitor, new WeakReference<>(monitor));
        }

    }

    return monitor;
}

When you need something more complex, like a reentrant lock, you may use a variation of the following solution:

private static Map<Long, Reference<ReentrantLock>> lockCache = new WeakHashMap<>();
private static Map<ReentrantLock, Long> keyCache = new WeakHashMap<>();

public static ReentrantLock getLock(Long i)
{
    ReentrantLock lock = null;
    synchronized(lockCache) {
        Reference<ReentrantLock> old = lockCache.get(i);
        if (old != null)
            lock = old.get();

        // if no lock exists or got cleared from keyCache already but not from lockCache yet
        if (lock == null || !keyCache.containsKey(lock)) {
            /* Clone i to avoid strong references
               to the map's key besides the object
               returned by this method.
            */
            Long cacheKey = new Long(i); 
            lock = new ReentrantLock();
            lockCache.remove(cacheKey); // just to be sure
            lockCache.put(cacheKey, new WeakReference<>(lock));
            keyCache.put(lock, cacheKey);
        }                
    }

    return lock;
}
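Whichever cache variant hands out the lock, acquisition should follow the standard lock()/try/finally idiom so the lock is released even when the transaction throws. A sketch, with a hypothetical ConcurrentHashMap-backed cache standing in for the WeakHashMap-based getLock above:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

class OrderTransactions {
    // Illustrative cache; the WeakHashMap-based getLock could be used instead.
    private static final ConcurrentHashMap<Long, ReentrantLock> LOCKS =
            new ConcurrentHashMap<>();

    static void runForOrder(Long orderUid, Runnable transaction) {
        ReentrantLock lock = LOCKS.computeIfAbsent(orderUid, k -> new ReentrantLock());
        lock.lock();
        try {
            transaction.run(); // serialized per order UID
        } finally {
            lock.unlock();     // always released, even if the transaction throws
        }
    }
}
```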
flo
  • The orders to protect are not necessarily the same objects; they just have the same order number. Can we write: synchronized(new Long(payment.getOrder().getUid()))? Probably not – Alex Jan 14 '15 at 16:36
  • Also, orders do not have the equals method overridden. – Alex Jan 14 '15 at 16:49
    synchronized(new Long(payment.getOrder().getUid())) will not work, because even if the UIDs are the same, the Long objects are distinct objects and therefore no mutual exclusion occurs. – flo Jan 14 '15 at 16:58
  • Yes, the solution with ConcurrentHashMap is what I tend to do – Alex Jan 14 '15 at 18:56
  • Take caution when removing keys/locks: It's not that simple as it may seem when thinking only about two players. – flo Jan 14 '15 at 20:21
  • @anarinsky added solutions which use the garbage collector to remove unused locks – flo Jan 20 '15 at 07:46

How about assigning a read-write lock to each OrderPayment? http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReadWriteLock.html

When you only read the order, you acquire the read lock, so multiple threads that only read that order will not block each other. Only when one thread modifies the order will other threads be blocked.
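A sketch of that idea, assuming each order object owns its own ReentrantReadWriteLock (the class name and the amount field here are illustrative, not from the question):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class GuardedOrder {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int amount;

    int readAmount() {
        rw.readLock().lock();   // shared: many readers may hold this at once
        try {
            return amount;
        } finally {
            rw.readLock().unlock();
        }
    }

    void updateAmount(int newAmount) {
        rw.writeLock().lock();  // exclusive: blocks all readers and writers
        try {
            amount = newAmount;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```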

I also answered with a read-write lock implementation before: https://stackoverflow.com/a/27869471/1646996

qqibrow
  • That's just an example to help you better understand read-write locks. You can find a lot of usage examples online, for example http://www.javacodegeeks.com/2012/04/java-concurrency-with-readwritelock.html Usage is simple. The key is to understand why you should use it and how it could improve performance. – qqibrow Jan 14 '15 at 19:08
  • I don't think that would work as needed. As far as I understood, different instances of OrderPayment could contain references to the same (equal, but not necessarily ==) order. – flo Jan 16 '15 at 07:32
  • @flo All right. I didn't consider the situation in your later "Edit", but all your locks could be changed to read-write locks. What do you think? – qqibrow Jan 16 '15 at 07:53
  • @qqibrow Yes, you could replace the simple Object instances my solution uses as monitors with RW-locks. In fact, I myself would use AtomicInteger objects and keep count of the PaymentOrders (respectively threads) using them. When you synchronize these operations with the cache map correctly, you can delete the key when AtomicInteger.get()==0 after your work is done. This way, you can avoid memory leaks caused by caching. Using RW-locks for that seems tricky, as you have to take care that no thread is holding a reference to it while not locking on it (between get and lock.lock()). – flo Jan 16 '15 at 09:18
  • @qqibrow Maybe a simple datastructure holding a counting field and a rw lock field would suffice for both problems ;-) – flo Jan 16 '15 at 09:26
  • @flo If my understanding is correct, you can directly use a weak data structure to avoid memory leaks. – qqibrow Jan 16 '15 at 15:01
  • @qqibrow Yes, but you have to make sure that you hold a reference to the key the WeakHashMap uses to map the order to the monitor until that monitor is not used by any thread anymore. Just getting the monitor object from the cache and forgetting about the key can result in two monitor objects for the same order due to garbage collection. On the other hand, you have to make sure your key is no longer used at some point, so maybe using order.getUid() directly is too strong. – flo Jan 16 '15 at 15:08