I'm trying to implement an array-based lock-free binary search tree. To make it work concurrently, I think I should lock a range of the array inside each method. How can I achieve this?

public int getIndex(int data) {
    int index = 1;
    while (index < this.capacity && this.array[index] != 0) {
        if (data < this.array[index])
            index = index * 2;
        else if (data > this.array[index])
            index = index * 2 + 1;
        else
            break;
    }

    if (index >= this.capacity)
        return -1;
    else
        return index;
}



public void insert(int data) {
    int index = this.getIndex(data);
    if (index == -1) {
        this.reallocate();
        index = this.getIndex(data);
        this.array[index] = data;
    } else {
        if (this.array[index] == 0)
            this.array[index] = data;
    }
}

In getIndex I find where to put the data, and insert then writes it there. So in getIndex I should lock the part of the array that I'm iterating over.

hitz
Yusuf Can Gürkan
    Am i missing something or your statements: "i should lock a range of array " and " lock-free binary search tree" are in contradiction ? – John May 29 '15 at 13:05
    Note on code quality: the name of the method is `getIndex()` ... are you sure that an operation with a name like that should **lock** something? You know; `getIndex` sounds so "side effect free". – GhostCat May 29 '15 at 13:13
  • Using an array as backing is going to be really inefficient unless you have very, very balanced trees. And balancing is going to require yet more locking. – tucuxi May 29 '15 at 13:35

1 Answer


If your tree does not balance itself after insertions, it can grow to huge sizes (inserting integers from 1 to 100, in that order, would require an array of size 2^100; you don't have that much RAM). So I will assume your tree does some rebalancing to keep memory sizes in check.

To find a key, you will need to traverse from the root through up to log(size-of-array) additional nodes. If rebalancing is involved, all nodes on this path, up to the root, may be reshuffled. Since the root can change, no two writes can happen at the same time (where a write is an insertion or a deletion). Additionally, rebalancing involves moving chunks of the array around; replacing the root requires moving halves of the whole tree around, and that is very expensive. I strongly doubt that, being worried enough about performance to consider partially locking parts of the backing array, you want to get dragged down to expensive, O(n) operations on your tree.
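Given that, one straightforward compromise is a single lock that serializes writers while leaving readers alone. The following is only a sketch of that idea, reusing the question's layout (root at index 1, 0 meaning "empty"); the class name, contains(), and reallocate() are my own additions, not code from the question:

```java
import java.util.Arrays;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: one global lock serializes all writes, since any insert may
// trigger a reallocation (and, in a balancing tree, a root change).
class SingleWriterBst {
    private final ReentrantLock writeLock = new ReentrantLock();
    private int[] array = new int[16]; // index 0 unused, root at index 1
    private int capacity = 16;

    public void insert(int data) {
        writeLock.lock();              // writers exclude each other
        try {
            int index = getIndex(data);
            if (index == -1) {         // ran off the end: grow the array
                reallocate();
                index = getIndex(data);
            }
            if (array[index] == 0)     // only fill genuinely empty slots
                array[index] = data;
        } finally {
            writeLock.unlock();        // always release, even on exceptions
        }
    }

    public int getIndex(int data) {
        int index = 1;
        while (index < capacity && array[index] != 0) {
            if (data < array[index])      index = index * 2;
            else if (data > array[index]) index = index * 2 + 1;
            else break;                    // key already present
        }
        return index >= capacity ? -1 : index;
    }

    public boolean contains(int data) {
        int index = getIndex(data);
        return index != -1 && array[index] == data;
    }

    private void reallocate() {
        capacity *= 2;                 // one doubling adds one tree level
        array = Arrays.copyOf(array, capacity);
    }
}
```

Note that one doubling always suffices for the failed insert: the loop can only reach indices below 2 * capacity before bailing out, so after reallocation the same search lands on a valid slot.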

Reads can be a lot more concurrent: they only go down the tree once, and don't rebalance anything upwards (if you don't mind running out of memory, a non-rebalancing tree also exhibits this for writes, excluding deletes). Any number of concurrent read operations can work at the same time, and you can add things without worrying about bad reads at all: every read will either find what it was looking for, or may occasionally return "not found" if it arrived slightly late while another thread was still writing the value in. You do need element-level visibility, though: in Java, marking the array field volatile only publishes the reference, not writes to its elements, so use something like AtomicIntegerArray, or other threads may never see the changes.
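For the non-rebalancing case, here is a sketch of what lock-free reads plus CAS-published writes could look like. This is my illustration, not the question's code: the class name is invented, the capacity is fixed (no reallocation), and 0 still marks an empty slot as in the question:

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Sketch: readers traverse without any lock; writers publish each node
// with compareAndSet, so a concurrent reader sees either 0 ("not there
// yet") or the final value, never a torn or stale write.
class LockFreeReadBst {
    private final AtomicIntegerArray array = new AtomicIntegerArray(1024);

    // Lock-free read: walks down the tree once, never rebalances.
    public boolean contains(int data) {
        int index = 1;
        while (index < array.length()) {
            int node = array.get(index);      // volatile read
            if (node == 0) return false;      // empty slot: not found (yet)
            if (data < node)      index = index * 2;
            else if (data > node) index = index * 2 + 1;
            else return true;
        }
        return false;
    }

    // Insert without rebalancing; returns false on duplicates or overflow.
    public boolean insert(int data) {
        int index = 1;
        while (index < array.length()) {
            int node = array.get(index);
            if (node == 0) {
                if (array.compareAndSet(index, 0, data))
                    return true;              // we published the node
                node = array.get(index);      // lost a race: re-read, descend
            }
            if (data < node)      index = index * 2;
            else if (data > node) index = index * 2 + 1;
            else return false;                // duplicate key
        }
        return false;                         // ran past fixed capacity
    }
}
```

The compareAndSet is what makes two racing writers safe: the loser re-reads the slot and simply keeps descending past the node the winner just published.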

There remains a special case concerning a non-rebalancing tree and delete(). Deletes (may) affect nodes outside a direct downwards path (see the pictures here). Moving (sub)trees around is expensive (linear) in an array, and the parts you intend to move would need to be locked out, because invalid reads could be made during the move. Still, I strongly suspect that you are not going to implement delete() at all, given its bad efficiency.

tucuxi