I'm writing a modified Kademlia P2P system here, but the problem I'm describing is very similar to the implementation of the original one.
So, what's the most efficient way of implementing k-buckets? What matters to me are access time, parallelism (read & write) and memory consumption.
I thought about doing it with a ConcurrentLinkedQueue and a ConcurrentHashMap, but that's pretty redundant and nasty, isn't it?
At the moment I'm simply synchronizing on a LinkedList.
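To make the redundancy concrete, this is roughly the combination I had in mind (a sketch only: String node IDs stand in for my Neighbour class, and the class name is made up). The queue keeps least-recently-seen order while the map provides fast membership checks, so every neighbour lives in two structures:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch only: String stands in for my Neighbour class.
class QueueMapBucket {
    // The queue keeps least-recently-seen order, the map gives O(1) lookups.
    private final ConcurrentLinkedQueue<String> order = new ConcurrentLinkedQueue<>();
    private final ConcurrentHashMap<String, String> members = new ConcurrentHashMap<>();

    void sync(String id) {
        // Every neighbour is stored twice -- this is the redundancy I dislike.
        if (members.putIfAbsent(id, id) == null) {
            order.add(id);
        } else {
            // Move to the tail; remove() is O(n) on the queue anyway.
            order.remove(id);
            order.add(id);
        }
    }

    boolean contains(String id) {
        return members.containsKey(id);
    }

    String leastRecentlySeen() {
        return order.peek();
    }
}
```

Note that sync() is not atomic across the two structures (a concurrent removal could interleave between the map and queue updates), which is part of why it feels nasty to me.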
Here is my code:
import java.util.LinkedList;

// Neighbour, Node and ResolveException are my own classes.
// Neighbour.equals() matches by node ID, which is why indexOf(Node) works below.
class Bucket {

    private final LinkedList<Neighbour> neighbours;
    private final Object lock;

    Bucket() {
        neighbours = new LinkedList<>();
        lock = new Object();
    }

    // Insert a neighbour, or move a known one to the tail (most recently seen).
    void sync(Neighbour n) {
        synchronized (lock) {
            int index = neighbours.indexOf(n); // O(n) scan
            if (index == -1) {
                neighbours.add(n);
                n.updateLastSeen();
            } else {
                Neighbour old = neighbours.remove(index);
                neighbours.add(old);
                old.updateLastSeen();
            }
        }
    }

    void remove(Neighbour n) {
        synchronized (lock) {
            neighbours.remove(n);
        }
    }

    // Return the neighbour matching n, or rotate the head (least recently
    // seen neighbour) to the tail and use it as the next hop.
    Neighbour resolve(Node n) throws ResolveException {
        synchronized (lock) {
            int index = neighbours.indexOf(n); // O(n) scan
            if (index == -1) {
                if (neighbours.isEmpty()) {
                    // poll() would return null here, so fail explicitly
                    throw new ResolveException("bucket is empty");
                }
                Neighbour nextHop = neighbours.poll();
                neighbours.add(nextHop);
                return nextHop;
            } else {
                return neighbours.get(index);
            }
        }
    }
}
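For comparison, one alternative I'm considering instead of the LinkedList: a LinkedHashMap constructed with accessOrder=true maintains least-recently-accessed order by itself, so a single synchronized structure gives O(1) lookup plus the same move-to-tail behaviour. This is a sketch under simplified assumptions (String node IDs instead of my Neighbour/Node classes, names made up):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;

// Sketch only: String node IDs stand in for Neighbour/Node.
class LruBucket {
    // accessOrder=true: get() and put() move an entry to the tail,
    // so iteration order is least-recently-seen first.
    private final LinkedHashMap<String, String> neighbours =
            new LinkedHashMap<>(16, 0.75f, true);

    synchronized void sync(String id) {
        neighbours.put(id, id); // insert, or touch a known entry to refresh it
    }

    synchronized void remove(String id) {
        neighbours.remove(id);
    }

    synchronized String resolve(String id) {
        String hit = neighbours.get(id); // a hit also refreshes its order
        if (hit != null) {
            return hit;
        }
        // Unknown target: fall back to the least recently seen neighbour.
        Iterator<String> it = neighbours.keySet().iterator();
        if (!it.hasNext()) {
            return null; // empty bucket
        }
        // Touching the head entry via get() moves it to the tail,
        // matching the poll()+add() rotation of my LinkedList version.
        return neighbours.get(it.next());
    }
}
```

The coarse synchronized methods still serialize readers, but indexOf()'s O(n) scan goes away, and there is only one structure to keep consistent.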
Don't be confused by the missing bucket-size check: I have implemented the neighbour eviction process separately.