
Now I am using Hadoop to process data that will finally be loaded into the same table. I need a shared sequential number generator to generate an ID for each row. Currently I generate the unique number with the following approach:

1) Create a text file in HDFS, e.g. test.seq, that stores the current sequential number.

2) Use a lock file, ".lock", to control concurrency. Suppose two tasks are processing the data in parallel. If task1 wants to get the number, it checks whether the lock file exists. If it does, that means task2 is currently reading the number from test.seq, so task1 has to wait. Once task2 has acquired the number, it overwrites the old value with the value incremented by 1 and deletes the ".lock" file. When task1 sees the .lock file disappear, it first creates a ".lock" file of its own and then gets the sequential number the same way. (A sketch of this protocol in code follows below.)
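
For concreteness, here is a minimal sketch of that protocol against Hadoop's FileSystem API (the paths are made-up examples). Note that "check, then create" is a race between tasks; createNewFile() is used below because it fails atomically when the file already exists:

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSequence {
    private static final Path SEQ  = new Path("/tmp/test.seq");      // assumed location
    private static final Path LOCK = new Path("/tmp/test.seq.lock"); // assumed location

    // Returns the current number and advances the shared counter by one.
    public static long next(FileSystem fs) throws Exception {
        // Take the lock: createNewFile returns false if the file already
        // exists, so two tasks cannot both believe they created it.
        while (!fs.createNewFile(LOCK)) {
            Thread.sleep(100); // another task holds the lock; wait
        }
        try {
            long current = 0;
            if (fs.exists(SEQ)) {
                try (FSDataInputStream in = fs.open(SEQ)) {
                    byte[] buf = new byte[32];
                    int n = in.read(buf);
                    if (n > 0) {
                        current = Long.parseLong(
                                new String(buf, 0, n, StandardCharsets.UTF_8).trim());
                    }
                }
            }
            // Overwrite test.seq with the incremented value.
            try (FSDataOutputStream out = fs.create(SEQ, true)) {
                out.write(Long.toString(current + 1).getBytes(StandardCharsets.UTF_8));
            }
            return current;
        } finally {
            fs.delete(LOCK, false); // release the lock
        }
    }
}
```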

However, I am not sure whether this approach is practical, because I keep the .lock and test.seq files in HDFS: even after the content of test.seq has been changed by task1, task2 might not become aware of the change immediately. Other tasks learn about data in HDFS through the NameNode, so the DataNode would first have to report the change to the NameNode before the other tasks were notified. Is that correct?

Another idea is to run a daemon program on the master, so that each task obtains its sequential number through an RPC call to that daemon. But how do I run such a daemon program on the master?
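
For what it's worth, the daemon itself can be an ordinary Java process started on the master node. Here is a minimal sketch using a plain TCP socket instead of a full RPC framework; the port is a made-up example, and the counter is not persisted across restarts:

```java
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicLong;

// Minimal sequence-number daemon: start it on the master with
// `java SequenceServer`; each task opens a TCP connection and reads
// back one fresh number per connection.
public class SequenceServer {
    public static void main(String[] args) throws Exception {
        AtomicLong counter = new AtomicLong(0); // NOTE: lost on restart
        try (ServerSocket server = new ServerSocket(9999)) { // hypothetical port
            while (true) {
                try (Socket client = server.accept();
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    out.println(counter.getAndIncrement()); // one number per request
                }
            }
        }
    }
}
```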

Could anybody give me some advice? Thanks!

afancy
  • Check this similar [SO Question](http://stackoverflow.com/questions/2671858/distributed-sequence-number-generation); it prefers generating sequence numbers with ZK. Also check this [thread](http://zookeeper-user.578899.n2.nabble.com/Sequence-Number-Generation-With-Zookeeper-td5378618.html) on the ZK mailing list. – Praveen Sripati Oct 28 '11 at 15:01

3 Answers


You're correct that HDFS wouldn't give you a consistent view of quickly changing data. This approach would also burden your NameNode with a lot of traffic.

I strongly recommend you put the effort into deploying ZooKeeper. It's built as an independent service but was designed for global state tracking with Hadoop. Great stuff.

To solve your problem, you would create sequential nodes under a directory; ZooKeeper assigns each new node an ascending sequence number as part of its name. It scales, it's fault tolerant, and all that good stuff. A sketch follows below.
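
For illustration, here is a minimal sketch with the plain ZooKeeper client API; the connect string and the /seq parent path are assumptions, and the parent node must already exist:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkSequence {
    public static long next(ZooKeeper zk) throws Exception {
        // ZooKeeper appends a monotonically increasing 10-digit suffix
        // to the node name, e.g. /seq/id-0000000042.
        String path = zk.create("/seq/id-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        return Long.parseLong(path.substring(path.lastIndexOf('-') + 1));
    }

    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("master:2181", 30000, null); // assumed quorum address
        System.out.println(next(zk)); // parent node /seq must already exist
        zk.close();
    }
}
```

Each create() is a single atomic round trip to the ZooKeeper ensemble, so there is no lock file to manage.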

Sam
  • ZooKeeper comes built in with Cloudera Hadoop; I'd suggest going that direction if you have your own cluster. Use Curator to make it easy to access ZooKeeper (it's a pain to use the raw ZooKeeper APIs). We implemented a mechanism where each task process "checks out" a set of IDs from a global pool stored in ZooKeeper; this way it can make use of, say, a million IDs efficiently, then "check in" the unused IDs it checked out at the end of its execution. – David Parks May 07 '13 at 09:34
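
A hedged sketch of that check-out pattern, using Curator's DistributedAtomicLong recipe; the ZooKeeper path, block size, and connect string are made-up examples, and the "check in" of leftover IDs is omitted for brevity:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.atomic.AtomicValue;
import org.apache.curator.framework.recipes.atomic.DistributedAtomicLong;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Each task reserves a block of IDs with one ZooKeeper round trip and
// then hands them out locally, so contention on the shared counter is
// one operation per block rather than one per row.
public class IdBlockAllocator {
    private static final long BLOCK_SIZE = 1_000_000L; // hypothetical block size

    private final DistributedAtomicLong counter;
    private long next;  // next unused ID in the current block
    private long limit; // one past the last ID of the current block

    public IdBlockAllocator(CuratorFramework client) {
        this.counter = new DistributedAtomicLong(client, "/idpool", // assumed path
                new ExponentialBackoffRetry(1000, 3));
    }

    public long nextId() throws Exception {
        if (next == limit) { // block exhausted: reserve a fresh one
            AtomicValue<Long> block = counter.add(BLOCK_SIZE);
            if (!block.succeeded()) {
                throw new IllegalStateException("could not reserve an ID block");
            }
            next = block.preValue();
            limit = block.postValue();
        }
        return next++;
    }

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "master:2181", new ExponentialBackoffRetry(1000, 3)); // assumed quorum
        client.start();
        IdBlockAllocator ids = new IdBlockAllocator(client);
        System.out.println(ids.nextId());
        client.close();
    }
}
```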

The main problem is that you chose Hadoop for its horizontal-scalability properties.
All forms of horizontal scalability suffer greatly when you include something that needs to be coordinated from a central point.

So you have two options:

  1. You accept the scaling limitations and go for the solutions proposed by others (like the ZooKeeper option).
  2. You choose a solution that does not require any central coordination, at the expense of some properties of the key.

I would try to see if the latter is enough for your purposes. One such solution is to take the ID of the current tracker instance and append a local counter value (see the sketch below). That way the value is unique, and sequential per tracker and over multiple runs of the same job, but not sequential within the job as a whole.
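
A sketch of what that could look like in a mapper, under the assumption that the numeric task ID fits into the high bits of a long; the 40-bit split is an arbitrary illustrative choice, and for uniqueness across different jobs you would also have to fold in something like the job ID:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Decentralized ID generation: no coordination needed, because the
// task ID is already unique among the tasks of one job.
public class UniqueIdMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
    private long taskId;
    private long counter;

    @Override
    protected void setup(Context context) {
        // Numeric ID of this task, unique within the job.
        taskId = context.getTaskAttemptID().getTaskID().getId();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // High bits: task ID; low bits: per-task counter.
        long id = (taskId << 40) | counter++;
        context.write(new LongWritable(id), value);
    }
}
```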

Niels Basjes

If you only need to have the entries in chronological order, store a timestamp instead of an id.
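
If that relaxation is acceptable, the generator collapses to a single call. A trivial sketch; note that two parallel tasks can land on the same millisecond, so this gives ordering, not uniqueness:

```java
// Chronological ordering key instead of a generated sequence number.
public class TimestampKey {
    public static void main(String[] args) {
        long createdAt = System.currentTimeMillis(); // store this per row
        System.out.println(createdAt);
    }
}
```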

nfechner