
How to do probability aggregations inside my reducer with mappers?

I'm trying to implement both the “stripes” approach and the “pairs” approach on Hadoop for the following tasks, but I'd like to know how to communicate among multiple mappers and how to do probability-oriented aggregations inside my reducer.

  • The co-occurrences of each pair of items: Count(A,B) = the number of transactions containing both A and B, and the conditional probability Prob(B|A) = Count(A,B)/Count(A).
  • The co-occurrences of each triple of items: Count(A,B,C) = the number of transactions containing all of A, B, and C, and the conditional probability Prob(A|B,C) = Count(A,B,C)/Count(B,C).
  • Each line records a transaction (a set of items purchased together); the input datasets are transactional data in the following format:

    25 52 164 240 274 328 368 448 538 561 630 687 730 775 825 834
    39 120 124 205 401 581 704 814 825 834
    35 249 674 712 733 759 854 950
    39 422 449 704 825 857 895 937 954 964
    15 229 262 283 294 352 381 708 738 766 853 883 966 978
    26 104 143 320 569 620 798
    7 185 214 350 529 658 682 782 809 849 883 947 970 979
    227 390
    71 192 208 272 279 280 300 333 496 529 530 597 618 674 675 720 855 914 932

Panos Kalatzantonakis

1 Answer


The short answer to your question is that you don't communicate directly between mappers; that goes against the map-reduce pattern of computation. Instead, you need to structure your algorithm so that the key-value pairs output by your map phase can be consumed and aggregated by your reduce phase in an intelligent way.

From the background information in your question it is clear that you understand that calculating the conditional probabilities you are interested in is really just an exercise in counting. The usual pattern here is to do all the counting in one map-reduce pass and then take that output and divide the appropriate quantities afterwards (trying to work the division into the map-reduce pass adds needless complexity). For example, in your sample data item 825 appears in three transactions and the pair 825-834 in two, so Prob(834|825) = 2/3.

You really only need a data structure to keep track of the things you are trying to count. You could do this with a set of arrays with implicit indexing if speed is a must, but it is simpler to do the exposition in terms of a single hashmap. Because we aren't interested in raw speed here, that is what the mapper below uses.

Mapper code in Python for Hadoop streaming:

import sys

# in-mapper combining: accumulate all counts in a single dict and emit once at the end
output = {}

for line in sys.stdin:
    # each transaction is a whitespace-separated list of item ids
    items = line.strip().split()
    # sort the items so that every pair (x, y) is always keyed with x before y;
    # that way the same pair produces the same key in every transaction
    items.sort()

    # count the occurrences of each single item
    for item in items:
        if item in output:
            output[item] += 1
        else:
            output[item] = 1

    # count the occurrences of each pair of items
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            key = items[i] + '-' + items[j]
            if key in output:
                output[key] += 1
            else:
                output[key] = 1

    # you can nest a third loop if you want to count the occurrences of every
    # 3-item combination, but be warned that the number of combinations grows
    # quickly: for 100 items, as in your example, there are about 160K of them

# this point is reached after all of the input has been streamed in through stdin:
# emit the keys and values of the output dictionary with a tab separating them
for key, count in output.items():
    print('%s\t%d' % (key, count))

#end mapper code
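
Since your question also mentions the “stripes” approach, here is a minimal sketch of a stripes-style mapper under the same assumptions (whitespace-separated transactions); using the json module to serialize each stripe is just one convenient choice for the streaming interface, not the only one:

#stripes-style mapper sketch
import sys
import json
from collections import defaultdict

# one stripe per item: a map from each co-occurring item to its count
stripes = defaultdict(lambda: defaultdict(int))

for line in sys.stdin:
    items = sorted(line.strip().split())
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            stripes[items[i]][items[j]] += 1

# emit one key per item, with its whole co-occurrence map as the value
for item, stripe in stripes.items():
    print('%s\t%s' % (item, json.dumps(stripe)))

#end stripes sketch

The reducer for this variant merges the stripes element-wise instead of summing plain counts, trading more memory per key for far fewer intermediate records.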

The code for the reducer is now the same as in all of the word-count examples that are so prolific; an example of Python code for map-reduce streaming can be found here. The output of the map-reduce program will be rows whose key describes what has been counted, together with the count: the number of occurrences of each single item as well as of each pair. From there you can write a program to calculate the conditional probabilities you are interested in, as sketched below.
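
For concreteness, a minimal sum reducer for Hadoop streaming might look like the following; it assumes the framework has already grouped the mapper output so that identical keys arrive on consecutive lines:

#reducer code: sum the counts for each key
import sys

current_key = None
current_count = 0

for line in sys.stdin:
    if not line.strip():
        continue
    key, count = line.strip().split('\t')
    if key == current_key:
        current_count += int(count)
    else:
        # a new key has started, so flush the finished one
        if current_key is not None:
            print('%s\t%d' % (current_key, current_count))
        current_key = key
        current_count = int(count)

# don't forget to flush the last key
if current_key is not None:
    print('%s\t%d' % (current_key, current_count))

#end reducer code

And a sketch of the post-processing step for the pair case (the triple case is analogous), assuming the job's output has been collected into a local file; the name counts.txt is just a placeholder:

#post-processing: Prob(B|A) = Count(A,B)/Count(A)
counts = {}
with open('counts.txt') as f:
    for line in f:
        key, count = line.strip().split('\t')
        counts[key] = int(count)

for key, count in counts.items():
    if '-' in key:  # pair keys look like 'A-B'; single-item keys have no dash
        a, b = key.split('-')
        print('Prob(%s|%s) = %f' % (b, a, float(count) / counts[a]))
        print('Prob(%s|%s) = %f' % (a, b, float(count) / counts[b]))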

Samsdram