
I am trying to implement Jaccard similarity using the technique described in Spark MLlib. I have a data frame of users and items. I am getting wrong results: the similarity score is always zero. What am I doing wrong?

from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark.ml.linalg import SparseVector, DenseVector
from pyspark.ml.feature import MinHashLSH
from pyspark.ml.linalg import Vectors 
from pyspark.sql import Row 
from pyspark.ml.feature import VectorAssembler

df = sc.parallelize([
                 Row(CUST_ID=1, ITEM_ID=1),
                 Row(CUST_ID=1, ITEM_ID=2),
                 Row(CUST_ID=2, ITEM_ID=1),
                 Row(CUST_ID=2, ITEM_ID=2),
                 Row(CUST_ID=2, ITEM_ID=3)
                ]).toDF()

dfpivot = df.groupBy("CUST_ID").pivot("ITEM_ID").count().na.fill(0)


input_cols = [x for x in dfpivot.columns if x !="CUST_ID"]

dfassembler = (VectorAssembler(inputCols=input_cols, outputCol="features")
    .transform(dfpivot)
    .select("CUST_ID", "features"))

mh = MinHashLSH(inputCol="features", outputCol="hashes", numHashTables=3)
model = mh.fit(dfassembler)

# Feature Transformation
print("The hashed dataset where hashed values are stored in the column 'hashes':")
model.transform(dfassembler).show(3, False)

dfA=dfassembler
dfB=dfassembler

print("Approximately joining dfA and dfB on distance smaller than 0.3:")
model.approxSimilarityJoin(dfA, dfB, 0.3, distCol="JaccardDistance")\
    .select(col("datasetA.CUST_ID").alias("idA"),
            col("datasetB.CUST_ID").alias("idB"),
            col("JaccardDistance")).show()

Approximately joining dfA and dfB on distance smaller than 0.3:
+---+---+---------------+
|idA|idB|JaccardDistance|
+---+---+---------------+
|  1|  1|            0.0|
|  2|  2|            0.0|
+---+---+---------------+
Benjamin W.
  • Benjamin W and Sai Kiran. do you have any answer for this -> https://stackoverflow.com/questions/52923110/spark-python-how-to-calculate-jaccard-similarity-between-each-line-within-an-rd – Anil Kumar May 13 '19 at 10:02

1 Answer


Actually, JaccardDistance is a distance score; the similarity score is 1 - JaccardDistance. In your case, idA and idB refer to the same pair, so the similarity is 1 and JaccardDistance = 0.
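It may also be worth noting why no cross pair like (1, 2) shows up at all: the join threshold is 0.3, and the exact Jaccard distance between the two customers' item sets is just above that. A quick plain-Python check (no Spark, using the item sets from the example data frame):

```python
# Item sets taken from the question's example data frame.
cust1 = {1, 2}        # items for CUST_ID=1
cust2 = {1, 2, 3}     # items for CUST_ID=2

intersection = len(cust1 & cust2)          # 2
union = len(cust1 | cust2)                 # 3
jaccard_similarity = intersection / union  # 2/3
jaccard_distance = 1 - jaccard_similarity  # 1/3 ≈ 0.333

# 0.333 > 0.3, so approxSimilarityJoin filters the (1, 2) pair out.
print(jaccard_distance)
```

So raising the threshold passed to approxSimilarityJoin to, say, 0.4 should make the (1, 2) pair appear with JaccardDistance ≈ 0.333.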

jing