I am attempting to use long user/product IDs in the ALS model in PySpark MLlib (1.3.1) and have run into an issue. A simplified version of the code is given here:

from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext("local", "test")

# Load and parse the data
d = ["3661636574,1,1", "3661636574,2,2", "3661636574,3,3"]
data = sc.parallelize(d)
ratings = data.map(lambda l: l.split(',')).map(lambda l: Rating(long(l[0]), long(l[1]), float(l[2])))

# Build the recommendation model using Alternating Least Squares
rank = 10
numIterations = 20
model = ALS.train(ratings, rank, numIterations)

Running this code yields a java.lang.ClassCastException, because Spark is attempting to cast the longs down to integers. Looking through the source code, the ml ALS class in Spark allows long user/product IDs, but the mllib ALS class forces the use of ints.

Question: Is there a workaround to use long user/product IDs in PySpark ALS?


2 Answers

This is a known issue (https://issues.apache.org/jira/browse/SPARK-2465), but it will not be solved soon, because changing the interface to long user IDs would slow down the computation.

There are a few workarounds:

  • you can hash the user ID down to an int with the hash() function; since this just merges a few random rows in rare cases, collisions shouldn't really affect the accuracy of your recommender. See the discussion in the JIRA ticket above, and the first sketch after this list.

  • you can generate unique int user IDs with RDD.zipWithUniqueId(), or with the slower RDD.zipWithIndex(), just as in this thread: How to assign unique contiguous numbers to elements in a Spark RDD. See the second sketch after this list.
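
A minimal sketch of the hashing approach, reusing the question's data RDD and assuming the Python 2-era PySpark from the question (the to_int_id helper is illustrative, not part of any API):

from pyspark.mllib.recommendation import ALS, Rating

# Illustrative helper: fold an arbitrary ID into Spark's positive int range.
# Collisions are possible but random and rare, so they act like a slight
# compression of the rating matrix rows.
def to_int_id(raw_id):
    return hash(str(raw_id)) % (2 ** 31 - 1)

ratings = (data.map(lambda l: l.split(','))
               .map(lambda l: Rating(to_int_id(l[0]), int(l[1]), float(l[2]))))

model = ALS.train(ratings, rank=10, iterations=20)

Note that hashing is one-way: keep a separate (intId, longId) mapping if you need to translate recommendations back to the original IDs.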
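
And a minimal sketch of the zipWithUniqueId() approach (variable names are illustrative); unlike hashing it is collision-free, and the mapping RDD can be kept around to translate results back:

from pyspark.mllib.recommendation import ALS, Rating

parsed = data.map(lambda l: l.split(','))

# Give each distinct long user ID a unique int; the generated IDs are not
# contiguous, but ALS does not require that
user_id_map = parsed.map(lambda l: l[0]).distinct().zipWithUniqueId()

# Join the int IDs back onto the ratings:
# (longId, ((product, rating), intId)) -> Rating(intId, product, rating)
ratings = (parsed.map(lambda l: (l[0], (int(l[1]), float(l[2]))))
                 .join(user_id_map)
                 .map(lambda kv: Rating(int(kv[1][1]), kv[1][0][0], kv[1][0][1])))

model = ALS.train(ratings, rank=10, iterations=20)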

For newer versions of PySpark (from 1.4.0), if you are working with DataFrames, you can use StringIndexer to map your IDs to indices, and then use those indices as your IDs.
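
A minimal sketch, assuming Spark 1.4+ with the DataFrame API (the column names here are illustrative):

from pyspark.ml.feature import StringIndexer
from pyspark.mllib.recommendation import ALS, Rating
from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)
df = sqlContext.createDataFrame(
    [("3661636574", 1, 1.0), ("3661636574", 2, 2.0), ("3661636574", 3, 3.0)],
    ["user", "product", "rating"])

# Map each distinct user string to a 0-based double index
indexed = StringIndexer(inputCol="user", outputCol="user_idx").fit(df).transform(df)

# Feed the indices into the RDD-based ALS as int user IDs
ratings = indexed.rdd.map(lambda row: Rating(int(row.user_idx), row.product, row.rating))
model = ALS.train(ratings, rank=10, iterations=20)

Keep the indexed DataFrame (or the fitted indexer model) around if you need to map the predictions back to the original IDs.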
