
I am using a Spark 2.0 cluster and I would like to convert a vector column from org.apache.spark.mllib.linalg.VectorUDT to org.apache.spark.ml.linalg.VectorUDT.

# Import LinearRegression class
from pyspark.ml.regression import LinearRegression

# Define LinearRegression algorithm
lr = LinearRegression()

# Fit the model
modelA = lr.fit(data, {lr.regParam: 0.0})

Error:

u'requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.'

Any thoughts on how I would do this conversion between the two vector types?

Thanks a lot.

– Mostafa

1 Answer


In PySpark you'll need a udf or a map over the RDD. Let's use the first option. First, a couple of imports:

from pyspark.ml.linalg import VectorUDT
from pyspark.sql.functions import udf

and a function:

# Wrap asML() in a udf so it can be applied to a DataFrame column (nulls pass through)
as_ml = udf(lambda v: v.asML() if v is not None else None, VectorUDT())

With example data:

from pyspark.mllib.linalg import Vectors as MLLibVectors

# A DataFrame whose features column holds old-style mllib vectors
df = sc.parallelize([
    (MLLibVectors.sparse(4, [0, 2], [1, -1]), ),
    (MLLibVectors.dense([1, 2, 3, 4]), )
]).toDF(["features"])

result = df.withColumn("features", as_ml("features"))

The result is

+--------------------+
|            features|
+--------------------+
|(4,[0,2],[1.0,-1.0])|
|   [1.0,2.0,3.0,4.0]|
+--------------------+
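For completeness, the second option mentioned above, a map over the RDD, could look like the sketch below (assuming the same df and an active SparkSession; the tuple wrapping and column name are just for illustration):

# Alternative: convert by mapping over the underlying RDD;
# asML() returns the equivalent pyspark.ml.linalg vector
result = df.rdd.map(
    lambda row: (row.features.asML() if row.features is not None else None, )
).toDF(["features"])

There is also a built-in helper, MLUtils.convertVectorColumnsToML, which converts the named vector columns (or all vector columns if none are named) in one call:

from pyspark.mllib.util import MLUtils

# Convert the mllib vector column to its ml equivalent
result = MLUtils.convertVectorColumnsToML(df, "features")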
– zero323
  • Thank you for providing this solution, @zero323. Had the same issue and this works very well (why Spark is using two ML packages with the same class names escapes me, though; maybe historical reasons). – martin_wun Oct 05 '21 at 06:39