
I have some simple code:

test("0153") {
  val c = Seq(1,8,4,2,7)
  val max = (x:Int, y:Int)=> if (x > y) x else y
  c.reduce(max)
}

It works fine. But when I try to use Dataset.reduce the same way,

test("SparkSQLTest") {
  def max(x: Int, y: Int) = if (x > y) x else y
  val spark = SparkSession.builder().master("local").appName("SparkSQLTest").enableHiveSupport().getOrCreate()
  val ds = spark.range(1, 100).map(_.toInt)
  ds.reduce(max) // compile error: Error:(20, 15) missing argument list for method max
}

The compiler complains about a missing argument list for method max; I don't know what's going on here.

Shaido
Tom

2 Answers


Change the method to a function and it should work, i.e. instead of

def max(x: Int, y: Int) = if (x > y) x else y

use

val max = (x: Int, y: Int) => if (x > y) x else y

With the function, ds.reduce(max) works directly. More about the differences between methods and functions can be found here.
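To illustrate the distinction without Spark, here is a minimal sketch using a plain Seq (the object and identifier names are invented for this example): a def is a method, while a val bound to a lambda is a function value, and a method can be turned into a function value explicitly with `_` (eta-expansion).

```scala
object MethodVsFunction {
  // A method: a member of this object, not itself a value
  def maxMethod(x: Int, y: Int): Int = if (x > y) x else y

  // A function value: an instance of Function2[Int, Int, Int]
  val maxFunction = (x: Int, y: Int) => if (x > y) x else y

  def main(args: Array[String]): Unit = {
    val c = Seq(1, 8, 4, 2, 7)
    println(c.reduce(maxFunction)) // 8

    // A method can be converted into a function value explicitly (eta-expansion)
    val converted: (Int, Int) => Int = maxMethod _
    println(c.reduce(converted)) // 8
  }
}
```

Note that Seq.reduce is not overloaded, so it accepts the bare method too; the difference only bites with Dataset.reduce, as discussed below the other answer.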


Otherwise, as hadooper pointed out, you can keep the method and supply the arguments explicitly:

def max(x: Int, y: Int) = if (x > y) x else y
ds.reduce((x, y) => max(x,y))
Shaido

As per the Spark Scala docs, reduce has two overloaded signatures, reduce(func: ReduceFunction[T]): T and reduce(func: (T, T) ⇒ T): T, so either of the following will work.

Approach 1:

scala> val ds = spark.range(1, 100).map(_.toInt)
ds: org.apache.spark.sql.Dataset[Int] = [value: int]

scala> def max(x: Int, y: Int) = if (x > y) x else y
max: (x: Int, y: Int)Int

scala> ds.reduce((x, y) => max(x,y))
res1: Int = 99

Approach 2 (if you insist on shorthand notation like reduce(max)):

scala> val ds = spark.range(1, 100).map(_.toInt)
ds: org.apache.spark.sql.Dataset[Int] = [value: int]

scala> object max extends org.apache.spark.api.java.function.ReduceFunction[Int]{
     | def call(x:Int, y:Int) = {if (x > y) x else y}
     | }
defined object max

scala> ds.reduce(max)
res3: Int = 99

Hope this helps!

m-bhole
  • Thanks @hadooper. The 2nd approach is more like using the Java API. For the 1st approach, I don't understand why `ds.reduce(max)` doesn't work – Tom Jul 12 '18 at 04:46
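A likely explanation for the comment's question, sketched without Spark: because Dataset.reduce has two overloads, the Scala 2 compiler will not automatically eta-expand a bare method reference (it cannot pick an expected function type first), which produces exactly the "missing argument list" error from the question. The mimic below (ReduceFunc, the overloads, and all names are invented here for illustration, standing in for Spark's ReduceFunction and Dataset.reduce) shows that a function value, or an explicit `max _`, resolves the overload fine.

```scala
object OverloadDemo {
  // Stand-in for Spark's org.apache.spark.api.java.function.ReduceFunction
  trait ReduceFunc[T] { def call(a: T, b: T): T }

  // Two overloads mimicking Dataset.reduce's Java and Scala variants
  def reduce(f: ReduceFunc[Int]): Int = f.call(1, 99)
  def reduce(f: (Int, Int) => Int): Int = f(1, 99)

  def max(x: Int, y: Int): Int = if (x > y) x else y
  val maxF = (x: Int, y: Int) => if (x > y) x else y

  def main(args: Array[String]): Unit = {
    // reduce(max)   // does not compile: with two overloads the compiler
    //               // refuses to eta-expand the bare method, hence
    //               // "missing argument list for method max"
    println(reduce(maxF))  // function value picks the (Int, Int) => Int overload: 99
    println(reduce(max _)) // explicit eta-expansion also resolves it: 99
  }
}
```

This suggests `ds.reduce(max _)` should also compile in the original test, since `max _` is already a function value by the time overload resolution runs.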