I am using Spark 1.6.1 and Java as programming language. The following code was working fine with dataframes:

simpleProf.groupBy(col("col1"), col("col2") )
                .agg(
                     sum("CURRENT_MONTH"),
                     sum("PREVIOUS_MONTH")
                );

But it does not work using Datasets. Any idea how to do the same with a Dataset in Java/Spark?

Cheers

Edge7
  • Can you post your code that isn't working? This should work, so it depends on how you are trying to do this. – Justin Pihony Jun 21 '17 at 17:53
  • In my case it also works. Can you paste your Exception? – Piotr Kalański Jun 21 '17 at 18:06
  • Can you elaborate on _"it does not using datasets"_? How do you know it does not work using datasets? What's the output that leads you to believe so? – Jacek Laskowski Jun 22 '17 at 05:47
  • It does not work, in the sense that after the groupBy I get a GroupedDataset object, and when I try to apply the agg function it requires a TypedColumn instead of a Column. – Edge7 Jun 22 '17 at 08:52

1 Answer


It does not work, in the sense that after the groupBy I get a GroupedDataset object, and when I try to apply the agg function it requires a TypedColumn instead of a Column.

Ah, there was just some confusion here because of the merging of Dataset and DataFrame in Spark 2.x, where groupBy works with relational columns and groupByKey works with typed columns. Given that you are using an explicit Dataset in 1.6, the solution is to typify your columns via the .as method:

sum("CURRENT_MONTH").as[Int]
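
Since the question is about Java, here is a sketch of the Java equivalent. It is untested and assumes the Spark 1.6 GroupedDataset API; note that in Java, Column.as takes an Encoder rather than a type parameter, and that sum over an integral column yields a Long:

```java
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.TypedColumn;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.sum;

// Typify each aggregate column with an Encoder so that
// GroupedDataset.agg accepts it as a TypedColumn.
TypedColumn<Object, Long> currentMonth =
        sum("CURRENT_MONTH").as(Encoders.LONG());
TypedColumn<Object, Long> previousMonth =
        sum("PREVIOUS_MONTH").as(Encoders.LONG());

// simpleProf is the questioner's Dataset from the original snippet.
simpleProf.groupBy(col("col1"), col("col2"))
          .agg(currentMonth, previousMonth);
```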
Justin Pihony