
I have the table below and want to get the first n unique Names along with the rest of the columns, using Scala Spark.

+--------------------+--------------------+--------+----------+
|            Name    |               Type |cs_Units|cs1_Units |
+--------------------+--------------------+--------+----------+
|AUTO.,AUTO.ACCESS...|BACHMAN-BERNARD C...|       4|      $548|
|AUTO.,AUTO.ACCESS...|CAVENAUGHS BRUCE ...|       1|       $49|
|AUTO.,AUTO.ACCESS...|SCOTT CHANTZ KIA ...|       2|       $49|
|BUSINESS & CONSUM...|WILLIAMS JIM & AS...|      11|      $488|
|BUSINESS & CONSUM...|OBRIEN SVC CO HEA...|       6|      $329|
|BUSINESS & CONSUM...|TOUCHSTONE ENERGY...|       5|      $235|
|BUSINESS & CONSUM...|FOX & FARMER LEGA...|       2|      $152|
|BUSINESS & CONSUM...|CANADY & SON EXTE...|       1|       $72| 
|DIRECT RESPONSE P...|MYPILLOW PREMIUM ...|       2|      $106|
|DIRECT RESPONSE P...|DERMASUCTION DIR ...|       1|       $30|
|DIRECT RESPONSE P...|GREASE POLICE DIR...|       1|       $17|
|XXXX.               |GREASE POLICE DIR...|       1|       $17|
+--------------------+--------------------+--------+----------+

Final result: as you can see, it should contain only the first 3 unique Names:

1)AUTO.,AUTO.ACCESS

2)BUSINESS & CONSUM

3)DIRECT RESPONSE P

    +--------------------+--------------------+--------+----------+
    |            Name    |               Type |cs_Units|cs1_Units |
    +--------------------+--------------------+--------+----------+
    |AUTO.,AUTO.ACCESS...|BACHMAN-BERNARD C...|       4|      $548|
    |AUTO.,AUTO.ACCESS...|CAVENAUGHS BRUCE ...|       1|       $49|
    |AUTO.,AUTO.ACCESS...|SCOTT CHANTZ KIA ...|       2|       $49|
    |BUSINESS & CONSUM...|WILLIAMS JIM & AS...|      11|      $488|
    |BUSINESS & CONSUM...|OBRIEN SVC CO HEA...|       6|      $329|
    |BUSINESS & CONSUM...|TOUCHSTONE ENERGY...|       5|      $235|
    |BUSINESS & CONSUM...|FOX & FARMER LEGA...|       2|      $152|
    |BUSINESS & CONSUM...|CANADY & SON EXTE...|       1|       $72| 
    |DIRECT RESPONSE P...|MYPILLOW PREMIUM ...|       2|      $106|
    |DIRECT RESPONSE P...|DERMASUCTION DIR ...|       1|       $30|
    |DIRECT RESPONSE P...|GREASE POLICE DIR...|       1|       $17|
    +--------------------+--------------------+--------+----------+
Neethu Lalitha

1 Answer


Here's a code example, but I think you can just use limit on distinct (dropDuplicates) and join the content back in.

// assumes a spark-shell / notebook session where sc and spark.implicits._ are in scope
val df = sc.parallelize(Seq(
  (0,"cat26",30.9), (0,"cat13",22.1), (0,"cat95",19.6), (0,"cat105",1.3),
  (1,"cat67",28.5), (1,"cat4",26.8), (1,"cat13",12.6), (1,"cat23",5.3),
  (2,"cat56",39.6), (2,"cat40",29.7), (2,"cat187",27.9), (2,"cat68",9.8),
  (3,"cat8",35.6))).toDF("Hour", "Category", "TotalValue")

// take 3 distinct Category values, then join back to recover the other columns
val distinkt = df.select(df("Category")).dropDuplicates.limit(3)
df.join(distinkt, distinkt("Category") === df("Category")).show
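
Applied to the table in the question, the same pattern would look roughly like the sketch below; inputDf and the column name Name are assumptions based on the sample output, not the asker's actual code.

// hypothetical: inputDf is the asker's DataFrame with a Name column
val firstNames = inputDf.select("Name").dropDuplicates.limit(3)
inputDf.join(firstNames, Seq("Name")).show()

Joining on Seq("Name") keeps a single Name column in the result. Note that limit on an unordered DataFrame picks some 3 distinct values, not necessarily the first 3 as they appear above, unless an ordering is applied first.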

If you knew more about the data, you might be able to come up with a strategy to repartition the data and use foreachPartition. But you'd need some next-level logic to know which partitions would be printed or skipped. It's doable, but I'm not sure what performance gain you'd get.
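
For illustration only, a bare skeleton of that idea might look like this; the keep-or-skip decision is exactly the next-level logic mentioned above and is left as a placeholder comment.

// rough sketch: co-locate each Category's rows in one partition,
// then walk the data partition by partition
df.repartition(df("Category"))
  .rdd
  .foreachPartition { rows =>
    // placeholder: real code would need logic to decide whether this
    // partition's categories fall within the first n to keep or skip
    rows.foreach(println)
  }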

Matt Andruff