
I have data like below in a file named babynames.csv:

year    name    percent     sex
1880    John    0.081541    boy
1880    William 0.080511    boy
1880    James   0.050057    boy

I need to sort the input by year and sex, and I want the output aggregated as shown below (this output is to be assigned to a new RDD).

year    sex   avg(percent)   count(rows)
1880    boy   0.070703         3

I am not sure how to proceed after the following steps in PySpark. I need your help on this:

testrdd = sc.textFile("babynames.csv")
rows = testrdd.map(lambda y:y.split(',')).filter(lambda x:"year" not in x[0])
aggregatedoutput = ????

1 Answer

  1. Follow the instructions from the README to include the spark-csv package
  2. Load data

    df = (sqlContext.read
        .format("com.databricks.spark.csv")
        .options(inferSchema="true", delimiter=",", header="true")
        .load("babynames.csv"))
    
  3. Import required functions

    from pyspark.sql.functions import count, avg
    
  4. Group by and aggregate (optionally using `Column.alias`):

    df.groupBy("year", "sex").agg(avg("percent"), count("*"))
    

Alternatively:

  • cast percent to numeric
  • reshape to a format ((year, sex), percent)
  • aggregateByKey using pyspark.statcounter.StatCounter
Related: [SparkSQL: apply aggregate functions to a list of column](https://stackoverflow.com/q/33882894/1560062) | [Multiple Aggregate operations on the same column of a spark dataframe](https://stackoverflow.com/q/34954771/1560062). – zero323