I have a PySpark DataFrame that I want to aggregate using a function that does row-by-row operations.
I have 4 columns, and for each unique value in column A I have to do a row-by-row aggregation over columns B, C and D.
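For context, a tiny made-up DataFrame of the shape I am working with would look like this (the column names match my real data, the values are just for illustration):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Made-up sample data: a few rows per value of A, plus numeric columns B, C, D
df = spark.createDataFrame(
    [('x', 1.0, 2.0, 3.0),
     ('x', 4.0, 5.0, 6.0),
     ('y', 7.0, 8.0, 9.0)],
    ['A', 'B', 'C', 'D'])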
This is the method I am using:
First, get the unique values in A:
A_uniques = df.select('A').distinct()
Then, for each value of A, filter the DataFrame, convert the group to a numpy array and do the row-by-row computation:

import numpy as np

def func(x):
    y = df.filter(df.A == x)       # rows belonging to this value of A
    y = np.array(y.toPandas())     # bring the group into a numpy array
    for i in range(y.shape[0]):    # row-by-row update within the group
        y[i, 1] = y[i - 1, 0]
        y[i, 0] = (y[i, 0] + y[i, 2]) / y[i, 3]
    agg = sum(y[:, 1])
    return agg
A_uniques.rdd.map(lambda x: (x['A'], func(x['A'])))
I am getting this error:
PicklingError: Could not serialize object: Py4JError: An error occurred while calling o64.__getnewargs__. Trace:
py4j.Py4JException: Method __getnewargs__([]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
    at py4j.Gateway.invoke(Gateway.java:272)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Is there a solution for saving numpy arrays in RDDs? Or can I do this entire operation some other way, for example with a grouped pandas function like the sketch below?
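This is just a rough idea of what I mean by "some other way" (assuming Spark 3.x; the function name per_group, the placeholder row-by-row body and the output schema are only illustrations standing in for my real logic):

import pandas as pd

def per_group(pdf: pd.DataFrame) -> pd.DataFrame:
    # Placeholder row-by-row pass over B, C, D for one value of A;
    # my real update rule from func() would go here instead.
    y = pdf[['B', 'C', 'D']].to_numpy()
    acc = 0.0
    for i in range(1, y.shape[0]):
        acc += (y[i, 0] + y[i, 1]) / y[i, 2]
    return pd.DataFrame({'A': [pdf['A'].iloc[0]], 'agg': [acc]})

result = df.groupBy('A').applyInPandas(per_group, schema='A string, agg double')

If something along these lines is the recommended pattern, I am happy to restructure my logic that way.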