I have a PySpark dataframe with 13 million rows and 800 columns. I need to normalize this data, so I have been using this code, which works with a smaller development data set.
import sys
from pyspark.sql import Window
from pyspark.sql.functions import avg, stddev_pop

def z_score_w(col, w):
    # z-score: subtract the mean and divide by the population standard deviation over window w
    avg_ = avg(col).over(w)
    stddev_ = stddev_pop(col).over(w)
    return (col - avg_) / stddev_

# single unbounded window spanning the whole dataframe
w = Window().partitionBy().rowsBetween(-sys.maxsize, sys.maxsize)
norm_exprs = [z_score_w(signalsDF[x], w).alias(x) for x in signalsDF.columns]
normDF = signalsDF.select(norm_exprs)
However, when using the full data set I run into an exception from the code generation:
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:893)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:950)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:947)
at org.spark_project.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
... 44 more
Caused by: org.codehaus.janino.JaninoRuntimeException: Code of method "(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass;[Ljava/lang/Object;)V" of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificMutableProjection" grows beyond 64 KB
at org.codehaus.janino.CodeContext.makeSpace(CodeContext.java:941)
at org.codehaus.janino.CodeContext.write(CodeContext.java:836)
at org.codehaus.janino.UnitCompiler.writeOpcode(UnitCompiler.java:10251)
at org.codehaus.janino.UnitCompiler.pushConstant(UnitCompiler.java:8933)
at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:4346)
at org.codehaus.janino.UnitCompiler.access$7100(UnitCompiler.java:185)
at org.codehaus.janino.UnitCompiler$10.visitBooleanLiteral(UnitCompiler.java:3267)
There are a few Spark JIRA issues that appear similar, but they are all marked resolved. There is also this SO question, which is relevant, but its answer is an alternative technique.
I have my own workaround where I normalize batches of columns of the dataframe. This works, but I end up with multiple dataframes that I then have to join, which is slow.
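For reference, the batching workaround looks roughly like this (a sketch only; the batch size and the _row_id join key built with monotonically_increasing_id are illustrative):

from pyspark.sql.functions import monotonically_increasing_id

batch_size = 50  # illustrative number of columns per pass
base = signalsDF.withColumn("_row_id", monotonically_increasing_id())

normDF = base.select("_row_id")
cols = signalsDF.columns
for i in range(0, len(cols), batch_size):
    batch = cols[i:i + batch_size]
    exprs = [z_score_w(base[c], w).alias(c) for c in batch]
    # normalize this batch of columns, then join it back on the row id
    normDF = normDF.join(base.select(["_row_id"] + exprs), on="_row_id")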
So, my question is - is there an alternative technique for normalizing large dataframes that I'm missing?
I'm using spark-2.0.1.