I have the following data frame, called df:
ci ing  de
21  20 100
22  19   0
23  NA  80
24 100  NA
25  NA  50
26  50  30
I want to count the number of missing values in each column using Spark.
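For reference, the same data can be built in R like this:

df <- data.frame(
  ci  = 21:26,
  ing = c(20, 19, NA, 100, NA, 50),
  de  = c(100, 0, 80, NA, 50, 30)
)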
I know that in R, code like this would work:
apply(df, 2, function(x) sum(is.na(x)))
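For the data above, that returns the count of NAs per column:

ci ing de
 0   2  1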
I want to do the same thing with Spark. sparklyr has a function called spark_apply, but I can't figure out how to make it work.
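For illustration, this is a minimal sketch of the kind of call I mean (sc and df_tbl are just placeholder names for my connection and the copied table; note that spark_apply runs the function once per partition, so with more than one partition the partial counts would still need to be summed):

library(sparklyr)

sc <- spark_connect(master = "local")
df_tbl <- copy_to(sc, df, "df", overwrite = TRUE)

# Count NAs per column within each partition; with a single
# partition this gives the totals for the whole table
spark_apply(df_tbl, function(d) {
  as.data.frame(t(colSums(is.na(d))))
})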