I want to persist a very wide Spark DataFrame (>100,000 columns) to BigTable. It is sparsely populated (>99% of values are null), and I want to keep only the non-null values to avoid storage costs.
Is there a way to tell Spark to ignore null values when writing?
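In case it helps clarify what I mean, here is a minimal PySpark sketch (the `row_key` column and the toy schema are made up) of the null-dropping unpivot I would otherwise do by hand before handing the result to a Bigtable writer. I'm asking whether the write path can do this for me instead:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Toy stand-in for the real table: a "row_key" column plus many
# mostly-null value columns (names and types are hypothetical).
df = spark.createDataFrame(
    [("r1", 1, None, None), ("r2", None, 2, 3)],
    "row_key string, c1 int, c2 int, c3 int",
)

value_cols = [c for c in df.columns if c != "row_key"]

# Unpivot to (row_key, column, value) and drop the nulls, so only the
# ~1% of cells that are actually populated remain to be written out.
cells = (
    df.select(
        "row_key",
        F.explode(
            F.array(*[
                F.struct(F.lit(c).alias("column"),
                         F.col(c).cast("string").alias("value"))
                for c in value_cols
            ])
        ).alias("cell"),
    )
    .select("row_key", "cell.column", "cell.value")
    .where(F.col("value").isNotNull())
)

cells.show()
# +-------+------+-----+
# |row_key|column|value|
# +-------+------+-----+
# |     r1|    c1|    1|
# |     r2|    c2|    2|
# |     r2|    c3|    3|
# +-------+------+-----+
```

(With >100,000 columns this explode-over-structs approach may be slow to plan; on Spark 3.4+ `df.unpivot` would be an alternative. Either way, I'd prefer a write option over a manual reshape if one exists.)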
Thanks!