Right now, I have to use `df.count() > 0` to check whether the DataFrame is empty or not. But it is inefficient, since `count()` runs a job that scans every partition just to compare the result to zero. Is there a better way to do this?
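For concreteness, here is a minimal sketch of what I do now and the cheaper alternatives I have come across, assuming PySpark (the input/output paths are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("empty-check").getOrCreate()

# Hypothetical input; any DataFrame behaves the same way.
df = spark.read.parquet("/path/to/input")

# What I do now: count() scans every partition,
# even though I only need to know whether at least one row exists.
is_empty_slow = df.count() == 0

# Cheaper alternative: take(1) can stop as soon as one row is found.
is_empty_fast = len(df.take(1)) == 0

# Another option: isEmpty() on the underlying RDD,
# though df.rdd involves a conversion with some overhead of its own.
is_empty_rdd = df.rdd.isEmpty()

if not is_empty_fast:
    df.write.mode("overwrite").parquet("/path/to/output")  # hypothetical path
```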
Thanks.
PS: I want to check whether it's empty so that I only save the DataFrame if it's not empty.
How to check if spark dataframe is empty - that question tells me a way in Scala; I am looking for a way in PySpark.