This seems to be the default behaviour in Spark.
In the database the value is a `decimal(18,8)`, for example:
`0.00000000`
When Spark reads a decimal value that is zero and has a scale greater than 6, it automatically renders the value in scientific notation (e.g. `0E-06`). In this case the value `0.00000000` is converted to `0E-08` in the DataFrame after being read.

I want to write my DataFrame to CSV, but when writing, Spark writes the `0E-08` value to the CSV, not the decimal `0.00000000`.
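A minimal, self-contained sketch that reproduces the behaviour without a database (the column name and output path are placeholders I chose for illustration):

```scala
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{DecimalType, StructField, StructType}

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// One-row DataFrame with a decimal(18,8) zero, mimicking the value read from the database
val schema = StructType(Seq(StructField("amount", DecimalType(18, 8))))
val df = spark.createDataFrame(
  spark.sparkContext.parallelize(Seq(Row(new java.math.BigDecimal("0.00000000")))),
  schema
)

df.show(false)                      // displays the zero in scientific notation, not 0.00000000
df.write.csv("/tmp/decimal_repro")  // the CSV file contains the same scientific form
```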
Is there a way to write the explicit decimal value to CSV, without scientific notation?
Notes:
- The app is generic: it takes any table as input and simply writes it to a CSV file (see the sketch after this list).
- Therefore the app does not know the schema of the data in advance, nor which columns are decimals, etc.
- Each decimal column may have a different precision and scale, so I cannot hardcode them.
- Using Spark 2.4.8
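For context, the write path looks roughly like this; the connection string, table name, and output path are placeholders, not the real configuration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// Supplied by the app's configuration at runtime; names here are illustrative only
val jdbcUrl    = "jdbc:postgresql://host:5432/db"
val tableName  = "some_table"
val outputPath = "/tmp/export"

// The app never inspects column types: it loads whatever table it is given
// and writes it straight out as CSV.
val df = spark.read
  .format("jdbc")
  .option("url", jdbcUrl)
  .option("dbtable", tableName)
  .load()

df.write.option("header", "true").csv(outputPath)
```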