I have to use SparkR for part of a project; I typically use Scala. I'm writing out a file using the following code:
# Raise scipen so R prefers fixed over scientific notation when printing
options(scipen=999)
# create a spark dataframe and write out
sdf <- SparkR::as.DataFrame(df)
SparkR::head(sdf) # all looks good
SparkR::write.json(sdf, path=somePath, mode="append") # does not look good
However, when I view the written output, one of my variables (timestamp, in this case) is written in scientific notation, e.g. 1.4262E12, when I would rather have it written long, e.g. 1426256000000. For some reason I can't figure out why write.json writes the file this way. Before writing the file out, I view my Spark DataFrame and see timestamp printed long. Can anyone help, or advise a workaround for this problem?
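For what it's worth, I suspect (this is just my guess) that the notation comes from the JVM side rather than from R, since write.json runs on Spark's JVM and options(scipen=999) only affects R's own printing. Java's default double-to-string conversion switches to scientific notation at magnitudes of 10^7 and above, which would match what I'm seeing:

```java
public class DoubleNotation {
    public static void main(String[] args) {
        // A millisecond timestamp stored as a double is well above 10^7,
        // so the JVM's default formatting renders it in scientific notation
        double ts = 1426256000000.0;
        System.out.println(Double.toString(ts)); // prints 1.426256E12
    }
}
```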
Here is an example of the schema, which must be kept this way:
root
|-- price: integer (nullable = true)
|-- timestamp: double (nullable = true)
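The only workaround I've come up with so far is casting at write time, a sketch along these lines (using the column names above); I'm not sure it's acceptable, since the stored schema would then show the column as long rather than double:

```r
# Cast timestamp to a long only for the write; the original sdf
# and its double schema are left untouched
out <- SparkR::withColumn(sdf, "timestamp",
                          SparkR::cast(sdf$timestamp, "bigint"))
SparkR::write.json(out, path=somePath, mode="append")
```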