I'm trying to load data from Azure Databricks into an Azure SQL Database table via JDBC. The data loads fine, but the double values from Azure Databricks are converted to exponential (scientific) notation when inserted into the SQL table. I have tried different data types in the SQL database, such as nvarchar, varchar, and float, and the values still get converted to exponential notation.
However, when I try the decimal data type in Azure SQL Database, the data loads into the column without exponential notation, but with extra trailing zeros.
The command that I'm using in Databricks is:

```scala
%scala
import org.apache.spark.sql.SaveMode

spark.sql("select ID from customers")
  .write
  .mode(SaveMode.Append) // <--- append to the existing table
  .jdbc(jdbcUrl, "stg.customers", connectionProperties)
```
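From what I can tell, the exponential form comes from how Double values are rendered as strings on the way into varchar/float columns. A minimal Scala illustration of what I mean (the exact digits printed may differ, since the literal isn't exactly representable as a Double):

```scala
// Double.toString switches to scientific notation for magnitudes >= 1e7,
// which is where the exponent comes from; BigDecimal can print plain digits.
val d: Double = 1900845009567889.12
println(d.toString)                             // e.g. 1.900845009567889E15
println(BigDecimal(d).bigDecimal.toPlainString) // plain digits, no exponent
```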
Some example values stored in the Azure Databricks ID column are:
| ID |
|:--|
| 1900845009567889.12 |
| 2134012183812321 |
When using the numeric(38,15) data type in Azure SQL Database, it gives me the following output:
|ID|
|:--|
|1900845009567889.1200000000000000|
|2134012183812321.0000000000000000|
I don't want the extra zeros at the end. Also, the data in the Databricks table is not strictly typed, so I cannot say whether numeric(38,15) would suffice or not.
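Since the source schema isn't strictly defined, this is a rough check I could run in Databricks to see how many integer and fractional digits ID actually needs (assuming ID is numeric and customers is a registered table; decimal(38,18) is just a generous intermediate cast):

```scala
import org.apache.spark.sql.functions._

// Cast ID through a wide decimal, render it as a string, and measure
// the longest integer part and the longest non-zero fractional part.
val ids = spark.table("customers")
  .select(col("ID").cast("decimal(38,18)").cast("string").as("id_str"))

ids.select(
  max(length(substring_index(col("id_str"), ".", 1))).as("max_integer_digits"),
  max(length(regexp_replace(substring_index(col("id_str"), ".", -1), "0+$", "")))
    .as("max_fraction_digits")
).show()
```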
I also tried storing the Azure Databricks ID column as a string and then loading it into a varchar or nvarchar column in the SQL table, but the data is still converted to exponential notation.
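In case it helps, this is roughly what I tried. My guess is that a plain cast to string goes through the Double representation and keeps the exponent, while casting through an explicit decimal first does not (the decimal(38,2) precision/scale below is my assumption, not something my schema guarantees):

```scala
// Direct cast keeps the exponent for large values (Double-style rendering).
val direct = spark.sql("select cast(ID as string) as ID from customers")

// Casting through an explicit decimal first yields plain digits.
val viaDecimal = spark.sql(
  "select cast(cast(ID as decimal(38,2)) as string) as ID from customers")
```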
Can anyone please suggest an easy way to load this data from Azure Databricks into an Azure SQL database?