I'm trying to use the code below:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.SaveMode

// Acquire a DataFrame collection (val collection)
val config = Config(Map(
  "url" -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable" -> "dbo.Clients",
  "user" -> "username",
  "password" -> "*********"
))

// Append the DataFrame's rows to the dbo.Clients table
collection.write.mode(SaveMode.Append).sqlDB(config)
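
Here `collection` is just a placeholder; any small DataFrame reproduces the compile errors below. A minimal stand-in (the column names here are hypothetical) would be:

// Hypothetical stand-in for `collection`. The schema would need to
// match the target dbo.Clients table for the append to succeed.
import spark.implicits._

val collection = Seq(
  (1, "Alice"),
  (2, "Bob")
).toDF("ClientId", "ClientName")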
The script is from this link: https://github.com/Azure/azure-sqldb-spark
I'm running this in a Databricks environment and I'm getting these errors:
command-836397363127942:5: error: object sqlDB is not a member of package com.microsoft.azure
import com.microsoft.azure.sqlDB.spark.connect._
^
command-836397363127942:4: error: object sqlDB is not a member of package com.microsoft.azure
import com.microsoft.azure.sqlDB.spark.config.Config
^
command-836397363127942:7: error: not found: value Config
val bulkCopyConfig = Config(Map(
^
command-836397363127942:18: error: value sqlDB is not a member of org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row]
df.write.mode(SaveMode.Append).sqlDB(bulkCopyConfig)
I'm guessing that some kind of library is not installed correctly. I googled for an answer but didn't find anything useful. Any idea how to make this work? Thanks.
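
In case it helps, my understanding is that the two imports alone should fail to compile whenever the connector isn't attached to the cluster, so a cell like this is what I'd use to verify the install (the Maven coordinate below is my assumption from the repository's README, not something I've confirmed):

// Sanity check: this cell compiles only if the azure-sqldb-spark library
// is attached to the cluster. In Databricks it would be installed via
// Clusters > Libraries > Install New > Maven, with a coordinate like
// com.microsoft.azure:azure-sqldb-spark:1.0.2 (assumed from the README).
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._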