You can use the Apache Spark Connector for SQL Server and Azure SQL; the snippet below shows what you need to do in Databricks.
Note that we are not connecting with the Service Principal directly; instead, we use the Service Principal to obtain an access token, which is then passed in the connection options.
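As a rough sketch of that token step, assuming the msal package is installed and using placeholder tenant/client/secret values from your own Azure AD app registration (all of these names are illustrative, not from the original answer):

# Acquire an access token for Azure SQL with a Service Principal.
# tenant_id, client_id and client_secret are placeholders for your own
# Azure AD app registration values.
from msal import ConfidentialClientApplication

tenant_id = "<tenant-id>"
client_id = "<service-principal-client-id>"
client_secret = "<service-principal-secret>"

app = ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)

# "https://database.windows.net/.default" is the OAuth scope for Azure SQL.
result = app.acquire_token_for_client(scopes=["https://database.windows.net/.default"])
access_token = result["access_token"]  # a KeyError here means the request failed

# Connection parameters used by the snippets below.
url = "jdbc:sqlserver://<server-name>.database.windows.net:1433"
database_name = "<database-name>"
db_table = "<schema>.<table>"

With the token in hand, reading the table through the connector looks like this: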
jdbc_df = spark.read.format("com.microsoft.sqlserver.jdbc.spark") \
    .option("url", url) \
    .option("dbtable", db_table) \
    .option("accessToken", access_token) \
    .option("encrypt", "true") \
    .option("databaseName", database_name) \
    .option("hostNameInCertificate", "*.database.windows.net") \
    .load()
If you can't or don't want to use that library, you can do the same with Spark's built-in JDBC data source and the Microsoft SQL Server JDBC driver:
jdbc_df = spark.read.format("jdbc") \
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver") \
    .option("url", url) \
    .option("dbtable", db_table) \
    .option("accessToken", access_token) \
    .option("encrypt", "true") \
    .option("databaseName", database_name) \
    .option("hostNameInCertificate", "*.database.windows.net") \
    .load()
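The same access-token options also work for writes. As a minimal sketch, assuming jdbc_df and the variables above are already defined, appending the DataFrame back to Azure SQL through the connector could look like this:

jdbc_df.write.format("com.microsoft.sqlserver.jdbc.spark") \
    .mode("append") \
    .option("url", url) \
    .option("dbtable", db_table) \
    .option("accessToken", access_token) \
    .option("encrypt", "true") \
    .option("databaseName", database_name) \
    .option("hostNameInCertificate", "*.database.windows.net") \
    .save()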