In my Spark application, I use the following code to retrieve data from a SQL Server database through the JDBC driver:
Dataset<Row> dfResult = sparksession.read().jdbc(
        "jdbc:sqlserver://server\\dbname", tableName,
        partitionColumn, lowerBound, upperBound, numberOfPartitions, properties);
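For context, the connection properties are built roughly like this (the user, password, and driver values below are placeholders, not the actual ones I use):

Properties properties = new Properties();
properties.put("user", "dbUser");          // placeholder credentials
properties.put("password", "dbPassword");  // placeholder credentials
properties.put("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver");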
and then apply a map operation on the dfResult dataset, roughly as sketched below.
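For reference, the map call looks something like this (the column name "someColumn" and the mapping logic are simplified placeholders for my actual transformation):

Dataset<String> mapped = dfResult.map(
        (MapFunction<Row, String>) row -> row.getAs("someColumn").toString(),  // placeholder mapping
        Encoders.STRING());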
While running the application in standalone mode, I see that Spark creates a separate connection for each RDD partition. From the API description, I understand that Spark takes care of closing the connection.
Is there a way to reuse connections instead of opening and closing a JDBC connection for each RDD partition?
Thanks