I have a Spring Batch job that reads from a database, does some processing, and writes to a file in a chunk-based step. My requirement is to run up to 16 instances of the job in parallel at the same time, each with different job parameters.
But I've been facing a couple of issues while doing so.
1. Could not open JDBC Connection for transaction; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available.
2. Could not increment identity; nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 124) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I've tried the solutions suggested in the GitHub link: setting the isolation level and altering the metadata tables, as shown below.
Set the isolationLevelForCreate like this:
JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setIsolationLevelForCreate("ISOLATION_REPEATABLE_READ");
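For context, the full bean definition looks roughly like this (the DataSource and transaction manager are my existing beans; the class and bean names here are placeholders, and the table prefix reflects my JET schema):

```java
import javax.sql.DataSource;

import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class BatchRepositoryConfig {

    // Sketch of how the JobRepository is wired; dataSource and
    // transactionManager are the application's existing beans.
    @Bean
    public JobRepository jobRepository(DataSource dataSource,
                                       PlatformTransactionManager transactionManager) throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        // Repository tables live in the JET schema.
        factory.setTablePrefix("JET.BATCH_");
        // The isolation-level change suggested in the GitHub link.
        factory.setIsolationLevelForCreate("ISOLATION_REPEATABLE_READ");
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}
```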
I also had the DBA add a clustered primary key to each of the SEQ tables like this (JET is the schema I put the repository tables in):
ALTER TABLE [JET].[BATCH_JOB_EXECUTION_SEQ]
ADD CONSTRAINT [BATCH_JOB_EXECUTION_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
ALTER TABLE [JET].[BATCH_JOB_SEQ]
ADD CONSTRAINT [BATCH_JOB_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
ALTER TABLE [JET].[BATCH_STEP_EXECUTION_SEQ]
ADD CONSTRAINT [BATCH_STEP_EXECUTION_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
But I am still facing the issue.
PS: The Spring Batch application is deployed to AKS (Azure Kubernetes Service) and uses Azure SQL Server as the datasource.
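One thing I suspect on the "Connection is not available" side is pool sizing: with 16 job instances sharing one HikariCP pool at Spring Boot's default size of 10, exhaustion seems plausible. A sketch of the Hikari settings I could tune (standard Spring Boot property keys; the values are illustrative guesses, not a confirmed fix):

```properties
# Allow enough connections for 16 concurrent chunk-based steps,
# plus headroom for the JobRepository's own metadata transactions.
spring.datasource.hikari.maximum-pool-size=20
# Wait longer before giving up on acquiring a connection (default 30000 ms).
spring.datasource.hikari.connection-timeout=60000
```

I haven't confirmed whether this resolves the first exception, so pointers on sizing for this scenario would also help.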