
I have a Spring Batch job that reads from a database and writes to a file after doing some processing, in a chunk-based step. My requirement is to run around 16 instances of the job in parallel at the same time, each with different job parameters.
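For reference, the step has roughly this shape (a simplified sketch; the type and bean names here are illustrative, not my actual code):

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.context.annotation.Bean;

@Bean
public Step exportStep(StepBuilderFactory steps,
                       JdbcCursorItemReader<Record> reader,
                       ItemProcessor<Record, Record> processor,
                       FlatFileItemWriter<Record> writer) {
    // chunk-based step: read from the database, process, write to a file
    return steps.get("exportStep")
            .<Record, Record>chunk(100)
            .reader(reader)
            .processor(processor)
            .writer(writer)
            .build();
}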

But I've been facing a couple of issues while doing so.

1.

Could not open JDBC Connection for transaction; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available.

2.

Could not increment identity; nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 124) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

I've tried the solutions provided in the linked GitHub issue, by setting the isolation level and altering the metadata tables as shown below.

Set the isolationLevelForCreate like this:

// relax the default SERIALIZABLE isolation used when creating job executions
JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setIsolationLevelForCreate("ISOLATION_REPEATABLE_READ");

Have the DBA add a clustered primary key to each of the SEQ tables like this (JET is the schema that I put the repository tables in):

ALTER TABLE [JET].[BATCH_JOB_EXECUTION_SEQ]
ADD CONSTRAINT [BATCH_JOB_EXECUTION_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO

ALTER TABLE [JET].[BATCH_JOB_SEQ]
ADD CONSTRAINT [BATCH_JOB_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO

ALTER TABLE [JET].[BATCH_STEP_EXECUTION_SEQ]
ADD CONSTRAINT [BATCH_STEP_EXECUTION_SEQ_PK] PRIMARY KEY CLUSTERED ([ID] ASC)
GO

But I am still facing the issue.

PS: The Spring Batch application is deployed to AKS (Azure Kubernetes Service), using Azure SQL Server as the datasource.

    To investigate a deadlock, you inspect the deadlock graph: it tells you which resources caused the deadlock, and that is usually enough to work out how to fix it. Otherwise you are just guessing and hoping to hit the jackpot. Also, it's quite possible that you cannot remove all deadlocks, in which case a retry system is necessary. – Dale K Mar 11 '21 at 07:25
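A retry wrapper along the lines of that comment could look like this minimal sketch (assuming Spring Retry is on the classpath; the helper class name is illustrative):

import java.util.Collections;
import java.util.concurrent.Callable;

import org.springframework.dao.DeadlockLoserDataAccessException;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class DeadlockRetry {

    // Retry the operation up to 3 times, but only when Spring has translated the
    // failure to DeadlockLoserDataAccessException (SQL Server deadlock victim, error 1205).
    // The third constructor argument tells the policy to traverse nested causes,
    // since the deadlock exception is usually wrapped.
    public static <T> T withRetry(Callable<T> operation) throws Exception {
        RetryTemplate template = new RetryTemplate();
        template.setRetryPolicy(new SimpleRetryPolicy(3,
                Collections.singletonMap(DeadlockLoserDataAccessException.class, true),
                true));
        return template.execute(context -> operation.call());
    }
}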

1 Answer


Based on the discussion in https://github.com/spring-projects/spring-batch/issues/1448, the issue seems to be caused by the SqlServerMaxValueIncrementer from Spring Framework not using SQL Server's native sequences. Here is an excerpt from its Javadoc:

There should be one sequence table per table that needs an auto-generated key.

Example:

create table tab (id int not null primary key, text varchar(100))
create table tab_sequence (id bigint identity)
insert into tab_sequence default values

This could be because SQL Server did not support sequences until SQL Server 2012, and I guess that's why Spring Batch uses tables to emulate sequences for MS SQL Server.

I suggest you try to change the default DDL to use sequences instead of tables:

CREATE SEQUENCE BATCH_STEP_EXECUTION_SEQ;
CREATE SEQUENCE BATCH_JOB_EXECUTION_SEQ;
CREATE SEQUENCE BATCH_JOB_SEQ;

These are the default sequence definitions based on MS SQL Server's docs. They should work as-is, but you can customize them if needed.

You might also need to provide a custom DataFieldMaxValueIncrementer based on sequences (since the one from Spring Framework uses tables) and register it in Spring Batch through a DataFieldMaxValueIncrementerFactory (see JobRepositoryFactoryBean#setIncrementerFactory).
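Here is a minimal sketch of what that could look like (the class name SqlServerSequenceIncrementer is mine, and I am assuming the three sequences above already exist; recent Spring Framework versions also ship a SqlServerSequenceMaxValueIncrementer that you may be able to use directly):

import javax.sql.DataSource;

import org.springframework.jdbc.support.incrementer.AbstractSequenceMaxValueIncrementer;

// Sequence-backed incrementer: AbstractSequenceMaxValueIncrementer only needs
// the statement that fetches the next value from the named sequence.
public class SqlServerSequenceIncrementer extends AbstractSequenceMaxValueIncrementer {

    public SqlServerSequenceIncrementer(DataSource dataSource, String sequenceName) {
        super(dataSource, sequenceName);
    }

    @Override
    protected String getSequenceQuery() {
        return "select next value for " + getIncrementerName();
    }
}

You could then register it by overriding the default (table-based) factory:

import javax.sql.DataSource;

import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.batch.item.database.support.DefaultDataFieldMaxValueIncrementerFactory;
import org.springframework.jdbc.support.incrementer.DataFieldMaxValueIncrementer;

// Return the sequence-backed incrementer for every metadata sequence name
// (BATCH_JOB_SEQ, BATCH_JOB_EXECUTION_SEQ, BATCH_STEP_EXECUTION_SEQ).
DefaultDataFieldMaxValueIncrementerFactory incrementerFactory =
        new DefaultDataFieldMaxValueIncrementerFactory(dataSource) {
            @Override
            public DataFieldMaxValueIncrementer getIncrementer(String incrementerType, String incrementerName) {
                return new SqlServerSequenceIncrementer(dataSource, incrementerName);
            }
        };

JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setIncrementerFactory(incrementerFactory);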

Mahmoud Ben Hassine