
I have a job with a single step consisting of a JdbcPagingItemReader, a custom processor, and a custom writer that writes to Elasticsearch.

Step configuration is

```java
stepBuilderFactory.get("step")
        .<Entity, WriteRequest>chunk(10000)
        .reader(reader)
        .processor(processor)
        .writer(elasticSearchWriter)
        .faultTolerant()
        .skipLimit(3)
        .skip(Exception.class)
        .build();
```

Job configuration is

```java
jobBuilderFactory.get("job")
        .preventRestart()
        .incrementer(new RunIdIncrementer())
        .listener(listener)
        .flow(step)
        .end()
        .build();
```
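With `chunk(10000)`, I would expect roughly one commit per 10,000 items read. A quick sanity check of that arithmetic (the read counts below are hypothetical, just to illustrate what COMMIT_COUNT should look like):

```java
public class CommitCountCheck {
    // Expected commits for a chunk-oriented step: ceil(readCount / chunkSize)
    static long expectedCommits(long readCount, long chunkSize) {
        return (readCount + chunkSize - 1) / chunkSize;
    }

    public static void main(String[] args) {
        long readCount = 100_000; // hypothetical READ_COUNT
        System.out.println(expectedCommits(readCount, 10_000)); // -> 10
        System.out.println(expectedCommits(readCount, 1_000));  // -> 100
        // A COMMIT_COUNT close to READ_COUNT instead would mean items
        // are effectively being committed one at a time.
    }
}
```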

This job is triggered by a scheduled Quartz job every few mins.

When running this in one environment, the step completes successfully, but the job stays in STATUS=COMPLETED with EXIT_STATUS=UNKNOWN for a very long time, usually 3-5 hours, before it finally completes.

There are no logs produced during this inactive period.

One observation is that COMMIT_COUNT in BATCH_STEP_EXECUTION is almost equal to READ_COUNT, whereas it should normally depend on the chunk size. **Also, I could see the writer writing products one by one instead of writing whole chunks.**
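For context on why I find this suspicious: here is a minimal plain-Java sketch (not Spring Batch itself, and the item values are made up) of how a fault-tolerant step's "scan" behavior can inflate the commit count. When a chunk write fails with a skippable exception, the chunk is re-written one item per transaction to isolate the bad item, so one commit can turn into up to chunk-size commits:

```java
import java.util.List;

public class ChunkScanSketch {
    // Simulates writing one chunk; returns the number of commits performed.
    static int writeChunk(List<String> chunk) {
        try {
            writeAll(chunk); // normal path: the whole chunk in one commit
            return 1;
        } catch (RuntimeException e) {
            int commits = 0; // scan path: one transaction per item
            for (String item : chunk) {
                try {
                    writeAll(List.of(item));
                    commits++;
                } catch (RuntimeException skipped) {
                    // item skipped, counts against the skip limit
                }
            }
            return commits;
        }
    }

    // Stand-in for the real writer; fails on "bad" items.
    static void writeAll(List<String> items) {
        for (String i : items) {
            if (i.startsWith("bad")) {
                throw new RuntimeException("write failed: " + i);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(writeChunk(List.of("a", "b")));        // -> 1
        System.out.println(writeChunk(List.of("a", "bad", "c"))); // -> 2
    }
}
```

I mention this because it would explain a COMMIT_COUNT near READ_COUNT, but I see no skip-related errors in my logs, so I am not sure it applies here.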

When running the job on my local machine, it works just fine.

Any idea why this might be happening?

I tried reducing the chunk size to 1000. The issue is now less frequent, but COMMIT_COUNT still climbs much higher than expected.
