49

I have a Java program that executes from Spring Quartz every 20 seconds. Sometimes it takes just a few seconds to execute, but as the data gets bigger I'm sure it will run for 20 seconds or more.

How can I prevent Quartz from firing/triggering the job while one instance is still executing? Firing two jobs performing the same operations on a database would not be good. Is there a way I can do some kind of synchronization?

Jordi Castilla
ant

7 Answers

147

Quartz 1

If you change your class to implement StatefulJob instead of Job, Quartz will take care of this for you. From the StatefulJob javadoc:

stateful jobs are not allowed to execute concurrently, which means new triggers that occur before the completion of the execute(xx) method will be delayed.

StatefulJob extends Job and does not add any new methods, so all you need to do to get the behaviour you want is change this:

public class YourJob implements org.quartz.Job {
    public void execute(JobExecutionContext context) {/*implementation omitted*/}
}

To this:

public class YourJob implements org.quartz.StatefulJob {
    public void execute(JobExecutionContext context) {/*implementation omitted*/}
}

Quartz 2

In version 2.0 of Quartz, StatefulJob is deprecated. It is now recommended to use annotations instead, e.g.

@DisallowConcurrentExecution
public class YourJob implements org.quartz.Job {
    public void execute(JobExecutionContext context) {/*implementation omitted*/}
}
Tome
Dónal
  • With this solution the attempted job instances queue up, right? How do you prevent them from queuing up? – Jay Sullivan Oct 17 '13 at 00:17
  • This is the correct answer. Why has the OP accepted another answer that just proposes an alternative approach? – Saeed Neamati Dec 09 '13 at 13:43
  • @Donal How does the DisallowConcurrentExecution annotation maintain non-concurrency between multiple server instances, or does it not? – greperror Jan 17 '18 at 13:08
  • @greperror If you want to prevent concurrent execution across multiple server instances, check out `@PersistJobDataAfterExecution` – Dónal Jan 18 '18 at 11:55
32

If all you need to do is fire every 20 seconds, Quartz is serious overkill. The java.util.concurrent.ScheduledExecutorService should be perfectly sufficient for that job.

The ScheduledExecutorService also provides two semantics for scheduling. "fixed rate" will attempt to run your job every 20 seconds regardless of overlap, whereas "fixed delay" will attempt to leave 20 seconds between the end of the first job and the start of the next. If you want to avoid overlap, then fixed-delay is safest.
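
For illustration, a minimal sketch of the fixed-delay variant (the class name and task body are placeholders):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class FixedDelayExample {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            // scheduleWithFixedDelay waits 20 seconds after each run *finishes*,
            // so two runs of the task can never overlap.
            scheduler.scheduleWithFixedDelay(() -> {
                // do the database work here
            }, 0, 20, TimeUnit.SECONDS);
        }
    }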

skaffman
  • Or, of course, the Spring `TaskScheduler` if you want to stay within Spring. – Michael Piefel Jun 09 '12 at 10:26
  • True if not in a cluster – David Mann Oct 16 '13 at 16:58
  • It appears that later versions of `ScheduledExecutorService` will never concurrently execute, according to the [documentation](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledExecutorService.html#scheduleAtFixedRate(java.lang.Runnable,%20long,%20long,%20java.util.concurrent.TimeUnit)): "If any execution of this task takes longer than its period, then subsequent executions may start late, but **will not concurrently execute**." – Sovietaced Aug 06 '20 at 01:17
21

Just in case anyone references this question, StatefulJob has been deprecated. They now suggest you use annotations instead...

@PersistJobDataAfterExecution
@DisallowConcurrentExecution
public class TestJob implements Job {
    public void execute(JobExecutionContext context) {/*implementation omitted*/}
}

Here is what those annotations mean (from the Quartz documentation):

The annotations cause behavior just as their names describe - multiple instances of the job will not be allowed to run concurrently (consider a case where a job has code in its execute() method that takes 34 seconds to run, but it is scheduled with a trigger that repeats every 30 seconds), and will have its JobDataMap contents re-persisted in the scheduler's JobStore after each execution. For the purposes of this example, only @PersistJobDataAfterExecution annotation is truly relevant, but it's always wise to use the @DisallowConcurrentExecution annotation with it, to prevent race-conditions on saved data.

ant
gshauger
4

If you use Spring Quartz, I think you have to configure it like this:

    <bean id="batchConsumerJob"class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
        <property name="targetObject" ref="myScheduler" />
        <property name="targetMethod" value="execute" />
        <property name="concurrent" value="false" />
    </bean>
Paul
  • works like a charm, may be in code like this `MethodInvokingJobDetailFactoryBean jobDetail = new MethodInvokingJobDetailFactoryBean(); jobDetail.setConcurrent(false); jobDetail.setBeanName("Job_" + jobId); jobDetail.setTargetObject(job); jobDetail.setTargetMethod("execute"); jobDetail.setConcurrent(false); ` – Amare Oct 03 '17 at 18:46
  • Sweet. Thank you. – Joel Sep 20 '18 at 18:56
3

I'm not sure you want synchronisation, since the second task will block until the first finishes, and you'll end up with a backlog. You could put the jobs in a queue, but from your description it sounds like the queue may grow indefinitely.

I would investigate ReadWriteLocks, and let your task set a lock whilst it is running. Future tasks can inspect this lock, and exit immediately if an old task is still running. I've found from experience that that's the most reliable way to approach this.
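
A minimal sketch of that idea, using a plain `ReentrantLock` with `tryLock()` rather than a full read/write lock (names are illustrative):

    import java.util.concurrent.locks.ReentrantLock;

    public class GuardedTask {
        // Shared across runs; static is enough when everything lives in one JVM.
        private static final ReentrantLock RUNNING = new ReentrantLock();

        public void run() {
            if (!RUNNING.tryLock()) {
                // A previous run is still in progress: warn and exit immediately.
                System.out.println("Previous run still in progress, skipping this trigger");
                return;
            }
            try {
                // do the actual work
            } finally {
                RUNNING.unlock();
            }
        }
    }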

Perhaps generate a warning as well, so you know you're encountering problems, and increase the time interval accordingly?

Brian Agnew
1

You can use a semaphore. When the semaphore is taken, abandon the 2nd job and wait until the next fire time.
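
For illustration, a minimal sketch of that approach (class and method names are made up):

    import java.util.concurrent.Semaphore;

    public class SemaphoreGuardedTask {
        // A single permit means only one run at a time.
        private static final Semaphore PERMIT = new Semaphore(1);

        public void run() {
            if (!PERMIT.tryAcquire()) {
                return; // a run is already in flight; wait for the next fire time
            }
            try {
                // do the actual work
            } finally {
                PERMIT.release();
            }
        }
    }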

Mxyk
Salandur
0

Put them in a queue.

Even if the time exceeds 20 seconds, the current job should be finished and only then should the next one be fetched from the queue.

Or you can increase the interval to some reasonable amount.
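
A minimal sketch of the queue idea, using a single worker thread as the queue (names are illustrative); keep in mind that the queue can grow without bound if jobs routinely take longer than 20 seconds:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class QueuedJobRunner {
        // A single worker thread: submitted jobs queue up and run strictly one after another.
        private static final ExecutorService QUEUE = Executors.newSingleThreadExecutor();

        // Call this from the scheduled trigger; it returns immediately.
        public static void submitJob(Runnable job) {
            QUEUE.submit(job);
        }
    }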

Asad Khan