Threading is complicated and my understanding isn't as deep as some others', but here's my attempt at a brief explanation of how Spring's @Scheduled annotation works:
Spring uses a TaskScheduler:
public interface TaskScheduler {
    ScheduledFuture schedule(Runnable task, Trigger trigger);
    ScheduledFuture schedule(Runnable task, Date startTime);
    ScheduledFuture scheduleAtFixedRate(Runnable task, Date startTime, long period);
    ScheduledFuture scheduleAtFixedRate(Runnable task, long period);
    ScheduledFuture scheduleWithFixedDelay(Runnable task, Date startTime, long delay);
    ScheduledFuture scheduleWithFixedDelay(Runnable task, long delay);
}
https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/scheduling.html#scheduling-task-scheduler
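For reference, a typical use of the annotation looks something like the sketch below (the class and method names are made up for illustration):

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
class SchedulingConfig {
}

@Component
class ReportTask {

    // Runs every 5 seconds; Spring wraps this method in a Runnable
    // and hands it to the TaskScheduler.
    @Scheduled(fixedRate = 5000)
    public void generateReport() {
        System.out.println("Running on " + Thread.currentThread().getName());
    }
}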
The TaskScheduler submits the annotated code, i.e. the task code, to a high-level concurrency object called an executor. The executor class is ThreadPoolTaskExecutor. That class submits tasks to the thread pool to be run by the first available thread in the pool. The thread pool size you set determines how many active threads you can have. If you set allowCoreThreadTimeOut to true, then threads in the pool that have had no work to do within their timeout interval will be terminated.
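If you want to control the pool yourself, you can declare the executor explicitly. Here's a minimal sketch (the bean name, pool sizes and timeout are arbitrary values for illustration):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
class ExecutorConfig {

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);               // threads kept in the pool
        executor.setMaxPoolSize(8);                // upper bound if the queue fills up
        executor.setQueueCapacity(100);            // tasks queued when all threads are busy
        executor.setAllowCoreThreadTimeOut(true);  // let idle core threads be terminated
        executor.setKeepAliveSeconds(60);          // the idle timeout interval
        return executor;
    }
}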
Spring uses a ThreadPoolTaskExecutor to manage the thread pool:
https://github.com/spring-projects/spring-framework/blob/master/spring-context/src/main/java/org/springframework/scheduling/concurrent/ThreadPoolTaskExecutor.java
Keeping a pool of threads alive avoids the delay that would otherwise be incurred waiting for a new thread to be created for each task. See this question for some more info.
Ultimately, the java.lang.Thread class runs the Runnable instances created by the ThreadPoolTaskExecutor (a Callable is wrapped in a Runnable FutureTask first). A Thread is constructed with a Runnable target, and its run() method simply invokes that target, i.e. the code you want the thread to run:
public Thread(Runnable target) {
    init(null, target, "Thread-" + nextThreadNum(), 0);
}

private void init(ThreadGroup g, Runnable target, String name,
                  long stackSize, AccessControlContext acc) {
    ...
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/tip/src/share/classes/java/lang/Thread.java
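And run() itself just delegates to the Runnable target passed into the constructor; from the same source file it is essentially:

@Override
public void run() {
    if (target != null) {
        target.run();
    }
}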
The actual switching between threads, i.e. the context switch, is OS-dependent. In general, threads are divided among the CPUs, and each CPU cycles through its threads: it gives a thread a slice of work, pauses it, and switches to another, over and over until the tasks are complete.
does it release the thread to pool before its execution is finished?
(for example in case of context switch etc.) or this thread is used
until the end of the execution?
A context switch can definitely pause the Runnable in the middle of an operation, but the task stays bound to the same worker thread until it finishes; the thread only goes back to the pool to pick up new work once the Runnable completes. The threads in a thread pool are usually kept alive until there's no more work to be done.
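You can see this with a plain executor, no Spring involved. In the sketch below a pool of two worker threads handles six tasks; each task runs on a single worker thread from start to finish, and the same two threads are reused for the later tasks:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 6; i++) {
            final int taskId = i;
            pool.execute(() -> {
                // The whole task runs on one worker thread, even if the OS
                // context-switches that thread in and out while it works.
                System.out.println("Task " + taskId + " ran on " + Thread.currentThread().getName());
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}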
Here's more info from the Oracle documentation that explains thread pools:
Most of the executor implementations in java.util.concurrent use
thread pools, which consist of worker threads. This kind of thread
exists separately from the Runnable and Callable tasks it executes and
is often used to execute multiple tasks.
Using worker threads minimizes the overhead due to thread creation.
Thread objects use a significant amount of memory, and in a
large-scale application, allocating and deallocating many thread
objects creates a significant memory management overhead.
One common type of thread pool is the fixed thread pool. This type of
pool always has a specified number of threads running; if a thread is
somehow terminated while it is still in use, it is automatically
replaced with a new thread. Tasks are submitted to the pool via an
internal queue, which holds extra tasks whenever there are more active
tasks than threads.
An important advantage of the fixed thread pool is that applications
using it degrade gracefully. To understand this, consider a web server
application where each HTTP request is handled by a separate thread.
If the application simply creates a new thread for every new HTTP
request, and the system receives more requests than it can handle
immediately, the application will suddenly stop responding to all
requests when the overhead of all those threads exceed the capacity of
the system. With a limit on the number of the threads that can be
created, the application will not be servicing HTTP requests as
quickly as they come in, but it will be servicing them as quickly as
the system can sustain.
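The JDK's fixed thread pool is built along exactly those lines; Executors.newFixedThreadPool is (roughly) just a ThreadPoolExecutor with equal core and maximum sizes plus an internal queue that holds the extra tasks:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}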