ThreadPoolExecutor

public class ThreadPoolExecutor extends AbstractExecutorService

An {@link ExecutorService} that executes each submitted task using
one of possibly several pooled threads, normally configured
using {@link Executors} factory methods.
Thread pools address two different problems: they usually
provide improved performance when executing large numbers of
asynchronous tasks, due to reduced per-task invocation overhead,
and they provide a means of bounding and managing the resources,
including threads, consumed when executing a collection of tasks.
Each ThreadPoolExecutor also maintains some basic
statistics, such as the number of completed tasks.
To be useful across a wide range of contexts, this class
provides many adjustable parameters and extensibility
hooks. However, programmers are urged to use the more convenient
{@link Executors} factory methods {@link
Executors#newCachedThreadPool} (unbounded thread pool, with
automatic thread reclamation), {@link Executors#newFixedThreadPool}
(fixed size thread pool) and {@link
Executors#newSingleThreadExecutor} (single background thread), that
preconfigure settings for the most common usage
scenarios. Otherwise, use the following guide when manually
configuring and tuning this class:
- Core and maximum pool sizes
- A ThreadPoolExecutor will automatically adjust the pool size
(see {@link ThreadPoolExecutor#getPoolSize})
according to the bounds set by corePoolSize
(see {@link ThreadPoolExecutor#getCorePoolSize}) and maximumPoolSize
(see {@link ThreadPoolExecutor#getMaximumPoolSize}).
When a new task is submitted in method {@link
ThreadPoolExecutor#execute}, and fewer than corePoolSize threads
are running, a new thread is created to handle the request, even if
other worker threads are idle. If there are more than
corePoolSize but less than maximumPoolSize threads running, a new
thread will be created only if the queue is full. By setting
corePoolSize and maximumPoolSize the same, you create a fixed-size
thread pool. By setting maximumPoolSize to an essentially unbounded
value such as Integer.MAX_VALUE, you allow the pool to
accommodate an arbitrary number of concurrent tasks. Most typically,
core and maximum pool sizes are set only upon construction, but they
may also be changed dynamically using {@link
ThreadPoolExecutor#setCorePoolSize} and {@link
ThreadPoolExecutor#setMaximumPoolSize}.
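For illustration, a minimal sketch of manual sizing; the specific values (2 core threads, 4 maximum, an 8-slot queue, a 60-second keep-alive) are arbitrary choices for this sketch, not recommendations:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizingExample {
    public static void main(String[] args) {
        // Keep up to 2 threads even when idle; add up to 2 more, but only
        // once the 8-slot queue is full.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4, 60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(8));

        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("task " + id + " on "
                        + Thread.currentThread().getName());
                }
            });
        }

        // Both bounds may also be changed after construction.
        pool.setMaximumPoolSize(8);
        pool.shutdown();
    }
}

Setting both bounds to the same value gives the fixed-size behavior described above.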
- On-demand construction
- By default, even core threads are initially created and
started only when new tasks arrive, but this can be overridden
dynamically using method {@link
ThreadPoolExecutor#prestartCoreThread} or
{@link ThreadPoolExecutor#prestartAllCoreThreads}.
You probably want to prestart threads if you construct the
pool with a non-empty queue.
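A brief sketch of prestarting; the pool parameters here are placeholders:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartExample {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());

        // Start all four core threads now instead of lazily on first execute().
        int started = pool.prestartAllCoreThreads();
        System.out.println("prestarted " + started
            + " threads, pool size = " + pool.getPoolSize());
        pool.shutdown();
    }
}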
- Creating new threads
- New threads are created using a {@link
java.util.concurrent.ThreadFactory}. If not otherwise specified, a
{@link Executors#defaultThreadFactory} is used, which creates threads that are all
in the same {@link ThreadGroup} and with the same
NORM_PRIORITY priority and non-daemon status. By supplying
a different ThreadFactory, you can alter the thread's name, thread
group, priority, daemon status, etc. If a ThreadFactory fails to create
a thread when asked by returning null from newThread,
the executor will continue, but might
not be able to execute any tasks.
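For example, a hypothetical factory (not part of this class) that only changes thread names and daemon status:

import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Names threads "prefix-1", "prefix-2", ... and marks them as daemons so
// that idle workers do not keep the JVM alive.
class NamedDaemonThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger count = new AtomicInteger(1);

    NamedDaemonThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-" + count.getAndIncrement());
        t.setDaemon(true);
        t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}

Such a factory can be passed to the constructors below or installed later via {@link ThreadPoolExecutor#setThreadFactory}.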
- Keep-alive times
- If the pool currently has more than corePoolSize threads,
excess threads will be terminated if they have been idle for more
than the keepAliveTime (see {@link
ThreadPoolExecutor#getKeepAliveTime}). This provides a means of
reducing resource consumption when the pool is not being actively
used. If the pool becomes more active later, new threads will be
constructed. This parameter can also be changed dynamically using
method {@link ThreadPoolExecutor#setKeepAliveTime}. Using a value
of Long.MAX_VALUE {@link TimeUnit#NANOSECONDS} effectively
disables idle threads from ever terminating prior to shut down. By
default, the keep-alive policy applies only when there are more
than corePoolSize threads. But method {@link
ThreadPoolExecutor#allowCoreThreadTimeOut(boolean)} can be used to apply
this time-out policy to core threads as well, so long as
the keepAliveTime value is non-zero.
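A small sketch of adjusting the keep-alive policy after construction; the ten-second value is arbitrary:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAliveExample {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 8, 30L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>(16));

        // Excess (non-core) threads now terminate after 10 seconds idle.
        pool.setKeepAliveTime(10L, TimeUnit.SECONDS);

        // Let the two core threads time out as well; this requires a
        // non-zero keep-alive time.
        pool.allowCoreThreadTimeOut(true);

        System.out.println("keep-alive = "
            + pool.getKeepAliveTime(TimeUnit.SECONDS) + "s");
        pool.shutdown();
    }
}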
- Queuing
- Any {@link BlockingQueue} may be used to transfer and hold
submitted tasks. The use of this queue interacts with pool sizing:
- If fewer than corePoolSize threads are running, the Executor
always prefers adding a new thread
rather than queuing.
- If corePoolSize or more threads are running, the Executor
always prefers queuing a request rather than adding a new
thread.
- If a request cannot be queued, a new thread is created unless
this would exceed maximumPoolSize, in which case, the task will be
rejected.
There are three general strategies for queuing:
- Direct handoffs. A good default choice for a work
queue is a {@link SynchronousQueue} that hands off tasks to threads
without otherwise holding them. Here, an attempt to queue a task
will fail if no threads are immediately available to run it, so a
new thread will be constructed. This policy avoids lockups when
handling sets of requests that might have internal dependencies.
Direct handoffs generally require unbounded maximumPoolSizes to
avoid rejection of new submitted tasks. This in turn admits the
possibility of unbounded thread growth when commands continue to
arrive on average faster than they can be processed.
- Unbounded queues. Using an unbounded queue (for
example a {@link LinkedBlockingQueue} without a predefined
capacity) will cause new tasks to wait in the queue when all
corePoolSize threads are busy. Thus, no more than corePoolSize
threads will ever be created. (And the value of the maximumPoolSize
therefore doesn't have any effect.) This may be appropriate when
each task is completely independent of others, so tasks cannot
affect each other's execution; for example, in a web page server.
While this style of queuing can be useful in smoothing out
transient bursts of requests, it admits the possibility of
unbounded work queue growth when commands continue to arrive on
average faster than they can be processed.
- Bounded queues. A bounded queue (for example, an
{@link ArrayBlockingQueue}) helps prevent resource exhaustion when
used with finite maximumPoolSizes, but can be more difficult to
tune and control. Queue sizes and maximum pool sizes may be traded
off for each other: Using large queues and small pools minimizes
CPU usage, OS resources, and context-switching overhead, but can
lead to artificially low throughput. If tasks frequently block (for
example if they are I/O bound), a system may be able to schedule
time for more threads than you otherwise allow. Use of small queues
generally requires larger pool sizes, which keeps CPUs busier but
may encounter unacceptable scheduling overhead, which also
decreases throughput.
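To make the three strategies concrete, the sketch below shows how each is commonly wired up; every size and timeout is an arbitrary placeholder, and the first two configurations mirror what the {@link Executors} factory methods produce.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueChoices {
    // Direct handoff: every task needs an immediately available thread,
    // so maximumPoolSize is effectively unbounded.
    static ThreadPoolExecutor directHandoff() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
            60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    }

    // Unbounded queue: never more than corePoolSize threads, so
    // maximumPoolSize is irrelevant, but the queue can grow without bound.
    static ThreadPoolExecutor unboundedQueue() {
        return new ThreadPoolExecutor(4, 4,
            0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    }

    // Bounded queue: both thread count and queue depth are capped, so
    // saturation is possible and rejected tasks must be handled.
    static ThreadPoolExecutor boundedQueue() {
        return new ThreadPoolExecutor(4, 16,
            30L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(100));
    }
}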
- Rejected tasks
- New tasks submitted in method {@link
ThreadPoolExecutor#execute} will be rejected when the
Executor has been shut down, and also when the Executor uses finite
bounds for both maximum threads and work queue capacity, and is
saturated. In either case, the execute method invokes the
{@link RejectedExecutionHandler#rejectedExecution} method of its
{@link RejectedExecutionHandler}. Four predefined handler policies
are provided:
- In the
default {@link ThreadPoolExecutor.AbortPolicy}, the handler throws a
runtime {@link RejectedExecutionException} upon rejection.
- In {@link
ThreadPoolExecutor.CallerRunsPolicy}, the thread that invokes
execute itself runs the task. This provides a simple
feedback control mechanism that will slow down the rate that new
tasks are submitted.
- In {@link ThreadPoolExecutor.DiscardPolicy},
a task that cannot be executed is simply dropped.
- In {@link
ThreadPoolExecutor.DiscardOldestPolicy}, if the executor is not
shut down, the task at the head of the work queue is dropped, and
then execution is retried (which can fail again, causing this to be
repeated.)
It is possible to define and use other kinds of {@link
RejectedExecutionHandler} classes. Doing so requires some care
especially when policies are designed to work only under particular
capacity or queuing policies.
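For example, a hypothetical handler (not one of the predefined policies) that throttles submitters by blocking until queue space is available:

import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Blocks the submitting thread until the work queue has room, rather than
// throwing. Note that a task enqueued this way after a concurrent shutdown
// may never run, so this is only a sketch of the extension point.
class BlockWhenSaturatedPolicy implements RejectedExecutionHandler {
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (executor.isShutdown())
            throw new RejectedExecutionException("executor has been shut down");
        try {
            executor.getQueue().put(r);   // waits for capacity
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("interrupted while enqueuing", ie);
        }
    }
}

Such a handler would be installed via the constructor or {@link ThreadPoolExecutor#setRejectedExecutionHandler}.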
- Hook methods
- This class provides protected overridable {@link
ThreadPoolExecutor#beforeExecute} and {@link
ThreadPoolExecutor#afterExecute} methods that are called before and
after execution of each task. These can be used to manipulate the
execution environment; for example, reinitializing ThreadLocals,
gathering statistics, or adding log entries. Additionally, method
{@link ThreadPoolExecutor#terminated} can be overridden to perform
any special processing that needs to be done once the Executor has
fully terminated.
If hook or callback methods throw
exceptions, internal worker threads may in turn fail and
abruptly terminate.
- Queue maintenance
- Method {@link ThreadPoolExecutor#getQueue} allows access to
the work queue for purposes of monitoring and debugging. Use of
this method for any other purpose is strongly discouraged. Two
supplied methods, {@link ThreadPoolExecutor#remove} and {@link
ThreadPoolExecutor#purge} are available to assist in storage
reclamation when large numbers of queued tasks become
cancelled.
- Finalization
- A pool that is no longer referenced in a program AND
has no remaining threads will be shut down
automatically. If you would like to ensure that unreferenced pools
are reclaimed even if users forget to call {@link
ThreadPoolExecutor#shutdown}, then you must arrange that unused
threads eventually die, by setting appropriate keep-alive times,
using a lower bound of zero core threads and/or setting {@link
ThreadPoolExecutor#allowCoreThreadTimeOut(boolean)}.
Extension example. Most extensions of this class
override one or more of the protected hook methods. For example,
here is a subclass that adds a simple pause/resume feature:
class PausableThreadPoolExecutor extends ThreadPoolExecutor {
private boolean isPaused;
private ReentrantLock pauseLock = new ReentrantLock();
private Condition unpaused = pauseLock.newCondition();
public PausableThreadPoolExecutor(...) { super(...); }
protected void beforeExecute(Thread t, Runnable r) {
super.beforeExecute(t, r);
pauseLock.lock();
try {
while (isPaused) unpaused.await();
} catch (InterruptedException ie) {
t.interrupt();
} finally {
pauseLock.unlock();
}
}
public void pause() {
pauseLock.lock();
try {
isPaused = true;
} finally {
pauseLock.unlock();
}
}
public void resume() {
pauseLock.lock();
try {
isPaused = false;
unpaused.signalAll();
} finally {
pauseLock.unlock();
}
}
}
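A possible use of the subclass above, assuming its elided constructor simply forwards the standard five pool parameters to super (that signature is an assumption of this sketch):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PauseDemo {
    public static void main(String[] args) throws InterruptedException {
        // Assumes PausableThreadPoolExecutor(int, int, long, TimeUnit, BlockingQueue).
        PausableThreadPoolExecutor pool =
            new PausableThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS,
                                           new LinkedBlockingQueue<Runnable>());
        pool.pause();                      // workers block in beforeExecute
        pool.execute(new Runnable() {
            public void run() { System.out.println("ran after resume"); }
        });
        Thread.sleep(100);                 // task stays unexecuted while paused
        pool.resume();                     // signals the unpaused condition
        pool.shutdown();
    }
}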
Fields Summary
---
private static final RuntimePermission shutdownPerm
    Permission for checking shutdown.
volatile int runState
    runState provides the main lifecycle control, taking on values:
      RUNNING:    Accept new tasks and process queued tasks
      SHUTDOWN:   Don't accept new tasks, but process queued tasks
      STOP:       Don't accept new tasks, don't process queued tasks,
                  and interrupt in-progress tasks
      TERMINATED: Same as STOP, plus all threads have terminated
    The numerical order among these values matters, to allow
    ordered comparisons. The runState monotonically increases over
    time, but need not hit each state. The transitions are:
      RUNNING -> SHUTDOWN
        On invocation of shutdown(), perhaps implicitly in finalize()
      (RUNNING or SHUTDOWN) -> STOP
        On invocation of shutdownNow()
      SHUTDOWN -> TERMINATED
        When both queue and pool are empty
      STOP -> TERMINATED
        When pool is empty
static final int RUNNING
static final int SHUTDOWN
static final int STOP
static final int TERMINATED
private final BlockingQueue workQueue
    The queue used for holding tasks and handing off to worker
    threads. Note that when using this queue, we do not require
    that workQueue.poll() returning null necessarily means that
    workQueue.isEmpty(), so we must sometimes check both. This
    accommodates special-purpose queues such as DelayQueues for
    which poll() is allowed to return null even if it may later
    return non-null when delays expire.
private final ReentrantLock mainLock
    Lock held on updates to poolSize, corePoolSize,
    maximumPoolSize, runState, and workers set.
private final Condition termination
    Wait condition to support awaitTermination.
private final HashSet workers
    Set containing all worker threads in pool. Accessed only when
    holding mainLock.
private volatile long keepAliveTime
    Timeout in nanoseconds for idle threads waiting for work.
    Threads use this timeout when there are more than corePoolSize
    present or if allowCoreThreadTimeOut. Otherwise they wait
    forever for new work.
private volatile boolean allowCoreThreadTimeOut
    If false (default) core threads stay alive even when idle. If
    true, core threads use keepAliveTime to time out waiting for
    work.
private volatile int corePoolSize
    Core pool size, updated only while holding mainLock, but
    volatile to allow concurrent readability even during updates.
private volatile int maximumPoolSize
    Maximum pool size, updated only while holding mainLock but
    volatile to allow concurrent readability even during updates.
private volatile int poolSize
    Current pool size, updated only while holding mainLock but
    volatile to allow concurrent readability even during updates.
private volatile RejectedExecutionHandler handler
    Handler called when saturated or shutdown in execute.
private volatile ThreadFactory threadFactory
    Factory for new threads. All threads are created using this
    factory (via method addThread). All callers must be prepared
    for addThread to fail by returning null, which may reflect a
    system or user's policy limiting the number of threads. Even
    though it is not treated as an error, failure to create threads
    may result in new tasks being rejected or existing ones
    remaining stuck in the queue. On the other hand, no special
    precautions exist to handle OutOfMemoryErrors that might be
    thrown while trying to create threads, since there is generally
    no recourse from within this class.
private int largestPoolSize
    Tracks largest attained pool size.
private long completedTaskCount
    Counter for completed tasks. Updated only on termination of
    worker threads.
private static final RejectedExecutionHandler defaultHandler
    The default rejected execution handler.
Constructors Summary
---
public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue workQueue)
Creates a new ThreadPoolExecutor with the given initial
parameters and default thread factory and rejected execution handler.
It may be more convenient to use one of the {@link Executors} factory
methods instead of this general purpose constructor.
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
Executors.defaultThreadFactory(), defaultHandler);
public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue workQueue, ThreadFactory threadFactory)
Creates a new ThreadPoolExecutor with the given initial
parameters and default rejected execution handler.
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
threadFactory, defaultHandler);
public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue workQueue, RejectedExecutionHandler handler)
Creates a new ThreadPoolExecutor with the given initial
parameters and default thread factory.
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
Executors.defaultThreadFactory(), handler);
public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler)
Creates a new ThreadPoolExecutor with the given initial
parameters.
if (corePoolSize < 0 ||
maximumPoolSize <= 0 ||
maximumPoolSize < corePoolSize ||
keepAliveTime < 0)
throw new IllegalArgumentException();
if (workQueue == null || threadFactory == null || handler == null)
throw new NullPointerException();
this.corePoolSize = corePoolSize;
this.maximumPoolSize = maximumPoolSize;
this.workQueue = workQueue;
this.keepAliveTime = unit.toNanos(keepAliveTime);
this.threadFactory = threadFactory;
this.handler = handler;
|
Methods Summary
---
private boolean addIfUnderCorePoolSize(java.lang.Runnable firstTask)
Creates and starts a new thread running firstTask as its first
task, only if fewer than corePoolSize threads are running
and the pool is not shut down.
Thread t = null;
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
if (poolSize < corePoolSize && runState == RUNNING)
t = addThread(firstTask);
} finally {
mainLock.unlock();
}
if (t == null)
return false;
t.start();
return true;
private boolean addIfUnderMaximumPoolSize(java.lang.Runnable firstTask)
Creates and starts a new thread running firstTask as its first
task, only if fewer than maximumPoolSize threads are running
and pool is not shut down.
Thread t = null;
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
if (poolSize < maximumPoolSize && runState == RUNNING)
t = addThread(firstTask);
} finally {
mainLock.unlock();
}
if (t == null)
return false;
t.start();
return true;
private java.lang.Thread addThread(java.lang.Runnable firstTask)
Creates and returns a new thread running firstTask as its first
task. Call only while holding mainLock.
Worker w = new Worker(firstTask);
Thread t = threadFactory.newThread(w);
if (t != null) {
w.thread = t;
workers.add(w);
int nt = ++poolSize;
if (nt > largestPoolSize)
largestPoolSize = nt;
}
return t;
protected void afterExecute(java.lang.Runnable r, java.lang.Throwable t)
Method invoked upon completion of execution of the given Runnable.
This method is invoked by the thread that executed the task. If
non-null, the Throwable is the uncaught RuntimeException
or Error that caused execution to terminate abruptly.
Note: When actions are enclosed in tasks (such as
{@link FutureTask}) either explicitly or via methods such as
submit, these task objects catch and maintain
computational exceptions, and so they do not cause abrupt
termination, and the internal exceptions are not
passed to this method.
This implementation does nothing, but may be customized in
subclasses. Note: To properly nest multiple overridings, subclasses
should generally invoke super.afterExecute at the
beginning of this method.
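For instance, a sketch of an override that surfaces such wrapped failures for logging; the class name and the logging itself are illustrative, not part of this API:

import java.util.concurrent.*;

class LoggingThreadPoolExecutor extends ThreadPoolExecutor {
    LoggingThreadPoolExecutor(int core, int max, long keepAlive, TimeUnit unit,
                              BlockingQueue<Runnable> queue) {
        super(core, max, keepAlive, unit, queue);
    }

    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        // For tasks wrapped in a Future (e.g. via submit), t is null and the
        // failure, if any, is stored inside the Future itself.
        if (t == null && r instanceof Future<?>) {
            try {
                ((Future<?>) r).get();
            } catch (CancellationException ce) {
                t = ce;
            } catch (ExecutionException ee) {
                t = ee.getCause();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
        if (t != null)
            System.err.println("task failed: " + t);
    }
}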
public void allowCoreThreadTimeOut(boolean value)
Sets the policy governing whether core threads may time out and
terminate if no tasks arrive within the keep-alive time, being
replaced if needed when new tasks arrive. When false, core
threads are never terminated due to lack of incoming
tasks. When true, the same keep-alive policy applying to
non-core threads applies also to core threads. To avoid
continual thread replacement, the keep-alive time must be
greater than zero when setting true. This method
should in general be called before the pool is actively used.
if (value && keepAliveTime <= 0)
throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
allowCoreThreadTimeOut = value;
public boolean allowsCoreThreadTimeOut()
Returns true if this pool allows core threads to time out and
terminate if no tasks arrive within the keepAlive time, being
replaced if needed when new tasks arrive. When true, the same
keep-alive policy applying to non-core threads applies also to
core threads. When false (the default), core threads are never
terminated due to lack of incoming tasks.
return allowCoreThreadTimeOut;
public boolean awaitTermination(long timeout, java.util.concurrent.TimeUnit unit)
long nanos = unit.toNanos(timeout);
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
for (;;) {
if (runState == TERMINATED)
return true;
if (nanos <= 0)
return false;
nanos = termination.awaitNanos(nanos);
}
} finally {
mainLock.unlock();
}
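awaitTermination is typically combined with shutdown and shutdownNow in the usual two-phase shutdown pattern; a sketch, with arbitrary 60-second timeouts:

import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class ShutdownHelper {
    static void shutdownAndAwaitTermination(ThreadPoolExecutor pool) {
        pool.shutdown();                     // stop accepting new tasks
        try {
            // Give queued tasks a chance to finish.
            if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                pool.shutdownNow();          // interrupt anything still running
                // Wait again for tasks to respond to interruption.
                if (!pool.awaitTermination(60, TimeUnit.SECONDS))
                    System.err.println("Pool did not terminate");
            }
        } catch (InterruptedException ie) {
            pool.shutdownNow();              // re-cancel if we were interrupted
            Thread.currentThread().interrupt();
        }
    }
}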
protected void beforeExecute(java.lang.Thread t, java.lang.Runnable r)
Method invoked prior to executing the given Runnable in the
given thread. This method is invoked by thread t that
will execute task r, and may be used to re-initialize
ThreadLocals, or to perform logging.
This implementation does nothing, but may be customized in
subclasses. Note: To properly nest multiple overridings, subclasses
should generally invoke super.beforeExecute at the end of
this method.
private java.util.List drainQueue()
Drains the task queue into a new list. Used by shutdownNow.
Call only while holding main lock.
List<Runnable> taskList = new ArrayList<Runnable>();
workQueue.drainTo(taskList);
/*
* If the queue is a DelayQueue or any other kind of queue
* for which poll or drainTo may fail to remove some elements,
* we need to manually traverse and remove remaining tasks.
* To guarantee atomicity wrt other threads using this queue,
* we need to create a new iterator for each element removed.
*/
while (!workQueue.isEmpty()) {
Iterator<Runnable> it = workQueue.iterator();
try {
if (it.hasNext()) {
Runnable r = it.next();
if (workQueue.remove(r))
taskList.add(r);
}
} catch (ConcurrentModificationException ignore) {
}
}
return taskList;
private void ensureQueuedTaskHandled(java.lang.Runnable command)
Rechecks state after queuing a task. Called from execute when
pool state has been observed to change after queuing a task. If
the task was queued concurrently with a call to shutdownNow,
and is still present in the queue, this task must be removed
and rejected to preserve shutdownNow guarantees. Otherwise,
this method ensures (unless addThread fails) that there is at
least one live thread to handle this task
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
boolean reject = false;
Thread t = null;
try {
int state = runState;
if (state != RUNNING && workQueue.remove(command))
reject = true;
else if (state < STOP &&
poolSize < Math.max(corePoolSize, 1) &&
!workQueue.isEmpty())
t = addThread(null);
} finally {
mainLock.unlock();
}
if (reject)
reject(command);
else if (t != null)
t.start();
public void execute(java.lang.Runnable command)
Executes the given task sometime in the future. The task
may execute in a new thread or in an existing pooled thread.
If the task cannot be submitted for execution, either because this
executor has been shutdown or because its capacity has been reached,
the task is handled by the current RejectedExecutionHandler.
if (command == null)
throw new NullPointerException();
if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command)) {
if (runState == RUNNING && workQueue.offer(command)) {
if (runState != RUNNING || poolSize == 0)
ensureQueuedTaskHandled(command);
}
else if (!addIfUnderMaximumPoolSize(command))
reject(command); // is shutdown or saturated
}
protected void finalize()
Invokes shutdown when this executor is no longer
referenced.
shutdown();
public int getActiveCount()
Returns the approximate number of threads that are actively
executing tasks.
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
int n = 0;
for (Worker w : workers) {
if (w.isActive())
++n;
}
return n;
} finally {
mainLock.unlock();
}
public long getCompletedTaskCount()
Returns the approximate total number of tasks that have
completed execution. Because the states of tasks and threads
may change dynamically during computation, the returned value
is only an approximation, but one that does not ever decrease
across successive calls.
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
long n = completedTaskCount;
for (Worker w : workers)
n += w.completedTasks;
return n;
} finally {
mainLock.unlock();
}
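For instance, a hypothetical monitor that periodically samples these statistics; the one-second period, class name, and output format are illustrative:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Samples pool statistics once per second on a separate single-thread scheduler.
class PoolMonitor implements Runnable {
    private final ThreadPoolExecutor pool;

    PoolMonitor(ThreadPoolExecutor pool) { this.pool = pool; }

    public void run() {
        System.out.printf("size=%d active=%d queued=%d completed=%d%n",
            pool.getPoolSize(),
            pool.getActiveCount(),
            pool.getQueue().size(),
            pool.getCompletedTaskCount());
    }

    static ScheduledExecutorService start(ThreadPoolExecutor pool) {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new PoolMonitor(pool), 1, 1, TimeUnit.SECONDS);
        return scheduler;
    }
}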
public int getCorePoolSize()
Returns the core number of threads.
return corePoolSize;
public long getKeepAliveTime(java.util.concurrent.TimeUnit unit)
Returns the thread keep-alive time, which is the amount of time
that threads in excess of the core pool size may remain
idle before being terminated.
return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
public int getLargestPoolSize()
Returns the largest number of threads that have ever
simultaneously been in the pool.
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
return largestPoolSize;
} finally {
mainLock.unlock();
}
public int getMaximumPoolSize()
Returns the maximum allowed number of threads.
return maximumPoolSize;
public int getPoolSize()
Returns the current number of threads in the pool.
return poolSize;
public java.util.concurrent.BlockingQueue getQueue()
Returns the task queue used by this executor. Access to the
task queue is intended primarily for debugging and monitoring.
This queue may be in active use. Retrieving the task queue
does not prevent queued tasks from executing.
return workQueue;
public java.util.concurrent.RejectedExecutionHandler getRejectedExecutionHandler()
Returns the current handler for unexecutable tasks.
return handler;
java.lang.Runnable getTask()
Gets the next task for a worker thread to run. The general
approach is similar to execute() in that worker threads trying
to get a task to run do so on the basis of prevailing state
accessed outside of locks. This may cause them to choose the
"wrong" action, such as trying to exit because no tasks
appear to be available, or entering a take when the pool is in
the process of being shut down. These potential problems are
countered by (1) rechecking pool state (in workerCanExit)
before giving up, and (2) interrupting other workers upon
shutdown, so they can recheck state. All other user-based state
changes (to allowCoreThreadTimeOut etc) are OK even when
performed asynchronously wrt getTask.
for (;;) {
try {
int state = runState;
if (state > SHUTDOWN)
return null;
Runnable r;
if (state == SHUTDOWN) // Help drain queue
r = workQueue.poll();
else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);
else
r = workQueue.take();
if (r != null)
return r;
if (workerCanExit()) {
if (runState >= SHUTDOWN) // Wake up others
interruptIdleWorkers();
return null;
}
// Else retry
} catch (InterruptedException ie) {
// On interruption, re-check runState
}
}
public long getTaskCount()
Returns the approximate total number of tasks that have ever been
scheduled for execution. Because the states of tasks and
threads may change dynamically during computation, the returned
value is only an approximation.
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
long n = completedTaskCount;
for (Worker w : workers) {
n += w.completedTasks;
if (w.isActive())
++n;
}
return n + workQueue.size();
} finally {
mainLock.unlock();
}
public java.util.concurrent.ThreadFactory getThreadFactory()
Returns the thread factory used to create new threads.
return threadFactory;
void interruptIdleWorkers()
Wakes up all threads that might be waiting for tasks so they
can check for termination. Note: this method is also called by
ScheduledThreadPoolExecutor.
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
for (Worker w : workers)
w.interruptIfIdle();
} finally {
mainLock.unlock();
}
public boolean isShutdown()
return runState != RUNNING;
public boolean isTerminated()
return runState == TERMINATED;
public boolean isTerminating()
Returns true if this executor is in the process of terminating
after shutdown or shutdownNow but has not
completely terminated. This method may be useful for
debugging. A return of true reported a sufficient
period after shutdown may indicate that submitted tasks have
ignored or suppressed interruption, causing this executor not
to properly terminate.
int state = runState;
return state == SHUTDOWN || state == STOP;
public int prestartAllCoreThreads()
Starts all core threads, causing them to idly wait for work. This
overrides the default policy of starting core threads only when
new tasks are executed.
int n = 0;
while (addIfUnderCorePoolSize(null))
++n;
return n;
public boolean prestartCoreThread()
Starts a core thread, causing it to idly wait for work. This
overrides the default policy of starting core threads only when
new tasks are executed. This method will return false
if all core threads have already been started.
return addIfUnderCorePoolSize(null);
public void purge()
Tries to remove from the work queue all {@link Future}
tasks that have been cancelled. This method can be useful as a
storage reclamation operation, that has no other impact on
functionality. Cancelled tasks are never executed, but may
accumulate in work queues until worker threads can actively
remove them. Invoking this method instead tries to remove them now.
However, this method may fail to remove tasks in
the presence of interference by other threads.
// Fail if we encounter interference during traversal
try {
Iterator<Runnable> it = getQueue().iterator();
while (it.hasNext()) {
Runnable r = it.next();
if (r instanceof Future<?>) {
Future<?> c = (Future<?>)r;
if (c.isCancelled())
it.remove();
}
}
}
catch (ConcurrentModificationException ex) {
return;
}
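A brief sketch of the intended usage: cancel submitted Futures and then reclaim their queue slots; the task body and counts are illustrative only:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PurgeExample {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 1,
            0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

        List<Future<Integer>> futures = new ArrayList<Future<Integer>>();
        for (int i = 0; i < 100; i++) {
            futures.add(pool.submit(new Callable<Integer>() {
                public Integer call() throws Exception {
                    Thread.sleep(100);
                    return 42;
                }
            }));
        }

        // Cancel everything still waiting; the cancelled FutureTasks stay
        // in the work queue...
        for (Future<Integer> f : futures)
            f.cancel(false);

        // ...until purge() traverses the queue and removes them.
        pool.purge();
        System.out.println("queued after purge: " + pool.getQueue().size());
        pool.shutdown();
    }
}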
void reject(java.lang.Runnable command)
Invokes the rejected execution handler for the given command.
handler.rejectedExecution(command, this);
public boolean remove(java.lang.Runnable task)
Removes this task from the executor's internal queue if it is
present, thus causing it not to be run if it has not already
started.
This method may be useful as one part of a cancellation
scheme. It may fail to remove tasks that have been converted
into other forms before being placed on the internal queue. For
example, a task entered using submit might be
converted into a form that maintains Future status.
However, in such cases, method {@link ThreadPoolExecutor#purge}
may be used to remove those Futures that have been cancelled.
return getQueue().remove(task);
public void setCorePoolSize(int corePoolSize)
Sets the core number of threads. This overrides any value set
in the constructor. If the new value is smaller than the
current value, excess existing threads will be terminated when
they next become idle. If larger, new threads will, if needed,
be started to execute any queued tasks.
if (corePoolSize < 0)
throw new IllegalArgumentException();
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
int extra = this.corePoolSize - corePoolSize;
this.corePoolSize = corePoolSize;
if (extra < 0) {
int n = workQueue.size(); // don't add more threads than tasks
while (extra++ < 0 && n-- > 0 && poolSize < corePoolSize) {
Thread t = addThread(null);
if (t != null)
t.start();
else
break;
}
}
else if (extra > 0 && poolSize > corePoolSize) {
try {
Iterator<Worker> it = workers.iterator();
while (it.hasNext() &&
extra-- > 0 &&
poolSize > corePoolSize &&
workQueue.remainingCapacity() == 0)
it.next().interruptIfIdle();
} catch (SecurityException ignore) {
// Not an error; it is OK if the threads stay live
}
}
} finally {
mainLock.unlock();
}
public void setKeepAliveTime(long time, java.util.concurrent.TimeUnit unit)
Sets the time limit for which threads may remain idle before
being terminated. If there are more than the core number of
threads currently in the pool, after waiting this amount of
time without processing a task, excess threads will be
terminated. This overrides any value set in the constructor.
if (time < 0)
throw new IllegalArgumentException();
if (time == 0 && allowsCoreThreadTimeOut())
throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
this.keepAliveTime = unit.toNanos(time);
public void setMaximumPoolSize(int maximumPoolSize)
Sets the maximum allowed number of threads. This overrides any
value set in the constructor. If the new value is smaller than
the current value, excess existing threads will be
terminated when they next become idle.
if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
throw new IllegalArgumentException();
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
int extra = this.maximumPoolSize - maximumPoolSize;
this.maximumPoolSize = maximumPoolSize;
if (extra > 0 && poolSize > maximumPoolSize) {
try {
Iterator<Worker> it = workers.iterator();
while (it.hasNext() &&
extra > 0 &&
poolSize > maximumPoolSize) {
it.next().interruptIfIdle();
--extra;
}
} catch (SecurityException ignore) {
// Not an error; it is OK if the threads stay live
}
}
} finally {
mainLock.unlock();
}
public void setRejectedExecutionHandler(java.util.concurrent.RejectedExecutionHandler handler)
Sets a new handler for unexecutable tasks.
if (handler == null)
throw new NullPointerException();
this.handler = handler;
public void setThreadFactory(java.util.concurrent.ThreadFactory threadFactory)
Sets the thread factory used to create new threads.
if (threadFactory == null)
throw new NullPointerException();
this.threadFactory = threadFactory;
public void shutdown()
Initiates an orderly shutdown in which previously submitted
tasks are executed, but no new tasks will be
accepted. Invocation has no additional effect if already shut
down.
/*
* Conceptually, shutdown is just a matter of changing the
* runState to SHUTDOWN, and then interrupting any worker
* threads that might be blocked in getTask() to wake them up
* so they can exit. Then, if there happen not to be any
* threads or tasks, we can directly terminate pool via
* tryTerminate. Else, the last worker to leave the building
* turns off the lights (in workerDone).
*
* But this is made more delicate because we must cooperate
* with the security manager (if present), which may implement
* policies that make more sense for operations on Threads
* than they do for ThreadPools. This requires 3 steps:
*
* 1. Making sure caller has permission to shut down threads
* in general (see shutdownPerm).
*
* 2. If (1) passes, making sure the caller is allowed to
* modify each of our threads. This might not be true even if
* first check passed, if the SecurityManager treats some
* threads specially. If this check passes, then we can try
* to set runState.
*
* 3. If both (1) and (2) pass, dealing with inconsistent
* security managers that allow checkAccess but then throw a
* SecurityException when interrupt() is invoked. In this
* third case, because we have already set runState, we can
* only try to back out from the shutdown as cleanly as
* possible. Some workers may have been killed but we remain
* in non-shutdown state (which may entail tryTerminate from
* workerDone starting a new worker to maintain liveness.)
*/
SecurityManager security = System.getSecurityManager();
if (security != null)
security.checkPermission(shutdownPerm);
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
if (security != null) { // Check if caller can modify our threads
for (Worker w : workers)
security.checkAccess(w.thread);
}
int state = runState;
if (state < SHUTDOWN)
runState = SHUTDOWN;
try {
for (Worker w : workers) {
w.interruptIfIdle();
}
} catch (SecurityException se) { // Try to back out
runState = state;
// tryTerminate() here would be a no-op
throw se;
}
tryTerminate(); // Terminate now if pool and queue empty
} finally {
mainLock.unlock();
}
public java.util.List shutdownNow()
Attempts to stop all actively executing tasks, halts the
processing of waiting tasks, and returns a list of the tasks
that were awaiting execution. These tasks are drained (removed)
from the task queue upon return from this method.
There are no guarantees beyond best-effort attempts to stop
processing actively executing tasks. This implementation
cancels tasks via {@link Thread#interrupt}, so any task that
fails to respond to interrupts may never terminate.
/*
* shutdownNow differs from shutdown only in that
* 1. runState is set to STOP,
* 2. all worker threads are interrupted, not just the idle ones, and
* 3. the queue is drained and returned.
*/
SecurityManager security = System.getSecurityManager();
if (security != null)
security.checkPermission(shutdownPerm);
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
if (security != null) { // Check if caller can modify our threads
for (Worker w : workers)
security.checkAccess(w.thread);
}
int state = runState;
if (state < STOP)
runState = STOP;
try {
for (Worker w : workers) {
w.interruptNow();
}
} catch (SecurityException se) { // Try to back out
runState = state;
// tryTerminate() here would be a no-op
throw se;
}
List<Runnable> tasks = drainQueue();
tryTerminate(); // Terminate now if pool and queue empty
return tasks;
} finally {
mainLock.unlock();
}
protected void terminated()
Method invoked when the Executor has terminated. Default
implementation does nothing. Note: To properly nest multiple
overridings, subclasses should generally invoke
super.terminated within this method.
private void tryTerminate()
Transitions to TERMINATED state if either (SHUTDOWN and pool
and queue empty) or (STOP and pool empty), otherwise unless
stopped, ensuring that there is at least one live thread to
handle queued tasks.
This method is called from the three places in which
termination can occur: in workerDone on exit of the last thread
after pool has been shut down, or directly within calls to
shutdown or shutdownNow, if there are no live threads.
if (poolSize == 0) {
int state = runState;
if (state < STOP && !workQueue.isEmpty()) {
state = RUNNING; // disable termination check below
Thread t = addThread(null);
if (t != null)
t.start();
}
if (state == STOP || state == SHUTDOWN) {
runState = TERMINATED;
termination.signalAll();
terminated();
}
}
private boolean workerCanExit()
Check whether a worker thread that fails to get a task can
exit. We allow a worker thread to die if the pool is stopping,
or the queue is empty, or there is at least one thread to
handle possibly non-empty queue, even if core timeouts are
allowed.
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
boolean canExit;
try {
canExit = runState >= STOP ||
workQueue.isEmpty() ||
(allowCoreThreadTimeOut &&
poolSize > Math.max(1, corePoolSize));
} finally {
mainLock.unlock();
}
return canExit;
void workerDone(java.util.concurrent.ThreadPoolExecutor$Worker w)
Performs bookkeeping for an exiting worker thread.
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
completedTaskCount += w.completedTasks;
workers.remove(w);
if (--poolSize == 0)
tryTerminate();
} finally {
mainLock.unlock();
}