Struct rayon_core::ThreadPool
pub struct ThreadPool { /* private fields */ }
Represents a user-created thread-pool.
Use a ThreadPoolBuilder to specify the number and/or names of threads in the pool. After calling ThreadPoolBuilder::build(), you can then execute functions explicitly within this ThreadPool using ThreadPool::install(). By contrast, top-level rayon functions (like join()) will execute implicitly within the current thread-pool.
Creating a ThreadPool
let pool = rayon::ThreadPoolBuilder::new().num_threads(8).build().unwrap();
install() executes a closure in one of the ThreadPool's threads. In addition, any other rayon operations called inside of install() will also execute in the context of the ThreadPool.
When the ThreadPool is dropped, that's a signal for the threads it manages to terminate; they will complete executing any remaining work that you have spawned, and then automatically terminate.
Implementations
impl ThreadPool
pub fn new(configuration: Configuration) -> Result<ThreadPool, Box<dyn Error>>
Deprecated in favor of ThreadPoolBuilder::build.
pub fn install<OP, R>(&self, op: OP) -> R
where
    OP: FnOnce() -> R + Send,
    R: Send,
Executes op within the threadpool. Any attempts to use join, scope, or parallel iterators will then operate within that threadpool.
Warning: thread-local data
Because op is executing within the Rayon thread-pool, thread-local data from the current thread will not be accessible.
Warning: execution order
If the current thread is part of a different thread pool, it will try to keep busy while the op completes in its target pool, similar to calling ThreadPool::yield_now() in a loop. Therefore, it may potentially schedule other tasks to run on the current thread in the meantime. For example:
fn main() {
    rayon::ThreadPoolBuilder::new().num_threads(1).build_global().unwrap();

    let pool = rayon_core::ThreadPoolBuilder::default().build().unwrap();

    let do_it = || {
        print!("one ");
        pool.install(|| {});
        print!("two ");
    };

    rayon::join(|| do_it(), || do_it());
}
Since we configured just one thread in the global pool, one might expect do_it() to run sequentially, producing:
one two one two
However, each call to install() yields implicitly, allowing rayon to run multiple instances of do_it() concurrently on the single, global thread. The following output would be equally valid:
one one two two
Panics
If op should panic, that panic will be propagated.
Using install()
fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(8).build().unwrap();
    let n = pool.install(|| fib(20));
    println!("{}", n);
}

fn fib(n: usize) -> usize {
    if n == 0 || n == 1 {
        return n;
    }
    let (a, b) = rayon::join(|| fib(n - 1), || fib(n - 2)); // runs inside of `pool`
    a + b
}
pub fn broadcast<OP, R>(&self, op: OP) -> Vec<R>
where
    OP: Fn(BroadcastContext<'_>) -> R + Sync,
    R: Send,
Executes op within every thread in the threadpool. Any attempts to use join, scope, or parallel iterators will then operate within that threadpool.
Broadcasts are executed on each thread after they have exhausted their local work queue, before they attempt work-stealing from other threads. The goal of that strategy is to run everywhere in a timely manner without being too disruptive to current work. There may be alternative broadcast styles added in the future for more or less aggressive injection, if the need arises.
Warning: thread-local data
Because op is executing within the Rayon thread-pool, thread-local data from the current thread will not be accessible.
Panics
If op should panic on one or more threads, exactly one panic will be propagated, only after all threads have completed (or panicked) their own op.
Examples
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(5).build().unwrap();

    // The argument gives context, including the index of each thread.
    let v: Vec<usize> = pool.broadcast(|ctx| ctx.index() * ctx.index());
    assert_eq!(v, &[0, 1, 4, 9, 16]);

    // The closure can reference the local stack
    let count = AtomicUsize::new(0);
    pool.broadcast(|_| count.fetch_add(1, Ordering::Relaxed));
    assert_eq!(count.into_inner(), 5);
}
pub fn current_num_threads(&self) -> usize
Returns the (current) number of threads in the thread pool.
Future compatibility note
Note that unless this thread-pool was created with a ThreadPoolBuilder that specifies the number of threads, then this number may vary over time in future versions (see the num_threads() method for details).
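For illustration, a minimal sketch: when the pool is built with an explicit thread count, the reported number matches it.
fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(4).build().unwrap();
    // Built with an explicit count, so the reported number is fixed at 4.
    assert_eq!(pool.current_num_threads(), 4);
}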
pub fn current_thread_index(&self) -> Option<usize>
If called from a Rayon worker thread in this thread-pool, returns the index of that thread; if not called from a Rayon thread, or called from a Rayon thread that belongs to a different thread-pool, returns None.
The index for a given thread will not change over the thread’s lifetime. However, multiple threads may share the same index if they are in distinct thread-pools.
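A minimal sketch of both cases, assuming the calling thread is not itself a Rayon worker:
fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    // Called from a plain thread: no index.
    assert_eq!(pool.current_thread_index(), None);
    // Called from inside the pool: one of the worker indices.
    let idx = pool.install(|| pool.current_thread_index());
    assert!(matches!(idx, Some(i) if i < 2));
}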
Future compatibility note
Currently, every thread-pool (including the global thread-pool) has a fixed number of threads, but this may change in future Rayon versions (see the num_threads() method for details). In that case, the index for a thread would not change during its lifetime, but thread indices may wind up being reused if threads are terminated and restarted.
pub fn current_thread_has_pending_tasks(&self) -> Option<bool>
Returns true if the current worker thread currently has “local tasks” pending. This can be useful as part of a heuristic for deciding whether to spawn a new task or execute code on the current thread, particularly in breadth-first schedulers. However, keep in mind that this is an inherently racy check, as other worker threads may be actively “stealing” tasks from our local deque.
Background: Rayon uses a work-stealing scheduler. The key idea is that each thread has its own deque of tasks. Whenever a new task is spawned – whether through join(), Scope::spawn(), or some other means – that new task is pushed onto the thread's local deque. Worker threads have a preference for executing their own tasks; if however they run out of tasks, they will go try to "steal" tasks from other threads. This function therefore has an inherent race with other active worker threads, which may be removing items from the local deque.
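A minimal sketch of the return values (assuming the caller is not a Rayon worker); the Some case only indicates the call ran on one of this pool's workers, and the boolean itself is a racy snapshot:
fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    // Called from outside the pool: there is no local deque to inspect.
    assert_eq!(pool.current_thread_has_pending_tasks(), None);
    // Called from a worker of this pool: Some(snapshot of the local deque).
    let pending = pool.install(|| pool.current_thread_has_pending_tasks());
    assert!(pending.is_some());
}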
pub fn join<A, B, RA, RB>(&self, oper_a: A, oper_b: B) -> (RA, RB)
where
    A: FnOnce() -> RA + Send,
    B: FnOnce() -> RB + Send,
    RA: Send,
    RB: Send,
Execute oper_a and oper_b in the thread-pool and return the results. Equivalent to self.install(|| join(oper_a, oper_b)).
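For illustration, a minimal sketch:
fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(4).build().unwrap();
    // Both closures run inside `pool`; their results come back as a tuple.
    let (sum, product) = pool.join(
        || (1..=10).sum::<u32>(),
        || (1..=5).product::<u32>(),
    );
    assert_eq!((sum, product), (55, 120));
}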
pub fn scope<'scope, OP, R>(&self, op: OP) -> R
where
    OP: FnOnce(&Scope<'scope>) -> R + Send,
    R: Send,
Creates a scope that executes within this thread-pool. Equivalent to self.install(|| scope(...)).
See also: the scope() function.
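A minimal sketch for illustration; scoped spawns may borrow from the enclosing stack frame:
fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(4).build().unwrap();
    let mut results = vec![0; 4];
    pool.scope(|s| {
        // Each spawned task runs in `pool` and borrows from this stack frame.
        for (i, slot) in results.iter_mut().enumerate() {
            s.spawn(move |_| *slot = i * 2);
        }
    });
    assert_eq!(results, [0, 2, 4, 6]);
}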
pub fn scope_fifo<'scope, OP, R>(&self, op: OP) -> R
where
    OP: FnOnce(&ScopeFifo<'scope>) -> R + Send,
    R: Send,
Creates a scope that executes within this thread-pool. Spawns from the same thread are prioritized in relative FIFO order. Equivalent to self.install(|| scope_fifo(...)).
See also: the scope_fifo() function.
pub fn in_place_scope<'scope, OP, R>(&self, op: OP) -> R
where
    OP: FnOnce(&Scope<'scope>) -> R,
Creates a scope that spawns work into this thread-pool.
See also: the in_place_scope() function.
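A minimal sketch for illustration: the scope closure itself runs on the calling thread and has no Send bound, so it may hold non-Send data such as an Rc, while the spawned work runs inside the pool:
use std::rc::Rc;

fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    let local = Rc::new(21); // non-Send data, usable because the scope closure stays here
    let mut doubled = 0;
    pool.in_place_scope(|s| {
        let value = *local;     // read the non-Send data on the calling thread
        let out = &mut doubled; // borrow that the spawned task writes through
        s.spawn(move |_| *out = value * 2);
    });
    assert_eq!(doubled, 42);
}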
pub fn in_place_scope_fifo<'scope, OP, R>(&self, op: OP) -> R
where
    OP: FnOnce(&ScopeFifo<'scope>) -> R,
Creates a scope that spawns work into this thread-pool in FIFO order.
See also: the in_place_scope_fifo() function.
pub fn spawn<OP>(&self, op: OP)
where
    OP: FnOnce() + Send + 'static,
Spawns an asynchronous task in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame – therefore, it cannot capture any references onto the stack (you will likely need a move closure).
See also: the spawn() function defined on scopes.
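A minimal sketch for illustration; a channel is used here to wait for the detached task:
use std::sync::mpsc::channel;

fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    let (tx, rx) = channel();
    // The task may outlive this stack frame, so it owns everything it uses
    // (hence the `move` closure and the owned sender).
    pool.spawn(move || {
        tx.send(1 + 1).unwrap();
    });
    assert_eq!(rx.recv().unwrap(), 2);
}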
pub fn spawn_fifo<OP>(&self, op: OP)
where
    OP: FnOnce() + Send + 'static,
Spawns an asynchronous task in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame – therefore, it cannot capture any references onto the stack (you will likely need a move closure).
See also: the spawn_fifo() function defined on scopes.
pub fn spawn_broadcast<OP>(&self, op: OP)
where
    OP: Fn(BroadcastContext<'_>) + Send + Sync + 'static,
Spawns an asynchronous task on every thread in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame – therefore, it cannot capture any references onto the stack (you will likely need a move closure).
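A minimal sketch for illustration; because the spawn is asynchronous, a Barrier is used here just to wait until every worker has run its copy of the task:
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Barrier};

fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(3).build().unwrap();
    let count = Arc::new(AtomicUsize::new(0));
    // One barrier slot per worker plus one for the main thread.
    let barrier = Arc::new(Barrier::new(4));
    let (count2, barrier2) = (Arc::clone(&count), Arc::clone(&barrier));
    pool.spawn_broadcast(move |_ctx| {
        count2.fetch_add(1, Ordering::Relaxed);
        barrier2.wait();
    });
    barrier.wait();
    assert_eq!(count.load(Ordering::Relaxed), 3);
}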
pub fn yield_now(&self) -> Option<Yield>
Cooperatively yields execution to Rayon.
This is similar to the general yield_now(), but only if the current thread is part of this thread pool.
Returns Some(Yield::Executed) if anything was executed, Some(Yield::Idle) if nothing was available, or None if the current thread is not part of this pool.
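A minimal sketch of the None vs. Some cases, assuming the caller is not itself a Rayon worker:
fn main() {
    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    // Not a worker of this pool, so there is nothing to yield to.
    assert!(pool.yield_now().is_none());
    // Inside the pool, the worker reports whether it found any work to run.
    let result = pool.install(|| pool.yield_now());
    assert!(result.is_some());
}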
pub fn yield_local(&self) -> Option<Yield>
Cooperatively yields execution to local Rayon work.
This is similar to the general yield_local(), but only if the current thread is part of this thread pool.
Returns Some(Yield::Executed) if anything was executed, Some(Yield::Idle) if nothing was available, or None if the current thread is not part of this pool.