Struct rayon_core::ThreadPoolBuilder
pub struct ThreadPoolBuilder<S = DefaultSpawn> { /* private fields */ }
Used to create a new ThreadPool or to configure the global rayon thread pool.
Creating a ThreadPool
The following creates a thread pool with 22 threads.
let pool = rayon::ThreadPoolBuilder::new().num_threads(22).build().unwrap();
To instead configure the global thread pool, use build_global():
rayon::ThreadPoolBuilder::new().num_threads(22).build_global().unwrap();
Implementations
impl<S> ThreadPoolBuilder<S> where S: ThreadSpawn
Note: the S: ThreadSpawn constraint is an internal implementation detail for the default spawn and those set by spawn_handler.
pub fn build(self) -> Result<ThreadPool, ThreadPoolBuildError>
Creates a new ThreadPool initialized using this configuration.
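For example, a minimal sketch that builds a small pool and runs a parallel sum inside it via install (the thread count and workload here are illustrative):
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
    use rayon::prelude::*;
    // Build a pool with 2 threads from this builder.
    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build()?;
    // `install` executes the closure inside the pool.
    let sum: i32 = pool.install(|| (1..=100).into_par_iter().sum());
    assert_eq!(sum, 5050);
    Ok(())
}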
pub fn build_global(self) -> Result<(), ThreadPoolBuildError>
Initializes the global thread pool. This initialization is optional. If you do not call this function, the thread pool will be automatically initialized with the default configuration. Calling build_global is not recommended, except in two scenarios:
- You wish to change the default configuration.
- You are running a benchmark, in which case initializing may yield slightly more consistent results, since the worker threads will already be ready to go even in the first iteration. But this cost is minimal.
Initialization of the global thread pool happens exactly once. Once started, the configuration cannot be changed. Therefore, if you call build_global a second time, it will return an error. An Ok result indicates that this is the first initialization of the thread pool.
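A sketch of the one-time initialization: the first call configures the global pool, and a second call returns an error (the thread counts are arbitrary):
fn main() {
    // The first initialization of the global pool succeeds.
    assert!(rayon::ThreadPoolBuilder::new().num_threads(4).build_global().is_ok());
    // The configuration cannot be changed afterwards, so a second call fails.
    assert!(rayon::ThreadPoolBuilder::new().num_threads(8).build_global().is_err());
}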
impl ThreadPoolBuilder
pub fn build_scoped<W, F, R>(self, wrapper: W, with_pool: F) -> Result<R, ThreadPoolBuildError> where W: Fn(ThreadBuilder) + Sync, F: FnOnce(&ThreadPool) -> R
Creates a scoped ThreadPool initialized using this configuration.
This is a convenience function for building a pool using std::thread::scope to spawn threads in a spawn_handler.
The threads in this pool will start by calling wrapper, which should do initialization and continue by calling ThreadBuilder::run().
Examples
A scoped pool may be useful in combination with scoped thread-local variables.
scoped_tls::scoped_thread_local!(static POOL_DATA: Vec<i32>);
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
let pool_data = vec![1, 2, 3];
// We haven't assigned any TLS data yet.
assert!(!POOL_DATA.is_set());
rayon::ThreadPoolBuilder::new()
.build_scoped(
// Borrow `pool_data` in TLS for each thread.
|thread| POOL_DATA.set(&pool_data, || thread.run()),
// Do some work that needs the TLS data.
|pool| pool.install(|| assert!(POOL_DATA.is_set())),
)?;
// Once we've returned, `pool_data` is no longer borrowed.
drop(pool_data);
Ok(())
}
impl<S> ThreadPoolBuilder<S>
pub fn spawn_handler<F>(self, spawn: F) -> ThreadPoolBuilder<CustomSpawn<F>> where F: FnMut(ThreadBuilder) -> Result<()>
Sets a custom function for spawning threads.
Note that the threads will not exit until after the pool is dropped. It
is up to the caller to wait for thread termination if that is important
for any invariants. For instance, threads created in std::thread::scope
will be joined before that scope returns, and this will block indefinitely
if the pool is leaked. Furthermore, the global thread pool doesn’t terminate
until the entire process exits!
Examples
A minimal spawn handler just needs to call run() from an independent thread.
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
let pool = rayon::ThreadPoolBuilder::new()
.spawn_handler(|thread| {
std::thread::spawn(|| thread.run());
Ok(())
})
.build()?;
pool.install(|| println!("Hello from my custom thread!"));
Ok(())
}
The default spawn handler sets the name and stack size if given, and propagates any errors from the thread builder.
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
let pool = rayon::ThreadPoolBuilder::new()
.spawn_handler(|thread| {
let mut b = std::thread::Builder::new();
if let Some(name) = thread.name() {
b = b.name(name.to_owned());
}
if let Some(stack_size) = thread.stack_size() {
b = b.stack_size(stack_size);
}
b.spawn(|| thread.run())?;
Ok(())
})
.build()?;
pool.install(|| println!("Hello from my fully custom thread!"));
Ok(())
}
This can also be used for a pool of scoped threads like crossbeam::scope, or std::thread::scope introduced in Rust 1.63, which is encapsulated in build_scoped.
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
std::thread::scope(|scope| {
let pool = rayon::ThreadPoolBuilder::new()
.spawn_handler(|thread| {
let mut builder = std::thread::Builder::new();
if let Some(name) = thread.name() {
builder = builder.name(name.to_string());
}
if let Some(size) = thread.stack_size() {
builder = builder.stack_size(size);
}
builder.spawn_scoped(scope, || {
// Add any scoped initialization here, then run!
thread.run()
})?;
Ok(())
})
.build()?;
pool.install(|| println!("Hello from my custom scoped thread!"));
Ok(())
})
}
pub fn thread_name<F>(self, closure: F) -> Self where F: FnMut(usize) -> String + 'static
Sets a closure which takes a thread index and returns the thread’s name.
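For instance, a sketch that names each worker after its index, which can make workers easier to spot in debuggers and logs (the name format is arbitrary):
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
    let pool = rayon::ThreadPoolBuilder::new()
        // Each worker is named after its index, e.g. "rayon-worker-0".
        .thread_name(|index| format!("rayon-worker-{index}"))
        .num_threads(2)
        .build()?;
    pool.install(|| {
        // The worker's name comes from the closure above.
        println!("running on {:?}", std::thread::current().name());
    });
    Ok(())
}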
pub fn num_threads(self, num_threads: usize) -> Self
Sets the number of threads to be used in the rayon threadpool.
If you specify a non-zero number of threads using this function, then the resulting thread-pools are guaranteed to start at most this number of threads.
If num_threads is 0, or you do not call this function, then the Rayon runtime will select the number of threads automatically. At present, this is based on the RAYON_NUM_THREADS environment variable (if set), or the number of logical CPUs (otherwise). In the future, however, the default behavior may change to dynamically add or remove threads as needed.
Future compatibility warning: Given the default behavior may change in the future, if you wish to rely on a fixed number of threads, you should use this function to specify that number. To reproduce the current default behavior, you may wish to use std::thread::available_parallelism to query the number of CPUs dynamically.
Old environment variable: RAYON_NUM_THREADS is a one-to-one replacement of the now deprecated RAYON_RS_NUM_CPUS environment variable. If both variables are specified, RAYON_NUM_THREADS will be preferred.
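To pin the pool to the current number of logical CPUs explicitly, one option is to query std::thread::available_parallelism yourself (a sketch; the fallback value of 1 is an arbitrary choice):
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
    // Query the number of logical CPUs, falling back to 1 if the query fails.
    let threads = std::thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let pool = rayon::ThreadPoolBuilder::new().num_threads(threads).build()?;
    assert_eq!(pool.current_num_threads(), threads);
    Ok(())
}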
pub fn use_current_thread(self) -> Self
Use the current thread as one of the threads in the pool.
The current thread is guaranteed to be at index 0, and since the thread is not managed by rayon, the spawn and exit handlers do not run for that thread.
Note that the current thread won’t run the main work-stealing loop, so jobs spawned into the thread-pool will generally not be picked up automatically by this thread unless you yield to rayon in some way, like via yield_now(), yield_local(), or scope().
Local thread-pools
Using this in a local thread-pool means the registry will be leaked. In future versions there might be a way of cleaning up the current-thread state.
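A minimal sketch of the calling thread joining the pool: after use_current_thread, the caller is registered at index 0, so install runs the closure directly on it (the thread count is illustrative):
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(2)
        // The calling thread becomes the pool member at index 0.
        .use_current_thread()
        .build()?;
    pool.install(|| {
        // The current thread is already part of this pool, so the closure
        // runs on it directly, at index 0.
        assert_eq!(rayon::current_thread_index(), Some(0));
    });
    Ok(())
}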
pub fn panic_handler<H>(self, panic_handler: H) -> Self where H: Fn(Box<dyn Any + Send>) + Send + Sync + 'static
Normally, whenever Rayon catches a panic, it tries to propagate it to someplace sensible, to try and reflect the semantics of sequential execution. But in some cases, particularly with the spawn() APIs, there is no obvious place where we should propagate the panic to. In that case, this panic handler is invoked.
If no panic handler is set, the default is to abort the process, under the principle that panics should not go unobserved.
If the panic handler itself panics, this will abort the process. To prevent this, wrap the body of your panic handler in a call to std::panic::catch_unwind().
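As a sketch, a handler that logs the panic payload from spawn()ed tasks instead of aborting (the message formatting and the sleep at the end are only there to keep the example self-contained):
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
    let pool = rayon::ThreadPoolBuilder::new()
        // Log the panic payload instead of aborting the process.
        .panic_handler(|payload| {
            let msg = payload.downcast_ref::<&str>().copied().unwrap_or("<non-string panic>");
            eprintln!("worker panicked: {msg}");
        })
        .build()?;
    // A panic in `spawn` has no obvious place to propagate to,
    // so it is routed to the handler above.
    pool.spawn(|| panic!("oops"));
    // Give the spawned job a moment to run before the example exits.
    std::thread::sleep(std::time::Duration::from_millis(100));
    Ok(())
}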
pub fn stack_size(self, stack_size: usize) -> Self
Sets the stack size of the worker threads.
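For example, a sketch giving each worker a larger stack (the 8 MiB value is arbitrary):
let pool = rayon::ThreadPoolBuilder::new()
    .stack_size(8 * 1024 * 1024) // 8 MiB per worker thread
    .build()
    .unwrap();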
pub fn breadth_first(self) -> Self
👎 Deprecated: use scope_fifo and spawn_fifo for similar effect
(DEPRECATED) Suggest to worker threads that they execute spawned jobs in a “breadth-first” fashion.
Typically, when a worker thread is idle or blocked, it will attempt to execute the job from the top of its local deque of work (i.e., the job most recently spawned). If this flag is set to true, however, workers will prefer to execute in a breadth-first fashion – that is, they will search for jobs at the bottom of their local deque. (At present, workers always steal from the bottom of other workers’ deques, regardless of the setting of this flag.)
If you think of the tasks as a tree, where a parent task spawns its children in the tree, then this flag loosely corresponds to doing a breadth-first traversal of the tree, whereas the default would be to do a depth-first traversal.
Note that this is an “execution hint”. Rayon’s task execution is highly dynamic and the precise order in which independent tasks are executed is not intended to be guaranteed.
This breadth_first() method is now deprecated per RFC #1, and in the future its effect may be removed. Consider using scope_fifo() for a similar effect.
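As the deprecation note suggests, scope_fifo() and spawn_fifo() give FIFO-style prioritization within a scope; a minimal sketch (the actual execution order is still not guaranteed):
fn main() {
    rayon::scope_fifo(|s| {
        // Jobs spawned with `spawn_fifo` are prioritized in the order they
        // were spawned, rather than most-recent-first.
        s.spawn_fifo(|_| println!("spawned first"));
        s.spawn_fifo(|_| println!("spawned second"));
    });
}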
pub fn start_handler<H>(self, start_handler: H) -> Self where H: Fn(usize) + Send + Sync + 'static
Sets a callback to be invoked on thread start.
The closure is passed the index of the thread on which it is invoked. Note that this same closure may be invoked multiple times in parallel. If this closure panics, the panic will be passed to the panic handler. If that handler returns, then startup will continue normally.
pub fn exit_handler<H>(self, exit_handler: H) -> Self where H: Fn(usize) + Send + Sync + 'static
Sets a callback to be invoked on thread exit.
The closure is passed the index of the thread on which it is invoked. Note that this same closure may be invoked multiple times in parallel. If this closure panics, the panic will be passed to the panic handler. If that handler returns, then the thread will exit normally.
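A sketch combining this with start_handler to log each worker's lifecycle (the messages and thread count are illustrative; the workers shut down asynchronously after the pool is dropped):
fn main() -> Result<(), rayon::ThreadPoolBuildError> {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(2)
        // Invoked on each worker thread as it starts.
        .start_handler(|index| eprintln!("worker {index} starting"))
        // Invoked on each worker thread as it exits.
        .exit_handler(|index| eprintln!("worker {index} exiting"))
        .build()?;
    pool.install(|| println!("pool is running"));
    // Dropping the pool lets the workers shut down, triggering the exit handler.
    drop(pool);
    Ok(())
}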