Rayon is a data-parallelism library that makes it easy to convert sequential computations into parallel ones.
It is lightweight and convenient for introducing parallelism into existing code. It guarantees data-race-free execution and takes advantage of parallelism when sensible, based on workload at runtime.
How to use Rayon
There are two ways to use Rayon (a short sketch combining both styles follows this list):

- High-level parallel constructs are the simplest way to use Rayon and also typically the most efficient.
  - Parallel iterators make it easy to convert a sequential iterator to execute in parallel.
    - The ParallelIterator trait defines general methods for all parallel iterators.
    - The IndexedParallelIterator trait adds methods for iterators that support random access.
  - The par_sort method sorts &mut [T] slices (or vectors) in parallel.
  - par_extend can be used to efficiently grow collections with items produced by a parallel iterator.
- Custom tasks let you divide your work into parallel tasks yourself.
  - join is used to subdivide a task into two pieces.
  - scope creates a scope within which you can create any number of parallel tasks.
  - ThreadPoolBuilder can be used to create your own thread pools or customize the global one.
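A minimal sketch showing both styles together (the helper name quick_stats and its inputs are invented for illustration; par_sort and join are the Rayon entry points listed above):

```rust
use rayon::prelude::*;

// Sort in parallel (high-level construct), then compute two independent
// reductions, potentially on different threads (custom task via `join`).
fn quick_stats(mut values: Vec<i64>) -> (i64, i64) {
    values.par_sort();
    rayon::join(
        || values.iter().sum::<i64>(),
        || values.iter().copied().max().unwrap_or(0),
    )
}
```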
Basic usage and the Rayon prelude
First, you will need to add rayon to your Cargo.toml.
Next, to use parallel iterators or the other high-level methods, you need to import several traits. Those traits are bundled into the module rayon::prelude. It is recommended that you import all of these traits at once by adding use rayon::prelude::* at the top of each module that uses Rayon methods.
These traits give you access to the par_iter method, which provides parallel implementations of many iterative functions such as map, for_each, filter, fold, and more.
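As a minimal sketch, assuming rayon has been added under [dependencies] in Cargo.toml (e.g. rayon = "1", meaning any 1.x release) and that the function name sum_of_squares is purely illustrative, switching from iter() to par_iter() is often the only change needed:

```rust
use rayon::prelude::*;

// Computes the sum of squares in parallel: `par_iter` splits the slice
// across the thread pool, and `map`/`sum` combine the partial results.
fn sum_of_squares(input: &[i32]) -> i32 {
    input.par_iter()      // instead of `input.iter()`
         .map(|&i| i * i)
         .sum()
}
```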
Crate Layout
Rayon extends many of the types found in the standard library with
parallel iterator implementations. The modules in the rayon
crate mirror std
itself: so, e.g., the option
module in
Rayon contains parallel iterators for the Option
type, which is
found in the option
module of std
. Similarly, the
collections
module in Rayon offers parallel iterator types for
the collections
from std
. You will rarely need to access
these submodules unless you need to name iterator types
explicitly.
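For example, here is a sketch of that rare case (the helper name sum_squares is invented for illustration): rayon::slice::Iter is the parallel iterator that par_iter() yields for a slice, and it lives in Rayon's slice module just as std::slice::Iter lives in std::slice.

```rust
use rayon::prelude::*;

// Naming the iterator type explicitly requires reaching into `rayon::slice`.
fn sum_squares(iter: rayon::slice::Iter<'_, i32>) -> i32 {
    iter.map(|&x| x * x).sum()
}

fn main() {
    let v = vec![1, 2, 3, 4];
    assert_eq!(sum_squares(v.as_slice().par_iter()), 30);
}
```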
Targets without threading
Rayon has limited support for targets without std
threading implementations.
See the rayon_core
documentation for more information about its global fallback.
Other questions?
See the Rayon FAQ.
Modules
- prelude: The Rayon prelude imports the various ParallelIterator traits. The intention is that one can include use rayon::prelude::* and have easy access to the various traits and methods you will need.
- range_inclusive: Parallel iterator types for inclusive ranges, the type for values created by a..=b expressions.
- string: Parallel iterator types for owned strings (String). You will rarely need to interact with it directly unless you have need to name one of the iterator types.

Structs
- BroadcastContext: Provides context to a closure called by broadcast.
- FnContext: Provides the calling context to a closure called by join_context.
- Scope: Represents a fork-join scope which can be used to spawn any number of tasks. See scope() for more information.
- ScopeFifo: Represents a fork-join scope whose spawns from the same thread run in relative FIFO order. See scope_fifo() for more information.
- ThreadBuilder: Thread builder used for customization via ThreadPoolBuilder::spawn_handler.
- ThreadPoolBuilder: Used to create a new ThreadPool or to configure the global rayon thread pool.

Enums
- Yield: Result of yield_now() or yield_local().

Functions
- broadcast: Executes op within every thread in the current threadpool. If this is called from a non-Rayon thread, it will execute in the global threadpool. Any attempts to use join, scope, or parallel iterators will then operate within that threadpool. When the call has completed on each thread, returns a vector containing all of their return values.
- current_thread_index: If called from a Rayon worker thread, returns the index of that thread within its current pool; if not called from a Rayon thread, returns None.
- in_place_scope: Creates a "fork-join" scope s and invokes the closure with a reference to s. This closure can then spawn asynchronous tasks into s. Those tasks may run asynchronously with respect to the closure; they may themselves spawn additional tasks into s. When the closure returns, it will block until all tasks that have been spawned into s complete.
- in_place_scope_fifo: Creates a "fork-join" scope s with FIFO order, and invokes the closure with a reference to s. This closure can then spawn asynchronous tasks into s. Those tasks may run asynchronously with respect to the closure; they may themselves spawn additional tasks into s. When the closure returns, it will block until all tasks that have been spawned into s complete.
- join_context: Identical to join, except that the closures have a parameter that provides context for the way the closure has been called, especially indicating whether they're executing on a different thread than where join_context was called. This will occur if the second job is stolen by a different thread, or if join_context was called from outside the thread pool to begin with.
- scope: Creates a "fork-join" scope s and invokes the closure with a reference to s. This closure can then spawn asynchronous tasks into s. Those tasks may run asynchronously with respect to the closure; they may themselves spawn additional tasks into s. When the closure returns, it will block until all tasks that have been spawned into s complete. (A short sketch follows this list.)
- scope_fifo: Creates a "fork-join" scope s with FIFO order, and invokes the closure with a reference to s. This closure can then spawn asynchronous tasks into s. Those tasks may run asynchronously with respect to the closure; they may themselves spawn additional tasks into s. When the closure returns, it will block until all tasks that have been spawned into s complete.
- spawn: Spawns an asynchronous task in Rayon's global scope. Just like a standard thread, this task is not tied to the current stack frame, and hence it cannot hold any references other than those with 'static lifetime. If you want to spawn a task that references stack data, use the scope() function to create a scope.
- spawn_broadcast: Spawns an asynchronous task on every thread of the thread pool, in the implicit global scope; the task cannot hold references onto the stack (you will likely need a move closure).
- spawn_fifo: Like spawn, but with FIFO ordering among FIFO-spawned tasks. The task is not tied to the current stack frame, and hence it cannot hold any references other than those with 'static lifetime. If you want to spawn a task that references stack data, use the scope_fifo() function to create a scope.
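A minimal sketch of the scope family (the vector and the closure bodies are invented for illustration): unlike spawn, tasks spawned into a scope may borrow data from the enclosing stack frame, because scope blocks until they all complete.

```rust
fn main() {
    let mut counts = vec![0u64; 4];

    // `rayon::scope` returns only after every task spawned into `s` has
    // finished, so the tasks may safely borrow `counts` from this frame.
    rayon::scope(|s| {
        for (i, slot) in counts.iter_mut().enumerate() {
            s.spawn(move |_| {
                // Each task holds a disjoint `&mut u64`, so there is no data race.
                *slot = (i as u64 + 1) * 10;
            });
        }
    });

    assert_eq!(counts, vec![10, 20, 30, 40]);
}
```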