Document Number: | |
---|---|
Date: | |
Revises: | |
Editor: | NVIDIA Corporation |
Note: this is an early draft. It’s known to be incomplete and incorrect, and it has lots of bad formatting.
This Technical Specification describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to invoke algorithms with parallel execution. The algorithms described by this Technical Specification are realizable across a broad class of computer architectures.
This Technical Specification is non-normative. Some of the functionality described by this Technical Specification may be considered for standardization in a future version of C++, but it is not currently part of any C++ standard. Some of the functionality in this Technical Specification may never be standardized, and other functionality may be standardized in a substantially changed form.
The goal of this Technical Specification is to build upon widespread existing practice for parallelism in the C++ standard algorithms library. It gives advice on extensions to those vendors who wish to provide them.
The following referenced document is indispensable for the application of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.
ISO/IEC 14882:2017 is herein called the C++ Standard. The library described in ISO/IEC 14882:2017 clauses 20-33 is herein called the C++ Standard Library. The C++ Standard Library components described in ISO/IEC 14882:2017 clauses 28, 29.8 and 23.10.10 are herein called the C++ Standard Algorithms Library.
Unless otherwise specified, the whole of the C++ Standard's Library introduction (C++17 §20) is included into this Technical Specification by reference.
Since the extensions described in this Technical Specification are experimental and not part of the C++ Standard Library, they should not be declared directly within namespace std. Unless otherwise specified, all components described in this Technical Specification are declared in namespace std::experimental::parallelism_v2.
[ Note: Once standardized, the components described by this Technical Specification are expected to be promoted to namespace std. — end note ]
Unless otherwise specified, references to entities described in this Technical Specification are assumed to be qualified with std::experimental::parallelism_v2, and references to entities described in the C++ Standard Library are assumed to be qualified with std::.
Extensions that are expected to eventually be added to an existing header <meow> are provided inside the <experimental/meow> header, which shall include the standard contents of <meow> as if by

```cpp
#include <meow>
```
An implementation that provides support for this Technical Specification shall define the feature test macro(s) in Table 1.
Doc. No. | Title | Primary Section | Macro Name | Value | Header |
---|---|---|---|---|---|
P0155R0 | Task Block R5 | | __cpp_lib_experimental_parallel_task_block | 201711 | <experimental/exception_list> <experimental/task_block> |
P0076R4 | Vector and Wavefront Policies | | __cpp_lib_experimental_execution_vector_policy | 201711 | <experimental/algorithm> <experimental/execution> |
P0075R2 | Template Library for Parallel For Loops | | __cpp_lib_experimental_parallel_for_loop | 201711 | <experimental/algorithm> |
Header <experimental/execution> synopsis

```cpp
#include <execution>

namespace std::experimental {
inline namespace parallelism_v2 {
namespace execution {

  // 5.2, Unsequenced execution policy
  class unsequenced_policy;

  // 5.3, Vector execution policy
  class vector_policy;

  // 5.4, Execution policy objects
  inline constexpr unsequenced_policy unseq{ unspecified };
  inline constexpr vector_policy vec{ unspecified };
}
}
}
```
class unsequenced_policy{ unspecified };
The class unsequenced_policy
is an execution policy type used as a unique type to disambiguate
parallel algorithm overloading and indicate that a parallel algorithm's
execution may be vectorized, e.g., executed on a single thread using
instructions that operate on multiple data items.
The invocations of element access functions in parallel algorithms invoked with an execution policy of type unsequenced_policy
are permitted to execute in an unordered fashion in the calling thread,
unsequenced with respect to one another within the calling thread.
During the execution of a parallel algorithm with the experimental::execution::unsequenced_policy policy, if the invocation of an element access function exits via an uncaught exception, terminate() shall be called.
class vector_policy{ unspecified };
The class vector_policy
is an execution policy type used as a unique type to disambiguate
parallel algorithm overloading and indicate that a parallel algorithm's
execution may be vectorized. Additionally, such vectorization will result in an execution that respects the sequencing constraints of wavefront application ([parallel.alg.general.wavefront]). [ Note: The implementation thus provides stronger guarantees for vector_policy than for unsequenced_policy, for example. — end note ]
The invocations of element access functions in parallel algorithms invoked with an execution policy of type vector_policy are permitted to execute in an unordered fashion in the calling thread, unsequenced with respect to one another within the calling thread, subject to the sequencing constraints of wavefront application ([parallel.alg.general.wavefront]) for the last argument to for_loop, for_loop_n, for_loop_strided, or for_loop_strided_n.
During the execution of a parallel algorithm with the experimental::execution::vector_policy policy, if the invocation of an element access function exits via an uncaught exception, terminate() shall be called.
```cpp
inline constexpr execution::unsequenced_policy unseq{};
inline constexpr execution::vector_policy vec{};
```
The header <experimental/execution>
declares a global object associated with each type of execution policy defined by this Technical Specification.
Header <experimental/exception_list> synopsis

```cpp
namespace std::experimental {
inline namespace parallelism_v2 {

  class exception_list : public exception {
  public:
    using iterator = unspecified;

    size_t size() const noexcept;
    iterator begin() const noexcept;
    iterator end() const noexcept;

    const char* what() const noexcept override;
  };
}
}
```
The class exception_list
owns a sequence of exception_ptr
objects.
The type exception_list::iterator shall fulfill the requirements of ForwardIterator.
size_t size() const noexcept;

Returns: The number of exception_ptr objects contained within the exception_list.
iterator begin() const noexcept;

Returns: An iterator referring to the first exception_ptr object contained within the exception_list.
iterator end() const noexcept;

Returns: An iterator that is past the end of the owned sequence.
const char* what() const noexcept override;

Returns: An implementation-defined NTBS.
For the purposes of this section, an evaluation is a value computation or side effect of an expression, or an execution of a statement. Initialization of a temporary object is considered a subexpression of the expression that necessitates the temporary object.
An evaluation A contains an evaluation B if:
An evaluation A is ordered before an evaluation B if A is deterministically
sequenced before B.
For an evaluation A ordered before an evaluation B, both contained in the same invocation of an element access function, A is a vertical antecedent of B if:
- a goto statement or asm declaration that jumps to a statement outside of S, or
- a switch statement executed within S that transfers control into a substatement of a nested selection or iteration statement, or
- a throw, or
- a longjmp.
In the following, Xi and Xj refer to evaluations of the same expression
or statement contained in the application of an element access function corresponding to the ith and
jth elements of the input sequence.
Horizontally matched is an equivalence relationship between two evaluations of the same expression. An evaluation Bi is horizontally matched with an evaluation Bj if:
Let f be a function called for each argument list in a sequence of argument lists.
Wavefront application of f requires that evaluation Ai be sequenced before evaluation Bj if i < j and:
Header <experimental/algorithm> synopsis

```cpp
#include <algorithm>

namespace std::experimental {
inline namespace parallelism_v2 {
namespace execution {

  // 7.2.5, No vec
  template<class F>
  auto no_vec(F&& f) noexcept -> decltype(std::forward<F>(f)());

  // 7.2.6, Ordered update class
  template<class T> class ordered_update_t;

  // 7.2.7, Ordered update function template
  template<class T>
  ordered_update_t<T> ordered_update(T& ref) noexcept;
}

  // Exposition only: Suppress template argument deduction.
  template<class T> struct no_deduce { using type = T; };
  template<class T> using no_deduce_t = typename no_deduce<T>::type;

  // 7.2.2, Support for reductions
  template<class T, class BinaryOperation>
  unspecified reduction(T& var, const T& identity, BinaryOperation combiner);
  template<class T> unspecified reduction_plus(T& var);
  template<class T> unspecified reduction_multiplies(T& var);
  template<class T> unspecified reduction_bit_and(T& var);
  template<class T> unspecified reduction_bit_or(T& var);
  template<class T> unspecified reduction_bit_xor(T& var);
  template<class T> unspecified reduction_min(T& var);
  template<class T> unspecified reduction_max(T& var);

  // 7.2.3, Support for inductions
  template<class T> unspecified induction(T&& var);
  template<class T, class S> unspecified induction(T&& var, S stride);

  // 7.2.4, for_loop
  template<class I, class... Rest>
  void for_loop(no_deduce_t<I> start, I finish, Rest&&... rest);
  template<class ExecutionPolicy, class I, class... Rest>
  void for_loop(ExecutionPolicy&& exec, no_deduce_t<I> start, I finish, Rest&&... rest);

  template<class I, class S, class... Rest>
  void for_loop_strided(no_deduce_t<I> start, I finish, S stride, Rest&&... rest);
  template<class ExecutionPolicy, class I, class S, class... Rest>
  void for_loop_strided(ExecutionPolicy&& exec, no_deduce_t<I> start, I finish, S stride, Rest&&... rest);

  template<class I, class Size, class... Rest>
  void for_loop_n(I start, Size n, Rest&&... rest);
  template<class ExecutionPolicy, class I, class Size, class... Rest>
  void for_loop_n(ExecutionPolicy&& exec, I start, Size n, Rest&&... rest);

  template<class I, class Size, class S, class... Rest>
  void for_loop_n_strided(I start, Size n, S stride, Rest&&... rest);
  template<class ExecutionPolicy, class I, class Size, class S, class... Rest>
  void for_loop_n_strided(ExecutionPolicy&& exec, I start, Size n, S stride, Rest&&... rest);
}
}
```
Each of the function templates in this subclause ([parallel.alg.reductions]) returns a reduction object of unspecified type having a reduction value type and encapsulating a reduction identity value for the reduction, a combiner function object, and a live-out object from which the initial value is obtained and into which the final value is stored.
An algorithm uses reduction objects by allocating an unspecified number of instances, known as accumulators, of the reduction value
type.
Modifications to the accumulator by the application of element
access functions accrue as partial results. At some point before the
algorithm
returns, the partial results are combined, two at a time, using
the reduction object’s combiner operation until a single value remains,
which
is then assigned back to the live-out object. [ Note: In order to produce useful results, modifications to the accumulator should be consistent with the combiner operation. For example, if the combiner is plus<T>, incrementing the accumulator would be consistent with the combiner but doubling it or assigning to it would not. — end note ]
template<class T, class BinaryOperation>
unspecified reduction(T& var, const T& identity, BinaryOperation combiner);
Requires: T shall meet the requirements of CopyConstructible and MoveAssignable. The expression var = combiner(var, var) shall be well-formed.

Returns: A reduction object of unspecified type having reduction value type T, reduction identity identity, combiner function object combiner, and using the object referenced by var as its live-out object.

template<class T> unspecified reduction_plus(T& var);
template<class T> unspecified reduction_multiplies(T& var);
template<class T> unspecified reduction_bit_and(T& var);
template<class T> unspecified reduction_bit_or(T& var);
template<class T> unspecified reduction_bit_xor(T& var);
template<class T> unspecified reduction_min(T& var);
template<class T> unspecified reduction_max(T& var);
Requires: T shall meet the requirements of CopyConstructible and MoveAssignable.

Returns: A reduction object of unspecified type having reduction value type T, reduction identity and combiner operation as specified in the table below, and using the object referenced by var as its live-out object.

Function | Reduction Identity | Combiner Operation |
---|---|---|
reduction_plus | T() | x + y |
reduction_multiplies | T(1) | x * y |
reduction_bit_and | (~T()) | x & y |
reduction_bit_or | T() | x \| y |
reduction_bit_xor | T() | x ^ y |
reduction_min | var | min(x, y) |
reduction_max | var | max(x, y) |
[ Example: The following code updates each element of y and sets s to the sum of their squares:

```cpp
extern int n;
extern float x[], y[], a;
float s = 0;
for_loop(execution::vec, 0, n,
  reduction(s, 0.0f, plus<>()),
  [&](int i, float& accum) {
    y[i] += a*x[i];
    accum += y[i]*y[i];
  });
```

— end example ]
Each of the function templates in this section returns an induction object of unspecified type having an induction value type and encapsulating an initial value i of that type and, optionally, a stride.
For each element in the input range, an algorithm over input sequence S computes an induction value from an induction variable and ordinal position p within S by the formula i + p * stride if a stride was specified or i + p otherwise. This induction value is passed to the element access function.
An induction object may refer to a live-out object to hold the final value of the induction sequence. When the algorithm using the induction object completes, the live-out object is assigned the value i + n * stride, where n is the number of elements in the input range.
template<class T>
unspecified induction(T&& var); template<class T, class S>
unspecified induction(T&& var, S stride);
Returns: An induction object of unspecified type having induction value type remove_cv_t<remove_reference_t<T>>, initial value var, and (if specified) stride stride. If T is an lvalue reference to non-const type, then the object referenced by var becomes the live-out object for the induction object; otherwise there is no live-out object.
```cpp
template<class I, class... Rest>
void for_loop(no_deduce_t<I> start, I finish, Rest&&... rest);
template<class ExecutionPolicy, class I, class... Rest>
void for_loop(ExecutionPolicy&& exec, no_deduce_t<I> start, I finish, Rest&&... rest);

template<class I, class S, class... Rest>
void for_loop_strided(no_deduce_t<I> start, I finish, S stride, Rest&&... rest);
template<class ExecutionPolicy, class I, class S, class... Rest>
void for_loop_strided(ExecutionPolicy&& exec, no_deduce_t<I> start, I finish, S stride, Rest&&... rest);

template<class I, class Size, class... Rest>
void for_loop_n(I start, Size n, Rest&&... rest);
template<class ExecutionPolicy, class I, class Size, class... Rest>
void for_loop_n(ExecutionPolicy&& exec, I start, Size n, Rest&&... rest);

template<class I, class Size, class S, class... Rest>
void for_loop_n_strided(I start, Size n, S stride, Rest&&... rest);
template<class ExecutionPolicy, class I, class Size, class S, class... Rest>
void for_loop_n_strided(ExecutionPolicy&& exec, I start, Size n, S stride, Rest&&... rest);
```
Requires: For the overloads with an ExecutionPolicy, I shall be an integral type or meet the requirements of a forward iterator type; otherwise, I shall be an integral type or meet the requirements of an input iterator type. Size shall be an integral type and n shall be non-negative. S shall have integral type and stride shall have non-zero value. stride shall be negative only if I has integral type or meets the requirements of a bidirectional iterator. The rest parameter pack shall have at least one element, comprising objects returned by invocations of the reduction ([parallel.alg.reductions]) and/or induction ([parallel.alg.inductions]) function templates followed by exactly one invocable element-access function, f. For the overloads with an ExecutionPolicy, f shall meet the requirements of CopyConstructible; otherwise, f shall meet the requirements of MoveConstructible.
Effects: Applies f to each element in the input sequence, as described below, with additional arguments corresponding to the reductions and inductions in the rest parameter pack. The length of the input sequence is:

- n, if specified;
- otherwise finish - start, if neither n nor stride is specified;
- otherwise 1 + (finish-start-1)/stride, if stride is positive;
- otherwise 1 + (start-finish-1)/-stride.
The first element in the input sequence is start. Each subsequent element is generated by adding stride to the previous element, if stride is specified, otherwise by incrementing the previous element. [ Note: As in the case of the advance and distance functions, when I is an iterator type, the iterators in the input sequence are not dereferenced before being passed to f. — end note ]
If an element of the rest parameter pack is an object returned by a call to induction, then the additional argument passed to the corresponding application of f is the induction value for that induction object corresponding to the position of the application of f in the input sequence.
template<class F>
auto no_vec(F&& f) noexcept -> decltype(std::forward<F>(f)());
Effects: Evaluates std::forward<F>(f)(). When invoked within an element access function in a parallel algorithm using vector_policy, if two calls to no_vec are horizontally matched within a wavefront application of an element access function over input sequence S, then the execution of f in the application for one element in S is sequenced before the execution of f in the application for a subsequent element in S; otherwise, there is no effect on sequencing.
Returns: The result of f.

[ Note: If f exits via an exception, then terminate will be called, consistent with all other potentially-throwing operations invoked with vector_policy execution. — end note ]
[ Example:

```cpp
extern int* p;
for_loop(vec, 0, n,
  [&](int i) {
    y[i] += y[i+1];
    if (y[i] < 0) {
      no_vec([&]{ *p++ = i; });
    }
  });
```

The updates *p++ = i will occur in the same order as if the policy were seq. — end example ]
```cpp
template<class T>
class ordered_update_t {
  T& ref_;  // exposition only
public:
  ordered_update_t(T& loc) noexcept : ref_(loc) {}
  ordered_update_t(const ordered_update_t&) = delete;
  ordered_update_t& operator=(const ordered_update_t&) = delete;

  template<class U> auto operator=(U rhs)   const noexcept { return no_vec([&]{ return ref_ =   std::move(rhs); }); }
  template<class U> auto operator+=(U rhs)  const noexcept { return no_vec([&]{ return ref_ +=  std::move(rhs); }); }
  template<class U> auto operator-=(U rhs)  const noexcept { return no_vec([&]{ return ref_ -=  std::move(rhs); }); }
  template<class U> auto operator*=(U rhs)  const noexcept { return no_vec([&]{ return ref_ *=  std::move(rhs); }); }
  template<class U> auto operator/=(U rhs)  const noexcept { return no_vec([&]{ return ref_ /=  std::move(rhs); }); }
  template<class U> auto operator%=(U rhs)  const noexcept { return no_vec([&]{ return ref_ %=  std::move(rhs); }); }
  template<class U> auto operator>>=(U rhs) const noexcept { return no_vec([&]{ return ref_ >>= std::move(rhs); }); }
  template<class U> auto operator<<=(U rhs) const noexcept { return no_vec([&]{ return ref_ <<= std::move(rhs); }); }
  template<class U> auto operator&=(U rhs)  const noexcept { return no_vec([&]{ return ref_ &=  std::move(rhs); }); }
  template<class U> auto operator^=(U rhs)  const noexcept { return no_vec([&]{ return ref_ ^=  std::move(rhs); }); }
  template<class U> auto operator|=(U rhs)  const noexcept { return no_vec([&]{ return ref_ |=  std::move(rhs); }); }

  auto operator++()    const noexcept { return no_vec([&]{ return ++ref_; }); }
  auto operator++(int) const noexcept { return no_vec([&]{ return ref_++; }); }
  auto operator--()    const noexcept { return no_vec([&]{ return --ref_; }); }
  auto operator--(int) const noexcept { return no_vec([&]{ return ref_--; }); }
};
```
An object of type ordered_update_t<T>
is a proxy for an object of type T
intended to be used within a parallel application of an element access function using a
policy object of type vector_policy
. Simple increments, assignments, and compound
assignments to the object are forwarded to the proxied object, but are sequenced as though
executed within a no_vec
invocation.
template<class T>
ordered_update_t<T> ordered_update(T& loc) noexcept;

Returns: ordered_update_t<T>{ loc }.
Header <experimental/task_block> synopsis

```cpp
namespace std::experimental {
inline namespace parallelism_v2 {

  class task_cancelled_exception;

  class task_block;

  template<class F>
  void define_task_block(F&& f);

  template<class F>
  void define_task_block_restore_thread(F&& f);
}
}
```
Class task_cancelled_exception

```cpp
namespace std::experimental {
inline namespace parallelism_v2 {

  class task_cancelled_exception : public exception {
  public:
    task_cancelled_exception() noexcept;
    virtual const char* what() const noexcept override;
  };
}
}
```
The class task_cancelled_exception defines the type of objects thrown by task_block::run or task_block::wait if they detect that an exception is pending within the current parallel block (see the exception handling subclause below).

task_cancelled_exception member function what

virtual const char* what() const noexcept;
Class task_block

```cpp
namespace std::experimental {
inline namespace parallelism_v2 {

  class task_block {
  private:
    ~task_block();
  public:
    task_block(const task_block&) = delete;
    task_block& operator=(const task_block&) = delete;
    void operator&() const = delete;

    template<class F>
    void run(F&& f);

    void wait();
  };
}
}
```
The class task_block
defines an interface for forking and joining parallel tasks. The define_task_block
and define_task_block_restore_thread
function templates create an object of type task_block
and pass a reference to that object to a user-provided function object.
An object of class task_block
cannot be constructed,
destroyed, copied, or moved except by the implementation of the task
block library. Taking the address of a task_block
object via operator&
is ill-formed. Obtaining its address by any other means (including addressof
) results in a pointer with an unspecified value; dereferencing such a pointer results in undefined behavior.
A task_block is active if it was created by the nearest enclosing task block, where “task block” refers to an invocation of define_task_block or define_task_block_restore_thread and “nearest enclosing” means the most recent invocation that has not yet completed. Code designated for execution in another thread by means other than the facilities in this section (e.g., using thread or async) is not enclosed in the task block, and a task_block passed to (or captured by) such code is not active within that code. Performing any operation on a task_block that is not active results in undefined behavior.
When the argument to task_block::run
is called, no task_block
is active, not even the task_block
on which run
was called.
(The function object should not, therefore, capture a task_block
from the surrounding block.)
[ Example:

```cpp
define_task_block([&](auto& tb) {
  tb.run([&]{
    tb.run([] { f(); });               // Error: tb is not active within run
    define_task_block([&](auto& tb2) { // Define new task block
      tb2.run(f);
      ...
    });
  });
  ...
});
```

— end example ]
task_block member function template run

template<class F> void run(F&& f);
Requires: F shall be MoveConstructible. DECAY_COPY(std::forward<F>(f))() shall be a valid expression. *this shall be the active task_block.
Effects: Evaluates DECAY_COPY(std::forward<F>(f))(), where DECAY_COPY(std::forward<F>(f)) is evaluated synchronously within the current thread. The call to the resulting copy of the function object is permitted to run on an unspecified thread created by the implementation in an unordered fashion relative to the sequence of operations following the call to run(f) (the continuation), or indeterminately sequenced within the same thread as the continuation. The call to run synchronizes with the call to the function object. The completion of the call to the function object synchronizes with the next invocation of wait on the same task_block or completion of the nearest enclosing task block (i.e., the define_task_block or define_task_block_restore_thread that created this task_block).
Throws: task_cancelled_exception, as described in the exception handling subclause below.

The run function may return on a thread other than the one on which it was called; in such cases, completion of the call to run synchronizes with the continuation. [ Note: The return from run is ordered similarly to an ordinary function call in a single thread. — end note ]
[ Note: The invocation of the user-supplied function object f may be immediate or may be delayed until compute resources are available. run might or might not return before the invocation of f completes. — end note ]
task_block member function wait

void wait();
Requires: *this shall be the active task_block.

Effects: Blocks until the tasks spawned using this task_block have completed.

Throws: task_cancelled_exception, as described in the exception handling subclause below.

The wait function may return on a thread other than the one on which it was called; in such cases, completion of the call to wait synchronizes with subsequent operations.
[ Example:

```cpp
define_task_block([&](auto& tb) {
  tb.run([&]{ process(a, w, x); }); // Process a[w] through a[x]
  if (y < x) tb.wait();             // Wait if overlap between [w,x) and [y,z)
  process(a, y, z);                 // Process a[y] through a[z]
});
```

— end example ]
Function template define_task_block

template<class F>
void define_task_block(F&& f);

template<class F>
void define_task_block_restore_thread(F&& f);
Requires: Given an lvalue tb of type task_block, the expression f(tb) shall be well-formed.

Effects: Constructs a task_block tb and calls f(tb).
Throws: exception_list, as specified in the exception handling subclause below.

Postconditions: All tasks spawned from f have finished execution.
The define_task_block function may return on a thread other than the one on which it was called unless there are no task blocks active on entry to define_task_block. If define_task_block returns on a different thread, it synchronizes with operations following the call. The define_task_block_restore_thread function always returns on the same thread as the one on which it was called.
[ Note: It is expected (but not mandated) that f will (directly or indirectly) call tb.run(function-object). — end note ]
Every task_block
has an associated exception list. When the task block starts, its associated exception list is empty.
When an exception is thrown from the user-provided function object passed to define_task_block
or
define_task_block_restore_thread
, it is added to the exception list for that task block. Similarly, when
an exception is thrown from the user-provided function object passed into task_block::run
, the exception
object is added to the exception list associated with the nearest enclosing task block. In both cases, an
implementation may discard any pending tasks that have not yet been invoked. Tasks that are already in
progress are not interrupted except at a call to task_block::run
or task_block::wait
as described below.
If the implementation is able to detect that an exception has been thrown by another task within the same nearest enclosing task block, then task_block::run or task_block::wait may throw task_cancelled_exception; these instances of task_cancelled_exception are not added to the exception list of the corresponding task block.
When a task block finishes with a non-empty exception list, the exceptions are aggregated into an exception_list
object, which is then thrown from the task block.
The order of the exceptions in the exception_list
object is unspecified.