Document Number: N4706
Date:
Revises: N4698
Editor: Jared Hoberock
NVIDIA Corporation
jhoberock@nvidia.com

Working Draft, Technical Specification for C++ Extensions for Parallelism Version 2

Note: this is an early draft. It’s known to be incomplet and incorrekt, and it has lots of bad formatting.

1 Scope [parallel.scope]

This Technical Specification describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to invoke algorithms with parallel execution. The algorithms described by this Technical Specification are realizable across a broad class of computer architectures.

This Technical Specification is non-normative. Some of the functionality described by this Technical Specification may be considered for standardization in a future version of C++, but it is not currently part of any C++ standard. Some of the functionality in this Technical Specification may never be standardized, and other functionality may be standardized in a substantially changed form.

The goal of this Technical Specification is to build widespread existing practice for parallelism in the C++ standard algorithms library. It gives advice on extensions to those vendors who wish to provide them.

2 Normative references [parallel.references]

The following referenced document is indispensable for the application of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

ISO/IEC 14882:2017 is herein called the C++ Standard. The library described in ISO/IEC 14882:2017 clauses 20-33 is herein called the C++ Standard Library. The C++ Standard Library components described in ISO/IEC 14882:2017 clauses 28, 29.8 and 23.10.10 are herein called the C++ Standard Algorithms Library.

Unless otherwise specified, the whole of the C++ Standard's Library introduction (C++17 §20) is included into this Technical Specification by reference.

3 Terms and definitions [parallel.defns]
For the purposes of this document, the terms and definitions given in the C++ Standard and the following apply.

ISO and IEC maintain terminological databases for use in standardization.

A parallel algorithm is a function template described by this Technical Specification declared in namespace std::experimental::parallelism_v2 with a formal template parameter named ExecutionPolicy.

Parallel algorithms access objects indirectly accessible via their arguments by invoking the following functions:

  • All operations of the categories of the iterators that the algorithm is instantiated with.
  • Functions on those sequence elements that are required by its specification.
  • User-provided function objects to be applied during the execution of the algorithm, if required by the specification.
  • Operations on those function objects required by the specification. [ Note: See C++17 §28.1. end note ]
These functions are herein called element access functions. [ Example: The sort function may invoke the following element access functions:
  • Operations of the random-access iterator of the actual template argument, as per C++17 §27.2.7, as implied by the name of the template parameter RandomAccessIterator.
  • The swap function on the elements of the sequence (as per C++17 §28.7.1.1 [sort]/2).
  • The user-provided Compare function object.
end example ]
4 General [parallel.general]
4.1 Namespaces and headers [parallel.general.namespaces]

Since the extensions described in this Technical Specification are experimental and not part of the C++ Standard Library, they should not be declared directly within namespace std. Unless otherwise specified, all components described in this Technical Specification are declared in namespace std::experimental::parallelism_v2.

[ Note: Once standardized, the components described by this Technical Specification are expected to be promoted to namespace std. end note ]

Unless otherwise specified, references to such entities described in this Technical Specification are assumed to be qualified with std::experimental::parallelism_v2, and references to entities described in the C++ Standard Library are assumed to be qualified with std::.

Extensions that are expected to eventually be added to an existing header <meow> are provided inside the <experimental/meow> header, which shall include the standard contents of <meow> as if by

    #include <meow>
4.2 Feature-testing recommendations [parallel.general.features]

An implementation that provides support for this Technical Specification shall define the feature test macro(s) in Table 1.

Table 1 — Feature-test macro(s)

  • N4505, Working Draft, Technical Specification for C++ Extensions for Parallelism (Clause 7): __cpp_lib_experimental_parallel_algorithm = 201505, defined in <experimental/algorithm>, <experimental/exception_list>, <experimental/execution_policy>, <experimental/numeric>.
  • P0155R0, Task Block R5 (Clause 8): __cpp_lib_experimental_parallel_task_block = 201711, defined in <experimental/exception_list>, <experimental/task_block>.
  • P0076R4, Vector and Wavefront Policies (5.2): __cpp_lib_experimental_execution_vector_policy = 201711, defined in <experimental/algorithm>, <experimental/execution>.
  • P0075R2, Template Library for Parallel For Loops (7.3.2): __cpp_lib_experimental_parallel_for_loop = 201711, defined in <experimental/algorithm>.
5 Execution policies [parallel.execpol]
? In general [parallel.execpol.general]

This clause describes classes that are execution policy types. An object of an execution policy type indicates the kinds of parallelism allowed in the execution of an algorithm and expresses the consequent requirements on the element access functions.

[ Example:
std::vector<int> v = ...

// standard sequential sort
std::sort(v.begin(), v.end());

using namespace std::experimental::parallelism_v2;

// explicitly sequential sort
sort(seq, v.begin(), v.end());

// permitting parallel execution
sort(par, v.begin(), v.end());

// permitting vectorization as well
sort(par_vec, v.begin(), v.end());

// sort with dynamically-selected execution
size_t threshold = ...
execution_policy exec = seq;
if (v.size() > threshold)
{
  exec = par;
}

sort(exec, v.begin(), v.end());
end example ]
 
    [ Note:
    
      Because different parallel architectures may require idiosyncratic
      parameters for efficient execution, implementations of the Standard Library 
      may provide additional execution policies to those described in this
      Technical Specification as extensions.
    
    end note ]
  
  
5.1 Header <experimental/execution_policy> synopsis [parallel.execpol.synopsis]
#include <execution>

namespace std {
namespace experimental {
inline namespace parallelism_v2 {
  // ?, execution policy type trait
  template<class T> struct is_execution_policy;
  template<class T> constexpr bool is_execution_policy_v = is_execution_policy<T>::value;

  // ?, sequential execution policy
  class sequential_execution_policy;

  // ?, parallel execution policy
  class parallel_execution_policy;

  // ?, parallel+vector execution policy
  class parallel_vector_execution_policy;

  // ?, dynamic execution policy
  class execution_policy;

namespace execution {
  // 5.2, Unsequenced execution policy
  class unsequenced_policy;

  // 5.3, Vector execution policy
  class vector_policy;

  // 5.6, Execution policy objects
  inline constexpr unsequenced_policy unseq{ unspecified };
  inline constexpr vector_policy vec{ unspecified };
}
}
}
}
? Execution policy type trait [parallel.execpol.type]

template<class T> struct is_execution_policy { see below };

is_execution_policy can be used to detect parallel execution policies for the purpose of excluding function signatures from participating in otherwise ambiguous overload resolution.

is_execution_policy<T> shall be a UnaryTypeTrait with a BaseCharacteristic of true_type if T is the type of a standard or implementation-defined execution policy, otherwise false_type.



[ Note: This provision reserves the privilege of creating non-standard execution policies to the library implementation. end note ]
  
    
    

The behavior of a program that adds specializations for is_execution_policy is undefined.

? Sequential execution policy [parallel.execpol.seq]

class sequential_execution_policy{ unspecified };

The class sequential_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and require that a parallel algorithm's execution may not be parallelized.

? Parallel execution policy [parallel.execpol.par]

class parallel_execution_policy{ unspecified };

The class parallel_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be parallelized.

? Parallel+Vector execution policy [parallel.execpol.parvec]

class parallel_vector_execution_policy{ unspecified };

The class parallel_vector_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be vectorized and parallelized.

5.2 Unsequenced execution policy [parallel.execpol.unseq]
class unsequenced_policy{ unspecified };

The class unsequenced_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be vectorized, e.g., executed on a single thread using instructions that operate on multiple data items.

The invocations of element access functions in parallel algorithms invoked with an execution policy of type unsequenced_policy are permitted to execute in an unordered fashion in the calling thread, unsequenced with respect to one another within the calling thread. [ Note: This means that multiple function object invocations may be interleaved on a single thread. end note ]

[ Note: This overrides the usual guarantee from the C++ Standard, C++17 §4.6 [intro.execution] that function executions do not overlap with one another. end note ]

During the execution of a parallel algorithm with the experimental::execution::unsequenced_policy policy, if the invocation of an element access function exits via an uncaught exception, terminate() shall be called.

5.3 Vector execution policy [parallel.execpol.vec]
class vector_policy{ unspecified };

The class vector_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be vectorized. Additionally, such vectorization will result in an execution that respects the sequencing constraints of wavefront application ([parallel.alg.general.wavefront]). [ Note: The implementation thus makes stronger guarantees than for unsequenced_policy, for example. end note ]

The invocations of element access functions in parallel algorithms invoked with an execution policy of type vector_policy are permitted to execute in unordered fashion in the calling thread, unsequenced with respect to one another within the calling thread, subject to the sequencing constraints of wavefront application (7.1.3) for the last argument to for_loop or for_loop_strided.

During the execution of a parallel algorithm with the experimental::execution::vector_policy policy, if the invocation of an element access function exits via an uncaught exception, terminate() shall be called.

? Dynamic execution policy [parallel.execpol.dynamic]

class execution_policy
{
  public:
    // 5.4, execution_policy construct/assign
    template<class T> execution_policy(const T& exec);
    template<class T> execution_policy& operator=(const T& exec);

    // 5.5, execution_policy object access
    const type_info& type() const noexcept;
    template<class T> T* get() noexcept;
    template<class T> const T* get() const noexcept;
};

The class execution_policy is a container for execution policy objects. execution_policy allows dynamic control over standard algorithm execution.

[ Example:
std::vector<float> sort_me = ...
        
using namespace std::experimental::parallelism_v2;
execution_policy exec = seq;

if(sort_me.size() > threshold)
{
  exec = par;
}

sort(exec, std::begin(sort_me), std::end(sort_me));
end example ]

Objects of type execution_policy shall be constructible and assignable from objects of type T for which is_execution_policy<T>::value is true.

5.4 execution_policy construct/assign [parallel.execpol.con]
template<class T> execution_policy(const T& exec);
Effects:
Constructs an execution_policy object with a copy of exec's state.
Remarks:
This constructor shall not participate in overload resolution unless is_execution_policy<T>::value is true.
template<class T> execution_policy& operator=(const T& exec);
Effects:
Assigns a copy of exec's state to *this.
Returns:
*this.
5.5 execution_policy object access [parallel.execpol.access]
const type_info& type() const noexcept;
Returns:
typeid(T), such that T is the type of the execution policy object contained by *this.
template<class T> T* get() noexcept;
template<class T> const T* get() const noexcept;
Returns:
If type() == typeid(T), a pointer to the stored execution policy object; otherwise a null pointer.
Requires:
is_execution_policy<T>::value is true.
5.6 Execution policy objects [parallel.execpol.objects]
constexpr sequential_execution_policy      seq{};
constexpr parallel_execution_policy        par{};
constexpr parallel_vector_execution_policy par_vec{};
constexpr execution::unsequenced_policy unseq{};
constexpr execution::vector_policy vec{};

The header <experimental/execution_policy> declares a global object associated with each type of execution policy defined by this Technical Specification.

6 Parallel exceptions [parallel.exceptions]
6.1 Exception reporting behavior [parallel.exceptions.behavior]

During the execution of a standard parallel algorithm, if temporary memory resources are required and none are available, the algorithm throws a std::bad_alloc exception.

During the execution of a standard parallel algorithm, if the invocation of an element access function exits via an uncaught exception, the behavior of the program is determined by the type of execution policy used to invoke the algorithm:

  • If the execution policy object is of type parallel_vector_execution_policy, unsequenced_policy, or vector_policy, std::terminate shall be called.
  • If the execution policy object is of type sequential_execution_policy or parallel_execution_policy, the execution of the algorithm exits via an exception. The exception shall be an exception_list containing all uncaught exceptions thrown during the invocations of element access functions, or optionally the uncaught exception if there was only one.
    
    
    [ Note: For example, when for_each is executed sequentially, if an invocation of the user-provided function object throws an exception, for_each can exit via the uncaught exception, or throw an exception_list containing the original exception. end note ]

    [ Note: These guarantees imply that, unless the algorithm has failed to allocate memory and exits via std::bad_alloc, all exceptions thrown during the execution of the algorithm are communicated to the caller. It is unspecified whether an algorithm implementation will "forge ahead" after encountering and capturing a user exception. end note ]

    [ Note: The algorithm may exit via the std::bad_alloc exception even if one or more user-provided function objects have exited via an exception. For example, this can happen when an algorithm fails to allocate memory while creating or adding elements to the exception_list object. end note ]
  • If the execution policy object is of any other type, the behavior is implementation-defined.

6.2 Header <experimental/exception_list> synopsis [parallel.exceptions.synopsis]
namespace std {
namespace experimental {
inline namespace parallelism_v2 {

  class exception_list : public exception
  {
    public:
      using iterator = unspecified;

      size_t size() const noexcept;
      iterator begin() const noexcept;
      iterator end() const noexcept;

      const char* what() const noexcept override;
  };
}
}
}

The class exception_list owns a sequence of exception_ptr objects. The parallel algorithms may use the exception_list to communicate uncaught exceptions encountered during parallel execution to the caller of the algorithm.

The type exception_list::iterator shall fulfill the requirements of ForwardIterator.

size_t size() const noexcept;
Returns:
The number of exception_ptr objects contained within the exception_list.
Complexity:
Constant time.
iterator begin() const noexcept;
Returns:
An iterator referring to the first exception_ptr object contained within the exception_list.
iterator end() const noexcept;
Returns:
An iterator that is past the end of the owned sequence.
const char* what() const noexcept override;
Returns:
An implementation-defined NTBS.
7 Parallel algorithms [parallel.alg]
7.1 In general [parallel.alg.general]
This clause describes components that C++ programs may use to perform operations on containers and other sequences in parallel.
7.1.1 Requirements on user-provided function objects [parallel.alg.general.user]

Function objects passed into parallel algorithms as objects of type BinaryPredicate, Compare, and BinaryOperation shall not directly or indirectly modify objects via their arguments.

7.1.2 Effect of execution policies on algorithm execution [parallel.alg.general.exec]

Parallel algorithms have template parameters named ExecutionPolicy which describe the manner in which the execution of these algorithms may be parallelized and the manner in which they apply the element access functions.

The invocations of element access functions in parallel algorithms invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The invocations of element access functions in parallel algorithms invoked with an execution policy object of type parallel_execution_policy are permitted to execute in an unordered fashion in either the invoking thread or in a thread implicitly created by the library to support parallel algorithm execution. Any such invocations executing in the same thread are indeterminately sequenced with respect to each other. [ Note: It is the caller's responsibility to ensure correctness, for example that the invocation does not introduce data races or deadlocks. end note ]

[ Example:
using namespace std::experimental::parallelism_v2;
int a[] = {0,1};
std::vector<int> v;
for_each(par, std::begin(a), std::end(a), [&](int i) {
  v.push_back(i*2+1);
});
The program above has a data race because of the unsynchronized access to the container v. end example ]


      
      
    
[ Example:
using namespace std::experimental::parallelism_v2;
std::atomic<int> x = 0;
int a[] = {1,2};
for_each(par, std::begin(a), std::end(a), [&](int n) {
  x.fetch_add(1, std::memory_order_relaxed);
  // spin wait for another iteration to change the value of x
  while (x.load(std::memory_order_relaxed) == 1) { }
});
The above example depends on the order of execution of the iterations, and is therefore undefined (may deadlock). end example ]


      
      
    
[ Example:
using namespace std::experimental::parallelism_v2;
int x=0;
std::mutex m;
int a[] = {1,2};
for_each(par, std::begin(a), std::end(a), [&](int) {
  m.lock();
  ++x;
  m.unlock();
});
The above example synchronizes access to object x ensuring that it is incremented correctly. end example ]

The invocations of element access functions in parallel algorithms invoked with an execution policy of type unsequenced_policy are permitted to execute in an unordered fashion in the calling thread, unsequenced with respect to one another within the calling thread. [ Note: This means that multiple function object invocations may be interleaved on a single thread. end note ]



[ Note: This overrides the usual guarantee from the C++ Standard, C++17 §4.6 [intro.execution], that function executions do not interleave with one another. end note ]
  
      

The invocations of element access functions in parallel algorithms invoked with an execution policy of type vector_policy are permitted to execute in an unordered fashion in the calling thread, unsequenced with respect to one another within the calling thread, subject to the sequencing constraints of wavefront application (7.1.3) for the last argument to for_loop or for_loop_strided.

The invocations of element access functions in parallel algorithms invoked with an execution policy of type parallel_vector_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and unsequenced with respect to one another within each thread. [ Note: This means that multiple function object invocations may be interleaved on a single thread. end note ]



[ Note: This overrides the usual guarantee from the C++ Standard, C++17 §4.6 [intro.execution], that function executions do not interleave with one another. end note ]

Since parallel_vector_execution_policy allows the execution of element access functions to be interleaved on a single thread, synchronization, including the use of mutexes, risks deadlock. Thus the synchronization with parallel_vector_execution_policy is restricted as follows:

A standard library function is vectorization-unsafe if it is specified to synchronize with another function invocation, or another function invocation is specified to synchronize with it, and if it is not a memory allocation or deallocation function. Vectorization-unsafe standard library functions may not be invoked by user code called from parallel_vector_execution_policy algorithms.

[ Note: Implementations must ensure that internal synchronization inside standard library routines does not induce deadlock. end note ]
  
      

[ Example:
using namespace std::experimental::parallelism_v2;
int x=0;
std::mutex m;
int a[] = {1,2};
for_each(par_vec, std::begin(a), std::end(a), [&](int) {
  m.lock();
  ++x;
  m.unlock();
});
The above program is invalid because the applications of the function object are not guaranteed to run on different threads. end example ]


[ Note: The application of the function object may result in two consecutive calls to m.lock on the same thread, which may deadlock. end note ]

[ Note: The semantics of the parallel_execution_policy or the parallel_vector_execution_policy invocation allow the implementation to fall back to sequential execution if the system cannot parallelize an algorithm invocation due to lack of resources. end note ]
  

      

Algorithms invoked with an execution policy object of type execution_policy execute internally as if invoked with the contained execution policy object.

The semantics of parallel algorithms invoked with an execution policy object of implementation-defined type are implementation-defined.

7.1.3 Wavefront Application [parallel.alg.general.wavefront]

For the purposes of this section, an evaluation is a value computation or side effect of an expression, or an execution of a statement. Initialization of a temporary object is considered a subexpression of the expression that necessitates the temporary object.

An evaluation A contains an evaluation B if:

  • A and B are not potentially concurrent ([intro.races]); and
  • the start of A is the start of B or the start of A is sequenced before the start of B; and
  • the completion of B is the completion of A or the completion of B is sequenced before the completion of A.
[ Note: This includes evaluations occurring in function invocations. end note ]

An evaluation A is ordered before an evaluation B if A is deterministically sequenced before B. [ Note: If A is indeterminately sequenced with respect to B or A and B are unsequenced, then A is not ordered before B and B is not ordered before A. The ordered before relationship is transitive. end note ]

For an evaluation A ordered before an evaluation B, both contained in the same invocation of an element access function, A is a vertical antecedent of B if:

  • there exists an evaluation S such that:
    • S contains A, and
    • S contains all evaluations C (if any) such that A is ordered before C and C is ordered before B,
    • but S does not contain B, and
  • control reached B from A without executing any of the following:
    • a goto statement or asm declaration that jumps to a statement outside of S, or
    • a switch statement executed within S that transfers control into a substatement of a nested selection or iteration statement, or
    • a throw [ Note: even if caught end note ] , or
    • a longjmp.
[ Note: Vertical antecedent is an irreflexive, antisymmetric, nontransitive relationship between two evaluations. Informally, A is a vertical antecedent of B if A is sequenced immediately before B or A is nested zero or more levels within a statement S that immediately precedes B. end note ]

In the following, Xi and Xj refer to evaluations of the same expression or statement contained in the application of an element access function corresponding to the ith and jth elements of the input sequence. [ Note: There might be several evaluations Xk, Yk, etc. of a single expression or statement in application k, for example, if the expression or statement appears in a loop within the element access function. end note ]

Horizontally matched is an equivalence relationship between two evaluations of the same expression. An evaluation Bi is horizontally matched with an evaluation Bj if:

  • both are the first evaluations in their respective applications of the element access function, or
  • there exist horizontally matched evaluations Ai and Aj that are vertical antecedents of evaluations Bi and Bj, respectively.
[ Note: Horizontally matched establishes a theoretical lock-step relationship between evaluations in different applications of an element access function. end note ]

Let f be a function called for each argument list in a sequence of argument lists. Wavefront application of f requires that evaluation Ai be sequenced before evaluation Bj if i < j and:

  • Ai is sequenced before some evaluation Bi and Bi is horizontally matched with Bj, or
  • Ai is horizontally matched with some evaluation Aj and Aj is sequenced before Bj.
[ Note: Wavefront application guarantees that parallel applications i and j execute such that progress on application j never gets ahead of application i. end note ] [ Note: The relationships between Ai and Bi and between Aj and Bj are sequenced before, not vertical antecedent. end note ]

7.1.4 ExecutionPolicy algorithm overloads [parallel.alg.overloads]

The Parallel Algorithms Library provides overloads for each of the algorithms named in Table 2, corresponding to the algorithms with the same name in the C++ Standard Algorithms Library. For each algorithm in Table 2, if there are overloads for corresponding algorithms with the same name in the C++ Standard Algorithms Library, the overloads shall have an additional template type parameter named ExecutionPolicy, which shall be the first template parameter. In addition, each such overload shall have a new function parameter, of type ExecutionPolicy&&, as the first function parameter.

Unless otherwise specified, the semantics of ExecutionPolicy algorithm overloads are identical to their overloads without the ExecutionPolicy parameter.

Parallel algorithms shall not participate in overload resolution unless is_execution_policy<decay_t<ExecutionPolicy>>::value is true.

[ Note: Not all algorithms in the Standard Library have counterparts in Table 2. end note ]
7.2 Definitions [parallel.alg.defns]

Define GENERALIZED_SUM(op, a1, ..., aN) as follows:

  • a1 when N is 1
  • op(GENERALIZED_SUM(op, b1, ..., bK), GENERALIZED_SUM(op, bM, ..., bN)) where
    • b1, ..., bN may be any permutation of a1, ..., aN and
    • 1 < K+1 = M ≤ N.

Define GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aN) as follows:

  • a1 when N is 1
  • op(GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aK), GENERALIZED_NONCOMMUTATIVE_SUM(op, aM, ..., aN)) where 1 < K+1 = M ≤ N.

7.3 Non-Numeric Parallel Algorithms [parallel.alg.ops]
7.3.1 Header <experimental/algorithm> synopsis [parallel.alg.ops.synopsis]
#include <algorithm>

namespace std {
namespace experimental {
inline namespace parallelism_v2 {
  template<class ExecutionPolicy,
           class InputIterator, class Function>
    void for_each(ExecutionPolicy&& exec,
                  InputIterator first, InputIterator last,
                  Function f);
  template<class InputIterator, class Size, class Function>
    InputIterator for_each_n(InputIterator first, Size n,
                             Function f);
  template<class ExecutionPolicy,
           class InputIterator, class Size, class Function>
    InputIterator for_each_n(ExecutionPolicy&& exec,
                             InputIterator first, Size n,
                             Function f);

namespace execution {
  // 7.3.6, No vec
  template<class F>
    auto no_vec(F&& f) noexcept -> decltype(std::forward<F>(f)());

  // 7.3.7, Ordered update class
  template<class T>
    class ordered_update_t;

  // 7.3.8, Ordered update function template
  template<class T>
    ordered_update_t<T> ordered_update(T& ref) noexcept;
}

// Exposition only: Suppress template argument deduction.
template<class T> struct no_deduce { using type = T; };
template<class T> using no_deduce_t = typename no_deduce<T>::type;

// 7.3.2, Reductions
template<class T, class BinaryOperation>
  unspecified reduction(T& var, const T& identity, BinaryOperation combiner);
template<class T>
  unspecified reduction_plus(T& var);
template<class T>
  unspecified reduction_multiplies(T& var);
template<class T>
  unspecified reduction_bit_and(T& var);
template<class T>
  unspecified reduction_bit_or(T& var);
template<class T>
  unspecified reduction_bit_xor(T& var);
template<class T>
  unspecified reduction_min(T& var);
template<class T>
  unspecified reduction_max(T& var);

// 7.3.3, Inductions
template<class T>
  unspecified induction(T&& var);
template<class T, class S>
  unspecified induction(T&& var, S stride);

// 7.3.4, For loop
template<class I, class... Rest>
  void for_loop(no_deduce_t<I> start, I finish, Rest&&... rest);
template<class ExecutionPolicy,
         class I, class... Rest>
  void for_loop(ExecutionPolicy&& exec,
                no_deduce_t<I> start, I finish, Rest&&... rest);
template<class I, class S, class... Rest>
  void for_loop_strided(no_deduce_t<I> start, I finish,
                        S stride, Rest&&... rest);
template<class ExecutionPolicy,
         class I, class S, class... Rest>
  void for_loop_strided(ExecutionPolicy&& exec,
                        no_deduce_t<I> start, I finish,
                        S stride, Rest&&... rest);
template<class I, class Size, class... Rest>
  void for_loop_n(I start, Size n, Rest&&... rest);
template<class ExecutionPolicy,
         class I, class Size, class... Rest>
  void for_loop_n(ExecutionPolicy&& exec,
                  I start, Size n, Rest&&... rest);
template<class I, class Size, class S, class... Rest>
  void for_loop_n_strided(I start, Size n, S stride, Rest&&... rest);
template<class ExecutionPolicy,
         class I, class Size, class S, class... Rest>
  void for_loop_n_strided(ExecutionPolicy&& exec,
                          I start, Size n, S stride, Rest&&... rest);

}
}
}
7.3.2

Reductions

[parallel.alg.reductions]

Each of the function templates in this subclause ([parallel.alg.reductions]) returns a reduction object of unspecified type having a reduction value type and encapsulating a reduction identity value for the reduction, a combiner function object, and a live-out object from which the initial value is obtained and into which the final value is stored.

An algorithm uses reduction objects by allocating an unspecified number of instances, known as accumulators, of the reduction value type. [ Note: An implementation might, for example, allocate an accumulator for each thread in its private thread pool. end note ] Each accumulator is initialized with the object’s reduction identity, except that the live-out object (which was initialized by the caller) comprises one of the accumulators. The algorithm passes a reference to an accumulator to each application of an element-access function, ensuring that no two concurrently executing invocations share the same accumulator. An accumulator can be shared between two applications that do not execute concurrently, but initialization is performed only once per accumulator.

Modifications to the accumulator by the application of element access functions accrue as partial results. At some point before the algorithm returns, the partial results are combined, two at a time, using the reduction object’s combiner operation until a single value remains, which is then assigned back to the live-out object. [ Note: in order to produce useful results, modifications to the accumulator should be limited to commutative operations closely related to the combiner operation. For example if the combiner is plus<T>, incrementing the accumulator would be consistent with the combiner but doubling it or assigning to it would not. end note ]

template<class T, class BinaryOperation>
unspecified reduction(T& var, const T& identity, BinaryOperation combiner);
Requires:
T shall meet the requirements of CopyConstructible and MoveAssignable. The expression var = combiner(var, var) shall be well-formed.
Returns:
a reduction object of unspecified type having reduction value type T, reduction identity identity, combiner function object combiner, and using the object referenced by var as its live-out object.
template<class T>
unspecified reduction_plus(T& var);
template<class T>
unspecified reduction_multiplies(T& var);
template<class T>
unspecified reduction_bit_and(T& var);
template<class T>
unspecified reduction_bit_or(T& var);
template<class T>
unspecified reduction_bit_xor(T& var);
template<class T>
unspecified reduction_min(T& var);
template<class T>
unspecified reduction_max(T& var);
Requires:
T shall meet the requirements of CopyConstructible and MoveAssignable.
Returns:
a reduction object of unspecified type having reduction value type T, reduction identity and combiner operation as specified in Table 3, and using the object referenced by var as its live-out object.
Table 3 — Reduction identities and combiner operations

Function               Reduction Identity   Combiner Operation
reduction_plus         T()                  x + y
reduction_multiplies   T(1)                 x * y
reduction_bit_and      (~T())               x & y
reduction_bit_or       T()                  x | y
reduction_bit_xor      T()                  x ^ y
reduction_min          var                  min(x, y)
reduction_max          var                  max(x, y)
[ Example: The following code updates each element of y and sets s to the sum of the squares.
extern int n;
extern float x[], y[], a;
float s = 0;
for_loop(execution::vec, 0, n,
    reduction(s, 0.0f, plus<>()),
    [&](int i, float& accum) {
            y[i] += a*x[i];
            accum += y[i]*y[i];
    }
);
end example ]
7.3.3

Inductions

[parallel.alg.inductions]

Each of the function templates in this subclause ([parallel.alg.inductions]) returns an induction object of unspecified type having an induction value type and encapsulating an initial value i of that type and, optionally, a stride.

For each element in the input range, an algorithm over input sequence S computes an induction value from an induction variable and ordinal position p within S by the formula i + p * stride if a stride was specified or i + p otherwise. This induction value is passed to the element access function.

An induction object may refer to a live-out object to hold the final value of the induction sequence. When the algorithm using the induction object completes, the live-out object is assigned the value i + n * stride, where n is the number of elements in the input range.

template<class T>
unspecified induction(T&& var);
template<class T, class S>
unspecified induction(T&& var, S stride);
Returns:
an induction object with induction value type remove_cv_t<remove_reference_t<T>>, initial value var, and (if specified) stride stride. If T is an lvalue reference to non-const type, then the object referenced by var becomes the live-out object for the induction object; otherwise there is no live-out object.
7.3.4

For loop

[parallel.alg.forloop]
template<class I, class... Rest>
void for_loop(no_deduce_t<I> start, I finish, Rest&&... rest);
template<class ExecutionPolicy,
      class I, class... Rest>
void for_loop(ExecutionPolicy&& exec,
              no_deduce_t<I> start, I finish, Rest&&... rest);

template<class I, class S, class... Rest>
void for_loop_strided(no_deduce_t<I> start, I finish,
                      S stride, Rest&&... rest);
template<class ExecutionPolicy,
      class I, class S, class... Rest>
void for_loop_strided(ExecutionPolicy&& exec,
                      no_deduce_t<I> start, I finish,
                      S stride, Rest&&... rest);

template<class I, class Size, class... Rest>
void for_loop_n(I start, Size n, Rest&&... rest);
template<class ExecutionPolicy,
      class I, class Size, class... Rest>
void for_loop_n(ExecutionPolicy&& exec,
                I start, Size n, Rest&&... rest);
          
template<class I, class Size, class S, class... Rest>
void for_loop_n_strided(I start, Size n, S stride, Rest&&... rest);
template<class ExecutionPolicy,
      class I, class Size, class S, class... Rest>
void for_loop_n_strided(ExecutionPolicy&& exec,
                        I start, Size n, S stride, Rest&&... rest);
Requires:
For the overloads with an ExecutionPolicy, I shall be an integral type or meet the requirements of a forward iterator type; otherwise, I shall be an integral type or meet the requirements of an input iterator type. Size shall be an integral type and n shall be non-negative. S shall have integral type and stride shall have non-zero value. stride shall be negative only if I has integral type or meets the requirements of a bidirectional iterator. The rest parameter pack shall have at least one element, comprising objects returned by invocations of the reduction ([parallel.alg.reductions]) and/or induction ([parallel.alg.inductions]) function templates followed by exactly one invocable element-access function, f. For the overloads with an ExecutionPolicy, f shall meet the requirements of CopyConstructible; otherwise, f shall meet the requirements of MoveConstructible.
Effects:
Applies f to each element in the input sequence, as described below, with additional arguments corresponding to the reductions and inductions in the rest parameter pack. The length of the input sequence is:
  • n, if specified,
  • otherwise finish - start if stride is not specified,
  • otherwise 1 + (finish-start-1)/stride if stride is positive,
  • otherwise 1 + (start-finish-1)/-stride.
The first element in the input sequence is start. Each subsequent element is generated by adding stride to the previous element, if stride is specified, otherwise by incrementing the previous element. [ Note: As described in the C++ standard, section [algorithms.general], arithmetic on non-random-access iterators is performed using advance and distance. end note ] [ Note: The order of the elements of the input sequence is important for determining ordinal position of an application of f, even though the applications themselves may be unordered. end note ]

The first argument to f is an element from the input sequence. [ Note: if I is an iterator type, the iterators in the input sequence are not dereferenced before being passed to f. end note ] For each member of the rest parameter pack excluding f, an additional argument is passed to each application of f as follows:
  • If the pack member is an object returned by a call to a reduction function listed in section [parallel.alg.reductions], then the additional argument is a reference to an accumulator of that reduction object.
  • If the pack member is an object returned by a call to induction, then the additional argument is the induction value for that induction object corresponding to the position of the application of f in the input sequence.
Complexity:
Applies f exactly once for each element of the input sequence.
Remarks:
If f returns a result, the result is ignored.
7.3.5

For each

[parallel.alg.foreach]
template<class ExecutionPolicy,
      class InputIterator, class Function>
void for_each(ExecutionPolicy&& exec,
              InputIterator first, InputIterator last,
              Function f);
Effects:
Applies f to the result of dereferencing every iterator in the range [first,last). [ Note: If the type of first satisfies the requirements of a mutable iterator, f may apply nonconstant functions through the dereferenced iterator. end note ]
Complexity:
Applies f exactly last - first times.
Remarks:
If f returns a result, the result is ignored.
Notes:
Unlike its sequential form, the parallel overload of for_each does not return a copy of its Function parameter, since parallelization may not permit efficient state accumulation.
Requires:
Unlike its sequential form, the parallel overload of for_each requires Function to meet the requirements of CopyConstructible.
template<class InputIterator, class Size, class Function>
InputIterator for_each_n(InputIterator first, Size n,
                         Function f);
Requires:
Function shall meet the requirements of MoveConstructible. [ Note: Function need not meet the requirements of CopyConstructible. end note ]
Effects:
Applies f to the result of dereferencing every iterator in the range [first,first + n), starting from first and proceeding to first + n - 1. [ Note: If the type of first satisfies the requirements of a mutable iterator, f may apply nonconstant functions through the dereferenced iterator. end note ]
Returns:
first + n for non-negative values of n and first for negative values.
Remarks:
If f returns a result, the result is ignored.
template<class ExecutionPolicy,
      class InputIterator, class Size, class Function>
InputIterator for_each_n(ExecutionPolicy&& exec,
                         InputIterator first, Size n,
                         Function f);
Effects:
Applies f to the result of dereferencing every iterator in the range [first,first + n), starting from first and proceeding to first + n - 1. [ Note: If the type of first satisfies the requirements of a mutable iterator, f may apply nonconstant functions through the dereferenced iterator. end note ]
Returns:
first + n for non-negative values of n and first for negative values.
Remarks:
If f returns a result, the result is ignored.
Notes:
Unlike its sequential form, the parallel overload of for_each_n requires Function to meet the requirements of CopyConstructible.
7.3.6

No vec

[parallel.alg.novec]
template<class F>
auto no_vec(F&& f) noexcept -> decltype(std::forward<F>(f)());
Effects:
Evaluates std::forward<F>(f)(). When invoked within an element access function in a parallel algorithm using vector_policy, if two calls to no_vec are horizontally matched within a wavefront application of an element access function over input sequence S, then the execution of f in the application for one element in S is sequenced before the execution of f in the application for a subsequent element in S; otherwise, there is no effect on sequencing.
Returns:
the result of f.
Remarks:
If f returns a result, the result is ignored.
Notes:
If f exits via an exception, then terminate will be called, consistent with all other potentially-throwing operations invoked with vector_policy execution. [ Example:
extern int* p;
for_loop(vec, 0, n, [&](int i) {
  y[i] += y[i+1];
  if (y[i] < 0) {
    no_vec([&]{
      *p++ = i;
    });
  }
});
The updates *p++ = i will occur in the same order as if the policy were seq. end example ]
7.3.7

Ordered update class

[parallel.alg.ordupdate.class]
template<class T>
class ordered_update_t {
  T& ref_; // exposition only
public:
  ordered_update_t(T& loc) noexcept
    : ref_(loc) {}
  ordered_update_t(const ordered_update_t&) = delete;
  ordered_update_t& operator=(const ordered_update_t&) = delete;

  template <class U>
    auto operator=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ = std::move(rhs); }); }
  template <class U>
    auto operator+=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ += std::move(rhs); }); }
  template <class U>
    auto operator-=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ -= std::move(rhs); }); }
  template <class U>
    auto operator*=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ *= std::move(rhs); }); }
  template <class U>
    auto operator/=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ /= std::move(rhs); }); }
  template <class U>
    auto operator%=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ %= std::move(rhs); }); }
  template <class U>
    auto operator>>=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ >>= std::move(rhs); }); }
  template <class U>
    auto operator<<=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ <<= std::move(rhs); }); }
  template <class U>
    auto operator&=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ &= std::move(rhs); }); }
  template <class U>
    auto operator^=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ ^= std::move(rhs); }); }
  template <class U>
    auto operator|=(U rhs) const noexcept
      { return no_vec([&]{ return ref_ |= std::move(rhs); }); }

  auto operator++() const noexcept
    { return no_vec([&]{ return ++ref_; }); }
  auto operator++(int) const noexcept
    { return no_vec([&]{ return ref_++; }); }
  auto operator--() const noexcept
    { return no_vec([&]{ return --ref_; }); }
  auto operator--(int) const noexcept
    { return no_vec([&]{ return ref_--; }); }
};

An object of type ordered_update_t<T> is a proxy for an object of type T intended to be used within a parallel application of an element access function using a policy object of type vector_policy. Simple increments, assignments, and compound assignments to the object are forwarded to the proxied object, but are sequenced as though executed within a no_vec invocation. [ Note: The return-value deduction of the forwarded operations results in these operations returning by value, not reference. This formulation prevents accidental collisions on accesses to the return value. end note ]

7.3.8

Ordered update function template

[parallel.alg.ordupdate.func]
template<class T>
ordered_update_t<T> ordered_update(T& loc) noexcept;
Returns:
{ loc }.
7.4

Numeric Parallel Algorithms

[parallel.alg.numeric]
7.4.1

Header <experimental/numeric> synopsis

[parallel.alg.numeric.synopsis]
namespace std {
namespace experimental {
namespace parallel {
inline namespace v2 {
  template<class InputIterator>
    typename iterator_traits<InputIterator>::value_type
      reduce(InputIterator first, InputIterator last);
  template<class ExecutionPolicy,
           class InputIterator>
    typename iterator_traits<InputIterator>::value_type
      reduce(ExecutionPolicy&& exec,
             InputIterator first, InputIterator last);
  template<class InputIterator, class T>
    T reduce(InputIterator first, InputIterator last, T init);
  template<class ExecutionPolicy,
           class InputIterator, class T>
    T reduce(ExecutionPolicy&& exec,
             InputIterator first, InputIterator last, T init);
  template<class InputIterator, class T, class BinaryOperation>
    T reduce(InputIterator first, InputIterator last, T init,
             BinaryOperation binary_op);
  template<class ExecutionPolicy, class InputIterator, class T, class BinaryOperation>
    T reduce(ExecutionPolicy&& exec,
             InputIterator first, InputIterator last, T init,
             BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator,
           class T>
    OutputIterator
      exclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result,
                     T init);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class T>
    OutputIterator
      exclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result,
                     T init);
  template<class InputIterator, class OutputIterator,
           class T, class BinaryOperation>
    OutputIterator
      exclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result,
                     T init, BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class T, class BinaryOperation>
    OutputIterator
      exclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result,
                     T init, BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator>
    OutputIterator
      inclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator>
    OutputIterator
      inclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result);
  template<class InputIterator, class OutputIterator,
           class BinaryOperation>
    OutputIterator
      inclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result,
                     BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class BinaryOperation>
    OutputIterator
      inclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result,
                     BinaryOperation binary_op);
  template<class InputIterator, class OutputIterator,
           class BinaryOperation, class T>
    OutputIterator
      inclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result,
                     BinaryOperation binary_op, T init);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class BinaryOperation, class T>
    OutputIterator
      inclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result,
                     BinaryOperation binary_op, T init);

  template<class InputIterator, class UnaryOperation,
           class T, class BinaryOperation>
    T transform_reduce(InputIterator first, InputIterator last,
                       UnaryOperation unary_op,
                       T init, BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class UnaryOperation,
           class T, class BinaryOperation>
    T transform_reduce(ExecutionPolicy&& exec,
                       InputIterator first, InputIterator last,
                       UnaryOperation unary_op,
                       T init, BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator,
           class UnaryOperation, class T, class BinaryOperation>
    OutputIterator
      transform_exclusive_scan(InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               T init, BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class UnaryOperation, class T, class BinaryOperation>
    OutputIterator
      transform_exclusive_scan(ExecutionPolicy&& exec,
                               InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               T init, BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator,
           class UnaryOperation, class BinaryOperation>
    OutputIterator
      transform_inclusive_scan(InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class UnaryOperation, class BinaryOperation>
    OutputIterator
      transform_inclusive_scan(ExecutionPolicy&& exec,
                               InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator,
           class UnaryOperation, class BinaryOperation, class T>
    OutputIterator
      transform_inclusive_scan(InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               BinaryOperation binary_op, T init);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class UnaryOperation, class BinaryOperation, class T>
    OutputIterator
      transform_inclusive_scan(ExecutionPolicy&& exec,
                               InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               BinaryOperation binary_op, T init);
}
}
}
}
7.4.2

Reduce

[parallel.alg.reduce]
template<class InputIterator>
typename iterator_traits<InputIterator>::value_type
   reduce(InputIterator first, InputIterator last);
Effects:
Same as reduce(first, last, typename iterator_traits<InputIterator>::value_type{}).
template<class InputIterator, class T>
T reduce(InputIterator first, InputIterator last, T init);
Effects:
Same as reduce(first, last, init, plus<>()).
template<class InputIterator, class T, class BinaryOperation>
T reduce(InputIterator first, InputIterator last, T init,
         BinaryOperation binary_op);
Returns:
GENERALIZED_SUM(binary_op, init, *first, ..., *(first + (last - first) - 1)).
Requires:
binary_op shall not invalidate iterators or subranges, nor modify elements in the range [first,last).
Complexity:
O(last - first) applications of binary_op.
Notes:
The primary difference between reduce and accumulate is that the behavior of reduce may be non-deterministic for non-associative or non-commutative binary_op.
7.4.3

Exclusive scan

[parallel.alg.exclusive.scan]
template<class InputIterator, class OutputIterator, class T>
OutputIterator exclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result,
                              T init);
Effects:
Same as exclusive_scan(first, last, result, init, plus<>()).
template<class InputIterator, class OutputIterator, class T, class BinaryOperation>
OutputIterator exclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result,
                              T init, BinaryOperation binary_op);
Effects:
Assigns through each iterator i in [result,result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, *first, ..., *(first + (i - result) - 1)).
Returns:
The end of the resulting range beginning at result.
Requires:
binary_op shall not invalidate iterators or subranges, nor modify elements in the ranges [first,last) or [result,result + (last - first)).
Complexity:
O(last - first) applications of binary_op.
Notes:
The difference between exclusive_scan and inclusive_scan is that exclusive_scan excludes the ith input element from the ith sum. If binary_op is not mathematically associative, the behavior of exclusive_scan may be non-deterministic.
7.4.4

Inclusive scan

[parallel.alg.inclusive.scan]
template<class InputIterator, class OutputIterator>
OutputIterator inclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result);
Effects:
Same as inclusive_scan(first, last, result, plus<>()).
template<class InputIterator, class OutputIterator, class BinaryOperation>
OutputIterator inclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result,
                              BinaryOperation binary_op);
template<class InputIterator, class OutputIterator, class BinaryOperation, class T>
OutputIterator inclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result,
                              BinaryOperation binary_op, T init);
Effects:
Assigns through each iterator i in [result,result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, *first, ..., *(first + (i - result))) or GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, *first, ..., *(first + (i - result))) if init is provided.
Returns:
The end of the resulting range beginning at result.
Requires:
binary_op shall not invalidate iterators or subranges, nor modify elements in the ranges [first,last) or [result,result + (last - first)).
Complexity:
O(last - first) applications of binary_op.
Notes:
The difference between exclusive_scan and inclusive_scan is that inclusive_scan includes the ith input element in the ith sum. If binary_op is not mathematically associative, the behavior of inclusive_scan may be non-deterministic.
7.4.5

Transform reduce

[parallel.alg.transform.reduce]
template<class InputIterator, class UnaryOperation, class T, class BinaryOperation>
T transform_reduce(InputIterator first, InputIterator last,
                   UnaryOperation unary_op, T init, BinaryOperation binary_op);
Returns:
GENERALIZED_SUM(binary_op, init, unary_op(*first), ..., unary_op(*(first + (last - first) - 1))).
Requires:
Neither unary_op nor binary_op shall invalidate subranges, or modify elements in the range [first,last).
Complexity:
O(last - first) applications each of unary_op and binary_op.
Notes:
transform_reduce does not apply unary_op to init.
7.4.6

Transform exclusive scan

[parallel.alg.transform.exclusive.scan]
template<class InputIterator, class OutputIterator,
      class UnaryOperation,
      class T, class BinaryOperation>
OutputIterator transform_exclusive_scan(InputIterator first, InputIterator last,
                                        OutputIterator result,
                                        UnaryOperation unary_op,
                                        T init, BinaryOperation binary_op);
Effects:
Assigns through each iterator i in [result,result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, unary_op(*first), ..., unary_op(*(first + (i - result) - 1))).
Returns:
The end of the resulting range beginning at result.
Requires:
Neither unary_op nor binary_op shall invalidate iterators or subranges, or modify elements in the ranges [first,last) or [result,result + (last - first)).
Complexity:
O(last - first) applications each of unary_op and binary_op.
Notes:
The difference between transform_exclusive_scan and transform_inclusive_scan is that transform_exclusive_scan excludes the ith input element from the ith sum. If binary_op is not mathematically associative, the behavior of transform_exclusive_scan may be non-deterministic. transform_exclusive_scan does not apply unary_op to init.
7.4.7

Transform inclusive scan

[parallel.alg.transform.inclusive.scan]
template<class InputIterator, class OutputIterator,
      class UnaryOperation,
      class BinaryOperation>
OutputIterator transform_inclusive_scan(InputIterator first, InputIterator last,
                                        OutputIterator result,
                                        UnaryOperation unary_op,
                                        BinaryOperation binary_op);template<class InputIterator, class OutputIterator,
      class UnaryOperation,
      class BinaryOperation, class T>
OutputIterator transform_inclusive_scan(InputIterator first, InputIterator last,
                                        OutputIterator result,
                                        UnaryOperation unary_op,
                                        BinaryOperation binary_op, T init);
Effects:
Assigns through each iterator i in [result,result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, unary_op(*first), ..., unary_op(*(first + (i - result)))) or GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, unary_op(*first), ..., unary_op(*(first + (i - result)))) if init is provided.
Returns:
The end of the resulting range beginning at result.
Requires:
Neither unary_op nor binary_op shall invalidate iterators or subranges, or modify elements in the ranges [first,last) or [result,result + (last - first)).
Complexity:
O(last - first) applications each of unary_op and binary_op.
Notes:
The difference between transform_exclusive_scan and transform_inclusive_scan is that transform_inclusive_scan includes the ith input element in the ith sum. If binary_op is not mathematically associative, the behavior of transform_inclusive_scan may be non-deterministic. transform_inclusive_scan does not apply unary_op to init.
8

Task Block

[parallel.task_block]
8.1

Header <experimental/task_block> synopsis

[parallel.task_block.synopsis]
namespace std {
namespace experimental {
inline namespace parallelism_v2 {
  class task_cancelled_exception;

  class task_block;

  template<class F>
    void define_task_block(F&& f);

  template<class F>
    void define_task_block_restore_thread(F&& f);
}
}
}
     
8.2

Class task_cancelled_exception

[parallel.task_block.task_cancelled_exception]
namespace std {
namespace experimental {
inline namespace parallelism_v2 {

  class task_cancelled_exception : public exception
  {
    public:
      task_cancelled_exception() noexcept;
      virtual const char* what() const noexcept override;
  };
}
}
}
     

The class task_cancelled_exception defines the type of objects thrown by task_block::run or task_block::wait if they detect that an exception is pending within the current parallel block. See 8.5, below.

8.2.1

task_cancelled_exception member function what

[parallel.task_block.task_cancelled_exception.what]
virtual const char* what() const noexcept
Returns:
An implementation-defined NTBS.
8.3

Class task_block

[parallel.task_block.class]
namespace std {
namespace experimental {
inline namespace parallelism_v2 {

  class task_block
  {
    private:
      ~task_block();

    public:
      task_block(const task_block&) = delete;
      task_block& operator=(const task_block&) = delete;
      void operator&() const = delete;

      template<class F>
        void run(F&& f);

      void wait();
  };
}
}
}
     

The class task_block defines an interface for forking and joining parallel tasks. The define_task_block and define_task_block_restore_thread function templates create an object of type task_block and pass a reference to that object to a user-provided function object.

An object of class task_block cannot be constructed, destroyed, copied, or moved except by the implementation of the task block library. Taking the address of a task_block object via operator& is ill-formed. Obtaining its address by any other means (including addressof) results in a pointer with an unspecified value; dereferencing such a pointer results in undefined behavior.

A task_block is active if it was created by the nearest enclosing task block, where “task block” refers to an invocation of define_task_block or define_task_block_restore_thread and “nearest enclosing” means the most recent invocation that has not yet completed. Code designated for execution in another thread by means other than the facilities in this section (e.g., using thread or async) are not enclosed in the task block and a task_block passed to (or captured by) such code is not active within that code. Performing any operation on a task_block that is not active results in undefined behavior.

When the argument to task_block::run is called, no task_block is active, not even the task_block on which run was called. (The function object should not, therefore, capture a task_block from the surrounding block.)

[ Example:
define_task_block([&](auto& tb) {
  tb.run([&]{
    tb.run([] { f(); });               // Error: tb is not active within run
    define_task_block([&](auto& tb2) { // Define new task block
      tb2.run(f);
      ...
    });
  });
  ...
});
     
end example ]


[ Note: Implementations are encouraged to diagnose the above error at translation time. end note ]
8.3.1

task_block member function template run

[parallel.task_block.class.run]
template<class F> void run(F&& f);
Requires:
F shall be MoveConstructible. DECAY_COPY(std::forward<F>(f))() shall be a valid expression.
Preconditions:
*this shall be the active task_block.
Effects:
Evaluates DECAY_COPY(std::forward<F>(f))(), where DECAY_COPY(std::forward<F>(f)) is evaluated synchronously within the current thread. The call to the resulting copy of the function object is permitted to run on an unspecified thread created by the implementation in an unordered fashion relative to the sequence of operations following the call to run(f) (the continuation), or indeterminately sequenced within the same thread as the continuation. The call to run synchronizes with the call to the function object. The completion of the call to the function object synchronizes with the next invocation of wait on the same task_block or completion of the nearest enclosing task block (i.e., the define_task_block or define_task_block_restore_thread that created this task_block).
Throws:
task_cancelled_exception, as described in 8.5.
Remarks:
The run function may return on a thread other than the one on which it was called; in such cases, completion of the call to run synchronizes with the continuation. [ Note: The return from run is ordered similarly to an ordinary function call in a single thread. end note ]
Remarks:
The invocation of the user-supplied function object f may be immediate or may be delayed until compute resources are available. run might or might not return before the invocation of f completes.
8.3.2

task_block member function wait

[parallel.task_block.class.wait]
void wait();
Preconditions:
*this shall be the active task_block.
Effects:
Blocks until the tasks spawned using this task_block have completed.
Throws:
task_cancelled_exception, as described in 8.5.
Postconditions:
All tasks spawned by the nearest enclosing task block have completed.
Remarks:
The wait function may return on a thread other than the one on which it was called; in such cases, completion of the call to wait synchronizes with subsequent operations. [ Note: The return from wait is ordered similarly to an ordinary function call in a single thread. end note ] [ Example:
define_task_block([&](auto& tb) {
  tb.run([&]{ process(a, w, x); }); // Process a[w] through a[x]
  if (y < x) tb.wait();             // Wait if overlap between [w,x) and [y,z)
  process(a, y, z);                 // Process a[y] through a[z]
});
end example ]
8.4

Function template define_task_block

[parallel.task_block.define_task_block]
template<class F>
  void define_task_block(F&& f);
template<class F>
  void define_task_block_restore_thread(F&& f);
Requires:
Given an lvalue tb of type task_block, the expression f(tb) shall be well-formed.
Effects:
Constructs a task_block tb and calls f(tb).
Throws:
exception_list, as specified in 8.5.
Postconditions:
All tasks spawned from f have finished execution.
Remarks:
The define_task_block function may return on a thread other than the one on which it was called unless there are no task blocks active on entry to define_task_block (see 8.3), in which case the function returns on the original thread. When define_task_block returns on a different thread, it synchronizes with operations following the call. [ Note: The return from define_task_block is ordered similarly to an ordinary function call in a single thread. end note ] The define_task_block_restore_thread function always returns on the same thread as the one on which it was called.
Notes:
It is expected (but not mandated) that f will (directly or indirectly) call tb.run(function-object).
8.5

Exception Handling

[parallel.task_block.exceptions]

Every task_block has an associated exception list. When the task block starts, its associated exception list is empty.

When an exception is thrown from the user-provided function object passed to define_task_block or define_task_block_restore_thread, it is added to the exception list for that task block. Similarly, when an exception is thrown from the user-provided function object passed into task_block::run, the exception object is added to the exception list associated with the nearest enclosing task block. In both cases, an implementation may discard any pending tasks that have not yet been invoked. Tasks that are already in progress are not interrupted except at a call to task_block::run or task_block::wait as described below.

If the implementation is able to detect that an exception has been thrown by another task within the same nearest enclosing task block, then task_block::run or task_block::wait may throw task_cancelled_exception; these instances of task_cancelled_exception are not added to the exception list of the corresponding task block.

When a task block finishes with a non-empty exception list, the exceptions are aggregated into an exception_list object, which is then thrown from the task block.

The order of the exceptions in the exception_list object is unspecified.