1. Introduction
This paper proposes a self-contained design for a Standard C++ framework for managing asynchronous execution on generic execution resources. It is based on the ideas in A Unified Executors Proposal for C++ and its companion papers.
1.1. Motivation
Today, C++ software is increasingly asynchronous and parallel, a trend that is likely to only continue going forward. Asynchrony and parallelism appear everywhere, from processor hardware interfaces, to networking, to file I/O, to GUIs, to accelerators. Every C++ domain and every platform needs to deal with asynchrony and parallelism, from scientific computing to video games to financial services, from the smallest mobile devices to your laptop to GPUs in the world’s fastest supercomputer.
While the C++ Standard Library has a rich set of concurrency primitives (std::jthread, std::mutex, std::condition_variable, etc) and lower level building blocks (std::atomic, etc), we lack a Standard vocabulary and framework for asynchrony and parallelism that C++ programmers desperately need. std::async/std::future/std::promise, C++11’s intended exposure for asynchrony, is inefficient, hard to use correctly, and severely lacking in genericity, making it unusable in many contexts.
We introduced parallel algorithms to the C++ Standard Library in C++17, and while they are an excellent start, they are all inherently synchronous and not composable.
This paper proposes a Standard C++ model for asynchrony, based around three key abstractions: schedulers, senders, and receivers, and a set of customizable asynchronous algorithms.
1.2. Priorities
- Be composable and generic, allowing users to write code that can be used with many different types of execution resources.
- Encapsulate common asynchronous patterns in customizable and reusable algorithms, so users don’t have to invent things themselves.
- Make it easy to be correct by construction.
- Support the diversity of execution resources and execution agents, because not all execution agents are created equal; some are less capable than others, but not less important.
- Allow everything to be customized by an execution resource, including transfer to other execution resources, but don’t require that execution resources customize everything.
- Care about all reasonable use cases, domains and platforms.
- Errors must be propagated, but error handling must not present a burden.
- Support cancellation, which is not an error.
- Have clear and concise answers for where things execute.
- Be able to manage and terminate the lifetimes of objects asynchronously.
1.3. Examples: End User
In this section we demonstrate the end-user experience of asynchronous programming directly with the sender algorithms presented in this paper. See § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers for short explanations of the algorithms used in these code examples.
1.3.1. Hello world
using namespace std::execution;

scheduler auto sch = thread_pool.scheduler();                        // 1

sender auto begin = schedule(sch);                                   // 2
sender auto hi = then(begin, [] {                                    // 3
  std::cout << "Hello world! Have an int.";                          // 3
  return 13;                                                         // 3
});                                                                  // 3
sender auto add_42 = then(hi, [](int arg) { return arg + 42; });     // 4

auto [i] = this_thread::sync_wait(add_42).value();                   // 5
This example demonstrates the basics of schedulers, senders, and receivers:
- First we need to get a scheduler from somewhere, such as a thread pool. A scheduler is a lightweight handle to an execution resource.
- To start a chain of work on a scheduler, we call § 4.20.1 execution::schedule, which returns a sender that completes on the scheduler. A sender describes asynchronous work and sends a signal (value, error, or stopped) to some recipient(s) when that work completes.
- We use sender algorithms to produce senders and compose asynchronous work. § 4.21.2 execution::then is a sender adaptor that takes an input sender and a std::invocable, and calls the std::invocable on the signal sent by the input sender. The sender returned by then sends the result of that invocation. In this case, the input sender came from schedule, so it’s void, meaning it won’t send us a value, so our std::invocable takes no parameters. But we return an int, which will be sent to the next recipient.
- Now, we add another operation to the chain, again using § 4.21.2 execution::then. This time, we get sent a value - the int from the previous step. We add 42 to it, and then return the result.
- Finally, we’re ready to submit the entire asynchronous pipeline and wait for its completion. Everything up until this point has been completely asynchronous; the work may not have even started yet. To ensure the work has started and then block pending its completion, we use § 4.22.2 this_thread::sync_wait, which will either return a std::optional<std::tuple<...>> with the value sent by the last sender, or an empty std::optional if the last sender sent a stopped signal, or it throws an exception if the last sender sent an error.
1.3.2. Asynchronous inclusive scan
using namespace std::execution;

sender auto async_inclusive_scan(scheduler auto sch,                          // 2
                                 std::span<const double> input,               // 1
                                 std::span<double> output,                    // 1
                                 double init,                                 // 1
                                 std::size_t tile_count)                      // 3
{
  std::size_t const tile_size = (input.size() + tile_count - 1) / tile_count;

  std::vector<double> partials(tile_count + 1);                               // 4
  partials[0] = init;                                                         // 4

  return transfer_just(sch, std::move(partials))                              // 5
       | bulk(tile_count,                                                     // 6
           [=](std::size_t i, std::vector<double>& partials) {                // 7
             auto start = i * tile_size;                                      // 8
             auto end = std::min(input.size(), (i + 1) * tile_size);          // 8
             partials[i + 1] = *--std::inclusive_scan(begin(input) + start,   // 9
                                                      begin(input) + end,     // 9
                                                      begin(output) + start); // 9
           })                                                                 // 10
       | then(                                                                // 11
           [](std::vector<double>&& partials) {
             std::inclusive_scan(begin(partials), end(partials),              // 12
                                 begin(partials));                            // 12
             return std::move(partials);                                      // 13
           })
       | bulk(tile_count,                                                     // 14
           [=](std::size_t i, std::vector<double>& partials) {                // 14
             auto start = i * tile_size;                                      // 14
             auto end = std::min(input.size(), (i + 1) * tile_size);          // 14
             std::for_each(begin(output) + start, begin(output) + end,        // 14
               [&](double& e) { e = partials[i] + e; }                        // 14
             );
           })
       | then(                                                                // 15
           [=](std::vector<double>&& partials) {                              // 15
             return output;                                                   // 15
           });                                                                // 15
}
This example builds an asynchronous computation of an inclusive scan:
- It scans a sequence of doubles (represented as the std::span<const double> input) and stores the result in another sequence of doubles (represented as std::span<double> output).
- It takes a scheduler, which specifies what execution resource the scan should be launched on.
- It also takes a tile_count parameter that controls the number of execution agents that will be spawned.
- First we need to allocate temporary storage needed for the algorithm, which we’ll do with a std::vector, partials. We need one double of temporary storage for each execution agent we create.
- Next we’ll create our initial sender with § 4.20.3 execution::transfer_just. This sender will send the temporary storage, which we’ve moved into the sender. The sender has a completion scheduler of sch, which means the next item in the chain will use sch.
- Senders and sender adaptors support composition via operator|, similar to C++ ranges. We’ll use operator| to attach the next piece of work, which will spawn tile_count execution agents using § 4.21.9 execution::bulk (see § 4.13 Most sender adaptors are pipeable for details).
- Each agent will call a std::invocable, passing it two arguments. The first is the agent’s index (i) in the § 4.21.9 execution::bulk operation, in this case a unique integer in [0, tile_count). The second argument is what the input sender sent - the temporary storage.
- We start by computing the start and end of the range of input and output elements that this agent is responsible for, based on our agent index.
- Then we do a sequential std::inclusive_scan over our elements. We store the scan result for our last element, which is the sum of all of our elements, in our temporary storage partials.
- After all computation in that initial § 4.21.9 execution::bulk pass has completed, every one of the spawned execution agents will have written the sum of its elements into its slot in partials.
- Now we need to scan all of the values in partials. We’ll do that with a single execution agent which will execute after the § 4.21.9 execution::bulk completes. We create that execution agent with § 4.21.2 execution::then.
- § 4.21.2 execution::then takes an input sender and a std::invocable and calls the std::invocable with the value sent by the input sender. Inside our std::invocable, we call std::inclusive_scan on partials, which the input sender will send to us.
- Then we return partials, which the next phase will need.
- Finally we do another § 4.21.9 execution::bulk of the same shape as before. In this § 4.21.9 execution::bulk, we will use the scanned values in partials to integrate the sums from other tiles into our elements, completing the inclusive scan.
- async_inclusive_scan returns a sender that sends the output std::span<double>. A consumer of the algorithm can chain additional work that uses the scan result. At the point at which async_inclusive_scan returns, the computation may not have completed. In fact, it may not have even started.
1.3.3. Asynchronous dynamically-sized read
using namespace std::execution;

sender_of<set_value_t(std::size_t)> auto async_read(                        // 1
  sender_of<set_value_t(std::span<std::byte>)> auto buffer,                 // 1
  auto handle);                                                             // 1

struct dynamic_buffer {                                                     // 3
  std::unique_ptr<std::byte[]> data;                                        // 3
  std::size_t size;                                                         // 3
};                                                                          // 3

sender_of<set_value_t(dynamic_buffer)> auto async_read_array(auto handle) { // 2
  return just(dynamic_buffer{})                                             // 4
       | let_value([handle](dynamic_buffer& buf) {                          // 5
           return just(std::as_writable_bytes(std::span(&buf.size, 1)))     // 6
                | async_read(handle)                                        // 7
                | then(                                                     // 8
                    [&buf](std::size_t bytes_read) {                        // 9
                      assert(bytes_read == sizeof(buf.size));               // 10
                      buf.data = std::make_unique<std::byte[]>(buf.size);   // 11
                      return std::span(buf.data.get(), buf.size);           // 12
                    })
                | async_read(handle)                                        // 13
                | then(
                    [&buf](std::size_t bytes_read) {
                      assert(bytes_read == buf.size);                       // 14
                      return std::move(buf);                                // 15
                    });
         });
}
This example demonstrates a common asynchronous I/O pattern - reading a payload of a dynamic size by first reading the size, then reading the number of bytes specified by the size:
- async_read is a pipeable sender adaptor. It’s a customization point object, but this is what its call signature looks like. It takes a sender parameter which must send an input buffer in the form of a std::span<std::byte>, and a handle to an I/O context. It will asynchronously read into the input buffer, up to the size of the std::span. It returns a sender which will send the number of bytes read once the read completes.
- async_read_array takes an I/O handle and reads a size from it, and then a buffer of that many bytes. It returns a sender that sends a dynamic_buffer object that owns the data that was sent.
- dynamic_buffer is an aggregate struct that contains a std::unique_ptr<std::byte[]> and a size.
- The first thing we do inside of async_read_array is create a sender that will send a new, empty dynamic_buffer object using § 4.20.2 execution::just. We can attach more work to the pipeline using operator| composition (see § 4.13 Most sender adaptors are pipeable for details).
- We need the lifetime of this dynamic_buffer object to last for the entire pipeline. So, we use let_value, which takes an input sender and a std::invocable that must return a sender itself (see § 4.21.4 execution::let_* for details). let_value sends the value from the input sender to the std::invocable. Critically, the lifetime of the sent object will last until the sender returned by the std::invocable completes.
- Inside of the let_value std::invocable, we have the rest of our logic. First, we want to initiate an async_read of the buffer size. To do that, we need to send a std::span pointing to buf.size. We can do that with § 4.20.2 execution::just.
- We chain the async_read onto the § 4.20.2 execution::just sender with operator|.
- Next, we pipe a std::invocable that will be invoked after the async_read completes using § 4.21.2 execution::then.
- That std::invocable gets sent the number of bytes read.
- We need to check that the number of bytes read is what we expected.
- Now that we have read the size of the data, we can allocate storage for it.
- We return a std::span<std::byte> to the storage for the data from the std::invocable. This will be sent to the next recipient in the pipeline.
- And that recipient will be another async_read, which will read the data.
- Once the data has been read, in another § 4.21.2 execution::then, we confirm that we read the right number of bytes.
- Finally, we move out of and return our dynamic_buffer object. It will get sent by the sender returned by async_read_array. We can attach more things to that sender to use the data in the buffer.
1.4. Asynchronous Windows socket recv
To get a better feel for how this interface might be used by low-level operations, see this example implementation of a cancellable async_recv() operation for a Windows socket.
struct operation_base : WSAOVERLAPPED {
  using completion_fn = void(operation_base* op, DWORD bytesTransferred, int errorCode) noexcept;
  // Assume IOCP event loop will call this when this OVERLAPPED structure is dequeued.
  completion_fn* completed;
};

template <typename Receiver>
struct recv_op : operation_base {
  recv_op(SOCKET s, void* data, size_t len, Receiver r)
    : receiver(std::move(r))
    , sock(s) {
    this->Internal = 0;
    this->InternalHigh = 0;
    this->Offset = 0;
    this->OffsetHigh = 0;
    this->hEvent = NULL;
    this->completed = &recv_op::on_complete;
    buffer.len = len;
    buffer.buf = static_cast<CHAR*>(data);
  }

  friend void tag_invoke(std::execution::start_t, recv_op& self) noexcept {
    // Avoid even calling WSARecv() if operation already cancelled
    auto st = std::execution::get_stop_token(std::get_env(self.receiver));
    if (st.stop_requested()) {
      std::execution::set_stopped(std::move(self.receiver));
      return;
    }

    // Store and cache result here in case it changes during execution
    const bool stopPossible = st.stop_possible();
    if (!stopPossible) {
      self.ready.store(true, std::memory_order_relaxed);
    }

    // Launch the operation
    DWORD bytesTransferred = 0;
    DWORD flags = 0;
    int result = WSARecv(self.sock, &self.buffer, 1, &bytesTransferred, &flags,
                         static_cast<WSAOVERLAPPED*>(&self), NULL);
    if (result == SOCKET_ERROR) {
      int errorCode = WSAGetLastError();
      if (errorCode != WSA_IO_PENDING) {
        if (errorCode == WSA_OPERATION_ABORTED) {
          std::execution::set_stopped(std::move(self.receiver));
        } else {
          std::execution::set_error(std::move(self.receiver),
                                    std::error_code(errorCode, std::system_category()));
        }
        return;
      }
    } else {
      // Completed synchronously (assuming FILE_SKIP_COMPLETION_PORT_ON_SUCCESS has been set)
      std::execution::set_value(std::move(self.receiver), bytesTransferred);
      return;
    }

    // If we get here then operation has launched successfully and will complete asynchronously.
    // May be completing concurrently on another thread already.
    if (stopPossible) {
      // Register the stop callback
      self.stopCallback.emplace(std::move(st), cancel_cb{self});

      // Mark as 'completed'
      if (self.ready.load(std::memory_order_acquire) ||
          self.ready.exchange(true, std::memory_order_acq_rel)) {
        // Already completed on another thread
        self.stopCallback.reset();

        BOOL ok = WSAGetOverlappedResult(self.sock, (WSAOVERLAPPED*)&self,
                                         &bytesTransferred, FALSE, &flags);
        if (ok) {
          std::execution::set_value(std::move(self.receiver), bytesTransferred);
        } else {
          int errorCode = WSAGetLastError();
          std::execution::set_error(std::move(self.receiver),
                                    std::error_code(errorCode, std::system_category()));
        }
      }
    }
  }

  struct cancel_cb {
    recv_op& op;
    void operator()() noexcept {
      CancelIoEx((HANDLE)op.sock, (OVERLAPPED*)(WSAOVERLAPPED*)&op);
    }
  };

  static void on_complete(operation_base* op, DWORD bytesTransferred, int errorCode) noexcept {
    recv_op& self = *static_cast<recv_op*>(op);
    if (self.ready.load(std::memory_order_acquire) ||
        self.ready.exchange(true, std::memory_order_acq_rel)) {
      // Unsubscribe any stop-callback so we know that CancelIoEx() is not accessing 'op'
      // any more
      self.stopCallback.reset();

      if (errorCode == 0) {
        std::execution::set_value(std::move(self.receiver), bytesTransferred);
      } else {
        std::execution::set_error(std::move(self.receiver),
                                  std::error_code(errorCode, std::system_category()));
      }
    }
  }

  Receiver receiver;
  SOCKET sock;
  WSABUF buffer;
  std::optional<typename stop_callback_type_t<Receiver>
      ::template callback_type<cancel_cb>> stopCallback;
  std::atomic<bool> ready{false};
};

struct recv_sender {
  using is_sender = void;
  SOCKET sock;
  void* data;
  size_t len;

  template <typename Receiver>
  friend recv_op<Receiver>
  tag_invoke(std::execution::connect_t, const recv_sender& s, Receiver r) {
    return recv_op<Receiver>{s.sock, s.data, s.len, std::move(r)};
  }
};

recv_sender async_recv(SOCKET s, void* data, size_t len) {
  return recv_sender{s, data, len};
}
1.4.1. More end-user examples
1.4.1.1. Sudoku solver
This example comes from Kirk Shoop, who ported an example from TBB’s documentation to sender/receiver in his fork of the libunifex repo. It is a Sudoku solver that uses a configurable number of threads to explore the search space for solutions.
The sender/receiver-based Sudoku solver can be found here. Some things that are worth noting about Kirk’s solution:
- Although it schedules asynchronous work onto a thread pool, and each unit of work will schedule more work, its use of structured concurrency patterns makes reference counting unnecessary. The solution does not make use of shared_ptr.
- In addition to eliminating the need for reference counting, the use of structured concurrency makes it easy to ensure that resources are cleaned up on all code paths. In contrast, the TBB example that inspired this one leaks memory.
For comparison, the TBB-based Sudoku solver can be found here.
1.4.1.2. File copy
This example also comes from Kirk Shoop; it uses sender/receiver to recursively copy the files of a directory tree. It demonstrates how sender/receiver can be used to do I/O, using a scheduler that schedules work on Linux’s io_uring.
As with the Sudoku example, this example obviates the need for reference counting by employing structured concurrency. It uses iteration with an upper limit to avoid having too many open file handles.
You can find the example here.
1.4.1.3. Echo server
Dietmar Kuehl has a hobby project that implements networking APIs on top of sender/receiver. He recently implemented an echo server as a demo. His echo server code can be found here.
Below is the interesting part of the echo server code. This code is executed for each client that connects to the echo server. In a loop, it reads input from a socket and echoes the input back to the same socket. All of this, including the loop, is implemented with generic async algorithms.
outstanding.start(
  EX::repeat_effect_until(
    EX::let_value(
      NN::async_read_some(ptr->d_socket,
                          context.scheduler(),
                          NN::buffer(ptr->d_buffer))
        | EX::then([ptr](::std::size_t n) {
            ::std::cout << "read='" << ::std::string_view(ptr->d_buffer, n) << "'\n";
            ptr->d_done = n == 0;
            return n;
          }),
      [&context, ptr](::std::size_t n) {
        return NN::async_write_some(ptr->d_socket,
                                    context.scheduler(),
                                    NN::buffer(ptr->d_buffer, n));
      })
      | EX::then([](auto&&...) {}),
    [owner = ::std::move(owner)] { return owner->d_done; }));
In this code, async_read_some and async_write_some are asynchronous socket-based networking APIs that return senders. repeat_effect_until, let_value, and then are fully generic sender adaptor algorithms that accept and return senders.
This is a good example of seamless composition of async I/O functions with non-I/O operations. And by composing the senders in this structured way, all the state for the composite operation -- the repeat_effect_until expression and all its child operations -- is stored altogether in a single object.
1.5. Examples: Algorithms
In this section we show a few simple sender/receiver-based algorithm implementations.
1.5.1. then
namespace exec = std::execution;

template <class R, class F>
class _then_receiver : exec::receiver_adaptor<_then_receiver<R, F>, R> {
  friend exec::receiver_adaptor<_then_receiver, R>;
  F f_;

  // Customize set_value by invoking the callable and passing the result to the inner receiver
  template <class... As>
  void set_value(As&&... as) && noexcept try {
    exec::set_value(std::move(*this).base(), std::invoke((F&&) f_, (As&&) as...));
  } catch (...) {
    exec::set_error(std::move(*this).base(), std::current_exception());
  }

public:
  _then_receiver(R r, F f)
    : exec::receiver_adaptor<_then_receiver, R>{std::move(r)}
    , f_(std::move(f)) {}
};

template <exec::sender S, class F>
struct _then_sender {
  using is_sender = void;
  S s_;
  F f_;

  template <class... Args>
  using _set_value_t =
    exec::completion_signatures<exec::set_value_t(std::invoke_result_t<F, Args...>)>;

  // Compute the completion signatures
  template <class Env>
  friend auto tag_invoke(exec::get_completion_signatures_t, _then_sender&&, Env)
    -> exec::make_completion_signatures<S, Env,
         exec::completion_signatures<exec::set_error_t(std::exception_ptr)>,
         _set_value_t>;

  // Connect:
  template <exec::receiver R>
  friend auto tag_invoke(exec::connect_t, _then_sender&& self, R r)
    -> exec::connect_result_t<S, _then_receiver<R, F>> {
    return exec::connect((S&&) self.s_, _then_receiver<R, F>{(R&&) r, (F&&) self.f_});
  }

  friend decltype(auto) tag_invoke(exec::get_env_t, const _then_sender& self) noexcept {
    return exec::get_env(self.s_);
  }
};

template <exec::sender S, class F>
exec::sender auto then(S s, F f) {
  return _then_sender<S, F>{(S&&) s, (F&&) f};
}
This code builds a then algorithm that transforms the value(s) from the input sender with a transformation function. The result of the transformation becomes the new value. The other receiver functions (set_error and set_stopped), as well as all receiver queries, are passed through unchanged.
In detail, it does the following:
- Defines a receiver in terms of execution::receiver_adaptor that aggregates another receiver and an invocable that:
  - Defines a constrained tag_invoke overload for transforming the value channel.
  - Defines another constrained overload of tag_invoke that passes all other customizations through unchanged.

  The tag_invoke overloads are actually implemented by execution::receiver_adaptor; they dispatch either to named members, as shown above with _then_receiver::set_value, or to the adapted receiver.
- Defines a sender that aggregates another sender and the invocable, which defines a tag_invoke customization for std::execution::connect that wraps the incoming receiver in the receiver from (1) and passes it and the incoming sender to std::execution::connect, returning the result. It also defines a tag_invoke customization of get_completion_signatures that declares the sender’s completion signatures when executed within a particular environment.
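To make the wrap-and-forward shape of the adaptor concrete without depending on std::execution, here is a hypothetical miniature sender/receiver model, exposition only: it uses member connect/start/set_value instead of tag_invoke, and its then adaptor transforms the value and routes exceptions to the error channel, just as _then_receiver does above:

```cpp
#include <cassert>
#include <exception>
#include <utility>

// A sender that immediately sends one value (a tiny stand-in for just()).
template <class T>
struct just_sender {
  T value;

  template <class R>
  struct op {
    T value;
    R receiver;
    void start() { std::move(receiver).set_value(std::move(value)); }
  };

  template <class R>
  op<R> connect(R r) && { return {std::move(value), std::move(r)}; }
};

// A then-adaptor: wraps the downstream receiver so that set_value applies f
// and forwards the result; exceptions are rerouted to set_error.
template <class S, class F>
struct then_sender {
  S inner;
  F f;

  template <class R>
  struct then_receiver {
    F f;
    R receiver;

    template <class V>
    void set_value(V&& v) && {
      try {
        std::move(receiver).set_value(std::move(f)(std::forward<V>(v)));
      } catch (...) {
        std::move(receiver).set_error(std::current_exception());
      }
    }
    void set_error(std::exception_ptr e) && { std::move(receiver).set_error(e); }
  };

  template <class R>
  auto connect(R r) && {
    return std::move(inner).connect(then_receiver<R>{std::move(f), std::move(r)});
  }
};

template <class S, class F>
then_sender<S, F> then(S s, F f) { return {std::move(s), std::move(f)}; }

// A minimal sync_wait analogue for int-sending senders.
template <class S>
int sync_get(S s) {
  int result = 0;
  struct recv {
    int* out;
    void set_value(int v) && { *out = v; }
    void set_error(std::exception_ptr) && { std::terminate(); }
  };
  auto op = std::move(s).connect(recv{&result});
  op.start();
  return result;
}
```

The real algorithm has the same structure; tag_invoke, receiver_adaptor, and completion signatures add the genericity and introspection this toy omits.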
1.5.2. retry
using namespace std;
namespace exec = execution;

template <class From, class To>
concept _decays_to = same_as<decay_t<From>, To>;

// _conv needed so we can emplace construct non-movable types into
// a std::optional.
template <invocable F>
  requires is_nothrow_move_constructible_v<F>
struct _conv {
  F f_;
  explicit _conv(F f) noexcept : f_((F&&) f) {}
  operator invoke_result_t<F>() && {
    return ((F&&) f_)();
  }
};

template <class S, class R>
struct _op;

// pass through all customizations except set_error, which retries the operation.
template <class S, class R>
struct _retry_receiver : exec::receiver_adaptor<_retry_receiver<S, R>> {
  _op<S, R>* o_;

  R&& base() && noexcept { return (R&&) o_->r_; }
  const R& base() const & noexcept { return o_->r_; }

  explicit _retry_receiver(_op<S, R>* o) : o_(o) {}

  void set_error(auto&&) && noexcept {
    o_->_retry(); // This causes the op to be retried
  }
};

// Hold the nested operation state in an optional so we can
// re-construct and re-start it if the operation fails.
template <class S, class R>
struct _op {
  S s_;
  R r_;
  optional<exec::connect_result_t<S&, _retry_receiver<S, R>>> o_;

  _op(S s, R r) : s_((S&&) s), r_((R&&) r), o_{_connect()} {}
  _op(_op&&) = delete;

  auto _connect() noexcept {
    return _conv{[this] {
      return exec::connect(s_, _retry_receiver<S, R>{this});
    }};
  }

  void _retry() noexcept try {
    o_.emplace(_connect()); // potentially-throwing
    exec::start(*o_);
  } catch (...) {
    exec::set_error((R&&) r_, std::current_exception());
  }

  friend void tag_invoke(exec::start_t, _op& o) noexcept {
    exec::start(*o.o_);
  }
};

template <class S>
struct _retry_sender {
  using is_sender = void;
  S s_;

  explicit _retry_sender(S s) : s_((S&&) s) {}

  template <class... Ts>
  using _value_t = exec::completion_signatures<exec::set_value_t(Ts...)>;
  template <class>
  using _error_t = exec::completion_signatures<>;

  // Declare the signatures with which this sender can complete
  template <class Env>
  friend auto tag_invoke(exec::get_completion_signatures_t, const _retry_sender&, Env)
    -> exec::make_completion_signatures<S&, Env,
         exec::completion_signatures<exec::set_error_t(std::exception_ptr)>,
         _value_t, _error_t>;

  template <exec::receiver R>
  friend _op<S, R> tag_invoke(exec::connect_t, _retry_sender&& self, R r) {
    return {(S&&) self.s_, (R&&) r};
  }

  friend decltype(auto) tag_invoke(exec::get_env_t, const _retry_sender& self) noexcept {
    return exec::get_env(self.s_);
  }
};

template <exec::sender S>
exec::sender auto retry(S s) {
  return _retry_sender{(S&&) s};
}
The retry algorithm takes a multi-shot sender and causes it to repeat on error, passing through values and stopped signals. Each time the input sender is restarted, a new receiver is connected and the resulting operation state is stored in an optional, which allows us to reinitialize it multiple times.
This example does the following:
- Defines a _conv utility that takes advantage of C++17’s guaranteed copy elision to emplace a non-movable type in a std::optional.
- Defines a _retry_receiver that holds a pointer back to the operation state. It passes all customizations through unmodified to the inner receiver owned by the operation state except for set_error, which causes a _retry() function to be called instead.
- Defines an operation state that aggregates the input sender and receiver, and declares storage for the nested operation state in an optional. Constructing the operation state constructs a _retry_receiver with a pointer to the (under construction) operation state and uses it to connect to the aggregated sender.
- Starting the operation state dispatches to start on the inner operation state.
- The _retry() function reinitializes the inner operation state by connecting the sender to a new receiver, holding a pointer back to the outer operation state as before.
- After reinitializing the inner operation state, _retry() calls start on it, causing the failed operation to be rescheduled.
- Defines a _retry_sender that implements the connect customization point to return an operation state constructed from the passed-in sender and receiver.
- _retry_sender also implements the get_completion_signatures customization point to describe the ways this sender may complete when executed in a particular environment.
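The _conv trick rests on C++17’s guaranteed copy elision: the wrapper’s conversion operator runs a factory, and the resulting prvalue initializes the destination object directly, so no move constructor is ever required. A minimal standalone sketch of the mechanism (conv, immovable, and run are hypothetical names for this illustration):

```cpp
#include <cassert>
#include <type_traits>
#include <utility>

// Wrap a factory in a type whose conversion operator runs it, so the
// factory's prvalue result can initialize a non-movable object in place.
template <class F>
struct conv {
  F f;
  explicit conv(F fn) noexcept : f(std::move(fn)) {}
  operator std::invoke_result_t<F>() && { return std::move(f)(); }
};

// A non-movable type, like a typical operation state.
struct immovable {
  int value;
  explicit immovable(int v) : value(v) {}
  immovable(immovable&&) = delete;
};

int run(int v) {
  // Initialization from conv invokes the conversion operator; the
  // resulting prvalue constructs `x` directly (guaranteed elision),
  // so the deleted move constructor is never needed.
  immovable x = conv{[v] { return immovable{2 * v}; }};
  return x.value;
}
```

The retry example uses the same mechanism with std::optional::emplace so that the non-movable nested operation state can be destroyed and rebuilt in place on each retry.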
1.6. Examples: Schedulers
In this section we look at some schedulers of varying complexity.
1.6.1. Inline scheduler
class inline_scheduler {
  template <class R>
  struct _op {
    [[no_unique_address]] R rec_;

    friend void tag_invoke(std::execution::start_t, _op& op) noexcept {
      std::execution::set_value((R&&) op.rec_);
    }
  };

  struct _env {
    template <class Tag>
    friend inline_scheduler
    tag_invoke(std::execution::get_completion_scheduler_t<Tag>, _env) noexcept {
      return {};
    }
  };

  struct _sender {
    using is_sender = void;
    using completion_signatures =
      std::execution::completion_signatures<std::execution::set_value_t()>;

    template <class R>
    friend auto tag_invoke(std::execution::connect_t, _sender, R&& rec)
      noexcept(std::is_nothrow_constructible_v<std::remove_cvref_t<R>, R>)
      -> _op<std::remove_cvref_t<R>> {
      return {(R&&) rec};
    }

    friend _env tag_invoke(std::execution::get_env_t, _sender) noexcept {
      return {};
    }
  };

  friend _sender tag_invoke(std::execution::schedule_t, const inline_scheduler&) noexcept {
    return {};
  }

public:
  inline_scheduler() = default;

  bool operator==(const inline_scheduler&) const noexcept = default;
};
The inline scheduler is a trivial scheduler that completes immediately and synchronously on the thread that calls std::execution::start on the operation state produced by its sender. In other words, start(connect(schedule(inline_scheduler{}), recv)) is just a fancy way of saying set_value(recv), with the exception of the fact that start wants to be passed an lvalue.
Although not a particularly useful scheduler, it serves to illustrate the basics of implementing one. The inline_scheduler:
- Customizes execution::schedule to return an instance of the sender type _sender.
- The _sender type models the sender concept and provides the metadata needed to describe it as a sender of no values that never calls set_error or set_stopped. This metadata is provided with the help of the execution::completion_signatures utility.
- The _sender type customizes execution::connect to accept a receiver of no values. It returns an instance of type _op that holds the receiver by value.
- The operation state customizes std::execution::start to call std::execution::set_value on the receiver.
1.6.2. Single thread scheduler
This example shows how to create a scheduler for an execution resource that consists of a single thread. It is implemented in terms of a lower-level execution resource called execution::run_loop.
class single_thread_context {
  std::execution::run_loop loop_;
  std::thread thread_;

public:
  single_thread_context()
    : loop_()
    , thread_([this] { loop_.run(); }) {}

  ~single_thread_context() {
    loop_.finish();
    thread_.join();
  }

  auto get_scheduler() noexcept {
    return loop_.get_scheduler();
  }

  std::thread::id get_thread_id() const noexcept {
    return thread_.get_id();
  }
};
The single_thread_context owns an event loop and a thread to drive it. In the destructor, it tells the event loop to finish up what it’s doing and then joins the thread, blocking for the event loop to drain.
The interesting bits are in the run_loop context implementation. It is slightly too long to include here, so we only provide a reference to it, but there is one noteworthy detail about its implementation: it uses space in its operation states to build an intrusive linked list of work items. In structured concurrency patterns, the operation states of nested operations compose statically, and in an algorithm like sync_wait, the composite operation state lives on the stack for the duration of the operation. The end result is that work can be scheduled onto this thread with zero allocations.
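The intrusive-list technique described above can be sketched in plain C++. This is an illustrative model, not the run_loop specification: each operation state derives from a node type, so enqueueing links caller-owned storage instead of allocating:

```cpp
#include <cassert>

// Each operation state is itself a node in an intrusive singly-linked
// work queue, so enqueueing work never allocates. Names are
// illustrative, not the proposal's run_loop API.
struct task_base {
  task_base* next = nullptr;
  void (*execute)(task_base*) = nullptr;
};

struct work_queue {
  task_base* head = nullptr;
  task_base* tail = nullptr;

  // push() links the caller-owned operation state into the queue.
  void push(task_base* t) {
    if (tail) tail->next = t; else head = t;
    tail = t;
  }

  // drain() runs every queued item; the items themselves live in their
  // operation states (here: in the caller's storage), not on the heap.
  void drain() {
    while (head) {
      task_base* t = head;
      head = t->next;
      if (!head) tail = nullptr;
      t->execute(t);
    }
  }
};
```

Because the queue stores only pointers into operation states that the caller already keeps alive, scheduling work costs two pointer writes and no allocation.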
1.7. Examples: Server theme
In this section we look at some examples of how one would use senders to implement an HTTP server. The examples ignore the low-level details of the HTTP server and look at how senders can be combined to achieve the goals of the project.
General application context:
-
server application that processes images
-
execution resources:
-
1 dedicated thread for network I/O
-
N worker threads used for CPU-intensive work
-
M threads for auxiliary I/O
-
optional GPU context that may be used on some types of servers
-
all parts of the application can be asynchronous
-
no locks shall be used in user code
1.7.1. Composability with execution::let_*
Example context:
-
we are looking at the flow of processing an HTTP request and sending back the response
-
show how one can break the (slightly complex) flow into steps with execution::let_* functions
-
different phases of processing HTTP requests are broken down into separate concerns
-
each part of the processing might use different execution resources (details not shown in this example)
-
error handling is generic, regardless of which component fails; we always send the right response to the client
Goals:
-
show how one can break more complex flows into steps with let_* functions
-
exemplify the use of the let_value, let_error, let_stopped, and just algorithms
namespace ex = std::execution;

// Returns a sender that yields an http_request object for an incoming request
ex::sender auto schedule_request_start(read_requests_ctx ctx) {...}
// Sends a response back to the client; yields a void signal on success
ex::sender auto send_response(const http_response& resp) {...}
// Validate that the HTTP request is well-formed; forwards the request on success
ex::sender auto validate_request(const http_request& req) {...}

// Handle the request; main application logic
ex::sender auto handle_request(const http_request& req) {
  //...
  return ex::just(http_response{200, result_body});
}

// Transforms server errors into responses to be sent to the client
ex::sender auto error_to_response(std::exception_ptr err) {
  try {
    std::rethrow_exception(err);
  } catch (const std::invalid_argument& e) {
    return ex::just(http_response{404, e.what()});
  } catch (const std::exception& e) {
    return ex::just(http_response{500, e.what()});
  } catch (...) {
    return ex::just(http_response{500, "Unknown server error"});
  }
}

// Transforms cancellation of the server into responses to be sent to the client
ex::sender auto stopped_to_response() {
  return ex::just(http_response{503, "Service temporarily unavailable"});
}
//...
// The whole flow for transforming incoming requests into responses
ex::sender auto snd =
    // get a sender when a new request comes
    schedule_request_start(the_read_requests_ctx)
    // make sure the request is valid; throw if not
    | ex::let_value(validate_request)
    // process the request in a function that may be using a different execution resource
    | ex::let_value(handle_request)
    // If there are errors transform them into proper responses
    | ex::let_error(error_to_response)
    // If the flow is cancelled, send back a proper response
    | ex::let_stopped(stopped_to_response)
    // write the result back to the client
    | ex::let_value(send_response)
    // done
    ;
// execute the whole flow asynchronously
ex::start_detached(std::move(snd));
The example shows how one can separate out the concerns of interpreting requests, validating requests, running the main logic for handling the request, generating error responses, handling cancellation, and sending the response back to the client.
They are all different phases in the application, and can be joined together with the let_* functions.
All our functions return sender objects, so that they can all generate success, failure, and cancellation paths.
For example, regardless of where an error is generated (reading the request, validating the request, or handling the response), we have one common block to handle the error, and following the error flow is easy.
Also, because sender objects are used at every step, any of these steps may be completely asynchronous; the overall flow doesn’t care. Regardless of the execution resources on which the steps, or parts of the steps, are executed, the flow is still the same.
1.7.2. Moving between execution resources with execution::on and execution::transfer
Example context:
-
reading data from the socket before processing the request
-
reading of the data is done on the I/O context
-
no processing of the data needs to be done on the I/O context
Goals:
-
show how one can change the execution resource
-
exemplify the use of the on and transfer algorithms
namespace ex = std::execution;

size_t legacy_read_from_socket(int sock, char* buffer, size_t buffer_len) {}
void process_read_data(const char* read_data, size_t read_len) {}
//...

// A sender that just calls the legacy read function
auto snd_read = ex::just(sock, buf, buf_len) | ex::then(legacy_read_from_socket);
// The entire flow
auto snd =
    // start by reading data on the I/O thread
    ex::on(io_sched, std::move(snd_read))
    // do the processing on the worker threads pool
    | ex::transfer(work_sched)
    // process the incoming data (on worker threads)
    | ex::then([buf](int read_len) { process_read_data(buf, read_len); })
    // done
    ;
// execute the whole flow asynchronously
ex::start_detached(std::move(snd));
The example assumes that we need to wrap some legacy code that reads from sockets, and handle the execution resource switching. (This style of reading from a socket may not be the most efficient one, but it works for our purposes.) For performance reasons, the reading from the socket needs to be done on the I/O thread, and all the processing needs to happen on a work-specific execution resource (i.e., a thread pool).
Calling ex::on will ensure that the given sender will be started on the given scheduler.
In our example, snd_read is going to be started on the I/O scheduler. This sender will just call the legacy code. The completion-signal will be issued in the I/O execution resource, so we have to move it to the work thread pool. This is achieved with the help of the transfer algorithm. The rest of the processing (in our case, the last call to then) will happen in the work thread pool.
The reader should notice the difference between on and transfer. The on algorithm ensures that the given sender starts in the specified context, and doesn’t care where the completion-signal for that sender is sent. The transfer algorithm does not care where the given sender is started, but ensures that its completion-signal is transferred to the given context.
1.8. What this proposal is not
This paper is not a patch on top of A Unified Executors Proposal for C++; we are not asking to update the existing paper, we are asking to retire it in favor of this paper, which is already self-contained; any example code within this paper can be written in Standard C++, without the need to standardize any further facilities.
This paper is not an alternative design to A Unified Executors Proposal for C++; rather, we have taken the design in the current executors paper, and applied targeted fixes to allow it to fulfill the promises of the sender/receiver model, as well as provide all the facilities we consider essential when writing user code using standard execution concepts; we have also applied the guidance of removing one-way executors from the paper entirely, and instead provided an algorithm based around senders that serves the same purpose.
1.9. Design changes from P0443
-
The executor concept has been removed and all of its proposed functionality is now based on schedulers and senders, as per SG1 direction.
-
Properties are not included in this paper. We see them as a possible future extension, if the committee gets more comfortable with them.
-
Senders now advertise what scheduler, if any, their evaluation will complete on.
-
The places of execution of user code in P0443 weren’t precisely defined, whereas they are in this paper. See § 4.5 Senders can propagate completion schedulers.
-
P0443 did not propose a suite of sender algorithms necessary for writing sender code; this paper does. See § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers.
-
P0443 did not specify the semantics of variously qualified connect overloads; this paper does. See § 4.7 Senders can be either multi-shot or single-shot.
-
This paper extends the sender traits/typed sender design to support typed senders whose value/error types depend on type information provided late via the receiver.
-
Support for untyped senders is dropped; the typed_sender concept is renamed sender; sender_traits is replaced with completion_signatures_of_t.
-
Specific type erasure facilities are omitted, as per LEWG direction. Type erasure facilities can be built on top of this proposal, as discussed in § 5.9 Ranges-style CPOs vs tag_invoke.
-
A specific thread pool implementation is omitted, as per LEWG direction.
-
Some additional utilities are added:
-
run_loop: An execution resource that provides a multi-producer, single-consumer, first-in-first-out work queue.
-
receiver_adaptor: A utility for algorithm authors for defining one receiver type in terms of another.
-
completion_signatures and make_completion_signatures: Utilities for describing the ways in which a sender can complete in a declarative syntax.
1.10. Prior art
This proposal builds upon and learns from years of prior art with asynchronous and parallel programming frameworks in C++. In this section, we discuss async abstractions that have previously been suggested as a possible basis for asynchronous algorithms and why they fall short.
1.10.1. Futures
A future is a handle to work that has already been scheduled for execution. It is one end of a communication channel; the other end is a promise, used to receive the result from the concurrent operation and to communicate it to the future.
Futures, as traditionally realized, require the dynamic allocation and management of a shared state, synchronization, and typically type-erasure of work and continuation. Many of these costs are inherent in the nature of "future" as a handle to work that is already scheduled for execution. These expenses rule out the future abstraction for many uses and make it a poor choice for a basis of a generic mechanism.
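The costs described here are visible even in the simplest use of the existing standard facility; the following minimal example (names are our own) routes one value through a promise/future pair, paying for a heap-allocated, synchronized shared state along the way:

```cpp
#include <cassert>
#include <future>
#include <thread>

// Minimal use of the future/promise channel the section describes: the
// promise end is fulfilled on one thread, and the future end blocks
// until the shared state (allocated and synchronized behind the
// scenes) is ready.
int add_async(int a, int b) {
  std::promise<int> p;                  // creates the shared state
  std::future<int> f = p.get_future();  // the other end of the channel
  std::thread t([&p, a, b] { p.set_value(a + b); });
  int result = f.get();                 // synchronizes with set_value
  t.join();
  return result;
}
```

Even this trivial computation cannot avoid the shared-state machinery; sender/receiver, by contrast, describes the work before it is scheduled, so composition needs none of it.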
1.10.2. Coroutines
C++20 coroutines are frequently suggested as a basis for asynchronous algorithms. It’s fair to ask why, if we added coroutines to C++, are we suggesting the addition of a library-based abstraction for asynchrony. Certainly, coroutines come with huge syntactic and semantic advantages over the alternatives.
Although coroutines are lighter weight than futures, coroutines suffer many of the same problems. Since they typically start suspended, they can avoid synchronizing the chaining of dependent work. However, in many cases, coroutine frames require an unavoidable dynamic allocation and indirect function calls. This is done to hide the layout of the coroutine frame from the C++ type system, which in turn makes possible the separate compilation of coroutines and certain compiler optimizations, such as optimization of the coroutine frame size.
Those advantages come at a cost, though. Because of the dynamic allocation of coroutine frames, coroutines in embedded or heterogeneous environments, which often lack support for dynamic allocation, require great attention to detail. And the allocations and indirections tend to complicate the job of the inliner, often resulting in sub-optimal codegen.
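The frame allocation discussed above can be observed directly with a minimal, illustrative coroutine task type (not part of the proposal) whose promise counts frame allocations; because the coroutine handle escapes into the returned object, the allocation cannot be elided here:

```cpp
#include <cassert>
#include <coroutine>
#include <cstddef>
#include <new>

// Counts heap allocations of coroutine frames. A custom operator new
// on the promise type funnels the frame allocation through our counter.
inline int frame_allocs = 0;

struct task {
  struct promise_type {
    int value = 0;
    void* operator new(std::size_t n) {
      ++frame_allocs;  // the coroutine frame lands on the heap
      return ::operator new(n);
    }
    void operator delete(void* p) { ::operator delete(p); }
    task get_return_object() {
      return task{std::coroutine_handle<promise_type>::from_promise(*this)};
    }
    std::suspend_always initial_suspend() noexcept { return {}; }
    std::suspend_always final_suspend() noexcept { return {}; }
    void return_value(int v) { value = v; }
    void unhandled_exception() {}
  };

  std::coroutine_handle<promise_type> handle;
  explicit task(std::coroutine_handle<promise_type> h) : handle(h) {}
  task(task&& other) noexcept : handle(other.handle) { other.handle = {}; }
  ~task() { if (handle) handle.destroy(); }

  // Resume the lazily-started coroutine and fetch its result.
  int run() {
    handle.resume();
    return handle.promise().value;
  }
};

// The frame for this coroutine escapes via the returned task object,
// so the allocation cannot be optimized away.
task compute() { co_return 42; }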
The coroutine language feature mitigates these shortcomings somewhat with the HALO optimization (Halo: coroutine Heap Allocation eLision Optimization: the joint response), which leverages existing compiler optimizations such as allocation elision and devirtualization to inline the coroutine, completely eliminating the runtime overhead. However, HALO requires a sophisticated compiler, and a fair number of stars need to align for the optimization to kick in. In our experience, more often than not in real-world code today’s compilers are not able to inline the coroutine, resulting in allocations and indirections in the generated code.
In a suite of generic async algorithms that are expected to be callable from hot code paths, the extra allocations and indirections are a deal-breaker. It is for these reasons that we consider coroutines a poor choice for a basis of all standard async.
1.10.3. Callbacks
Callbacks are the oldest, simplest, most powerful, and most efficient mechanism for creating chains of work, but suffer problems of their own. Callbacks must propagate either errors or values. This simple requirement yields many different interface possibilities. The lack of a standard callback shape obstructs generic design.
Additionally, few of these possibilities accommodate cancellation signals when the user requests upstream work to stop and clean up.
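To illustrate the "many different interface possibilities," here are two common but mutually incompatible callback shapes sketched in plain C++; the functions are hypothetical, and generic code written against one shape cannot accept the other:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Shape 1: separate success and error continuations.
void parse_a(const std::string& s,
             std::function<void(int)> on_value,
             std::function<void(std::string)> on_error) {
  if (s.empty()) on_error("empty input");
  else on_value(static_cast<int>(s.size()));
}

// Shape 2: a single continuation taking an error code first
// (the node.js-style convention).
void parse_b(const std::string& s,
             std::function<void(int /*err*/, int /*value*/)> done) {
  if (s.empty()) done(1, 0);
  else done(0, static_cast<int>(s.size()));
}
```

Neither shape, as written, has any channel through which a caller could request cancellation, which is the other gap the section points out.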
1.11. Field experience
1.11.1. libunifex
This proposal draws heavily from our field experience with libunifex. Libunifex implements all of the concepts and customization points defined in this paper (with slight variations -- the design of P2300 has evolved due to LEWG feedback), many of this paper’s algorithms (some under different names), and much more besides.
Libunifex has several concrete schedulers in addition to the run_loop suggested here (where it is called manual_event_loop). It has schedulers that dispatch efficiently to epoll and io_uring on Linux and the Windows Thread Pool on Windows.
In addition to the proposed interfaces and the additional schedulers, it has several important extensions to the facilities described in this paper, which demonstrate directions in which these abstractions may be evolved over time, including:
-
Timed schedulers, which permit scheduling work on an execution resource at a particular time or after a particular duration has elapsed. In addition, it provides time-based algorithms.
-
File I/O schedulers, which permit filesystem I/O to be scheduled.
-
Two complementary abstractions for streams (asynchronous ranges), and a set of stream-based algorithms.
Libunifex has seen heavy production use at Facebook. As of October 2021, it is currently used in production within the following applications and platforms:
-
Facebook Messenger on iOS, Android, Windows, and macOS
-
Instagram on iOS and Android
-
Facebook on iOS and Android
-
Portal
-
An internal Facebook product that runs on Linux
All of these applications are making direct use of the sender/receiver abstraction as presented in this paper. One product (Instagram on iOS) is making use of the sender/coroutine integration as presented. The monthly active users of these products number in the billions.
1.11.2. Other implementations
The authors are aware of a number of other implementations of sender/receiver from this paper. These are presented here in perceived order of maturity and field experience.
-
HPX - The C++ Standard Library for Parallelism and Concurrency
HPX is a general purpose C++ runtime system for parallel and distributed applications that has been under active development since 2007. HPX exposes a uniform, standards-oriented API, and keeps abreast of the latest standards and proposals. It is used in a wide variety of high-performance applications.
The sender/receiver implementation in HPX has been under active development since May 2020. It is used to erase the overhead of futures and to make it possible to write efficient generic asynchronous algorithms that are agnostic to their execution resource. In HPX, algorithms can migrate execution between execution resources, even to GPUs and back, using a uniform standard interface with sender/receiver.
Far and away, the HPX team has the greatest usage experience outside Facebook. Mikael Simberg summarizes the experience as follows:
Summarizing, for us the major benefits of sender/receiver compared to the old model are:
-
Proper hooks for transitioning between execution resources.
-
The adaptors. Things like let_value are really nice additions.
-
Separation of the error channel from the value channel (also cancellation, but we don’t have much use for it at the moment). Even from a teaching perspective, having to explain that the future f2 in the continuation f1.then([](future<T> f2) {...}) will always be ready here is enough of a reason to separate the channels. All the other obvious reasons apply as well of course.
-
For futures we have a thing called hpx::dataflow which is an optimized version of when_all(...).then(...) which avoids intermediate allocations. With the sender/receiver when_all(...) | then(...) we get that "for free".
-
kuhllib by Dietmar Kuehl
This is a prototype Standard Template Library with an implementation of sender/receiver that has been under development since May 2021. It is significant mostly for its support for sender/receiver-based networking interfaces.
Here, Dietmar Kuehl speaks about the perceived complexity of sender/receiver:
... and, also similar to STL: as I had tried to do things in that space before I recognize sender/receivers as being maybe complicated in one way but a huge simplification in another one: like with STL I think those who use it will benefit - if not from the algorithm from the clarity of abstraction: the separation of concerns of STL (the algorithm being detached from the details of the sequence representation) is a major leap. Here it is rather similar: the separation of the asynchronous algorithm from the details of execution. Sure, there is some glue to tie things back together but each of them is simpler than the combined result.
Elsewhere, he said:
... to me it feels like sender/receivers are like iterators when STL emerged: they are different from what everybody did in that space. However, everything people are already doing in that space isn’t right.
Kuehl also has experience teaching sender/receiver at Bloomberg. About that experience he says:
When I asked [my students] specifically about how complex they consider the sender/receiver stuff the feedback was quite unanimous that the sender/receiver parts aren’t trivial but not what contributes to the complexity.
-
The reference implementation (https://github.com/NVIDIA/stdexec): This is a complete implementation written from the specification in this paper. Its primary purpose is to help find specification bugs and to harden the wording of the proposal. It is fit for broad use and for contribution to libc++.
It is current with R7 of this paper.
-
Reference implementation for the Microsoft STL by Michael Schellenberger Costa
This is another reference implementation of this proposal, this time in a fork of the Microsoft STL implementation. Michael Schellenberger Costa is not affiliated with Microsoft. He intends to contribute this implementation upstream when it is complete.
1.11.3. Inspirations
This proposal also draws heavily from our experience with Thrust and Agency. It is also inspired by the needs of countless other C++ frameworks for asynchrony, parallelism, and concurrency.
2. Revision history
2.1. R7
The changes since R6 are as follows:
Fixes:
-
Make it valid to pass non-variadic templates to the exposition-only alias template gather-signatures, fixing the definitions of value_types_of_t, error_types_of_t, and the exposition-only alias template sync-wait-type.
-
Removed the query forwarding from receiver_adaptor that was inadvertently left over from a previous edit.
-
When adapting a sender to an awaitable with as_awaitable, the sender’s value result datum is decayed before being stored in the exposition-only variant.
-
Correctly specify the completion signatures of the schedule_from algorithm.
-
The sender_of concept no longer distinguishes between a sender of a type T and a sender of a type T&&.
-
The just and just_error sender factories now reject C-style arrays instead of silently decaying them to pointers.
Enhancements:
-
The sender and receiver concepts get explicit opt-in traits called enable_sender and enable_receiver, respectively. The traits have default implementations that look for nested is_sender and is_receiver types, respectively.
-
get_attrs is removed and get_env is used in its place.
-
The exposition-only type empty-env is made normative and is renamed empty_env.
-
get_env gets a fall-back implementation that simply returns empty_env{} if a tag_invoke overload is not found.
-
get_env is required to be insensitive to the cvref-qualification of its argument.
-
get_env, empty_env, and env_of_t are moved into the std:: namespace.
-
Add a new subsection describing the async programming model of senders in abstract terms. See § 11.3 Asynchronous operations [async.ops].
2.2. R6
The changes since R5 are as follows:
Fixes:
-
Fix typo in the specification of in_place_stop_source about the relative lifetimes of the tokens and the source that produced them.
-
get_completion_signatures tests for awaitability with a promise type similar to the one used by connect for the sake of consistency.
-
A coroutine promise type is an environment provider (that is, it implements get_env()) rather than being directly queryable. The previous draft was inconsistent about that.
Enhancements:
-
Sender queries are moved into a separate queryable "attributes" object that is accessed by passing the sender to get_attrs() (see below). The sender concept is reexpressed to require get_attrs() and separated from a new sender_in<Snd, Env> concept for checking whether a type is a sender within a particular execution environment.
-
The placeholder types no_env and dependent_completion_signatures<> are no longer needed and are dropped.
-
ensure_started and split are changed to persist the result of calling get_attrs() on the input sender.
-
Reorder constraints of the scheduler and receiver concepts to avoid constraint recursion when used in tandem with poorly-constrained, implicitly convertible types.
-
Re-express the sender_of concept to be more ergonomic and general.
-
Make the specification of the alias templates value_types_of_t and error_types_of_t, and the variable template sends_done more concise by expressing them in terms of a new exposition-only alias template gather-signatures.
2.2.1. Environments and attributes
In earlier revisions, receivers, senders, and schedulers were all directly queryable. In R4, receiver queries were moved into a separate "environment" object, obtainable from a receiver with a get_env accessor. In R6, the sender queries are given similar treatment, relocating to an "attributes" object obtainable from a sender with a get_attrs accessor. This was done to solve a number of design problems with the split and ensure_started algorithms; e.g., see NVIDIA/stdexec#466.
Schedulers, however, remain directly queryable. As lightweight handles that are required to be movable and copyable, there is little reason to want to dispose of a scheduler and yet persist the scheduler’s queries.
This revision also makes operation states directly queryable, even though there isn’t yet a use for such queries. Some early prototypes of cooperative bulk parallel sender algorithms done at NVIDIA suggest the utility of forwardable operation state queries. The authors chose to make opstates directly queryable since the opstate object is itself required to be kept alive for the duration of the asynchronous operation.
2.3. R5
The changes since R4 are as follows:
Fixes:
-
start_detached requires its argument to be a void sender (sends no values to set_value).
Enhancements:
-
Receiver concepts refactored to no longer require an error channel for exception_ptr or a stopped channel.
-
sender_of concept and connect customization point additionally require that the receiver is capable of receiving all of the sender’s possible completions.
-
get_completion_signatures is now required to return an instance of either completion_signatures or dependent_completion_signatures.
-
make_completion_signatures made more general.
-
receiver_adaptor handles get_env as it does the set_* members; that is, receiver_adaptor will look for a member named get_env() in the derived class, and if found dispatch the get_env_t tag invoke customization to it.
-
just, just_error, just_stopped, and into_variant have been respecified as customization point objects instead of functions, following LEWG guidance.
2.4. R4
The changes since R3 are as follows:
Fixes:
-
Fix specification of get_completion_scheduler on the transfer, schedule_from and transfer_when_all algorithms; the completion scheduler cannot be guaranteed for set_error.
-
The value of sends_stopped for the default sender traits of types that are generally awaitable was changed from false to true to acknowledge the fact that some coroutine types are generally awaitable and may implement the unhandled_stopped() protocol in their promise types.
-
Fix the incorrect use of inline namespaces in the <execution> header.
-
Shorten the stable names for the sections.
-
sync_wait now handles std::error_code specially by throwing a std::system_error on failure.
-
Fix how ADL isolation from class template arguments is specified so it doesn’t constrain implementations.
-
Properly expose the tag types in the <execution> header synopsis.
Enhancements:
-
Support for "dependently-typed" senders, where the completion signatures -- and thus the sender metadata -- depend on the type of the receiver connected to it. See the section dependently-typed senders below for more information.
-
Add a read(query) sender factory for issuing a query against a receiver and sending the result through the value channel. (This is a useful instance of a dependently-typed sender.)
-
Add completion_signatures utility for declaratively defining a typed sender’s metadata and a make_completion_signatures utility for adapting another sender’s completions in helpful ways.
-
Add make_completion_signatures utility for specifying a sender’s completion signatures by adapting those of another sender.
-
Drop support for untyped senders and rename typed_sender to sender.
-
set_done is renamed to set_stopped. All occurrences of "done" in identifiers replaced with "stopped".
-
Add customization points for controlling the forwarding of scheduler, sender, receiver, and environment queries through layers of adaptors; specify the behavior of the standard adaptors in terms of the new customization points.
-
Add get_delegatee_scheduler query to forward a scheduler that can be used by algorithms or by the scheduler to delegate work and forward progress.
-
Add schedule_result_t alias template.
-
More precisely specify the sender algorithms, including precisely what their completion signatures are.
-
stopped_as_error respecified as a customization point object.
-
tag_invoke respecified to improve diagnostics.
2.4.1. Dependently-typed senders
Background:
In the sender/receiver model, as with coroutines, contextual information about the current execution is most naturally propagated from the consumer to the producer. In coroutines, that means information like stop tokens, allocators, and schedulers is propagated from the calling coroutine to the callee. In sender/receiver, it means that the contextual information is associated with the receiver and is queried by the sender and/or operation state after the sender and the receiver are connect-ed.
Problem:
The implication of the above is that the sender alone does not have all the information about the async computation it will ultimately initiate; some of that information is provided late via the receiver. However, the sender_traits mechanism, by which an algorithm can introspect the value and error types the sender will propagate, only accepts a sender parameter. It does not take into consideration the type information that will come in late via the receiver. The effect of this is that some senders cannot be typed senders when they otherwise could be.
Example:
To get concrete, consider the case of the "get_scheduler" sender: when connect-ed and start-ed, it queries the receiver for its associated scheduler and passes it back to the receiver through the value channel. That sender’s "value type" is the type of the receiver’s scheduler. What then should sender_traits report for the "get_scheduler" sender’s value type? It can’t answer because it doesn’t know.
This causes knock-on problems since some important algorithms require a typed sender, such as sync_wait. To illustrate the problem, consider the following code:
namespace ex = std::execution;

ex::sender auto task =
    ex::let_value(
        ex::get_scheduler(),  // Fetches scheduler from receiver.
        [](auto current_sched) {
          // Launch some nested work on the current scheduler:
          return ex::on(current_sched, nested work ...);
        });

std::this_thread::sync_wait(std::move(task));
The code above is attempting to schedule some work onto the current scheduler’s execution resource. But let_value only returns a typed sender when the input sender is typed. As we explained above, the get_scheduler() sender is not typed, so the let_value sender is likewise not typed. Since task isn’t typed, it cannot be passed to sync_wait, which is expecting a typed sender. The above code would fail to compile.
Solution:
The solution is conceptually quite simple: extend the sender_traits mechanism to optionally accept a receiver in addition to the sender. The algorithms can use sender_traits to inspect the async operation’s completion-signals. The typed_sender concept would also need to take an optional receiver parameter. This is the simplest change, and it would solve the immediate problem.
Design:
Using the receiver type to compute the sender traits turns out to have pitfalls in practice. Many receivers make use of that type information in their implementation. It is very easy to create cycles in the type system, leading to inscrutable errors. The design pursued in R4 is to give receivers an associated environment object -- a bag of key/value pairs -- and to move the contextual information (schedulers, etc.) out of the receiver and into the environment. The sender_traits template and the typed_sender concept, rather than taking a receiver, take an environment. This is a much more robust design.
A further refinement of this design would be to separate the receiver and the environment entirely, passing them as separate arguments along with the sender to connect. This paper does not propose that change.
Impact:
This change, apart from increasing the expressive power of the sender/receiver abstraction, has the following impact:
-
Typed senders become moderately more challenging to write. (The new
andcompletion_signatures
utilities are added to ease this extra burden.)make_completion_signatures -
Sender adaptor algorithms that previously constrained their sender arguments to satisfy the typed_sender concept can no longer do so as the receiver is not available yet. This can result in type-checking that is done later, when connect is ultimately called on the resulting sender adaptor.
-
Operation states that own receivers that add to or change the environment are typically larger by one pointer. It comes with the benefit of far fewer indirections to evaluate queries.
"Has it been implemented?"
Yes, the reference implementation, which can be found at https://github.com/NVIDIA/stdexec, has implemented this design as well as some dependently-typed senders to confirm that it works.
Implementation experience
Although this change has not yet been made in libunifex, the most widely adopted sender/receiver implementation, a similar design can be found in Folly’s coroutine support library. In Folly.Coro, it is possible to await a special awaitable to obtain the current coroutine’s associated scheduler (called an executor in Folly).
For instance, the following Folly code grabs the current executor, schedules a task for execution on that executor, and starts the resulting (scheduled) task by enqueueing it for execution.
// From Facebook’s Folly open source library:
template <class T>
folly::coro::Task<void> CancellableAsyncScope::co_schedule(folly::coro::Task<T>&& task) {
  this->add(std::move(task).scheduleOn(co_await co_current_executor));
  co_return;
}
Facebook relies heavily on this pattern in its coroutine code. But as described above, this pattern doesn’t work with R3 of this proposal because of the lack of dependently-typed schedulers. The change in R4 rectifies that.
Why now?
The authors are loath to make any changes to the design, however small, at this stage of the C++23 release cycle. But we feel that, for a relatively minor design change -- adding an extra template parameter to the sender concept and to sender_traits -- the returns are large enough to justify the change. And there is no better time to make this change than as early as possible.
One might wonder why this missing feature has not been added to sender/receiver before now. The designers of sender/receiver have long been aware of the need. What was missing was a clean, robust, and simple design for the change, which we now have.
Drive-by:
We took the opportunity to make an additional drive-by change: rather than providing the sender traits via a class template for users to specialize, we changed it into a sender query: get_completion_signatures. That function’s return type is used as the sender’s traits. The authors feel this leads to a more uniform design and gives sender authors a straightforward way to make the value/error types dependent on the cv- and ref-qualification of the sender if need be.
Details:
Below are the salient parts of the new support for dependently-typed senders in R4:
- Receiver queries have been moved from the receiver into a separate environment object.
- Receivers have an associated environment. The new get_env CPO retrieves a receiver’s environment. If a receiver doesn’t implement get_env, it returns an unspecified "empty" environment -- an empty struct.
- sender_traits now takes an optional Env parameter that is used to determine the error/value types.
- The primary sender_traits template is replaced with a completion_signatures_of_t alias implemented in terms of a new get_completion_signatures CPO that dispatches with tag_invoke. get_completion_signatures takes a sender and an optional environment. A sender can customize this to specify its value/error types.
- Support for untyped senders is dropped. The typed_sender concept has been renamed to sender and now takes an optional environment.
- The environment argument to the sender concept and the get_completion_signatures CPO defaults to no_env. All environment queries fail (are ill-formed) when passed an instance of no_env.
- A type S is required to satisfy sender<S> to be considered a sender. If it doesn’t know what types it will complete with independent of an environment, it returns an instance of the placeholder traits dependent_completion_signatures.
- If a sender satisfies both sender<S> and sender<S, Env>, then the completion signatures for the two cannot be different in any way. It is possible for an implementation to enforce this statically, but not required.
- All of the algorithms and examples have been updated to work with dependently-typed senders.
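To make the receiver/environment split concrete, the following is a minimal toy sketch, not the proposed std entities: get_env here is a plain overload set rather than a tag_invoke-based CPO, and query_scheduler, inline_scheduler, and the receiver types are all illustrative names invented for this example.

```cpp
#include <cassert>
#include <type_traits>

struct empty_env {}; // stands in for the unspecified "empty" environment

struct inline_scheduler {
    friend bool operator==(inline_scheduler, inline_scheduler) { return true; }
};

// An environment object carrying one query: the current scheduler.
struct with_scheduler_env {
    inline_scheduler sched;
    inline_scheduler query_scheduler() const { return sched; }
};

// A receiver that advertises an environment...
struct my_receiver {
    with_scheduler_env env;
    with_scheduler_env get_env() const { return env; }
};

// ...and one that doesn't.
struct plain_receiver {};

// Toy get_env "CPO": use the receiver's hook if present, else empty_env.
template <class R>
auto get_env(const R& r) -> decltype(r.get_env()) { return r.get_env(); }
inline empty_env get_env(const plain_receiver&) { return {}; }
```

The point of the shape: algorithms ask queries of the environment object returned by get_env, never of the receiver itself, and receivers that don't opt in fall back to the empty environment.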
2.5. R3
The changes since R2 are as follows:
Fixes:
- Fix specification of the on algorithm to clarify lifetimes of intermediate operation states and properly scope the get_scheduler query.
- Fix a memory safety bug in the implementation of connect-awaitable.
- Fix recursive definition of the scheduler concept.
Enhancements:
- Add run_loop execution resource.
- Add receiver_adaptor utility to simplify writing receivers.
- Require a scheduler’s sender to model sender_of and provide a completion scheduler.
- Specify the cancellation scope of the when_all algorithm.
- Make as_awaitable a customization point.
- Change connect’s handling of awaitables to consider those types that are awaitable owing to customization of as_awaitable.
- Add value_types_of_t and error_types_of_t alias templates; rename stop_token_type_t to stop_token_of_t.
- Add a design rationale for the removal of the possibly eager algorithms.
- Expand the section on field experience.
2.6. R2
The changes since R1 are as follows:
- Remove the eagerly executing sender algorithms.
- Extend the execution::connect customization point and the sender_traits<> template to recognize awaitables as typed_senders.
- Add utilities as_awaitable() and with_awaitable_senders<> so a coroutine type can trivially make senders awaitable with a coroutine.
- Add a section describing the design of the sender/awaitable interactions.
- Add a section describing the design of the cancellation support in sender/receiver.
- Add a section showing examples of simple sender adaptor algorithms.
- Add a section showing examples of simple schedulers.
- Add a few more examples: a sudoku solver, a parallel recursive file copy, and an echo server.
- Refined the forward progress guarantees on the bulk algorithm.
- Add a section describing how to use a range of senders to represent async sequences.
- Add a section showing how to use senders to represent partial success.
- Add sender factories execution::just_error and execution::just_stopped.
- Add sender adaptors execution::stopped_as_optional and execution::stopped_as_error.
- Document more production uses of sender/receiver at scale.
- Various fixes of typos and bugs.
2.7. R1
The changes since R0 are as follows:
- Added a new concept, sender_of.
- Added a new scheduler query, this_thread::execute_may_block_caller.
- Added a new scheduler query, get_forward_progress_guarantee.
- Removed the unschedule adaptor.
- Various fixes of typos and bugs.
2.8. R0
Initial revision.
3. Design - introduction
The following three sections describe the entirety of the proposed design.
-
§ 3 Design - introduction describes the conventions used through the rest of the design sections, as well as an example illustrating how we envision code will be written using this proposal.
-
§ 4 Design - user side describes all the functionality from the perspective we intend for users: it describes the various concepts they will interact with, and what their programming model is.
-
§ 5 Design - implementer side describes the machinery that allows for that programming model to function, and the information contained there is necessary for people implementing senders and sender algorithms (including the standard library ones) - but is not necessary to use senders productively.
3.1. Conventions
The following conventions are used throughout the design section:
- The namespace proposed in this paper is the same as in A Unified Executors Proposal for C++: std::execution; however, for brevity, the std:: part of this name is omitted. When you see execution::foo, treat that as std::execution::foo.
- Universal references and explicit calls to std::move/std::forward are omitted in code samples and signatures for simplicity; assume universal references and perfect forwarding unless stated otherwise.
- None of the names proposed here are names that we are particularly attached to; consider the names to be reasonable placeholders that can freely be changed, should the committee want to do so.
3.2. Queries and algorithms
A query is a callable that takes some set of objects (usually one) as parameters and returns facts about those objects without modifying them. Queries are usually customization point objects, but in some cases may be functions.
An algorithm is a callable that takes some set of objects as parameters and causes those objects to do something. Algorithms are usually customization point objects, but in some cases may be functions.
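The query/algorithm distinction can be shown in miniature. The sketch below uses plain function objects, not the tag_invoke-based customization point objects the paper proposes, and size_of/clear_all are illustrative names invented here.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A query: returns facts about its argument without modifying it.
struct size_of_t {
    template <class C>
    std::size_t operator()(const C& c) const { return c.size(); }
};
inline constexpr size_of_t size_of{};

// An algorithm: causes its argument to do something.
struct clear_t {
    template <class C>
    void operator()(C& c) const { c.clear(); }
};
inline constexpr clear_t clear_all{};
```

Note that the query takes its argument by const reference (it may not mutate), while the algorithm takes a mutable reference (its whole purpose is the effect).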
4. Design - user side
4.1. Execution resources describe the place of execution
An execution resource is a resource that represents the place where execution will happen. This could be a concrete resource - like a specific thread pool object, or a GPU - or a more abstract one, like the current thread of execution. Execution resources don’t need to have a representation in code; they are simply a term describing certain properties of the execution of a function.
4.2. Schedulers represent execution resources
A scheduler is a lightweight handle that represents a strategy for
scheduling work onto an execution resource. Since execution resources don’t
necessarily manifest in C++ code, it’s not possible to program directly against
their API. A scheduler is a solution to that problem: the scheduler concept is
defined by a single sender algorithm, schedule, which returns a sender that
will complete on an execution resource determined by the scheduler. Logic that
you want to run on that context can be placed in the receiver’s
completion-signalling method.
execution::scheduler auto sch = thread_pool.scheduler();
execution::sender auto snd = execution::schedule(sch);
// snd is a sender (see below) describing the creation of a new execution agent
// on the execution resource associated with sch
Note that a particular scheduler type may provide other kinds of scheduling operations
which are supported by its associated execution resource. It is not limited to scheduling
purely using the schedule API.
Future papers will propose additional scheduler concepts that extend scheduler to add other capabilities. For example:
- A time_scheduler concept that extends scheduler to support time-based scheduling. Such a concept might provide access to schedule_after(sched, duration), schedule_at(sched, time_point) and now(sched) APIs.
- Concepts that extend scheduler to support opening, reading and writing files asynchronously.
- Concepts that extend scheduler to support connecting, sending data and receiving data over the network asynchronously.
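The time-based API shape sketched above can be illustrated with a toy, single-threaded scheduler. manual_time_scheduler, advance_to, and the integer-tick clock are all placeholders invented for this example; no real execution resource or sender machinery is involved.

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Simulated-time scheduler exposing now/schedule_at/schedule_after.
struct manual_time_scheduler {
    using time_point = long; // simulated "ticks"
    time_point current = 0;
    std::vector<std::pair<time_point, std::function<void()>>> tasks;

    time_point now() const { return current; }

    void schedule_at(time_point tp, std::function<void()> f) {
        tasks.emplace_back(tp, std::move(f));
    }
    void schedule_after(time_point delay, std::function<void()> f) {
        schedule_at(current + delay, std::move(f));
    }

    // Advance simulated time, running every task whose deadline has passed.
    void advance_to(time_point tp) {
        current = tp;
        auto pending = std::move(tasks);
        tasks.clear();
        for (auto& [due, f] : pending) {
            if (due <= current) f();
            else tasks.emplace_back(due, std::move(f));
        }
    }
};
```

In the proposed model these entry points would return senders rather than invoke callbacks directly; the callback form is used here only to keep the sketch self-contained.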
4.3. Senders describe work
A sender is an object that describes work. Senders are similar to futures in existing asynchrony designs, but unlike futures, the work that is being done to arrive at the values they will send is also directly described by the sender object itself. A sender is said to send some values if a receiver connected (see § 5.3 execution::connect) to that sender will eventually receive said values.
The primary defining sender algorithm is § 5.3 execution::connect; this function, however, is not a user-facing API; it is used to facilitate communication between senders and various sender algorithms, but end user code is not expected to invoke it directly.
The way user code is expected to interact with senders is by using sender algorithms. This paper proposes an initial set of such sender algorithms, which are described in § 4.4 Senders are composable through sender algorithms, § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers. For example, here is how a user can create a new sender on a scheduler, attach a continuation to it, and then wait for execution of the continuation to complete:
execution::scheduler auto sch = thread_pool.scheduler();
execution::sender auto snd = execution::schedule(sch);
execution::sender auto cont = execution::then(snd, []{
    std::fstream file{"result.txt"};
    file << compute_result;
});
this_thread::sync_wait(cont);
// at this point, cont has completed execution
4.4. Senders are composable through sender algorithms
Asynchronous programming often departs from traditional code structure and control flow that we are familiar with. A successful asynchronous framework must provide an intuitive story for composition of asynchronous work: expressing dependencies, passing objects, managing object lifetimes, etc.
The true power and utility of senders is in their composability. With senders, users can describe generic execution pipelines and graphs, and then run them on and across a variety of different schedulers. Senders are composed using sender algorithms:
- sender factories, algorithms that take no senders and return a sender.
- sender adaptors, algorithms that take (and potentially execution::connect) senders and return a sender.
- sender consumers, algorithms that take (and potentially execution::connect) senders and do not return a sender.
4.5. Senders can propagate completion schedulers
One of the goals of executors is to support a diverse set of execution resources, including traditional thread pools, task and fiber frameworks (like HPX and Legion), and GPUs and other accelerators (managed by runtimes such as CUDA or SYCL). On many of these systems, not all execution agents are created equal and not all functions can be run on all execution agents. Having precise control over the execution resource used for any given function call being submitted is important on such systems, and the users of standard execution facilities will expect to be able to express such requirements.
A Unified Executors Proposal for C++ was not always clear about the place of execution of any given piece of code. Precise control was present in the two-way execution API present in earlier executor designs, but it has so far been missing from the senders design. There has been a proposal (Towards C++23 executors: A proposal for an initial set of algorithms) to provide a number of sender algorithms that would enforce certain rules on the places of execution of the work described by a sender, but we have found those sender algorithms to be insufficient for achieving the best performance on all platforms that are of interest to us. The implementation strategies that we are aware of result in one of the following situations:
- trying to submit work to one execution resource (such as a CPU thread pool) from another execution resource (such as a GPU or a task framework), which assumes that all execution agents are as capable as a std::thread (which they aren’t).
- forcibly interleaving two adjacent execution graph nodes that are both executing on one execution resource (such as a GPU) with glue code that runs on another execution resource (such as a CPU), which is prohibitively expensive for some execution resources (such as CUDA or SYCL).
- having to customise most or all sender algorithms to support an execution resource, so that you can avoid the problems described in 1 and 2, which we believe is impractical and brittle based on months of field experience attempting this in Agency.
None of these implementation strategies are acceptable for many classes of parallel runtimes, such as task frameworks (like HPX) or accelerator runtimes (like CUDA or SYCL).
Therefore, in addition to the sender algorithms proposed in Towards C++23 executors: A proposal for an initial set of algorithms, we are proposing a way for senders to advertise what scheduler (and by extension what execution resource) they will complete on.
Any given sender may have completion schedulers for some or all of the signals (value, error, or stopped) it completes with (for more detail on the completion-signals, see § 5.1 Receivers serve as glue between senders).
When further work is attached to that sender by invoking sender algorithms, that work will also complete on an appropriate completion scheduler.
4.5.1. execution::get_completion_scheduler
execution::get_completion_scheduler is a query that retrieves the completion scheduler for a specific completion-signal from a sender’s environment. For a sender that lacks a completion scheduler query for a given signal, calling get_completion_scheduler is ill-formed.
If a sender advertises a completion scheduler for a signal in this way, that sender must ensure that it sends that signal on an execution agent belonging to an execution resource represented by a scheduler returned from this function.
See § 4.5 Senders can propagate completion schedulers for more details.
execution::scheduler auto cpu_sched = new_thread_scheduler{};
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto snd0 = execution::schedule(cpu_sched);
execution::scheduler auto completion_sch0 =
    execution::get_completion_scheduler<execution::set_value_t>(get_env(snd0));
// completion_sch0 is equivalent to cpu_sched

execution::sender auto snd1 = execution::then(snd0, []{
    std::cout << "I am running on cpu_sched!\n";
});
execution::scheduler auto completion_sch1 =
    execution::get_completion_scheduler<execution::set_value_t>(get_env(snd1));
// completion_sch1 is equivalent to cpu_sched

execution::sender auto snd2 = execution::transfer(snd1, gpu_sched);
execution::sender auto snd3 = execution::then(snd2, []{
    std::cout << "I am running on gpu_sched!\n";
});
execution::scheduler auto completion_sch3 =
    execution::get_completion_scheduler<execution::set_value_t>(get_env(snd3));
// completion_sch3 is equivalent to gpu_sched
4.6. Execution resource transitions are explicit
A Unified Executors Proposal for C++ does not contain any mechanisms for performing an execution resource transition. The only sender algorithm that can create a sender that will move execution to a specific execution resource is execution::schedule, which does not take an input sender.
That means that there’s no way to construct sender chains that traverse different execution resources. This is necessary to fulfill the promise of senders being able to replace two-way executors, which had this capability.
We propose that, for senders advertising their completion scheduler, all execution resource transitions must be explicit; running user code anywhere but where they defined it to run must be considered a bug.
The transfer sender adaptor performs a transition from one execution resource to another:
execution::scheduler auto sch1 = ...;
execution::scheduler auto sch2 = ...;

execution::sender auto snd1 = execution::schedule(sch1);
execution::sender auto then1 = execution::then(snd1, []{
    std::cout << "I am running on sch1!\n";
});

execution::sender auto snd2 = execution::transfer(then1, sch2);
execution::sender auto then2 = execution::then(snd2, []{
    std::cout << "I am running on sch2!\n";
});

this_thread::sync_wait(then2);
4.7. Senders can be either multi-shot or single-shot
Some senders may only support launching their operation a single time, while others may be repeatable and support being launched multiple times. Executing the operation may consume resources owned by the sender.
For example, a sender may own a resource that it will transfer to the operation-state returned by a call to execution::connect so that the operation has access to this resource. In such a sender, calling connect consumes the sender, such that after the call the input sender is no longer valid. Such a sender will also typically be move-only so that it can maintain unique ownership of that resource.
A single-shot sender can only be connected to a receiver at most once. Its implementation of execution::connect only has overloads for an rvalue-qualified sender. Callers must pass the sender as an rvalue to the call to execution::connect, indicating that the call consumes the sender.
A multi-shot sender can be connected to multiple receivers and can be launched multiple times. Multi-shot senders customise execution::connect to accept an lvalue reference to the sender. Callers can indicate that they want the sender to remain valid after the call to execution::connect by passing an lvalue reference to the sender to these overloads. Multi-shot senders should also define overloads of execution::connect that accept rvalue-qualified senders, so that they can also be used in places where only a single-shot sender is required.
If the user of a sender does not require the sender to remain valid after connecting it to a receiver, then it can pass an rvalue reference to the sender to the call to execution::connect. Such usages should be able to accept either single-shot or multi-shot senders.
If the caller does wish for the sender to remain valid after the call, then it can pass an lvalue-qualified sender to the call to execution::connect. Such usages will only accept multi-shot senders.
Algorithms that accept senders will typically either decay-copy an input sender and store it somewhere for later usage (for example as a data-member of the returned sender) or will immediately call execution::connect on the input sender, such as in this_thread::sync_wait.
Some multi-use sender algorithms may require that an input sender be copy-constructible but will only call execution::connect on an rvalue of each copy, which still results in effectively executing the operation multiple times. Other multi-use sender algorithms may require that the sender is move-constructible but will invoke execution::connect on an lvalue reference to the sender.
For a sender to be usable in both multi-use scenarios, it will generally be required to be both copy-constructible and lvalue-connectable.
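The rvalue/lvalue connect distinction above can be sketched with ref-qualified member functions. These are not real senders (no receivers, no completion signals); "connect" here is a member function standing in for the execution::connect customization, and all type names are invented for the illustration.

```cpp
#include <cassert>
#include <memory>
#include <type_traits>
#include <utility>

struct op_state { int value; };

// Single-shot: owns a resource, so connect is rvalue-qualified only.
struct single_shot {
    std::unique_ptr<int> resource;
    op_state connect() && { return op_state{*resource}; }
};

// Multi-shot: connectable from an lvalue, and from an rvalue for
// compatibility with contexts that only need a single-shot sender.
struct multi_shot {
    int value;
    op_state connect() const& { return op_state{value}; }
    op_state connect() && { return op_state{value}; }
};

// Detect lvalue-connectability, mirroring how a generic algorithm might
// constrain its inputs.
template <class S, class = void>
struct lvalue_connectable : std::false_type {};
template <class S>
struct lvalue_connectable<S, std::void_t<decltype(std::declval<S&>().connect())>>
    : std::true_type {};
```

Attempting to connect a single_shot lvalue fails to compile, which is exactly the "consumes the sender" contract made visible in the type system.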
4.8. Senders are forkable
Any non-trivial program will eventually want to fork a chain of senders into independent streams of work, regardless of whether they are single-shot or multi-shot. For instance, an incoming event to a middleware system may be required to trigger events on more than one downstream system. This requires that we provide well defined mechanisms for making sure that connecting a sender multiple times is possible and correct.
The split sender adaptor facilitates connecting to a sender multiple times, regardless of whether it is single-shot or multi-shot:
auto some_algorithm(execution::sender auto&& input) {
    execution::sender auto multi_shot = split(input);
    // "multi_shot" is guaranteed to be multi-shot,
    // regardless of whether "input" was multi-shot or not

    return when_all(
        then(multi_shot, [] { std::cout << "First continuation\n"; }),
        then(multi_shot, [] { std::cout << "Second continuation\n"; })
    );
}
4.9. Senders are joinable
Just as any non-trivial program will eventually want to fork chains of senders into independent streams, it is also hard to write a program that does not eventually need join nodes, where multiple independent streams of execution are merged into a single one in an asynchronous fashion.
when_all is a sender adaptor that returns a sender that completes when the last of the input senders completes. It sends a pack of values, where the elements of said pack are the values sent by the input senders, in order. when_all returns a sender that also does not have an associated scheduler.
transfer_when_all accepts an additional scheduler argument. It returns a sender whose value completion scheduler is the scheduler provided as an argument, but otherwise behaves the same as when_all. You can think of it as a composition of transfer and when_all, but one that allows for better efficiency through customization.
4.10. Senders support cancellation
Senders are often used in scenarios where the application may be concurrently executing multiple strategies for achieving some program goal. When one of these strategies succeeds (or fails) it may not make sense to continue pursuing the other strategies as their results are no longer useful.
For example, we may want to try to simultaneously connect to multiple network servers and use whichever server responds first. Once the first server responds we no longer need to continue trying to connect to the other servers.
Ideally, in these scenarios, we would somehow be able to request that those other strategies stop executing promptly so that their resources (e.g. cpu, memory, I/O bandwidth) can be released and used for other work.
While the design of senders has support for cancelling an operation before it starts, by simply destroying the sender or the operation-state returned from execution::connect before calling execution::start, there also needs to be a standard, generic mechanism to ask for an already-started operation to complete early.
The ability to be able to cancel in-flight operations is fundamental to supporting some kinds of generic concurrency algorithms.
For example:
- a when_all(ops...) algorithm should cancel other operations as soon as one operation fails
- a first_successful(ops...) algorithm should cancel the other operations as soon as one operation completes successfully
- a generic timeout(src, duration) algorithm needs to be able to cancel the src operation after the timeout duration has elapsed.
- a stop_when(src, trigger) algorithm should cancel src if trigger completes first and cancel trigger if src completes first
The mechanism used for communicating cancellation-requests, or stop-requests, needs to have a uniform interface so that generic algorithms that compose sender-based operations, such as the ones listed above, are able to communicate these cancellation requests to senders that they don’t know anything about.
The design is intended to be composable so that cancellation of higher-level operations can propagate those cancellation requests through intermediate layers to lower-level operations that need to actually respond to the cancellation requests.
For example, we can compose the algorithms mentioned above so that child operations are cancelled when any one of the multiple cancellation conditions occurs:
sender auto composed_cancellation_example(auto query) {
    return stop_when(
        timeout(
            when_all(
                first_successful(
                    query_server_a(query),
                    query_server_b(query)),
                load_file("some_file.jpg")),
            5s),
        cancelButton.on_click());
}
In this example, if we take the operation returned by composed_cancellation_example(), this operation will receive a stop-request when any of the following happens:
- the first_successful algorithm will send a stop-request if query_server_a(query) completes successfully
- the when_all algorithm will send a stop-request if the load_file("some_file.jpg") operation completes with an error or stopped result.
- the timeout algorithm will send a stop-request if the operation does not complete within 5 seconds.
- the stop_when algorithm will send a stop-request if the user clicks on the "Cancel" button in the user-interface.
- the parent operation consuming the composed_cancellation_example() sender sends a stop-request
Note that within this code there is no explicit mention of cancellation, stop-tokens, callbacks, etc., yet the example fully supports and responds to the various cancellation sources.
The intent of the design is that the common usage of cancellation in sender/receiver-based code is primarily through use of concurrency algorithms that manage the detailed plumbing of cancellation for you. Much like algorithms that compose senders relieve the user from having to write their own receiver types, algorithms that introduce concurrency and provide higher-level cancellation semantics relieve the user from having to deal with low-level details of cancellation.
4.10.1. Cancellation design summary
The design of cancellation described in this paper is built on top of, and extends, the std::stop_token-based cancellation facilities added in C++20, first proposed in Composable cancellation for sender-based async operations.
At a high-level, the facilities proposed by this paper for supporting cancellation include:
- Add std::stoppable_token and std::stoppable_token_for concepts that generalise the interface of the std::stop_token type to allow other types with different implementation strategies.
- Add std::unstoppable_token concept for detecting whether a stoppable_token can never receive a stop-request.
- Add std::in_place_stop_token, std::in_place_stop_source and std::in_place_stop_callback<CB> types that provide a more efficient implementation of a stop-token for use in structured concurrency situations.
- Add std::never_stop_token for use in places where you never want to issue a stop-request.
- Add std::execution::get_stop_token() CPO for querying the stop-token to use for an operation from its receiver’s execution environment.
- Add std::execution::stop_token_of_t<T> for querying the type of a stop-token returned from get_stop_token().
In addition, there are requirements added to some of the algorithms to specify what their cancellation behaviour is and what the requirements of customisations of those algorithms are with respect to cancellation.
The key component that enables generic cancellation within sender-based operations is the get_stop_token CPO. This CPO takes a single parameter, which is the execution environment of the receiver passed to execution::connect, and returns a stoppable_token that the operation can use to check for stop-requests for that operation.
As the caller of execution::connect typically has control over the receiver type it passes, it is able to customise the get_env CPO for that receiver to return an execution environment that hooks the get_stop_token CPO to return a stop-token that the receiver has control over, and that it can use to communicate a stop-request to the operation once it has started.
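The plumbing described above can be sketched with a single-threaded toy. These are not the proposed std types (no thread-safety, and callbacks are registered through the source rather than through a stop_callback type); toy_stop_source, toy_stop_token, on_stop, and stoppable_env are all names invented for this illustration.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Owns the shared stop state; the receiver side holds one of these.
class toy_stop_source {
    bool requested_ = false;
    std::vector<std::function<void()>> callbacks_;
public:
    bool stop_requested() const { return requested_; }
    void register_callback(std::function<void()> cb) {
        if (requested_) cb();            // already stopped: run immediately
        else callbacks_.push_back(std::move(cb));
    }
    void request_stop() {
        if (requested_) return;
        requested_ = true;
        for (auto& cb : callbacks_) cb();
    }
};

// Lightweight view of the stop state; the operation side holds one of these.
class toy_stop_token {
    toy_stop_source* src_;
public:
    explicit toy_stop_token(toy_stop_source* s) : src_(s) {}
    bool stop_requested() const { return src_->stop_requested(); }
    void on_stop(std::function<void()> cb) { src_->register_callback(std::move(cb)); }
};

// The receiver's environment exposes the token via a get_stop_token query.
struct stoppable_env {
    toy_stop_source* source;
    toy_stop_token get_stop_token() const { return toy_stop_token{source}; }
};
```

The receiver retains the source and can issue request_stop at any time; the operation only ever sees the token it obtained from the environment.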
4.10.2. Support for cancellation is optional
Support for cancellation is optional, both on part of the author of the receiver and on part of the author of the sender.
If the receiver’s execution environment does not customise the get_stop_token CPO, then invoking the CPO on that receiver’s environment will invoke the default implementation, which returns never_stop_token. This is a special stoppable_token type that is statically known to always return false from the stop_requested() method.
Sender code that tries to use this stop-token will in general result in code that handles stop-requests being compiled out and having little to no run-time overhead.
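The compile-out effect can be shown with a sketch in the spirit of the proposed never_stop_token. toy_never_stop_token and accumulate_until_stopped are illustrative names, not proposed APIs.

```cpp
#include <cassert>

// An unstoppable token: stop_requested() is statically always false, so
// generic cancellation checks fold to constants and compile away.
struct toy_never_stop_token {
    static constexpr bool stop_possible() noexcept { return false; }
    static constexpr bool stop_requested() noexcept { return false; }
};

// A generic operation body: with toy_never_stop_token, the branch below is
// provably dead and the optimizer can remove the stop-handling entirely.
template <class StopToken>
int accumulate_until_stopped(int n, StopToken st) {
    int sum = 0;
    for (int i = 1; i <= n; ++i) {
        if (st.stop_requested()) break;
        sum += i;
    }
    return sum;
}
```

The same operation body works unchanged with a "real" stop-token type; only the token type decides whether the cancellation path exists in the generated code.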
If the sender doesn’t call get_stop_token, for example because the operation does not support cancellation, then it will simply not respond to stop-requests from the caller.
Note that stop-requests are generally racy in nature, as there is often a race between an operation completing naturally and the stop-request being made. If the operation has already completed, or is past the point at which it can be cancelled, when the stop-request is sent, then the stop-request may just be ignored. An application will typically need to be able to cope with senders that might ignore a stop-request anyway.
4.10.3. Cancellation is inherently racy
Usually, an operation will attach a stop-callback at some point inside the call to execution::start() so that a subsequent stop-request will interrupt the logic.
A stop-request can be issued concurrently from another thread. This means the implementation of execution::start() needs to be careful to ensure that, once a stop-callback has been registered, there are no data-races between a potentially concurrently-executing stop-callback and the rest of the execution::start() implementation.
An implementation of execution::start() that supports cancellation will generally need to perform (at least) two separate steps: launch the operation, and subscribe a stop-callback to the receiver’s stop-token. Care needs to be taken depending on the order in which these two steps are performed.
If the stop-callback is subscribed first and then the operation is launched, care needs to be taken to ensure that a stop-request that invokes the stop-callback on another thread, after the stop-callback is registered but before the operation finishes launching, results in neither a missed cancellation request nor a data-race -- e.g. by performing an atomic write after the launch has finished executing.
If the operation is launched first and then the stop-callback is subscribed, care needs to be taken to ensure that, if the launched operation completes concurrently on another thread, it does not destroy the operation-state until after the stop-callback has been registered -- e.g. by having the execution::start() implementation write to an atomic variable once it has finished registering the stop-callback, and having the concurrent completion handler check that variable and either call the completion-signalling operation or store the result and defer calling the receiver’s completion-signalling operation to the execution::start() call (which is still executing).
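The second ordering can be sketched as a deterministic, single-threaded simulation. In real code complete() would run on another thread; here it runs inline during the launch step to model the worst-case interleaving, and toy_operation with its members are names invented for the illustration (real code would also need carefully chosen memory orders).

```cpp
#include <atomic>
#include <cassert>
#include <functional>

struct toy_operation {
    std::atomic<bool> callback_registered{false};
    std::atomic<bool> deferred_result{false};
    std::function<void()> on_complete;

    // Invoked by the launched work, potentially while start() is still running.
    void complete() {
        if (!callback_registered.load()) {
            // start() hasn't finished registering the stop-callback yet:
            // record the result instead of signalling completion, so the
            // operation-state isn't torn down while start() still uses it.
            deferred_result.store(true);
            return;
        }
        on_complete();
    }

    void start() {
        complete();                        // 1. launch (completes "concurrently")
        /* subscribe stop-callback here */ // 2. register, then publish that fact
        callback_registered.store(true);
        if (deferred_result.exchange(false))
            on_complete();                 // 3. deliver the deferred completion
    }
};
```

The invariant being protected: on_complete is called exactly once, and never before the stop-callback registration step has finished.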
For an example of an implementation strategy for solving these data-races see § 1.4 Asynchronous Windows socket recv.
4.10.4. Cancellation design status
This paper currently includes the design for cancellation as proposed in Composable cancellation for sender-based async operations (P2175R0). P2175R0 contains more details on the background motivation, prior art, and design rationale of this design.
It is important to note, however, that initial review of this design in the SG1 concurrency subgroup raised some concerns related to runtime overhead of the design in single-threaded scenarios and these concerns are still being investigated.
The design of P2175R0 has been included in this paper for now, despite its potential to change, as we believe that support for cancellation is a fundamental requirement for an async model and is required in some form to be able to talk about the semantics of some of the algorithms proposed in this paper.
This paper will be updated in the future with any changes that arise from the investigations into P2175R0.
4.11. Sender factories and adaptors are lazy
In an earlier revision of this paper, some of the proposed algorithms supported executing their logic eagerly; i.e., before the returned sender has been connected to a receiver and started. These algorithms were removed because eager execution has a number of negative semantic and performance implications.
We originally included this functionality in the paper because of a long-standing belief that eager execution is a mandatory feature of any standard Executors facility acceptable to accelerator vendors. A particular concern was that we must be able to write generic algorithms that can run either eagerly or lazily, depending on the kind of input sender or scheduler that has been passed into them as arguments. We considered this a requirement because the latency of launching work on an accelerator can sometimes be considerable.
However, in the process of working on this paper and implementations of the features proposed within, our set of requirements has shifted as we came to better understand the different implementation strategies available for this paper's feature set. After weighing the earlier concerns against the points presented below, we have arrived at the conclusion that a purely lazy model is enough for most algorithms, and users who intend to launch work earlier may use an algorithm such as ensure_started to achieve that goal. We have also come to deeply appreciate the fact that a purely lazy model allows both the implementation and the compiler to have a much better understanding of what the complete graph of tasks looks like, allowing them to better optimize the code - also when targeting accelerators.
4.11.1. Eager execution leads to detached work or worse
One of the questions that arises with APIs that can potentially return eagerly-executing senders is "What happens when those senders are destructed without a call to execution::connect?", or similarly, "What happens if a call to execution::connect is made, but the returned operation state is destroyed before execution::start is called on that operation state?"
In these cases, the operation represented by the sender is potentially executing concurrently on another thread at the time the destructor of the sender and/or operation-state is running. If the operation has not completed executing by the time the destructor runs, we need to decide what the semantics of the destructor are.
There are three main strategies that can be adopted here, none of which is particularly satisfactory:
-
Make this undefined behaviour - the caller must ensure that any eagerly-executing sender is always joined by connecting and starting that sender. This approach is generally pretty hostile to programmers, particularly in the presence of exceptions, since it complicates the ability to compose these operations.
Eager operations typically need to acquire resources when they are first called in order to start the operation early. This makes eager algorithms prone to failure. Consider, then, what might happen in an expression such as when_all(eager_op_1(), eager_op_2()). Imagine eager_op_1() starts an asynchronous operation successfully, but then eager_op_2() throws. For lazy senders, that failure happens in the context of the when_all algorithm, which handles the failure and ensures that async work joins on all code paths. In this case though -- the eager case -- the child operation has failed even before when_all has been called. It then becomes the responsibility, not of the when_all algorithm, but of the end user to handle the exception and ensure that eager_op_1() is joined before allowing the exception to propagate. If they fail to do that, they incur undefined behavior.
-
Detach from the computation - let the operation continue in the background - like an implicit call to std::thread::detach(). While this approach can work in some circumstances for some kinds of applications, in general it is also pretty user-hostile; it makes it difficult to reason about the safe destruction of resources used by these eager operations. In general, detached work necessitates some kind of garbage collection, e.g. std::shared_ptr, to ensure resources are kept alive until the operations complete, and can make clean shutdown nigh impossible.
-
Block in the destructor until the operation completes. This approach is probably the safest to use as it preserves the structured nature of the concurrent operations, but also introduces the potential for deadlocking the application if the completion of the operation depends on the current thread making forward progress.
Deadlock might arise, for example, if a thread pool with a small number of threads is executing code that creates a sender representing an eagerly-executing operation and then calls the destructor of that sender without joining it (e.g. because an exception was thrown). If the current thread blocks waiting for that eager operation to complete, and that eager operation cannot complete until some item enqueued to the thread pool's queue of work is run, then the thread may wait for an indefinite amount of time. If all threads of the thread pool are simultaneously performing such blocking operations, deadlock results.
There are also minor variations on each of these choices. For example:
-
A variation of (1): Call std::terminate if an eager sender is destructed without joining it. This is the approach that the std::thread destructor takes.
-
A variation of (2): Request cancellation of the operation before detaching. This reduces the chances of operations continuing to run indefinitely in the background once they have been detached but does not solve the lifetime- or shutdown-related challenges.
-
A variation of (3): Request cancellation of the operation before blocking on its completion. This is the strategy that std::jthread uses for its destructor. It reduces the risk of deadlock but does not eliminate it.
4.11.2. Eager senders complicate algorithm implementations
Algorithms that can assume they are operating on senders with strictly lazy
semantics are able to make certain optimizations that are not available if
senders can be potentially eager. With lazy senders, an algorithm can safely
assume that a call to execution::start on an operation state strictly happens before the execution of that async operation. This frees the algorithm from needing to resolve potential race conditions. For example, consider an algorithm that puts async operations in sequence by starting an operation only after the preceding one has completed. In an expression like sequence(a(), b(), c()), one may reasonably assume that a(), b() and c() are sequenced and therefore do not need synchronisation. Eager algorithms break that assumption.
When an algorithm needs to deal with potentially eager senders, the potential race conditions can be resolved one of two ways, neither of which is desirable:
-
Assume the worst and implement the algorithm defensively, assuming all senders are eager. This obviously has overheads both at runtime and in algorithm complexity. Resolving race conditions is hard.
-
Require senders to declare whether they are eager or not with a query. Algorithms can then implement two different implementation strategies, one for strictly lazy senders and one for potentially eager senders. This addresses the performance problem of (1) while compounding the complexity problem.
4.11.3. Eager senders incur cancellation-related overhead
Another implication of the use of eager operations is with regards to cancellation. The eagerly executing operation will not have access to the caller’s stop token until the sender is connected to a receiver. If we still want to be able to cancel the eager operation then it will need to create a new stop source and pass its associated stop token down to child operations. Then when the returned sender is eventually connected it will register a stop callback with the receiver’s stop token that will request stop on the eager sender’s stop source.
As the eager operation does not know, at the time that it is launched, what the type of the receiver is going to be, and thus whether or not the stop token returned from execution::get_stop_token is an unstoppable token, the eager operation needs to assume that it might later be connected to a receiver whose stop token can actually issue a stop request. Thus it needs to reserve space in the operation state for a type-erased stop callback and incur the runtime overhead of supporting cancellation, even if cancellation will never be requested by the caller.
The eager operation will also need to do this to support sending a stop request to the eager operation in the case that the sender representing the eager work is destroyed before it has been joined (assuming strategy (5) or (6) listed above is chosen).
4.11.4. Eager senders cannot access execution resource from the receiver
In sender/receiver, contextual information is passed from parent operations to their children by way of receivers. Information like stop tokens, allocators, current scheduler, priority, and deadline are propagated to child operations with custom receivers at the time the operation is connected. That way, each operation has the contextual information it needs before it is started.
But if the operation is started before it is connected to a receiver, then there isn’t a way for a parent operation to communicate contextual information to its child operations, which may complete before a receiver is ever attached.
4.12. Schedulers advertise their forward progress guarantees
To decide whether a scheduler (and its associated execution resource) is sufficient for a specific task, it may be necessary to know what kind of forward progress guarantees it provides for the execution agents it creates. The C++ Standard defines the following forward progress guarantees:
-
concurrent, which requires that a thread makes progress eventually;
-
parallel, which requires that a thread makes progress once it executes a step; and
-
weakly parallel, which does not require that the thread makes progress.
This paper introduces a scheduler query function, get_forward_progress_guarantee, which returns one of the enumerators of a new enum type, forward_progress_guarantee. Each enumerator of forward_progress_guarantee corresponds to one of the aforementioned guarantees.
4.13. Most sender adaptors are pipeable
To facilitate an intuitive syntax for composition, most sender adaptors are pipeable; they can be composed (piped) together with operator|. This mechanism is similar to the operator| composition that C++ range adaptors support and draws inspiration from piping in *nix shells.
Pipeable sender adaptors take a sender as their first parameter and have no other sender parameters. An expression of the form snd | adaptor(args...) will pass the sender snd as the first argument to the pipeable sender adaptor adaptor. Pipeable sender adaptors support partial application of the parameters after the first. For example, all of the following are equivalent:
execution::bulk(snd, N, [](std::size_t i, auto d) {});
execution::bulk(N, [](std::size_t i, auto d) {})(snd);
snd | execution::bulk(N, [](std::size_t i, auto d) {});
Piping enables you to compose together senders with a linear syntax. Without it, you’d have to use either nested function call syntax, which would cause a syntactic inversion of the direction of control flow, or you’d have to introduce a temporary variable for each stage of the pipeline. Consider the following example where we want to execute first on a CPU thread pool, then on a CUDA GPU, then back on the CPU thread pool:
The three styles compared are: function call (nested), function call (named temporaries), and pipe.
Certain sender adaptors are not pipeable, because using the pipeline syntax can result in confusion of the semantics of the adaptors involved. Specifically, the following sender adaptors are not pipeable.
-
execution::when_all and execution::when_all_with_variant: Since these sender adaptors take a variadic pack of senders, a partially applied form would be ambiguous with a non-partially-applied form with an arity of one less.
-
execution::on: This sender adaptor changes how the sender passed to it is executed, not what happens to its result, but allowing it in a pipeline makes it read as if it performed a function more similar to transfer.
Sender consumers could be made pipeable, but we have chosen not to do so. Since these are terminal nodes in a pipeline and nothing can be piped after them, we believe a pipe syntax would be confusing as well as unnecessary: consumers cannot be chained, and we believe sender consumers read better with function call syntax.
4.14. A range of senders represents an async sequence of data
Senders represent a single unit of asynchronous work. In many cases though, what is being modelled is a sequence of data arriving asynchronously, and you want computation to happen on demand, as each element arrives. This requires nothing more than what is in this paper and the range support in C++20. A range of senders would allow you to model inputs such as keystrokes, mouse movements, sensor readings, or network requests.
Given some expression R that is a range of senders, consider the following in a coroutine that returns an async generator type:
for (auto snd : R) {
  if (auto opt = co_await execution::stopped_as_optional(std::move(snd)))
    co_yield fn(*std::move(opt));
  else
    break;
}
This transforms each element of the asynchronous sequence R with the function fn on demand, as the data arrives. The result is a new asynchronous sequence of the transformed values.
Now imagine that R is the simple expression std::views::iota(0) | std::views::transform(execution::just). This creates a lazy range of senders, each of which completes immediately with monotonically increasing integers. The above code churns through the range, generating a new infinite asynchronous range of values [fn(0), fn(1), fn(2), ...].
Far more interesting would be if R were a range of senders representing, say, user actions in a UI. The above code gives a simple way to respond to user actions on demand.
4.15. Senders can represent partial success
Receivers have three ways they can complete: with success, failure, or cancellation. This raises the question of how they can be used to represent async operations that partially succeed. For example, consider an API that reads from a socket. The connection could drop after the API has filled in some of the buffer. In cases like that, it makes sense to report both that the connection dropped and that some data has been successfully read.
Often in the case of partial success, the error condition is not fatal nor does it mean the API has failed to satisfy its post-conditions. It is merely an extra piece of information about the nature of the completion. In those cases, "partial success" is another way of saying "success". As a result, it is sensible to pass both the error code and the result (if any) through the value channel, as shown below:
// Capture a buffer for read_socket_async to fill in
execution::just(array<byte, 1024>{})
  | execution::let_value([socket](array<byte, 1024>& buff) {
      // read_socket_async completes with two values: an error_code and
      // a count of bytes:
      return read_socket_async(socket, span{buff})
        // For success (partial and full), specify the next action:
        | execution::let_value([](error_code err, size_t bytes_read) {
            if (err != 0) {
              // OK, partial success. Decide how to deal with the partial results
            } else {
              // OK, full success here.
            }
          });
    })
In other cases, the partial success is more of a partial failure. That happens when the error condition indicates that in some way the function failed to satisfy its post-conditions. In those cases, sending the error through the value channel loses valuable contextual information. It’s possible that bundling the error and the incomplete results into an object and passing it through the error channel makes more sense. In that way, generic algorithms will not miss the fact that a post-condition has not been met and react inappropriately.
Another possibility is for an async API to return a range of senders: if the API completes with full success, full error, or cancellation, the returned range contains just one sender with the result. Otherwise, if the API partially fails (doesn’t satisfy its post-conditions, but some incomplete result is available), the returned range would have two senders: the first containing the partial result, and the second containing the error. Such an API might be used in a coroutine as follows:
// Declare a buffer for read_socket_async to fill in
array<byte, 1024> buff;
for (auto snd : read_socket_async(socket, span{buff})) {
  try {
    if (optional<size_t> bytes_read =
          co_await execution::stopped_as_optional(std::move(snd))) {
      // OK, we read some bytes into buff. Process them here....
    } else {
      // The socket read was cancelled and returned no data. React
      // appropriately.
    }
  } catch (...) {
    // read_socket_async failed to meet its post-conditions.
    // Do some cleanup and propagate the error...
  }
}
Finally, it’s possible to combine these two approaches when the API can both partially succeed (meeting its post-conditions) and partially fail (not meeting its post-conditions).
4.16. All awaitables are senders
Since C++20 added coroutines to the standard, we expect that coroutines and awaitables will be how a great many will choose to express their asynchronous code. However, in this paper, we are proposing to add a suite of asynchronous algorithms that accept senders, not awaitables. One might wonder whether and how these algorithms will be accessible to those who choose coroutines instead of senders.
In truth there will be no problem, because all generally awaitable types automatically model the sender concept. The adaptation is transparent and happens in the sender customization points, which are aware of awaitables. (By "generally awaitable" we mean types that don't require custom await_transform trickery from a promise type to make them awaitable.)
For an example, imagine a coroutine type called task that knows nothing about senders. It doesn't implement any of the sender customization points. Despite that fact, and despite the fact that the sync_wait algorithm is constrained with the sender concept, the following would compile and do what the user wants:
task<int> doSomeAsyncWork();

int main() {
  // OK, awaitable types satisfy the requirements for senders:
  auto o = this_thread::sync_wait(doSomeAsyncWork());
}
Since awaitables are senders, writing a sender-based asynchronous algorithm is trivial if you have a coroutine task type: implement the algorithm as a coroutine. If you are not bothered by the possibility of allocations and indirections as a result of using coroutines, then there is no need to ever write a sender, a receiver, or an operation state.
4.17. Many senders can be trivially made awaitable
If you choose to implement your sender-based algorithms as coroutines, you'll run into the issue of how to retrieve results from a passed-in sender. This is not a problem. If the coroutine type opts in to sender support -- trivial with the with_awaitable_senders utility -- then a large class of senders are transparently awaitable from within the coroutine.
For example, consider the following trivial implementation of the sender-based retry algorithm:
template <class S>
  requires single-sender<S&> // see [exec.as.awaitable]
task<single-sender-value-type<S>> retry(S s) {
  for (;;) {
    try {
      co_return co_await s;
    } catch (...) {
    }
  }
}
Only some senders can be made awaitable directly, because callbacks are more expressive than coroutines. An awaitable expression has a single type: the result value of the async operation. In contrast, a callback can accept multiple arguments as the result of an operation. What's more, the callback can have overloaded function call signatures that take different sets of arguments. There is no way to automatically map such senders into awaitables. The with_awaitable_senders utility recognizes as awaitables those senders that send a single value of a single type. To await another kind of sender, a user would have to first map its value channel into a single value of a single type -- say, with the into_variant sender algorithm -- before co_await-ing that sender.
4.18. Cancellation of a sender can unwind a stack of coroutines
When looking at the sender-based retry algorithm in the previous section, we can see that the value and error cases are correctly handled. But what about cancellation? What happens to a coroutine that is suspended awaiting a sender that completes by calling execution::set_stopped?
When your task type's promise inherits from with_awaitable_senders, what happens is this: the coroutine behaves as if an uncatchable exception had been thrown from the co_await expression. (It is not really an exception, but it's helpful to think of it that way.) Provided that the promise types of the calling coroutines also inherit from with_awaitable_senders, or more generally implement a member function called unhandled_stopped, the exception unwinds the chain of coroutines as if an exception were thrown, except that it bypasses catch (...) clauses.
In order to "catch" this uncatchable stopped exception, one of the calling coroutines in the stack would have to await a sender that maps the stopped channel into either a value or an error. That is achievable with the let_stopped, upon_stopped, stopped_as_optional, or stopped_as_error sender adaptors. For instance, we can use stopped_as_optional to "catch" the stopped signal and map it into an empty optional, as shown below:
if (auto opt = co_await execution::stopped_as_optional(some_sender)) {
  // OK, some_sender completed successfully, and opt contains the result.
} else {
  // some_sender completed with a cancellation signal.
}
As described in the section "All awaitables are senders", the sender customization points recognize awaitables and adapt them transparently to model the sender concept. When connect-ing an awaitable and a receiver, the adaptation layer awaits the awaitable within a coroutine that implements unhandled_stopped in its promise type. The effect of this is that an "uncatchable" stopped exception propagates seamlessly out of awaitables, causing execution::set_stopped to be called on the receiver.
Obviously, unhandled_stopped is a library extension of the coroutine promise interface. Many promise types will not implement unhandled_stopped. When an uncatchable stopped exception tries to propagate through such a coroutine, it is treated as an unhandled exception, and std::terminate is called. The solution, as described above, is to use a sender adaptor to handle the stopped exception before awaiting it. It goes without saying that any future Standard Library coroutine types ought to implement unhandled_stopped. The author of "Add lazy coroutine (coroutine task) type", which proposes a standard coroutine task type, is in agreement.
4.19. Composition with parallel algorithms
The C++ Standard Library provides a large number of algorithms that offer the potential for non-sequential execution via the use of execution policies. The set of algorithms with execution policy overloads is often referred to as "parallel algorithms", although additional policies are available.
Existing policies, such as std::execution::par, give the implementation permission to execute the algorithm in parallel. However, the choice of execution resources used to perform the work is left to the implementation.
We will propose a customization point for combining schedulers with policies in order to provide control over where work will execute.
template <class ExecutionPolicy>
unspecified executing_on(execution::scheduler auto scheduler,
                         ExecutionPolicy&& policy);
This function would return an object of an unspecified type which can be used in place of an execution policy as the first argument to one of the parallel algorithms. The overload selected by that object should execute its computation as requested by policy while using scheduler to create any work to be run. The expression may be ill-formed if scheduler is not able to support the given policy.
The existing parallel algorithms are synchronous; all of the effects performed by the computation are complete before the algorithm returns to its caller. This remains unchanged with the executing_on customization point.
In the future, we expect additional papers will propose asynchronous forms of the parallel algorithms which (1) return senders rather than values, and (2) where a customization point pairing a sender with an execution policy would similarly be used to obtain an object of unspecified type to be provided as the first argument to the algorithm.
4.20. User-facing sender factories
A sender factory is an algorithm that takes no senders as parameters and returns a sender.
4.20.1. execution::schedule
execution::sender auto schedule(execution::scheduler auto scheduler);
Returns a sender describing the start of a task graph on the provided scheduler. See § 4.2 Schedulers represent execution resources.
execution::scheduler auto sch1 = get_system_thread_pool().scheduler();
execution::sender auto snd1 = execution::schedule(sch1);
// snd1 describes the creation of a new task on the system thread pool
4.20.2. execution::just
execution::sender auto just(auto&&... values);
Returns a sender with no completion schedulers, which sends the provided values. The input values are decay-copied into the returned sender. When the returned sender is connected to a receiver, the values are moved into the operation state if the sender is an rvalue; otherwise, they are copied. Then xvalues referencing the values in the operation state are passed to the receiver's set_value.
execution::sender auto snd1 = execution::just(3.14);
execution::sender auto then1 = execution::then(snd1, [](double d) {
  std::cout << d << "\n";
});

execution::sender auto snd2 = execution::just(3.14, 42);
execution::sender auto then2 = execution::then(snd2, [](double d, int i) {
  std::cout << d << ", " << i << "\n";
});

std::vector v3{1, 2, 3, 4, 5};
execution::sender auto snd3 = execution::just(v3);
execution::sender auto then3 = execution::then(snd3, [](std::vector<int>&& v3copy) {
  for (auto&& e : v3copy) { e *= 2; }
  return std::move(v3copy);
});
auto&& [v3copy] = this_thread::sync_wait(then3).value();
// v3 contains {1, 2, 3, 4, 5}; v3copy will contain {2, 4, 6, 8, 10}.

execution::sender auto snd4 = execution::just(std::vector{1, 2, 3, 4, 5});
execution::sender auto then4 = execution::then(std::move(snd4), [](std::vector<int>&& v4) {
  for (auto&& e : v4) { e *= 2; }
  return std::move(v4);
});
auto&& [v4] = this_thread::sync_wait(std::move(then4)).value();
// v4 contains {2, 4, 6, 8, 10}. No vectors were copied in this example.
4.20.3. execution::transfer_just
execution::sender auto transfer_just(execution::scheduler auto scheduler, auto&&... values);
Returns a sender whose value completion scheduler is the provided scheduler, which sends the provided values in the same manner as execution::just.
execution::sender auto vals = execution::transfer_just(
  get_system_thread_pool().scheduler(),
  1, 2, 3);
execution::sender auto snd = execution::then(vals, [](auto... args) {
  std::print(args...);
});
// when snd is executed, it will print "123"
This adaptor is included as it greatly simplifies lifting values into senders.
4.20.4. execution::just_error
execution::sender auto just_error(auto&& error);
Returns a sender with no completion schedulers, which completes with the specified error. If the provided error is an lvalue reference, a copy is made inside the returned sender and a non-const lvalue reference to the copy is sent to the receiver's set_error. If the provided value is an rvalue reference, it is moved into the returned sender and an rvalue reference to it is sent to the receiver's set_error.
4.20.5. execution::just_stopped
execution::sender auto just_stopped();
Returns a sender with no completion schedulers, which completes immediately by calling the receiver's set_stopped.
4.20.6. execution::read
execution::sender auto read(auto tag);

execution::sender auto get_scheduler() {
  return read(execution::get_scheduler);
}
execution::sender auto get_delegatee_scheduler() {
  return read(execution::get_delegatee_scheduler);
}
execution::sender auto get_allocator() {
  return read(execution::get_allocator);
}
execution::sender auto get_stop_token() {
  return read(execution::get_stop_token);
}
Returns a sender that reaches into a receiver's environment and pulls out the current value associated with the customization point denoted by tag. It then sends the value read back to the receiver through the value channel. For instance, get_scheduler() (with no arguments) is a sender that asks the receiver for the currently suggested scheduler and passes it to the receiver's set_value completion-signal.
This can be useful when scheduling nested dependent work. The following sender pulls the current scheduler into the value channel and then schedules more work onto it.
execution::sender auto task =
  execution::get_scheduler()
  | execution::let_value([](auto sched) {
      return execution::on(sched, some nested work here);
    });

this_thread::sync_wait(std::move(task)); // wait for it to finish
This code uses the fact that sync_wait associates a scheduler with the receiver that it connects with task. get_scheduler() reads that scheduler out of the receiver, and passes it to let_value's receiver's set_value function, which in turn passes it to the lambda. That lambda returns a new sender that uses the scheduler to schedule some nested work onto sync_wait's scheduler.
4.21. User-facing sender adaptors
A sender adaptor is an algorithm that takes one or more senders, which it may connect, as parameters, and returns a sender whose completion is related to the sender arguments it has received.
Sender adaptors are lazy, that is, they are never allowed to submit any work for execution prior to the returned sender being started later on, and are also guaranteed to not start any input senders passed into them. Sender consumers such as § 4.22.1 execution::start_detached and § 4.22.2 this_thread::sync_wait start senders.
For a more implementer-centric description of starting senders, see § 5.5 Sender adaptors are lazy.
4.21.1. execution::transfer
execution::sender auto transfer(execution::sender auto input, execution::scheduler auto scheduler);
Returns a sender describing the transition from the execution agent of the input sender to the execution agent of the target scheduler. See § 4.6 Execution resource transitions are explicit.
execution::scheduler auto cpu_sched = get_system_thread_pool().scheduler();
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto cpu_task = execution::schedule(cpu_sched);
// cpu_task describes the creation of a new task on the system thread pool

execution::sender auto gpu_task = execution::transfer(cpu_task, gpu_sched);
// gpu_task describes the transition of the task graph described by cpu_task to the gpu
4.21.2. execution::then
execution::sender auto then(execution::sender auto input, std::invocable<values-sent-by(input)...> function);
then returns a sender describing the task graph described by the input sender, with an added node of invoking the provided function with the values sent by the input sender as arguments. then is guaranteed to not begin executing function until the returned sender is started.
execution::sender auto input = get_input();
execution::sender auto snd = execution::then(input, [](auto... args) {
  std::print(args...);
});
// snd describes the work described by input
// followed by printing all of the values sent by input
This adaptor is included as it is necessary for writing any sender code that actually performs a useful function.
4.21.3. execution::upon_*
execution::sender auto upon_error(execution::sender auto input, std::invocable<errors-sent-by(input)...> function);
execution::sender auto upon_stopped(execution::sender auto input, std::invocable auto function);
upon_error and upon_stopped are similar to then, but where then works with values sent by the input sender, upon_error works with errors, and upon_stopped is invoked when the "stopped" signal is sent.
4.21.4. execution::let_*

execution::sender auto let_value(
    execution::sender auto input,
    std::invocable<values-sent-by(input)...> function
);

execution::sender auto let_error(
    execution::sender auto input,
    std::invocable<errors-sent-by(input)...> function
);

execution::sender auto let_stopped(
    execution::sender auto input,
    std::invocable auto function
);

let_value is very similar to then: when it is started, it invokes the provided function with the values sent by the input sender as arguments. However, where the sender returned from then sends exactly what that function ends up returning, let_value requires that the function return a sender, and the sender returned by let_value sends the values sent by the sender returned from the callback. This is similar to the notion of "future unwrapping" in future/promise-based frameworks.

function is guaranteed to not begin executing until the returned sender is started.

let_error and let_stopped are similar to let_value, but where let_value works with values sent by the input sender, let_error works with errors, and let_stopped is invoked when the "stopped" signal is sent.
4.21.5. execution::on

execution::sender auto on(
    execution::scheduler auto sched,
    execution::sender auto snd
);
Returns a sender which, when started, will start the provided sender on an execution agent belonging to the execution resource associated with the provided scheduler. This returned sender has no completion schedulers.
4.21.6. execution::into_variant

execution::sender auto into_variant(execution::sender auto snd);
Returns a sender which sends a variant of tuples of all the possible sets of types sent by the input sender. Senders can send multiple sets of values depending on runtime conditions; this is a helper function that turns them into a single variant value.
4.21.7. execution::stopped_as_optional

execution::sender auto stopped_as_optional(single-sender auto snd);

Returns a sender that maps the value channel from a T to an optional<decay_t<T>>, and maps the stopped channel to a value of an empty optional<decay_t<T>>.
4.21.8. execution::stopped_as_error

template<move_constructible Error>
execution::sender auto stopped_as_error(execution::sender auto snd, Error err);

Returns a sender that maps the stopped channel to an error of err.
4.21.9. execution::bulk

execution::sender auto bulk(
    execution::sender auto input,
    std::integral auto shape,
    invocable<decltype(shape), values-sent-by(input)...> function
);

Returns a sender describing the task of invoking the provided function with every index in the provided shape along with the values sent by the input sender. The returned sender completes once all invocations have completed, or an error has occurred. If it completes by sending values, they are equivalent to those sent by the input sender.

No instance of function will begin executing until the returned sender is started. Each invocation of function runs in an execution agent whose forward progress guarantees are determined by the scheduler on which they are run. All agents created by a single use of bulk execute with the same guarantee. The number of execution agents used by bulk is not specified. This allows a scheduler to execute some invocations of the function in parallel.
In this proposal, only integral types are used to specify the shape of the bulk section. We expect that future papers may wish to explore extensions of the interface to explore additional kinds of shapes, such as multi-dimensional grids, that are commonly used for parallel computing tasks.
4.21.10. execution::split

execution::sender auto split(execution::sender auto sender);
If the provided sender is a multi-shot sender, returns that sender. Otherwise, returns a multi-shot sender which sends values equivalent to the values sent by the provided sender. See § 4.7 Senders can be either multi-shot or single-shot.
4.21.11. execution::when_all

execution::sender auto when_all(execution::sender auto... inputs);
execution::sender auto when_all_with_variant(execution::sender auto... inputs);

when_all returns a sender that completes once all of the input senders have completed. It is constrained to only accept senders that can complete with a single set of values (i.e., it only calls one overload of set_value on its receiver). The values sent by this sender are the values sent by each of the input senders, in order of the arguments passed to when_all. It completes inline on the execution resource on which the last input sender completes, unless stop is requested before when_all is started, in which case it completes inline within the call to start.

when_all_with_variant does the same, but it adapts all the input senders using into_variant, and so it does not constrain the input arguments as when_all does.
The returned sender has no completion schedulers.
See § 4.9 Senders are joinable.
execution::scheduler auto sched = thread_pool.scheduler();

execution::sender auto sends_1 = ...;
execution::sender auto sends_abc = ...;

execution::sender auto both = execution::when_all(sends_1, sends_abc);

execution::sender auto final = execution::then(both, [](auto... args){
    std::cout << std::format("the two args: {}, {}", args...);
});
// when final executes, it will print "the two args: 1, abc"
4.21.12. execution::transfer_when_all

execution::sender auto transfer_when_all(
    execution::scheduler auto sched,
    execution::sender auto... inputs
);

execution::sender auto transfer_when_all_with_variant(
    execution::scheduler auto sched,
    execution::sender auto... inputs
);
Similar to § 4.21.11 execution::when_all, but returns a sender whose value completion scheduler is the provided scheduler.
See § 4.9 Senders are joinable.
4.21.13. execution::ensure_started

execution::sender auto ensure_started(execution::sender auto sender);

Once ensure_started returns, it is known that the provided sender has been connected and start has been called on the resulting operation state (see § 5.2 Operation states represent work); in other words, the work described by the provided sender has been submitted for execution on the appropriate execution resources. Returns a sender which completes when the provided sender completes and sends values equivalent to those of the provided sender.

If the returned sender is destroyed before connect is called, or if connect is called but the returned operation state is destroyed before start is called, then a stop request is sent to the eagerly launched operation and the operation is detached and will run to completion in the background. Its result will be discarded when it eventually completes.

Note that the application will need to make sure that resources are kept alive in the case that the operation detaches, e.g. by holding a std::shared_ptr to those resources or otherwise having some out-of-band way to signal completion of the operation so that resource release can be sequenced after the completion.
4.22. User-facing sender consumers
A sender consumer is an algorithm that takes one or more senders, which it may connect, as parameters, and does not return a sender.
4.22.1. execution::start_detached

void start_detached(execution::sender auto sender);

Like ensure_started, but does not return a value; if the provided sender sends an error instead of a value, std::terminate is called.
4.22.2. this_thread::sync_wait

auto sync_wait(execution::sender auto sender)
    requires (always-sends-same-values(sender))
    -> std::optional<std::tuple<values-sent-by(sender)>>;

this_thread::sync_wait is a sender consumer that submits the work described by the provided sender for execution, similarly to ensure_started, except that it blocks the current std::thread or thread of main until the work is completed, and returns an optional tuple of values that were sent by the provided sender on its completion of work. Where § 4.20.1 execution::schedule and § 4.20.3 execution::transfer_just are meant to enter the domain of senders, sync_wait is meant to exit the domain of senders, retrieving the result of the task graph.

If the provided sender sends an error instead of values, sync_wait throws that error as an exception, or rethrows the original exception if the error is of type std::exception_ptr.

If the provided sender sends the "stopped" signal instead of values, sync_wait returns an empty optional.

For an explanation of the requires clause, see § 5.8 All senders are typed. That clause also explains another sender consumer, built on top of sync_wait: sync_wait_with_variant.
Note: This function is specified inside std::this_thread, and not inside execution. This is because sync_wait has to block the current execution agent, but there is no reliable way to determine what the current execution agent is. Since the standard does not specify any functions on the current execution agent other than those in std::this_thread, this is the flavor of this function that is being proposed. If C++ ever obtains fibers, for instance, we expect that a variant of this function called std::this_fiber::sync_wait would be provided. We also expect that runtimes with execution agents that use different synchronization mechanisms than std::thread's will provide their own flavors of sync_wait as well (assuming their execution agents have the means to block in a non-deadlock manner).
4.23. execution::execute

In addition to the three categories of functions presented above, we also propose to include a convenience function for fire-and-forget eager one-way submission of an invocable to a scheduler, to fulfil the role of one-way executors from P0443.

void execution::execute(execution::scheduler auto sched, std::invocable auto fn);

Submits the provided function for execution on the provided scheduler, as-if by:

auto snd = execution::schedule(sched);
auto work = execution::then(snd, fn);
execution::start_detached(work);
5. Design - implementer side
5.1. Receivers serve as glue between senders
A receiver is a callback that supports more than one channel. In fact, it supports three of them:

-
set_value, which is the moral equivalent of an operator() or a function call, which signals successful completion of the operation its execution depends on;
-
set_error, which signals that an error has happened during scheduling of the current work, executing the current work, or at some earlier point in the sender chain; and
-
set_stopped, which signals that the operation completed without succeeding (set_value) and without failing (set_error). This result is often used to indicate that the operation stopped early, typically because it was asked to do so because the result is no longer needed.
Once an async operation has been started, exactly one of these functions must be invoked on a receiver before it is destroyed.
While the receiver interface may look novel, it is in fact very similar to the interface of std::promise, which provides the first two signals as set_value and set_exception, and it’s possible to emulate the third channel with lifetime management of the promise.
Receivers are not a part of the end-user-facing API of this proposal; they are necessary to allow unrelated senders to communicate with each other, but the only users who will interact with receivers directly are authors of senders.
Receivers are what is passed as the second argument to § 5.3 execution::connect.
5.2. Operation states represent work
An operation state is an object that represents work. Unlike senders, it is not a chaining mechanism; instead, it is a concrete object that packages the work described by a full sender chain, ready to be executed. An operation state is neither movable nor copyable, and its interface consists of a single algorithm: start, which serves as the submission point of the work represented by a given operation state.
Operation states are not a part of the user-facing API of this proposal; they are necessary for implementing sender consumers like this_thread::sync_wait and execution::start_detached, and the knowledge of them is necessary to implement senders, so the only users who will interact with operation states directly are authors of senders and authors of sender algorithms.
The return value of § 5.3 execution::connect must satisfy the operation state concept.
5.3. execution::connect

execution::connect is a customization point which connects senders with receivers, resulting in an operation state that will ensure that, if start is called, one of the completion operations will be called on the receiver passed to connect.
execution::sender auto snd = some input sender;
execution::receiver auto rcv = some receiver;
execution::operation_state auto state = execution::connect(snd, rcv);

execution::start(state);
// at this point, it is guaranteed that the work represented by state has been submitted
// to an execution resource, and that execution resource will eventually call one of the
// completion operations on rcv

// operation states are not movable, and therefore this operation state object must be
// kept alive until the operation finishes
5.4. Sender algorithms are customizable
Senders being able to advertise what their completion schedulers are fulfills one of the promises of senders: that of being able to customize an implementation of a sender algorithm based on what scheduler any work it depends on will complete on.
The simple way to provide customizations for functions like then, that is, for sender adaptors and sender consumers, is to follow the customization scheme that has been adopted for the C++20 ranges library; to do that, we would define the expression execution::then(sender, invocable) to be equivalent to:

-
sender.then(invocable), if that expression is well-formed; otherwise
-
then(sender, invocable), performed in a context where this call always performs ADL, if that expression is well-formed; otherwise
-
a default implementation of then, which returns a sender adaptor, and then define the exact semantics of said adaptor.
However, this definition is problematic. Imagine another sender adaptor, bulk, which is a structured abstraction for a loop over an index space. Its default implementation is just a for loop. However, for accelerator runtimes like CUDA, we would like sender algorithms like bulk to have specialized behavior, which invokes a kernel of more than one thread (with its size defined by the call to bulk); therefore, we would like to customize bulk for CUDA senders to achieve this. However, there’s no reason for CUDA kernels to necessarily customize the then sender adaptor, as the generic implementation is perfectly sufficient. This creates a problem, though; consider the following snippet:
execution::scheduler auto cuda_sch = cuda_scheduler{};

execution::sender auto initial = execution::schedule(cuda_sch);
// the type of initial is a type defined by the cuda_scheduler
// let’s call it cuda::schedule_sender<>

execution::sender auto next = execution::then(initial, []{ return 1; });
// the type of next is a standard-library unspecified sender adaptor
// that wraps the cuda sender
// let’s call it execution::then_sender_adaptor<cuda::schedule_sender<>>

execution::sender auto kernel_sender = execution::bulk(next, shape, [](int i){ ... });
How can we specialize the bulk sender adaptor for our wrapped schedule_sender? Well, here’s one possible approach, taking advantage of ADL (and the fact that the definition of "associated namespace" also recursively enumerates the associated namespaces of all template parameters of a type):
namespace cuda::for_adl_purposes {
template<typename... SentValues>
class schedule_sender {
    execution::operation_state auto connect(execution::receiver auto rcv);
    execution::scheduler auto get_completion_scheduler() const;
};

execution::sender auto bulk(
    execution::sender auto&& input,
    execution::shape auto&& shape,
    invocable<sender-values(input)> auto&& fn)
{
    // return a cuda sender representing a bulk kernel launch
}
} // namespace cuda::for_adl_purposes
However, if the input sender is not just a schedule_sender like in the example above, but another sender that overrides bulk by itself, as a member function, because its author believes they know an optimization for bulk, then the specialization above will no longer be selected, because a member function of the first argument is a better match than the ADL-found overload.
This means that well-meant specialization of sender algorithms that are entirely scheduler-agnostic can have negative consequences. The scheduler-specific specialization - which is essential for good performance on platforms providing specialized ways to launch certain sender algorithms - would not be selected in such cases. But it’s really the scheduler that should control the behavior of sender algorithms when a non-default implementation exists, not the sender. Senders merely describe work; schedulers, however, are the handle to the runtime that will eventually execute said work, and should thus have the final say in how the work is going to be executed.
Therefore, we are proposing the following customization scheme (also modified to take § 5.9 Ranges-style CPOs vs tag_invoke into account): the expression execution::<sender-algorithm>(sender, args...), for any given sender algorithm that accepts a sender as its first argument, should be equivalent to:

-
tag_invoke(<sender-algorithm>, get_completion_scheduler<Tag>(get_env(sender)), sender, args...), if that expression is well-formed; otherwise
-
tag_invoke(<sender-algorithm>, sender, args...), if that expression is well-formed; otherwise
-
a default implementation, if there exists a default implementation of the given sender algorithm.

where Tag is one of set_value_t, set_error_t, or set_stopped_t. For most sender algorithms, the completion scheduler for set_value_t would be used, but for some (like upon_error or let_stopped), one of the others would be used.
For sender algorithms which accept concepts other than sender as their first argument, we propose that the customization scheme remains as it has been in A Unified Executors Proposal for C++ so far, except it should also use tag_invoke.
5.5. Sender adaptors are lazy
Contrary to early revisions of this paper, we propose to make all sender adaptors perform strictly lazy submission, unless specified otherwise (the one notable exception in this paper is § 4.21.13 execution::ensure_started, whose sole purpose is to start an input sender).
Strictly lazy submission means that there is a guarantee that no work is submitted to an execution resource before a receiver is connected to a sender, and start is called on the resulting operation state.
5.6. Lazy senders provide optimization opportunities
Because lazy senders fundamentally describe work, instead of describing or representing the submission of said work to an execution resource, and thanks to the flexibility of the customization of most sender algorithms, they provide an opportunity for fusing multiple algorithms in a sender chain together, into a single function that can later be submitted for execution by an execution resource. There are two ways this can happen.
The first (and most common) way for such optimizations to happen is thanks to the structure of the implementation: because all the work is done within callbacks invoked on the completion of an earlier sender, recursively up to the original source of computation, the compiler is able to see a chain of work described using senders as a tree of tail calls, allowing for inlining and removal of most of the sender machinery. In fact, when work is not submitted to execution resources outside of the current thread of execution, compilers are capable of removing the senders abstraction entirely, while still allowing for composition of functions across different parts of a program.
The second way for this to occur is when a sender algorithm is specialized for a specific set of arguments. For instance, we expect that, for senders which are known to have been started already, § 4.21.13 execution::ensure_started will be an identity transformation, because the sender algorithm will be specialized for such senders. Similarly, an implementation could recognize two subsequent § 4.21.9 execution::bulks of compatible shapes, and merge them together into a single submission of a GPU kernel.
5.7. Execution resource transitions are two-step
Because execution::transfer takes a sender as its first argument, it is not actually directly customizable by the target scheduler. This is by design: the target scheduler may not know how to transition from a scheduler such as a CUDA scheduler; transitioning away from a GPU in an efficient manner requires making runtime calls that are specific to the GPU in question, and the same is usually true for other kinds of accelerators too (or for schedulers running on remote systems). To avoid this problem, specialized schedulers like the ones mentioned here can still hook into the transition mechanism, and inject a sender which will perform a transition to the regular CPU execution resource, so that any sender can be attached to it.

This, however, is a problem: because customization of sender algorithms must be controlled by the scheduler they will run on (see § 5.4 Sender algorithms are customizable), the type of the sender returned from transfer must be controllable by the target scheduler. Besides, the target scheduler may itself represent a specialized execution resource, which requires additional work to be performed to transition to it. GPUs and remote node schedulers are once again good examples of such schedulers: executing code on their execution resources requires making runtime API calls for work submission, and quite possibly for the data movement of the values being sent by the input sender passed into transfer.
To allow for such customization from both ends, we propose the inclusion of a secondary transitioning sender adaptor, called schedule_from. This adaptor is a form of schedule, but takes an additional, second argument: the input sender. This adaptor is not meant to be invoked manually by the end users; they are always supposed to invoke transfer, to ensure that both schedulers have a say in how the transitions are made. Any scheduler that specializes transfer(snd, sched) shall ensure that the return value of their customization is equivalent to schedule_from(sched, snd2), where snd2 is a successor of snd that sends values equivalent to those sent by snd.

The default implementation of transfer(snd, sched) is schedule_from(sched, snd).
5.8. All senders are typed
All senders must advertise the types they will send when they complete. This is necessary for a number of features, and writing code in a way that’s agnostic of whether an input sender is typed or not in common sender adaptors such as when_all is hard.
The mechanism for this advertisement is similar to the one in A Unified Executors Proposal for C++; the way to query the types is through value_types. value_types is a template that takes two arguments: one is a tuple-like template, the other is a variant-like template. The tuple-like argument is required to represent senders sending more than one value (such as when_all). The variant-like argument is required to represent senders that choose which specific values to send at runtime.
There’s a choice made in the specification of § 4.22.2 this_thread::sync_wait: it returns a tuple of values sent by the sender passed to it, wrapped in std::optional to handle the "stopped" signal. However, this assumes that those values can be represented as a tuple, like here:

execution::sender auto sends_1 = ...;
execution::sender auto sends_2 = ...;
execution::sender auto sends_3 = ...;

auto [a, b, c] = this_thread::sync_wait(
    execution::transfer_when_all(
        execution::get_completion_scheduler<execution::set_value_t>(get_env(sends_1)),
        sends_1, sends_2, sends_3
    )).value();
// a == 1
// b == 2
// c == 3
This works well for senders that always send the same set of arguments. If we ignore the possibility of having a sender that sends different sets of arguments into a receiver, we can specify the "canonical" (i.e. required to be followed by all senders) form of value_types of a sender which sends Types... to be as follows:

template<template<typename...> typename TupleLike>
using value_types = TupleLike<Types...>;

If senders could only ever send one specific set of values, this would probably need to be the required form of value_types for all senders; defining it otherwise would cause very weird results and should be considered a bug.
This matter is somewhat complicated by the fact that (1) set_value for receivers can be overloaded and accept different sets of arguments, and (2) senders are allowed to send multiple different sets of values, depending on runtime conditions, the data they consumed, and so on. To accommodate this, A Unified Executors Proposal for C++ also includes a second template parameter to value_types, one that represents a variant-like type. If we permit such senders, we would almost certainly need to require that the canonical form of value_types for all senders (to ensure consistency in how they are handled, and to avoid accidentally interpreting a user-provided variant as a sender-provided one) sending the different sets of arguments Types1..., Types2..., ..., TypesN... to be as follows:

template<
    template<typename...> typename TupleLike,
    template<typename...> typename VariantLike>
using value_types = VariantLike<
    TupleLike<Types1...>,
    TupleLike<Types2...>,
    ...,
    TupleLike<TypesN...>>;
This, however, introduces a couple of complications:
-
A just(1) sender would also need to follow this structure, so the correct type for storing the value sent by it would be std::variant<std::tuple<int>> or some such. This introduces a lot of compile time overhead for the simplest senders, and this overhead effectively exists in all places in the code where value_types is queried, regardless of the tuple-like and variant-like templates passed to it. Such overhead does exist if only the tuple-like parameter exists, but is made much worse by adding this second wrapping layer.
-
As a consequence of (1): because sync_wait needs to store the above type, it can no longer return just a std::tuple<int> for just(1); it has to return std::variant<std::tuple<int>>. C++ currently does not have an easy way to destructure this; it may get less awkward with pattern matching, but even then it seems extremely heavyweight to involve variants in this API, and for the purpose of generic code, the kind of the return type of sync_wait must be the same across all sender types.
One possible solution to (2) above is to place a requirement on sync_wait that it can only accept senders which send only a single set of values, therefore removing the need for std::variant to appear in its API; because of this, we propose to expose both sync_wait, which is a simple, user-friendly version of the sender consumer, but requires that value_types have only one possible variant, and sync_wait_with_variant, which accepts any sender, but returns an optional whose value type is the variant of all the possible tuples sent by the input sender:

auto sync_wait_with_variant(execution::sender auto sender)
    -> std::optional<std::variant<
           std::tuple<values0-sent-by(sender)>,
           std::tuple<values1-sent-by(sender)>,
           ...,
           std::tuple<valuesN-sent-by(sender)>>>;

auto sync_wait(execution::sender auto sender)
    requires (always-sends-same-values(sender))
    -> std::optional<std::tuple<values-sent-by(sender)>>;
5.9. Ranges-style CPOs vs tag_invoke
The contemporary technique for customization in the Standard Library is customization point objects. A customization point object will look for member functions and then for non-member functions with the same name as the customization point, and calls those if they match. This is the technique used by the C++20 ranges library, and previous executors proposals (A Unified Executors Proposal for C++ and Towards C++23 executors: A proposal for an initial set of algorithms) intended to use it as well. However, it has several unfortunate consequences:
-
It does not allow for easy propagation of customization points unknown to the adaptor to a wrapped object, which makes writing universal adapter types much harder - and this proposal uses quite a lot of those.
-
It effectively reserves names globally. Because neither member names nor ADL-found functions can be qualified with a namespace, every customization point object that uses the ranges scheme reserves the name for all types in all namespaces. This is unfortunate due to the sheer number of customization points already in the paper, but also ones that we are envisioning in the future. It’s also a big problem for one of the operations being proposed already: sync_wait. We imagine that if, in the future, C++ was to gain fibers support, we would want to also have std::this_fiber::sync_wait, in addition to std::this_thread::sync_wait. However, because we would want the names to be the same in both cases, we would need to make the names of the customizations not match the names of the customization points. This is undesirable.
This paper proposes to instead use the mechanism described in tag_invoke: A general pattern for supporting customisable functions: tag_invoke; the wording for tag_invoke has been incorporated into the proposed specification in this paper.

In short, instead of using globally reserved names, tag_invoke uses the type of the customization point object itself as the mechanism to find customizations. It globally reserves only a single name - tag_invoke - which itself is used the same way that ranges-style customization points are used. All other customization points are defined in terms of tag_invoke. For example, the customization for execution::connect(sender, receiver) will call tag_invoke(execution::connect, sender, receiver), instead of attempting to invoke sender.connect(receiver), and then connect(sender, receiver) if the member call is not valid.
Using tag_invoke has the following benefits:
-
It reserves only a single global name, instead of reserving a global name for every customization point object we define.
-
It is possible to propagate customizations to a subobject, because the information of which customization point is being resolved is in the type of an argument, and not in the name of the function:
// forward most customizations to a subobject
template<typename Tag, typename... Args>
friend auto tag_invoke(Tag&& tag, wrapper& self, Args&&... args) {
    return std::forward<Tag>(tag)(self.subobject, std::forward<Args>(args)...);
}

// but override one of them with a specific value
friend auto tag_invoke(specific_customization_point_t, wrapper& self) {
    return self.some_value;
}
-
It is possible to pass those as template arguments to types, because the information of which customization point is being resolved is in the type. Similarly to how A Unified Executors Proposal for C++ defines a polymorphic executor wrapper which accepts a list of properties it supports, we can imagine scheduler and sender wrappers that accept a list of queries and operations they support. That list can contain the types of the customization point objects, and the polymorphic wrappers can then specialize those customization points on themselves using tag_invoke, dispatching to manually constructed vtables containing pointers to specialized implementations for the wrapped objects. For an example of such a polymorphic wrapper, see unifex::any_unique (example).
6. Specification
Much of this wording follows the wording of A Unified Executors Proposal for C++.
§ 8 Library introduction [library] is meant to be a diff relative to the wording of the [library] clause of Working Draft, Standard for Programming Language C++.
§ 9 General utilities library [utilities] is meant to be a diff relative to the wording of the [utilities] clause of Working Draft, Standard for Programming Language C++. This diff applies changes from tag_invoke: A general pattern for supporting customisable functions.
§ 10 Thread support library [thread] is meant to be a diff relative to the wording of the [thread] clause of Working Draft, Standard for Programming Language C++. This diff applies changes from Composable cancellation for sender-based async operations.
§ 11 Execution control library [exec] is meant to be added as a new library clause to the working draft of C++.
7. Exception handling [except]
7.1. Special functions [except.special]
7.1.1. General [except.special.general]
7.1.1.1. The std::terminate function [except.terminate]

when a callback invocation exits via an exception when requesting stop on a std::stop_source or a std::in_place_stop_source ([stopsource.mem], [stopsource.inplace.mem]), or in the constructor of std::stop_callback or std::in_place_stop_callback ([stopcallback.cons], [stopcallback.inplace.cons]) when a callback invocation exits via an exception.
8. Library introduction [library]
Add the header <execution> to Table 23: C++ library headers [tab:headers.cpp].
In subclause [conforming], after [lib.types.movedfrom], add the following new subclause with suggested stable name [lib.tmpl-heads].
16.4.6.17 Class template-heads
If a class template’s template-head is marked with "arguments are not associated entities", any template arguments do not contribute to the associated entities ([basic.lookup.argdep]) of a function call where a specialization of the class template is an associated entity. In such a case, the class template can be implemented as an alias template referring to a templated class, or as a class template where the template arguments themselves are templated classes.
[Example:

template<class T> // arguments are not associated entities
struct S {};

namespace N {
  int f(auto);
  struct A {};
}

int x = f(S<N::A>{}); // error: N::f not a candidate

The template S specified above can be implemented as

template<class T>
struct s-impl {
  struct type {};
};

template<class T>
using S = typename s-impl<T>::type;

or as

template<class T>
struct hidden {
  using type = struct _ {
    using type = T;
  };
};

template<class HiddenT>
struct s-impl {
  using T = typename HiddenT::type;
};

template<class T>
using S = s-impl<typename hidden<T>::type>;

-- end example]
9. General utilities library [utilities]
9.1. Function objects [function.objects]
9.1.1. Header < functional >
synopsis [functional.syn]
At the end of this subclause, insert the following declarations into the synopsis within <functional>:

// exposition only:
template<class Fn, class... Args>
  concept callable =
    requires (Fn&& fn, Args&&... args) {
      std::forward<Fn>(fn)(std::forward<Args>(args)...);
    };

template<class Fn, class... Args>
  concept nothrow-callable =
    callable<Fn, Args...> &&
    requires (Fn&& fn, Args&&... args) {
      { std::forward<Fn>(fn)(std::forward<Args>(args)...) } noexcept;
    };

template<class Fn, class... Args>
  using call-result-t = decltype(declval<Fn>()(declval<Args>()...));

// [func.tag_invoke], tag_invoke
namespace tag-invoke { // exposition only
  void tag_invoke();

  template<class Tag, class... Args>
    concept tag_invocable =
      requires (Tag&& tag, Args&&... args) {
        tag_invoke(std::forward<Tag>(tag), std::forward<Args>(args)...);
      };

  template<class Tag, class... Args>
    concept nothrow_tag_invocable =
      tag_invocable<Tag, Args...> &&
      requires (Tag&& tag, Args&&... args) {
        { tag_invoke(std::forward<Tag>(tag), std::forward<Args>(args)...) } noexcept;
      };

  template<class Tag, class... Args>
    using tag_invoke_result_t =
      decltype(tag_invoke(declval<Tag>(), declval<Args>()...));

  template<class Tag, class... Args>
    struct tag_invoke_result<Tag, Args...> {
      // present if and only if tag_invocable<Tag, Args...> is true
      using type = tag_invoke_result_t<Tag, Args...>;
    };

  struct tag; // exposition only
}

inline constexpr tag-invoke::tag tag_invoke {};

using tag-invoke::tag_invocable;
using tag-invoke::nothrow_tag_invocable;
using tag-invoke::tag_invoke_result_t;
using tag-invoke::tag_invoke_result;

template<auto& Tag>
  using tag_t = decay_t<decltype(Tag)>;
9.1.2. tag_invoke
[func.tag_invoke]
Insert this section as a new subclause, between Searchers [func.search] and Class template hash [unord.hash].
Given a subexpression E, let REIFY(E) be expression-equivalent to a glvalue with the same type and value as E, as if by identity()(E).

The name std::tag_invoke denotes a customization point object [customization.point.object]. Given subexpressions T and A..., the expression std::tag_invoke(T, A...) is expression-equivalent [defns.expression-equivalent] to tag_invoke(REIFY(T), REIFY(A)...) with overload resolution performed in a context in which unqualified lookup for tag_invoke finds only the declaration

void tag_invoke();

[Note: Diagnosable ill-formed cases above result in substitution failure when std::tag_invoke(T, A...) appears in the immediate context of a template instantiation. —end note]
10. Thread support library [thread]
10.1. Stop tokens [thread.stoptoken]
10.1.1. Header < stop_token >
synopsis [thread.stoptoken.syn]
At the beginning of this subclause, insert the following declarations into the synopsis within <stop_token>:

template<template<class> class>
  struct check-type-alias-exists; // exposition-only

template<class T>
  concept stoppable_token = see-below;

template<class T, class CB, class Initializer = CB>
  concept stoppable_token_for = see-below;

template<class T>
  concept unstoppable_token = see-below;
At the end of this subclause, insert the following declarations into the synopsis within <stop_token>:

// [stoptoken.never], class never_stop_token
class never_stop_token;

// [stoptoken.inplace], class in_place_stop_token
class in_place_stop_token;

// [stopsource.inplace], class in_place_stop_source
class in_place_stop_source;

// [stopcallback.inplace], class template in_place_stop_callback
template<class CB>
  class in_place_stop_callback;

template<class T, class CB>
  using stop_callback_for_t = typename T::template callback_type<CB>;
10.1.2. Stop token concepts [thread.stoptoken.concepts]
Insert this section as a new subclause between Header <stop_token> synopsis [thread.stoptoken.syn] and Class stop_token [stoptoken].
The stoppable_token concept checks for the basic interface of a stop token that is copyable and allows polling to see if stop has been requested and also whether a stop request is possible. For a stop token type T and a type CB that is callable with no arguments, the type T::callback_type<CB> is valid and denotes the stop callback type to use to register a callback to be executed if a stop request is ever made on a stoppable_token of type T. The stoppable_token_for concept checks for a stop token type compatible with a given callback type. The unstoppable_token concept checks for a stop token type that does not allow stopping.
template<class T>
  concept stoppable_token =
    copyable<T> &&
    equality_comparable<T> &&
    requires (const T t) {
      { T(t) } noexcept; // see implicit expression variations ([concepts.equality])
      { t.stop_requested() } noexcept -> same_as<bool>;
      { t.stop_possible() } noexcept -> same_as<bool>;
      typename check-type-alias-exists<T::template callback_type>;
    };

template<class T, class CB, class Initializer = CB>
  concept stoppable_token_for =
    stoppable_token<T> &&
    invocable<CB> &&
    constructible_from<CB, Initializer> &&
    requires { typename stop_callback_for_t<T, CB>; } &&
    constructible_from<stop_callback_for_t<T, CB>, const T&, Initializer>;

template<class T>
  concept unstoppable_token =
    stoppable_token<T> &&
    requires {
      { bool_constant<T::stop_possible()>{} } -> same_as<false_type>;
    };

LWG directed me to replace T::stop_possible() with t.stop_possible() because of the recent constexpr changes in P2280R2. However, even with those changes, a nested requirement like requires (!t.stop_possible()), where t is an argument in the requirement-parameter-list, is ill-formed according to [expr.prim.req.nested]/p2:

A local parameter shall only appear as an unevaluated operand within the constraint-expression.

This is the subject of core issue 2517.
Let t and u be distinct, valid objects of type T. The type T models stoppable_token only if:
If t.stop_possible() evaluates to false then, if t and u reference the same logical shared stop state, u.stop_possible() shall also subsequently evaluate to false and u.stop_requested() shall also subsequently evaluate to false.

If t.stop_requested() evaluates to true then, if t and u reference the same logical shared stop state, u.stop_requested() shall also subsequently evaluate to true and u.stop_possible() shall also subsequently evaluate to true.
Let t and u be distinct, valid objects of type T and let init be an object of type Initializer. Then for some type CB, the type T models stoppable_token_for<CB, Initializer> only if:
The type T::callback_type<CB> models: constructible_from<T, Initializer> && constructible_from<T&, Initializer> && constructible_from<const T, Initializer>

Direct non-list initializing an object cb of type T::callback_type<CB> from t, init shall, if t.stop_possible() is true, construct an instance, callback, of type CB, direct-initialized with init, and register callback with t's shared stop state such that callback will be invoked with an empty argument list if a stop request is made on the shared stop state.
If t.stop_requested() evaluates to true at the time callback is registered then callback can be invoked on the thread executing cb's constructor.
If callback is invoked then, if t and u reference the same shared stop state, an evaluation of u.stop_requested() will be true if the beginning of the invocation of callback strongly-happens-before the evaluation of u.stop_requested().
[Note: If t.stop_possible() evaluates to false then the construction of cb is not required to construct and initialize callback. --end note]
Construction of a T::callback_type<CB> instance shall only throw exceptions thrown by the initialization of the CB instance from the value of type Initializer.
Destruction of the T::callback_type<CB> object, cb, removes callback from the shared stop state such that callback will not be invoked after the destructor returns.
If callback is currently being invoked on another thread then the destructor of cb will block until the invocation of callback returns such that the return from the invocation of callback strongly-happens-before the destruction of callback.

Destruction of a callback cb shall not block on the completion of the invocation of some other callback registered with the same shared stop state.
10.1.3. Class stop_token
[stoptoken]
10.1.3.1. General [stoptoken.general]
Modify the synopsis of class stop_token in section General [stoptoken.general] as follows:

namespace std {
  class stop_token {
  public:
    template<class T>
      using callback_type = stop_callback<T>;

    // [stoptoken.cons], constructors, copy, and assignment
    stop_token() noexcept;
    // ...
10.1.4. Class never_stop_token
[stoptoken.never]
Insert a new subclause, Class never_stop_token [stoptoken.never], after section Class template stop_callback [stopcallback], as a new subclause of Stop tokens [thread.stoptoken].
10.1.4.1. General [stoptoken.never.general]
-
The class never_stop_token provides an implementation of the unstoppable_token concept. It provides a stop token interface, but also provides static information that a stop is never possible nor requested.

namespace std {
  class never_stop_token {
    // exposition only
    struct callback {
      explicit callback(never_stop_token, auto&&) noexcept {}
    };
  public:
    template<class>
      using callback_type = callback;

    [[nodiscard]] static constexpr bool stop_requested() noexcept { return false; }
    [[nodiscard]] static constexpr bool stop_possible() noexcept { return false; }

    [[nodiscard]] friend bool operator==(const never_stop_token&,
                                         const never_stop_token&) noexcept = default;
  };
}
10.1.5. Class in_place_stop_token
[stoptoken.inplace]
Insert a new subclause, Class in_place_stop_token [stoptoken.inplace], after the section added above, as a new subclause of Stop tokens [thread.stoptoken].
10.1.5.1. General [stoptoken.inplace.general]
-
The class in_place_stop_token provides an interface for querying whether a stop request has been made (stop_requested) or can ever be made (stop_possible) using an associated in_place_stop_source object ([stopsource.inplace]). An in_place_stop_token can also be passed to an in_place_stop_callback ([stopcallback.inplace]) constructor to register a callback to be called when a stop request has been made from an associated in_place_stop_source.

namespace std {
  class in_place_stop_token {
  public:
    template<class CB>
      using callback_type = in_place_stop_callback<CB>;

    // [stoptoken.inplace.cons], constructors, copy, and assignment
    in_place_stop_token() noexcept;
    ~in_place_stop_token();
    void swap(in_place_stop_token&) noexcept;

    // [stoptoken.inplace.mem], stop handling
    [[nodiscard]] bool stop_requested() const noexcept;
    [[nodiscard]] bool stop_possible() const noexcept;

    [[nodiscard]] friend bool operator==(const in_place_stop_token&,
                                         const in_place_stop_token&) noexcept = default;
    friend void swap(in_place_stop_token& lhs, in_place_stop_token& rhs) noexcept;

  private:
    const in_place_stop_source* source_; // exposition only
  };
}
10.1.5.2. Constructors, copy, and assignment [stoptoken.inplace.cons]
in_place_stop_token() noexcept;

-
Effects: Initializes source_ with nullptr.

void swap(in_place_stop_token& rhs) noexcept;

-
Effects: Exchanges the values of source_ and rhs.source_.
10.1.5.3. Members [stoptoken.inplace.mem]
[[nodiscard]] bool stop_requested() const noexcept;

-
Effects: Equivalent to: return source_ != nullptr && source_->stop_requested();
-
[Note: The behavior of stop_requested() is undefined unless the call strongly happens before the start of the destructor of the associated in_place_stop_source, if any ([basic.life]). --end note]
[[nodiscard]] bool stop_possible() const noexcept;

-
Effects: Equivalent to: return source_ != nullptr;
-
[Note: The behavior of stop_possible() is implementation-defined unless the call strongly happens before the end of the storage duration of the associated in_place_stop_source object, if any ([basic.stc.general]). --end note]
10.1.5.4. Non-member functions [stoptoken.inplace.nonmembers]
friend void swap ( in_place_stop_token & x , in_place_stop_token & y ) noexcept ;
-
Effects: Equivalent to: x.swap(y).
10.1.6. Class in_place_stop_source
[stopsource.inplace]
Insert a new subclause, Class in_place_stop_source [stopsource.inplace], after the section added above, as a new subclause of Stop tokens [thread.stoptoken].
10.1.6.1. General [stopsource.inplace.general]
-
The class in_place_stop_source implements the semantics of making a stop request, without the need for a dynamic allocation of a shared state. A stop request made on an in_place_stop_source object is visible to all associated in_place_stop_token ([stoptoken.inplace]) objects. Once a stop request has been made it cannot be withdrawn (a subsequent stop request has no effect). All uses of in_place_stop_token objects associated with a given in_place_stop_source object must happen before the start of the destructor of that in_place_stop_source object.

namespace std {
  class in_place_stop_source {
  public:
    // [stopsource.inplace.cons], constructors, copy, and assignment
    in_place_stop_source() noexcept;
    in_place_stop_source(in_place_stop_source&&) noexcept = delete;
    ~in_place_stop_source();

    // [stopsource.inplace.mem], stop handling
    [[nodiscard]] in_place_stop_token get_token() const noexcept;
    [[nodiscard]] static constexpr bool stop_possible() noexcept { return true; }
    [[nodiscard]] bool stop_requested() const noexcept;
    bool request_stop() noexcept;
  };
}
-
An instance of in_place_stop_source maintains a list of registered callback invocations. The registration of a callback invocation either succeeds or fails. When an invocation of a callback is registered, the following happens atomically:
The stop state is checked. If stop has not been requested, the callback invocation is added to the list of registered callback invocations, and registration has succeeded.
-
Otherwise, registration has failed.
When an invocation of a callback is unregistered, the invocation is atomically removed from the list of registered callback invocations. The removal is not blocked by the concurrent execution of another callback invocation in the list. If the callback invocation being unregistered is currently executing, then:
-
If the execution of the callback invocation is happening concurrently on another thread, the completion of the execution strongly happens before ([intro.races]) the end of the callback’s lifetime.
-
Otherwise, the execution is happening on the current thread. Removal of the callback invocation does not block waiting for the execution to complete.
-
10.1.6.2. Constructors, copy, and assignment [stopsource.inplace.cons]
in_place_stop_source () noexcept ;
-
Effects: Initializes a new stop state inside *this.
-
Postconditions: stop_requested() is false.
10.1.6.3. Members [stopsource.inplace.mem]
[[ nodiscard ]] in_place_stop_token get_token () const noexcept ;
-
Returns: A new associated in_place_stop_token object.
[[ nodiscard ]] bool stop_requested () const noexcept ;
-
Returns: true if the stop state inside *this has received a stop request; otherwise, false.
bool request_stop () noexcept ;
-
Effects: Atomically determines whether the stop state inside *this has received a stop request, and if not, makes a stop request. The determination and making of the stop request are an atomic read-modify-write operation ([intro.races]). If the request was made, the registered invocations are executed and the evaluations of the invocations are indeterminately sequenced. If an invocation of a callback exits via an exception then terminate is invoked ([except.terminate]).
-
Postconditions: stop_requested() is true.
-
Returns: true if this call made a stop request; otherwise false.
10.1.7. Class template in_place_stop_callback
[stopcallback.inplace]
Insert a new subclause, Class template in_place_stop_callback [stopcallback.inplace], after the section added above, as a new subclause of Stop tokens [thread.stoptoken].
10.1.7.1. General [stopcallback.inplace.general]
-
namespace std {
  template<class Callback>
  class in_place_stop_callback {
  public:
    using callback_type = Callback;

    // [stopcallback.inplace.cons], constructors and destructor
    template<class C>
      explicit in_place_stop_callback(in_place_stop_token st, C&& cb)
        noexcept(is_nothrow_constructible_v<Callback, C>);
    ~in_place_stop_callback();

    in_place_stop_callback(in_place_stop_callback&&) = delete;

  private:
    Callback callback_; // exposition only
  };

  template<class Callback>
    in_place_stop_callback(in_place_stop_token, Callback)
      -> in_place_stop_callback<Callback>;
}
-
Mandates: in_place_stop_callback is instantiated with an argument for the template parameter Callback that satisfies both invocable and destructible.
-
Preconditions: in_place_stop_callback is instantiated with an argument for the template parameter Callback that models both invocable and destructible.
-
Recommended practice: Implementations should use the storage of the in_place_stop_callback objects to store the state necessary for their association with an in_place_stop_source object.
10.1.7.2. Constructors and destructor [stopcallback.inplace.cons]
template<class C>
  explicit in_place_stop_callback(in_place_stop_token st, C&& cb)
    noexcept(is_nothrow_constructible_v<Callback, C>);

-
Constraints: Callback and C satisfy constructible_from<Callback, C>.
-
Preconditions: Callback and C model constructible_from<Callback, C>.
-
Effects: Initializes callback_ with std::forward<C>(cb). Any in_place_stop_source associated with st becomes associated with *this. Registers ([stopsource.inplace.general]) the callback invocation std::forward<Callback>(callback_)() with the associated in_place_stop_source, if any. If the registration fails, evaluates the callback invocation.
-
Throws: Any exception thrown by the initialization of callback_.
-
Remarks: If evaluating std::forward<Callback>(callback_)() exits via an exception, then terminate is invoked ([except.terminate]).
~ in_place_stop_callback ();
-
Effects: Unregisters ([stopsource.inplace.general]) the callback invocation from the associated in_place_stop_source object, if any.
-
Remarks: A program has undefined behavior if the start of this destructor does not strongly happen before the start of the destructor of the associated in_place_stop_source object, if any.
11. Execution control library [exec]
11.1. General [exec.general]
-
This Clause describes components supporting execution of function objects [function.objects].
-
The following subclauses describe the requirements, concepts, and components for execution control primitives as summarized in Table 1.
Subclause | Header
---|---
[exec.execute] One-way execution | <execution>
-
[Note: A large number of execution control primitives are customization point objects. For an object one might define multiple types of customization point objects, for which different rules apply. Table 2 shows the types of customization point objects used in the execution control library:
Customization point object type | Purpose | Examples
---|---|---
core | provide core execution functionality, and connection between core components |
completion functions | called by senders to announce the completion of the work (success, error, or cancellation) | set_value, set_error, set_stopped
senders | allow the specialization of the provided sender algorithms |
queries | allow querying different properties of objects |
-- end note]
-
This clause makes use of the following exposition-only entities:
-
template<class Fn, class... Args>
    requires callable<Fn, Args...>
  constexpr auto mandate-nothrow-call(Fn&& fn, Args&&... args) noexcept
    -> call-result-t<Fn, Args...> {
    return std::forward<Fn>(fn)(std::forward<Args>(args)...);
  }
-
Mandates: nothrow-callable<Fn, Args...> is true.
-
-
template<class T>
  concept movable-value =
    move_constructible<decay_t<T>> &&
    constructible_from<decay_t<T>, T>;
-
For function types F1 and F2 denoting R1(Args1...) and R2(Args2...) respectively, MATCHING-SIG(F1, F2) is true if and only if same_as<R1(Args1&&...), R2(Args2&&...)> is true.
-
11.2. Queries and queryables [exec.queryable]
11.2.1. General [exec.queryable.general]
-
A queryable object is a read-only collection of key/value pairs where each key is a customization point object known as a query object. A query is an invocation of a query object with a queryable object as its first argument and a (possibly empty) set of additional arguments. The result of a query expression is valid as long as the queryable object is valid. A query imposes syntactic and semantic requirements on its invocations.
-
Given a subexpression e that refers to a queryable object q, a query object F, and a (possibly empty) pack of subexpressions args, the expression F(e, args...) is equal to ([concepts.equality]) the expression F(c, args...) where c is a const lvalue reference to q.
-
The type of a query expression cannot be void.
-
The expression F(e, args...) is equality-preserving ([concepts.equality]) and does not modify the function object or the arguments.
-
Unless otherwise specified, the value returned by the expression F(e, args...) is valid as long as e is valid.
11.2.2. queryable concept [exec.queryable.concept]
template < class T > concept queryable = destructible < T > ;
-
The queryable concept specifies the constraints on the types of queryable objects.
-
Let e be an object of type E. The type E models queryable if for each callable object F and a pack of subexpressions args, if requires { F(e, args...) } is true then F(e, args...) meets any semantic requirements imposed by F.
11.3. Asynchronous operations [async.ops]
-
An execution resource is a program entity that manages a (possibly dynamic) set of execution agents ([thread.req.lockable.general]), which it uses to execute parallel work on behalf of callers. [Example 1: The currently active thread, a system-provided thread pool, and uses of an API associated with an external hardware accelerator are all examples of execution resources. -- end example] Execution resources execute asynchronous operations. An execution resource is either valid or invalid.
-
An asynchronous operation is a distinct unit of program execution that:
-
is explicitly created;
-
can be explicitly started; an asynchronous operation can be started once at most;
-
if started, eventually completes with a (possibly empty) set of result datums, and in exactly one of three modes: success, failure, or cancellation, known as the operation’s disposition; an asynchronous operation can only complete once; a successful completion, also known as a value completion, can have an arbitrary number of result datums; a failure completion, also known as an error completion, has a single result datum; a cancellation completion, also known as a stopped completion, has no result datum; an asynchronous operation’s async result is its disposition and its (possibly empty) set of result datums.
-
can complete on a different execution resource than that on which it started; and
-
can create and start other asynchronous operations called child operations. A child operation is an asynchronous operation that is created by the parent operation and, if started, completes before the parent operation completes. A parent operation is the asynchronous operation that created a particular child operation.
An asynchronous operation can in fact execute synchronously; that is, it can complete during the execution of its start operation on the thread of execution that started it.
-
-
An asynchronous operation has associated state known as its operation state.
-
An asynchronous operation has an associated environment. An environment is a queryable object ([exec.queryable]) representing the execution-time properties of the operation’s caller. The caller of an asynchronous operation is its parent operation or the function that created it. An asynchronous operation’s operation state owns the operation’s environment.
-
An asynchronous operation has an associated receiver. A receiver is an aggregation of three handlers for the three asynchronous completion dispositions: a value completion handler for a value completion, an error completion handler for an error completion, and a stopped completion handler for a stopped completion. A receiver has an associated environment. An asynchronous operation’s operation state owns the operation’s receiver. The environment of an asynchronous operation is equal to its receiver’s environment.
-
For each completion disposition, there is a completion function. A completion function is a customization point object ([customization.point.object]) that accepts an asynchronous operation’s receiver as the first argument and the result datums of the asynchronous operation as additional arguments. The value completion function invokes the receiver’s value completion handler with the value result datums; likewise for the error completion function and the stopped completion function. A completion function has an associated type known as its completion tag that names the unqualified type of the completion function. A valid invocation of a completion function is called a completion operation.
-
The lifetime of an asynchronous operation, also known as the operation’s async lifetime, begins when its start operation begins executing and ends when its completion operation begins executing. If the lifetime of an asynchronous operation’s associated operation state ends before the lifetime of the asynchronous operation, the behavior is undefined. After an asynchronous operation executes a completion operation, its associated operation state is invalid. Accessing any part of an invalid operation state is undefined behavior.
-
An asynchronous operation shall not execute a completion operation before its start operation has begun executing. After its start operation has begun executing, exactly one completion operation shall execute. The lifetime of an asynchronous operation’s operation state can end during the execution of the completion operation.
-
A sender is a factory for one or more asynchronous operations. Connecting a sender and a receiver creates an asynchronous operation. The asynchronous operation’s associated receiver is equal to the receiver used to create it, and its associated environment is equal to the environment associated with the receiver used to create it. The lifetime of an asynchronous operation’s associated operation state does not depend on the lifetimes of either the sender or the receiver from which it was created. A sender sends its results by way of the asynchronous operation(s) it produces, and a receiver receives those results. A sender is either valid or invalid; it becomes invalid when its parent sender (see below) becomes invalid.
-
A scheduler is an abstraction of an execution resource with a uniform, generic interface for scheduling work onto that resource. It is a factory for senders whose asynchronous operations execute value completion operations on an execution agent belonging to the scheduler’s associated execution resource. A schedule-expression obtains such a sender from a scheduler. A schedule sender is the result of a schedule expression. On success, an asynchronous operation produced by a schedule sender executes a value completion operation with an empty set of result datums. Multiple schedulers can refer to the same execution resource. A scheduler can be valid or invalid. A scheduler becomes invalid when the execution resource to which it refers becomes invalid, as do any schedule senders obtained from the scheduler, and any operation states obtained from those senders.
-
An asynchronous operation has one or more associated completion schedulers for each of its possible dispositions. A completion scheduler is a scheduler whose associated execution resource is used to execute a completion operation for an asynchronous operation. A value completion scheduler is a scheduler on which an asynchronous operation’s value completion operation can execute. Likewise for error completion schedulers and stopped completion schedulers.
-
A sender has an associated queryable object ([exec.queryable]) known as its attributes that describes various characteristics of the sender and of the asynchronous operation(s) it produces. For each disposition, there is a query object for reading the associated completion scheduler from a sender’s attributes; i.e., a value completion scheduler query object for reading a sender’s value completion scheduler, etc. If a completion scheduler query is well-formed, the returned completion scheduler is unique for that disposition for any asynchronous operation the sender creates. A schedule sender is required to have a value completion scheduler attribute whose value is equal to the scheduler that produced the schedule sender.
-
A completion signature is a function type that describes a completion operation. An asynchronous operation has a finite set of possible completion signatures. The completion signature’s return type is the completion tag associated with the completion function that executes the completion operation. The completion signature’s argument types are the types and value categories of the asynchronous operation’s result datums. Together, a sender type and an environment type E determine the set of completion signatures of an operation state that results from connecting the sender with a receiver whose environment has type E. The type of the receiver does not affect an asynchronous operation’s completion signatures, only the type of the receiver’s environment.
A sender algorithm is a function that takes and/or returns a sender. There are three categories of sender algorithms:
-
A sender factory is a function that takes non-senders as arguments and that returns a sender.
-
A sender adaptor is a function that constructs and returns a parent sender from a set of one or more child senders and a (possibly empty) set of additional arguments. An asynchronous operation created by a parent sender is a parent to the child operations created by the child senders.
-
A sender consumer is a function that takes one or more senders and a (possibly empty) set of additional arguments, and whose return type is not the type of a sender.
11.4. Header <execution> synopsis [exec.syn]
namespace std {
  // [exec.general], helper concepts
  template<class T>
    concept movable-value = see below; // exposition only
  template<class From, class To>
    concept decays-to = same_as<decay_t<From>, To>; // exposition only
  template<class T>
    concept class-type = decays-to<T, T> && is_class_v<T>; // exposition only

  // [exec.queryable], queryable objects
  template<class T>
    concept queryable = destructible<T>;

  // [exec.queries], queries
  namespace queries { // exposition only
    struct forwarding_query_t;
    struct get_allocator_t;
    struct get_stop_token_t;
  }
  using queries::forwarding_query_t;
  using queries::get_allocator_t;
  using queries::get_stop_token_t;
  inline constexpr forwarding_query_t forwarding_query{};
  inline constexpr get_allocator_t get_allocator{};
  inline constexpr get_stop_token_t get_stop_token{};
  template<class T>
    using stop_token_of_t = remove_cvref_t<decltype(get_stop_token(declval<T>()))>;
  template<class T>
    concept forwarding-query = forwarding_query(T{}); // exposition only

  namespace exec-envs { // exposition only
    struct empty_env {};
    struct get_env_t;
  }
  using exec-envs::empty_env;
  using exec-envs::get_env_t;
  inline constexpr get_env_t get_env{};
  template<class T>
    using env_of_t = decltype(get_env(declval<T>()));
}

namespace std::execution {
  // [exec.queries], queries
  enum class forward_progress_guarantee;
  namespace queries { // exposition only
    struct get_scheduler_t;
    struct get_delegatee_scheduler_t;
    struct get_forward_progress_guarantee_t;
    template<class CPO>
      struct get_completion_scheduler_t;
  }
  using queries::get_scheduler_t;
  using queries::get_delegatee_scheduler_t;
  using queries::get_forward_progress_guarantee_t;
  using queries::get_completion_scheduler_t;
  inline constexpr get_scheduler_t get_scheduler{};
  inline constexpr get_delegatee_scheduler_t get_delegatee_scheduler{};
  inline constexpr get_forward_progress_guarantee_t get_forward_progress_guarantee{};
  template<class CPO>
    inline constexpr get_completion_scheduler_t<CPO> get_completion_scheduler{};

  // [exec.sched], schedulers
  template<class S>
    concept scheduler = see below;

  // [exec.recv], receivers
  template<class R>
    inline constexpr bool enable_receiver = see below;
  template<class R>
    concept receiver = see below;
  template<class R, class Completions>
    concept receiver_of = see below;
  namespace receivers { // exposition only
    struct set_value_t;
    struct set_error_t;
    struct set_stopped_t;
  }
  using receivers::set_value_t;
  using receivers::set_error_t;
  using receivers::set_stopped_t;
  inline constexpr set_value_t set_value{};
  inline constexpr set_error_t set_error{};
  inline constexpr set_stopped_t set_stopped{};

  // [exec.opstate], operation states
  template<class O>
    concept operation_state = see below;
  namespace op-state { // exposition only
    struct start_t;
  }
  using op-state::start_t;
  inline constexpr start_t start{};

  // [exec.snd], senders
  template<class S>
    inline constexpr bool enable_sender = see below;
  template<class S>
    concept sender = see below;
  template<class S, class E = empty_env>
    concept sender_in = see below;
  template<class S, class R>
    concept sender_to = see below;
  template<class S, class Sig, class E = empty_env>
    concept sender_of = see below;
  template<class... Ts>
    struct type-list; // exposition only
  template<class S, class E = empty_env>
    using single-sender-value-type = see below; // exposition only
  template<class S, class E = empty_env>
    concept single-sender = see below; // exposition only

  // [exec.getcomplsigs], completion signatures
  namespace completion-signatures { // exposition only
    struct get_completion_signatures_t;
  }
  using completion-signatures::get_completion_signatures_t;
  inline constexpr get_completion_signatures_t get_completion_signatures{};
  template<class S, class E = empty_env>
      requires sender_in<S, E>
    using completion_signatures_of_t = call-result-t<get_completion_signatures_t, S, E>;
  template<class... Ts>
    using decayed-tuple = tuple<decay_t<Ts>...>; // exposition only
  template<class... Ts>
    using variant-or-empty = see below; // exposition only
  template<class S, class E = empty_env,
           template<class...> class Tuple = decayed-tuple,
           template<class...> class Variant = variant-or-empty>
      requires sender_in<S, E>
    using value_types_of_t = see below;
  template<class S, class E = empty_env,
           template<class...> class Variant = variant-or-empty>
      requires sender_in<S, E>
    using error_types_of_t = see below;
  template<class S, class E = empty_env>
      requires sender_in<S, E>
    inline constexpr bool sends_stopped = see below;

  // [exec.connect], the connect sender algorithm
  namespace senders-connect { // exposition only
    struct connect_t;
  }
  using senders-connect::connect_t;
  inline constexpr connect_t connect{};
  template<class S, class R>
    using connect_result_t = decltype(connect(declval<S>(), declval<R>()));

  // [exec.factories], sender factories
  namespace senders-factories { // exposition only
    struct schedule_t;
    struct transfer_just_t;
  }
  inline constexpr unspecified just{};
  inline constexpr unspecified just_error{};
  inline constexpr unspecified just_stopped{};
  using senders-factories::schedule_t;
  using senders-factories::transfer_just_t;
  inline constexpr schedule_t schedule{};
  inline constexpr transfer_just_t transfer_just{};
  inline constexpr unspecified read{};
  template<scheduler S>
    using schedule_result_t = decltype(schedule(declval<S>()));

  // [exec.adapt], sender adaptors
  namespace sender-adaptor-closure { // exposition only
    template<class-type D>
      struct sender_adaptor_closure { };
  }
  using sender-adaptor-closure::sender_adaptor_closure;
  namespace sender-adaptors { // exposition only
    struct on_t;
    struct transfer_t;
    struct schedule_from_t;
    struct then_t;
    struct upon_error_t;
    struct upon_stopped_t;
    struct let_value_t;
    struct let_error_t;
    struct let_stopped_t;
    struct bulk_t;
    struct split_t;
    struct when_all_t;
    struct when_all_with_variant_t;
    struct transfer_when_all_t;
    struct transfer_when_all_with_variant_t;
    struct into_variant_t;
    struct stopped_as_optional_t;
    struct stopped_as_error_t;
    struct ensure_started_t;
  }
  using sender-adaptors::on_t;
  using sender-adaptors::transfer_t;
  using sender-adaptors::schedule_from_t;
  using sender-adaptors::then_t;
  using sender-adaptors::upon_error_t;
  using sender-adaptors::upon_stopped_t;
  using sender-adaptors::let_value_t;
  using sender-adaptors::let_error_t;
  using sender-adaptors::let_stopped_t;
  using sender-adaptors::bulk_t;
  using sender-adaptors::split_t;
  using sender-adaptors::when_all_t;
  using sender-adaptors::when_all_with_variant_t;
  using sender-adaptors::transfer_when_all_t;
  using sender-adaptors::transfer_when_all_with_variant_t;
  using sender-adaptors::into_variant_t;
  using sender-adaptors::stopped_as_optional_t;
  using sender-adaptors::stopped_as_error_t;
  using sender-adaptors::ensure_started_t;
  inline constexpr on_t on{};
  inline constexpr transfer_t transfer{};
  inline constexpr schedule_from_t schedule_from{};
  inline constexpr then_t then{};
  inline constexpr upon_error_t upon_error{};
  inline constexpr upon_stopped_t upon_stopped{};
  inline constexpr let_value_t let_value{};
  inline constexpr let_error_t let_error{};
  inline constexpr let_stopped_t let_stopped{};
  inline constexpr bulk_t bulk{};
  inline constexpr split_t split{};
  inline constexpr when_all_t when_all{};
  inline constexpr when_all_with_variant_t when_all_with_variant{};
  inline constexpr transfer_when_all_t transfer_when_all{};
  inline constexpr transfer_when_all_with_variant_t transfer_when_all_with_variant{};
  inline constexpr into_variant_t into_variant{};
  inline constexpr stopped_as_optional_t stopped_as_optional{};
  inline constexpr stopped_as_error_t stopped_as_error{};
  inline constexpr ensure_started_t ensure_started{};

  // [exec.consumers], sender consumers
  namespace sender-consumers { // exposition only
    struct start_detached_t;
  }
  using sender-consumers::start_detached_t;
  inline constexpr start_detached_t start_detached{};

  // [exec.utils], sender and receiver utilities
  // [exec.utils.rcvr.adptr]
  template<class-type Derived, receiver Base = unspecified> // arguments are not associated entities ([lib.tmpl-heads])
    class receiver_adaptor;
  template<class Fn>
    concept completion-signature = see below; // exposition only

  // [exec.utils.cmplsigs]
  template<completion-signature... Fns>
    struct completion_signatures {};
  template<class... Args> // exposition only
    using default-set-value = completion_signatures<set_value_t(Args...)>;
  template<class Err> // exposition only
    using default-set-error = completion_signatures<set_error_t(Err)>;
  template<class Sigs> // exposition only
    concept valid-completion-signatures = see below;

  // [exec.utils.mkcmplsigs]
  template<sender Sndr,
           class Env = empty_env,
           valid-completion-signatures AddlSigs = completion_signatures<>,
           template<class...> class SetValue = see below,
           template<class> class SetError = see below,
           valid-completion-signatures SetStopped = completion_signatures<set_stopped_t()>>
      requires sender_in<Sndr, Env>
    using make_completion_signatures = completion_signatures<see below>;

  // [exec.ctx], execution resources
  class run_loop;
}

namespace std::this_thread {
  // [exec.queries], queries
  namespace queries { // exposition only
    struct execute_may_block_caller_t;
  }
  using queries::execute_may_block_caller_t;
  inline constexpr execute_may_block_caller_t execute_may_block_caller{};

  namespace this-thread { // exposition only
    struct sync-wait-env; // exposition only
    template<class S>
        requires sender_in<S, sync-wait-env>
      using sync-wait-type = see below; // exposition only
    template<class S>
      using sync-wait-with-variant-type = see below; // exposition only
    struct sync_wait_t;
    struct sync_wait_with_variant_t;
  }
  using this-thread::sync_wait_t;
  using this-thread::sync_wait_with_variant_t;
  inline constexpr sync_wait_t sync_wait{};
  inline constexpr sync_wait_with_variant_t sync_wait_with_variant{};
}

namespace std::execution {
  // [exec.execute], one-way execution
  namespace execute { // exposition only
    struct execute_t;
  }
  using execute::execute_t;
  inline constexpr execute_t execute{};

  // [exec.as.awaitable]
  namespace coro-utils { // exposition only
    struct as_awaitable_t;
  }
  using coro-utils::as_awaitable_t;
  inline constexpr as_awaitable_t as_awaitable{};

  // [exec.with.awaitable.senders]
  template<class-type Promise>
    struct with_awaitable_senders;
}
- The exposition-only type variant-or-empty<Ts...> is defined as follows:
  - If sizeof...(Ts) is greater than zero, variant-or-empty<Ts...> names the type variant<Us...> where Us... is the pack decay_t<Ts>... with duplicate types removed.
  - Otherwise, variant-or-empty<Ts...> names the exposition-only class type:

      struct empty-variant {
        empty-variant() = delete;
      };
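A non-normative sketch of how the deduplication above might be implemented: decay each type, drop duplicates left to right, and fall back to a non-constructible empty-variant type when the pack is empty. The names variant_or_empty, empty_variant, dedup, and contains are illustrative stand-ins (the exposition-only names in the text use hyphens, which are not valid identifiers).

```cpp
#include <type_traits>
#include <variant>

// Stand-in for the exposition-only empty-variant: not constructible.
struct empty_variant { empty_variant() = delete; };

// True if T is one of Us...
template <class T, class... Us>
inline constexpr bool contains = (std::is_same_v<T, Us> || ...);

// Accumulate Ts... into a variant, skipping types already present.
template <class Out, class... Ts> struct dedup;
template <class... Us>
struct dedup<std::variant<Us...>> { using type = std::variant<Us...>; };
template <class... Us, class T, class... Ts>
struct dedup<std::variant<Us...>, T, Ts...>
    : std::conditional_t<contains<T, Us...>,
                         dedup<std::variant<Us...>, Ts...>,
                         dedup<std::variant<Us..., T>, Ts...>> {};

template <class... Ts>
using variant_or_empty = std::conditional_t<
    sizeof...(Ts) == 0, empty_variant,
    typename dedup<std::variant<>, std::decay_t<Ts>...>::type>;

// decay_t<const int&> is int, so the duplicate collapses away.
static_assert(std::is_same_v<variant_or_empty<int, const int&, char>,
                             std::variant<int, char>>);
static_assert(std::is_same_v<variant_or_empty<>, empty_variant>);
```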
11.5. Queries [exec.queries]
11.5.1. std::get_env [exec.get.env]

- get_env is a customization point object. For some subexpression o of type O, get_env(o) is expression-equivalent to:
  - tag_invoke(std::get_env, const_cast<const O&>(o)) if that expression is well-formed.
    - Mandates: The type of the expression above satisfies queryable ([exec.queryable]).
  - Otherwise, empty_env{}.
- The value of get_env(o) shall be valid while o is valid.
- When passed a sender object, get_env returns the sender’s attributes. When passed a receiver, get_env returns the receiver’s environment.
11.5.2. std::forwarding_query [exec.fwd.env]

- std::forwarding_query asks a query object whether it should be forwarded through queryable adaptors.
- The name std::forwarding_query denotes a query object. For some query object q of type Q, std::forwarding_query(q) is expression-equivalent to:
  - mandate-nothrow-call(tag_invoke, std::forwarding_query, q) if that expression is well-formed.
    - Mandates: The expression above has type bool and is a core constant expression if q is a core constant expression.
  - Otherwise, true if derived_from<Q, std::forwarding_query_t> is true.
  - Otherwise, false.
- For a queryable object o, let FWD-QUERIES(o) be a queryable object such that for a query object q and a pack of subexpressions as, the expression q(FWD-QUERIES(o), as...) is ill-formed if forwarding_query(q) is false; otherwise, it is expression-equivalent to q(o, as...).
11.5.3. std::get_allocator [exec.get.allocator]

- get_allocator asks an object for its associated allocator.
- The name get_allocator denotes a query object. For some subexpression r, get_allocator(r) is expression-equivalent to mandate-nothrow-call(tag_invoke, std::get_allocator, as_const(r)).
  - Mandates: The type of the expression above satisfies Allocator.
- std::forwarding_query(std::get_allocator) is true.
- get_allocator() (with no arguments) is expression-equivalent to execution::read(std::get_allocator) ([exec.read]).
11.5.4. std::get_stop_token [exec.get.stop.token]

- get_stop_token asks an object for an associated stop token.
- The name get_stop_token denotes a query object. For some subexpression r, get_stop_token(r) is expression-equivalent to:
  - mandate-nothrow-call(tag_invoke, std::get_stop_token, as_const(r)), if this expression is well-formed.
    - Mandates: The type of the expression above satisfies stoppable_token.
  - Otherwise, never_stop_token{}.
- std::forwarding_query(std::get_stop_token) is true.
- get_stop_token() (with no arguments) is expression-equivalent to execution::read(std::get_stop_token) ([exec.read]).
11.5.5. execution::get_scheduler [exec.get.scheduler]

- get_scheduler asks an object for its associated scheduler.
- The name get_scheduler denotes a query object. For some subexpression r, get_scheduler(r) is expression-equivalent to mandate-nothrow-call(tag_invoke, get_scheduler, as_const(r)).
  - Mandates: The type of the expression above satisfies scheduler.
- std::forwarding_query(std::get_scheduler) is true.
- get_scheduler() (with no arguments) is expression-equivalent to execution::read(get_scheduler) ([exec.read]).
11.5.6. execution::get_delegatee_scheduler [exec.get.delegatee.scheduler]

- get_delegatee_scheduler asks an object for a scheduler that can be used to delegate work to for the purpose of forward progress delegation.
- The name get_delegatee_scheduler denotes a query object. For some subexpression r, get_delegatee_scheduler(r) is expression-equivalent to mandate-nothrow-call(tag_invoke, get_delegatee_scheduler, as_const(r)).
  - Mandates: The type of the expression above satisfies scheduler.
- std::forwarding_query(std::get_delegatee_scheduler) is true.
- get_delegatee_scheduler() (with no arguments) is expression-equivalent to execution::read(get_delegatee_scheduler) ([exec.read]).
11.5.7. execution::get_forward_progress_guarantee [exec.get.forward.progress.guarantee]

    enum class forward_progress_guarantee {
      concurrent,
      parallel,
      weakly_parallel
    };

- get_forward_progress_guarantee asks a scheduler about the forward progress guarantees of execution agents created by that scheduler.
- The name get_forward_progress_guarantee denotes a query object. For some subexpression s, let S be decltype((s)). If S does not satisfy scheduler, get_forward_progress_guarantee is ill-formed. Otherwise, get_forward_progress_guarantee(s) is expression-equivalent to:
  - mandate-nothrow-call(tag_invoke, get_forward_progress_guarantee, as_const(s)), if this expression is well-formed.
    - Mandates: The type of the expression above is forward_progress_guarantee.
  - Otherwise, forward_progress_guarantee::weakly_parallel.
- If get_completion_scheduler is replaced by: if get_forward_progress_guarantee(s) for some scheduler s returns forward_progress_guarantee::concurrent, all execution agents created by that scheduler shall provide the concurrent forward progress guarantee. If it returns forward_progress_guarantee::parallel, all execution agents created by that scheduler shall provide at least the parallel forward progress guarantee.
11.5.8. this_thread::execute_may_block_caller [exec.execute.may.block.caller]

- this_thread::execute_may_block_caller asks a scheduler s whether a call to execute(s, f) with any invocable f may block the thread where such a call occurs.
- The name this_thread::execute_may_block_caller denotes a query object. For some subexpression s, let S be decltype((s)). If S does not satisfy scheduler, this_thread::execute_may_block_caller is ill-formed. Otherwise, this_thread::execute_may_block_caller(s) is expression-equivalent to:
  - mandate-nothrow-call(tag_invoke, this_thread::execute_may_block_caller, as_const(s)), if this expression is well-formed.
    - Mandates: The type of the expression above is bool.
  - Otherwise, true.
- If this_thread::execute_may_block_caller(s) for some scheduler s returns false, no execute(s, f) call with some invocable f shall block the calling thread.
11.5.9. execution::get_completion_scheduler [exec.completion.scheduler]

- get_completion_scheduler<completion-tag> obtains the completion scheduler associated with a completion tag from a sender’s attributes.
- The name get_completion_scheduler denotes a query object template. For some subexpression q, let Q be decltype((q)). If the template argument Tag in get_completion_scheduler<Tag>(q) is not one of set_value_t, set_error_t, or set_stopped_t, get_completion_scheduler<Tag>(q) is ill-formed. Otherwise, get_completion_scheduler<Tag>(q) is expression-equivalent to mandate-nothrow-call(tag_invoke, get_completion_scheduler<Tag>, as_const(q)) if this expression is well-formed.
  - Mandates: The type of the expression above satisfies scheduler.
- If, for some sender s and completion function C that has an associated completion tag Tag, get_completion_scheduler<Tag>(get_env(s)) is well-formed and results in a scheduler sch, and the sender s invokes C(r, args...), for some receiver r that has been connected to s, with additional arguments args..., on an execution agent that does not belong to the associated execution resource of sch, the behavior is undefined.
- The expression forwarding_query(get_completion_scheduler<CPO>) has value true.
11.6. Schedulers [exec.sched]

- The scheduler concept defines the requirements of a scheduler type ([async.ops]). schedule is a customization point object that accepts a scheduler. A valid invocation of schedule is a schedule-expression.

    template<class S>
      concept scheduler =
        queryable<S> &&
        requires(S&& s, const get_completion_scheduler_t<set_value_t> tag) {
          { schedule(std::forward<S>(s)) } -> sender;
          { tag_invoke(tag, std::get_env(schedule(std::forward<S>(s)))) }
              -> same_as<remove_cvref_t<S>>;
        } &&
        equality_comparable<remove_cvref_t<S>> &&
        copy_constructible<remove_cvref_t<S>>;

- Let S be the type of a scheduler and let E be the type of an execution environment for which sender_in<schedule_result_t<S>, E> is true. Then sender_of<schedule_result_t<S>, set_value_t(), E> shall be true.
- None of a scheduler’s copy constructor, destructor, equality comparison, or swap member functions shall exit via an exception.
- None of these member functions, nor a scheduler type’s schedule function, shall introduce data races as a result of concurrent invocations of those functions from different threads.
- For any two (possibly const) values s1 and s2 of some scheduler type S, s1 == s2 shall return true only if both s1 and s2 share the same associated execution resource.
- For a given scheduler expression s, the expression get_completion_scheduler<set_value_t>(std::get_env(schedule(s))) shall compare equal to s.
- A scheduler type’s destructor shall not block pending completion of any receivers connected to the sender objects returned from schedule. The ability to wait for completion of submitted function objects can be provided by the associated execution resource of the scheduler.
11.7. Receivers [exec.recv]

11.7.1. Receiver concepts [exec.recv.concepts]

- A receiver represents the continuation of an asynchronous operation. The receiver concept defines the requirements for a receiver type ([async.ops]). The receiver_of concept defines the requirements for a receiver type that is usable as the first argument of a set of completion operations corresponding to a set of completion signatures. The get_env customization point is used to access a receiver’s associated environment.

    template<class R>
      inline constexpr bool enable_receiver = requires { typename R::is_receiver; };

    template<class R>
      concept receiver =
        enable_receiver<remove_cvref_t<R>> &&
        requires(const remove_cvref_t<R>& r) {
          { get_env(r) } -> queryable;
        } &&
        move_constructible<remove_cvref_t<R>> &&  // rvalues are movable, and
        constructible_from<remove_cvref_t<R>, R>; // lvalues are copyable

    template<class Signature, class R>
      concept valid-completion-for = // exposition only
        requires(Signature* sig) {
          []<class Tag, class... Args>(Tag(*)(Args...))
              requires callable<Tag, remove_cvref_t<R>, Args...>
          {}(sig);
        };

    template<class R, class Completions>
      concept receiver_of =
        receiver<R> &&
        requires(Completions* completions) {
          []<valid-completion-for<R>... Sigs>(completion_signatures<Sigs...>*)
          {}(completions);
        };

- Remarks: Pursuant to [namespace.std], users can specialize enable_receiver to true for cv-unqualified program-defined types that model receiver, and false for types that do not. Such specializations shall be usable in constant expressions ([expr.const]) and have type const bool.
- Let r be a receiver and let op_state be an operation state associated with an asynchronous operation created by connecting r with a sender. Let token be a stop token equal to get_stop_token(get_env(r)). token shall remain valid for the duration of the asynchronous operation’s lifetime ([async.ops]). This means that, unless it knows about further guarantees provided by the type of receiver r, the implementation of op_state can not use token after it executes a completion operation. This also implies that any stop callbacks registered on token must be destroyed before the invocation of the completion operation.
11.7.2. execution::set_value [exec.set.value]

- set_value is a value completion function ([async.ops]). Its associated completion tag is set_value_t. The expression set_value(R, Vs...) for some subexpression R and pack of subexpressions Vs is ill-formed if R is an lvalue or a const rvalue. Otherwise, it is expression-equivalent to mandate-nothrow-call(tag_invoke, set_value, R, Vs...).
11.7.3. execution::set_error [exec.set.error]

- set_error is an error completion function. Its associated completion tag is set_error_t. The expression set_error(R, E) for some subexpressions R and E is ill-formed if R is an lvalue or a const rvalue. Otherwise, it is expression-equivalent to mandate-nothrow-call(tag_invoke, set_error, R, E).
11.7.4. execution::set_stopped [exec.set.stopped]

- set_stopped is a stopped completion function. Its associated completion tag is set_stopped_t. The expression set_stopped(R) for some subexpression R is ill-formed if R is an lvalue or a const rvalue. Otherwise, it is expression-equivalent to mandate-nothrow-call(tag_invoke, set_stopped, R).
11.8. Operation states [exec.opstate]

- The operation_state concept defines the requirements of an operation state type ([async.ops]).

    template<class O>
      concept operation_state =
        queryable<O> &&
        is_object_v<O> &&
        requires (O& o) {
          { start(o) } noexcept;
        };

- If an operation_state object is moved during the lifetime of its asynchronous operation ([async.ops]), the behavior is undefined.
- Library-provided operation state types are non-movable.
11.8.1. execution::start [exec.opstate.start]

- The name start denotes a customization point object that starts ([async.ops]) the asynchronous operation associated with the operation state object. The expression start(O) for some subexpression O is ill-formed if O is an rvalue. Otherwise, it is expression-equivalent to:

    mandate-nothrow-call(tag_invoke, start, O)

- If the function selected by tag_invoke does not start the asynchronous operation associated with the operation state O, the behavior of calling start(O) is undefined.
11.9. Senders [exec.snd]

11.9.1. Sender concepts [exec.snd.concepts]

- The sender concept defines the requirements for a sender type ([async.ops]). The sender_in concept defines the requirements for a sender type that can create asynchronous operations given an associated environment type. The sender_to concept defines the requirements for a sender type that can connect with a specific receiver type. The get_env customization point object is used to access a sender’s associated attributes. The connect customization point object is used to connect ([async.ops]) a sender and a receiver to produce an operation state.

    template<class Sigs>
      concept valid-completion-signatures = see below;

    template<class S>
      inline constexpr bool enable_sender = requires { typename S::is_sender; };
    template<is-awaitable<env-promise<empty_env>> S> // [exec.awaitables]
      inline constexpr bool enable_sender<S> = true;

    template<class S>
      concept sender =
        enable_sender<remove_cvref_t<S>> &&
        requires(const remove_cvref_t<S>& s) {
          { get_env(s) } -> queryable;
        } &&
        move_constructible<remove_cvref_t<S>> &&  // rvalues are movable, and
        constructible_from<remove_cvref_t<S>, S>; // lvalues are copyable

    template<class S, class E = empty_env>
      concept sender_in =
        sender<S> &&
        requires(S&& s, E&& e) {
          { get_completion_signatures(std::forward<S>(s), std::forward<E>(e)) }
              -> valid-completion-signatures;
        };

    template<class S, class R>
      concept sender_to =
        sender_in<S, env_of_t<R>> &&
        receiver_of<R, completion_signatures_of_t<S, env_of_t<R>>> &&
        requires(S&& s, R&& r) {
          connect(std::forward<S>(s), std::forward<R>(r));
        };

- A type Sigs satisfies and models the exposition-only concept valid-completion-signatures if it names a specialization of the completion_signatures class template.
- Remarks: Pursuant to [namespace.std], users can specialize enable_sender to true for cv-unqualified program-defined types that model sender, and false for types that do not. Such specializations shall be usable in constant expressions ([expr.const]) and have type const bool.
- The sender_of concept defines the requirements for a sender type that completes with the completion signature specified for the given completion function.

    template<class>
      struct sender-of-helper; // exposition only
    template<class R, class... As>
      struct sender-of-helper<R(As...)> {
        using tag = R;
        template<class... Bs>
          using as-sig = R(Bs...);
      };

    template<class S, class Sig, class E = empty_env>
      concept sender_of =
        sender_in<S, E> &&
        MATCHING-SIG( // see [exec.general]
          Sig,
          gather-signatures< // see [exec.utils.cmplsigs]
            typename sender-of-helper<Sig>::tag, S, E,
            sender-of-helper<Sig>::template as-sig,
            type_identity_t>);

- [Example:

    auto s1 = just() | then([]{});
    using S1 = decltype(s1);
    static_assert(sender_of<S1, set_value_t()>);
    static_assert(sender_of<S1, set_error_t(exception_ptr)>);
    static_assert(!sender_of<S1, set_stopped_t()>);

    auto s2 = s1 | let_error([](auto) { return just('a'); });
    using S2 = decltype(s2);
    static_assert(!sender_of<S2, set_value_t()>);
    static_assert(!sender_of<S2, set_value_t(char)>);
    static_assert(!sender_of<S2, set_error_t(exception_ptr)>);
    static_assert(!sender_of<S2, set_stopped_t()>);

  -- end example]
- For a type T, SET-VALUE-SIG(T) names the type set_value_t() if T is cv void; otherwise, it names the type set_value_t(T).
- Library-provided sender types:
  - Always expose an overload of a customization of connect that accepts an rvalue sender.
  - Only expose an overload of a customization of connect that accepts an lvalue sender if they model copy_constructible.
  - Model copy_constructible if they satisfy copy_constructible.
11.9.2. Awaitable helpers [exec.awaitables]

- The sender concepts recognize awaitables as senders. For this clause ([exec]), an awaitable is an expression that would be well-formed as the operand of a co_await expression within a given context.
- For a subexpression c, let GET-AWAITER(c, p) be expression-equivalent to the series of transformations and conversions applied to c as the operand of an await-expression in a coroutine, resulting in lvalue e as described by [expr.await]/3.2-4, where p is an lvalue referring to the coroutine’s promise type, P. This includes the invocation of the promise type’s await_transform member if any, the invocation of the operator co_await picked by overload resolution if any, and any necessary implicit conversions and materializations. [Editorial note: I have opened cwg#250 to give these transformations a term-of-art so we can more easily refer to it here.]
- Let is-awaitable be the following exposition-only concept:

    template<class T>
      concept await-suspend-result = see below;

    template<class A, class P>
      concept is-awaiter = // exposition only
        requires(A& a, coroutine_handle<P> h) {
          a.await_ready() ? 1 : 0;
          { a.await_suspend(h) } -> await-suspend-result;
          a.await_resume();
        };

    template<class C, class P>
      concept is-awaitable =
        requires(C(*fc)() noexcept, P& p) {
          { GET-AWAITER(fc(), p) } -> is-awaiter<P>;
        };

  await-suspend-result<T> is true if and only if one of the following is true:
  - T is void, or
  - T is bool, or
  - T is a specialization of coroutine_handle.
- For a subexpression c such that decltype((c)) is type C, and an lvalue p of type P, await-result-type<C, P> names the type decltype(GET-AWAITER(c, p).await_resume()).
- Let with-await-transform be the exposition-only class template:

    template<class Derived>
      struct with-await-transform {
        template<class T>
          T&& await_transform(T&& value) noexcept {
            return std::forward<T>(value);
          }

        template<class T>
            requires tag_invocable<as_awaitable_t, T, Derived&>
          auto await_transform(T&& value)
            noexcept(nothrow_tag_invocable<as_awaitable_t, T, Derived&>)
            -> tag_invoke_result_t<as_awaitable_t, T, Derived&> {
            return tag_invoke(as_awaitable, std::forward<T>(value),
                              static_cast<Derived&>(*this));
          }
      };

- Let env-promise be the exposition-only class template:

    template<class Env>
      struct env-promise : with-await-transform<env-promise<Env>> {
        unspecified get_return_object() noexcept;
        unspecified initial_suspend() noexcept;
        unspecified final_suspend() noexcept;
        void unhandled_exception() noexcept;
        void return_void() noexcept;
        coroutine_handle<> unhandled_stopped() noexcept;
        friend const Env& tag_invoke(get_env_t, const env-promise&) noexcept;
      };

  Specializations of env-promise are only used for the purpose of type computation; its members need not be defined.
11.9.3. execution::get_completion_signatures [exec.getcomplsigs]

- get_completion_signatures is a customization point object. Let s be an expression such that decltype((s)) is S, and let e be an expression such that decltype((e)) is E. Then get_completion_signatures(s, e) is expression-equivalent to:
  - tag_invoke_result_t<get_completion_signatures_t, S, E>{} if that expression is well-formed,
    - Mandates: valid-completion-signatures<Sigs>, where Sigs names the type tag_invoke_result_t<get_completion_signatures_t, S, E>.
  - Otherwise, remove_cvref_t<S>::completion_signatures{} if that expression is well-formed,
    - Mandates: valid-completion-signatures<Sigs>, where Sigs names the type remove_cvref_t<S>::completion_signatures.
  - Otherwise, if is-awaitable<S, env-promise<E>> is true, then:

      completion_signatures<
        SET-VALUE-SIG(await-result-type<S, env-promise<E>>), // see [exec.snd.concepts]
        set_error_t(exception_ptr),
        set_stopped_t()>{}

  - Otherwise, get_completion_signatures(s, e) is ill-formed.
- Let r be an rvalue receiver of type R, and let S be the type of a sender such that sender_in<S, env_of_t<R>> is true. Let Sigs... be the template arguments of the completion_signatures specialization named by completion_signatures_of_t<S, env_of_t<R>>. Let CSO be a completion function. If sender S or its operation state cause the expression CSO(r, args...) to be potentially evaluated ([basic.def.odr]), then there shall be a signature Sig in Sigs... such that MATCHING-SIG(tag_t<CSO>(decltype(args)...), Sig) is true ([exec.general]).
11.9.4. execution::connect [exec.connect]

1. connect connects ([async.ops]) a sender with a receiver.

2. The name connect denotes a customization point object. For subexpressions s and r, let S be decltype((s)) and R be decltype((r)), and let DS and DR be the decayed types of S and R, respectively.

3. Let connect-awaitable-promise be the following class:

```
struct connect-awaitable-promise
  : with-await-transform<connect-awaitable-promise> {
  DR& rcvr; // exposition only

  connect-awaitable-promise(DS&, DR& r) noexcept : rcvr(r) {}

  suspend_always initial_suspend() noexcept { return {}; }
  [[noreturn]] suspend_always final_suspend() noexcept { std::terminate(); }
  [[noreturn]] void unhandled_exception() noexcept { std::terminate(); }
  [[noreturn]] void return_void() noexcept { std::terminate(); }

  coroutine_handle<> unhandled_stopped() noexcept {
    set_stopped((DR&&) rcvr);
    return noop_coroutine();
  }

  operation-state-task get_return_object() noexcept {
    return operation-state-task{
      coroutine_handle<connect-awaitable-promise>::from_promise(*this)};
  }

  friend auto tag_invoke(get_env_t, connect-awaitable-promise& self)
    noexcept(nothrow-callable<get_env_t, const DR&>)
    -> env_of_t<const DR&> {
    return get_env(self.rcvr);
  }
};
```

4. Let operation-state-task be the following class:

```
struct operation-state-task {
  using promise_type = connect-awaitable-promise;
  coroutine_handle<> coro; // exposition only

  explicit operation-state-task(coroutine_handle<> h) noexcept : coro(h) {}
  operation-state-task(operation-state-task&& o) noexcept
    : coro(exchange(o.coro, {})) {}
  ~operation-state-task() { if (coro) coro.destroy(); }

  friend void tag_invoke(start_t, operation-state-task& self) noexcept {
    self.coro.resume();
  }
};
```

5. Let V name the type await-result-type<DS, connect-awaitable-promise>, let Sigs name the type:

```
completion_signatures<
  SET-VALUE-SIG(V), // see [exec.snd.concepts]
  set_error_t(exception_ptr),
  set_stopped_t()>
```

and let connect-awaitable be an exposition-only coroutine defined as follows:

```
template<class Fun, class... Ts>
auto suspend-complete(Fun fun, Ts&&... as) noexcept { // exposition only
  auto fn = [&, fun]() noexcept { fun(std::forward<Ts>(as)...); };

  struct awaiter {
    decltype(fn) fn_;

    static bool await_ready() noexcept { return false; }
    void await_suspend(coroutine_handle<>) noexcept { fn_(); }
    [[noreturn]] void await_resume() noexcept { unreachable(); }
  };
  return awaiter{fn};
}

operation-state-task connect-awaitable(DS s, DR r)
  requires receiver_of<DR, Sigs> {
  exception_ptr ep;
  try {
    if constexpr (same_as<V, void>) {
      co_await std::move(s);
      co_await suspend-complete(set_value, std::move(r));
    } else {
      co_await suspend-complete(set_value, std::move(r), co_await std::move(s));
    }
  } catch (...) {
    ep = current_exception();
  }
  co_await suspend-complete(set_error, std::move(r), std::move(ep));
}
```

6. If S does not satisfy sender or if R does not satisfy receiver, connect(s, r) is ill-formed. Otherwise, the expression connect(s, r) is expression-equivalent to:

- tag_invoke(connect, s, r) if connectable-with-tag-invoke<S, R> is modeled.
  - Mandates: The type of the tag_invoke expression above satisfies operation_state.
- Otherwise, connect-awaitable(s, r) if that expression is well-formed.
- Otherwise, connect(s, r) is ill-formed.
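The first bullet of connect's dispatch relies on the tag_invoke pattern: the customization point object calls an unqualified tag_invoke, passing itself as the tag, and argument-dependent lookup finds the customization next to the user's sender type. A minimal, non-normative model (all names illustrative):

```cpp
#include <utility>

namespace model {
  // The CPO passes itself as the tag; the real overload is found by ADL.
  struct connect_t {
    template<class S, class R>
    auto operator()(S&& s, R&& r) const {
      return tag_invoke(*this, std::forward<S>(s), std::forward<R>(r));
    }
  };
  inline constexpr connect_t connect{};
}

namespace lib {
  struct op     { int value; };  // stand-in operation state
  struct sender { int value; };

  // Selected because lib is an associated namespace of lib::sender.
  template<class R>
  op tag_invoke(model::connect_t, sender s, R&&) { return {s.value}; }
}
```

Calling model::connect(lib::sender{9}, some_receiver) resolves to lib::tag_invoke without model ever naming lib.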
11.9.5. Sender factories [exec.factories]

11.9.5.1. execution::schedule [exec.schedule]

1. schedule obtains a schedule-sender ([async.ops]) from a scheduler.

2. The name schedule denotes a customization point object. For some subexpression s, the expression schedule(s) is expression-equivalent to:

- tag_invoke(schedule, s), if that expression is valid. If the function selected by tag_invoke does not return a sender whose set_value completion scheduler is equivalent to s, the behavior of calling schedule(s) is undefined.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, schedule(s) is ill-formed.
-
11.9.5.2. execution :: just
, execution :: just_error
, execution :: just_stopped
[exec.just]
-
is a factory for senders whose asynchronous operations complete synchronously in their start operation with a value completion operation.just
is a factory for senders whose asynchronous operations complete synchronously in their start operation with an error completion operation.just_error
is a factory for senders whose asynchronous operations complete synchronously in their start operation with a stopped completion operation.just_stopped -
Let
be the class template:just - sender template < class Tag , movable - value ... Ts > struct just - sender { // exposition only using is_sender = unspecified ; using completion_signatures = execution :: completion_signatures < Tag ( Ts ...) > ; tuple < Ts ... > vs_ ; // exposition only template < class R > struct operation { // exposition only tuple < Ts ... > vs_ ; // exposition only R r_ ; // exposition only friend void tag_invoke ( start_t , operation & s ) noexcept { apply ([ & s ]( Ts & ... values ) { Tag ()( std :: move ( s . r_ ), std :: move ( values )...); }, s . vs_ ); } }; template < receiver_of < completion_signatures > R > requires ( copy_constructible < Ts > && ...) friend operation < decay_t < R >> tag_invoke ( connect_t , const just - sender & s , R && r ) { return { s . vs_ , std :: forward < R > ( r ) }; } template < receiver_of < completion_signatures > R > friend operation < decay_t < R >> tag_invoke ( connect_t , just - sender && s , R && r ) { return { std :: move ( s . vs_ ), std :: forward < R > ( r ) }; } }; -
The name
denotes a customization point object. For some pack of subexpressionsjust
, letvs
be the template paramter packVs
.decltype (( vs ))
is expression-equivalent tojust ( vs ...)
.just - sender < set_value_t , remove_cvref_t < Vs > ... > ({ vs ...}) -
The name
denotes a customization point object. For some subexpressionjust_error
, leterr
beErr
.decltype (( err ))
is expression-equivalent tojust_error ( err )
.just - sender < set_error_t , remove_cvref_t < Err >> ({ err }) -
Then name
denotes a customization point object.just_stopped
is expression-equivalent tojust_stopped ()
.just - sender < set_stopped_t > ()
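The shape of just-sender can be sketched without the tag_invoke and receiver-concept machinery: values are stored in a tuple at construction and delivered synchronously when the operation is started. The names below (just_sender, just_operation) are illustrative, not the exposition-only entities themselves.

```cpp
#include <tuple>
#include <utility>

template<class R, class... Ts>
struct just_operation {                 // operation state: values + receiver
  std::tuple<Ts...> vs_;
  R r_;

  void start() & noexcept {
    // Deliver the stored values as a value completion, synchronously.
    std::apply([this](Ts&... values) {
      std::move(r_).set_value(std::move(values)...);
    }, vs_);
  }
};

template<class... Ts>
struct just_sender {
  std::tuple<Ts...> vs_;

  template<class R>
  just_operation<R, Ts...> connect(R r) && {
    return { std::move(vs_), std::move(r) };
  }
};
```

Starting the operation invokes the receiver's value channel immediately, which is exactly the "complete synchronously in their start operation" behavior the prose describes.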
11.9.5.3. execution::transfer_just [exec.transfer.just]

1. transfer_just is a factory for senders whose asynchronous operations execute value completion operations on an execution agent belonging to the execution resource associated with a specified scheduler.

2. The name transfer_just denotes a customization point object. For some subexpression s and pack of subexpressions vs, let S be decltype((s)) and let Vs be the template parameter pack decltype((vs)).... If S does not satisfy scheduler, or any type V in Vs does not satisfy movable-value, transfer_just(s, vs...) is ill-formed. Otherwise, transfer_just(s, vs...) is expression-equivalent to:

- tag_invoke(transfer_just, s, vs...), if that expression is valid. Let as be a pack of rvalue subexpressions of types decay_t<Vs>... referring to objects direct-initialized from vs. If the function selected by tag_invoke does not return a sender whose asynchronous operations execute value completion operations on an execution agent belonging to the execution resource associated with s, with value result datums as, the behavior of calling transfer_just(s, vs...) is undefined.
  - Mandates: sender_of<R, set_value_t(decay_t<Vs>...), E>, where R is the type of the tag_invoke expression above, and E is the type of an environment.
- Otherwise, transfer(just(vs...), s).
11.9.5.4. execution::read [exec.read]

1. read is a factory for a sender whose asynchronous operation completes synchronously in its start operation with a value completion result equal to a value read from the receiver's associated environment.

2. read is a customization point object of the unspecified class type:

```
template<class Tag>
struct read-sender; // exposition only

struct read-t { // exposition only
  template<class Tag>
  constexpr read-sender<Tag> operator()(Tag) const noexcept {
    return {};
  }
};
```

3. read-sender is the exposition-only class template:

```
template<class Tag>
struct read-sender { // exposition only
  using is_sender = unspecified;

  template<class R>
  struct operation-state { // exposition only
    R r_; // exposition only

    friend void tag_invoke(start_t, operation-state& s) noexcept {
      TRY-SET-VALUE(std::move(s.r_), Tag{}(get_env(s.r_)));
    }
  };

  template<receiver R>
  friend operation-state<decay_t<R>>
  tag_invoke(connect_t, read-sender, R&& r) {
    return { std::forward<R>(r) };
  }

  template<class Env>
    requires callable<Tag, Env>
  friend auto tag_invoke(get_completion_signatures_t, read-sender, Env)
    -> completion_signatures<
         set_value_t(call-result-t<Tag, Env>),
         set_error_t(exception_ptr)>; // not defined

  template<class Env>
    requires nothrow-callable<Tag, Env>
  friend auto tag_invoke(get_completion_signatures_t, read-sender, Env)
    -> completion_signatures<
         set_value_t(call-result-t<Tag, Env>)>; // not defined

  friend empty_env tag_invoke(get_env_t, const read-sender&) noexcept {
    return {};
  }
};
```

where TRY-SET-VALUE(r, e), for two subexpressions r and e, is equivalent to:

```
try {
  set_value(r, e);
} catch (...) {
  set_error(r, current_exception());
}
```

if set_value(r, e) is potentially-throwing; or set_value(r, e) otherwise.
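The TRY-SET-VALUE pattern — attempt the value completion and, if evaluating it throws, route the exception to the error channel instead — can be sketched as follows. try_set_value and recording_receiver are illustrative names, not part of the proposal.

```cpp
#include <exception>

struct recording_receiver {
  int value = 0;
  std::exception_ptr error;

  void set_value(int v) noexcept { value = v; }
  void set_error(std::exception_ptr e) noexcept { error = e; }
};

// F computes the value to send; evaluating it may throw.
template<class R, class F>
void try_set_value(R&& r, F&& compute) {
  try {
    r.set_value(compute());           // value completion
  } catch (...) {
    r.set_error(std::current_exception());  // reroute to the error channel
  }
}
```

This is why read-sender advertises a set_error_t(exception_ptr) completion only when the environment query is potentially throwing: when the query is noexcept, the catch path is statically unreachable and the signature is dropped.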
11.9.6. Sender adaptors [exec.adapt]

11.9.6.1. General [exec.adapt.general]

1. Subclause [exec.adapt] specifies a set of sender adaptors.

2. The bitwise OR operator is overloaded for the purpose of creating sender chains. The adaptors also support function call syntax with equivalent semantics.

3. Unless otherwise specified, a sender adaptor is required not to begin executing any functions that would observe or modify any of the arguments of the adaptor before the returned sender is connected with a receiver using connect, and start is called on the resulting operation state. This requirement applies to any function that is selected by the implementation of the sender adaptor.

4. Unless otherwise specified, a parent sender ([async.ops]) with a single child sender s has an associated attributes object equal to FWD-QUERIES(get_env(s)) ([exec.fwd.env]). Unless otherwise specified, a parent sender with more than one child sender has an associated attributes object equal to empty_env{}. These requirements apply to any function that is selected by the implementation of the sender adaptor.

5. Unless otherwise specified, when a parent sender is connected to a receiver r, any receiver used to connect a child sender has an associated environment equal to FWD-QUERIES(get_env(r)). This requirement applies to any sender returned from a function that is selected by the implementation of such sender adaptor.

6. For any sender type, receiver type, operation state type, queryable type, or coroutine promise type that is part of the implementation of any sender adaptor in this subclause and that is a class template, the template arguments do not contribute to the associated entities ([basic.lookup.argdep]) of a function call where a specialization of the class template is an associated entity. [Example:

```
namespace sender-adaptors { // exposition only
  template<class Sch, class S> // arguments are not associated entities ([lib.tmpl-heads])
  class on-sender {
    // ...
  };

  struct on_t {
    template<scheduler Sch, sender S>
    on-sender<Sch, S> operator()(Sch&& sch, S&& s) const {
      // ...
    }
  };
}

inline constexpr sender-adaptors::on_t on{};
```

-- end example]

7. If a sender returned from a sender adaptor specified in this subclause is specified to include set_error_t(E) among its set of completion signatures, where decay_t<E> names the type exception_ptr, but the implementation does not potentially evaluate an error completion operation with an exception_ptr argument, the implementation is allowed to omit the exception_ptr error completion signature from the set.
11.9.6.2. Sender adaptor closure objects [exec.adapt.objects]

1. A pipeable sender adaptor closure object is a function object that accepts one or more sender arguments and returns a sender. For a pipeable sender adaptor closure object C and an expression S such that decltype((S)) models sender, the following expressions are equivalent and yield a sender:

```
C(S)
S | C
```

Given an additional pipeable sender adaptor closure object D, the expression C | D produces another pipeable sender adaptor closure object E. E is a perfect forwarding call wrapper ([func.require]) with the following properties:

- Its target object is an object d of type decay_t<decltype((D))> direct-non-list-initialized with D.
- It has one bound argument entity, an object c of type decay_t<decltype((C))> direct-non-list-initialized with C.
- Its call pattern is d(c(arg)), where arg is the argument used in a function call expression of E.

The expression C | D is well-formed if and only if the initializations of the state entities of E are all well-formed.

2. An object t of type T is a pipeable sender adaptor closure object if T models derived_from<sender_adaptor_closure<T>>, T has no other base classes of type sender_adaptor_closure<U> for any other type U, and T does not model sender.

3. The template parameter D for sender_adaptor_closure can be an incomplete type. Before any expression of type cv D appears as an operand to the | operator, D shall be complete and model derived_from<sender_adaptor_closure<D>>. The behavior of an expression involving an object of type cv D as an operand to the | operator is undefined if overload resolution selects a program-defined operator| function.

4. A pipeable sender adaptor object is a customization point object that accepts a sender as its first argument and returns a sender.

5. If a pipeable sender adaptor object accepts only one argument, then it is a pipeable sender adaptor closure object.

6. If a pipeable sender adaptor object adaptor accepts more than one argument, then let s be an expression such that decltype((s)) models sender, let args... be arguments such that adaptor(s, args...) is a well-formed expression as specified in the rest of this subclause ([exec.adapt.objects]), and let BoundArgs be a pack that denotes decay_t<decltype((args))>.... The expression adaptor(args...) produces a pipeable sender adaptor closure object f that is a perfect forwarding call wrapper with the following properties:

- Its target object is a copy of adaptor.
- Its bound argument entities bound_args consist of objects of types BoundArgs... direct-non-list-initialized with std::forward<decltype((args))>(args)..., respectively.
- Its call pattern is adaptor(r, bound_args...), where r is the argument used in a function call expression of f.

The expression adaptor(args...) is well-formed if and only if the initializations of the bound argument entities of the result, as specified above, are all well-formed.
11.9.6.3. execution::on [exec.on]

1. on adapts an input sender into a sender that will start on an execution agent belonging to a particular scheduler's associated execution resource.

2. Let replace-scheduler(e, sch) be an expression denoting an object e' such that get_scheduler(e') returns a copy of sch, and tag_invoke(tag, e', args...) is expression-equivalent to tag(e, args...) for all arguments args... and for all tag whose type satisfies forwarding-query and is not get_scheduler_t.

3. The name on denotes a customization point object. For some subexpressions sch and s, let Sch be decltype((sch)) and S be decltype((s)). If Sch does not satisfy scheduler, or S does not satisfy sender, on is ill-formed. Otherwise, the expression on(sch, s) is expression-equivalent to:

- tag_invoke(on, sch, s), if that expression is valid. If the function selected above does not return a sender which starts s on an execution agent of the associated execution resource of sch when started, the behavior of calling on(sch, s) is undefined.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, constructs a sender s1. When s1 is connected with some receiver out_r, it:
  - Constructs a receiver r such that:
    - When set_value(r) is called, it calls connect(s, r2), where r2 is as specified below, which results in op_state3. It calls start(op_state3). If any of these throws an exception, it calls set_error on out_r, passing current_exception() as the second argument.
    - set_error(r, e) is expression-equivalent to set_error(out_r, e).
    - set_stopped(r) is expression-equivalent to set_stopped(out_r).
    - get_env(r) is expression-equivalent to get_env(out_r).
  - Calls schedule(sch), which results in s2. It then calls connect(s2, r), resulting in op_state2.
  - op_state2 is wrapped by a new operation state, op_state1, that is returned to the caller.
  - r2 is a receiver that wraps a reference to out_r and forwards all completion operations to it. In addition, get_env(r2) returns replace-scheduler(e, sch).
  - When start is called on op_state1, it calls start on op_state2.
  - The lifetime of op_state2, once constructed, lasts until either op_state3 is constructed or op_state1 is destroyed, whichever comes first. The lifetime of op_state3, once constructed, lasts until op_state1 is destroyed.

4. Given subexpressions s1 and e, where s1 is a sender returned from on or a copy of such, let S1 be decltype((s1)). Let E' be decltype((replace-scheduler(e, sch))). Then the type of tag_invoke(get_completion_signatures, s1, e) shall be:

```
make_completion_signatures<
  copy_cvref_t<S1, S>, E',
  make_completion_signatures<
    schedule_result_t<Sch>, E,
    completion_signatures<set_error_t(exception_ptr)>,
    no-value-completions>>;
```

where no-value-completions<As...> names the type completion_signatures<> for any set of types As....
11.9.6.4. execution::transfer [exec.transfer]

1. transfer adapts a sender into a sender with a different associated set_value completion scheduler. [Note: it results in a transition between different execution resources when executed. -- end note]

2. The name transfer denotes a customization point object. For some subexpressions sch and s, let Sch be decltype((sch)) and S be decltype((s)). If Sch does not satisfy scheduler, or S does not satisfy sender, transfer is ill-formed. Otherwise, the expression transfer(s, sch) is expression-equivalent to:

- tag_invoke(transfer, get_completion_scheduler<set_value_t>(get_env(s)), s, sch), if that expression is valid.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, tag_invoke(transfer, s, sch), if that expression is valid.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, schedule_from(sch, s).

If the function selected above does not return a sender which is a result of a call to schedule_from(sch, s2), where s2 is a sender which sends values equivalent to those sent by s, the behavior of calling transfer(s, sch) is undefined.

3. For a sender t returned from transfer(s, sch), get_env(t) shall return a queryable object q such that get_completion_scheduler<CPO>(q) returns a copy of sch, where CPO is either set_value_t or set_stopped_t. The get_completion_scheduler<set_error_t> query is not implemented, as the scheduler cannot be guaranteed in case an error is thrown while trying to schedule work on the given scheduler object. For all other query objects Q whose type satisfies forwarding-query, the expression Q(q, args...) shall be equivalent to Q(get_env(s), args...).
11.9.6.5. execution::schedule_from [exec.schedule.from]

1. schedule_from schedules work dependent on the completion of a sender onto a scheduler's associated execution resource. [Note: schedule_from is not meant to be used in user code; it is used in the implementation of transfer. -- end note]

2. The name schedule_from denotes a customization point object. For some subexpressions sch and s, let Sch be decltype((sch)) and S be decltype((s)). If Sch does not satisfy scheduler, or S does not satisfy sender, schedule_from is ill-formed. Otherwise, the expression schedule_from(sch, s) is expression-equivalent to:

- tag_invoke(schedule_from, sch, s), if that expression is valid. If the function selected by tag_invoke does not return a sender that completes on an execution agent belonging to the associated execution resource of sch and completing with the same async result ([async.ops]) as s, the behavior of calling schedule_from(sch, s) is undefined.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, constructs a sender s2. When s2 is connected with some receiver out_r, it:
  - Constructs a receiver r such that when a receiver completion operation Tag(r, args...) is called, it decay-copies args... into op_state (see below) as args'... and constructs a receiver r2 such that:
    - When set_value(r2) is called, it calls Tag(out_r, std::move(args')...).
    - set_error(r2, e) is expression-equivalent to set_error(out_r, e).
    - set_stopped(r2) is expression-equivalent to set_stopped(out_r).

    It then calls schedule(sch), resulting in a sender s3. It then calls connect(s3, r2), resulting in an operation state op_state3. It then calls start(op_state3). If any of these throws an exception, it catches it and calls set_error(out_r, current_exception()). If any of these expressions would be ill-formed, then Tag(r, args...) is ill-formed.
  - Calls connect(s, r), resulting in an operation state op_state2. If this expression would be ill-formed, connect(s2, out_r) is ill-formed.
  - Returns an operation state op_state that contains op_state2. When start(op_state) is called, it calls start(op_state2). The lifetime of op_state3 ends when op_state is destroyed.

3. Given subexpressions s2 and e, where s2 is a sender returned from schedule_from or a copy of such, let S2 be decltype((s2)) and let E be decltype((e)). Then the type of tag_invoke(get_completion_signatures, s2, e) shall be:

```
make_completion_signatures<
  copy_cvref_t<S2, S>, E,
  make_completion_signatures<
    schedule_result_t<Sch>, E,
    potentially-throwing-completions,
    no-completions>,
  value-completions,
  error-completions>;
```

where potentially-throwing-completions, no-completions, value-completions, and error-completions are defined as follows:

```
template<class... Ts>
using all-nothrow-decay-copyable =
  boolean_constant<(is_nothrow_constructible_v<decay_t<Ts>, Ts> && ...)>;

template<class... Ts>
using conjunction = boolean_constant<(Ts::value && ...)>;

using potentially-throwing-completions =
  conditional_t<
    error_types_of_t<copy_cvref_t<S2, S>, E,
                     all-nothrow-decay-copyable>::value &&
      value_types_of_t<copy_cvref_t<S2, S>, E,
                       all-nothrow-decay-copyable, conjunction>::value,
    completion_signatures<>,
    completion_signatures<set_error_t(exception_ptr)>>;

template<class...>
using no-completions = completion_signatures<>;

template<class... Ts>
using value-completions =
  completion_signatures<set_value_t(decay_t<Ts>&&...)>;

template<class T>
using error-completions =
  completion_signatures<set_error_t(decay_t<T>&&)>;
```

4. For a sender t returned from schedule_from(sch, s), get_env(t) shall return a queryable object q such that get_completion_scheduler<CPO>(q) returns a copy of sch, where CPO is either set_value_t or set_stopped_t. The get_completion_scheduler<set_error_t> query is not implemented, as the scheduler cannot be guaranteed in case an error is thrown while trying to schedule work on the given scheduler object. For all other query objects Q whose type satisfies forwarding-query, the expression Q(q, args...) shall be equivalent to Q(get_env(s), args...).
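The all-nothrow-decay-copyable test used above determines whether schedule_from must advertise a set_error_t(exception_ptr) completion: it asks whether decay-copying every result datum is nonthrowing. It can be evaluated against concrete types directly (boolean_constant is spelled std::bool_constant in standard C++; hyphenated names are replaced with underscores here):

```cpp
#include <string>
#include <type_traits>

template<class... Ts>
using all_nothrow_decay_copyable = std::bool_constant<
    (std::is_nothrow_constructible_v<std::decay_t<Ts>, Ts> && ...)>;

// Scalars decay-copy without throwing.
static_assert(all_nothrow_decay_copyable<int, double&>::value);

// Copying a std::string from an lvalue may allocate, hence may throw...
static_assert(!all_nothrow_decay_copyable<std::string&>::value);

// ...but moving from an rvalue std::string is noexcept.
static_assert(all_nothrow_decay_copyable<std::string&&>::value);
```

When the check fails for any value or error type, potentially-throwing-completions resolves to completion_signatures<set_error_t(exception_ptr)>, adding the exception_ptr error channel.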
11.9.6.6. execution::then [exec.then]

1. then attaches an invocable as a continuation for an input sender's value completion operation.

2. The name then denotes a customization point object. For some subexpressions s and f, let S be decltype((s)), let F be the decayed type of f, and let f' be an xvalue referring to an object decay-copied from f. If S does not satisfy sender, or F does not model movable-value, then is ill-formed. Otherwise, the expression then(s, f) is expression-equivalent to:

- tag_invoke(then, get_completion_scheduler<set_value_t>(get_env(s)), s, f), if that expression is valid.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, tag_invoke(then, s, f), if that expression is valid.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, constructs a sender s2. When s2 is connected with some receiver out_r, it:
  - Constructs a receiver r such that:
    - When set_value(r, args...) is called, let v be the expression invoke(f', args...). If decltype(v) is void, calls set_value(out_r); otherwise, it calls set_value(out_r, v). If any of these throw an exception, it catches it and calls set_error(out_r, current_exception()). If any of these expressions would be ill-formed, the expression set_value(r, args...) is ill-formed.
    - set_error(r, e) is expression-equivalent to set_error(out_r, e).
    - set_stopped(r) is expression-equivalent to set_stopped(out_r).
  - Returns an expression-equivalent to connect(s, r).

3. Let compl-sig-t<Tag, Args...> name the type Tag() if Args... is a template parameter pack containing the single type void; otherwise, Tag(Args...). Given subexpressions s2 and e where s2 is a sender returned from then or a copy of such, let S2 be decltype((s2)) and let E be decltype((e)). The type of tag_invoke(get_completion_signatures, s2, e) shall be equivalent to:

```
make_completion_signatures<
  copy_cvref_t<S2, S>, E,
  set-error-signature,
  set-value-completions>;
```

where set-value-completions is an alias for:

```
template<class... As>
using set-value-completions =
  completion_signatures<compl-sig-t<set_value_t, invoke_result_t<F, As...>>>;
```

and set-error-signature is an alias for completion_signatures<set_error_t(exception_ptr)> if any of the types in the type-list named by value_types_of_t<copy_cvref_t<S2, S>, E, potentially-throwing, type-list> are true_type; otherwise, completion_signatures<>, where potentially-throwing is the template alias:

```
template<class... As>
using potentially-throwing =
  bool_constant<!is_nothrow_invocable_v<F, As...>>;
```

4. If the function selected above does not return a sender that invokes f with the value result datums of s, using f's return value as the sender's value completion, and forwards the non-value completion operations unchanged, the behavior of calling then(s, f) is undefined.
11.9.6.7. execution::upon_error [exec.upon.error]

1. upon_error maps an input sender's error completion operation into a value completion operation using the provided invocable.

2. The name upon_error denotes a customization point object. For some subexpressions s and f, let S be decltype((s)), let F be the decayed type of f, and let f' be an xvalue referring to an object decay-copied from f. If S does not satisfy sender, or F does not model movable-value, upon_error is ill-formed. Otherwise, the expression upon_error(s, f) is expression-equivalent to:

- tag_invoke(upon_error, get_completion_scheduler<set_error_t>(get_env(s)), s, f), if that expression is valid.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, tag_invoke(upon_error, s, f), if that expression is valid.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, constructs a sender s2. When s2 is connected with some receiver out_r, it:
  - Constructs a receiver r such that:
    - set_value(r, args...) is expression-equivalent to set_value(out_r, args...).
    - When set_error(r, e) is called, let v be the expression invoke(f', e). If decltype(v) is void, calls set_value(out_r); otherwise, it calls set_value(out_r, v). If any of these throw an exception, it catches it and calls set_error(out_r, current_exception()). If any of these expressions would be ill-formed, the expression set_error(r, e) is ill-formed.
    - set_stopped(r) is expression-equivalent to set_stopped(out_r).
  - Returns an expression-equivalent to connect(s, r).

3. Let compl-sig-t<Tag, Args...> name the type Tag() if Args... is a template parameter pack containing the single type void; otherwise, Tag(Args...). Given subexpressions s2 and e where s2 is a sender returned from upon_error or a copy of such, let S2 be decltype((s2)) and let E be decltype((e)). The type of tag_invoke(get_completion_signatures, s2, e) shall be equivalent to:

```
make_completion_signatures<
  copy_cvref_t<S2, S>, E,
  set-error-signature,
  default-set-value,
  set-error-completion>;
```

where set-error-completion is the template alias:

```
template<class E>
using set-error-completion =
  completion_signatures<compl-sig-t<set_value_t, invoke_result_t<F, E>>>;
```

and set-error-signature is an alias for completion_signatures<set_error_t(exception_ptr)> if any of the types in the type-list named by error_types_of_t<copy_cvref_t<S2, S>, E, potentially-throwing> are true_type; otherwise, completion_signatures<>, where potentially-throwing is the template alias:

```
template<class... Es>
using potentially-throwing =
  type-list<bool_constant<!is_nothrow_invocable_v<F, Es>>...>;
```

4. If the function selected above does not return a sender which invokes f with the error result datum of s, using f's return value as the sender's value completion, and forwards the non-error completion operations unchanged, the behavior of calling upon_error(s, f) is undefined.
11.9.6.8. execution::upon_stopped [exec.upon.stopped]

1. upon_stopped maps an input sender's stopped completion operation into a value completion operation using the provided invocable.

2. The name upon_stopped denotes a customization point object. For some subexpressions s and f, let S be decltype((s)), let F be the decayed type of f, and let f' be an xvalue referring to an object decay-copied from f. If S does not satisfy sender, or F does not model both movable-value and invocable, upon_stopped is ill-formed. Otherwise, the expression upon_stopped(s, f) is expression-equivalent to:

- tag_invoke(upon_stopped, get_completion_scheduler<set_stopped_t>(get_env(s)), s, f), if that expression is valid.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, tag_invoke(upon_stopped, s, f), if that expression is valid.
  - Mandates: The type of the tag_invoke expression above satisfies sender.
- Otherwise, constructs a sender s2. When s2 is connected with some receiver out_r, it:
  - Constructs a receiver r such that:
    - set_value(r, args...) is expression-equivalent to set_value(out_r, args...).
    - set_error(r, e) is expression-equivalent to set_error(out_r, e).
    - When set_stopped(r) is called, let v be the expression invoke(f'). If decltype(v) is void, calls set_value(out_r); otherwise, calls set_value(out_r, v). If any of these throw an exception, it catches it and calls set_error(out_r, current_exception()). If any of these expressions would be ill-formed, the expression set_stopped(r) is ill-formed.
  - Returns an expression-equivalent to connect(s, r).

3. Let compl-sig-t<Tag, Args...> name the type Tag() if Args... is a template parameter pack containing the single type void; otherwise, Tag(Args...). Given subexpressions s2 and e where s2 is a sender returned from upon_stopped or a copy of such, let S2 be decltype((s2)) and let E be decltype((e)). The type of tag_invoke(get_completion_signatures, s2, e) shall be equivalent to:

```
make_completion_signatures<
  copy_cvref_t<S2, S>, E,
  set-error-signature,
  default-set-value,
  default-set-error,
  set-stopped-completions>;
```

where
shall be equivalent to:tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , set - error - signature , default - set - value , default - set - error , set - stopped - completions > ; where
names the typeset - stopped - completions
, andcompletion_signatures < compl - sig - t < set_value_t , invoke_result_t < F >>
names the typeset - error - signature
ifcompletion_signatures < set_error_t ( exception_ptr ) >
isis_nothrow_invocable_v < F > true
, or
otherwise.completion_signatures <>
-
If the function selected above does not return a sender that invokes
whenf
executes a stopped completion, usings
's return value as the sender’s the value completion, and propagatesf
's other completion operations unchanged, the behavior of callings
is undefined.upon_stopped ( s , f ) -
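The stopped-to-value mapping has the same shape. In this synchronous analogy (again hypothetical names, not the proposed API), a stopped completion is modeled as an empty std::optional and is replaced by the result of invoking f with no arguments:

```cpp
#include <cassert>
#include <optional>

// Synchronous sketch of upon_stopped's transformation (hypothetical names):
// a stopped completion, modeled as an empty optional, becomes the value f();
// an existing value result passes through unchanged.
template <class V, class F>
V upon_stopped_model(std::optional<V> result, F f) {
  return result ? *result : f(); // stopped channel becomes a value
}
```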
11.9.6.9. execution::let_value, execution::let_error, execution::let_stopped [exec.let]

- let_value transforms a sender's value completion into a new child asynchronous operation. let_error transforms a sender's error completion into a new child asynchronous operation. let_stopped transforms a sender's stopped completion into a new child asynchronous operation.
- The names let_value, let_error, and let_stopped denote customization point objects. Let the expression let-cpo be one of let_value, let_error, or let_stopped. For subexpressions s and f, let S be decltype((s)), let F be the decayed type of f, and let f' be an xvalue that refers to an object decay-copied from f. If S does not satisfy sender, the expression let-cpo(s, f) is ill-formed. If F does not satisfy invocable, the expression let_stopped(s, f) is ill-formed. Otherwise, the expression let-cpo(s, f) is expression-equivalent to:
  - tag_invoke(let-cpo, get_completion_scheduler<set_value_t>(get_env(s)), s, f), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, tag_invoke(let-cpo, s, f), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, given a receiver out_r and an lvalue out_r' referring to an object decay-copied from out_r:
    - For let_value, let set-cpo be set_value. For let_error, let set-cpo be set_error. For let_stopped, let set-cpo be set_stopped. Let completion-function be one of set_value, set_error, or set_stopped.
    - Let r be an rvalue of a receiver type R such that:
      - When set-cpo(r, args...) is called, the receiver r decay-copies args... into op_state2 as args'..., then calls invoke(f', args'...), resulting in a sender s3. It then calls connect(s3, std::move(out_r')), resulting in an operation state op_state3. op_state3 is saved as a part of op_state2. It then calls start(op_state3). If any of these throws an exception, it catches it and calls set_error(std::move(out_r'), current_exception()). If any of these expressions would be ill-formed, set-cpo(r, args...) is ill-formed.
      - completion-function(r, args...) is expression-equivalent to completion-function(std::move(out_r'), args...), when completion-function is different from set-cpo.
    - let-cpo(s, f) returns a sender s2 such that:
      - If the expression connect(s, r) is ill-formed, connect(s2, out_r) is ill-formed.
      - Otherwise, let op_state2 be the result of connect(s, r). connect(s2, out_r) returns an operation state op_state that stores op_state2. start(op_state) is expression-equivalent to start(op_state2).
  - Given subexpressions s2 and e, where s2 is a sender returned from let-cpo(s, f) or a copy of such, let S2 be decltype((s2)), let E be decltype((e)), and let DS be copy_cvref_t<S2, S>. Then the type of tag_invoke(get_completion_signatures, s2, e) is specified as follows:
    - If sender_in<DS, E> is false, the expression tag_invoke(get_completion_signatures, s2, e) is ill-formed.
    - Otherwise, let Sigs... be the set of template arguments of the completion_signatures specialization named by completion_signatures_of_t<DS, E>, let Sigs2... be the set of function types in Sigs... whose return type is set-cpo, and let Rest... be the set of function types in Sigs... but not in Sigs2....
    - For each Sig2_i in Sigs2..., let Vs_i... be the set of function arguments in Sig2_i and let S3_i be invoke_result_t<F, decay_t<Vs_i>&...>. If S3_i is ill-formed, or if sender_in<S3_i, E> is not satisfied, then the expression tag_invoke(get_completion_signatures, s2, e) is ill-formed.
    - Otherwise, let Sigs3_i... be the set of template arguments of the completion_signatures specialization named by completion_signatures_of_t<S3_i, E>. Then the type of tag_invoke(get_completion_signatures, s2, e) shall be equivalent to completion_signatures<Sigs3_0..., Sigs3_1..., ... Sigs3_n-1..., Rest..., set_error_t(exception_ptr)>, where n is sizeof...(Sigs2).
  - If let-cpo(s, f) does not return a sender that invokes f when set-cpo is called, and makes its completion dependent on the completion of a sender returned by f, and propagates the other completion operations sent by s, the behavior of calling let-cpo(s, f) is undefined.
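The shape of let_value is that of a monadic bind: f receives the value result and returns a new operation, whose completion becomes the overall completion. A synchronous analogy (hypothetical names, with the child "sender" modeled as a std::optional) makes this concrete:

```cpp
#include <cassert>
#include <optional>

// Synchronous sketch of let_value's shape (hypothetical names): f is invoked
// with the value result and returns a new "operation" (an optional here);
// the overall result is that child operation's result.
template <class V, class F>
auto let_value_model(std::optional<V> result, F f) -> decltype(f(*result)) {
  if (result)
    return f(*result);   // launch the child operation with the value datum
  return std::nullopt;   // non-value completions are propagated unchanged
}
```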
11.9.6.10. execution::bulk [exec.bulk]

- bulk runs a task repeatedly for every index in an index space.
- The name bulk denotes a customization point object. For some subexpressions s, shape, and f, let S be decltype((s)), Shape be decltype((shape)), and F be decltype((f)). If S does not satisfy sender or Shape does not satisfy integral, bulk is ill-formed. Otherwise, the expression bulk(s, shape, f) is expression-equivalent to:
  - tag_invoke(bulk, get_completion_scheduler<set_value_t>(get_env(s)), s, shape, f), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, tag_invoke(bulk, s, shape, f), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, constructs a sender s2. When s2 is connected with some receiver out_r, it:
    - Constructs a receiver r:
      - When set_value(r, args...) is called, calls f(i, args...) for each i of type Shape from 0 to shape, then calls set_value(out_r, args...). If any of these throws an exception, it catches it and calls set_error(out_r, current_exception()).
      - When set_error(r, e) is called, calls set_error(out_r, e).
      - When set_stopped(r) is called, calls set_stopped(out_r).
    - Calls connect(s, r), which results in an operation state op_state2.
    - Returns an operation state op_state that contains op_state2. When start(op_state) is called, calls start(op_state2).
  - Given subexpressions s2 and e where s2 is a sender returned from bulk or a copy of such, let S2 be decltype((s2)), let E be decltype((e)), let DS be copy_cvref_t<S2, S>, let Shape be decltype((shape)), and let nothrow-callable be the alias template:

      template<class... As>
        using nothrow-callable =
          bool_constant<is_nothrow_invocable_v<decay_t<F>&, Shape, As...>>;

    - If any of the types in the type-list named by value_types_of_t<DS, E, nothrow-callable, type-list> are false_type, then the type of tag_invoke(get_completion_signatures, s2, e) shall be equivalent to:

        make_completion_signatures<DS, E, completion_signatures<set_error_t(exception_ptr)>>

    - Otherwise, the type of tag_invoke(get_completion_signatures, s2, e) shall be equivalent to completion_signatures_of_t<DS, E>.
  - If the function selected above does not return a sender that invokes f(i, args...) for each i of type Shape from 0 to shape, where args is a pack of subexpressions referring to the value completion result datums of the input sender, or does not execute a value completion operation with said datums, the behavior of calling bulk(s, shape, f) is undefined.
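Stripped of the sender machinery, bulk's per-index invocation pattern is a plain loop over the index space. A minimal synchronous sketch (bulk_model is a hypothetical name, not the proposed API):

```cpp
#include <cassert>
#include <vector>

// Synchronous sketch of bulk's invocation pattern (hypothetical name):
// calls f(i, args...) for each i of type Shape in [0, shape); the value
// datums args... are then forwarded unchanged by the real algorithm.
template <class Shape, class F, class... Args>
void bulk_model(Shape shape, F f, Args&... args) {
  for (Shape i = 0; i < shape; ++i)
    f(i, args...); // one invocation per index in the index space
}
```

For example, bulk_model(4, fn, data) invokes fn(0, data) through fn(3, data).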
11.9.6.11. execution::split [exec.split]

- split adapts an arbitrary sender into a sender that can be connected multiple times.
- Let split-env be the type of an environment such that, given an instance e, the expression get_stop_token(e) is well-formed and has type stop_token.
- The name split denotes a customization point object. For some subexpression s, let S be decltype((s)). If sender_in<S, split-env> or constructible_from<decay_t<env_of_t<S>>, env_of_t<S>> is false, split is ill-formed. Otherwise, the expression split(s) is expression-equivalent to:
  - tag_invoke(split, get_completion_scheduler<set_value_t>(get_env(s)), s), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, tag_invoke(split, s), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, constructs a sender s2, which:
    - Creates an object sh_state that contains a stop_source, a list of pointers to operation states awaiting the completion of s, and that also reserves space for storing:
      - the operation state that results from connecting s with r described below,
      - the sets of values and errors with which s can complete, with the addition of exception_ptr, and
      - the result of decay-copying get_env(s).
    - Constructs a receiver r such that:
      - When set_value(r, args...) is called, decay-copies the expressions args... into sh_state. It then notifies all the operation states in sh_state's list of operation states that the results are ready. If any exceptions are thrown, the exception is caught and set_error(r, current_exception()) is called instead.
      - When set_error(r, e) is called, decay-copies e into sh_state. It then notifies the operation states in sh_state's list of operation states that the results are ready.
      - When set_stopped(r) is called, notifies the operation states in sh_state's list of operation states that the results are ready.
      - get_env(r) is an expression e of type split-env such that get_stop_token(e) is well-formed and returns the results of calling get_token() on sh_state's stop source.
    - Calls get_env(s) and decay-copies the result into sh_state.
    - Calls connect(s, r), resulting in an operation state op_state2. op_state2 is saved in sh_state.
    - When s2 is connected with a receiver out_r of type OutR, it returns an operation state object op_state that contains:
      - An object out_r' of type OutR decay-copied from out_r,
      - A reference to sh_state,
      - A stop callback of type optional<stop_token_of_t<env_of_t<OutR>>::callback_type<stop-callback-fn>>, where stop-callback-fn is the unspecified class type:

          struct stop-callback-fn {
            stop_source& stop_src_;
            void operator()() noexcept {
              stop_src_.request_stop();
            }
          };

    - When start(op_state) is called:
      - If one of r's completion functions has executed, then let Tag be the completion function that was called. Calls Tag(out_r', args2...), where args2... is a pack of const lvalues referencing the subobjects of sh_state that have been saved by the original call to Tag(r, args...), and returns.
      - Otherwise, it emplace constructs the stop callback optional with the arguments get_stop_token(get_env(out_r')) and stop-callback-fn{stop-src}, where stop-src refers to the stop source of sh_state.
      - Then, it adds a pointer to op_state to the list of operation states in sh_state. If op_state is the first such state added to the list:
        - If stop-src.stop_requested() is true, all of the operation states in sh_state's list of operation states are notified as if set_stopped(r) had been called.
        - Otherwise, start(op_state2) is called.
    - When r completes it will notify op_state that the results are ready. Let Tag be whichever completion function was called on receiver r. op_state's stop callback optional is reset. Then Tag(std::move(out_r'), args2...) is called, where args2... is a pack of const lvalues referencing the subobjects of sh_state that have been saved by the original call to Tag(r, args...).
    - Ownership of sh_state is shared by s2 and by every op_state that results from connecting s2 to a receiver.
  - Given a subexpression s2 where s2 is a sender returned from split or a copy of such, get_env(s2) shall return an lvalue reference to the object in sh_state that was initialized with the result of get_env(s).
  - Given subexpressions s2 and e where s2 is a sender returned from split or a copy of such, let S2 be decltype((s2)) and let E be decltype((e)). The type of tag_invoke(get_completion_signatures, s2, e) shall be equivalent to:

      make_completion_signatures<
        copy_cvref_t<S2, S>, E,
        completion_signatures<set_error_t(exception_ptr), set_error_t(Es)...>,
        value-signatures,
        error-signatures>;

    where Es is a (possibly empty) template parameter pack, value-signatures is the alias template:

      template<class... Ts>
        using value-signatures =
          completion_signatures<set_value_t(const decay_t<Ts>&...)>;

    and error-signatures is the alias template:

      template<class E>
        using error-signatures =
          completion_signatures<set_error_t(const decay_t<E>&)>;
  - Let s be a sender expression, r be an instance of the receiver type described above, s2 be a sender returned from split(s) or a copy of such, r2 be the receiver to which s2 is connected, and args be the pack of subexpressions passed to r's completion function CSO when s completes. s2 shall invoke CSO(r2, args2...), where args2... is a pack of const lvalue references to objects decay-copied from args, or shall call set_error(r2, e2) for some subexpression e2. The objects passed to r2's completion operation shall be valid until after the completion of the invocation of r2's completion operation.
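The essence of the shared-state design above — the input runs at most once, and every consumer observes the same stored result by const reference — can be sketched synchronously. This is a loose analogy with hypothetical names, not the proposed API; in the real algorithm the computation is asynchronous and consumers may attach before it completes:

```cpp
#include <cassert>
#include <memory>
#include <optional>
#include <string>

// Synchronous sketch of split's shared state (hypothetical names): the result
// is computed once, decay-copied into shared storage, and every "connected"
// consumer observes the same object by const reference.
template <class F>
auto split_model(F compute) {
  using V = decltype(compute());
  auto shared = std::make_shared<std::optional<V>>();
  // Each call models connecting (and starting) one consumer of the result.
  return [shared, compute]() -> const V& {
    if (!*shared)
      *shared = compute(); // the input operation runs at most once
    return **shared;       // all consumers see the same stored datum
  };
}
```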
11.9.6.12. execution::when_all [exec.when.all]

- when_all and when_all_with_variant both adapt multiple input senders into a sender that completes when all input senders have completed. when_all only accepts senders with a single value completion signature and on success concatenates all the input senders' value result datums into its own value completion operation. when_all_with_variant(s...) is semantically equivalent to when_all(into_variant(s)...), where s is a pack of subexpressions of sender types.
- The name when_all denotes a customization point object. For some subexpressions s_i..., let S_i... be decltype((s_i)).... The expression when_all(s_i...) is ill-formed if any of the following is true:
  - If the number of subexpressions s_i... is 0, or
  - If any type S_i does not satisfy sender.
  Otherwise, the expression when_all(s_i...) is expression-equivalent to:
  - tag_invoke(when_all, s_i...), if that expression is valid. If the function selected by tag_invoke does not return a sender that sends a concatenation of values sent by s_i... when they all complete with set_value, the behavior of calling when_all(s_i...) is undefined.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, constructs a sender w of type W. When w is connected with some receiver out_r of type OutR, it returns an operation state op_state specified as below:
    - For each sender s_i, constructs a receiver r_i such that:
      - If set_value(r_i, t_i...) is called for every r_i, op_state's associated stop callback optional is reset and set_value(out_r, t_0..., t_1..., ..., t_n-1...) is called, where n is the number of subexpressions in s_i....
      - Otherwise, set_error or set_stopped was called for at least one receiver r_i. If the first such to complete did so with the call set_error(r_i, e), request_stop is called on op_state's associated stop source. When all child operations have completed, op_state's associated stop callback optional is reset and set_error(out_r, e) is called.
      - Otherwise, request_stop is called on op_state's associated stop source. When all child operations have completed, op_state's associated stop callback optional is reset and set_stopped(out_r) is called.
      - For each receiver r_i, get_env(r_i) is an expression e such that get_stop_token(e) is well-formed and returns the results of calling get_token() on op_state's associated stop source, and for which tag_invoke(tag, e, args...) is expression-equivalent to tag(get_env(out_r), args...) for all arguments args... and all tag whose type satisfies forwarding-query and is not get_stop_token_t.
    - For each sender s_i, calls connect(s_i, r_i), resulting in operation states child_op_i.
    - Returns an operation state op_state that contains:
      - Each operation state child_op_i,
      - A stop source of type in_place_stop_source,
      - A stop callback of type optional<stop_token_of_t<env_of_t<OutR>>::callback_type<stop-callback-fn>>, where stop-callback-fn is the unspecified class type:

          struct stop-callback-fn {
            in_place_stop_source& stop_src_;
            void operator()() noexcept {
              stop_src_.request_stop();
            }
          };

    - When start(op_state) is called it:
      - Emplace constructs the stop callback optional with the arguments get_stop_token(get_env(out_r)) and stop-callback-fn{stop-src}, where stop-src refers to the stop source of op_state.
      - Then, it checks to see if stop-src.stop_requested() is true. If so, it calls set_stopped(out_r).
      - Otherwise, calls start(child_op_i) for each child_op_i.
  - Given subexpressions s2 and e where s2 is a sender returned from when_all or a copy of such, let S2 be decltype((s2)), let E be decltype((e)), and let Ss... be the decayed types of the arguments to the when_all expression that created s2. Let WE be a type such that stop_token_of_t<WE> is in_place_stop_token and tag_invoke_result_t<Tag, WE, As...> names the type, if any, of call-result-t<Tag, E, As...> for all types As... and all types Tag besides get_stop_token_t. The type of tag_invoke(get_completion_signatures, s2, e) shall be as follows:
    - For each type S_i in Ss..., let DS_i name the type copy_cvref_t<S2, S_i>. If for any type DS_i, the type completion_signatures_of_t<DS_i, WE> is ill-formed, the expression tag_invoke(get_completion_signatures, s2, e) is ill-formed.
    - Otherwise, for each type DS_i, let Sigs_i... be the set of template arguments in the specialization of completion_signatures named by completion_signatures_of_t<DS_i, WE>, and let C_i be the count of function types in Sigs_i... for which the return type is set_value_t. If any C_i is two or greater, then the expression tag_invoke(get_completion_signatures, s2, e) is ill-formed.
    - Otherwise, let Sigs2_i... be the set of function types in Sigs_i... whose return types are not set_value_t, and let Ws... be the unique set of types in [Sigs2_0..., Sigs2_1..., ... Sigs2_n-1..., set_stopped_t()], where n is sizeof...(Ss). If any C_i is 0, then the type of tag_invoke(get_completion_signatures, s2, e) shall be completion_signatures<Ws...>.
    - Otherwise, let V_i... be the function argument types of the single type in Sigs_i... for which the return type is set_value_t. Then the type of tag_invoke(get_completion_signatures, s2, e) shall be completion_signatures<Ws..., set_value_t(decay_t<V_0>&&..., decay_t<V_1>&&..., ... decay_t<V_n-1>&&...)>.
- The name when_all_with_variant denotes a customization point object. For some subexpressions s..., let S be decltype((s)). If any type S_i in S... does not satisfy sender, when_all_with_variant is ill-formed. Otherwise, the expression when_all_with_variant(s...) is expression-equivalent to:
  - tag_invoke(when_all_with_variant, s...), if that expression is valid. If the function selected by tag_invoke does not return a sender that, when connected with a receiver of type R, sends the types into-variant-type<S, env_of_t<R>>... when they all complete with set_value, the behavior of calling when_all_with_variant(s...) is undefined.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, when_all(into_variant(s)...).
- For a sender s2 returned from when_all or when_all_with_variant, get_env(s2) shall return an instance of a class equivalent to empty_env.
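On the success path, when_all's value completion is exactly the concatenation of every input's value result datums, which std::tuple_cat models directly. A synchronous sketch (when_all_model is a hypothetical name; error and stopped paths are omitted):

```cpp
#include <cassert>
#include <string>
#include <tuple>

// Synchronous sketch of when_all's success path (hypothetical name): each
// input's value datums are modeled as a tuple, and the combined value
// completion is their concatenation (t_0..., t_1..., ..., t_n-1...).
template <class... Tuples>
auto when_all_model(Tuples... results) {
  return std::tuple_cat(std::move(results)...);
}
```

For example, combining a sender of int with a sender of (string, double) yields a single value completion of (int, string, double).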
11.9.6.13. execution::transfer_when_all [exec.transfer.when.all]

- transfer_when_all and transfer_when_all_with_variant both adapt multiple input senders into a sender that completes when all input senders have completed, ensuring the input senders complete on the specified scheduler. transfer_when_all only accepts senders with a single value completion signature and on success concatenates all the input senders' value result datums into its own value completion operation; transfer_when_all(scheduler, input-senders...) is semantically equivalent to transfer(when_all(input-senders...), scheduler). transfer_when_all_with_variant(scheduler, input-senders...) is semantically equivalent to transfer_when_all(scheduler, into_variant(input-senders)...). These customizable composite algorithms can allow for more efficient customizations in some cases.
- The name transfer_when_all denotes a customization point object. For some subexpressions sch and s..., let Sch be decltype((sch)) and S be decltype((s)). If Sch does not satisfy scheduler, or any type S_i in S... does not satisfy sender, transfer_when_all is ill-formed. Otherwise, the expression transfer_when_all(sch, s...) is expression-equivalent to:
  - tag_invoke(transfer_when_all, sch, s...), if that expression is valid. If the function selected by tag_invoke does not return a sender that sends a concatenation of values sent by s... when they all complete with set_value, or does not send its completion operation, other than ones resulting from a scheduling error, on an execution agent belonging to the associated execution resource of sch, the behavior of calling transfer_when_all(sch, s...) is undefined.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, transfer(when_all(s...), sch).
- The name transfer_when_all_with_variant denotes a customization point object. For some subexpressions sch and s..., let Sch be decltype((sch)) and let S be decltype((s)). If any type S_i in S... does not satisfy sender, transfer_when_all_with_variant is ill-formed. Otherwise, the expression transfer_when_all_with_variant(sch, s...) is expression-equivalent to:
  - tag_invoke(transfer_when_all_with_variant, s...), if that expression is valid. If the function selected by tag_invoke does not return a sender that, when connected with a receiver of type R, sends the types into-variant-type<S, env_of_t<R>>... when they all complete with set_value, the behavior of calling transfer_when_all_with_variant(sch, s...) is undefined.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, transfer_when_all(sch, into_variant(s)...).
- For a sender t returned from transfer_when_all(sch, s...), get_env(t) shall return a queryable object q such that get_completion_scheduler<CPO>(q) returns a copy of sch, where CPO is either set_value_t or set_stopped_t. The get_completion_scheduler<set_error_t> query is not implemented, as the scheduler cannot be guaranteed in case an error is thrown while trying to schedule work on the given scheduler object.
11.9.6.14. execution::into_variant [exec.into.variant]

- into_variant adapts a sender with multiple value completion signatures into a sender with just one consisting of a variant of tuples.
- The template into-variant-type computes the type sent by a sender returned from into_variant.

    template<class S, class E>
        requires sender_in<S, E>
      using into-variant-type =
        value_types_of_t<S, E>;

- into_variant is a customization point object. For some subexpression s, let S be decltype((s)). If S does not satisfy sender, into_variant(s) is ill-formed. Otherwise, into_variant(s) returns a sender s2. When s2 is connected with some receiver out_r, it:
  - Constructs a receiver r:
    - If set_value(r, ts...) is called, calls set_value(out_r, into-variant-type<S, env_of_t<decltype((r))>>(decayed-tuple<decltype(ts)...>(ts...))). If this expression throws an exception, calls set_error(out_r, current_exception()).
    - set_error(r, e) is expression-equivalent to set_error(out_r, e).
    - set_stopped(r) is expression-equivalent to set_stopped(out_r).
  - Calls connect(s, r), resulting in an operation state op_state2.
  - Returns an operation state op_state that contains op_state2. When start(op_state) is called, calls start(op_state2).
- Given subexpressions s2 and e, where s2 is a sender returned from into_variant or a copy of such, let S2 be decltype((s2)) and E be decltype((e)). Let into-variant-set-value be the class template:

    template<class S, class E>
    struct into-variant-set-value {
      template<class... Args>
        using apply = set_value_t(into-variant-type<S, E>);
    };

  Let into-variant-is-nothrow be the class template:

    template<class S, class E>
    struct into-variant-is-nothrow {
      template<class... Args>
          requires constructible_from<decayed-tuple<Args...>, Args...>
        using apply =
          bool_constant<noexcept(
            into-variant-type<S, E>(decayed-tuple<Args...>(declval<Args>()...)))>;
    };

  Let INTO-VARIANT-ERROR-SIGNATURES(S, E) be completion_signatures<set_error_t(exception_ptr)> if any of the types in the type-list named by value_types_of_t<S, E, into-variant-is-nothrow<S, E>::template apply, type-list> are false_type; otherwise, completion_signatures<>.

  The type of tag_invoke(get_completion_signatures_t{}, s2, e) shall be equivalent to:

    make_completion_signatures<
      S2, E,
      INTO-VARIANT-ERROR-SIGNATURES(S, E),
      into-variant-set-value<S2, E>::template apply>
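The result type that into_variant produces — one variant whose alternatives are the decayed tuples of each possible value completion — can be sketched with plain standard types. The names below (completion_a, completion_b, pack_value) are hypothetical illustrations, not part of the proposal:

```cpp
#include <cassert>
#include <string>
#include <tuple>
#include <type_traits>
#include <variant>

// Synchronous sketch of into_variant's result type (hypothetical names):
// two possible value completions are modeled as two tuple alternatives.
using completion_a = std::tuple<int, std::string>; // e.g. set_value_t(int, string)
using completion_b = std::tuple<double>;           // e.g. set_value_t(double)
using into_variant_result = std::variant<completion_a, completion_b>;

// Packs one value completion's datums into the single variant-of-tuples datum.
template <class... Ts>
into_variant_result pack_value(Ts&&... ts) {
  using tuple_t = std::tuple<std::decay_t<Ts>...>;
  return into_variant_result{tuple_t(std::forward<Ts>(ts)...)};
}
```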
11.9.6.15. execution::stopped_as_optional [exec.stopped.as.optional]

- stopped_as_optional maps an input sender's stopped completion operation into the value completion operation as an empty optional. The input sender's value completion operation is also converted into an optional. The result is a sender that never completes with stopped, reporting cancellation by completing with an empty optional.
- The name stopped_as_optional denotes a customization point object. For some subexpression s, let S be decltype((s)). Let get-env-sender be an expression such that, when it is connected with a receiver r, start on the resulting operation state completes immediately by calling set_value(r, get_env(r)). The expression stopped_as_optional(s) is expression-equivalent to:

    let_value(
      get-env-sender,
      []<class E>(const E&) requires single-sender<S, E> {
        return let_stopped(
          then(s,
            []<class T>(T&& t) {
              return optional<decay_t<single-sender-value-type<S, E>>>{
                std::forward<T>(t)};
            }),
          []() noexcept {
            return just(optional<decay_t<single-sender-value-type<S, E>>>{});
          });
      })

11.9.6.16. execution::stopped_as_error [exec.stopped.as.error]

- stopped_as_error maps an input sender's stopped completion operation into an error completion operation as a custom error type. The result is a sender that never completes with stopped, reporting cancellation by completing with an error.
- The name stopped_as_error denotes a customization point object. For some subexpressions s and e, let S be decltype((s)) and let E be decltype((e)). If the type S does not satisfy sender or if the type E doesn't satisfy movable-value, stopped_as_error(s, e) is ill-formed. Otherwise, the expression stopped_as_error(s, e) is expression-equivalent to:

    let_stopped(s, [] { return just_error(e); })
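Both adaptors reroute the stopped channel; they differ only in the destination. In this synchronous analogy (hypothetical names, with the stopped completion modeled as a monostate-like alternative), stopped_as_optional maps it to an empty optional value and stopped_as_error maps it to an error datum:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <variant>

// Synchronous sketch (hypothetical names): `stopped_t` models the stopped
// completion of an input whose value type is V.
struct stopped_t {};

// stopped_as_optional: stopped -> empty optional; value v -> optional{v}.
template <class V>
std::optional<V> stopped_as_optional_model(std::variant<V, stopped_t> r) {
  if (std::holds_alternative<stopped_t>(r))
    return std::nullopt;
  return std::optional<V>(std::get<V>(std::move(r)));
}

// stopped_as_error: stopped -> the supplied error datum; value passes through.
template <class V, class E>
std::variant<V, E> stopped_as_error_model(std::variant<V, stopped_t> r, E err) {
  if (std::holds_alternative<stopped_t>(r))
    return std::variant<V, E>(std::move(err));
  return std::variant<V, E>(std::get<V>(std::move(r)));
}
```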
11.9.6.17. execution::ensure_started [exec.ensure.started]

- ensure_started eagerly starts the execution of a sender, returning a sender that is usable as input to additional sender algorithms.
- Let ensure-started-env be the type of an execution environment such that, given an instance e, the expression get_stop_token(e) is well-formed and has type stop_token.
- The name ensure_started denotes a customization point object. For some subexpression s, let S be decltype((s)). If sender_in<S, ensure-started-env> or constructible_from<decay_t<env_of_t<S>>, env_of_t<S>> is false, ensure_started(s) is ill-formed. Otherwise, the expression ensure_started(s) is expression-equivalent to:
  - tag_invoke(ensure_started, get_completion_scheduler<set_value_t>(get_env(s)), s), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, tag_invoke(ensure_started, s), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above satisfies sender.
  - Otherwise, constructs a sender s2, which:
    - Creates an object sh_state that contains a stop_source, an initially null pointer to an operation state awaiting completion, and that also reserves space for storing:
      - the operation state that results from connecting s with r described below,
      - the sets of values and errors with which s can complete, with the addition of exception_ptr, and
      - the result of decay-copying get_env(s).
      s2 shares ownership of sh_state with r described below.
    - Constructs a receiver r such that:
      - When set_value(r, args...) is called, decay-copies the expressions args... into sh_state. It then checks sh_state to see if there is an operation state awaiting completion; if so, it notifies the operation state that the results are ready. If any exceptions are thrown, the exception is caught and set_error(r, current_exception()) is called instead.
      - When set_error(r, e) is called, decay-copies e into sh_state. If there is an operation state awaiting completion, it then notifies the operation state that the results are ready.
      - When set_stopped(r) is called, it notifies any awaiting operation state that the results are ready.
      - get_env(r) is an expression e of type ensure-started-env such that get_stop_token(e) is well-formed and returns the results of calling get_token() on sh_state's stop source.
      - r shares ownership of sh_state with s2. After r has been completed, it releases its ownership of sh_state.
    - Calls get_env(s) and decay-copies the result into sh_state.
    - Calls connect(s, r), resulting in an operation state op_state2. op_state2 is saved in sh_state. It then calls start(op_state2).
    - When s2 is connected with a receiver out_r of type OutR, it returns an operation state object op_state that contains:
      - An object out_r' of type OutR decay-copied from out_r,
      - A reference to sh_state,
      - A stop callback of type optional<stop_token_of_t<env_of_t<OutR>>::callback_type<stop-callback-fn>>, where stop-callback-fn is the unspecified class type:

          struct stop-callback-fn {
            stop_source& stop_src_;
            void operator()() noexcept {
              stop_src_.request_stop();
            }
          };

      s2 transfers its ownership of sh_state to op_state.
    - When start(op_state) is called:
      - If r has already been completed, then let CF be whichever completion function was used to complete r. Calls CF(out_r', args2...), where args2... is a pack of xvalues referencing the subobjects of sh_state that have been saved by the original call to CF(r, args...), and returns.
      - Otherwise, it emplace constructs the stop callback optional with the arguments get_stop_token(get_env(out_r')) and stop-callback-fn{stop-src}, where stop-src refers to the stop source of sh_state.
      - Then, it checks to see if stop-src.stop_requested() is true. If so, it calls set_stopped(out_r').
      - Otherwise, it sets sh_state's operation state pointer to the address of op_state, registering itself as awaiting the result of the completion of r.
    - When r completes it will notify op_state that the results are ready. Let CF be whichever completion function was used to complete r. op_state's stop callback optional is reset. Then CF(std::move(out_r'), args2...) is called, where args2... is a pack of xvalues referencing the subobjects of sh_state that have been saved by the original call to CF(r, args...).
  - [Note: If sender s2 is destroyed without being connected to a receiver, or if it is connected but the operation state is destroyed without having been started, then when
completes and it releases its shared ownership ofr
,sh_state
will be destroyed and the results of the operation are discarded. -- end note]sh_state
-
-
Given a subexpression
, lets
be the result ofs2
. The result ofensure_started ( s )
shall return an lvalue reference to the object inget_env ( s2 )
that was initialized with the result ofsh_state
.get_env ( s ) -
Given subexpressions
ands2
wheree
is a sender returned froms2
or a copy of such, letensure_started
beS2
and letdecltype (( s2 ))
beE
. The type ofdecltype (( e ))
shall be equivalent to:tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , ensure - started - env , completion_signatures < set_error_t ( exception_ptr && ), set_error_t ( Es )... > , set - value - signature , error - types > where
is a (possibly empty) template parameter pack,Es
is the alias template:set - value - signature template < class ... Ts > using set - value - signature = completion_signatures < set_value_t ( decay_t < Ts >&& ...) > ; and
is the alias template:error - types template < class E > using error - types = completion_signatures < set_error_t ( decay_t < E >&& ) > ;
-
-
Let
be a sender expression,s
be an instance of the receiver type described above,r
be a sender returned froms2
or a copy of such,ensure_started ( s )
is the receiver to whichr2
is connected, ands2
is the pack of subexpressions passed toargs
's completion functionr
whenCSO
completes.s
shall invokes2
whereCSO ( r2 , args2 ...)
is a pack of xvalue references to objects decay-copied fromargs2
, or by callingargs
for some subexpressionset_error ( r2 , e2 )
. The objects passed toe2
's completion operation shall be valid until after the completion of the invocation ofr2
's completion operation.r2
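The shared-state handoff described above (the eagerly started work publishes its result; the consumer's operation state either finds it ready or registers to be notified) can be sketched as a much-simplified model. All names below (shared_state, complete, attach) are illustrative inventions for this example, not the proposal's specified interface, and the result is narrowed to a single int:

```cpp
#include <functional>
#include <mutex>
#include <optional>

// Simplified model of ensure_started's sh_state: complete() is what the
// receiver r does when the eagerly started sender finishes; attach() is
// what start(op_state) does on the consumer side.
struct shared_state {
  std::mutex mtx;
  std::optional<int> value;             // decay-copied set_value result
  std::function<void(int)> waiting_op;  // op_state awaiting completion

  void complete(int v) {
    std::function<void(int)> k;
    {
      std::lock_guard lk(mtx);
      value = v;
      k = std::move(waiting_op);        // was anyone waiting?
    }
    if (k) k(*value);                   // notify the awaiting op_state
  }

  void attach(std::function<void(int)> k) {
    {
      std::lock_guard lk(mtx);
      if (!value) {                     // not yet completed: register
        waiting_op = std::move(k);
        return;
      }
    }
    k(*value);                          // already completed: run now
  }
};
```

Both orderings are handled under one lock: completion-before-start replays the stored result, start-before-completion registers the continuation.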
11.9.7. Sender consumers [exec.consumers]

11.9.7.1. execution::start_detached [exec.start.detached]

- start_detached eagerly starts a sender without the caller needing to manage the lifetimes of any objects.
- The name start_detached denotes a customization point object. For some subexpression s, let S be decltype((s)). If S does not satisfy sender, start_detached is ill-formed. Otherwise, the expression start_detached(s) is expression-equivalent to:
  - tag_invoke(start_detached, get_completion_scheduler<set_value_t>(get_env(s)), s), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above is void.
  - Otherwise, tag_invoke(start_detached, s), if that expression is valid.
    - Mandates: The type of the tag_invoke expression above is void.
  - Otherwise:
    - Let R be the type of a receiver, let r be an rvalue of type R, and let cr be an lvalue reference to const R such that:
      - the expression set_value(r) is not potentially-throwing and has no effect,
      - for any subexpression e, the expression set_error(r, e) is expression-equivalent to terminate(),
      - the expression set_stopped(r) is not potentially-throwing and has no effect, and
      - the expression get_env(cr) is expression-equivalent to empty_env{}.
    - Calls connect(s, r), resulting in an operation state op_state, then calls start(op_state).
- If the function selected above does not eagerly start the sender s after connecting it with a receiver that ignores value and stopped completion operations and calls terminate() on error completions, the behavior of calling start_detached(s) is undefined.
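The receiver described in the default case can be sketched in plain C++. This is a non-normative illustration using member functions in place of the tag_invoke protocol; detached_receiver is a name invented for this example:

```cpp
#include <exception>

// Sketch of the receiver start_detached connects in the default case.
struct detached_receiver {
  // Value and stopped completions are ignored: there is no caller left
  // to deliver them to.
  void set_value() noexcept {}
  void set_stopped() noexcept {}

  // An error completion terminates the program, because a detached
  // operation has nowhere to report failure.
  template <class E>
  [[noreturn]] void set_error(E&&) noexcept { std::terminate(); }
};
```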
11.9.7.2. this_thread::sync_wait [exec.sync.wait]

- this_thread::sync_wait and this_thread::sync_wait_with_variant are used to block the current thread until a sender passed into it as an argument has completed, and to obtain the values (if any) it completed with. sync_wait requires that the input sender has exactly one value completion signature.
- For any receiver r created by an implementation of sync_wait and sync_wait_with_variant, the expressions get_scheduler(get_env(r)) and get_delegatee_scheduler(get_env(r)) shall be well-formed. For a receiver created by the default implementation of this_thread::sync_wait, these expressions shall return a scheduler to the same thread-safe, first-in-first-out queue of work such that tasks scheduled to the queue execute on the thread of the caller of sync_wait. [Note: The scheduler for an instance of run_loop that is a local variable within sync_wait is one valid implementation. -- end note]
- The templates sync-wait-type and sync-wait-with-variant-type are used to determine the return types of this_thread::sync_wait and this_thread::sync_wait_with_variant. Let sync-wait-env be the type of the expression get_env(r), where r is an instance of the receiver created by the default implementation of sync_wait.

      template<sender_in<sync-wait-env> S>
        using sync-wait-type =
          optional<value_types_of_t<S, sync-wait-env, decayed-tuple, type_identity_t>>;

      template<sender_in<sync-wait-env> S>
        using sync-wait-with-variant-type =
          optional<into-variant-type<S, sync-wait-env>>;

- The name this_thread::sync_wait denotes a customization point object. For some subexpression s, let S be decltype((s)). If sender_in<S, sync-wait-env> is false, or the number of arguments that completion_signatures_of_t<S, sync-wait-env>::value_types passes to its Variant template parameter is not 1, this_thread::sync_wait(s) is ill-formed. Otherwise, this_thread::sync_wait(s) is expression-equivalent to:
  - tag_invoke(this_thread::sync_wait, get_completion_scheduler<set_value_t>(get_env(s)), s), if this expression is valid.
    - Mandates: The type of the tag_invoke expression above is sync-wait-type<S, sync-wait-env>.
  - Otherwise, tag_invoke(this_thread::sync_wait, s), if this expression is valid.
    - Mandates: The type of the tag_invoke expression above is sync-wait-type<S, sync-wait-env>.
  - Otherwise:
    - Constructs a receiver r.
    - Calls connect(s, r), resulting in an operation state op_state, then calls start(op_state).
    - Blocks the current thread until a completion operation of r is executed. When it is:
      - If set_value(r, ts...) has been called, returns sync-wait-type<S, sync-wait-env>{decayed-tuple<decltype(ts)...>{ts...}}. If that expression exits exceptionally, the exception is propagated to the caller of sync_wait.
      - If set_error(r, e) has been called, let E be the decayed type of e. If E is exception_ptr, calls std::rethrow_exception(e). Otherwise, if E is error_code, throws system_error(e). Otherwise, throws e.
      - If set_stopped(r) has been called, returns sync-wait-type<S, sync-wait-env>{}.
- The name this_thread::sync_wait_with_variant denotes a customization point object. For some subexpression s, let S be the type of into_variant(s). If sender_in<S, sync-wait-env> is false, this_thread::sync_wait_with_variant(s) is ill-formed. Otherwise, this_thread::sync_wait_with_variant(s) is expression-equivalent to:
  - tag_invoke(this_thread::sync_wait_with_variant, get_completion_scheduler<set_value_t>(get_env(s)), s), if this expression is valid.
    - Mandates: The type of the tag_invoke expression above is sync-wait-with-variant-type<S, sync-wait-env>.
  - Otherwise, tag_invoke(this_thread::sync_wait_with_variant, s), if this expression is valid.
    - Mandates: The type of the tag_invoke expression above is sync-wait-with-variant-type<S, sync-wait-env>.
  - Otherwise, this_thread::sync_wait(into_variant(s)).
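The blocking behavior in the default case (connect, start, block until a completion executes) can be modeled with a condition variable. This is a simplified sketch: mini_sync_wait and its callback-shaped AsyncOp parameter are inventions for the example, and the single value completion is narrowed to int (a stopped completion, which would yield an empty optional, is omitted):

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <thread>

// Drive a callback-style async operation to completion, blocking the
// calling thread, and return the value it completed with.
template <class AsyncOp>
std::optional<int> mini_sync_wait(AsyncOp op) {
  std::mutex m;
  std::condition_variable cv;
  std::optional<int> result;
  bool done = false;

  // The callback plays the role of the receiver r's set_value.
  op([&](int v) {
    {
      std::lock_guard lk(m);
      result = v;
      done = true;
    }
    cv.notify_one();
  });

  // Block the current thread until a completion operation executes.
  std::unique_lock lk(m);
  cv.wait(lk, [&] { return done; });
  return result;
}
```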
11.10. execution::execute [exec.execute]

- execute creates fire-and-forget tasks on a specified scheduler.
- The name execute denotes a customization point object. For some subexpressions sch and f, let Sch be decltype((sch)) and F be decltype((f)). If Sch does not satisfy scheduler or F does not satisfy invocable, execute is ill-formed. Otherwise, execute is expression-equivalent to:
  - tag_invoke(execute, sch, f), if that expression is valid. If the function selected by tag_invoke does not invoke the function f (or an object decay-copied from f) on an execution agent belonging to the associated execution resource of sch, or if it does not call std::terminate if an error occurs after control is returned to the caller, the behavior of calling execute is undefined.
    - Mandates: The type of the tag_invoke expression above is void.
  - Otherwise, start_detached(then(schedule(sch), f)).
11.11. Sender/receiver utilities [exec.utils]

- This section makes use of the following exposition-only entities:

      // [Editorial note: copy_cvref_t as in [P1450R3] -- end note]
      // Mandates: is_base_of_v<T, remove_reference_t<U>> is true
      template<class T, class U>
        copy_cvref_t<U&&, T> c-style-cast(U&& u) noexcept
          requires decays-to<T, T>
        {
          return (copy_cvref_t<U&&, T>) std::forward<U>(u);
        }

- [Note: The C-style cast in c-style-cast is to disable accessibility checks. -- end note]
11.11.1. execution::receiver_adaptor [exec.utils.rcvr.adptr]

      template<class-type Derived, receiver Base = unspecified> // arguments are not associated entities ([lib.tmpl-heads])
        class receiver_adaptor;

- receiver_adaptor simplifies the implementation of one receiver type in terms of another. It defines tag_invoke overloads that forward to named members if they exist, and to the adapted receiver otherwise.
- If Base is an alias for the unspecified default template argument, then:
  - let HAS-BASE be false, and
  - let GET-BASE(d) be d.base().

  Otherwise, let:
  - HAS-BASE be true, and
  - GET-BASE(d) be c-style-cast<receiver_adaptor<Derived, Base>>(d).base().

  Let BASE-TYPE(D) be the type of GET-BASE(declval<D>()).
- receiver_adaptor<Derived, Base> is equivalent to the following:

      template<class-type Derived, receiver Base = unspecified> // arguments are not associated entities ([lib.tmpl-heads])
        class receiver_adaptor {
          friend Derived;
        public:
          using is_receiver = unspecified;

          // Constructors
          receiver_adaptor() = default;
          template<class B>
              requires HAS-BASE && constructible_from<Base, B>
            explicit receiver_adaptor(B&& base) : base_(std::forward<B>(base)) {}

        private:
          using set_value = unspecified;
          using set_error = unspecified;
          using set_stopped = unspecified;
          using get_env = unspecified;

          // Member functions
          template<class Self>
              requires HAS-BASE
            decltype(auto) base(this Self&& self) noexcept {
              return (std::forward<Self>(self).base_);
            }

          // [exec.utils.rcvr.adptr.nonmembers] Non-member functions
          template<class... As>
            friend void tag_invoke(set_value_t, Derived&& self, As&&... as) noexcept;
          template<class E>
            friend void tag_invoke(set_error_t, Derived&& self, E&& e) noexcept;
          friend void tag_invoke(set_stopped_t, Derived&& self) noexcept;
          friend decltype(auto) tag_invoke(get_env_t, const Derived& self) noexcept(see below);

          [[no_unique_address]] Base base_; // present if and only if HAS-BASE is true
        };

- [Note: receiver_adaptor provides tag_invoke overloads on behalf of the derived class Derived, which is incomplete when receiver_adaptor is instantiated. -- end note]
- [Example:

      using _int_completion = completion_signatures<set_value_t(int)>;

      template<receiver_of<_int_completion> R>
        class my_receiver : receiver_adaptor<my_receiver<R>, R> {
          friend receiver_adaptor<my_receiver, R>;
          void set_value() && {
            set_value(std::move(*this).base(), 42);
          }
        public:
          using receiver_adaptor<my_receiver, R>::receiver_adaptor;
        };

  -- end example]
11.11.1.1. Non-member functions [exec.utils.rcvr.adptr.nonmembers]

      template<class... As>
        friend void tag_invoke(set_value_t, Derived&& self, As&&... as) noexcept;

- Let SET-VALUE be the expression std::move(self).set_value(std::forward<As>(as)...).
- Constraints: Either SET-VALUE is a valid expression or typename Derived::set_value denotes a type and callable<set_value_t, BASE-TYPE(Derived), As...> is true.
- Mandates: SET-VALUE, if that expression is valid, is not potentially-throwing.
- Effects: Equivalent to:
  - If SET-VALUE is a valid expression, SET-VALUE;
  - Otherwise, set_value(GET-BASE(std::move(self)), std::forward<As>(as)...).

      template<class E>
        friend void tag_invoke(set_error_t, Derived&& self, E&& e) noexcept;

- Let SET-ERROR be the expression std::move(self).set_error(std::forward<E>(e)).
- Constraints: Either SET-ERROR is a valid expression or typename Derived::set_error denotes a type and callable<set_error_t, BASE-TYPE(Derived), E> is true.
- Mandates: SET-ERROR, if that expression is valid, is not potentially-throwing.
- Effects: Equivalent to:
  - If SET-ERROR is a valid expression, SET-ERROR;
  - Otherwise, set_error(GET-BASE(std::move(self)), std::forward<E>(e)).

      friend void tag_invoke(set_stopped_t, Derived&& self) noexcept;

- Let SET-STOPPED be the expression std::move(self).set_stopped().
- Constraints: Either SET-STOPPED is a valid expression or typename Derived::set_stopped denotes a type and callable<set_stopped_t, BASE-TYPE(Derived)> is true.
- Mandates: SET-STOPPED, if that expression is valid, is not potentially-throwing.
- Effects: Equivalent to:
  - If SET-STOPPED is a valid expression, SET-STOPPED;
  - Otherwise, set_stopped(GET-BASE(std::move(self))).

      friend decltype(auto) tag_invoke(get_env_t, const Derived& self) noexcept(see below);

- Constraints: Either self.get_env() is a valid expression or typename Derived::get_env denotes a type and callable<get_env_t, BASE-TYPE(const Derived&)> is true.
- Effects: Equivalent to:
  - If self.get_env() is a valid expression, self.get_env();
  - Otherwise, std::get_env(GET-BASE(self)).
- Remarks: The expression in the noexcept clause is:
  - If self.get_env() is a valid expression, noexcept(self.get_env());
  - Otherwise, noexcept(std::get_env(GET-BASE(self))).
11.11.2. execution::completion_signatures [exec.utils.cmplsigs]

- completion_signatures is a type that encodes a set of completion signatures ([async.ops]).
- [Example:

      class my_sender {
        using completion_signatures =
          completion_signatures<
            set_value_t(),
            set_value_t(int, float),
            set_error_t(exception_ptr),
            set_error_t(error_code),
            set_stopped_t()>;
      };

      // Declares my_sender to be a sender that can complete by calling
      // one of the following for a receiver expression R:
      //    set_value(R)
      //    set_value(R, int{...}, float{...})
      //    set_error(R, exception_ptr{...})
      //    set_error(R, error_code{...})
      //    set_stopped(R)

  -- end example]
- This section makes use of the following exposition-only entities:

      template<class Fn>
        concept completion-signature = see below;

      template<bool>
        struct indirect-meta-apply {
          template<template<class...> class T, class... As>
            using meta-apply = T<As...>; // exposition only
        };

      template<class...>
        concept always-true = true; // exposition only

- A type Fn satisfies completion-signature if and only if it is a function type with one of the following forms:
  - set_value_t(Vs...), where Vs is an arbitrary parameter pack.
  - set_error_t(E), where E is an arbitrary type.
  - set_stopped_t()

      template<
        class Tag,
        class S, class E,
        template<class...> class Tuple,
        template<class...> class Variant>
          requires sender_in<S, E>
        using gather-signatures = see below;

- Let Fns... be a template parameter pack of the arguments of the completion_signatures specialization named by completion_signatures_of_t<S, E>, let TagFns be a template parameter pack of the function types in Fns whose return types are Tag, and let Ts_n... be a template parameter pack of the function argument types in the n-th type in TagFns. Then, given two variadic templates Tuple and Variant, the type gather-signatures<Tag, S, E, Tuple, Variant> names the type META-APPLY(Variant, META-APPLY(Tuple, Ts_0...), META-APPLY(Tuple, Ts_1...), ... META-APPLY(Tuple, Ts_m-1...)), where m is the size of the parameter pack TagFns and META-APPLY(T, As...) is equivalent to:

      typename indirect-meta-apply<always-true<As...>>::template meta-apply<T, As...>;

- The purpose of META-APPLY is to make it valid to use non-variadic templates as Variant and Tuple arguments to gather-signatures.

      template<completion-signature... Fns>
        struct completion_signatures {};

      template<
        class S,
        class E = empty_env,
        template<class...> class Tuple = decayed-tuple,
        template<class...> class Variant = variant-or-empty>
          requires sender_in<S, E>
        using value_types_of_t =
          gather-signatures<set_value_t, S, E, Tuple, Variant>;

      template<
        class S,
        class E = empty_env,
        template<class...> class Variant = variant-or-empty>
          requires sender_in<S, E>
        using error_types_of_t =
          gather-signatures<set_error_t, S, E, type_identity_t, Variant>;

      template<class S, class E = empty_env>
          requires sender_in<S, E>
        inline constexpr bool sends_stopped =
          !same_as<type-list<>, gather-signatures<set_stopped_t, S, E, type-list, type-list>>;
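The gather-signatures machinery can be illustrated by a toy re-implementation specialized to set_value_t: walk the signature list, keep the signatures whose return type is set_value_t, and collect their argument lists into Variant<Tuple<...>...> (here fixed to std::variant of std::tuple). Everything below is a simplified sketch, not the exposition-only wording itself:

```cpp
#include <tuple>
#include <type_traits>
#include <variant>

struct set_value_t;
struct set_error_t;
struct set_stopped_t;

template <class... Sigs>
struct completion_signatures {};

template <class Collected, class... Sigs>
struct gather_values_impl;

// No signatures left: the collected variant is the result.
template <class... Collected>
struct gather_values_impl<std::variant<Collected...>> {
  using type = std::variant<Collected...>;
};

// A set_value signature: append its argument list as a tuple.
template <class... Collected, class... Ts, class... Rest>
struct gather_values_impl<std::variant<Collected...>, set_value_t(Ts...), Rest...>
  : gather_values_impl<std::variant<Collected..., std::tuple<Ts...>>, Rest...> {};

// Any other signature: skip it.
template <class... Collected, class Sig, class... Rest>
struct gather_values_impl<std::variant<Collected...>, Sig, Rest...>
  : gather_values_impl<std::variant<Collected...>, Rest...> {};

template <class CS>
struct gather_values;

template <class... Sigs>
struct gather_values<completion_signatures<Sigs...>>
  : gather_values_impl<std::variant<>, Sigs...> {};

// Analogue of value_types_of_t with Tuple = std::tuple, Variant = std::variant:
using sigs = completion_signatures<
    set_value_t(), set_value_t(int, float),
    set_error_t(int), set_stopped_t()>;
static_assert(std::is_same_v<
    gather_values<sigs>::type,
    std::variant<std::tuple<>, std::tuple<int, float>>>);
```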
11.11.3. execution::make_completion_signatures [exec.utils.mkcmplsigs]

- make_completion_signatures is an alias template used to adapt the completion signatures of a sender. It takes a sender, an environment, and several other template arguments that apply modifications to the sender's completion signatures to generate a new specialization of completion_signatures.
- [Example:

      // Given a sender S and an environment Env, adapt S's completion
      // signatures by lvalue-ref qualifying the values, adding an additional
      // exception_ptr error completion if it's not already there, and leaving
      // the other completion signatures alone.
      template<class... Args>
        using my_set_value_t =
          completion_signatures<set_value_t(add_lvalue_reference_t<Args>...)>;

      using my_completion_signatures =
        make_completion_signatures<
          S, Env,
          completion_signatures<set_error_t(exception_ptr)>,
          my_set_value_t>;

  -- end example]
- This section makes use of the following exposition-only entities:

      template<class... As>
        using default-set-value = completion_signatures<set_value_t(As...)>;

      template<class Err>
        using default-set-error = completion_signatures<set_error_t(Err)>;

      template<
        sender Sndr,
        class Env = empty_env,
        valid-completion-signatures AddlSigs = completion_signatures<>,
        template<class...> class SetValue = default-set-value,
        template<class> class SetError = default-set-error,
        valid-completion-signatures SetStopped = completion_signatures<set_stopped_t()>>
          requires sender_in<Sndr, Env>
        using make_completion_signatures = completion_signatures<see below>;

- SetValue shall name an alias template such that for any template parameter pack As..., the type SetValue<As...> is either ill-formed or else valid-completion-signatures<SetValue<As...>> is satisfied.
- SetError shall name an alias template such that for any type Err, SetError<Err> is either ill-formed or else valid-completion-signatures<SetError<Err>> is satisfied.

  Then:
  - Let Vs... be a pack of the types in the type-list named by value_types_of_t<Sndr, Env, SetValue, type-list>.
  - Let Es... be a pack of the types in the type-list named by error_types_of_t<Sndr, Env, error-list>, where error-list is an alias template such that error-list<Ts...> names type-list<SetError<Ts>...>.
  - Let Ss name the type completion_signatures<> if sends_stopped<Sndr, Env> is false; otherwise, SetStopped.

  Then:
  - If any of the above types are ill-formed, then make_completion_signatures<Sndr, Env, AddlSigs, SetValue, SetError, SetStopped> is ill-formed,
  - Otherwise, make_completion_signatures<Sndr, Env, AddlSigs, SetValue, SetError, SetStopped> names the type completion_signatures<Sigs...>, where Sigs... is the unique set of types in all the template arguments of all the completion_signatures specializations in [AddlSigs, Vs..., Es..., Ss].
11.12. Execution contexts [exec.ctx]

- This section specifies some execution resources on which work can be scheduled.

11.12.1. run_loop [exec.run.loop]

- A run_loop is an execution resource on which work can be scheduled. It maintains a simple, thread-safe first-in-first-out queue of work. Its run() member function removes elements from the queue and executes them in a loop on whatever thread of execution calls run().
- A run_loop instance has an associated count that corresponds to the number of work items that are in its queue. Additionally, a run_loop has an associated state that can be one of starting, running, or finishing.
- Concurrent invocations of the member functions of run_loop, other than run and its destructor, do not introduce data races. The member functions pop_front, push_back, and finish execute atomically.
- [Note: Implementations are encouraged to use an intrusive queue of operation states to hold the work units to make scheduling allocation-free. -- end note]

      class run_loop {
        // [exec.run.loop.types] Associated types
        class run-loop-scheduler; // exposition only
        class run-loop-sender; // exposition only
        struct run-loop-opstate-base { // exposition only
          virtual void execute() = 0;
          run_loop* loop_;
          run-loop-opstate-base* next_;
        };
        template<receiver_of<completion_signatures<set_value_t()>> R>
          using run-loop-opstate = unspecified; // exposition only

        // [exec.run.loop.members] Member functions:
        run-loop-opstate-base* pop_front(); // exposition only
        void push_back(run-loop-opstate-base*); // exposition only

       public:
        // [exec.run.loop.ctor] construct/copy/destroy
        run_loop() noexcept;
        run_loop(run_loop&&) = delete;
        ~run_loop();

        // [exec.run.loop.members] Member functions:
        run-loop-scheduler get_scheduler();
        void run();
        void finish();
      };
11.12.1.1. Associated types [exec.run.loop.types]

      class run-loop-scheduler;

- run-loop-scheduler is an unspecified type that models the scheduler concept.
- Instances of run-loop-scheduler remain valid until the end of the lifetime of the run_loop instance from which they were obtained.
- Two instances of run-loop-scheduler compare equal if and only if they were obtained from the same run_loop instance.
- Let sch be an expression of type run-loop-scheduler. The expression schedule(sch) is not potentially-throwing and has type run-loop-sender.

      class run-loop-sender;

- run-loop-sender is an unspecified type such that sender_of<run-loop-sender, set_value_t()> is true. Additionally, the type reported by its error_types associated type is exception_ptr, and the value of its sends_stopped trait is true.
- An instance of run-loop-sender remains valid until the end of the lifetime of its associated run_loop instance.
- Let s be an expression of type run-loop-sender, let r be an expression such that decltype(r) models the receiver_of concept, and let C be either set_value_t or set_stopped_t. Then:
  - The expression connect(s, r) has type run-loop-opstate<decay_t<decltype(r)>> and is potentially-throwing if and only if the initialization of decay_t<decltype(r)> from r is potentially-throwing.
  - The expression get_completion_scheduler<C>(get_env(s)) is not potentially-throwing, has type run-loop-scheduler, and compares equal to the run-loop-scheduler instance from which s was obtained.

      template<receiver_of<completion_signatures<set_value_t()>> R> // arguments are not associated entities ([lib.tmpl-heads])
        struct run-loop-opstate;

- run-loop-opstate<R> inherits unambiguously from run-loop-opstate-base.
- Let o be a non-const lvalue of type run-loop-opstate<R>, and let REC(o) be a non-const lvalue reference to an instance of type R that was initialized with the expression r passed to the invocation of connect that returned o. Then:
  - The object to which REC(o) refers remains valid for the lifetime of the object to which o refers.
  - The type run-loop-opstate<R> overrides run-loop-opstate-base::execute() such that o.execute() is equivalent to the following:

        if (get_stop_token(REC(o)).stop_requested()) {
          set_stopped(std::move(REC(o)));
        } else {
          set_value(std::move(REC(o)));
        }

  - The expression start(o) is equivalent to the following:

        try {
          o.loop_->push_back(&o);
        } catch(...) {
          set_error(std::move(REC(o)), current_exception());
        }
11.12.1.2. Constructor and destructor [exec.run.loop.ctor]

      run_loop::run_loop() noexcept;

- Postconditions: count is 0 and state is starting.

      run_loop::~run_loop();

- Effects: If count is not 0 or if state is running, invokes terminate(). Otherwise, has no effects.

11.12.1.3. Member functions [exec.run.loop.members]

      run-loop-opstate-base* run_loop::pop_front();

- Effects: Blocks ([defns.block]) until one of the following conditions is true:
  - count is 0 and state is finishing, in which case pop_front returns nullptr; or
  - count is greater than 0, in which case an item is removed from the front of the queue, count is decremented by 1, and the removed item is returned.

      void run_loop::push_back(run-loop-opstate-base* item);

- Effects: Adds item to the back of the queue and increments count by 1.
- Synchronization: This operation synchronizes with the pop_front operation that obtains item.

      run-loop-scheduler run_loop::get_scheduler();

- Returns: an instance of run-loop-scheduler that can be used to schedule work onto this run_loop instance.

      void run_loop::run();

- Effects: Equivalent to:

        while (auto* op = pop_front()) {
          op->execute();
        }

- Precondition: state is starting.
- Postcondition: state is finishing.
- Remarks: While the loop is executing, state is running. When state changes, it does so without introducing data races.

      void run_loop::finish();

- Effects: Changes state to finishing.
- Synchronization: This operation synchronizes with all pop_front operations on this object.
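The queue semantics above (push_back enqueues and wakes the loop; run() drains in FIFO order on the calling thread; finish() lets run() return once the queue is empty) can be modeled in a few dozen lines. This mini_run_loop is a simplified, non-intrusive sketch: the real run_loop links operation states intrusively and allocation-free, whereas this one stores std::function for brevity:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

class mini_run_loop {
  std::deque<std::function<void()>> queue_;
  std::mutex mtx_;
  std::condition_variable cv_;
  bool finishing_ = false;

 public:
  void push_back(std::function<void()> work) {
    {
      std::lock_guard lk(mtx_);
      queue_.push_back(std::move(work));
    }
    cv_.notify_one();
  }

  void finish() {
    {
      std::lock_guard lk(mtx_);
      finishing_ = true;
    }
    cv_.notify_all();
  }

  // Analogue of: while (auto* op = pop_front()) op->execute();
  void run() {
    for (;;) {
      std::unique_lock lk(mtx_);
      cv_.wait(lk, [&] { return !queue_.empty() || finishing_; });
      if (queue_.empty()) return;  // count is 0 and state is finishing
      auto work = std::move(queue_.front());
      queue_.pop_front();
      lk.unlock();
      work();  // execute outside the lock, on the run() thread
    }
  }
};
```

Calling finish() before run() is also fine: run() drains whatever was queued and then returns, mirroring pop_front's two wake-up conditions.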
11.13. Coroutine utilities [exec.coro.utils]

11.13.1. execution::as_awaitable [exec.as.awaitable]

- as_awaitable transforms an object into one that is awaitable within a particular coroutine. This section makes use of the following exposition-only entities:

      template<class S, class E>
        using single-sender-value-type = see below;

      template<class S, class E>
        concept single-sender =
          sender_in<S, E> &&
          requires { typename single-sender-value-type<S, E>; };

      template<class S, class P>
        concept awaitable-sender =
          single-sender<S, ENV-OF(P)> &&
          sender_to<S, awaitable-receiver> && // see below
          requires (P& p) {
            { p.unhandled_stopped() } -> convertible_to<coroutine_handle<>>;
          };

      template<class S, class P>
        class sender-awaitable;

  where ENV-OF(P) names the type env_of_t<P> if that type is well-formed, or empty_env otherwise.

- Alias template single-sender-value-type is defined as follows:
  - If value_types_of_t<S, E, Tuple, Variant> would have the form Variant<Tuple<T>>, then single-sender-value-type<S, E> is an alias for type decay_t<T>.
  - Otherwise, if value_types_of_t<S, E, Tuple, Variant> would have the form Variant<Tuple<>> or Variant<>, then single-sender-value-type<S, E> is an alias for type void.
  - Otherwise, single-sender-value-type<S, E> is ill-formed.
- The type sender-awaitable<S, P> is equivalent to the following:

      template<class S, class P> // arguments are not associated entities ([lib.tmpl-heads])
        class sender-awaitable {
          struct unit {};
          using value_t = single-sender-value-type<S, ENV-OF(P)>;
          using result_t = conditional_t<is_void_v<value_t>, unit, value_t>;
          struct awaitable-receiver;

          variant<monostate, result_t, exception_ptr> result_{};
          connect_result_t<S, awaitable-receiver> state_;

        public:
          sender-awaitable(S&& s, P& p);
          bool await_ready() const noexcept { return false; }
          void await_suspend(coroutine_handle<P>) noexcept { start(state_); }
          value_t await_resume();
        };

- awaitable-receiver is equivalent to the following:

      struct awaitable-receiver {
        using is_receiver = unspecified;
        variant<monostate, result_t, exception_ptr>* result_ptr_;
        coroutine_handle<P> continuation_;
        // ... see below
      };

  Let r be an rvalue expression of type awaitable-receiver, let cr be a const lvalue that refers to r, let vs... be an arbitrary function parameter pack of types Vs..., and let err be an arbitrary expression of type Err. Then:
  - If constructible_from<result_t, Vs...> is satisfied, the expression set_value(r, vs...) is equivalent to:

        try {
          r.result_ptr_->emplace<1>(vs...);
        } catch(...) {
          r.result_ptr_->emplace<2>(current_exception());
        }
        r.continuation_.resume();

    Otherwise, set_value(r, vs...) is ill-formed.
  - The expression set_error(r, err) is equivalent to:

        r.result_ptr_->emplace<2>(AS-EXCEPT-PTR(err));
        r.continuation_.resume();

    where AS-EXCEPT-PTR(err) is:
    - err, if decay_t<Err> names the same type as exception_ptr,
    - Otherwise, make_exception_ptr(system_error(err)), if decay_t<Err> names the same type as error_code,
    - Otherwise, make_exception_ptr(err).
  - The expression set_stopped(r) is equivalent to static_cast<coroutine_handle<>>(r.continuation_.promise().unhandled_stopped()).resume().
  - For any expression tag whose type satisfies forwarding-query and for any pack of subexpressions as, tag_invoke(tag, get_env(cr), as...) is expression-equivalent to tag(get_env(as_const(cr.continuation_.promise())), as...) when that expression is well-formed.
- sender-awaitable::sender-awaitable(S&& s, P& p)

  - Effects: initializes state_ with connect(std::forward<S>(s), awaitable-receiver{&result_, coroutine_handle<P>::from_promise(p)}).
- value_t sender-awaitable::await_resume()

  - Effects: equivalent to:

        if (result_.index() == 2)
          rethrow_exception(get<2>(result_));
        if constexpr (!is_void_v<value_t>)
          return std::forward<value_t>(get<1>(result_));
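The resume-side counterpart of the storage protocol can be sketched the same way. The names (result_type, await_resume_like) are ours: if slot 2 holds a captured exception it is rethrown, otherwise the stored value in slot 1 is handed back to the awaiting coroutine.

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <utility>
#include <variant>

// Illustrative sketch (our own names) of await_resume's logic, using the same
// variant layout as awaitable-receiver: slot 1 = value, slot 2 = exception_ptr.
using result_type = std::variant<std::monostate, int, std::exception_ptr>;

int await_resume_like(result_type& result) {
  if (result.index() == 2)
    std::rethrow_exception(std::get<2>(result));  // error case: rethrow
  return std::get<1>(result);                     // value case: return result
}
```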
as_awaitable is a customization point object. For some subexpressions e and p where p is an lvalue, E names the type decltype((e)) and P names the type decltype((p)), as_awaitable(e, p) is expression-equivalent to the following:
- tag_invoke(as_awaitable, e, p) if that expression is well-formed.

  - Mandates: is-awaitable<A, P> is true, where A is the type of the tag_invoke expression above.
- Otherwise, e if is-awaitable<E, U> is true, where U is an unspecified class type that lacks a member named await_transform. The condition is not is-awaitable<E, P> as that creates the potential for constraint recursion.

  - Preconditions: is-awaitable<E, P> is true and the expression co_await e in a coroutine with promise type U is expression-equivalent to the same expression in a coroutine with promise type P.
- Otherwise, sender-awaitable{e, p} if awaitable-sender<E, P> is true.

- Otherwise, e.
11.13.2. execution::with_awaitable_senders [exec.with.awaitable.senders]
with_awaitable_senders, when used as the base class of a coroutine promise type, makes senders awaitable in that coroutine type.

In addition, it provides a default implementation of unhandled_stopped() such that if a sender completes by calling set_stopped, it is treated as if an uncatchable "stopped" exception were thrown from the await-expression. In practice, the coroutine is never resumed, and the unhandled_stopped of the coroutine caller's promise type is called.

    template <class-type Promise>
    struct with_awaitable_senders {
      template <class OtherPromise>
        requires (!same_as<OtherPromise, void>)
      void set_continuation(coroutine_handle<OtherPromise> h) noexcept;

      coroutine_handle<> continuation() const noexcept { return continuation_; }

      coroutine_handle<> unhandled_stopped() noexcept {
        return stopped_handler_(continuation_.address());
      }

      template <class Value>
      see-below await_transform(Value&& value);

     private:
      [[noreturn]] static coroutine_handle<>
      default_unhandled_stopped(void*) noexcept {             // exposition only
        terminate();
      }
      coroutine_handle<> continuation_{};                     // exposition only
      coroutine_handle<> (*stopped_handler_)(void*) noexcept  // exposition only
        = &default_unhandled_stopped;
    };
- void set_continuation(coroutine_handle<OtherPromise> h) noexcept

  - Effects: equivalent to:

        continuation_ = h;
        if constexpr (requires(OtherPromise& other) { other.unhandled_stopped(); }) {
          stopped_handler_ = [](void* p) noexcept -> coroutine_handle<> {
            return coroutine_handle<OtherPromise>::from_address(p)
              .promise().unhandled_stopped();
          };
        } else {
          stopped_handler_ = &default_unhandled_stopped;
        }
- call-result-t<as_awaitable_t, Value, Promise&> await_transform(Value&& value)

  - Effects: equivalent to:

        return as_awaitable(std::forward<Value>(value), static_cast<Promise&>(*this));