1. Introduction
This paper proposes a self-contained design for a Standard C++ framework for managing asynchronous execution on generic execution contexts. It is based on the ideas in [P0443R14] and its companion papers.
1.1. Motivation
Today, C++ software is increasingly asynchronous and parallel, a trend that is likely to only continue going forward. Asynchrony and parallelism appear everywhere, from processor hardware interfaces, to networking, to file I/O, to GUIs, to accelerators. Every C++ domain and every platform needs to deal with asynchrony and parallelism, from scientific computing to video games to financial services, from the smallest mobile devices to your laptop to GPUs in the world’s fastest supercomputer.
While the C++ Standard Library has a rich set of concurrency primitives (std::jthread, std::mutex, std::counting_semaphore, etc.) and lower-level building blocks (std::atomic, etc.), we lack a Standard vocabulary and framework for asynchrony and parallelism that C++ programmers desperately need. std::async/std::future/std::promise, C++11’s intended exposure for asynchrony, is inefficient, hard to use correctly, and severely lacking in genericity, making it unusable in many contexts. We introduced parallel algorithms to the C++ Standard Library in C++17, and while they are an excellent start, they are all inherently synchronous and not composable.
This paper proposes a Standard C++ model for asynchrony, based around three key abstractions: schedulers, senders, and receivers, and a set of customizable asynchronous algorithms.
1.2. Priorities
- 
     Be composable and generic, allowing users to write code that can be used with many different types of execution contexts. 
- 
     Encapsulate common asynchronous patterns in customizable and reusable algorithms, so users don’t have to invent things themselves. 
- 
     Make it easy to be correct by construction. 
- 
     Support both lazy and eager execution in a way that does not compromise the efficiency of either and allows users to write code that is agnostic to eagerness. 
- 
     Support the diversity of execution contexts and execution agents, because not all execution agents are created equal; some are less capable than others, but not less important. 
- 
     Allow everything to be customized by an execution context, including transfer to other execution contexts, but don’t require that execution contexts customize everything. 
- 
     Care about all reasonable use cases, domains and platforms. 
- 
     Errors must be propagated, but error handling must not present a burden. 
- 
     Support cancellation, which is not an error. 
- 
     Have clear and concise answers for where things execute. 
- 
     Be able to manage and terminate the lifetimes of objects asynchronously. 
1.3. Examples
See § 4.12 User-facing sender factories, § 4.13 User-facing sender adaptors, and § 4.14 User-facing sender consumers for short explanations of the algorithms used in these code examples.
1.3.1. Hello world
using namespace std::execution;

scheduler auto sch = get_thread_pool().scheduler();                          // 1

sender auto begin = schedule(sch);                                           // 2
sender auto hi_again = then(begin, []{                                       // 3
    std::cout << "Hello world! Have an int.";                                // 3
    return 13;                                                               // 3
});                                                                          // 3
sender auto add_42 = then(hi_again, [](int arg) { return arg + 42; });       // 4

auto [i] = this_thread::sync_wait(add_42).value();                           // 5
This example demonstrates the basics of schedulers, senders, and receivers:
- 
     First we need to get a scheduler from somewhere, such as a thread pool. A scheduler is a lightweight handle to an execution resource. 
- 
     To start a chain of work on a scheduler, we call § 4.12.1 execution::schedule, which returns a sender that completes on the scheduler. sender describes asynchronous work and sends a signal (value, error, or done) to some recipient(s) when that work completes. 
- 
     We use sender algorithms to produce senders and compose asynchronous work. § 4.13.2 execution::then is a sender adaptor that takes an input sender and a std::invocable, and calls the std::invocable on the signal sent by the input sender. The sender returned by then sends the result of that invocation. In this case, the input sender came from schedule, so it sends no value, and our std::invocable takes no parameters; it returns an int, which will be sent to the next recipient. 
- 
     Now, we add another operation to the chain, again using § 4.13.2 execution::then. This time, we get sent a value - the int from the previous step. We add 42 to it, and send the result along to the next recipient. 
- 
     Finally, we’re ready to submit the entire asynchronous pipeline and wait for its completion. Everything up until this point has been completely asynchronous; the work may not have even started yet. To ensure the work has started and then block pending its completion, we use § 4.14.2 this_thread::sync_wait, which will either return a std::optional<std::tuple<...>> with the value sent by the last sender, return an empty std::optional if the last sender sent a done signal, or throw an exception if the last sender sent an error. 
1.3.2. Asynchronous inclusive scan
using namespace std::execution;

sender auto async_inclusive_scan(scheduler auto sch,                          // 2
                                 std::span<const double> input,               // 1
                                 std::span<double> output,                    // 1
                                 double init,                                 // 1
                                 std::size_t tile_count)                      // 3
{
  std::size_t const tile_size = (input.size() + tile_count - 1) / tile_count;

  std::vector<double> partials(tile_count + 1);                               // 4
  partials[0] = init;                                                         // 4

  return transfer_just(sch, std::move(partials))                              // 5
       | bulk(tile_count,                                                     // 6
           [=](std::size_t i, std::vector<double>& partials) {                // 7
             auto start = i * tile_size;                                      // 8
             auto end   = std::min(input.size(), (i + 1) * tile_size);        // 8
             partials[i + 1] = *--std::inclusive_scan(begin(input) + start,   // 9
                                                      begin(input) + end,     // 9
                                                      begin(output) + start); // 9
           })                                                                 // 10
       | then(                                                                // 11
           [](std::vector<double>& partials) {
             std::inclusive_scan(begin(partials), end(partials),              // 12
                                 begin(partials));                            // 12
             return std::move(partials);                                      // 13
           })
       | bulk(tile_count,                                                     // 14
           [=](std::size_t i, std::vector<double>& partials) {                // 14
             auto start = i * tile_size;                                      // 14
             auto end   = std::min(input.size(), (i + 1) * tile_size);        // 14
             std::for_each(begin(output) + start, begin(output) + end,        // 14
               [&](double& e) { e = partials[i] + e; }                        // 14
             );
           })
       | then(                                                                // 15
           [=](std::vector<double>& partials) {                               // 15
             return output;                                                   // 15
           });                                                                // 15
}
This example builds an asynchronous computation of an inclusive scan:
- 
     It scans a sequence of doubles (represented by the std::span<const double> input) and stores the result in another sequence of doubles (represented by the std::span<double> output). 
- 
     It takes a scheduler, which specifies what execution context the scan should be launched on. 
- 
     It also takes a tile_count parameter that controls the number of execution agents that will be spawned to perform the scan. 
- 
     First we need to allocate temporary storage needed for the algorithm, which we’ll do with a std::vector, partials. We need one double of temporary storage per execution agent, plus one extra slot for the initial value. 
- 
     Next we’ll create our initial sender with § 4.12.3 execution::transfer_just. This sender will send the temporary storage, which we’ve moved into the sender. The sender has a completion scheduler of sch, which means the work attached to it will run on the execution context associated with sch. 
- 
     Senders and sender adaptors support composition via operator|, similarly to C++ ranges. Here we use operator| to attach the next piece of work: a § 4.13.7 execution::bulk node that spawns tile_count execution agents. 
- 
     Each agent will call a std::invocable, passing it two arguments: the agent’s index i, which is unique within the launch and lies in the range [0, tile_count), and the value sent by the previous sender - here, the partials vector. 
- 
     We start by computing the start and end of the range of input and output elements that this agent is responsible for, based on our agent index. 
- 
     Then we do a sequential std::inclusive_scan over our tile of the input, writing the results into our tile of the output, and store the sum of our tile’s elements into our slot in partials. 
- 
     After all computation in that initial § 4.13.7 execution::bulk pass has completed, every one of the spawned execution agents will have written the sum of its elements into its slot in partials. 
- 
     Now we need to scan all of the values in partials. We’ll do that with a single execution agent that runs after the first § 4.13.7 execution::bulk pass completes; we create that agent with § 4.13.2 execution::then. 
- 
     § 4.13.2 execution::then takes an input sender and a std::invocable, and calls the std::invocable with the values sent by the input sender. Inside our std::invocable, we call std::inclusive_scan on partials, which the input sender sends to us. 
- 
     Then we return partials, which the next phase will need. 
- 
     Finally we do another § 4.13.7 execution::bulk of the same shape as before. In this § 4.13.7 execution::bulk, we will use the scanned values in partials to add the proper partial sum to every element of our tile of the output. 
- 
     async_inclusive_scan returns a sender that sends the output std::span<double>. A consumer of the algorithm can attach more work to that sender; note that when async_inclusive_scan returns, the computation has not necessarily completed - it may not even have started. (A short usage sketch follows this list.) 
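For illustration, a minimal sketch of how a caller might use async_inclusive_scan; the thread-pool accessor and the data set-up here are assumptions, not part of the proposal:

using namespace std::execution;

std::vector<double> in(1'000'000, 1.0);
std::vector<double> out(in.size());

scheduler auto sch = get_thread_pool().scheduler();   // assumed source of a scheduler

sender auto scan = async_inclusive_scan(
    sch, std::span<const double>(in), std::span<double>(out), 0.0, /*tile_count=*/16);

// block until the whole pipeline completes; the sent std::span<double> views `out`
auto [result] = this_thread::sync_wait(std::move(scan)).value();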
1.3.3. Asynchronous dynamically-sized read
using namespace std::execution;

sender_of<std::size_t> auto async_read(                                    // 1
    sender_of<std::span<std::byte>> auto buffer,                           // 1
    auto handle);                                                          // 1

struct dynamic_buffer {                                                    // 3
  std::unique_ptr<std::byte[]> data;                                       // 3
  std::size_t size;                                                        // 3
};                                                                         // 3

sender_of<dynamic_buffer> auto async_read_array(auto handle) {             // 2
  return just(dynamic_buffer{})                                            // 4
       | let_value([handle] (dynamic_buffer& buf) {                        // 5
           return just(std::as_writable_bytes(std::span(&buf.size, 1)))    // 6
                | async_read(handle)                                       // 7
                | then(                                                    // 8
                    [&] (std::size_t bytes_read) {                         // 9
                      assert(bytes_read == sizeof(buf.size));              // 10
                      buf.data = std::make_unique<std::byte[]>(buf.size);  // 11
                      return std::span(buf.data.get(), buf.size);          // 12
                    })
                | async_read(handle)                                       // 13
                | then(
                    [&] (std::size_t bytes_read) {
                      assert(bytes_read == buf.size);                      // 14
                      return std::move(buf);                               // 15
                    });
         });
}
This example demonstrates a common asynchronous I/O pattern - reading a payload of a dynamic size by first reading the size, then reading the number of bytes specified by the size:
- 
     async_read is a pipeable sender adaptor: it takes a sender of a buffer (a std::span<std::byte>) and an I/O handle, asynchronously reads into that std::span, and sends the number of bytes read when the read completes. 
- 
     async_read_array takes an I/O handle, reads a size from it, and then reads that many bytes; it returns a sender which sends the resulting dynamic_buffer object. 
- 
     dynamic_buffer is an aggregate struct that contains a std::unique_ptr<std::byte[]> and a size. 
- 
     The first thing we do inside of async_read_array is create a sender that will send a new, empty dynamic_buffer object, using § 4.12.2 execution::just. We can then attach more work to the pipeline using operator| composition. 
- 
     We need the lifetime of this dynamic_buffer object to last for the entire pipeline, so we use let_value, which takes an input sender and a std::invocable that must itself return a sender (see § 4.13.4 execution::let_*). let_value sends the value from the input sender to the std::invocable; critically, the lifetime of that value lasts until the sender returned by the std::invocable completes. 
- 
     Inside of the let_value std::invocable, we have the rest of our logic. First, we want to initiate an async_read of the buffer size; to do that, we need to send a std::span pointing at buf.size, which we do with § 4.12.2 execution::just. 
- 
     We chain the async_read onto that sender with operator|. 
- 
     Next, we pipe a std::invocable that will be invoked after the async_read completes, using § 4.13.2 execution::then. 
- 
     That std::invocable gets sent the number of bytes read. 
- 
     We need to check that the number of bytes read is what we expected. 
- 
     Now that we have read the size of the data, we can allocate storage for it. 
- 
     We return a std::span<std::byte> over the newly allocated storage from our std::invocable, which will be sent to the next recipient in the pipeline. 
- 
     And that recipient will be another async_read, which will read the data itself. 
- 
     Once the data has been read, in another § 4.13.2 execution::then, we confirm that we read the right number of bytes. 
- 
     Finally, we move out of and return our dynamic_buffer object. It will be sent by the sender returned by async_read_array, to which a caller can attach further work. 
1.4. What this proposal is not
This paper is not a patch on top of [P0443R14]; we are not asking to update the existing paper, we are asking to retire it in favor of this paper, which is already self-contained; any example code within this paper can be written in Standard C++, without the need to standardize any further facilities.
This paper is not an alternative design to [P0443R14]; rather, we have taken the design in the current executors paper, and applied targeted fixes to allow it to fulfill the promises of the sender/receiver model, as well as provide all the facilities we consider essential when writing user code using standard execution concepts; we have also applied the guidance of removing one-way executors from the paper entirely, and instead provided an algorithm based around senders that serves the same purpose.
1.5. Design changes from P0443
- 
     The executor concept has been removed, and all of its proposed functionality is now based on schedulers and senders, as per LEWG direction. 
- 
     Properties are not included in this paper. We see them as a possible future extension, if the committee gets more comfortable with them. 
- 
     Users now have a choice between using a strictly lazy vs a possibly eager version of most sender algorithms. 
- 
     Senders now advertise what scheduler, if any, their evaluation will complete on. 
- 
     The places of execution of user code in P0443 weren’t precisely defined, whereas they are in this paper. See § 4.5 Senders can propagate completion schedulers. 
- 
     P0443 did not propose a suite of sender algorithms necessary for writing sender code; this paper does. See § 4.12 User-facing sender factories, § 4.13 User-facing sender adaptors, and § 4.14 User-facing sender consumers. 
- 
     P0443 did not specify the semantics of variously qualified connect calls; this paper does. See § 4.7 Senders can be either multi-shot or single-shot. 
- 
     Specific type erasure facilities are omitted, as per LEWG direction. Type erasure facilities can be built on top of this proposal, as discussed in § 5.9 Ranges-style CPOs vs tag_invoke. 
- 
     A specific thread pool implementation is omitted, as per LEWG direction. 
1.6. Prior art
This proposal builds upon and learns from years of prior art with asynchronous and parallel programming frameworks in C++.
Futures, as traditionally realized, require the dynamic allocation and management of a shared state, synchronization, and typically type-erasure of work and continuation. Many of these costs are inherent in the nature of "future" as a handle to work that is already scheduled for execution. These expenses rule out the future abstraction for many uses and make it a poor choice for a basis of a generic mechanism.
Coroutines suffer many of the same problems, but can avoid synchronizing when chaining dependent work because they typically start suspended. In many cases, coroutine frames require unavoidable dynamic allocation. Consequently, coroutines in embedded or heterogeneous environments require great attention to detail. Nor are coroutines good candidates for cancellation, because the early and safe termination of coroutines requires unsatisfying solutions. On the one hand, exceptions are inefficient and disallowed in many environments. Alternatively, clumsy ad-hoc mechanisms, whereby co_yield returns a status code, hinder correctness.
Callbacks are the simplest, most powerful, and most efficient mechanism for creating chains of work, but suffer problems of their own. Callbacks must propagate either errors or values. This simple requirement yields many different interface possibilities, but the lack of a standard obstructs generic design. Additionally, few of these possibilities accommodate cancellation signals when the user requests upstream work to stop and clean up.
1.7. Field experience
This proposal draws heavily from our field experience with libunifex, Thrust, and Agency. It is also inspired by the needs of countless other C++ frameworks for asynchrony, parallelism, and concurrency.
Before this proposal is approved, we will present a new implementation of this proposal written from the specification and intended as a contribution to libc++. This implementation will demonstrate the viability of the design across the use cases and execution contexts that the committee has identified as essential.
2. Revision history
2.1. R1
The changes since R0 are as follows:
- 
     Added a new concept, sender_of 
- 
     Added a new scheduler query, this_thread :: execute_may_block_caller 
- 
     Added a new scheduler query, get_forward_progress_guarantee 
- 
     Removed the unschedule adaptor. 
- 
     Various fixes of typos and bugs. 
2.2. R0
Initial revision.
3. Design - introduction
The following four sections describe the entirety of the proposed design.
- 
     § 3 Design - introduction describes the conventions used through the rest of the design sections, as well as an example illustrating how we envision code will be written using this proposal. 
- 
     § 4 Design - user side describes all the functionality from the perspective we intend for users: it describes the various concepts they will interact with, and what their programming model is. 
- 
     § 5 Design - implementer side describes the machinery that allows for that programming model to function, and the information contained there is necessary for people implementing senders and sender algorithms (including the standard library ones) - but is not necessary to use senders productively. 
3.1. Conventions
The following conventions are used throughout the design section:
- 
     The namespace proposed in this paper is the same as in [P0443R14]: std::execution. For brevity, the std::execution:: prefix is frequently omitted, so an unqualified name like foo should be read as std::execution::foo. 
- 
     Universal references and explicit calls to std::move/std::forward are omitted in the design sections for brevity. 
- 
     None of the names proposed here are names that we are particularly attached to; consider the names to be reasonable placeholders that can freely be changed, should the committee want to do so. 
3.2. Queries and algorithms
A query is a callable that takes some set of objects (usually one) as parameters and returns facts about those objects without modifying them.
An algorithm is a callable that takes some set of objects as parameters and causes those objects to do something.
4. Design - user side
4.1. Execution contexts describe the place of execution
An execution context is a resource that represents the place where execution will happen. This could be a concrete resource - like a specific thread pool object, or a GPU - or a more abstract one, like the current thread of execution. Execution contexts don’t need to have a representation in code; they are simply a term describing certain properties of execution of a function.
4.2. Schedulers represent execution contexts
A scheduler is a lightweight handle that represents a strategy for scheduling work onto an execution context. Since execution contexts don’t necessarily manifest in C++ code, it’s not possible to program
directly against their API. A scheduler is a solution to that problem: the scheduler concept is defined by a single sender algorithm, schedule, which returns a sender that will complete on an execution context determined by the scheduler. Here is an example:
execution::scheduler auto sch = get_thread_pool().scheduler();
execution::sender auto snd = execution::schedule(sch);
// snd is a sender (see below) describing the creation of a new execution resource
// on the execution context associated with sch
Note that a particular scheduler type may provide other kinds of scheduling operations
which are supported by its associated execution context. It is not limited to scheduling
purely using the execution::schedule API.
Future papers will propose additional scheduler concepts that extend scheduler to support other kinds of scheduling operations. For example:
- 
     A time_scheduler concept that extends scheduler to support time-based scheduling, providing access to operations such as schedule_after(sched, duration), schedule_at(sched, time_point) and now(sched). 
- 
     Concepts that extend scheduler 
- 
     Concepts that extend scheduler 
4.3. Senders describe work
A sender is an object that describes work. Senders are similar to futures in existing asynchrony designs, but unlike futures, the work that is being done to arrive at the values they will send is also directly described by the sender object itself. A sender is said to send some values if a receiver connected (see § 5.3 execution::connect) to that sender will eventually receive said values.
The primary defining sender algorithm is § 5.3 execution::connect; this function, however, is not a user-facing API; it is used to facilitate communication between senders and various sender algorithms, but end user code is not expected to invoke it directly.
The way user code is expected to interact with senders is by using sender algorithms. This paper proposes an initial set of such sender algorithms, which are described in § 4.4 Senders are composable through sender algorithms, § 4.12 User-facing sender factories, § 4.13 User-facing sender adaptors, and § 4.14 User-facing sender consumers. For example, here is how a user can create a new sender on a scheduler, attach a continuation to it, and then wait for execution of the continuation to complete:
execution::scheduler auto sch = get_thread_pool().scheduler();

execution::sender auto snd = execution::schedule(sch);
execution::sender auto cont = execution::then(snd, []{
    std::fstream file{ "result.txt" };
    file << compute_result;
});

this_thread::sync_wait(cont);
// at this point, cont has completed execution
4.4. Senders are composable through sender algorithms
Asynchronous programming often departs from traditional code structure and control flow that we are familiar with. A successful asynchronous framework must provide an intuitive story for composition of asynchronous work: expressing dependencies, passing objects, managing object lifetimes, etc.
The true power and utility of senders is in their composability. With senders, users can describe generic execution pipelines and graphs, and then run them on and across a variety of different schedulers. Senders are composed using sender algorithms:
- 
     sender factories, algorithms that take no senders and return a sender. 
- 
     sender adaptors, algorithms that take (and potentially execution::connect) senders and return a sender. 
- 
     sender consumers, algorithms that take (and potentially execution::connect) senders and do not return a sender. (A sketch combining all three kinds follows this list.) 
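For illustration only, here is a minimal pipeline combining one algorithm of each kind; sch is assumed to come from some scheduler, as in the earlier examples:

execution::sender auto begin = execution::schedule(sch);                  // sender factory
execution::sender auto work  = execution::then(begin, []{ return 42; });  // sender adaptor
auto [v] = this_thread::sync_wait(work).value();                          // sender consumer; v == 42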
4.5. Senders can propagate completion schedulers
One of the goals of executors is to support a diverse set of execution contexts, including traditional thread pools, task and fiber frameworks (like HPX and Legion), and GPUs and other accelerators (managed by runtimes such as CUDA or SYCL). On many of these systems, not all execution agents are created equal and not all functions can be run on all execution agents. Having precise control over the execution context used for any given function call being submitted is important on such systems, and the users of standard execution facilities will expect to be able to express such requirements.
[P0443R14] was not always clear about the place of execution of any given piece of code. Precise control was present in the two-way execution API present in earlier executor designs, but it has so far been missing from the senders design. There has been a proposal ([P1897R3]) to provide a number of sender algorithms that would enforce certain rules on the places of execution of the work described by a sender, but we have found those sender algorithms to be insufficient for achieving the best performance on all platforms that are of interest to us. The implementation strategies that we are aware of result in one of the following situations:
- 
     trying to submit work to one execution context (such as a CPU thread pool) from another execution context (such as a GPU or a task framework), which assumes that all execution agents are as capable as a std::thread (which they aren’t); 
- 
     forcibly interleaving two adjacent execution graph nodes that are both executing on one execution context (such as a GPU) with glue code that runs on another execution context (such as a CPU), which is prohibitively expensive for some execution contexts (such as CUDA or SYCL). 
- 
     having to customise most or all sender algorithms to support an execution context, so that you can avoid problems described in 1. and 2, which we believe is impractical and brittle based on months of field experience attempting this in Agency. 
None of these implementation strategies are acceptable for many classes of parallel runtimes, such as task frameworks (like HPX) or accelerator runtimes (like CUDA or SYCL).
Therefore, in addition to the scheduling facilities above, we propose a way for senders to advertise what scheduler (and by extension what execution context) they will complete on. Any given sender may have completion schedulers for some or all of the signals (value, error, or done) it completes with (for more detail on the signals, see § 5.1 Receivers serve as glue between senders and receivers). When further work is attached to that sender by invoking sender algorithms, that work will also complete on an appropriate completion scheduler.
4.5.1. execution::get_completion_scheduler
   
execution::scheduler auto cpu_sched = new_thread_scheduler{};
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto snd0 = execution::schedule(cpu_sched);
execution::scheduler auto completion_sch0 =
    execution::get_completion_scheduler<execution::set_value_t>(snd0);
// completion_sch0 is equivalent to cpu_sched

execution::sender auto snd1 = execution::then(snd0, []{
    std::cout << "I am running on cpu_sched!\n";
});
execution::scheduler auto completion_sch1 =
    execution::get_completion_scheduler<execution::set_value_t>(snd1);
// completion_sch1 is equivalent to cpu_sched

execution::sender auto snd2 = execution::transfer(snd1, gpu_sched);
execution::sender auto snd3 = execution::then(snd2, []{
    std::cout << "I am running on gpu_sched!\n";
});
execution::scheduler auto completion_sch3 =
    execution::get_completion_scheduler<execution::set_value_t>(snd3);
// completion_sch3 is equivalent to gpu_sched
4.6. Execution context transitions are explicit
[P0443R14] does not contain any mechanisms for performing an execution context transition. The only sender algorithm that can create a sender that will move execution to a specific execution context is execution::schedule, which does not take an input sender; there is therefore no way to transition from one execution context to another in the middle of a sender chain.
We propose that, for senders advertising their completion scheduler, all execution context transitions must be explicit; running user code anywhere but where they defined it to run must be considered a bug.
The transfer sender adaptor performs such an explicit transition from one execution context to another:
execution::scheduler auto sch1 = ...;
execution::scheduler auto sch2 = ...;

execution::sender auto snd1 = execution::schedule(sch1);
execution::sender auto then1 = execution::then(snd1, []{
    std::cout << "I am running on sch1!\n";
});

execution::sender auto snd2 = execution::transfer(then1, sch2);
execution::sender auto then2 = execution::then(snd2, []{
    std::cout << "I am running on sch2!\n";
});

this_thread::sync_wait(then2);
4.7. Senders can be either multi-shot or single-shot
Some senders may only support launching their operation a single time, while others may be repeatable and support being launched multiple times. Executing the operation may consume resources owned by the sender.
For example, a sender may contain a std::unique_ptr whose ownership it will transfer to the operation state returned by a call to execution::connect, so that the operation has access to that resource.
A single-shot sender can only be connected to a receiver at most once. Its implementation of execution::connect only has an overload for an rvalue-qualified sender; callers must pass the sender as an rvalue, indicating that the call consumes the sender.
A multi-shot sender can be connected to multiple receivers and can be launched multiple times. Multi-shot senders customise execution::connect to also accept an lvalue reference to the sender.
If the user of a sender does not require the sender to remain valid after connecting it to a receiver then it can pass an rvalue-reference to the sender to the call to execution::connect, allowing the sender to move resources it owns into the returned operation state and avoid otherwise-necessary copies.
If the caller does wish for the sender to remain valid after the call then it can pass an lvalue-qualified sender to the call to execution::connect; such a sender must either copy the resources the resulting operation state needs, or otherwise keep them valid for as long as both the sender and the operation state are alive.
Algorithms that accept senders will typically either decay-copy an input sender and store it somewhere for later usage (for example as a data-member of the returned sender) or will immediately call execution::connect on the input sender, as this_thread::sync_wait and execution::start_detached do.
Some multi-use sender algorithms may require that an input sender be copy-constructible but will only call execution::connect on an rvalue of each copy; other multi-use sender algorithms may instead require that the sender be lvalue-connectable, calling execution::connect on an lvalue reference multiple times.
For a sender to be usable in both multi-use scenarios, it will generally be required to be both copy-constructible and lvalue-connectable.
4.8. Senders are forkable
Any non-trivial program will eventually want to fork a chain of senders into independent streams of work, regardless of whether they are single-shot or multi-shot. For instance, an incoming event to a middleware system may be required to trigger events on more than one downstream system. This requires that we provide well defined mechanisms for making sure that connecting a sender multiple times is possible and correct.
The split sender adaptor facilitates connecting to a sender multiple times, regardless of whether it is single-shot or multi-shot:
auto some_algorithm(execution::sender auto&& input) {
    execution::sender auto multi_shot = split(input);
    // "multi_shot" is guaranteed to be multi-shot,
    // regardless of whether "input" was multi-shot or not

    return when_all(
        then(multi_shot, [] { std::cout << "First continuation\n"; }),
        then(multi_shot, [] { std::cout << "Second continuation\n"; })
    );
}
4.9. Senders are joinable
Just as it’s hard to write a complex program that never wants to fork sender chains into independent streams, it’s also hard to write one that never wants to create join nodes, where multiple independent streams of execution are merged into a single one in an asynchronous fashion.
4.10. Schedulers advertise their forward progress guarantees
To decide whether a scheduler (and its associated execution context) is sufficient for a specific task, it may be necessary to know what kind of forward progress guarantees it provides for the execution agents it creates. The C++ Standard defines the following forward progress guarantees:
- 
     concurrent, which requires that a thread makes progress eventually; 
- 
     parallel, which requires that a thread makes progress once it executes a step; and 
- 
     weakly parallel, which does not require that the thread makes progress. 
This paper introduces a scheduler query function, get_forward_progress_guarantee, which returns one of the enumerators of a new enum type, execution::forward_progress_guarantee; each enumerator corresponds to one of the guarantees listed above.
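A sketch of how user code might consult this query; the enumerator name shown is an assumption based on the guarantees listed above, and the scheduler source is illustrative:

execution::scheduler auto sch = get_thread_pool().scheduler();
if (execution::get_forward_progress_guarantee(sch)
        == execution::forward_progress_guarantee::concurrent) {
    // safe to run work that relies on concurrent forward progress of its agents
}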
4.11. Most sender adaptors are pipeable
To facilitate an intuitive syntax for composition, most sender adaptors are pipeable; they can be composed (piped) together with operator|, similarly to C++ range adaptors. A pipeable adaptor invoked without its sender argument returns an object that can later be applied to a sender with operator|; the following three expressions are therefore equivalent:
execution::bulk(snd, N, [] (std::size_t i, auto d) {});
execution::bulk(N, [] (std::size_t i, auto d) {})(snd);
snd | execution::bulk(N, [] (std::size_t i, auto d) {});
Piping enables you to compose together senders with a linear syntax. Without it, you’d have to use either nested function call syntax, which would cause a syntactic inversion of the direction of control flow, or you’d have to introduce a temporary variable for each stage of the pipeline. Consider the following example where we want to execute first on a CPU thread pool, then on a CUDA GPU, then back on the CPU thread pool:
| Syntax Style | Example |
|---|---|
| Function call (nested) | see the sketch following this table |
| Function call (named temporaries) | see the sketch following this table |
| Pipe | see the sketch following this table |
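The following sketch illustrates the three styles for that CPU → GPU → CPU pipeline; the schedulers and the work lambdas (cpu_work, gpu_work, more_cpu_work) are assumptions for illustration:

// Function call (nested)
auto snd1 = execution::then(
                execution::transfer(
                    execution::then(
                        execution::transfer(
                            execution::then(execution::schedule(cpu_sched), cpu_work),
                            gpu_sched),
                        gpu_work),
                    cpu_sched),
                more_cpu_work);

// Function call (named temporaries)
auto s1 = execution::schedule(cpu_sched);
auto s2 = execution::then(s1, cpu_work);
auto s3 = execution::transfer(s2, gpu_sched);
auto s4 = execution::then(s3, gpu_work);
auto s5 = execution::transfer(s4, cpu_sched);
auto snd2 = execution::then(s5, more_cpu_work);

// Pipe
auto snd3 = execution::schedule(cpu_sched)
          | execution::then(cpu_work)
          | execution::transfer(gpu_sched)
          | execution::then(gpu_work)
          | execution::transfer(cpu_sched)
          | execution::then(more_cpu_work);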
Certain sender adaptors are not pipeable, because using the pipeline syntax for them could result in confusion about the semantics of the adaptors involved. Specifically, the following sender adaptors are not pipeable.
- 
     execution::when_all and execution::when_all_with_variant: since these adaptors take a variadic pack of senders, a partially applied (pipeable) form would be ambiguous with a non-partially-applied form of one fewer arity. 
- 
     execution::on and execution::lazy_on: piping a sender into on (snd | on(sched)) reads as if it changed where subsequent work runs, like transfer does, when it actually changes where the input sender snd itself runs; the function call syntax avoids this confusion. 
Sender consumers could be made pipeable, but we have chosen not to do so: since they are terminal nodes in a pipeline and nothing can be piped after them, a pipe syntax would be unnecessary and potentially confusing. We believe sender consumers read better with function call syntax.
4.12. User-facing sender factories
A sender factory is an algorithm that takes no senders as parameters and returns a sender.
4.12.1. execution::schedule
execution::sender auto schedule(execution::scheduler auto scheduler);
Returns a sender describing the start of a task graph on the provided scheduler. See § 4.2 Schedulers represent execution contexts.
execution::scheduler auto sch1 = get_system_thread_pool().scheduler();

execution::sender auto snd1 = execution::schedule(sch1);
// snd1 describes the creation of a new task on the system thread pool
4.12.2. execution::just
execution::sender auto just(auto&&... values);
Returns a sender with no completion schedulers, which sends the provided values. If a provided value is an lvalue reference, a copy is made inside the returned sender and a non-const lvalue reference to the copy is sent. If the provided value is an rvalue reference, it is moved into the returned sender and an rvalue reference to it is sent.
execution::sender auto snd1 = execution::just(3.14);
execution::sender auto then1 = execution::then(snd1, [] (double d) {
    std::cout << d << "\n";
});

execution::sender auto snd2 = execution::just(3.14, 42);
execution::sender auto then2 = execution::then(snd2, [] (double d, int i) {
    std::cout << d << ", " << i << "\n";
});

std::vector v3{1, 2, 3, 4, 5};
execution::sender auto snd3 = execution::just(v3);
execution::sender auto then3 = execution::then(snd3, [] (std::vector<int>& v3copy) {
    for (auto&& e : v3copy) { e *= 2; }
    return v3copy;
});
auto&& [v3copy] = this_thread::sync_wait(then3).value();
// v3 contains {1, 2, 3, 4, 5}; v3copy will contain {2, 4, 6, 8, 10}.

execution::sender auto snd4 = execution::just(std::vector{1, 2, 3, 4, 5});
execution::sender auto then4 = execution::then(snd4, [] (std::vector<int>&& v4) {
    for (auto&& e : v4) { e *= 2; }
    return std::move(v4);
});
auto&& [v4] = this_thread::sync_wait(then4).value();
// v4 contains {2, 4, 6, 8, 10}.
4.12.3. execution::transfer_just
execution::sender auto transfer_just(
    execution::scheduler auto scheduler,
    auto&&... values);
Returns a sender whose value completion scheduler is the provided scheduler, which sends the provided values in the same manner as § 4.12.2 execution::just.
execution::sender auto vals = execution::transfer_just(
    get_system_thread_pool().scheduler(),
    1, 2, 3);
execution::sender auto snd = execution::then(vals, [](auto... args) {
    (std::cout << ... << args);
});
// when snd is executed, it will print "123"
This adaptor is included as it greatly simplifies lifting values into senders.
4.13. User-facing sender adaptors
A sender adaptor is an algorithm that takes one or more senders, which it may execution::connect, as parameters, and returns a sender whose completion is related to the sender arguments it received.
Many sender adaptors come in two versions: a strictly lazy one, which is never allowed to submit any work for execution prior to the returned sender being started later on, and a potentially eager one, which is allowed to submit work prior to the returned sender being started. Algorithms such as § 4.13.11 execution::ensure_started, § 4.14.1 execution::start_detached, and § 4.14.2 this_thread::sync_wait start senders; the implementations of non-lazy versions of the sender adaptors are allowed, but not guaranteed, to start senders.
The strictly lazy versions of the adaptors below (that is, all the versions whose names start with lazy_) are guaranteed to not start any input senders passed into them.
For more implementer-centric description of starting senders, see § 5.5 Laziness is defined by sender adaptors.
4.13.1. execution::transfer
execution::sender auto transfer(
    execution::sender auto input,
    execution::scheduler auto scheduler);

execution::sender auto lazy_transfer(
    execution::sender auto input,
    execution::scheduler auto scheduler);
Returns a sender describing the transition from the execution agent of the input sender to the execution agent of the target scheduler. See § 4.6 Execution context transitions are explicit.
execution::scheduler auto cpu_sched = get_system_thread_pool().scheduler();
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto cpu_task = execution::schedule(cpu_sched);
// cpu_task describes the creation of a new task on the system thread pool

execution::sender auto gpu_task = execution::transfer(cpu_task, gpu_sched);
// gpu_task describes the transition of the task graph described by cpu_task to the gpu
4.13.2. execution::then
execution::sender auto then(
    execution::sender auto input,
    std::invocable<values-sent-by(input)...> function);

execution::sender auto lazy_then(
    execution::sender auto input,
    std::invocable<values-sent-by(input)...> function);
execution::sender auto input = get_input();
execution::sender auto snd = execution::then(input, [](auto... args) {
    (std::cout << ... << args);
});
// snd describes the work described by input,
// followed by printing all of the values sent by input
This adaptor is included as it is necessary for writing any sender code that actually performs a useful function.
4.13.3. execution::upon_*
execution::sender auto upon_error(
    execution::sender auto input,
    std::invocable<errors-sent-by(input)...> function);

execution::sender auto lazy_upon_error(
    execution::sender auto input,
    std::invocable<errors-sent-by(input)...> function);

execution::sender auto upon_done(
    execution::sender auto input,
    std::invocable<> function);

execution::sender auto lazy_upon_done(
    execution::sender auto input,
    std::invocable<> function);
4.13.4. execution::let_*
execution::sender auto let_value(
    execution::sender auto input,
    std::invocable<values-sent-by(input)...> function);

execution::sender auto lazy_let_value(
    execution::sender auto input,
    std::invocable<values-sent-by(input)...> function);

execution::sender auto let_error(
    execution::sender auto input,
    std::invocable<errors-sent-by(input)...> function);

execution::sender auto lazy_let_error(
    execution::sender auto input,
    std::invocable<errors-sent-by(input)...> function);

execution::sender auto let_done(
    execution::sender auto input,
    std::invocable<> function);

execution::sender auto lazy_let_done(
    execution::sender auto input,
    std::invocable<> function);
4.13.5. execution::on
execution::sender auto on(
    execution::scheduler auto sched,
    execution::sender auto snd);

execution::sender auto lazy_on(
    execution::scheduler auto sched,
    execution::sender auto snd);
Returns a sender which, when started, will start the provided sender on an execution agent belonging to the execution context associated with the provided scheduler. This returned sender has no completion schedulers.
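A brief illustrative sketch of the intended usage, under the same assumptions as the earlier examples:

execution::scheduler auto sch = get_thread_pool().scheduler();
execution::sender auto work = execution::then(execution::just(7), [](int i) { return i * 2; });

// when started, runs `work` on an agent belonging to sch’s execution context
execution::sender auto on_pool = execution::on(sch, work);
auto [v] = this_thread::sync_wait(on_pool).value(); // v == 14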
4.13.6. execution::into_variant
execution::sender auto into_variant(execution::sender auto snd);
Returns a sender which sends a variant of tuples of all the possible sets of types sent by the input sender. Senders can send multiple sets of values depending on runtime conditions; this is a helper function that turns them into a single variant value.
4.13.7. execution::bulk
execution::sender auto bulk(
    execution::sender auto input,
    std::integral auto size,
    invocable<decltype(size), values-sent-by(input)...> function);

execution::sender auto lazy_bulk(
    execution::sender auto input,
    std::integral auto size,
    invocable<decltype(size), values-sent-by(input)...> function);
Returns a sender describing the task of invoking the provided function with the values sent by the input sender for every index in the provided shape.
In this paper, only integral types satisfy the concept of a shape, but future papers will explore bulk shapes of different kinds in more detail.
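For illustration, a sketch of using bulk with an integral shape to fill a vector; sch is assumed to be a scheduler obtained as in the earlier examples:

std::vector<int> squares(16);
execution::sender auto work = execution::bulk(
    execution::schedule(sch), squares.size(),
    [&squares](std::size_t i) { squares[i] = static_cast<int>(i * i); });
this_thread::sync_wait(std::move(work)); // squares now holds 0, 1, 4, 9, ...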
4.13.8. execution::split
execution::sender auto split(execution::sender auto sender);
execution::sender auto lazy_split(execution::sender auto sender);
If the provided sender is a multi-shot sender, returns that sender. Otherwise, returns a multi-shot sender which sends values equivalent to the values sent by the provided sender. See § 4.7 Senders can be either multi-shot or single-shot.
4.13.9. execution::when_all
execution::sender auto when_all(execution::sender auto... inputs);
execution::sender auto when_all_with_variant(execution::sender auto... inputs);
The returned sender has no completion schedulers.
See § 4.9 Senders are joinable.
execution::scheduler auto sched = get_thread_pool().scheduler();

execution::sender auto sends_1 = ...;
execution::sender auto sends_abc = ...;

execution::sender auto both = execution::when_all(sends_1, sends_abc);

execution::sender auto final = execution::then(both, [](auto... args) {
    std::cout << std::format("the two args: {}, {}", args...);
});
// when final executes, it will print "the two args: 1, abc"
4.13.10. execution::transfer_when_all
execution::sender auto transfer_when_all(
    execution::scheduler auto sched,
    execution::sender auto... inputs);

execution::sender auto transfer_when_all_with_variant(
    execution::scheduler auto sched,
    execution::sender auto... inputs);

execution::sender auto lazy_transfer_when_all(
    execution::scheduler auto sched,
    execution::sender auto... inputs);

execution::sender auto lazy_transfer_when_all_with_variant(
    execution::scheduler auto sched,
    execution::sender auto... inputs);
Similar to § 4.13.9 execution::when_all, but returns a sender whose value completion scheduler is the provided scheduler.
See § 4.9 Senders are joinable.
4.13.11. execution::ensure_started
execution::sender auto ensure_started(execution::sender auto sender);
Once ensure_started returns, it is known that the provided sender has been connected and start has been called on the resulting operation state (see § 5.2 Operation states represent work); in other words, the work described by the provided sender has been submitted for execution on the appropriate execution contexts. Returns a sender which completes when the provided sender completes, and sends values equivalent to those of the provided sender.
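An illustrative sketch; compute and do_other_work are assumed placeholders, and sch is a scheduler as in the earlier examples:

execution::sender auto task =
    execution::then(execution::schedule(sch), []{ return compute(); });

// eagerly submits the work described by task
execution::sender auto started = execution::ensure_started(std::move(task));

do_other_work(); // runs concurrently with the submitted work

auto [result] = this_thread::sync_wait(std::move(started)).value();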
4.14. User-facing sender consumers
A sender consumer is an algorithm that takes one or more senders, which it may execution::connect, as parameters, and does not return a sender.
4.14.1. execution::start_detached
void start_detached(execution::sender auto sender);
Like execution::ensure_started, but does not return a value; if the provided sender sends an error instead of a value, std::terminate is called.
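A sketch of fire-and-forget usage; write_log_entry is an assumed placeholder, and sch a scheduler as before:

execution::start_detached(
    execution::then(execution::schedule(sch), []{ write_log_entry(); }));
// the caller continues immediately; the detached work completes on its own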
4.14.2. this_thread::sync_wait
auto sync_wait(execution::sender auto sender)
    requires (always-sends-same-values(sender))
    -> std::optional<std::tuple<values-sent-by(sender)>>;
If the provided sender sends an error instead of values, sync_wait throws that error as an exception, or rethrows the original exception if the error is of type std::exception_ptr.
If the provided sender sends the "done" signal instead of values, sync_wait returns an empty optional.
For an explanation of the requires clause, see § 5.8 Most senders are typed. That section also describes another sender consumer, built on top of sync_wait: sync_wait_with_variant.
Note: This function is specified inside std::this_thread, and not inside execution, because it blocks the current thread of execution while waiting for the provided sender to complete.
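A sketch of handling all three completion outcomes at the call site, under the same assumptions as the earlier examples:

try {
    auto result = this_thread::sync_wait(
        execution::then(execution::schedule(sch), []{ return 42; }));
    if (result) {
        auto [v] = *result;   // the sender completed with a value
    } else {
        // the sender completed with the done signal
    }
} catch (...) {
    // the sender completed with an error, which sync_wait rethrows
}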
4.15. execution::execute
   In addition to the three categories of functions presented above, we also propose to include a convenience function for fire-and-forget eager one-way submission of an invocable to a scheduler, to fulfil the role of one-way executors from P0443.
void execution::execute(execution::scheduler auto sched, std::invocable auto fn);
Submits the provided function for execution on the provided scheduler, as-if by:
auto snd = execution::schedule(sched);
auto work = execution::then(snd, fn);
execution::start_detached(work);
5. Design - implementer side
5.1. Receivers serve as glue between senders
A receiver is a callback that supports more than one channel. In fact, it supports three of them:
- 
     set_value, which is the moral equivalent of an operator() or a function call, and which signals successful completion of the operation its execution depends on; 
- 
     set_error, which signals that an error has happened during scheduling of the current work, executing the current work, or at some earlier point in the sender chain; and 
- 
     set_done, which signals that the operation completed without succeeding (set_value) and without failing (set_error); this is typically used to indicate that the operation stopped early because its result is no longer needed. 
Exactly one of these channels must be successfully (i.e. without an exception being thrown) invoked on a receiver before it is destroyed; if a call to set_value fails with an exception, either set_error or set_done must then be invoked on the same receiver. These requirements are referred to as the receiver contract.
While the receiver interface may look novel, it is in fact very similar to the interface of std::promise, which provides the first two channels as set_value and set_exception; the third channel can be emulated with lifetime management of the promise.
Receivers are not a part of the end-user-facing API of this proposal; they are necessary to allow unrelated senders and receivers to communicate with each other, but the only users who will interact with receivers directly are authors of senders.
Receivers are what is passed as the second argument to § 5.3 execution::connect.
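To make the shape of the interface concrete, here is a minimal sketch of a receiver for a sender of int, written against the tag_invoke-based customization scheme described in § 5.9 Ranges-style CPOs vs tag_invoke; the error and done channels are handled trivially here:

struct print_receiver {
    // value channel: called with the value(s) the connected sender sends
    friend void tag_invoke(std::execution::set_value_t, print_receiver&&, int v) noexcept {
        std::cout << "received " << v << "\n";
    }
    // error channel: must not throw
    friend void tag_invoke(std::execution::set_error_t, print_receiver&&, std::exception_ptr) noexcept {
        std::terminate();
    }
    // done channel: must not throw
    friend void tag_invoke(std::execution::set_done_t, print_receiver&&) noexcept {}
};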
5.2. Operation states represent work
An operation state is an object that represents work. Unlike senders, it is not a chaining mechanism; instead, it is a concrete object that packages the work described by a full sender chain, ready to be executed. An operation state is neither movable nor
copyable, and its interface consists of a single algorithm: start, which serves as the submission point of the work represented by a given operation state.
Operation states are not a part of the user-facing API of this proposal; they are necessary for implementing sender consumers like this_thread::sync_wait, and knowledge of them is needed to implement senders, so the only users who will interact with operation states directly are authors of senders and of sender algorithms.
The return value of § 5.3 execution::connect must satisfy the operation state concept.
5.3. execution::connect
   
execution::sender auto snd = some input sender;
execution::receiver auto rcv = some receiver;
execution::operation_state auto state = execution::connect(snd, rcv);

execution::start(state);
// at this point, it is guaranteed that the work represented by state has been submitted
// to an execution context, and that execution context will eventually fulfill the
// receiver contract of rcv

// operation states are not movable, and therefore this operation state object must be
// kept alive until the operation finishes
5.4. Sender algorithms are customizable
Senders being able to advertise what their completion schedulers are fulfills one of the promises of senders: that of being able to customize an implementation of a sender algorithm based on what scheduler any work it depends on will complete on.
The simple way to provide customizations for functions like then, following the precedent set by C++ ranges, would be to define the expression then(sender, invocable) to dispatch, in this order, to:
- 
     sender.then(invocable), if that expression is valid; otherwise 
- 
     then(sender, invocable), found via argument-dependent lookup, if that expression is valid; otherwise 
- 
     a default implementation of then, built on top of more primitive operations such as execution::connect. 
However, this definition is problematic. Imagine another sender adaptor, bulk, which is a structured abstraction for a loop over an index space. Its default implementation is in terms of then, but for an accelerator runtime we would like to customize it to launch a kernel directly. Consider the following example:
execution::scheduler auto cuda_sch = cuda_scheduler{};

execution::sender auto initial = execution::schedule(cuda_sch);
// the type of initial is a type defined by the cuda_scheduler
// let’s call it cuda::schedule_sender<>

execution::sender auto next = execution::then(initial, []{ return 1; });
// the type of next is a standard-library implementation-defined sender adaptor
// that wraps the cuda sender
// let’s call it execution::then_sender_adaptor<cuda::schedule_sender<>>

execution::sender auto kernel_sender = execution::bulk(next, shape, [](int i){ ... });
How can we specialize the bulk algorithm for our wrapped schedule_sender? One option is to define a free function bulk in the namespace of the CUDA senders, to be found by argument-dependent lookup:
namespace cuda::for_adl_purposes {
template<typename... SentValues>
class schedule_sender {
    execution::operation_state auto connect(execution::receiver auto rcv);
    execution::scheduler auto get_completion_scheduler() const;
};

execution::sender auto bulk(
    execution::sender auto&& input,
    execution::shape auto&& shape,
    invocable<sender-values(input)> auto&& fn)
{
    // return a cuda sender representing a bulk kernel launch
}
} // namespace cuda::for_adl_purposes
However, if the input sender is not just a schedule_sender, but a generic sender adaptor wrapped around it - like the then_sender_adaptor<cuda::schedule_sender<>> above - the ADL-based customization will not be found, because the wrapping sender’s type does not live in namespace cuda::for_adl_purposes.
This means that well-meant specialization of sender algorithms that are entirely scheduler-agnostic can have negative consequences. The scheduler-specific specialization - which is essential for good performance on platforms providing specialized ways to launch certain sender algorithms - would not be selected in such cases. But it’s really the scheduler that should control the behavior of sender algorithms when a non-default implementation exists, not the sender. Senders merely describe work; schedulers, however, are the handle to the runtime that will eventually execute said work, and should thus have the final say in how the work is going to be executed.
Therefore, we are proposing the following customization scheme (also modified to take § 5.9 Ranges-style CPOs vs tag_invoke into account): the expression execution::<sender-algorithm>(sender, args...), for any sender algorithm that accepts a sender as its first argument, should be equivalent to:
- 
     tag_invoke(<sender-algorithm>, get_completion_scheduler<Signal>(sender), sender, args...), if that expression is valid; otherwise 
- 
     tag_invoke(<sender-algorithm>, sender, args...), if that expression is valid; otherwise 
- 
     a default implementation, if there exists a default implementation of the given sender algorithm. 
where Signal is the completion signal relevant for the given sender algorithm: for most algorithms it is set_value, while algorithms dealing explicitly with errors or cancellation (such as upon_error or let_done) use the completion scheduler of the corresponding signal.
For sender algorithms which accept concepts other than sender as their first argument, we propose that the customization scheme remain as it is in [P0443R14], except that it, too, should be expressed in terms of tag_invoke.
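For illustration, a sketch of how the cuda scheduler from the earlier example might hook into step 1 of this scheme; the cuda names and the kernel-launching sender type are assumptions:

namespace cuda {
    // chosen whenever the input sender’s value completion scheduler is a cuda::scheduler,
    // regardless of the concrete sender type wrapping it
    execution::sender auto tag_invoke(std::tag_t<execution::bulk>,
                                      cuda::scheduler sch,
                                      execution::sender auto&& input,
                                      std::integral auto shape,
                                      auto&& fn)
    {
        // return a cuda sender representing a bulk kernel launch of fn over [0, shape)
        return cuda::bulk_kernel_sender{sch, std::forward<decltype(input)>(input), shape,
                                        std::forward<decltype(fn)>(fn)}; // hypothetical type
    }
}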
5.5. Laziness is defined by sender adaptors
We distinguish two different guarantees about when work is submitted to an execution context:
- 
     strictly lazy submission, which means that there is a guarantee that no work is submitted to an execution context before a receiver is connected to a sender and execution::start is called on the resulting operation state; and 
- 
     potentially eager submission, which means that work may be submitted to an execution context as soon as all the information necessary to perform it is provided. 
If a sender adaptor requires potentially eager submission, strictly lazy submission is acceptable as an implementation, because it does fulfill the potentially eager guarantee. This is why the default implementations for the non-strictly-lazy sender adaptors are specified to dispatch to the strictly lazy ones; for an author of a specific sender, it is sufficient to specialize the strictly lazy version, to also achieve a specialization of the potentially eager one.
As has been described in § 4.13 User-facing sender adaptors, whether a sender adaptor is guaranteed to perform strictly lazy submission or not is defined by the specific adaptor used; the adaptors whose names begin with lazy_ provide the strictly lazy guarantee.
5.6. Lazy senders provide optimization opportunities
Because lazy senders fundamentally describe work, instead of describing or representing the submission of said work to an execution context, and thanks to the flexibility of the customization of most sender algorithms, they provide an opportunity for fusing multiple algorithms in a sender chain together, into a single function that can later be submitted for execution by an execution context. There are two ways this can happen.
The first (and most common) way for such optimizations to happen is thanks to the structure of the implementation: because all the work is done within callbacks invoked on the completion of an earlier sender, recursively up to the original source of computation, the compiler is able to see a chain of work described using senders as a tree of tail calls, allowing for inlining and removal of most of the sender machinery. In fact, when work is not submitted to execution contexts outside of the current thread of execution, compilers are capable of removing the senders abstraction entirely, while still allowing for composition of functions across different parts of a program.
The second way for this to occur is when a sender algorithm is specialized for a specific set of arguments. For instance, we expect that, for senders which are known to have been started already, § 4.13.11 execution::ensure_started will be an identity transformation, because the sender algorithm will be specialized for such senders. Similarly, an implementation could recognize two subsequent lazy § 4.13.7 execution::bulks of compatible shapes, and merge them together into a single submission of a GPU kernel.
5.7. Execution context transitions are two-step
Because execution::transfer takes a sender as its first argument, it is not directly customizable by the target scheduler. This is by design: the target scheduler may not know how to transition away from a context such as a GPU; doing so efficiently requires runtime calls specific to that context, and the same is true for many other accelerators and remote systems.
This, however, is a problem: because customization of sender algorithms must be controlled by the scheduler they will run on (see § 5.4 Sender algorithms are customizable), the type of the sender returned from transfer is controlled by the scheduler of the input sender, yet the target scheduler also needs a way to customize how work is accepted onto its execution context.
To allow for such customization from both ends, we propose the inclusion of a secondary transitioning sender adaptor, called schedule_from. This adaptor is not meant to be used manually by end users; it is customized by target schedulers and used by the implementation of transfer to perform the transition.
The default implementation of transfer(snd, sched) is schedule_from(sched, snd).
5.8. Most senders are typed
All senders should advertise the types they will send when they complete. This is necessary for a number of features, and writing code in a way that’s agnostic of whether an input sender is typed or not in common sender adaptors such as execution::let_value is hard.
The mechanism for this advertisement is the same as in [P0443R14]; the way to query the types is through sender_traits::value_types and sender_traits::error_types.
There’s a choice made in the specification of § 4.14.2 this_thread::sync_wait: it returns a tuple of values sent by the sender passed to it, wrapped in std::optional to handle the done signal. This assumes that the values the sender sends can be represented as a single tuple, as here:
execution::sender auto sends_1 = ...;
execution::sender auto sends_2 = ...;
execution::sender auto sends_3 = ...;

auto [a, b, c] = this_thread::sync_wait(
    execution::transfer_when_all(
        execution::get_completion_scheduler<execution::set_value_t>(sends_1),
        sends_1,
        sends_2,
        sends_3
    )).value();
// a == 1
// b == 2
// c == 3
This works well for senders that always send the same set of arguments. If we ignore the possibility of a sender sending different sets of arguments into a receiver, we can specify the "canonical" (i.e. required to be followed by all senders) form of value_types for a sender which sends Types... to be as follows:
template<template<typename...> typename TupleLike>
using value_types = TupleLike<Types...>;
If senders could only ever send one specific set of values, this would probably need to be the required form of value_types for all senders; defining it otherwise would cause very weird results and should be considered a bug.
This matter is somewhat complicated by the fact that (1) set_value for receivers can be overloaded and accept different sets of arguments, and (2) senders are allowed to send multiple different sets of values, depending on runtime conditions. To accommodate this, [P0443R14] gives value_types a second template parameter representing a variant-like type; the canonical form then becomes:
template<template<typename...> typename TupleLike,
         template<typename...> typename VariantLike>
using value_types = VariantLike<
    TupleLike<Types1...>,
    TupleLike<Types2...>,
    ...,
    TupleLike<TypesN...>
>;
This, however, introduces a couple of complications:
- 
     A just(1) sender would also need to follow this structure, so the correct type for storing the value it sends is std::variant<std::tuple<int>> or some such. This introduces compile-time overhead for even the simplest senders, and that overhead exists everywhere value_types is queried, regardless of the tuple-like and variant-like templates passed to it. 
- 
     As a consequence of (1): because sync_wait should return a std::tuple<int> when passed just(1), it would have to unwrap the std::variant<std::tuple<int>>; but in the general case a sender may send many different sets of values, in which case it is unclear what sync_wait should return. 
One possible solution to (2) above is to place a requirement on sync_wait that it only accept senders which always send a single set of values, removing the need for std::variant to appear in its API. We therefore propose to expose both sync_wait, which is the simple, user-friendly version restricted to such senders, and sync_wait_with_variant, which accepts any typed sender and returns an optional whose value type is the variant of all the possible tuples sent by the input sender:
auto sync_wait_with_variant(execution::sender auto sender)
    -> std::optional<std::variant<
           std::tuple<values0-sent-by(sender)>,
           std::tuple<values1-sent-by(sender)>,
           ...,
           std::tuple<valuesn-sent-by(sender)>
       >>;

auto sync_wait(execution::sender auto sender)
    requires (always-sends-same-values(sender))
    -> std::optional<std::tuple<values-sent-by(sender)>>;
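For illustration, a sketch of how a consumer can instantiate the variant form of value_types shown above to compute a concrete storage type; the alias name here is hypothetical:

template<execution::typed_sender S>
using all_value_sets_of_t =
    typename execution::sender_traits<std::remove_cvref_t<S>>::
        template value_types<std::tuple, std::variant>;

// for execution::just(1), this denotes std::variant<std::tuple<int>>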
5.9. Ranges-style CPOs vs tag_invoke 
The contemporary technique for customization in the Standard Library is customization point objects. A customization point object, when invoked, looks for member functions and then for non-member functions with the same name as the customization point, and calls those if they match. This is the technique used by the C++20 ranges library, and previous executors proposals ([P0443R14] and [P1897R3]) intended to use it as well. However, it has several unfortunate consequences:
- 
     It does not allow for easy propagation of customization points unknown to the adaptor to a wrapped object, which makes writing universal adapter types much harder - and this proposal uses quite a lot of those. 
- 
     It effectively reserves names globally. Because neither member names nor ADL-found functions can be qualified with a namespace, every customization point object that uses the ranges scheme reserves the name for all types in all namespaces. This is unfortunate due to the sheer number of customization points already in the paper, as well as ones that we are envisioning in the future. It’s also a big problem for one of the operations being proposed already: sync_wait. If C++ were to gain fibers support in the future, we would want a std::this_fiber::sync_wait in addition to std::this_thread::sync_wait; with the ranges scheme, both would have to be customized through the same globally reserved name. 
This paper proposes to instead use the mechanism described in [P1895R0]: tag_invoke; the wording for tag_invoke has been incorporated into the proposed specification in this paper (see § 7.1.2).
In short, instead of using globally reserved names, tag_invoke uses the type of the customization point object itself as the mechanism for finding customizations. It globally reserves only a single name - tag_invoke - which is itself used in the ranges style; all other customization points are defined in terms of it. For example, the customization for execution::bulk(sender, shape, fn) calls tag_invoke(execution::bulk, sender, shape, fn) rather than looking up a globally reserved bulk.
Using tag_invoke has the following benefits:
- 
     It reserves only a single global name, instead of reserving a global name for every customization point object we define. 
- 
     It is possible to propagate customizations to a subobject, because the information of which customization point is being resolved is in the type of an argument, and not in the name of the function: 

     // forward most customizations to a subobject
     template<typename Tag, typename... Args>
     friend auto tag_invoke(Tag&& tag, wrapper& self, Args&&... args) {
         return std::forward<Tag>(tag)(self.subobject, std::forward<Args>(args)...);
     }

     // but override one of them with a specific value
     friend auto tag_invoke(specific_customization_point_t, wrapper& self) {
         return self.some_value;
     }
- 
     It is possible to pass those as template arguments to types, because the information of which customization point is being resolved is in the type. Similarly to how [P0443R14] defines a polymorphic executor wrapper which accepts a list of properties it supports, we can imagine scheduler and sender wrappers that accept a list of queries and operations they support. That list can contain the types of the customization point objects, and the polymorphic wrappers can then specialize those customization points on themselves using tag_invoke, dispatching to manually constructed vtables of implementations for the wrapped objects. For an example of such a polymorphic wrapper, see unifex::any_unique. 
6. Specification
Much of this wording follows the wording of [P0443R14].
§ 7 General utilities library [utilities] is meant to be a diff relative to the wording of the [utilities] clause of [N4885]. This diff applies changes from [P1895R0].
§ 8 Thread support library [thread] is meant to be a diff relative to the wording of the [thread] clause of [N4885]. This diff applies changes from [P2175R0].
§ 9 Execution control library [execution] is meant to be added as a new library clause to the working draft of C++.
7. General utilities library [utilities]
7.1. Function objects [function.objects]
7.1.1. Header <functional>
At the end of this subclause, insert the following declarations into the synopsis within namespace std:
// [func.tag_invoke], tag_invoke
inline namespace unspecified {
  inline constexpr unspecified tag_invoke = unspecified;
}

template<auto& Tag>
  using tag_t = decay_t<decltype(Tag)>;

template<class Tag, class... Args>
  concept tag_invocable =
    invocable<decltype(tag_invoke), Tag, Args...>;

template<class Tag, class... Args>
  concept nothrow_tag_invocable =
    tag_invocable<Tag, Args...> &&
    is_nothrow_invocable_v<decltype(tag_invoke), Tag, Args...>;

template<class Tag, class... Args>
  using tag_invoke_result = invoke_result<decltype(tag_invoke), Tag, Args...>;

template<class Tag, class... Args>
  using tag_invoke_result_t = invoke_result_t<decltype(tag_invoke), Tag, Args...>;
7.1.2. std::tag_invoke
Insert this section as a new subclause, between Searchers [func.search] and Class template hash [unord.hash].
The name std::tag_invoke denotes a customization point object. For some subexpressions tag and args..., std::tag_invoke(tag, args...) is expression-equivalent to an unqualified call to tag_invoke(decay-copy(tag), args...) with overload resolution performed in a context that includes the declaration

void tag_invoke();

and that does not include the std::tag_invoke name.
8. Thread support library [thread]
Note: The specification in this section is incomplete; it does not provide an API specification for the new types added into <stop_token>.
8.1. Stop tokens [thread.stoptoken]
8.1.1. Header <stop_token>
At the beginning of this subclause, insert the following declarations into the synopsis within namespace std:
template<template<typename> class>
  struct check-type-alias-exists; // exposition-only

template<typename T>
  concept stoppable_token = see-below;

template<typename T, typename CB, typename Initializer = CB>
  concept stoppable_token_for = see-below;

template<typename T>
  concept unstoppable_token = see-below;
At the end of this subclause, insert the following declarations into the synopsis within namespace std:
// [stoptoken.never], class never_stop_token
class never_stop_token;

// [stoptoken.inplace], class in_place_stop_token
class in_place_stop_token;

// [stopsource.inplace], class in_place_stop_source
class in_place_stop_source;

// [stopcallback.inplace], class template in_place_stop_callback
template<typename Callback>
  class in_place_stop_callback;
8.1.2. Stop token concepts [thread.stoptoken.concepts]
Insert this section as a new subclause between Header <stop_token> synopsis [thread.stoptoken.syn] and Class stop_token [stoptoken].
The stoppable_token concept checks for the basic interface of a “stop token” which is copyable and allows polling to see if stop has been requested and also whether a stop request is possible. It also requires an associated nested template-type-alias, T::callback_type<CB>, that identifies the stop-callback type to use to register a callback to be executed if a stop-request is ever made on a stoppable_token of type T. The stoppable_token_for concept checks for a stop token type compatible with a given callback type. The unstoppable_token concept checks for a stop token type that does not allow stopping.

template<typename T>
  concept stoppable_token =
    copy_constructible<T> &&
    move_constructible<T> &&
    is_nothrow_copy_constructible_v<T> &&
    is_nothrow_move_constructible_v<T> &&
    equality_comparable<T> &&
    requires (const T& token) {
      { token.stop_requested() } noexcept -> boolean-testable;
      { token.stop_possible() } noexcept -> boolean-testable;
      typename check-type-alias-exists<T::template callback_type>;
    };

template<typename T, typename CB, typename Initializer = CB>
  concept stoppable_token_for =
    stoppable_token<T> &&
    invocable<CB> &&
    requires {
      typename T::template callback_type<CB>;
    } &&
    constructible_from<CB, Initializer> &&
    constructible_from<typename T::template callback_type<CB>, T, Initializer> &&
    constructible_from<typename T::template callback_type<CB>, T&, Initializer> &&
    constructible_from<typename T::template callback_type<CB>, const T, Initializer> &&
    constructible_from<typename T::template callback_type<CB>, const T&, Initializer>;

template<typename T>
  concept unstoppable_token =
    stoppable_token<T> &&
    requires {
      { T::stop_possible() } -> boolean-testable;
    } &&
    (!T::stop_possible());
Let t and u be distinct objects of type T. The type T models stoppable_token only if:
- 
     All copies of a stoppable_token reference the same logical shared stop state and shall report values consistent with each other.
- 
     If t.stop_possible() evaluates to false then, if u references the same logical shared stop state, u.stop_possible() shall also subsequently evaluate to false and u.stop_requested() shall also subsequently evaluate to false.
- 
     If t.stop_requested() evaluates to true then, if u references the same logical shared stop state, u.stop_requested() shall also subsequently evaluate to true and u.stop_possible() shall also subsequently evaluate to true.
- 
     Given a callback-type, CB, and a callback-initializer argument, init, of type Initializer, constructing an instance, cb, of type T::callback_type<CB>, passing t as the first argument and init as the second argument to the constructor, shall, if t.stop_possible() is true, construct an instance, callback, of type CB, direct-initialized with init, and register callback with t’s shared stop state such that callback will be invoked with an empty argument list if a stop request is made on the shared stop state.
- 
     If t.stop_requested() is true at the time callback is registered then callback may be invoked immediately inline inside the call to cb’s constructor.
- 
     If callback is invoked then, if u references the same shared stop state as t, an evaluation of u.stop_requested() will be true if the beginning of the invocation of callback strongly-happens-before the evaluation of u.stop_requested().
- 
     If t.stop_possible() evaluates to false then the construction of cb is not required to construct and initialize callback.
- 
     Construction of a T::callback_type<CB> instance shall only throw exceptions thrown by the initialization of the CB instance from the value of type Initializer.
- 
     Destruction of the T::callback_type<CB> object, cb, removes callback from the shared stop state such that callback will not be invoked after the destructor returns.
- 
     If callback is currently being invoked on another thread then the destructor of cb will block until the invocation of callback returns such that the return from the invocation of callback strongly-happens-before the destruction of callback.
- 
     Destruction of a callback cb shall not block on the completion of the invocation of some other callback registered with the same shared stop state.
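Non-normative example: the semantics above mirror the behavior of the C++20 std::stop_token facilities, which are used below purely to illustrate shared stop state, stop requests, and callback registration; the proposed stoppable_token concept generalizes this shape.

#include <cassert>
#include <stop_token>

int main() {
  std::stop_source src;                   // owns the shared stop state
  std::stop_token tok = src.get_token();  // copies reference the same state
  assert(tok.stop_possible());
  assert(!tok.stop_requested());

  bool invoked = false;
  // Register a callback: it runs when (or if) a stop request is made.
  std::stop_callback cb(tok, [&] { invoked = true; });

  src.request_stop();                     // makes stop_requested() true
  assert(tok.stop_requested());
  assert(invoked);                        // callback ran during request_stop()

  // Registering after the request: the callback runs immediately, inline
  // inside the stop_callback constructor.
  bool late = false;
  std::stop_callback cb2(tok, [&] { late = true; });
  assert(late);
}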
9. Execution control library [execution]
- 
     This Clause describes components supporting execution of function objects [function.objects]. 
- 
     The following subclauses describe the requirements, concepts, and components for execution control primitives as summarized in Table 1. 
| | Subclause | Header |
| One-way execution | [execution.execute] | <execution> |
9.1. Header < execution > 
namespace std :: execution { // [execution.helpers], helper concepts template < class T > concept moveable - value = see - below ; // exposition only // [execution.schedulers], schedulers template < class S > concept scheduler = see - below ; // [execution.schedulers.queries], scheduler queries enum class forward_progress_guarantee ; inline namespace unspecified { struct get_forward_progress_guarantee_t ; inline constexpr get_forward_progress_guarantee_t get_forward_progress_guarantee {}; } } namespace std :: this_thread { inline namespace unspecified { struct execute_may_block_caller_t ; inline constexpr execute_may_block_caller_t execute_may_block_caller {}; } } namespace std :: execution { // [execution.receivers], receivers template < class T , class E = exception_ptr > concept receiver = see - below ; template < class T , class ... An > concept receiver_of = see - below ; inline namespace unspecified { struct set_value_t ; inline constexpr set_value_t set_value {}; struct set_error_t ; inline constexpr set_error_t set_error {}; struct set_done_t ; inline constexpr set_done_t set_done {}; } // [execution.receivers.queries], receiver queries inline namespace unspecified { struct get_scheduler_t ; inline constexpr get_scheduler_t get_scheduler {}; struct get_allocator_t ; inline constexpr get_allocator_t get_allocator {}; struct get_stop_token_t ; inline constexpr get_stop_token_t get_stop_token {}; } // [execution.op_state], operation states template < class O > concept operation_state = see - below ; inline namespace unspecified { struct start_t ; inline constexpr start_t start {}; } // [execution.senders], senders template < class S > concept sender = see - below ; template < class S , class R > concept sender_to = see - below ; template < class S > concept has - sender - types = see - below ; // exposition only template < class S > concept typed_sender = see - below ; template < class ... Ts > struct type - list = see - below ; // exposition only template < class S , class ... Ts > concept sender_of = see - below ; // [execution.senders.traits], sender traits inline namespace unspecified { struct sender_base {}; } template < class S > struct sender_traits ; inline namespace unspecified { // [execution.senders.connect], the connect sender algorithm struct connect_t ; inline constexpr connect_t connect {}; // [execution.senders.queries], sender queries template < class CPO > struct get_completion_scheduler_t ; template < class CPO > inline constexpr get_completion_scheduler_t get_completion_scheduler {}; // [execution.senders.factories], sender factories struct schedule_t ; inline constexpr schedule_t schedule {}; template < class ... Ts > struct just - sender ; // exposition only template < moveable - value ... Ts > just - sender < remove_cvref_t < Ts > ... 
> just ( Ts && ...); struct transfer_just_t ; inline constexpr transfer_just_t transfer_just {}; // [execution.senders.adaptors], sender adaptors struct on_t ; inline constexpr on_t on {}; struct lazy_on_t ; inline constexpr lazy_on_t lazy_on {}; struct transfer_t ; inline constexpr transfer_t transfer {}; struct lazy_transfer_t ; inline constexpr lazy_transfer_t lazy_transfer {}; struct schedule_from_t ; inline constexpr schedule_from_t schedule_from {}; struct lazy_schedule_from_t ; inline constexpr lazy_schedule_from_t lazy_schedule_from {}; struct then_t ; inline constexpr then_t then {}; struct lazy_then_t ; inline constexpr lazy_then_t lazy_then {}; struct upon_error_t ; inline constexpr upon_error_t upon_error {}; struct lazy_upon_error_t ; inline constexpr lazy_upon_error_t lazy_upon_error {}; struct upon_done_t ; inline constexpr upon_done_t upon_done {}; struct lazy_upon_done_t ; inline constexpr lazy_upon_done_t lazy_upon_done {}; struct let_value_t ; inline constexpr let_value_t let_value {}; struct lazy_let_value_t ; inline constexpr lazy_let_value_t lazy_let_value {}; struct let_error_t ; inline constexpr let_error_t let_error {}; struct lazy_let_error_t ; inline constexpr lazy_let_error_t lazy_let_error {}; struct let_done_t ; inline constexpr let_done_t let_done {}; struct lazy_let_done_t ; inline constexpr lazy_let_done_t lazy_let_done {}; struct bulk_t ; inline constexpr bulk_t bulk {}; struct lazy_bulk_t ; inline constexpr lazy_bulk_t lazy_bulk {}; struct split_t ; inline constexpr split_t split {}; struct lazy_split_t ; inline constexpr lazy_split_t lazy_split {}; struct when_all_t ; inline constexpr when_all_t when_all {}; struct when_all_with_variant_t ; inline constexpr when_all_with_variant_t when_all_with_variant {}; struct transfer_when_all_t ; inline constexpr transfer_when_all_t transfer_when_all {}; struct lazy_transfer_when_all_t ; inline constexpr lazy_transfer_when_all_t lazy_transfer_when_all {}; struct transfer_when_all_with_variant_t ; inline constexpr transfer_when_all_with_variant_t transfer_when_all_with_variant {}; struct lazy_transfer_when_all_with_variant_t ; inline constexpr lazy_transfer_when_all_with_variant_t lazy_transfer_when_all_with_variant {}; template < typed_sender S > using into - variant - type = see - below ; // exposition-only template < typed_sender S > see - below into_variant ( S && ); // [execution.senders.consumers], sender consumers struct ensure_started_t ; inline constexpr ensure_started_t ensure_started {}; struct start_detached_t ; inline constexpr start_detached_t start_detached {}; } } namespace std :: this_thread { inline namespace unspecified { template < typed_sender S > using sync - wait - type = see - below ; // exposition-only template < typed_sender S > using sync - wait - with - variant - type = see - below ; // exposition-only struct sync_wait_t ; inline constexpr sync_wait_t sync_wait {}; struct sync_wait_with_variant_t ; inline constexpr sync_wait_with_variant_t sync_wait_with_variant {}; } } namespace std :: execution { inline namespace unspecified { // [execution.execute], one-way execution struct execute_t ; inline constexpr execute_t execute {}; } } 
9.2. Helper concepts [execution.helpers]
template <class T>
  concept moveable-value =              // exposition only
    move_constructible<remove_cvref_t<T>> &&
    constructible_from<remove_cvref_t<T>, T>;
9.3. Schedulers [execution.schedulers]
- 
     The scheduler concept defines the requirements of a scheduler type.

template <class S>
  concept scheduler =
    copy_constructible<remove_cvref_t<S>> &&
    equality_comparable<remove_cvref_t<S>> &&
    requires (S&& s) {
      execution::schedule((S&&) s);
    };
- 
     None of a scheduler’s copy constructor, destructor, equality comparison, or swap member functions shall exit via an exception.
- 
     None of these member functions, nor a scheduler type’s schedule function, shall introduce data races as a result of concurrent invocations of those functions from different threads.
- 
     For any two (possibly const) values s1 and s2 of some scheduler type S, s1 == s2 shall return true only if both s1 and s2 share the same associated execution context.
- 
     A scheduler type’s destructor shall not block pending completion of any receivers connected to the sender objects returned from schedule. (A non-normative illustration follows this list.)
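Non-normative example: a minimal sketch of the shapes involved in scheduling, namely a trivial “inline” scheduler whose sender completes immediately on the calling thread. Plain member functions named schedule, connect, start, and the completion functions stand in for the tag_invoke-based customizations specified in this Clause; this is an illustration, not a conforming implementation.

#include <exception>
#include <iostream>
#include <utility>

struct inline_scheduler {
  // The operation state pairs the scheduled work with a concrete receiver.
  template <class Receiver>
  struct operation {
    Receiver r;
    void start() noexcept { std::move(r).set_value(); }  // complete inline
  };

  // The sender returned by schedule(); connecting it yields an operation state.
  struct sender {
    template <class Receiver>
    operation<Receiver> connect(Receiver r) const { return {std::move(r)}; }
  };

  sender schedule() const noexcept { return {}; }
  friend bool operator==(inline_scheduler, inline_scheduler) { return true; }
};

// A toy receiver exposing the three completion channels.
struct print_receiver {
  void set_value() && { std::cout << "completed\n"; }
  void set_error(std::exception_ptr) && noexcept { std::terminate(); }
  void set_done() && noexcept {}
};

int main() {
  inline_scheduler sch;
  auto op = sch.schedule().connect(print_receiver{});
  op.start();  // runs the work; here, it immediately prints "completed"
}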
9.3.1. Scheduler queries [execution.schedulers.queries]
9.3.1.1. execution :: get_forward_progress_guarantee 
enum class forward_progress_guarantee { concurrent , parallel , weakly_parallel }; 
- 
     execution :: get_forward_progress_guarantee 
- 
     The name execution :: get_forward_progress_guarantee s S decltype (( s )) S execution :: scheduler execution :: get_forward_progress_guarantee execution :: get_forward_progress_guarantee ( s ) - 
       tag_invoke ( execution :: get_forward_progress_guarantee , as_const ( s )) execution :: forward_progress_guarantee noexcept 
- 
       Otherwise, execution :: forward_progress_guarantee :: weakly_parallel 
 
- 
       
- 
     If execution :: get_forward_progress_guarantee ( s ) s execution :: forward_progress_guarantee :: concurrent execution :: forward_progress_guarantee :: parallel 
9.3.1.2. this_thread :: execute_may_block_caller 
   - 
     this_thread :: execute_may_block_caller s execution :: execute ( s , f ) f 
- 
     The name this_thread :: execute_may_block_caller s S decltype (( s )) S execution :: scheduler this_thread :: execute_may_block_caller this_thread :: execute_may_block_caller ( s ) - 
       tag_invoke ( this_thread :: execute_may_block_caller , as_const ( s )) bool noexcept 
- 
       Otherwise, true.
 
- 
       
- 
     If this_thread :: execute_may_block_caller ( s ) s false, noexecution :: execute ( s , f ) f 
9.4. Receivers [execution.receivers]
- 
     A receiver represents the continuation of an asynchronous operation. An asynchronous operation may complete with a (possibly empty) set of values, an error, or it may be cancelled. A receiver has three principal operations corresponding to the three ways an asynchronous operation may complete: set_value, set_error, and set_done. These are collectively known as a receiver’s completion-signal operations.
- 
     The receiver concept defines the requirements for a receiver type with an unknown set of value types. The receiver_of concept defines the requirements for a receiver type with a known set of value types, whose error type is std::exception_ptr.

template <class T, class E = exception_ptr>
  concept receiver =
    move_constructible<remove_cvref_t<T>> &&
    constructible_from<remove_cvref_t<T>, T> &&
    requires(remove_cvref_t<T>&& t, E&& e) {
      { execution::set_done(std::move(t)) } noexcept;
      { execution::set_error(std::move(t), (E&&) e) } noexcept;
    };

template <class T, class... An>
  concept receiver_of =
    receiver<T> &&
    requires(remove_cvref_t<T>&& t, An&&... an) {
      execution::set_value(std::move(t), (An&&) an...);
    };
- 
     The receiver’s completion-signal operations have semantic requirements that are collectively known as the receiver contract, described below:
- 
       None of a receiver’s completion-signal operations shall be invoked before execution::start has been called on the operation state object that was returned by execution::connect to connect that receiver to a sender.
- 
       Once execution::start has been called on the operation state object, exactly one of the receiver’s completion-signal operations shall complete non-exceptionally before the receiver is destroyed.
- 
       If execution::set_value exits with an exception, it is still valid to call execution::set_error or execution::set_done on the receiver.
- 
     Once one of a receiver’s completion-signal operations has completed non-exceptionally, the receiver contract has been satisfied. 
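Non-normative example: the sketch below shows how an operation state typically honors the receiver contract: exactly one completion signal is delivered, and an exception escaping the value channel is routed to the error channel, which the contract above explicitly permits. Plain member functions stand in for the customization points defined in the following subclauses; the names logging_receiver and complete are purely illustrative.

#include <exception>
#include <iostream>
#include <utility>

struct logging_receiver {
  void set_value(int v) && { std::cout << "value: " << v << '\n'; }
  void set_error(std::exception_ptr) && noexcept { std::cout << "error\n"; }
  void set_done() && noexcept { std::cout << "done\n"; }
};

template <class Receiver, class F>
void complete(Receiver r, F f) {
  try {
    std::move(r).set_value(f());  // the success path
  } catch (...) {
    // set_value exited with an exception; the contract still permits this.
    std::move(r).set_error(std::current_exception());
  }
}

int main() {
  complete(logging_receiver{}, [] { return 13; });                       // prints "value: 13"
  complete(logging_receiver{}, []() -> int { throw std::exception{}; }); // prints "error"
}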
9.4.1. execution :: set_value 
   - 
     execution :: set_value 
- 
     The name execution :: set_value execution :: set_value ( R , Vs ...) R Vs ... - 
       tag_invoke ( execution :: set_value , R , Vs ...) tag_invoke Vs ... R 
- 
       Otherwise, execution :: set_value ( R , Vs ...) 
 
- 
       
9.4.2. execution :: set_error 
   - 
     execution :: set_error 
- 
     The name execution :: set_error execution :: set_error ( R , E ) R E - 
       tag_invoke ( execution :: set_error , R , E ) tag_invoke E R 
- 
       Otherwise, execution :: set_error ( R , E ) 
 
- 
       
9.4.3. execution :: set_done 
   - 
     execution :: set_done 
- 
     The name execution :: set_done execution :: set_done ( R ) R - 
       tag_invoke ( execution :: set_done , R ) tag_invoke R 
- 
       Otherwise, execution :: set_done ( R ) 
 
- 
       
9.4.4. Receiver queries [execution.receivers.queries]
9.4.4.1. execution :: get_scheduler 
   - 
     execution :: get_scheduler 
- 
     The name execution :: get_scheduler r R decltype (( r )) R execution :: receiver execution :: get_scheduler execution :: get_scheduler ( r ) - 
       tag_invoke ( execution :: get_scheduler , as_const ( r )) execution :: scheduler noexcept 
- 
       Otherwise, execution :: get_scheduler ( r ) 
 
- 
       
9.4.4.2. execution :: get_allocator 
   - 
     execution :: get_allocator 
- 
     The name execution :: get_allocator r R decltype (( r )) R execution :: receiver execution :: get_allocator execution :: get_allocator ( r ) - 
       tag_invoke ( execution :: get_allocator , as_const ( r )) noexcept 
- 
       Otherwise, execution :: get_allocator ( r ) 
 
- 
       
9.4.4.3. execution :: get_stop_token 
   - 
     execution :: get_stop_token 
- 
     The name execution :: get_stop_token r R decltype (( r )) R execution :: receiver execution :: get_stop_token execution :: get_stop_token ( r ) - 
       tag_invoke ( execution :: get_stop_token , as_const ( r )) stoppable_token noexcept 
- 
       Otherwise, never_stop_token {} 
 
- 
       
- 
     Let r s op_state execution :: connect ( s , r ) token execution :: get_stop_token ( r ) token r r op_state token r token op_state s r 
9.5. Operation states [execution.op_state]
- 
     The operation_state concept defines the requirements of an operation state type.

template <class O>
  concept operation_state =
    destructible<O> &&
    is_object_v<O> &&
    requires (O& o) {
      { execution::start(o) } noexcept;
    };
9.5.1. execution :: start 
   - 
     execution :: start 
- 
     The name execution :: start execution :: start ( O ) O - 
       tag_invoke ( execution :: start , O ) tag_invoke O 
- 
       Otherwise, execution :: start ( O ) 
 
- 
       
- 
     The caller of execution :: start ( O ) O R execution :: connect O 
9.6. Senders [execution.senders]
- 
     A sender describes a potentially asynchronous operation. A sender’s responsibility is to fulfill the receiver contract of a connected receiver by delivering one of the receiver completion-signals. 
- 
     The sender concept defines the requirements for a sender type. The sender_to concept defines the requirements for a sender type that can be connected to a given receiver type.

template <class S>
  concept sender =
    move_constructible<remove_cvref_t<S>> &&
    !requires {
      typename sender_traits<remove_cvref_t<S>>::__unspecialized; // exposition only
    };

template <class S, class R>
  concept sender_to =
    sender<S> &&
    receiver<R> &&
    requires (S&& s, R&& r) {
      execution::connect((S&&) s, (R&&) r);
    };
- 
     A sender is typed if it declares what types it sends through a connected receiver’s channels. 
- 
     The typed_sender template < class S > concept has - sender - types = // exposition only requires { typename has - value - types < S :: template value_types > ; typename has - error - types < S :: template error_types > ; typename bool_constant < S :: sends_done > ; }; template < class S > concept typed_sender = sender < S > && has - sender - types < sender_traits < remove_cvref_t < S >>> ; 
- 
     The sender_of template < class ... Ts > struct type - list {}; template < class S , class ... Ts > concept sender_of = typed_sender < S > && same_as < type - list < Ts ... > , typename sender_traits < S >:: value_types < type - list , type_identity_t > > ; 
9.6.1. Sender traits [execution.senders.traits]
- 
     The class sender_base value_types error_types sends_done 
- 
     The class template sender_traits 
- 
     The primary class template sender_traits < S > sender - traits - base < S > - 
       If has - sender - types < S > true, thensender - traits - base < S > template < class S > struct sender - traits - base { template < template < class ... > class Tuple , template < class ... > class Variant > using value_types = typename S :: template value_types < Tuple , Variant > ; template < template < class ... > class Variant > using error_types = typename S :: template error_types < Variant > ; static constexpr bool sends_done = S :: sends_done ; }; 
- 
       Otherwise, if derived_from < S , sender_base > true, thensender - traits - base < S > template < class S > struct sender - traits - base {}; 
- 
       Otherwise, sender - traits - base < S > template < class S > struct sender - traits - base { using __unspecialized = void ; // exposition only }; 
 
- 
       
- 
     If sender_traits < S >:: value_types < Tuple , Variant > S Variant < Tuple < Args0 ..., Args1 ..., ..., ArgsN ... >> Args0 ArgsN S execution :: set_value S execution :: set_value ( r , args ...) r decltype ( args ) Args0 ArgsN 
- 
     If sender_traits < S >:: error_types < Variant > S Variant < E0 , E1 , ..., EN > E0 EN S execution :: set_error S execution :: set_error ( r , e ) r decltype ( e ) E0 EN 
- 
     If sender_traits < S >:: sends_done true, and such senderS execution :: set_done ( r ) r 
- 
     Users may specialize sender_traits 
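Non-normative example: a typed sender advertises the types it may send by exposing value_types, error_types, and sends_done, which sender_traits then forwards. The sender below (a hypothetical my_typed_sender, with its connect customization omitted) declares that it completes with either (int, int) or (std::string), may report std::exception_ptr, and never signals done; sender_traits<my_typed_sender>::value_types<std::tuple, std::variant> would therefore name std::variant<std::tuple<int, int>, std::tuple<std::string>>.

#include <exception>
#include <string>

struct my_typed_sender {
  // Completion value signatures: (int, int) or (std::string).
  template <template <class...> class Tuple, template <class...> class Variant>
  using value_types = Variant<Tuple<int, int>, Tuple<std::string>>;

  // The only error type that may be sent is std::exception_ptr.
  template <template <class...> class Variant>
  using error_types = Variant<std::exception_ptr>;

  // This sender never invokes set_done on a connected receiver.
  static constexpr bool sends_done = false;

  // connect(...) omitted; only the declarative part is relevant here.
};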
9.6.2. execution :: connect 
   - 
     execution :: connect 
- 
     The name execution :: connect s r S decltype (( s )) R decltype (( r )) R execution :: receiver S execution :: sender execution :: connect ( s , r ) execution :: connect ( s , r ) - 
       tag_invoke ( execution :: connect , s , r ) execution :: operation_state tag_invoke execution :: start s 
- 
       Otherwise, execution :: connect ( s , r ) 
 
- 
       
- 
     Standard sender types shall always expose an rvalue-qualified overload of a customization of execution :: connect execution :: connect 
9.6.3. Sender queries [execution.senders.queries]
9.6.3.1. execution :: get_completion_scheduler 
   - 
     execution :: get_completion_scheduler 
- 
     The name execution :: get_completion_scheduler s S decltype (( s )) S execution :: sender execution :: get_completion_scheduler CPO execution :: get_completion_scheduler < CPO > execution :: set_value_t execution :: set_error_t execution :: set_done_t execution :: get_completion_scheduler < CPO > execution :: get_completion_scheduler < CPO > ( s ) - 
       tag_invoke ( execution :: get_completion_scheduler < CPO > , as_const ( s )) execution :: scheduler noexcept 
- 
       Otherwise, execution :: get_completion_scheduler < CPO > ( s ) 
 
- 
       
- 
     If, for some sender s CPO execution :: get_completion_scheduler < decltype ( CPO ) > ( s ) sch s CPO ( r , args ...) r s args ... sch 
9.6.4. Sender factories [execution.senders.factories]
9.6.4.1. General [execution.senders.factories.general]
- 
     Subclause [execution.senders.factories] defines sender factories, which are utilities that return senders without accepting senders as arguments. 
9.6.4.2. execution :: schedule 
   - 
     execution :: schedule 
- 
     The name execution :: schedule s S decltype (( s )) S execution :: scheduler execution :: schedule execution :: schedule ( s ) - 
       tag_invoke ( execution :: schedule , s ) execution :: sender tag_invoke set_value s 
- 
       Otherwise, execution :: schedule ( s ) 
 
- 
       
9.6.4.3. execution :: just 
   - 
     execution::just is used to create a sender that propagates a set of values to a connected receiver.

template <class... Ts>
struct just-sender {                         // exposition only
  std::tuple<Ts...> vs_;

  template <template <class...> class Tuple, template <class...> class Variant>
  using value_types = Variant<Tuple<Ts...>>;

  template <template <class...> class Variant>
  using error_types = Variant<>;

  static constexpr bool sends_done = false;

  template <class R>
  struct operation_state {
    std::tuple<Ts...> vs_;
    R r_;

    void tag_invoke(execution::start_t) noexcept(
        noexcept(execution::set_value(declval<R>(), declval<Ts>()...))) {
      try {
        apply([&](Ts&... values_) {
          execution::set_value(move(r_), move(values_)...);
        }, vs_);
      } catch (...) {
        execution::set_error(move(r_), current_exception());
      }
    }
  };

  template <receiver R>
    requires receiver_of<R, Ts...> && (copyable<Ts> && ...)
  auto tag_invoke(execution::connect_t, R&& r) const & {
    return operation_state<R>{ vs_, std::forward<R>(r) };
  }

  template <receiver R>
    requires receiver_of<R, Ts...>
  auto tag_invoke(execution::connect_t, R&& r) && {
    return operation_state<R>{ std::move(vs_), std::forward<R>(r) };
  }
};

template <moveable-value... Ts>
  just-sender<remove_cvref_t<Ts>...> just(Ts&&... ts) noexcept(see-below);
- 
     Effects: Initializes vs_ with make_tuple(forward<Ts>(ts)...).
- 
     Remarks: The expression in the noexcept-specifier is equivalent to: (is_nothrow_constructible_v<remove_cvref_t<Ts>, Ts> && ...).
9.6.4.4. execution :: transfer_just 
   - 
     execution :: transfer_just 
- 
     The name execution :: transfer_just s vs ... S decltype (( s )) Vs ... decltype (( vs )) S execution :: scheduler V Vs moveable - value execution :: transfer_just ( s , vs ...) execution :: transfer_just ( s , vs ...) - 
       tag_invoke ( execution :: transfer_just , s , vs ...) execution :: typed_sender tag_invoke set_value s vs ... 
- 
       Otherwise, execution :: transfer ( execution :: just ( vs ...), s ) 
 
- 
       
9.6.5. Sender adaptors [execution.senders.adaptors]
9.6.5.1. General [execution.senders.adaptors.general]
- 
     Subclause [execution.senders.adaptors] defines sender adaptors, which are utilities that transform one or more senders into a sender with custom behaviors. When they accept a single sender argument, they can be chained to create sender chains. 
- 
     The bitwise OR operator is overloaded for the purpose of creating sender chains. The adaptors also support function call syntax with equivalent semantics. 
- 
     Most sender adaptors have two versions: a potentially eager version and a strictly lazy version. For such sender adaptors, adaptor denotes the potentially eager version and lazy_adaptor denotes the strictly lazy version.
- 
     A strictly lazy version of a sender adaptor is required to not begin executing any functions which would observe or modify any of the arguments of the adaptor before the returned sender is connected with a receiver using execution :: connect execution :: start 
- 
     Unless otherwise specified, all sender adaptors which accept a single sender 
- 
     Unless otherwise specified, whenever a strictly lazy sender adaptor constructs a receiver it passes to another sender’s connect, that receiver shall propagate receiver queries to a receiver accepted as an argument of execution :: connect 
9.6.5.2. Sender adaptor closure objects [execution.senders.adaptor.objects]
- 
     A pipeable sender adaptor closure object is a function object that accepts one or more sender sender C S decltype (( S )) sender sender C ( S ) S | C Given an additional pipeable sender adaptor closure object D C | D S | C | D S | ( C | D ) 
- 
     A pipeable sender adaptor object is a customization point object that accepts a sender sender 
- 
     If a pipeable sender adaptor object accepts only one argument, then it is a pipeable sender adaptor closure object. 
- 
     If a pipeable sender adaptor object accepts more than one argument, then the following expressions are equivalent: adaptor ( sender , args ...) adaptor ( args ...)( sender ) sender | adaptor ( args ...) In that case, adaptor ( args ...) 
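Non-normative example: a sketch of how the pipe syntax can be realized. adaptor(args...) returns a pipeable closure object, and sender | closure applies it, so that adaptor(sender, args...), adaptor(args...)(sender), and sender | adaptor(args...) agree. The names sender_adaptor_closure, toy_then, and dummy_sender are illustrative, and the adaptor performs no real work.

#include <utility>

template <class F>
struct sender_adaptor_closure {
  F f;

  // sender | closure is equivalent to closure(sender).
  template <class Sender>
  friend decltype(auto) operator|(Sender&& s, sender_adaptor_closure c) {
    return std::move(c.f)(std::forward<Sender>(s));
  }

  template <class Sender>
  decltype(auto) operator()(Sender&& s) && {
    return std::move(f)(std::forward<Sender>(s));
  }
};
template <class F>
sender_adaptor_closure(F) -> sender_adaptor_closure<F>;

struct toy_then_fn {
  // Direct form: toy_then(sender, f).
  template <class Sender, class F>
  auto operator()(Sender&& s, F) const {
    // A real adaptor would return the adapted sender; the input is returned
    // unchanged to keep the sketch minimal.
    return std::forward<Sender>(s);
  }
  // Partial-application form: toy_then(f) yields a pipeable closure.
  template <class F>
  auto operator()(F f) const {
    return sender_adaptor_closure{
      [f = std::move(f)](auto&& s) mutable {
        return toy_then_fn{}(std::forward<decltype(s)>(s), std::move(f));
      }};
  }
};
inline constexpr toy_then_fn toy_then{};

int main() {
  struct dummy_sender {} s;
  auto a = toy_then(s, [](int) {});   // adaptor(sender, args...)
  auto b = s | toy_then([](int) {});  // sender | adaptor(args...), same result
  (void) a; (void) b;
}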
9.6.5.3. execution :: on 
   - 
     execution :: on execution :: lazy_on 
- 
     The name execution :: on sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: sender execution :: on execution :: on ( sch , s ) - 
       tag_invoke ( execution :: on , sch , s ) execution :: sender 
- 
       Otherwise, lazy_on ( sch , s ) 
 If the function selected above does not return a sender which starts s sch 
- 
       
- 
     The name execution :: lazy_on sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: sender execution :: lazy_on execution :: lazy_on ( sch , s ) - 
       tag_invoke ( execution :: lazy_on , sch , s ) execution :: sender s sch 
- 
       Otherwise, constructs a sender s2 s2 out_r op_state execution :: start op_state - 
         Constructs a receiver r - 
           When execution :: set_value ( r ) execution :: connect ( s , out_r ) op_state2 execution :: start ( op_state2 ) execution :: set_error out_r current_exception () 
- 
           When execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           When execution :: set_done ( r ) execution :: set_done ( out_r ) 
 
- 
           
- 
         Calls execution :: schedule ( sch ) s3 execution :: connect ( s3 , r ) op_state3 execution :: start ( op_state3 ) execution :: set_error ( out_r , current_exception ()) 
 
- 
         
 
- 
       
- 
     Any receiver r on lazy_on get_scheduler sch on lazy_on 
9.6.5.4. execution :: transfer 
   - 
     execution :: transfer execution :: lazy_transfer set_value 
- 
     The name execution :: transfer sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: sender execution :: transfer execution :: transfer ( s , sch ) - 
       tag_invoke ( execution :: transfer , get_completion_scheduler < set_value_t > ( s ), s , sch ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: transfer , s , sch ) execution :: sender 
- 
       Otherwise, schedule_from ( sch , s ) 
 If the function selected above does not return a sender which is a result of a call to execution :: schedule_from ( sch , s2 ) s2 s 
- 
       
- 
     The name execution :: lazy_transfer sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: sender execution :: lazy_transfer execution :: lazy_transfer ( s , sch ) - 
       tag_invoke ( execution :: lazy_transfer , get_completion_scheduler < set_value_t > ( s ), s , sch ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: lazy_transfer , s , sch ) execution :: sender 
- 
       Otherwise, lazy_schedule_from ( sch , s ) 
 If the function selected above does not return a sender which is a result of a call to execution :: lazy_schedule_from ( sch , s2 ) s2 s 
- 
       
- 
     Senders returned from execution :: transfer execution :: lazy_transfer get_completion_scheduler < CPO > sch 
9.6.5.5. execution :: schedule_from 
   - 
     execution :: schedule_from execution :: lazy_schedule_from schedule_from lazy_schedule_from transfer lazy_transfer 
- 
     The name execution :: schedule_from sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: typed_sender execution :: schedule_from execution :: schedule_from ( sch , s ) - 
       tag_invoke ( execution :: schedule_from , sch , s ) execution :: sender tag_invoke sch s 
- 
       Otherwise, lazy_schedule_from ( sch , s ) 
 
- 
       
- 
     The name execution :: lazy_schedule_from sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: typed_sender execution :: lazy_schedule_from execution :: lazy_schedule_from ( sch , s ) - 
       tag_invoke ( execution :: lazy_schedule_from , sch , s ) execution :: sender tag_invoke sch s 
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r 
- 
         Calls execution :: connect ( s , r ) op_state2 execution :: set_error out_r current_exception () 
- 
         When a receiver completion-signal Signal ( r , args ...) r2 - 
           When execution :: set_value ( r2 ) Signal ( out_r , args ...) 
- 
           When execution :: set_error ( r2 , e ) execution :: set_error ( out_r , e ) 
- 
            When execution :: set_done ( r2 ) execution :: set_done ( out_r ) 
 It then calls execution :: schedule ( sch ) s3 execution :: connect ( s3 , r2 ) op_state3 execution :: start ( op_state3 ) execution :: set_error ( out_r , current_exception ()) 
- 
           
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
         
 
- 
       
- 
     Senders returned from execution :: transfer execution :: lazy_transfer get_completion_scheduler < CPO > sch 
9.6.5.6. execution :: then 
   - 
     execution :: then execution :: lazy_then 
- 
     The name execution :: then s f S decltype (( s )) S execution :: sender execution :: then execution :: then ( s , f ) - 
       tag_invoke ( execution :: then , get_completion_scheduler < set_value_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: then , s , f ) execution :: sender 
- 
       Otherwise, lazy_then ( s , f ) 
 If the function selected above does not return a sender which invokes f set_value s s 
- 
       
- 
     The name execution :: lazy_then s f S decltype (( s )) S execution :: sender execution :: lazy_then execution :: lazy_then ( s , f ) - 
       tag_invoke ( execution :: lazy_then , get_completion_scheduler < set_value_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: lazy_then , s , f ) execution :: sender 
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) invoke ( f , args ...) v execution :: set_value ( out_r , v ) execution :: set_error ( out_r , current_exception ()) 
- 
           When execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           When execution :: set_done ( r ) execution :: set_done ( out_r ) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
         
 If the function selected above does not return a sender which invokes f set_value s s 
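Non-normative example: the receiver that lazy_then conceptually inserts between the input sender and the output receiver can be pictured as follows: values are transformed by f, errors and the done signal are forwarded unchanged, and an exception thrown by f becomes an error. Member functions stand in for the customization points, and the sketch assumes f returns a non-void value.

#include <exception>
#include <functional>
#include <utility>

template <class OutReceiver, class F>
struct then_receiver {
  OutReceiver out_r;
  F f;

  template <class... Args>
  void set_value(Args&&... args) && {
    try {
      // Transform the incoming values with f and forward the result.
      std::move(out_r).set_value(
          std::invoke(std::move(f), std::forward<Args>(args)...));
    } catch (...) {
      std::move(out_r).set_error(std::current_exception());
    }
  }

  template <class E>
  void set_error(E&& e) && noexcept {
    std::move(out_r).set_error(std::forward<E>(e));  // forwarded unchanged
  }

  void set_done() && noexcept {
    std::move(out_r).set_done();                     // forwarded unchanged
  }
};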
- 
       
9.6.5.7. execution :: upon_error 
   - 
     execution :: upon_error execution :: lazy_upon_error 
- 
     The name execution :: upon_error s f S decltype (( s )) S execution :: sender execution :: upon_error execution :: upon_error ( s , f ) - 
       tag_invoke ( execution :: upon_error , get_completion_scheduler < set_error_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: upon_error , s , f ) execution :: sender 
- 
       Otherwise, lazy_upon_error ( s , f ) 
 If the function selected above does not return a sender which invokes f set_error s s 
- 
       
- 
     The name execution :: lazy_upon_error s f S decltype (( s )) S execution :: sender execution :: lazy_upon_error execution :: lazy_upon_error ( s , f ) - 
       tag_invoke ( execution :: lazy_upon_error , get_completion_scheduler < set_error_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: lazy_upon_error , s , f ) execution :: sender 
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) execution :: set_value ( out_r , args ...) 
- 
           When execution :: set_error ( r , e ) invoke ( f , e ) v execution :: set_value ( out_r , v ) execution :: set_error ( out_r , current_exception ()) 
- 
           When execution :: set_done ( r ) execution :: set_done ( out_r ) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
         
 If the function selected above does not return a sender which invokes f set_error s s 
- 
       
9.6.5.8. execution :: upon_done 
   - 
     execution :: upon_done execution :: lazy_upon_done 
- 
     The name execution :: upon_done s f S decltype (( s )) S execution :: sender execution :: upon_done execution :: upon_done ( s , f ) - 
       tag_invoke ( execution :: upon_done , get_completion_scheduler < set_done_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: upon_done , s , f ) execution :: sender 
- 
       Otherwise, lazy_upon_done ( s , f ) 
 If the function selected above does not return a sender which invokes f set_done s s 
- 
       
- 
     The name execution :: lazy_upon_done s f S decltype (( s )) S execution :: sender execution :: lazy_upon_done execution :: lazy_upon_done ( s , f ) - 
       tag_invoke ( execution :: lazy_upon_done , get_completion_scheduler < set_done_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: lazy_upon_done , s , f ) execution :: sender 
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) execution :: set_value ( out_r , args ...) 
- 
           When execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           When execution :: set_done ( r ) invoke ( f ) v execution :: set_value ( out_r , v ) execution :: set_error ( out_r , current_exception ()) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
         
 If the function selected above does not return a sender which invokes f set_done s s 
- 
       
9.6.5.9. execution :: let_value 
   - 
     execution :: let_value execution :: lazy_let_value 
- 
     The name execution :: let_value s f S decltype (( s )) S execution :: sender execution :: let_value execution :: let_value ( s , f ) - 
       tag_invoke ( execution :: let_value , get_completion_scheduler < set_value_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: let_value , s , f ) execution :: sender 
- 
       Otherwise, lazy_let_value ( s , f ) 
 If the function selected above does not return a sender which invokes f set_value f s 
- 
       
- 
     The name execution :: lazy_let_value s f S decltype (( s )) S execution :: sender execution :: lazy_let_value execution :: lazy_let_value ( s , f ) - 
       tag_invoke ( execution :: lazy_let_value , get_completion_scheduler < set_value_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: lazy_let_value , s , f ) execution :: sender 
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) args ... op_state2 args2 ... invoke ( f , args2 ...) s3 execution :: connect ( s3 , out_r ) op_state3 op_state3 op_state2 execution :: start ( op_state3 ) execution :: set_error ( out_r , current_exception ()) 
- 
           When execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           When execution :: set_done ( r , e ) execution :: set_done ( out_r ) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
         
 If the function selected above does not return a sender which invokes f set_value f s 
- 
       
9.6.5.10. execution :: let_error 
   - 
     execution :: let_error execution :: lazy_let_error 
- 
     The name execution :: let_error s f S decltype (( s )) S execution :: sender execution :: let_error execution :: let_error ( s , f ) - 
       tag_invoke ( execution :: let_error , get_completion_scheduler < set_error_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: let_error , s , f ) execution :: sender 
- 
       Otherwise, lazy_let_error ( s , f ) 
 If the function selected above does not return a sender which invokes f set_error f s 
- 
       
- 
     The name execution :: lazy_let_error s f S decltype (( s )) S execution :: sender execution :: lazy_let_error execution :: lazy_let_error ( s , f ) - 
       tag_invoke ( execution :: lazy_let_error , get_completion_scheduler < set_error_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: lazy_let_error , s , f ) execution :: sender 
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) execution :: set_value ( out_r , args ...) 
- 
           When execution :: set_error ( r , e ) e op_state2 e invoke ( f , e ) s3 execution :: connect ( s3 , out_r ) op_state3 op_state3 op_state2 execution :: start ( op_state3 ) execution :: set_error ( out_r , current_exception ()) 
- 
           When execution :: set_done ( r , e ) execution :: set_done ( out_r ) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
         
 If the function selected above does not return a sender which invokes f set_error f s 
- 
       
9.6.5.11. execution :: let_done 
   - 
     execution :: let_done execution :: lazy_let_done 
- 
     The name execution :: let_done s f S decltype (( s )) S execution :: sender execution :: let_done execution :: let_done ( s , f ) - 
       tag_invoke ( execution :: let_done , get_completion_scheduler < set_done_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: let_done , s , f ) execution :: sender 
- 
       Otherwise, lazy_let_done ( s , f ) 
 If the function selected above does not return a sender which invokes f set_done f s 
- 
       
- 
     The name execution :: lazy_let_done s f S decltype (( s )) S execution :: sender execution :: lazy_let_done execution :: lazy_let_done ( s , f ) - 
       tag_invoke ( execution :: lazy_let_done , get_completion_scheduler < set_done_t > ( s ), s , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: lazy_let_done , s , f ) execution :: sender 
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) execution :: set_value ( out_r , args ...) 
- 
           When execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           When execution :: set_done ( r ) invoke ( f ) s3 execution :: connect ( s3 , out_r ) op_state3 op_state3 op_state2 execution :: start ( op_state3 ) execution :: set_error ( out_r , current_exception ()) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
         
 If the function selected above does not return a sender which invokes f set_done f s 
- 
       
9.6.5.12. execution :: bulk 
   - 
     execution :: bulk execution :: lazy_bulk 
- 
     The name execution :: bulk s shape f S decltype (( s )) Shape decltype (( shape )) F decltype (( f )) S execution :: sender Shape integral execution :: bulk execution :: bulk ( s , shape , f ) - 
       tag_invoke ( execution :: bulk , get_completion_scheduler < set_value_t > ( s ), s , shape , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: bulk , s , shape , f ) execution :: sender 
- 
       Otherwise, lazy_bulk ( s , shape , f ) 
 
- 
       
- 
     The name execution :: lazy_bulk s shape f S decltype (( s )) Shape decltype (( shape )) F decltype (( f )) S execution :: sender Shape integral execution :: bulk execution :: bulk ( s , shape , f ) - 
       tag_invoke ( execution :: bulk , get_completion_scheduler < set_value_t > ( s ), s , shape , f ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: bulk , s , shape , f ) execution :: sender 
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) f ( i , args ...) i Shape 0 shape execution :: set_value ( out_r , args ...) execution :: set_error ( out_r , current_exception ()) 
- 
           When execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           When execution :: set_done ( r , e ) execution :: set_done ( out_r , e ) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
         
 If the function selected above does not return a sender which invokes f ( i , args ...) i Shape 0 shape args ... 
- 
       
9.6.5.13. execution :: split 
   - 
     execution :: split execution :: lazy_split 
- 
     The name execution :: split s S decltype (( s )) S execution :: typed_sender execution :: split execution :: split ( s ) - 
       tag_invoke ( execution :: split , get_completion_scheduler < set_value_t > ( s ), s ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: split , s ) execution :: sender 
- 
       Otherwise, lazy_split ( s ) 
 If the function selected above does not return a sender which sends references to values sent by s 
- 
       
- 
     The name execution :: lazy_split s S decltype (( s )) S execution :: typed_sender execution :: lazy_split execution :: lazy_split ( s ) - 
       tag_invoke ( execution :: lazy_split , get_completion_scheduler < set_value_t > ( s ), s ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: lazy_split , s ) execution :: sender 
- 
       Otherwise, constructs a sender s2 - 
         Creates an object sh_state sh_state execution :: connect ( s , some_r ) some_r 
- 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) args ... sh_state 
- 
           When execution :: set_error ( r , e ) e sh_state 
- 
           When execution :: set_done ( r ) sh_state 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 op_state2 sh_state 
- 
         When s2 out_r op_state execution :: start ( op_state ) execution :: start ( op_state2 ) execution :: start ( op_state ) Signal ( r , args ...) Signal ( out_r , args2 ...) args2 ... sh_state Signal ( r , args ...) 
 
- 
         
 If the function selected above does not return a sender which sends references to values sent by s 
- 
       
9.6.5.14. execution :: when_all 
   - 
     execution :: when_all execution :: when_all_with_variant 
- 
     The name execution :: when_all s ... S decltype (( s )) S i S ... execution :: typed_sender sender_traits < S i >:: value_types Variant execution :: when_all execution :: when_all ( s ...) - 
       tag_invoke ( execution :: when_all , s ...) execution :: sender tag_invoke s ... set_value 
- 
       Otherwise, constructs a sender s s out_r - 
         For each sender s i s ... r i - 
           If execution :: set_value ( r i , t i ...) r i execution :: set_value ( out_r , t 0 ..., t 1 ..., ..., t n ...) n sizeof ...( s ) - 1 
- 
           Otherwise, if execution :: set_error ( r i , e ) r i execution :: set_error ( out_r , e ) 
- 
           Otherwise, if execution :: set_done ( r i ) r i execution :: set_done ( out_r ) 
 
- 
           
- 
         For each sender s i s ... execution :: connect ( s i , r i ) op_state i 
- 
         Returns an operation state op_state op_state i execution :: start ( op_state ) execution :: start ( op_state i ) op_state i 
 
- 
         
 
- 
       
- 
     The name execution :: when_all_with_variant s ... S decltype (( s )) S i S ... execution :: typed_sender execution :: when_all_with_variant execution :: when_all_with_variant ( s ...) - 
       tag_invoke ( execution :: when_all_with_variant , s ...) execution :: sender tag_invoke into - variant - type < S > ... set_value 
- 
       Otherwise, execution :: when_all ( execution :: into_variant ( s )...) 
 
- 
       
- 
     Adaptors defined in this subclause are strictly lazy. 
- 
     Senders returned from adaptors defined in this subclause shall not expose the sender queries get_completion_scheduler < CPO > 
- 
     tag_invoke 
9.6.5.15. execution :: transfer_when_all 
   - 
     execution :: transfer_when_all execution :: lazy_transfer_when_all execution :: transfer_when_all_with_variant execution :: lazy_transfer_when_all_with_variant 
- 
     The name execution :: transfer_when_all sch s ... Sch decltype ( sch ) S decltype (( s )) Sch scheduler S i S ... execution :: typed_sender sender_traits < S i >:: value_types Variant execution :: transfer_when_all execution :: transfer_when_all ( sch , s ...) - 
       tag_invoke ( execution :: transfer_when_all , sch , s ...) execution :: sender tag_invoke s ... set_value sch 
- 
       Otherwise, transfer ( when_all ( s ...), sch ) 
 
- 
       
- 
     The name execution :: lazy_transfer_when_all sch s ... Sch decltype ( sch ) S decltype (( s )) Sch scheduler S i S ... execution :: typed_sender sender_traits < S i >:: value_types Variant execution :: lazy_transfer_when_all execution :: lazy_transfer_when_all ( sch , s ...) - 
       tag_invoke ( execution :: lazy_transfer_when_all , sch , s ...) execution :: sender tag_invoke s ... set_value sch 
- 
       Otherwise, lazy_transfer ( when_all ( s ...), sch ) 
 
- 
       
- 
     The name execution :: transfer_when_all_with_variant s ... S decltype (( s )) S i S ... execution :: typed_sender execution :: transfer_when_all_with_variant execution :: transfer_when_all_with_variant ( s ...) - 
       tag_invoke ( execution :: transfer_when_all_with_variant , s ...) execution :: sender tag_invoke into - variant - type < S > ... set_value 
- 
       Otherwise, execution :: transfer_when_all ( sch , execution :: into_variant ( s )...) 
 
- 
       
- 
     The name execution :: lazy_transfer_when_all_with_variant s ... S decltype (( s )) S i S ... execution :: typed_sender execution :: lazy_transfer_when_all_with_variant execution :: lazy_transfer_when_all_with_variant ( s ...) - 
       tag_invoke ( execution :: lazy_transfer_when_all_with_variant , s ...) execution :: sender tag_invoke into - variant - type < S > ... set_value 
- 
       Otherwise, execution :: lazy_transfer_when_all ( sch , execution :: into_variant ( s )...) 
 
- 
       
- 
     Senders returned from execution :: transfer_when_all execution :: lazy_transfer_when_all get_completion_scheduler < CPO > sch 
9.6.5.16. execution :: into_variant 
   - 
     execution :: into_variant 
- 
     The template into-variant-type and the declaration of execution::into_variant are defined as follows:

template <typed_sender S>
  using into-variant-type =                                   // exposition-only
    typename execution::sender_traits<remove_cvref_t<S>>
      ::template value_types<tuple, variant>;

template <typed_sender S>
  see-below into_variant(S&& s);
- 
     Effects: Returns a sender s2 s2 out_r - 
       Constructs a receiver r - 
         If execution :: set_value ( r , ts ...) execution :: set_value ( out_r , into - variant - type < S > ( make_tuple ( ts ...))) 
- 
         If execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
         If execution :: set_done ( r ) execution :: set_done ( out_r ) 
 
- 
         
- 
       Calls execution :: connect ( s , r ) op_state2 
- 
       Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
       
9.6.5.17. execution :: ensure_started 
   - 
     execution :: ensure_started 
- 
     The name execution :: ensure_started s S decltype (( s )) S execution :: typed_sender execution :: ensure_started execution :: ensure_started ( s ) - 
       tag_invoke ( execution :: ensure_started , get_completion_scheduler < set_value_t > ( s ), s ) execution :: sender 
- 
       Otherwise, tag_invoke ( execution :: ensure_started , s ) execution :: sender 
- 
       Otherwise: - 
         Constructs a receiver r 
- 
         Calls execution :: connect ( s , r ) op_state execution :: start ( op_state ) execution :: set_error ( r , current_exception ()) 
- 
         Constructs a sender s2 s2 out_r op_state2 execution :: start ( op_state2 ) r - 
           If execution :: set_value ( r , ts ...) execution :: set_value ( out_r , ts ...) 
- 
           If execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           If execution :: set_done ( r ) execution :: set_done ( out_r ) 
 
- 
           
 
- 
         
 If the function selected above does not eagerly start the sender s s 
- 
       
9.6.6. Sender consumers [execution.senders.consumers]
9.6.6.1. execution :: start_detached 
   - 
     execution :: start_detached 
- 
     The name execution :: start_detached s S decltype (( s )) S execution :: sender execution :: start_detached execution :: start_detached ( s ) - 
       tag_invoke ( execution :: start_detached , execution :: get_completion_scheduler < execution :: set_value_t > ( s ), s ) void 
- 
       Otherwise, tag_invoke ( execution :: start_detached , s ) void 
- 
       Otherwise: - 
         Constructs a receiver r - 
           When set_value ( r , ts ...) 
- 
           When set_error ( r , e ) std :: terminate 
- 
           When set_done ( r ) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state execution :: start ( op_state ) 
 
- 
         
 If the function selected above does not eagerly start the sender s set_value set_done std :: terminate set_error 
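Non-normative example: the receiver that start_detached conceptually connects can be pictured as follows: values and the done signal are discarded, and an error terminates the program. Member functions stand in for the customization points.

#include <exception>

struct detached_receiver {
  template <class... Ts>
  void set_value(Ts&&...) && noexcept {}                         // result is ignored
  template <class E>
  [[noreturn]] void set_error(E&&) && noexcept { std::terminate(); }
  void set_done() && noexcept {}                                 // cancellation is ignored
};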
- 
       
9.6.6.2. this_thread :: sync_wait 
   - 
     this_thread :: sync_wait this_thread :: sync_wait_with_variant 
- 
     The templates sync - wait - type sync - wait - with - variant - type this_thread :: sync_wait this_thread :: sync_wait_with_variant template < typed_sender S > using sync - wait - type = optional < typename execution :: sender_traits < remove_cvref_t < S >> :: template value_types < tuple , type_identity_t >> ; template < typed_sender S > using sync - wait - with - variant - type = optional < into - variant - type < S >> ; 
- 
     The name this_thread :: sync_wait s S decltype (( s )) S execution :: typed_sender sender_traits < S >:: value_types Variant this_thread :: sync_wait this_thread :: sync_wait - 
       tag_invoke ( this_thread :: sync_wait , execution :: get_completion_scheduler < execution :: set_value_t > ( s ), s ) sync - wait - type < S > 
- 
       Otherwise, tag_invoke ( this_thread :: sync_wait , s ) sync - wait - type < S > 
- 
       Otherwise: - 
         Constructs a receiver r 
- 
         Calls execution :: connect ( s , r ) op_state execution :: start ( op_state ) 
- 
         Blocks the current thread until a receiver completion-signal of r - 
            If execution :: set_value ( r , ts ...) sync - wait - type < S > ( make_tuple ( ts ...)) 
- 
            If execution :: set_error ( r , e ...) remove_cvref_t < decltype ( e ) > exception_ptr std :: rethrow_exception ( e ) e 
- 
            If execution :: set_done ( r ) sync - wait - type < S > ( nullopt ) 
 
- 
           
 
- 
         
 
- 
       
- 
     The name this_thread :: sync_wait_with_variant s S decltype (( s )) S execution :: typed_sender this_thread :: sync_wait_with_variant this_thread :: sync_wait_with_variant - 
       tag_invoke ( this_thread :: sync_wait_with_variant , execution :: get_completion_scheduler < execution :: set_value_t > ( s ), s ) sync - wait - with - variant - type < S > 
- 
       Otherwise, tag_invoke ( this_thread :: sync_wait_with_variant , s ) sync - wait - with - variant - type < S > 
- 
       Otherwise, this_thread :: sync_wait ( execution :: into_variant ( s )) 
 
- 
       
- 
     Any receiver r sync_wait sync_wait_with_variant get_scheduler 
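Non-normative example: the blocking behavior of sync_wait can be pictured with the following sketch, which handles only a sender completing with a single int and uses a mutex and condition variable to block the calling thread until one of the three completion signals arrives. Member functions (connect, start, and the completion functions) stand in for the customization points; toy_sync_wait, sync_state, and sync_receiver are illustrative names.

#include <condition_variable>
#include <exception>
#include <mutex>
#include <optional>
#include <utility>

struct sync_state {
  std::mutex mtx;
  std::condition_variable cv;
  bool done = false;
  std::optional<int> value;
  std::exception_ptr error;
};

struct sync_receiver {
  sync_state* st;
  void finish() {
    std::lock_guard lk(st->mtx);
    st->done = true;
    st->cv.notify_one();
  }
  void set_value(int v) && { st->value = v; finish(); }
  void set_error(std::exception_ptr e) && noexcept { st->error = e; finish(); }
  void set_done() && noexcept { finish(); }
};

// Blocks until the sender completes; rethrows errors, returns nullopt on done.
template <class Sender>
std::optional<int> toy_sync_wait(Sender&& s) {
  sync_state st;
  auto op = std::forward<Sender>(s).connect(sync_receiver{&st});
  op.start();

  std::unique_lock lk(st.mtx);
  st.cv.wait(lk, [&] { return st.done; });
  if (st.error) std::rethrow_exception(st.error);
  return st.value;
}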
9.7. execution :: execute 
   - 
     execution :: execute 
- 
     The name execution :: execute sch f Sch decltype (( sch )) F decltype (( f )) Sch execution :: scheduler F invocable <> execution :: execute execution :: execute - 
       tag_invoke ( execution :: execute , sch , f ) void tag_invoke f sch std :: terminate 
- 
       Otherwise, execution :: start_detached ( execution :: then ( execution :: schedule ( sch ), f )) 
 
-