1. Acknowledgments
Thanks to David Goldblatt, Dietmar Kühl and Jens Maurer for their help with the wording!
2. Revision History
This paper revises P0260R15 - 2025-02-13 as follows.
- removed exception support helpers
- removed discussion points and possible LEWG polls
- changed try_pop to not take an output parameter but return expected
- moved discussion for expected to historic contents
- introduced order Q for sequential consistency
- added wording to make clear that a value is removed from a queue if the constructor called by a pop operation throws an exception
- added list with wording still to be reviewed
P0260R15 revises P0260R14 - 2025-01-12 as follows.
- added more introductory description for the proposed wording
- updated examples and design section to reflect set_error for async_push and async_pop
- provided generally more rationale
- added wording for stop tokens
- added possible polls for LEWG
- fixed wording for pop
- removed strongly happens before for pop operations
- added busy_async
- added get_await_completion_adapter_t from P3570 with error-as-optional to return an optional<T> for coroutines from async_pop
- added error-as-bool to return a bool for coroutines from async_push
P0260R14 revises P0260R13 - 2024-12-10 as follows.
- updated wording based on feedback from Dietmar and Jens
- added example for non-blocking functions
- added discussion for std::expected
- added discussion for async sender behaviour
- wording now proposes for async_push and async_pop to call set_error on a closed queue
P0260R13 revises P0260R12 - 2024-11-21 as follows.
- added success to conqueue_errc
- updated error handling
- added rationale for concepts
- updated API (error handling, return types, emplace), including rationale
- updated examples
P0260R12 revises P0260R11 - 2024-10-12 as follows.
- Implemented feedback from LEWG and SG1 in St. Louis:
  - Added wording for allocator awareness
  - Added rationale for not providing single-ended interfaces
  - Added rationale for bounded_queue not being movable
  - Added rationale for bounded_queue not providing emplace
- Implemented feedback from SG1 in Wroclaw:
  - Fixed wording for sequential consistency
  - Fixed wording for async_*
  - Fixed ordering specification of bounded_queue
- Provided usage examples
P0260R11 revises P0260R10 - 2024-06-26 as follows.
- Implement feedback from LEWG and SG1 in St. Louis:
  - Make concept exposition only
  - Require async operations to run on the scheduler of the receiver of the operation
  - Call set_error for async operations if queue is closed
  - Reintroduced accidentally removed data-race freeness from R9
  - Require sequentially consistent semantics for bounded_queue
  - Remove experimental leftovers
P0260R10 revises P0260R9 - 2024-05-19 as follows.
- Implement feedback from SG1 in St. Louis:
  - Split concept into three
  - Require try_* to be lockfree (w/ error code busy)
  - Drop is_always_lock_free
  - Drop capacity()
- Removed discussion points for SG1
- Removed TS ship vehicle
- General cleanup in the design part
P0260R9 revises P0260R8 - 2024-03-08 as follows.
- Added more wording
- Fixed capacity
- Changed push/pop wording to use "strongly happens before"
- Added discussion points
- Removed paragraph about ctors/dtors returning on same thread
- try_* now never blocks (even for contention on internal synchronization)
- Wording for spurious failures of try_*
- The constructors for bounded_queue that pre-fill the queue have been removed
- Reference to P3282 added
- Moved discussion about try_push(T&&, T&) to historic contents.
P0260R8 revises P0260R7 - 2023-06-15 as follows.
- Restructured document
- Fix typos
- Implement LEWG feedback to derive conqueue_errc from system_error
- Implement LEWG feedback to add range constructor and go back to InputIterator
- size_t capacity() added
- Added TBB concurrent_bounded_queue as existing practice
- Moved discussion about pop() interface to separate paper
- Reintegrated P1958 buffer_queue
- Renamed buffer_queue to bounded_queue
- Added proposed wording (incomplete)
- Updated TS questions
Older revision history was moved to after the proposed wording.
2.1. Review Topics
2.1.1. SG1
Wording Changes that still need to be reviewed by SG1:
- spurious failure for try_* operations
- wording for the total order Q
2.1.2. LWG
Wording Changes that still need to be reviewed by LWG:
- all
2.1.3. Own Review
The current wording is known to be still buggy/incomplete in some places and still needs to be fixed:
- async_* wording needs to handle the case where the value directly comes from a complementary function and was never stored in the queue
- wording needs to be careful not to require move if the value does not come out of the queue (or goes into?); affects only async_*
- set_error(r, exception_ptr) is wrong
3. Introduction
Queues provide a mechanism for communicating data between components of a system.
The existing std::queue in the standard library is an inherently sequential data structure. Its reference-returning element access operations cannot synchronize access to those elements with other queue operations. So, concurrent pushes and pops on queues require a different interface to the queue structure.
Moreover, concurrency adds a new dimension for performance and semantics. Different queue implementations must trade off uncontended operation cost, contended operation cost, and element order guarantees. Some of these trade-offs will necessarily result in semantics weaker than those of a serial queue.
Concurrent queues come in several different flavours, e.g.
- bounded vs. unbounded
- blocking vs. overwriting
- single-ended vs. multi-ended
- strict FIFO ordering vs. priority based ordering
The syntactic concepts proposed here should be valid for all of these flavours, while the concrete semantics might differ.
4. Existing Practice
4.1. Concept of a Bounded Queue
The basic concept of a bounded queue with potentially blocking push and pop operations is very old and widely used. It’s generally provided as an operating system level facility, like other concurrency primitives.
POSIX 2001 has message queues (with priorities and timeout).
FreeRTOS, Mbed, vxWorks provide bounded queues.
4.2. Bounded and Unbounded Queues with C++ Interface
4.2.1. Literature
The first concurrent queue I’ve seen was in [Hughes97]. It was full of bugs and as such shows what will go wrong if C++ doesn’t provide a standard queue. It’s unbounded.
Anthony Williams provided a queue in C++ Concurrency in Action. It’s unbounded.
4.2.2. Boost
Boost has a number of queues, as official library and as example uses of synchronization primitives.
Boost Message Queue only transfers bytes, not objects. It’s bounded.
Boost Lock-Free Queue and Boost Lock-Free SPSC Queue have only non-blocking operations. Boost Lock-Free Queue is bounded or unbounded, Boost Lock-Free SPSC Queue is bounded.
Boost Synchronized Queue is an implementation of an early version of this proposal.
4.2.3. TBB
TBB has concurrent_bounded_queue (TBB Bounded Queue) and an unbounded version (TBB Unbounded Queue).
5. Examples and Implementation
5.1. Implementation
A partial implementation is available at github.com/GorNishanov/conqueue.
Another partial implementation is available at gitlab.com/cppzs/bounded-queue.
A free, open-source implementation of an earlier version of these interfaces is available at the Google Concurrency Library project at github.com/alasdairmackintosh/google-concurrency-library.
The original buffer_queue is in ..../blob/master/include/buffer_queue.h.
The concrete lock_free_buffer_queue is in ..../blob/master/include/lock_free_buffer_queue.h.
The corresponding implementation of the conceptual tools is in ..../blob/master/include/queue_base.h.
5.2. Examples
Here we provide some examples of how the API may be used.
The examples are available at https://gitlab.com/cppzs/bounded-queue/-/tree/master/demo.
5.3. Find Files with String
This example was presented in Object-Oriented Multithreading Using C++. The program searches for files that contain a specific string. The idea is to have one thread collect the file names from the filesystem and several threads that read these files and search them for the string.
Here we have some global definitions:

namespace fs = std::filesystem;
typedef std::bounded_queue<fs::path> FileList;
Here’s the function that searches for the files and pushes them into a queue:
void searchFiles(FileList& files, fs::path dir)
{
    if (fs::exists(dir) && fs::is_directory(dir))
    {
        fs::directory_iterator end;
        for (fs::directory_iterator i(dir); i != end; ++i)
        {
            if (!fs::is_directory(*i))
            {
                files.push(*i);   // A
            }
        }
    }
    else
    {
        cerr << "no such directory" << endl;
    }
    files.close();   // B
}
At A the files are pushed into the queue, and after all files have been pushed the queue is closed at B.
Here’s the function that picks a file and searches it for the string:
void searchWord(FileList& files, string_view str)
{
    std::optional<fs::path> fname;
    while ((fname = files.pop()))   // C
    {
        std::ifstream f(*fname);
        string line;
        while (getline(f, line))
        {
            if (line.find(str) != line.npos)
            {
                cout << "found in " << fname->string() << endl;
                break;
            }
        }
    }
}

Don’t check for the queue being closed at C. Instead just call pop, and if the queue was closed and no files are still in the queue, the returned optional will be empty.
And here’s the main function to put everything together:

int main(int argc, char* argv[])
{
    FileList files(100);   // D
    thread a(searchFiles, std::ref(files), argv[2]);
    thread b1(searchWord, std::ref(files), argv[1]);
    thread b2(searchWord, std::ref(files), argv[1]);
    a.join();
    b1.join();
    b2.join();
    return 0;
}
The queue is created (D) before the threads are started.
5.4. Find Files with String (Async Coroutine Version)
Note: this async version is only presented to demonstrate the use of the interface. Without an asynchronous directory iteration and an asynchronous file read this async version doesn’t make much sense.
For an async version (using coroutines) only very few things change:
searchFiles is now a coroutine returning a task that simply calls async_push instead of push.
searchWord has a few more changes:
exec::task<void> searchWord(FileList& files,
                            stdexec::run_loop& loop,   // A
                            string_view str)
{
    std::optional<fs::path> fname;
    while ((fname = co_await (files.async_pop())))   // B
    {
        // ... same as before ...
    }
    loop.finish();   // C
}

At B we now co_await the result of async_pop. We now get a run_loop as parameter (A) on which we call finish (C) once we’re done.
main now sets up a run_loop and an async_scope instead of starting threads:
int main(int argc, char* argv[])
{
    stdexec::run_loop loop;
    exec::async_scope scope;
    FileList files(100);
    scope.spawn(stdexec::on(loop.get_scheduler(),
                            searchFiles(files, argv[2])));
    scope.spawn(stdexec::on(loop.get_scheduler(),
                            searchWord(files, loop, argv[1])));
    loop.run();
    stdexec::sync_wait(scope.on_empty());
    return 0;
}
5.5. Find Files with String (Async S/R Version)
This version looks pretty different.
The function that creates the sender to search for files looks like this:
template <class Case0, class Case1, class Case2>
exec::variant_sender<Case0, Case1, Case2>
branch(unsigned condition, Case0 sndr0, Case1 sndr1, Case2 sndr2)
{
    if (condition == 0)
        return std::move(sndr0);
    if (condition == 1)
        return std::move(sndr1);
    return std::move(sndr2);
}

unsigned iterSelect(fs::directory_iterator& i)
{
    unsigned ret = 2;
    if (i == fs::directory_iterator{})
    {
        ret = 0;
    }
    else
    {
        if (fs::is_directory(*i))
            ret = 1;
        ++i;   // A
    }
    return ret;
}

stdexec::sender auto searchForFiles(FileList& files, fs::path dir)
{
    return stdexec::let_value(                                    // B
        stdexec::just(fs::directory_iterator(dir)),
        [&files] (fs::directory_iterator& i)                      // C
        {
            stdexec::sender auto nextFile = stdexec::let_value(   // D
                stdexec::just(),
                [&i, &files]
                {
                    fs::path file;
                    if (i != fs::directory_iterator{})
                    {
                        file = *i;                                // E
                    }
                    return branch(                                // F
                        iterSelect(i),
                        // Sndr0: close                           // G
                        stdexec::then(stdexec::just(),
                                      [&files]
                                      {
                                          std::cout << "closing\n";
                                          files.close();
                                          return true;
                                      }),
                        // Sndr1: skip directory                  // H
                        stdexec::just(false),
                        // Sndr2: push file                       // I
                        stdexec::then(files.async_push(file),
                                      [] { return false; }));
                });
            return exec::repeat_effect_until(nextFile);           // J
        });
}
The let_value at B ensures that the directory_iterator at C is alive throughout its whole scope, i.e. for all iterations of the repeat_effect_until at J.
The let_value at D ensures that the file at E is alive for a single iteration.
branch at F is a helper that puts the correct sender into a variant_sender. The correct sender is selected in iterSelect, which also increments our iterator (A).
We have three cases:
- The iterator is at the end, then we close the queue (G).
- The iterator points to a directory, then we skip the entry (H).
- Otherwise we push the entry into the queue (I).
The closing sender returns true to end the repeat_effect_until, the others return false to continue.
The function that creates the sender to search a single file for a string looks like this:
stdexec::sender auto searchWord(FileList& files, std::string_view str)
{
    stdexec::sender auto popAndProcess =
        files.async_pop()                                          // A
        | stdexec::then(
              [str] (const fs::path& fname) noexcept -> bool       // B
              {
                  //std::cout << "searching " << fname.string() << std::endl;
                  std::ifstream f(fname);
                  std::string line;
                  while (std::getline(f, line))
                  {
                      if (line.find(str) != line.npos)
                      {
                          std::cout << "found in " << fname.string() << '\n';
                          break;
                      }
                  }
                  return false;
              })
        | stdexec::upon_error(                                     // C
              [] (auto err) noexcept -> bool
              {
                  return true;
              });
    return exec::repeat_effect_until(popAndProcess);
}
The sender created by async_pop (A) sends either a path or an error (conqueue_errc::closed) if the queue is empty and closed.
The first case is covered by the then algorithm at B, which processes the file.
The second case is covered by the upon_error algorithm at C, which returns true to end the loop.
5.6. Logging
This is an example from an embedded system where no blocking is allowed.
Logging messages can be pushed into a queue from all kind of contexts, while a background task regularly pulls from the queue and sends messages to a serial interface.
This is the push function:
void enqueueLogMessage(std::string&& msg)
{
    if (logQ.try_push(std::move(msg)) != conqueue_errc::success)
    {
        ++lostLogMessages;
    }
}
This is the function that pulls the messages:
void outputLogMessage()
{
    static std::string bufferedString;
    unsigned busyCnt = 0;
    while (outputBuffer.add(bufferedString) > 0)   // returns free space
    {
        auto val = logQ.try_pop();
        if (val)
        {
            bufferedString = std::move(*val);
            continue;
        }
        conqueue_errc ec(val.error());
        assert(ec != conqueue_errc::closed);
        if (ec == conqueue_errc::busy)
        {
            ++busyCnt;
            if (busyCnt > 2)
                break;
        }
        break;
    }
}
6. Conceptual Interface
We provide basic queue operations, and then extend those operations to cover other important use cases.
For performance reasons, the conceptual interface is split into three separate concepts.
The basic-concurrent-queue concept provides the interface for push and pop and for closing the queue.
The concurrent-queue concept is based on basic-concurrent-queue and provides the additional interface try_push and try_pop.
The async-concurrent-queue concept is also based on basic-concurrent-queue and provides the additional interface async_push and async_pop.
The concrete queue bounded_queue models all these concepts.
There’s no single queue implementation that can handle all use cases efficiently. Also, not all use cases require both the push and the pop interface, and only providing one of them allows for more efficient implementations.
Therefore it’s expected that there will be many different implementations of these concepts with different trade-offs.
Some of them might be standardized, most will not be.
But the existence of the concepts allows for adapters (like single-ended interfaces) to be used with different implementations.
LEWG requested in St. Louis to make these concepts exposition-only, as this proposal doesn’t include any adapters using these concepts.
6.1. Error Class
We introduce an enumeration to define the possible states that operations on the queue may return.
For historical reasons this is currently named conqueue_errc, but we hope to rename it.

enum class conqueue_errc { success, empty, full, closed, busy, busy_async };
6.2. Basic Operations
The essential solution to the problem of concurrent queuing is to shift to value-based operations, rather than reference-based operations.
The basic operations are:
bool queue::push(const T& x);
bool queue::push(T&& x);
template <typename... Args> bool emplace(Args&&... as);

Pushes x onto the queue via copy, move or argument-based construction (and possibly blocks). It returns true on success, and false if the queue is closed.

std::optional<T> queue::pop();

Pops a value from the queue via move construction into the return value. If the queue is empty and closed, returns an empty optional. If the queue is empty and open, the operation blocks until an element is available.
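The following is a minimal sketch of how these operations compose, assuming the proposed bounded_queue as the concrete queue type; it is illustrative only.

#include <optional>

// Illustrative sketch: a producer that pushes until it is done (or the queue
// is closed by someone else) and a consumer that drains until closed.
template <class Queue>
void produce(Queue& q)
{
    for (int i = 0; i < 100; ++i)
    {
        if (!q.push(i))       // false: the queue was closed, stop producing
            return;
    }
    q.close();                // tell the consumers that no more data follows
}

template <class Queue>
void consume(Queue& q)
{
    // pop() blocks while the queue is open and empty; an empty optional
    // means the queue is closed and drained.
    while (std::optional<typename Queue::value_type> v = q.pop())
    {
        // use *v ...
    }
}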
6.3. Non-Waiting Operations
Waiting on a full or empty queue can take a while, which has an opportunity cost. Avoiding that wait enables algorithms to do other work rather than wait for a push on a full queue, and to do other work rather than wait for a pop on an empty queue. More importantly, there are contexts with weakly parallel forward progress that don’t allow blocking synchronization, but where pushing into and popping from a queue is still desired.
Therefore we provide try_* versions of the push and pop operations. These operations will never block. If they would have to wait for internal synchronization, the returned queue status is busy. If they would have to schedule a continuation of an async operation, the returned queue status is busy_async.

conqueue_errc queue::try_push(const T& x);
conqueue_errc queue::try_push(T&& x);
template <typename... Args> conqueue_errc try_emplace(Args&&... as);

Note: the return type has changed from earlier versions. Returning a bool and providing the reason for a failure in an extra output parameter feels wrong.
If no object could be placed into the queue, these functions return the respective status. Otherwise, they push the value onto the queue via copy, move or argument-based construction and return conqueue_errc::success.

expected<T, conqueue_errc> queue::try_pop();

If no object could be obtained from the queue, try_pop returns an unexpected with a conqueue_errc that contains the queue status. Otherwise, it pops the element from the queue via move construction into the expected.
This is the interface agreed on in LEWG in Hagenberg.
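As an illustration of this interface, the following hedged sketch distinguishes the possible try_pop outcomes; it assumes a queue type Q modeling the concurrent-queue concept and the conqueue_errc values described above.

// Illustrative only: poll a queue without blocking and decide whether it is
// worth polling again later.
template <class Q>
bool poll_once(Q& q)
{
    auto r = q.try_pop();    // expected<typename Q::value_type, conqueue_errc>
    if (r)
    {
        // process *r ...
        return true;
    }
    switch (r.error())
    {
    case conqueue_errc::empty:       // nothing available right now
    case conqueue_errc::busy:        // would have to wait for internal synchronization
    case conqueue_errc::busy_async:  // would have to schedule an async continuation
        return true;                 // worth polling again later
    case conqueue_errc::closed:
        return false;                // no more data will ever arrive
    default:
        return false;
    }
}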
6.4. Asynchronous Operations
sender auto queue::async_push(const T& x);
sender auto queue::async_push(T&& x);
template <typename... Args> sender auto async_emplace(Args&&... as);
sender auto queue::async_pop();

These operations return a sender that will push or pop the element.
The operations support cancellation: if the receiver is currently waiting on a push or pop operation and is no longer interested in performing the operation, it should be removed from any waiting queues, if any, and be completed with set_stopped.
Sender-based async interfaces have two main ways of usage: native S/R with receivers, and coroutines where the receivers are provided by the coroutine library.
In Wroclaw only the coroutine usage was shown and LEWG voted there for an interface that favours coroutines. The sender returned by async_pop would call set_value with an optional<T>, which mirrors the synchronous pop.
Afterwards it was pointed out that this interface is suboptimal for native receivers that can have multiple set_value overloads. So R12 proposed to call set_value if a value was retrieved and set_error otherwise.
One change from R12 as presented in a telecon in December 2024 is that the close case takes the set_error path and not set_stopped. One reason for this is that if the queue is closed, the pop fails to provide a value. In many cases it’s not really an error, but it’s still some kind of failure, and using the set_stopped channel for this feels wrong.
The native sender/receiver usage could look like this:
files.async_pop()
| stdexec::then(
      [str] (const fs::path& fname) noexcept
      {
          //...
          return false;
      })
| stdexec::upon_error(
      [] (auto err) noexcept -> bool
      {
          return true;
      });
But a native sender/receiver usage could also provide an application specific receiver that now can handle the value case and the closed case completely separately.
Another reason for using set_error is a practical one.
If async_pop returns a value, you often want to handle this value using more than one pipeline stage:

files.async_pop()
| stdexec::then(
      [str] (T1 v1) noexcept
      {
          //...
          return T2{...};
      })
| some_worker.async_consume()
| stdexec::upon_error(
      [] (auto err) noexcept -> bool
      {
          return true;
      });

If the closed case were signalled on a different channel, it would be harder to write such chains, and it’s much easier to handle all failure cases with a single upon_error at the end.
By symmetry, async_push (and async_emplace) also call set_error on a closed queue. For success they simply call set_value().
P3570 provides a mechanism to return different values for coroutines than for direct receivers. This revision proposes to use this mechanism.
For the coroutine use it provides an exposition-only adapter error-as-optional that returns an optional<T> that contains a value if a value was retrieved and an empty optional if the queue was closed.
For async_push an exposition-only adapter error-as-bool is provided that for coroutines returns true if the push succeeded and false if the queue was closed.
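A hedged sketch of the coroutine view of these adapters follows. It assumes a coroutine task type such as exec::task (used in the examples above) and the P3570 await-completion-adapter machinery, neither of which is proposed here.

// Illustrative only: with error-as-optional / error-as-bool, co_await on the
// async operations yields plain values in a coroutine.
exec::task<void> pump(std::bounded_queue<int>& in, std::bounded_queue<int>& out)
{
    // co_await in.async_pop() yields optional<int>: a value while data is
    // available, an empty optional once the queue is empty and closed.
    while (std::optional<int> v = co_await in.async_pop())
    {
        // co_await out.async_push(...) yields bool: false means the
        // destination queue was closed.
        if (!co_await out.async_push(*v))
            break;
    }
    out.close();
}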
6.5. Closed Queues
Threads using a queue for communication need some mechanism to signal when the queue is no longer needed. The usual approach is to add an additional out-of-band signal. However, this approach suffers from the flaw that threads waiting on either full or empty queues need to be woken up when the queue is no longer needed. To do that, you need access to the condition variables used for full/empty blocking, which considerably increases the complexity and fragility of the interface. It also leads to performance implications with additional mutexes or atomics. Rather than require an out-of-band signal, we chose to directly support such a signal in the queue itself, which considerably simplifies coding.
To achieve this signal, a thread may close a queue. Once closed, no new elements may be pushed onto the queue. Push operations on a closed queue will return false or conqueue_errc::closed. Elements already on the queue may be popped. When a queue is empty and closed, pop operations will either (pop) return an empty optional or (try_pop) return an expected with an unexpected result with value conqueue_errc::closed.
The additional operations are as follows:

void queue::close() noexcept;

Close the queue.

bool queue::is_closed() const noexcept;

Return true iff the queue is closed.
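The following sketch illustrates the close protocol described above; it is illustrative only and assumes the proposed bounded_queue.

void close_protocol_demo()
{
    std::bounded_queue<int> q(4);
    q.push(1);
    q.push(2);
    q.close();

    bool ok = q.push(3);    // false: the queue is closed
    auto a  = q.pop();      // optional containing 1
    auto b  = q.pop();      // optional containing 2
    auto c  = q.pop();      // empty optional: closed and drained
    auto r  = q.try_pop();  // unexpected with conqueue_errc::closed
}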
6.6. emplace
As queues have a push operation, it might be plausible to add an emplace operation as well. However, this might be misleading.
emplace typically exists for performance reasons, for types that don’t provide cheap move construction. So for containers where objects are meant to stay it makes sense to construct the objects in place.
But queues aren’t containers. Queues are mechanisms to push objects into them to be popped out again as soon as possible. So if the move of an object is expensive, it might be efficient to construct it inside a queue, but the pop would still be expensive.
In such cases it might be more efficient to push a unique_ptr or similar through the queue instead of the actual values (as sketched below).
Providing an emplace operation might lead users to use a queue with object values and believe that’s efficient.
However, try_emplace is definitely useful as no object is created at all if it’s not inserted into the queue.
And if try_emplace exists, for symmetry emplace and async_emplace should be provided as well, so this revision proposes them.
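A hedged sketch of the recommendation above: instead of emplacing a heavyweight element type, push a unique_ptr through the queue so that both the push and the pop stay cheap. BigMessage is a hypothetical placeholder type.

#include <memory>

struct BigMessage { /* large, expensive to move */ };

using MsgQueue = std::bounded_queue<std::unique_ptr<BigMessage>>;

void producer(MsgQueue& q)
{
    auto msg = std::make_unique<BigMessage>();
    // fill *msg ...
    q.push(std::move(msg));   // only the pointer moves, on both ends of the queue
}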
6.7. Element Type Requirements
The above operations require element types with copy/move constructors and a destructor. These operations may be trivial. The copy/move constructors may throw, but must leave the objects in a valid state for subsequent operations.
6.8. "Error" Handling
"One person’s exception is another person’s expected result."
It’s extremely hard to define what results of a queue function actually constitutes an error. Instead of trying so, this proposal takes the approach that nothing is really an error.
Instead, the different function results due to different queue states are delivered by special return types.
pop returns optional<T> (as decided in Wroclaw): an empty optional for the closed state, and an optional containing a T otherwise.
push returns bool: false for closed and true for successful enqueuing.
try_pop previously returned optional<T> and took an extra output parameter of type conqueue_errc: an optional with a T on success and an empty optional otherwise, with the queue state in the output parameter. It now returns expected<T, conqueue_errc>: a T on success and the queue state otherwise.
try_push returns the queue state directly as conqueue_errc.
async_pop returns a sender that calls set_value if a value could be obtained from the queue and set_error if the queue was closed.
async_push returns a sender that calls set_value if the value could be deposited and set_error if the queue was closed.
Concurrent queues cannot completely hide the effect of exceptions thrown by the element type, in part because changes cannot be transparently undone when other threads are observing the queue.
Queues may rethrow exceptions from storage allocation or mutexes.
If the required element type operations do not throw exceptions, then only the exceptions above are rethrown.
In practice, for a bounded_queue storage allocation only happens on construction, and mutex lock and unlock operations generally don’t throw.
When an element copy/move may throw, some queue operations have additional behavior.
-
A push operation shall rethrow and the state of the queue is unaffected.
-
A pop operation shall rethrow and the element is popped from the queue. The value popped is effectively lost. (Doing otherwise would likely clog the queue with a bad element.)
6.9. Single-Ended Interfaces
In many applications one part only uses one end of a queue, while a different part uses the other end, i.e. producer and consumer are logically separated. So it makes a lot of sense to provide interface types that only provide either the push or the pop interface.
But providing such separate interfaces requires additional overhead to ensure that no lifetime issues arise. So requiring such interfaces is not part of the proposed concepts.
Also, it’s assumed that a lot of different implementations of the concepts will exist, with different trade-offs. A separate type for the single-ended interfaces that works for all these implementations is more efficient in terms of implementation effort. Such a type is not part of this proposal but should be a separate proposal, and as a separate type it can easily be added later.
7. Concrete Queues
In addition to the concepts, the standard needs at least one concrete queue. This will not provide the most efficient implementation but serves all synchronization use cases. For this reason it’s bounded to allow for slowing down producers that are too fast.
This paper proposes a fixed-size bounded_queue.
It meets all three concepts for queues.
The constructor takes a parameter specifying the maximum number of elements in the queue and an optional allocator.
bounded_queue is only allowed to allocate in its constructor.
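A construction sketch (illustrative only): the capacity is fixed at construction time and an allocator may be supplied, so all allocation happens in the constructor.

#include <memory_resource>

void make_queues()
{
    std::bounded_queue<int> q(1024);                 // default allocator

    std::pmr::monotonic_buffer_resource arena;
    std::pmr::bounded_queue<int> q2(1024, &arena);   // pmr alias from the proposed synopsis
}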
7.1. Movability
The bounded_queue implementation will probably contain synchronization mechanisms like mutexes and condition variables.
These are not movable, so bounded_queue isn’t movable either.
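Given the deleted move constructor in the proposed class synopsis, this is observable as:

static_assert(!std::is_move_constructible_v<std::bounded_queue<int>>);
static_assert(!std::is_copy_constructible_v<std::bounded_queue<int>>);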
7.2. Memory Order Guarantees
bounded_queue provides sequentially consistent semantics.
The reasoning for this is that for a queue (as high-level data structure) provided by the IS usage safety should be more important than efficiency. Background on this is found in Memory Model Issues for Concurrent Data Structures (P0387R1) and Concurrency Safety in C++ Data Structures (P0495).
The example from P0387R1 illustrates the problem clearly: q1 and q2 are concurrent queues operated on by two threads. With only acquire/release ordering, the two threads could observe the state of the two queues in ways that are inconsistent with any single interleaving of the operations, an outcome that sequential consistency rules out.
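The following sketch shows one classic shape of such a test (illustrative only, not necessarily the exact P0387R1 example; spurious busy failures of try_pop are ignored here): under sequential consistency at least one of the two try_pop calls must find a value, while weaker acquire/release ordering would allow both to come back empty.

std::bounded_queue<int> q1(1), q2(1);

void thread1()
{
    q1.push(1);
    auto r1 = q2.try_pop();   // under SC this can be empty only if r2 holds a value
}

void thread2()
{
    q2.push(2);
    auto r2 = q1.try_pop();
}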
bounded_queue will probably need a mutex-based locking implementation anyway, which provides sequential consistency for free.
8. Proposed Wording
Suggested location of concurrent queues in the standard is [thread].
8.1. Header <version> synopsis [version.syn]
#define __cpp_lib_conqueue <editor supplied value> // also in <conqueue>
8.2. Concurrent Queues [conqueues]
8.2.1. General [conqueues.general]
Concurrent queues provide a mechanism to transfer objects from one point in a program to another without producing data races. Push operations insert objects into a queue from one execution context and pop operations retrieve them typically from another execution context. Push and pop operations can be blocking, non-blocking or asynchronous. If a queue gets closed, any subsequent push operations will fail with a respective queue status. Any pop operations after a close will succeed while there are still objects in the queue and fail afterwards. Any pending operations will be resumed on a close with a failure indication.
8.2.2. Header <conqueue> synopsis [conqueues.syn]
namespace std {
  enum class conqueue_errc {
    success = unspecified,
    empty = unspecified,
    full = unspecified,
    closed = unspecified,
    busy = unspecified,
    busy_async = unspecified
  };

  template <class Q> concept basic-concurrent-queue;    // exposition only
  template <class Q> concept concurrent-queue;          // exposition only
  template <class Q> concept async-concurrent-queue;    // exposition only

  struct error-as-optional-t { see below };             // exposition only
  struct error-as-bool-t { see below };                 // exposition only
  inline constexpr error-as-optional-t error-as-optional;   // exposition only
  inline constexpr error-as-bool-t error-as-bool;            // exposition only

  struct async-pop-env {                                 // exposition only
    auto query(get_await_completion_adapter_t) const -> error-as-optional-t { return {}; }
  };
  struct async-push-env {                                // exposition only
    auto query(get_await_completion_adapter_t) const -> error-as-bool-t { return {}; }
  };

  template <class T, class Allocator = std::allocator<T>>
  class bounded_queue;

  namespace pmr {
    template <class T>
    using bounded_queue = std::bounded_queue<T, polymorphic_allocator<T>>;
  }
}
8.2.3. Exposition-only Concurrent Queue Concept [conqueue.concept]
8.2.3.1. Basic Concurrent Queue Concept [conqueue.concept.basic]
- The exposition-only concept basic-concurrent-queue defines the requirements for a basic queue type.

namespace std {
  template <class Q>
  concept basic-concurrent-queue =                     // exposition only
    move_constructible<remove_cvref_t<typename Q::value_type>> &&
    requires (Q q, const Q cq, typename Q::value_type&& t) {
      { cq.is_closed() } noexcept -> same_as<bool>;
      { q.close() } noexcept -> same_as<void>;
      { q.push(std::forward<typename Q::value_type>(t)) } -> same_as<bool>;
      { []<class... Args>(Q& q, Args... args) {
          bool b{q.emplace(std::forward<Args>(args)...)};
        } };
      { q.pop() } -> same_as<optional<typename Q::value_type>>;
    };
}

- In the following description, Q denotes a type modeling the basic-concurrent-queue concept, q denotes an object of type Q, and t denotes an object convertible to Q::value_type.
- The expression q.is_closed() has the following semantics:
  - Returns: true if the queue is closed, false otherwise.
- The expression q.close() has the following semantics:
  - Effects: Closes the queue.
- q.push(std::forward<T>(t)) and q.emplace(std::forward<Args>(args)...) are push operations and q.pop() is a pop operation.
- A push operation deposits an object into a queue, whereby it is made available to be extracted from the queue. A pop operation extracts and returns an object from a queue. push-deposited is the event when the object is made available. pop-claimed is the event when a pop operation decides to extract the object.
- If an object t is deposited by a push operation, the call of the constructor of this object inside the queue’s storage strongly happens before the return of the pop operation that extracts t. A pop operation on a queue can only extract objects that were deposited into the same queue and each object can only be extracted once.
- [Note: This doesn’t specify when push-deposited actually happens during the push operation, and likewise pop-claimed during the pop operation. So it can happen that a pop operation has already claimed a specific object but is blocked because the constructor of the object hasn’t finished yet. -- end note]
- Concepts that subsume basic-concurrent-queue may have additional push and pop operations.
- Calls to operations (except constructor and destructor) on the same queue from different threads of execution do not introduce data races.
- [Note: This concept doesn’t specify whether a push may block for space available in the queue (bounded queue) or whether a push may allocate new space (unbounded queue). -- end note]
- The expression q.emplace(args...) has the following semantics:
  - Effects: If the queue is not closed, a value is deposited into q. In this case the value in the queue storage is direct-non-list-initialized with std::forward<Args>(args)....
  - Returns: true if a value was deposited into q; false otherwise.
  - Throws: Any exception thrown by the invoked constructor of T. A concrete queue may throw additional exceptions. If an exception is thrown, no value is deposited into q.
- The expression q.push(t) is equivalent to return q.emplace(std::forward<T>(t)).
- The expression q.pop() has the following semantics:
  - Effects: Blocks the current thread until there is an object available in the queue or until the queue is closed. If there is an object available in the queue, extract the object and return an optional<T> with a move-constructed value. Otherwise, if the queue is closed, return a default constructed optional<T>.
  - Returns: as specified above.
  - Throws: Any exception thrown by the invoked move constructor of T. A concrete queue may throw additional exceptions. Even if an exception is thrown, the value is extracted from q.
8.2.3.2. Concurrent Queue Concept [conqueue.concept.concurrent]
- The exposition-only concept concurrent-queue defines the requirements for a concurrent queue type.

namespace std {
  template <class Q>
  concept concurrent-queue =                           // exposition only
    basic-concurrent-queue<Q> &&
    requires (Q q, typename Q::value_type&& t, conqueue_errc ec) {
      { q.try_push(std::forward<typename Q::value_type>(t)) } noexcept(see below)
          -> same_as<conqueue_errc>;
      { []<class... Args>(Q& q, Args... args) {
          conqueue_errc e{q.try_emplace(std::forward<Args>(args)...)};
        } } noexcept(see below);
      { q.try_pop() } noexcept(see below)
          -> same_as<expected<typename Q::value_type, conqueue_errc>>;
    };
}

- try_push, try_emplace and try_pop are noexcept if the called constructor of T is noexcept.
- In the following description, Q denotes a type modeling the concurrent-queue concept, q denotes an object of type Q, t denotes an object convertible to Q::value_type and ec denotes an object of type conqueue_errc.
- try_push and try_emplace are push operations if they deposit an object into the queue and try_pop is a pop operation if it extracts an object from a queue.
- In classes modelling this concept try_push or try_emplace may return conqueue_errc::full even if the queue has space for the element. Similarly try_pop may return an unexpected with value conqueue_errc::empty even if the queue has an element to be extracted. [Note: This spurious failure is normally uncommon. -- end note] An implementation should ensure that try_push() and try_pop() do not consistently spuriously fail.
- The expression q.try_emplace(args...) has the following semantics:
  - Effects: If the queue is not closed and space is available in the queue, a value is deposited into q. In this case the value in the queue storage is direct-non-list-initialized with std::forward<Args>(args).... The operation will not block.
  - Returns:
    - conqueue_errc::success if a value was deposited into q,
    - conqueue_errc::closed if the queue is closed,
    - conqueue_errc::full if the queue doesn’t have space,
    - conqueue_errc::busy if the operation would block for internal synchronization,
    - conqueue_errc::busy_async if the queue also models the async-concurrent-queue concept, the operation would have to schedule the continuation of an async_pop, and the implementation cannot detect whether this scheduling would block for internal synchronization of the scheduler.
  - [Note: An implementation may consistently return conqueue_errc::busy_async if there are only async_pop operations waiting. -- end note]
  - Throws: Any exception thrown by the invoked constructor of T. A concrete queue may throw additional exceptions. If an exception is thrown, no value is deposited into q.
- The expression q.try_push(t) is equivalent to return q.try_emplace(std::forward<T>(t)).
- The expression q.try_pop() has the following semantics:
  - Effects: If there is an object available in the queue and any internal synchronization wouldn’t block, it will be extracted from the queue and returned. The operation will not block.
  - Returns: An object of type expected<T, conqueue_errc>. The return value will contain a move-constructed value of type T if a value could be extracted. Otherwise an unexpected will be returned with the value:
    - conqueue_errc::closed if the queue is closed,
    - conqueue_errc::empty if the queue doesn’t have an object available,
    - conqueue_errc::busy if the operation would block for internal synchronization,
    - conqueue_errc::busy_async if the queue also models the async-concurrent-queue concept, the operation would have to schedule the continuation of an async_push, and the implementation cannot detect whether this scheduling would block for internal synchronization of the scheduler.
  - [Note: An implementation may consistently return conqueue_errc::busy_async if there are only async_push operations waiting. -- end note]
  - Throws: Any exception thrown by the invoked constructor of T. A concrete queue may throw additional exceptions. Even if an exception is thrown, the value is extracted from q.
8.2.3.3. Asynchronous Queue Concept [conqueue.concept.async]
- The exposition-only concept async-concurrent-queue defines the requirements for an asynchronous queue type.

namespace std {
  template <class Q>
  concept async-concurrent-queue =                     // exposition only
    basic-concurrent-queue<Q> &&
    requires (Q q, typename Q::value_type&& t) {
      { q.async_push(std::forward<typename Q::value_type>(t)) } noexcept;
      { q.async_emplace(std::forward<typename Q::value_type>(t)) } noexcept;
      { []<class... Args>(Q& q, Args... args) {
          q.async_emplace(std::forward<Args>(args)...);
        } };
      { q.async_pop() } noexcept;
    };
}

- The exposition-only name error-as-optional denotes a pipeable sender adaptor object. For a subexpression sndr, let Sndr be decltype((sndr)). The expression error-as-optional(sndr) is expression-equivalent to:

transform_sender(get-domain-early(sndr), make-sender(error-as-optional, {}, sndr))

except that sndr is only evaluated once.
- Let sndr and env be subexpressions such that Sndr is decltype((sndr)) and Env is decltype((env)). The expression error-as-optional.transform_sender(sndr, env) is equivalent to:

auto&& [_, _, child] = sndr;
return let_error(
  then(std::forward_like<Sndr>(child),
       [](T&& t) noexcept(is_nothrow_move_constructible_v<T>)
       { return optional<T>(in_place, std::move(t)); }),
  [](auto e) noexcept {
    if constexpr (is_same_v<decltype(e), conqueue_errc>) {
      return just(optional<T>());
    } else {
      return just_error(e);
    }
  });

- The exposition-only name error-as-bool denotes a pipeable sender adaptor object. For a subexpression sndr, let Sndr be decltype((sndr)). The expression error-as-bool(sndr) is expression-equivalent to:

transform_sender(get-domain-early(sndr), make-sender(error-as-bool, {}, sndr))

except that sndr is only evaluated once.
- Let sndr and env be subexpressions such that Sndr is decltype((sndr)) and Env is decltype((env)). The expression error-as-bool.transform_sender(sndr, env) is equivalent to:

auto&& [_, _, child] = sndr;
return let_error(
  then(std::forward_like<Sndr>(child),
       []() noexcept { return true; }),
  [](auto e) noexcept {
    if constexpr (is_same_v<decltype(e), conqueue_errc>) {
      return just(false);
    } else {
      return just_error(e);
    }
  });

- In the following description, Q denotes a type modeling the async-concurrent-queue concept, q denotes an object of type Q and t denotes an object convertible to Q::value_type.
- async_push and async_emplace are push operations and async_pop is a pop operation.
- The expression q.async_emplace(args...) has the following semantics:
  - Let w be the sender object returned by the expression, op be an operation state obtained from connecting w to a receiver r, and st be a stop token returned from get_stop_token(get_env(r)).
  - Returns: A sender object w that behaves as follows:
    - After op.start() is called, a completion function of r is called when there is space in the queue or when the queue is closed.
    - If st.stop_requested() returns true, set_stopped(r) will be called.
    - If there is space in the queue, a value will be deposited into the queue and set_value(r) will be called. The value in the queue storage is direct-non-list-initialized with std::forward<Args>(args)....
    - If the invoked constructor throws an exception, set_error(r, exception_ptr) will be called. No value will be deposited into q.
    - If the queue is closed, set_error(r, conqueue_errc::closed) will be called.
    - set_value(), set_stopped() or set_error() will be called on the scheduler of r.
    - get_env(w) returns an object of type async-push-env.
- The expression q.async_push(t) is equivalent to return q.async_emplace(std::forward<T>(t)).
- The expression q.async_pop() has the following semantics:
  - Let w be the sender object returned by the expression, op be an operation state obtained from connecting w to a receiver r, and st be a stop token returned from get_stop_token(get_env(r)).
  - Returns: A sender object w that behaves as follows:
    - After op.start() is called, a completion function of r is called when there is an object available in the queue or when the queue is closed.
    - If st.stop_requested() returns true, set_stopped(r) will be called.
    - If there is an object t available in the queue, it will be extracted and set_value(r, std::move(t)) will be called.
    - If the constructor that is invoked by the call to set_value throws an exception, set_error(r, exception_ptr) will be called. The value will be extracted from the queue nevertheless.
    - If the queue is closed, set_error(r, conqueue_errc::closed) will be called.
    - set_value(), set_stopped() or set_error() will be called on the scheduler of r.
    - get_env(w) returns an object of type async-pop-env.
8.2.4. Class template bounded_queue [bounded.queue]
8.2.4.1. General [bounded.queue.general]
- A bounded_queue models concurrent-queue and async-concurrent-queue and can hold a fixed number of objects, which is given at construction time.

template <class T, class Allocator = allocator<T>>
class bounded_queue
{
  bounded_queue(bounded_queue&&) = delete;

public:
  using value_type = T;
  using allocator_type = Allocator;

  // construct/destroy
  explicit bounded_queue(size_t max_elems, const Allocator& alloc = {});
  ~bounded_queue();

  // observers
  bool is_closed() const noexcept;
  allocator_type get_allocator() const;

  // modifiers
  void close() noexcept;

  bool push(const T& x);
  bool push(T&& x);
  template <class... Args> bool emplace(Args&&... as);
  conqueue_errc try_push(const T& x);
  conqueue_errc try_push(T&& x);
  template <class... Args> conqueue_errc try_emplace(Args&&... as);
  sender auto async_push(const T&);
  sender auto async_push(T&&);
  template <class... Args> sender auto async_emplace(Args&&... as);

  optional<T> pop();
  expected<T, conqueue_errc> try_pop();
  sender auto async_pop();

private:
  allocator_type alloc;   // exposition only
};

- T shall be a type that meets the Cpp17Destructible requirements (Table [tab:cpp17.destructible]).
- Template argument Allocator shall satisfy the Cpp17Allocator requirements ([allocator.requirements]). An instance of Allocator is maintained by the bounded_queue object during the lifetime of the object. The allocator instance is set at bounded_queue object creation time.
- The values inside the storage of the queue deposited by push operations are constructed using allocator_traits<Allocator>::construct(alloc, p, std::forward<T>(x)), where p is the address of the element being constructed.
8.2.4.2. Constructor [bounded.queue.ctor]
explicit bounded_queue(size_t max_elems, const Allocator& alloc = Allocator());

- Effects: Constructs an object with no elements, but with storage for max_elems elements. The storage for the elements will be allocated using alloc.
- Remarks: The operations of bounded_queue will not allocate any memory outside the constructor.
8.2.4.3. Destructor [bounded.queue.dtor]
~bounded_queue();

- Effects: Destroys all objects in the queue and deallocates the storage for the elements using alloc.
8.2.4.4. Allocator [bounded.queue.alloc]
allocator_type get_allocator() const;

- Returns: A copy of the Allocator that was passed to the object’s constructor.
8.2.4.5. Modifiers [bounded.queue.modifiers]
- The push and pop operations of bounded_queue provide sequentially consistent semantics.

bool push(const T& x);
bool push(T&& x);
template <class... Args> bool emplace(Args&&... as);

- Effects: Blocks the current thread until there is space in the queue or until the queue is closed.
- Let Push1 and Push2 be push operations and Pop1 and Pop2 be pop operations, where Pop1 returns the value of the parameter given to Push1 and Pop2 returns the value of the parameter given to Push2. There is a total order Q shared by all bounded_queue objects, consistent with the single total order S of memory_order::seq_cst operations ([atomics.order] p4), of push-deposited and pop-claimed of Push1, Push2, Pop1 and Pop2; moreover, if push-deposited of Push1 is before push-deposited of Push2 in that order, then pop-claimed of Pop1 is before pop-claimed of Pop2 in that order.
- It is unspecified whether the constructors and destructors of the objects in the internal storage of the queue run under a lock of the queue implementation.
- [Note: This guarantees FIFO behaviour, but for two concurrent pushes the constructors cannot determine the order in which the values are enqueued, and the constructors can run concurrently as well. -- end note]
- [Note: This does not guarantee that constructors or destructors may ever run concurrently. An implementation may decide that two pushes (or two pops) never run concurrently. -- end note]
- [Note: A constructor or destructor can deadlock if it tries a push or pop on the same queue. -- end note]

optional<T> pop();
expected<T, conqueue_errc> try_pop();
sender auto async_pop();

- Effects: The object in the queue is destroyed using allocator_traits<Allocator>::destroy.
- Remarks: The constructor of the returned object and the destructor of the internal object run on the same thread of execution.

bool is_closed() const noexcept;
void close() noexcept;
conqueue_errc try_push(const T& x);
conqueue_errc try_push(T&& x);
template <class... Args> conqueue_errc try_emplace(Args&&... as);
sender auto async_push(const T&);
sender auto async_push(T&&);
template <class... Args> sender auto async_emplace(Args&&... as);

- These functions behave as described in the respective concepts.
9. Old Revision History
P0260R6 revises P0260R5 - 2023-01-15 as follows.
- Fixing typos.
- Added a scope for the target TS.
- Added questions to be answered by a TS.
- Added asynchronous interface
P0260R5 revises P0260R4 - 2020-01-12 as follows.
- Added more introductory material.
- Added response to feedback by LEWGI at Prague meeting 2020.
- Added section on existing practice.
- Replaced value_pop with pop.
- Replaced is_lock_free with is_always_lockfree.
- Removed is_empty and is_full.
- Added move-into parameter to try_push(Element&&)
- Added note that exceptions thrown by the queue operations themselves are derived from std::exception.
- Added a note that the wording is partly invalid.
- Moved more contents into the "Abandoned" part to avoid confusion.
P0260R4 revised P0260R3 - 2019-01-20 as follows.
- Remove the binding of queue_op_status::success to a value of zero.
- Correct stale use of the Queue template parameter in shared_queue_front to Value.
- Change the return type of share_queue_ends from a pair to a custom struct.
- Move the concrete queue proposal to a separate paper, [P1958].
P0260R3 revised P0260R2 - 2017-10-15 as follows.
- Convert queue_wrapper to a function-like interface. This conversion removes the queue_base class. Thanks to Zach Lane for the approach.
- Removed the requirement that element types have a default constructor. This removal implies that statically sized buffers cannot use an array implementation and must grow a vector implementation to the maximum size.
- Added a discussion of checking for output iterator end in the wording.
- Fill in synopsis section.
- Remove stale discussion of queue_owner.
- Move all abandoned interface discussion to a new section.
- Update paper header to current practice.
P0260R2 revised P0260R1 - 2017-02-05 as follows.
- Emphasize that non-blocking operations were removed from the proposed changes.
- Correct syntax typos for noexcept and template alias.
- Remove static from is_lock_free for generic_queue_back and generic_queue_front.
P0260R1 revised P0260R0 - 2016-02-14 as follows.
- Remove pure virtuals from queue_wrapper.
- Correct queue::pop to value_pop.
- Remove nonblocking operations.
- Remove non-locking buffer queue concrete class.
- Tighten up push/pop wording on closed queues.
- Tighten up push/pop wording on synchronization.
- Add note about possible non-FIFO behavior.
- Define buffer_queue to be FIFO.
- Make wording consistent across attributes.
- Add a restriction on element special methods using the queue.
- Make is_lock_free() for only non-waiting functions.
- Make is_lock_free() static for non-indirect classes.
- Make is_lock_free() noexcept.
- Make has_queue() noexcept.
- Make destructors noexcept.
- Replace "throws nothing" with noexcept.
- Make the remarks about the usefulness of is_empty() and is_full into notes.
- Make the non-static member functions is_... and has_... const functions.
P0260R0 revised N3533 - 2013-03-12 as follows.
- Update links to source code.
- Add wording.
- Leave the name facility out of the wording.
- Leave the push-front facility out of the wording.
- Leave the reopen facility out of the wording.
- Leave the storage iterator facility out of the wording.
N3533 revised N3434 = 12-0124 - 2012-09-23 as follows.
- Add more exposition.
- Provide separate non-blocking operations.
- Add a section on the lock-free queues.
- Argue against push-back operations.
- Add a cautionary note on the usefulness of is_closed().
- Expand the cautionary note on the usefulness of is_empty(). Add is_full().
- Add a subsection on element type requirements.
- Add a subsection on exception handling.
- Clarify ordering constraints on the interface.
- Add a subsection on a lock-free concrete queue.
- Add a section on content iterators, distinct from the existing streaming iterators section.
- Swap front and back names, as requested.
- General expository cleanup.
- Add a "Revision History" section.
N3434 revised N3353 = 12-0043 - 2012-01-14 as follows.
- Change the inheritance-based interface to a pure conceptual interface.
- Put try_... operations into a separate subsection.
- Add a subsection on non-blocking operations.
- Add a subsection on push-back operations.
- Add a subsection on queue ordering.
- Merge the "Binary Interface" and "Managed Indirection" sections into a new "Conceptual Tools" section. Expand on the topics and their rationale.
- Add a subsection to "Conceptual Tools" that provides for type erasure.
- Remove the "Synopsis" section.
- Add an "Implementation" section.
10. Historic Contents
The contents of this section are for historic reference only.
10.1. Abandoned Interfaces
10.1.1. Re-opening a Queue
There are use cases for opening a queue that is closed. While we are not aware of an implementation in which the ability to reopen a queue would be a hardship, we also imagine that such an implementation could exist. Open should generally only be called if the queue is closed and empty, providing a clean synchronization point, though it is possible to call open on a non-empty queue. An open operation following a close operation is guaranteed to be visible after the close operation and the queue is guaranteed to be open upon completion of the open call. (But of course, another close call could occur immediately thereafter.)
- Open the queue.
Note that when is_closed() returns false, there is no assurance that any subsequent operation finds the queue open, because some other thread may close it concurrently.
If an open operation is not available, there is an assurance that once closed, a queue stays closed. So, unless the programmer takes care to ensure that all other threads will not close the queue, only a return value of true has any meaning.
Given these concerns with reopening queues, we do not propose wording to reopen a queue.
10.1.2. Non-Blocking Operations
For cases when blocking for mutual exclusion is undesirable, one can consider non-blocking operations. The interface is the same as the try operations but is allowed to also return queue_op_status::busy in case the operation is unable to complete without blocking.
- If the operation would block, return queue_op_status::busy. Otherwise, if the queue is full, return queue_op_status::full. Otherwise, push the Element onto the queue. Return queue_op_status::success.
- If the operation would block, return queue_op_status::busy. Otherwise, if the queue is empty, return queue_op_status::empty. Otherwise, pop the Element from the queue. The element will be moved out of the queue in preference to being copied. Return queue_op_status::success.
These operations will neither wait nor block. However, they may do nothing.
The non-blocking operations highlight a terminology problem. In terms of synchronization effects, a non-blocking pop on queues is equivalent to try_lock on mutexes. And so one could conclude that the existing try_ operations should be renamed to waiting names and the non-blocking operations should take over the try_ names. However, at least Thread Building Blocks uses the existing terminology. Perhaps better is to not reuse the try_ names at all and instead use explicit waiting and non-blocking names.
In November 2016, the Concurrency Study Group chose to defer non-blocking operations. Hence, the proposed wording does not include these functions. In addition, as these functions were the only ones that returned queue_op_status::busy, that enumerator is also not included.
10.1.3. Push Front Operations
Occasionally, one may wish to return a popped item to the queue. We can provide for this with push_front operations.
- Push the Element onto the front of the queue, i.e. at the end of the queue that is normally popped. Return queue_op_status::success.
- If the queue was full, return queue_op_status::full. Otherwise, push the Element onto the front of the queue, i.e. at the end of the queue that is normally popped. Return queue_op_status::success.
- If the operation would block, return queue_op_status::busy. Otherwise, if the queue is full, return queue_op_status::full. Otherwise, push the Element onto the front of the queue, i.e. at the end of the queue that is normally popped. Return queue_op_status::success.
This feature was requested at the Spring 2012 meeting. However, we do not think the feature works.
- The name push_front is inconsistent with existing "push back" nomenclature.
- The effects of push_front are only distinguishable from a regular push when there is a strong ordering of elements. Highly concurrent queues will likely have no strong ordering.
- The push_front call may fail due to full queues, closed queues, etc. In which case the operation will suffer contention, and may succeed only after interposing push and pop operations. The consequence is that the original push order is not preserved in the final pop order. So, push_front cannot be directly used as an 'undo'.
- The operation implies an ability to reverse internal changes at the front of the queue. This ability implies a loss of efficiency in some implementations.
In short, we do not think that in a concurrent environment push_front provides sufficient semantic value to justify its cost. Consequently, the proposed wording does not provide this feature.
10.1.4. Queue Names
It is sometimes desirable for queues to be able to identify themselves. This feature is particularly helpful for run-time diagnotics, particularly when \'ends\' become dynamically passed around between threads. See Managed Indirection.
- Return the name string provided as a parameter to queue construction.
There is some debate about this facility, but we see no way to replicate it effectively. However, in recognition of that debate, the proposed wording does not provide the name facility.
10.1.5. Lock-Free Buffer Queue
We provide a concrete concurrent queue in the form of a fixed-size lock_free_buffer_queue. It meets the non-waiting concurrent queue concept described below. The queue is still under development, so details may change.
In November 2016, the Concurrency Study Group chose to defer lock-free queues. Hence, the proposed wording does not include a concrete lock-free queue.
10.1.6. Storage Iterators
In addition to iterators that stream data into and out of a queue, we could provide an iterator over the storage contents of a queue. Such an iterator, even when implementable, would most likely be valid only when the queue is otherwise quiescent. We believe such an iterator would be most useful for debugging, which may well require knowledge of the concrete class. Therefore, we do not propose wording for this feature.
10.1.7. Empty and Full Queues
It is sometimes desirable to know if a queue is empty.
- Return true iff the queue is empty.
This operation is useful only during intervals when the queue is known not to be subject to pushes and pops from other threads. Its primary use case is assertions on the state of the queue at the end of its lifetime, or when the system is in a quiescent state (where there are no outstanding pushes).
We can imagine occasional use for knowing when a queue is full, for instance in system performance polling. The motivation is significantly weaker though.
- Return true iff the queue is full.
Not all queues will have a full state, and these would always return false.
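For illustration only, the shutdown use case might look like the following sketch; is_empty is a hypothetical name for the emptiness query described above, and stop_all_workers stands in for whatever quiesces the system.

#include <cassert>

void stop_all_workers();   // hypothetical: after this returns, no thread pushes or pops

void shutdown_check(buffer_queue<int>& work_queue)
{
    stop_all_workers();              // the system is now quiescent
    assert(work_queue.is_empty());   // hypothetical member: all submitted work was drained
}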
10.1.8. Queue Ordering
The conceptual queue interface makes minimal guarantees.
-
The queue is not empty if there is an element that has been pushed but not popped.
-
A push operation synchronizes with the pop operation that obtains that element.
-
A close operation synchronizes with an operation that observes that the queue is closed.
-
There is a sequentially consistent order of operations.
In particular, the conceptual interface does not guarantee that the sequentially consistent order of element pushes matches the sequentially consistent order of pops. Concrete queues could provide stronger ordering guarantees.
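As a sketch of what the synchronization guarantee provides, assume a buffer_queue with blocking push and value_pop members (as in the examples later in this section); the message type and the assertion are purely illustrative.

#include <cassert>
#include <string>

struct message { std::string payload; };   // invented for illustration

void producer(buffer_queue<message>& q)
{
    message msg;
    msg.payload = "hello";     // A: sequenced before the push
    q.push(std::move(msg));    // B: synchronizes with the pop that obtains this element
}

void consumer(buffer_queue<message>& q)
{
    message m = q.value_pop();      // C: obtains the element pushed at B
    assert(m.payload == "hello");   // D: guaranteed to observe the write at A
}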
10.1.9. Lock-Free Implementations
Lock-free queues will have some trouble waiting for the queue to be non-empty or non-full. Therefore, we propose two closely-related concepts: a full concurrent queue concept as described above, and a non-waiting concurrent queue concept that has all the operations except the waiting push and pop operations. That is, it has only non-waiting operations (presumably emulated with busy wait) and non-blocking operations, but no waiting operations. We propose naming these the waiting and non-waiting concurrent queue concepts, respectively.
Note: Adopting this conceptual split requires splitting some of the facilities defined later.
For generic code it's sometimes important to know if a concurrent queue has a lock-free implementation.
- Return true iff the queue has a lock-free implementation of the non-waiting operations.
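A hypothetical sketch of how generic code might use such a query: is_lock_free is an assumed name for the member described above, and try_pop is assumed to return an expected-like value as in the current proposal.

#include <utility>
#include <vector>

// Sketch: choose a non-waiting drain only if the queue is lock-free.
template <class Queue, class T>
bool drain_without_locking(Queue& q, std::vector<T>& out)
{
    if (!q.is_lock_free())            // assumed query name
        return false;                 // caller falls back to the waiting interface
    while (auto v = q.try_pop())      // stops on empty, busy, or closed
        out.push_back(std::move(*v));
    return true;
}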
10.2. Abandoned Additional Conceptual Tools
There are a number of tools that support the use of the conceptual interface. These tools are not part of the queue interface, but provide restricted views or adapters on top of a queue that are useful in implementing concurrent algorithms.
10.2.1. Fronts and Backs
Restricting an interface to one side of a queue is a valuable code
structuring tool. This restriction is accomplished with the classes
and
parameterized on the
concrete queue implementation. These act as pointers with access to only
the front or the back of a queue. The front of the queue is where
elements are popped. The back of the queue is where elements are pushed.
void send(int number, generic_queue_back<buffer_queue<int>> arv);
These fronts and backs are also able to provide begin and end operations that unambiguously stream data into or out of a queue.
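For illustration, a definition of the send function declared above might look as follows; this is a sketch that assumes generic_queue_back forwards the underlying queue's push operation.

// Sketch: only the push side of the queue is reachable through the back view.
void send(int number, generic_queue_back<buffer_queue<int>> arv)
{
    arv.push(number);   // assumed to forward to the underlying buffer_queue<int>
}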
10.2.2. Streaming Iterators
In order to enable the use of existing algorithms streaming through concurrent queues, they need to support iterators. Output iterators will push to a queue and input iterators will pop from a queue. Stronger forms of iterators are in general not possible with concurrent queues.
Iterators implicitly require waiting for the advance, so iterators are only supportable with the full (waiting) concurrent queue concept.
void iterate(generic_queue_back<buffer_queue<int>>::iterator bitr,
             generic_queue_back<buffer_queue<int>>::iterator bend,
             generic_queue_front<buffer_queue<int>>::iterator fitr,
             generic_queue_front<buffer_queue<int>>::iterator fend,
             int (*compute)(int))
{
    while (fitr != fend && bitr != bend)
        *bitr++ = compute(*fitr++);
}
Note that contrary to existing iterator algorithms, we check both iterators for reaching their end, as either may be closed at any time.
Note that with suitable renaming, the existing standard front insert and back insert iterators could work as is. However, there is nothing like a pop iterator adapter.
10.2.3. Binary Interfaces
The standard library is template based, but it is often desirable to have a binary interface that shields clients from the concrete implementations. For example, std::function is a binary interface to a callable object (of a given signature). We achieve this capability in queues with type erasure.
We provide a queue_base class template parameterized by the value type. Its operations are virtual. This class provides the essential independence from the queue representation.
We also provide queue_front and queue_back class templates parameterized by the value type. These are essentially generic_queue_front and generic_queue_back applied to queue_base, respectively.
To obtain a pointer to queue_base from a non-virtual concurrent queue, construct an instance of the queue_wrapper class template, which is parameterized on the queue and derived from queue_base. Upcasting a pointer to the queue_wrapper instance to a queue_base pointer thus erases the concrete queue type.
extern void seq_fill(int count, queue_back<int> b);

buffer_queue<int> body(10 /*elements*/, /*named*/ "body");
queue_wrapper<buffer_queue<int>> wrap(body);
seq_fill(10, wrap.back());
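A possible definition of seq_fill, shown only to illustrate that it depends solely on the type-erased queue_back<int> and not on the concrete queue type; the push member on the erased back is assumed.

// Sketch: seq_fill sees only the binary interface, never buffer_queue<int>.
void seq_fill(int count, queue_back<int> b)
{
    for (int i = 0; i != count; ++i)
        b.push(i);   // assumed push operation on the type-erased back
}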
10.2.4. Managed Indirection
Long-running servers may need to reconfigure the relationship between queues and threads. The ability to pass 'ends' of queues between threads with automatic memory management eases programming.
To this end, we provide shared_queue_front and shared_queue_back class templates. These act as reference-counted versions of the queue_front and queue_back class templates.
The share_queue_ends template function will provide a pair of a shared_queue_front and a shared_queue_back referring to a dynamically allocated object containing an instance of the specified implementation queue. When the last of these fronts and backs is deleted, the queue itself will be deleted. Also, when the last of the fronts or the last of the backs is deleted, the queue will be closed.
auto x = share_queue_ends<buffer_queue<int>>(10, "shared");
shared_queue_back<int> b(x.back);
shared_queue_front<int> f(x.front);
b.push(3);
assert(3 == f.value_pop());
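For illustration, a hedged sketch of handing the reference-counted ends to separate threads; it assumes the share_queue_ends and shared_queue_* facilities above, with push and value_pop available through the shared ends as in the example.

#include <cstdio>
#include <thread>

// Sketch: each thread holds its own reference-counted end; the queue stays
// alive for as long as any shared front or back still refers to it.
void run()
{
    auto x = share_queue_ends<buffer_queue<int>>(10, "work");
    std::thread producer([b = shared_queue_back<int>(x.back)]() mutable {
        for (int i = 0; i != 3; ++i)
            b.push(i);
    });
    std::thread consumer([f = shared_queue_front<int>(x.front)]() mutable {
        for (int i = 0; i != 3; ++i)
            std::printf("%d\n", f.value_pop());
    });
    producer.join();
    consumer.join();
}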
10.3. API Discussions
10.3.1. try_push(T&&, T&)
REVISITED in Varna
The following version was introduced in response to LEWG-I concerns about losing the element if an rvalue cannot be stored in the queue.
However, SG1 reaffirmed the APIs above with the following rationale:
It seems that it is possible not to lose the element with either version:
T x = get_something();
if (q.try_push(std::move(x))) ...
With the two-parameter version:
T x;
if (q.try_push(get_something(), x)) ...
Ergonomically they are roughly identical. The API is slightly simpler with the one-argument version; therefore, we reverted to the original one-argument version.
10.3.2. try_pop
Queue Status Return
This is the interface agreed on in St. Louis, based on discussion in [P2921R0]:
POLL: LEWG would like to add a std::expected interface for concurrent queues.
SF | F | N | A | SA |
---|---|---|---|---|
0 | 2 | 5 | 3 | 2 |
However, [P2921R0] mainly compares std::expected vs. exceptions. Concurrent queue operations now no longer throw exceptions of their own (though they might forward exceptions from constructors). For try_pop, [P2921R0] has the following example:
10.3.2.1. Drain the queue without blocking
conqueue_errc ec;
while (auto val = q.try_pop(ec))
    println("got {}", *val);
if (ec == conqueue_errc::closed)
    return;
// do something else.
The expected-based version has unfortunate duplication, since we want to know why the pop failed and therefore have to move the try_pop call out of the loop condition.
auto val = q.try_pop();
while (val) {
    println("got {}", *val);
    val = q.try_pop();
}
if (val.error() == conqueue_errc::closed)
    return;
// do something else
So for while loops with the try_pop inside the loop condition, the separate output parameter has advantages.
The logging example above shows a case where expected might be considered superior.
So it might make sense to revisit the decision from St. Louis.