1. Acknowledgments
Thanks to David Goldblatt for his help with the wording!
2. Revision History
This paper revises P0260R8 - 2024-03-08 as follows.
- Added more wording
- Fixed capacity()
- Changed push/pop wording to use "strongly happens before"
- Added discussion points
- Removed paragraph about ctors/dtors returning on same thread
- try_* now never blocks (even for contention on internal synchronization)
- Wording for spurious failures of try_*
- The constructors for bounded_queue that pre-fill the queue have been removed
- Reference to P3282 added
- Moved discussion about try_push(T&&, T&) to historic contents.
P0260R8 revised P0260R7 - 2023-06-15 as follows.
- Restructured document
- Fix typos
- Implement LEWG feedback to derive conqueue_errc from system_error
- Implement LEWG feedback to add range constructor and go back to InputIterator
- size_t capacity() added
- Added TBB concurrent_bounded_queue as existing practice
- Moved discussion about pop() interface to separate paper
- Reintegrated P1958 buffer_queue
- Renamed buffer_queue to bounded_queue
- Added proposed wording (incomplete)
- Updated TS questions
Older revision history was moved to after the proposed wording.
2.1. Discussion Points For This Revision
2.1.1. Discussion Points For SG1
- capacity() for concept or for concrete queues? Return value for unbounded queues?
- Blocking for internal synchronization in try_*? Special error code?
- is_always_lock_free on concept?
2.1.2. Discussion Points For LEWG
- pop() return: optional or expected? Other operations with error_code: return expected? See Exploring std::expected based API alternatives for buffer_queue (P2921).
- Are value initializing constructors required?
- A concurrent queue is not a container. Which of the requirements in [container.reqmts] should be met anyway (value_type, reference, get_allocator(), ...)?
- async_*: Is closed condition a stop condition or a special return or an error?
3. Introduction
Queues provide a mechanism for communicating data between components of a system.
The existing std::queue in the standard library is an inherently sequential
data structure. Its reference-returning element access operations cannot
synchronize access to those elements with other queue operations. So,
concurrent pushes and pops on queues require a different interface to
the queue structure.
Moreover, concurrency adds a new dimension for performance and semantics. Different queue implementations must trade off uncontended operation cost, contended operation cost, and element order guarantees. Some of these trade-offs will necessarily result in semantics weaker than those of a serial queue.
Concurrent queues come in several different flavours, e.g.
- bounded vs. unbounded
- blocking vs. overwriting
- single-ended vs. multi-ended
- strict FIFO ordering vs. priority based ordering
The syntactic concept proposed here should be valid for all of these flavours, while the concrete semantics might differ.
3.1. Target Vehicle
This proposal targets a TS. It was originally sent to LEWG for inclusion into Concurrency TS v2. As Concurrency TS v2 will probably be published before this proposal is ready to be published, we propose to include concurrent queues into Concurrency TS v3 and publish this as soon as concurrent queues are ready. This leaves the door open for other proposals to share the same ship vehicle.
The scope for Concurrency TS v3 would be the same as that for v2:
"This document describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to invoke algorithms with concurrent execution. The algorithms described by this document are realizable across a broad class of computer architectures."
Should the committee decide to restrict the scope of the TS to only contain concurrent queues, we propose a slightly different scope:
"This document describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to communicate between different execution agents of algorithms with concurrent execution. The algorithms described by this document are realizable across a broad class of computer architectures."
3.1.1. Questions for a TS to Answer
We expect that the TS will inform future work on a variety of questions, particularly those listed below, using real-world implementation experience that cannot be obtained without a TS.
- Is the proposed concept useful? Specifically, does it cover different implementations and does it work together with other concepts for concurrent queues, e.g. queues with only non-blocking functions or queues with an asynchronous interface?
- What other concrete implementations should be provided?
- Is the interface adequate for execution contexts from std::execution with weakly parallel forward progress?
4. Existing Practice
4.1. Concept of a Bounded Queue
The basic concept of a bounded queue with potentially blocking push and pop operations is very old and widely used. It’s generally provided as an operating system level facility, like other concurrency primitives.
POSIX 2001 has message queues (with priorities and timeouts).
FreeRTOS, Mbed, and VxWorks provide similar operating-system-level queue facilities.
4.2. Bounded Queues with C++ Interface
4.2.1. Literature
The first concurrent queue I’ve seen was in [Hughes97]. It was full of bugs and as such shows what will go wrong if C++ doesn’t provide a standard queue. It’s unbounded.
Anthony Williams provided a queue in C++ Concurrency in Action. It’s unbounded.
4.2.2. Boost
Boost has a number of queues, as official library and as example uses of synchronization primitives.
Boost Message Queue only transfers bytes, not objects. It’s bounded.
Boost Lock-Free Queue and Boost Lock-Free SPSC Queue have only non-blocking operations. Boost Lock-Free Queue is bounded or unbounded, Boost Lock-Free SPSC Queue is bounded.
Boost Synchronized Queue is an implementation of an early version of this proposal.
4.2.3. TBB
TBB has concurrent_bounded_queue (TBB Bounded Queue) and an unbounded version, concurrent_queue (TBB Unbounded Queue).
5. Conceptual Interface
We provide basic queue operations, and then extend those operations to cover other important issues.
By analogy with how other parts of the standard library define their errors via <system_error>, we introduce the conqueue_errc enum and the associated error facilities as follows:

enum class conqueue_errc { success, empty, full, closed };

template <>
struct is_error_code_enum<conqueue_errc> : public true_type {};

const error_category& conqueue_category() noexcept;
error_code make_error_code(conqueue_errc e) noexcept;
error_condition make_error_condition(conqueue_errc e) noexcept;

class conqueue_error : public system_error;
These errors will be reported from concurrent queue operations as specified below.
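As an illustration (a minimal sketch, not part of the proposal), the specialization of is_error_code_enum lets conqueue_errc values convert to and compare against std::error_code directly; the helper names below are hypothetical and assume the declarations above are visible:

#include <system_error>

std::error_code closed_code()
{
    return conqueue_errc::closed;        // converts via make_error_code(conqueue_errc)
}

bool is_closed_error(const std::error_code& ec)
{
    return ec == conqueue_errc::closed;  // true iff ec was made from conqueue_errc::closed
}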
5.1. Basic Operations
The essential solution to the problem of concurrent queuing is to shift to value-based operations, rather than reference-based operations.
The basic operations are:
void queue::push(const T& x);
void queue::push(T&& x);
bool queue::push(const T& x, std::error_code& ec);
bool queue::push(T&& x, std::error_code& ec);
Pushes x onto the queue via copy or move construction. The first two versions throw conqueue_error(conqueue_errc::closed) if the queue is closed. The last two versions return true on success, and return false and set ec to conqueue_errc::closed if the queue is closed.
T queue::pop();
std::optional<T> queue::pop(std::error_code& ec);
Pops a value from the queue via move construction into the return value. The first version throws conqueue_error(conqueue_errc::closed) if the queue is empty and closed; the second version, if the queue is empty and closed, returns an empty optional and sets ec to conqueue_errc::closed. If the queue is empty and open, the operation blocks until an element is available.
In the original buffer_queue paper, the pop function returned the popped value through a reference parameter. Subsequently, it was changed to return the value directly, due to concern about the problem of losing elements when an error occurs. The exploration of different versions of error reporting was moved to a separate paper [P2921R0].
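For illustration only (a sketch, not part of the proposal), the two styles can be used side by side; bounded_queue stands in for any conforming queue and Msg is a hypothetical element type:

#include <optional>
#include <system_error>

struct Msg { int id; };

void throwing_style(bounded_queue<Msg>& q)
{
    q.push(Msg{1});     // throws conqueue_error(conqueue_errc::closed) if q is closed
    Msg m = q.pop();    // blocks while q is open and empty; throws once q is empty and closed
    (void)m;
}

void error_code_style(bounded_queue<Msg>& q)
{
    std::error_code ec;
    if (!q.push(Msg{2}, ec)) {
        // ec == conqueue_errc::closed
    }
    if (std::optional<Msg> m = q.pop(ec)) {
        // use *m
    } else {
        // the queue was empty and closed; ec == conqueue_errc::closed
    }
}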
5.2. Asynchronous Operations
sender auto queue::async_push(T x);
sender auto queue::async_pop();
These operations return a sender that will push or pop the element. Senders must support cancellation: if the receiver is currently waiting on a push or pop operation and is no longer interested in performing it, the operation should be removed from any internal waiting queues and complete with set_stopped.
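As a usage sketch only (not part of the proposal), assuming the sender/receiver vocabulary of std::execution (P2300); the adaptor names below are the P2300 ones and may differ in a final design:

// Pop one element and process it as a continuation of the sender.
// sync_wait returns an empty optional if the sender completes with
// set_stopped, i.e. here: if the queue was closed.
void pop_one(bounded_queue<int>& q)
{
    auto work = q.async_pop()
              | std::execution::then([](int v) { /* consume v */ });
    auto done = std::this_thread::sync_wait(std::move(work));
    if (!done) {
        // the queue was closed before an element became available
    }
}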
5.3. Non-Waiting Operations
Waiting on a full or empty queue can take a while, which has an opportunity cost. Avoiding that wait enables algorithms to avoid queuing speculative work when a queue is full, to do other work rather than wait for a push on a full queue, and to do other work rather than wait for a pop on an empty queue.
bool queue::try_push(const T& x, std::error_code& ec);
bool queue::try_push(T&& x, std::error_code& ec);
If the queue is full or closed, returns false and sets the respective status in ec. Otherwise, pushes the value onto the queue via copy or move construction and returns true.
optional<T> queue::try_pop(std::error_code& ec);
If the queue is empty, returns an empty optional and sets ec to conqueue_errc::empty. Otherwise, pops the element from the queue via move construction into the returned optional and sets ec to conqueue_errc::success.
These operations will not wait when the queue is full or empty. They may block for mutual exclusion.
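For illustration (a sketch, not part of the proposal): using try_push to avoid waiting on a full queue; offer and Msg are hypothetical names, and the sketch assumes, as in the try_push(T&&, T&) discussion in the historic contents, that the argument is only moved from when the push succeeds:

#include <system_error>

// Try to hand work to a consumer without waiting; tell the caller to do the
// work locally when the queue is full (or a spurious failure occurs).
bool offer(bounded_queue<Msg>& q, Msg& m)
{
    std::error_code ec;
    if (q.try_push(std::move(m), ec))
        return true;                    // deposited into the queue
    if (ec == conqueue_errc::closed)
        return false;                   // consumer side has shut down
    return false;                       // full (or spurious failure): caller keeps m
}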
5.4. Closed Queues
Threads using a queue for communication need some mechanism to signal when the queue is no longer needed. The usual approach is to add an additional out-of-band signal. However, this approach suffers from the flaw that threads waiting on either full or empty queues need to be woken up when the queue is no longer needed. To do that, you need access to the condition variables used for full/empty blocking, which considerably increases the complexity and fragility of the interface. It also leads to performance implications with additional mutexes or atomics. Rather than require an out-of-band signal, we chose to directly support such a signal in the queue itself, which considerably simplifies coding.
To achieve this signal, a thread may close a queue. Once closed,
no new elements may be pushed onto the queue. Push operations on a
closed queue will either return false (when they have an error_code
parameter) or throw conqueue_error (when they do not). Elements
already on the queue may be popped off. When a queue is empty and
closed, pop operations will either set ec to conqueue_errc::closed
(when they have an error_code parameter) or
throw conqueue_error otherwise.
The additional operations are as follows:
void queue::close() noexcept;
Close the queue.
bool queue::is_closed() const noexcept;
Return true iff the queue is closed.
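A sketch (not part of the proposal) of the shutdown protocol this enables; producer and consumer are hypothetical functions running on different threads:

#include <optional>
#include <system_error>

void producer(bounded_queue<int>& q)
{
    for (int i = 0; i != 100; ++i)
        q.push(i);       // may block while the queue is full
    q.close();           // signal completion; no further pushes will succeed
}

void consumer(bounded_queue<int>& q)
{
    std::error_code ec;
    while (std::optional<int> v = q.pop(ec)) {
        // process *v; elements pushed before close() are still delivered
    }
    // here the queue is drained and closed, and ec == conqueue_errc::closed
}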
5.5. Element Type Requirements
The above operations require element types with copy/move constructors and a destructor. These operations may be trivial. The copy/move constructors may throw, but must leave the objects in a valid state for subsequent operations.
5.6. Exception Handling
push and pop may throw an exception of type conqueue_error that is derived from system_error and contains an error_code.
Concurrent queues cannot completely hide the effect of exceptions thrown by the element type, in part because changes cannot be transparently undone when other threads are observing the queue.
Queues may rethrow exceptions from storage allocation, mutexes, or condition variables.
If the element type operations required do not throw exceptions, then only the exceptions above are rethrown.
When an element copy/move may throw, some queue operations have additional behavior.
- Construction shall rethrow, destroying any elements allocated. [@@@: We should remove the ctors that have elements.]
- A push operation shall rethrow and the state of the queue is unaffected.
- A pop operation shall rethrow and the element is popped from the queue. The value popped is effectively lost. (Doing otherwise would likely clog the queue with a bad element.)
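For illustration (a sketch, not part of the proposal), distinguishing queue closure from an element-type exception on the throwing interface; Widget is a hypothetical element type whose move constructor may throw:

#include <utility>

void produce_one(bounded_queue<Widget>& q, Widget w)
{
    try {
        q.push(std::move(w));
    } catch (const conqueue_error& e) {
        if (e.code() == conqueue_errc::closed) {
            // the queue was closed; stop producing
        }
    } catch (...) {
        // thrown by Widget's move constructor: per the rules above,
        // the state of the queue is unaffected
    }
}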
6. Concrete Queues
In addition to the concept, the standard needs at least one concrete
queue.
This paper proposes a fixed-size bounded_queue.
It meets the concept of a concurrent queue.
It provides for construction of an empty queue; the constructors that pre-fill the queue from a range or a pair of iterators have been removed in this revision.
Constructors take a parameter specifying the maximum number of elements
in the queue.
bounded_queue is only allowed to allocate in its constructor. Static Storage for C++ Concurrent bounded_queue (P3282) proposes to add a constructor where the queue doesn’t own
the storage.
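As a construction sketch (not part of the proposal), using a polymorphic allocator so that the single allocation performed by the constructor comes from a user-chosen memory resource; the names follow the class synopsis in the wording below:

#include <cassert>
#include <memory_resource>

void make_queue()
{
    std::pmr::monotonic_buffer_resource arena;
    std::pmr::polymorphic_allocator<int> alloc{&arena};

    bounded_queue<int, std::pmr::polymorphic_allocator<int>> q{1024, alloc};

    assert(q.capacity() == 1024);   // constructor postcondition
    // All storage for the 1024 elements was obtained in the constructor;
    // subsequent push/pop operations do not allocate.
}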
7. Proposed Wording
Add a new subclause to clause [thread] with the following contents.
7.1. Concurrent Queues
7.1.1. General
Concurrent queues provide a mechanism to transfer objects from one point in a program to another without producing data races.
7.1.2. Header <experimental/conqueue> synopsis
namespace std::experimental {
  template <class T, class Q>
  concept concurrent_queue = see below;

  enum class conqueue_errc { success, empty, full, closed };

  const error_category& conqueue_category() noexcept;
  error_code make_error_code(conqueue_errc e) noexcept;
  error_condition make_error_condition(conqueue_errc e) noexcept;

  class conqueue_error;

  template <typename T, class Allocator = std::allocator<T>>
  class bounded_queue;
}
7.1.3. Error reporting
const error_category& conqueue_category() noexcept;
- Returns: A reference to an object of a type derived from class error_category.
- The object’s default_error_condition and equivalent virtual functions shall behave as specified for the class error_category. The object’s name virtual function shall return a pointer to the string "conqueue".
error_code make_error_code(conqueue_errc e) noexcept;
- Returns: error_code(static_cast<int>(e), conqueue_category()).

error_condition make_error_condition(conqueue_errc e) noexcept;
- Returns: error_condition(static_cast<int>(e), conqueue_category()).
7.1.4. Class conqueue_error
namespace std::experimental {
  class conqueue_error : public system_error {
  public:
    explicit conqueue_error(conqueue_errc e);
    const error_code& code() const noexcept;
    const char* what() const noexcept;
  private:
    error_code ec_;   // exposition only
  };
}
explicit conqueue_error(conqueue_errc e);
- Effects: Initializes ec_ with make_error_code(e).

const error_code& code() const noexcept;
- Returns: ec_.

const char* what() const noexcept;
- Returns: An NTBS incorporating code().message().
7.1.5. Concurrent Queues Concept
- The concurrent_queue concept defines the requirements for a concurrent queue type.

namespace std::experimental {
  template <class T, class Q>
  concept concurrent_queue =
    move_constructible<remove_cvref_t<T>> &&
    same_as<decay_t<T>, typename Q::value_type> &&
    requires(Q q, T&& t, std::error_code ec) {
      q.capacity() noexcept -> size_t;
      q.is_closed() noexcept -> bool;
      q.close() noexcept -> void;
      q.push(std::forward<T>(t)) -> void;
      q.push(std::forward<T>(t), ec) -> bool;
      q.try_push(std::forward<T>(t), ec) -> bool;
      q.async_push(const T&) -> std::execution::sender_of<void>;
      q.async_push(T&&) -> std::execution::sender_of<void>;
      q.pop() -> T;
      q.pop(ec) -> std::optional<T>;
      q.try_pop(ec) -> std::optional<T>;
      q.async_pop() -> std::execution::sender_of<T>;
    };
}
- In the following description, Q denotes a type conforming to the concurrent_queue concept, q denotes an object of type Q, and t denotes an object convertible to Q::value_type.
- push, try_push and async_push are push operations.
- pop, try_pop and async_pop are pop operations.
- Calls to operations (except constructor and destructor) on the same queue from different threads of execution do not introduce data races.
- Successful push and pop operations will call a constructor of T. For the description of concrete queues, pre-ctor is the call of the constructor and post-ctor is the return of the constructor. Likewise, successful pop operations will call the destructor of T, and we have pre-dtor and post-dtor.
- In the following description, if an object is deposited into a queue by a push operation it can be extracted from the queue and returned by a pop operation. The post-ctor of the push operation of the deposit strongly happens before the return of the pop operation. A pop operation on a queue can only extract objects that were deposited into the same queue, and each object can only be extracted once.
- Concrete queues shall specify whether a push may block for space available in the queue (bounded queue) or whether a push may allocate new space (unbounded queue). [Note: Even an unbounded queue is required to provide async_push and try_push. -- end note]
- try_push may return false even if the queue has space for the element. Similarly, try_pop may return an empty optional even if the queue has an element to be extracted. [Note: This spurious failure is normally uncommon. -- end note] An implementation should ensure that try_push() and try_pop() do not consistently spuriously fail.
- The expression q.capacity() has the following semantics:
  - Returns: If the queue is bounded, the maximum number of elements the queue can hold, otherwise 0.
- The expression q.is_closed() has the following semantics:
  - Returns: true iff the queue is closed.
- The expression q.close() has the following semantics:
  - Effects: Closes the queue.
- The expression q.push(t) has the following semantics:
  - Effects: If the queue is not closed, t is deposited into q.
  - Throws: If the queue is closed, throws conqueue_error(conqueue_errc::closed). A concrete queue may throw additional exceptions.
- The expression q.try_push(t, ec) has the following semantics:
  - Effects: If the queue is not closed and space is available in the queue, t is deposited into q. The operation will not block.
  - Returns: true if t was deposited into q, otherwise false.
- The expression q.async_push(t) has the following semantics:
  - Returns: A sender object w that behaves as follows:
    - When w is connected with some receiver r, it returns an operation state op that behaves as follows:
      - It waits until there is space in the queue or until the queue is closed.
      - If there is space in the queue, t will be deposited into the queue and set_value(r) will be called.
      - If the queue is closed, set_stopped(r) will be called.
- The expression q.pop() has the following semantics:
  - Effects: Blocks the current thread until there is an object available in the queue or until the queue is closed. If an object is available in the queue, extract the object and return it. Otherwise (the queue is empty and closed), throw.
  - Throws: If the queue is empty and closed, throws conqueue_error(conqueue_errc::closed). A concrete queue may throw additional exceptions.
- The expression q.try_pop(ec) has the following semantics:
  - Effects: If there is an object available in the queue, it will be extracted from the queue and returned. If the queue is empty or closed, an empty optional is returned. The operation will not block waiting for an object to become available in the queue. A concrete queue shall specify whether try_pop may block for internal synchronization.
  - Returns: optional<T>(t) if t was the available object in the queue, optional<T>() otherwise.
- The expression q.async_pop() has the following semantics:
  - Returns: A sender object w that behaves as follows:
    - When w is connected with some receiver r, it returns an operation state op that behaves as follows:
      - It waits until there is an object available in the queue or until the queue is closed.
      - If there is an object t available in the queue, it will be extracted and set_value(r, t) will be called.
      - If the queue is closed, set_stopped(r) will be called.
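For illustration only (not part of the proposed wording), a generic algorithm constrained on the concept, assuming the concept is usable as declared in the synopsis above; pipe_through is a hypothetical name:

#include <optional>
#include <system_error>
#include <utility>

// Move every element from one concurrent queue to another, then propagate
// the shutdown downstream.
template <class T, class In, class Out>
    requires concurrent_queue<T, In> && concurrent_queue<T, Out>
void pipe_through(In& from, Out& to)
{
    std::error_code ec;
    while (std::optional<T> v = from.pop(ec))   // until 'from' is empty and closed
        to.push(std::move(*v));                 // throws if 'to' has been closed
    to.close();
}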
7.1.6. Class template bounded_queue
7.1.6.1. General
- A bounded_queue is a concurrent_queue and can hold a fixed number of objects, which is given at construction time.
template <typename T, class Allocator = std::allocator<T>>
class bounded_queue {
  bounded_queue() = delete;
  bounded_queue(const bounded_queue&) = delete;
  bounded_queue& operator=(const bounded_queue&) = delete;

public:
  typedef T value_type;

  // construct/destroy
  explicit bounded_queue(size_t max_elems, const Allocator& alloc = Allocator());
  ~bounded_queue() noexcept;

  // observers
  size_t capacity() const noexcept;
  bool is_closed() const noexcept;
  static constexpr bool is_always_lock_free = implementation-defined;

  // modifiers
  void close() noexcept;

  void push(const T& x);
  void push(T&& x);
  bool push(const T& x, std::error_code& ec);
  bool push(T&& x, std::error_code& ec);
  bool try_push(const T& x, std::error_code& ec);
  bool try_push(T&& x, std::error_code& ec);
  sender auto async_push(const T&);
  sender auto async_push(T&&);

  T pop();
  optional<T> pop(std::error_code& ec);
  optional<T> try_pop(std::error_code& ec);
  sender auto async_pop();
};
static constexpr bool is_always_lock_free;
- The static data member is_always_lock_free is true if the try_* operations are always lock-free, and false otherwise.
7.1.6.2. Constructor
explicit bounded_queue(size_t max_elems, const Allocator& alloc = Allocator());
- Effects: Constructs an object with no elements, but with storage for max_elems elements. The storage for the elements will be allocated using alloc.
- Postcondition: capacity() is equal to max_elems.
- Remarks: The operations of bounded_queue will not allocate any memory outside the constructor.
7.1.6.3. Modifiers
void push(const T& x);
void push(T&& x);
bool push(const T& x, std::error_code& ec);
bool push(T&& x, std::error_code& ec);
- Effects: Blocks the current thread until there is space in the queue or until the queue is closed.
- Let Push1 and Push2 be push operations and Pop1 and Pop2 be pop operations, where Pop1 returns the value of the parameter given to Push1 and Pop2 returns the value of the parameter given to Push2. Then there is a total order, consistent with memory_order::seq_cst, of the pre-ctor, post-ctor, pre-dtor and post-dtor of Push1, Push2, Pop1 and Pop2; moreover, if post-ctor of Push1 is before post-ctor of Push2 in that order, then pre-ctor of Pop1 happens before pre-ctor of Pop2.
- [Note: This guarantees FIFO behaviour, but for two concurrent pushes the constructors cannot determine the order in which the values are enqueued, and the constructors can run concurrently as well. -- end note]
- [Note: This does not guarantee that constructors or destructors ever run concurrently. An implementation may decide that two pushes (or two pops) never run concurrently. -- end note]
- [Note: A constructor can deadlock if it tries a push or pop on the same queue. -- end note]
T pop();
optional<T> pop(std::error_code& ec);
optional<T> try_pop(std::error_code& ec);
sender auto async_pop();
- Remarks: The constructor of the returned object and the destructor of the internal object run on the same thread of execution.
bool try_push(const T& x, std::error_code& ec);
bool try_push(T&& x, std::error_code& ec);
optional<T> try_pop(std::error_code& ec);
- Remarks: These operations may block for internal synchronization.
8. Old Revision History
P0260R8 revised P0260R7 - 2023-06-15 as follows.
- Fix typos.
- Implement LEWG feedback to derive conqueue_errc from system_error
- Implement LEWG feedback to add range constructor and go back to InputIterator
- size_t capacity() added
- Added TBB concurrent_bounded_queue as existing practice
- Moved discussion about pop() interface to separate paper
P0260R6 revised P0260R5 - 2023-01-15 as follows.
- Fixing typos.
- Added a scope for the target TS.
- Added questions to be answered by a TS.
- Added asynchronous interface
P0260R5 revised P0260R4 - 2020-01-12 as follows.
- Added more introductory material.
- Added response to feedback by LEWGI at Prague meeting 2020.
- Added section on existing practice.
- Replaced value_pop with pop.
- Replaced is_lock_free with is_always_lock_free.
- Removed is_empty and is_full.
- Added move-into parameter to try_push(Element&&).
- Added note that exceptions thrown by the queue operations themselves are derived from std::exception.
- Added a note that the wording is partly invalid.
- Moved more contents into the "Abandoned" part to avoid confusion.
P0260R4 revised P0260R3 - 2019-01-20 as follows.
- Remove the binding of queue_op_status::success to a value of zero.
- Correct stale use of the Queue template parameter in shared_queue_front to Value.
- Change the return type of share_queue_ends from a pair to a custom struct.
- Move the concrete queue proposal to a separate paper, [P1958].
P0260R3 revised P0260R2 - 2017-10-15 as follows.
- Convert queue_wrapper to a function-like interface. This conversion removes the queue_base class. Thanks to Zach Lane for the approach.
- Removed the requirement that element types have a default constructor. This removal implies that statically sized buffers cannot use an array implementation and must grow a vector implementation to the maximum size.
- Added a discussion of checking for output iterator end in the wording.
- Fill in synopsis section.
- Remove stale discussion of queue_owner.
- Move all abandoned interface discussion to a new section.
- Update paper header to current practice.
P0260R2 revised P0260R1 - 2017-02-05 as follows.
- Emphasize that non-blocking operations were removed from the proposed changes.
- Correct syntax typos for noexcept and template alias.
- Remove static from is_lock_free for generic_queue_back and generic_queue_front.
P0260R1 revised P0260R0 - 2016-02-14 as follows.
- Remove pure virtuals from queue_wrapper.
- Correct queue::pop to value_pop.
- Remove nonblocking operations.
- Remove non-locking buffer queue concrete class.
- Tighten up push/pop wording on closed queues.
- Tighten up push/pop wording on synchronization.
- Add note about possible non-FIFO behavior.
- Define buffer_queue to be FIFO.
- Make wording consistent across attributes.
- Add a restriction on element special methods using the queue.
- Make is_lock_free() for only non-waiting functions.
- Make is_lock_free() static for non-indirect classes.
- Make is_lock_free() noexcept.
- Make has_queue() noexcept.
- Make destructors noexcept.
- Replace "throws nothing" with noexcept.
- Make the remarks about the usefulness of is_empty() and is_full() into notes.
- Make the non-static member functions is_... and has_... const.
P0260R0 revised N3533 - 2013-03-12 as follows.
- Update links to source code.
- Add wording.
- Leave the name facility out of the wording.
- Leave the push-front facility out of the wording.
- Leave the reopen facility out of the wording.
- Leave the storage iterator facility out of the wording.
N3533 revised N3434 as follows.
- Add more exposition.
- Provide separate non-blocking operations.
- Add a section on the lock-free queues.
- Argue against push-back operations.
- Add a cautionary note on the usefulness of is_closed().
- Expand the cautionary note on the usefulness of is_empty(). Add is_full().
- Add a subsection on element type requirements.
- Add a subsection on exception handling.
- Clarify ordering constraints on the interface.
- Add a subsection on a lock-free concrete queue.
- Add a section on content iterators, distinct from the existing streaming iterators section.
- Swap front and back names, as requested.
- General expository cleanup.
- Add a "Revision History" section.
N3434 revised N3353 = 12-0043 - 2012-01-14 as follows.
- Change the inheritance-based interface to a pure conceptual interface.
- Put try_... operations into a separate subsection.
- Add a subsection on non-blocking operations.
- Add a subsection on push-back operations.
- Add a subsection on queue ordering.
- Merge the "Binary Interface" and "Managed Indirection" sections into a new "Conceptual Tools" section. Expand on the topics and their rationale.
- Add a subsection to "Conceptual Tools" that provides for type erasure.
- Remove the "Synopsis" section.
- Add an "Implementation" section.
9. Implementation
An implementation is available at github.com/GorNishanov/conqueue.
A free, open-source implementation of an earlier version of these interfaces is available at the Google Concurrency Library project at github.com/alasdairmackintosh/google-concurrency-library.
The original buffer_queue is in ..../blob/master/include/buffer_queue.h.
The concrete lock_free_buffer_queue is in ..../blob/master/include/lock_free_buffer_queue.h.
The corresponding implementation of the conceptual tools is in ..../blob/master/include/queue_base.h.
10. Historic Contents
The contents of this section are for historic reference only.
10.1. Abandoned Interfaces
10.1.1. Re-opening a Queue
There are use cases for opening a queue that is closed. While we are not aware of an implementation in which the ability to reopen a queue would be a hardship, we also imagine that such an implementation could exist. Open should generally only be called if the queue is closed and empty, providing a clean synchronization point, though it is possible to call open on a non-empty queue. An open operation following a close operation is guaranteed to be visible after the close operation and the queue is guaranteed to be open upon completion of the open call. (But of course, another close call could occur immediately thereafter.)
- Open the queue.
Note that when is_closed() returns false, there is no assurance that
any subsequent operation will not find the queue closed, because some other
thread may close it concurrently.
If an open operation is not available, there is an assurance that once closed, a queue stays closed. So, unless the programmer takes care to ensure that all other threads will not close the queue, only a return value of true has any meaning.
Given these concerns with reopening queues, we do not propose wording to reopen a queue.
10.1.2. Non-Blocking Operations
For cases when blocking for mutual exclusion is undesirable, one can
consider non-blocking operations. The interface is the same as the try
operations but is allowed to also return queue_op_status::busy in case
the operation is unable to complete without blocking.
- If the operation would block, return queue_op_status::busy. Otherwise, if the queue is full, return queue_op_status::full. Otherwise, push the Element onto the queue. Return queue_op_status::success.
- If the operation would block, return queue_op_status::busy. Otherwise, if the queue is empty, return queue_op_status::empty. Otherwise, pop the Element from the queue. The element will be moved out of the queue in preference to being copied. Return queue_op_status::success.
These operations will neither wait nor block. However, they may do nothing.
The non-blocking operations highlight a terminology problem. In terms of
synchronization effects, the non-blocking operations on queues are equivalent to
try_lock on mutexes. And so one could conclude that the existing try_
operations should be renamed as waiting operations and the non-blocking
operations should take the try_ names. However, at least Threading Building Blocks
uses the existing terminology. Perhaps better is to not use the try_ prefix
and instead distinguish non-waiting and non-blocking operations by name.
In November 2016, the Concurrency Study Group chose to defer
non-blocking operations. Hence, the proposed wording does not include
these functions. In addition, as these functions were the only ones that
returned queue_op_status::busy, that enumeration value is also not included.
10.1.3. Push Front Operations
Occasionally, one may wish to return a popped item to the queue. We can
provide for this with push_front operations.
- Push the Element onto the front of the queue, i.e. in at the end of the queue that is normally popped. Return queue_op_status::success.
- If the queue was full, return queue_op_status::full. Otherwise, push the Element onto the front of the queue, i.e. in at the end of the queue that is normally popped. Return queue_op_status::success.
- If the operation would block, return queue_op_status::busy. Otherwise, if the queue is full, return queue_op_status::full. Otherwise, push the Element onto the front of the queue, i.e. in at the end of the queue that is normally popped. Return queue_op_status::success.
This feature was requested at the Spring 2012 meeting. However, we do not think the feature works.
- The name push_front is inconsistent with existing "push back" nomenclature.
- The effects of push_front are only distinguishable from a regular push when there is a strong ordering of elements. Highly concurrent queues will likely have no strong ordering.
- The push_front call may fail due to full queues, closed queues, etc., in which case the operation will suffer contention and may succeed only after interposing push and pop operations. The consequence is that the original push order is not preserved in the final pop order. So, push_front cannot be directly used as an 'undo'.
- The operation implies an ability to reverse internal changes at the front of the queue. This ability implies a loss of efficiency in some implementations.
In short, we do not think that in a concurrent environment push_front provides sufficient semantic value to justify its cost. Consequently, the proposed wording does not provide this feature.
10.1.4. Queue Names
It is sometimes desirable for queues to be able to identify themselves. This feature is particularly helpful for run-time diagnostics, particularly when 'ends' become dynamically passed around between threads. See Managed Indirection.
- Return the name string provided as a parameter to queue construction.
There is some debate on this facility, but we see no way to effectively replicate the facility. However, in recognition of that debate, the wording does not provide the name facility.
10.1.5. Lock-Free Buffer Queue
We provide a concrete concurrent queue in the form of a fixed-size lock_free_buffer_queue. It meets the non-waiting concurrent queue concept. The queue is still under development, so details may change.
In November 2016, the Concurrency Study Group chose to defer lock-free queues. Hence, the proposed wording does not include a concrete lock-free queue.
10.1.6. Storage Iterators
In addition to iterators that stream data into and out of a queue, we could provide an iterator over the storage contents of a queue. Such an iterator, even when implementable, would most likely be valid only when the queue is otherwise quiescent. We believe such an iterator would be most useful for debugging, which may well require knowledge of the concrete class. Therefore, we do not propose wording for this feature.
10.1.7. Empty and Full Queues
It is sometimes desirable to know if a queue is empty.
- Return true iff the queue is empty.
This operation is useful only during intervals when the queue is known to not be subject to pushes and pops from other threads. Its primary use case is assertions on the state of the queue at the end of its lifetime, or when the system is in a quiescent state (where there are no outstanding pushes).
We can imagine occasional use for knowing when a queue is full, for instance in system performance polling. The motivation is significantly weaker though.
- Return true iff the queue is full.
Not all queues will have a full state, and these would always return false.
10.1.8. Queue Ordering
The conceptual queue interface makes minimal guarantees.
-
The queue is not empty if there is an element that has been pushed but not popped.
-
A push operation synchronizes with the pop operation that obtains that element.
-
A close operation synchronizes with an operation that observes that the queue is closed.
-
There is a sequentially consistent order of operations.
In particular, the conceptual interface does not guarantee that the sequentially consistent order of element pushes matches the sequentially consistent order of pops. Concrete queues could specify more specific ordering guarantees.
10.1.9. Lock-Free Implementations
Lock-free queues will have some trouble waiting for the queue to be
non-empty or non-full. Therefore, we propose two closely-related
concepts: a full concurrent queue concept as described above, and a
non-waiting concurrent queue concept that has all the operations except
the waiting push and pop operations. That is, it has only
non-waiting operations (presumably emulated with busy wait) and
non-blocking operations, but no waiting operations. We propose naming
these the waiting concurrent queue concept and the non-waiting
concurrent queue concept, respectively.
Note: Adopting this conceptual split requires splitting some of the facilities defined later.
For generic code it's sometimes important to know if a concurrent queue has a lock-free implementation.
- Return true iff the queue has a lock-free implementation of the non-waiting operations.
10.2. Abandoned Additional Conceptual Tools
There are a number of tools that support use of the conceptual interface. These tools are not part of the queue interface, but provide restricted views or adapters on top of the queue useful in implementing concurrent algorithms.
10.2.1. Fronts and Backs
Restricting an interface to one side of a queue is a valuable code
structuring tool. This restriction is accomplished with the classes
generic_queue_front and generic_queue_back parameterized on the
concrete queue implementation. These act as pointers with access to only
the front or the back of a queue. The front of the queue is where
elements are popped. The back of the queue is where elements are pushed.
void send(int number, generic_queue_back<buffer_queue<int>> arv);
These fronts and backs are also able to provide push and pop operations that unambiguously stream data into or out of a queue.
10.2.2. Streaming Iterators
In order to enable the use of existing algorithms streaming through concurrent queues, they need to support iterators. Output iterators will push to a queue and input iterators will pop from a queue. Stronger forms of iterators are in general not possible with concurrent queues.
Iterators implicitly require waiting for the advance, so iterators are only supportable with the full (waiting) concurrent queue concept.
void iterate(generic_queue_back<buffer_queue<int>>::iterator bitr,
             generic_queue_back<buffer_queue<int>>::iterator bend,
             generic_queue_front<buffer_queue<int>>::iterator fitr,
             generic_queue_front<buffer_queue<int>>::iterator fend,
             int (*compute)(int))
{
    while (fitr != fend && bitr != bend)
        *bitr++ = compute(*fitr++);
}
Note that contrary to existing iterator algorithms, we check both iterators for reaching their end, as either may be closed at any time.
Note that with suitable renaming, the existing standard front insert and back insert iterators could work as is. However, there is nothing like a pop iterator adapter.
10.2.3. Binary Interfaces
The standard library is template based, but it is often desirable to
have a binary interface that shields clients from the concrete
implementations. For example, std::function is a binary interface to a
callable object (of a given signature). We achieve this capability in
queues with type erasure.
We provide a queue_base class template parameterized by the value
type. Its operations are virtual. This class provides the essential
independence from the queue representation.
We also provide queue_back and queue_front class templates
parameterized by the value types. These are essentially type-erased
backs and fronts of a queue, respectively.
To obtain a pointer to queue_base from a non-virtual concurrent
queue, construct an instance of the queue_wrapper class template, which
is parameterized on the queue and derived from queue_base. Upcasting a
pointer to the queue_wrapper instance to a queue_base instance thus
erases the concrete queue type.
extern void seq_fill(int count, queue_back<int> b);

buffer_queue<int> body(10 /*elements*/, /*named*/ "body");
queue_wrapper<buffer_queue<int>> wrap(body);
seq_fill(10, wrap.back());
10.2.4. Managed Indirection
Long running servers may have the need to reconfigure the relationship between queues and threads. The ability to pass 'ends' of queues between threads with automatic memory management eases programming.
To this end, we provide shared_queue_front and shared_queue_back template classes. These act as reference-counted versions of the queue_front and queue_back template classes.
The share_queue_ends template function will provide a pair of a
shared_queue_front and a shared_queue_back to a dynamically
allocated instance containing an instance of the
specified implementation queue. When the last of these fronts and backs
are deleted, the queue itself will be deleted. Also, when the last of
the fronts or the last of the backs is deleted, the queue will be
closed.
auto x = share_queue_ends<buffer_queue<int>>(10, "shared");
shared_queue_back<int> b(x.back);
shared_queue_front<int> f(x.front);
f.push(3);
assert(3 == b.value_pop());
10.3. try_push ( T && , T & )
REVISITED in Varna
The following version was introduced in response to LEWG-I concerns about losing the element if an rvalue cannot be stored in the queue.
However, SG1 reaffirmed the APIs above with the following rationale:
It seems that it is possible not to lose the element in both versions:
T x = get_something();
if (q.try_push(std::move(x))) ...
With two parameter version:
T x;
if (q.try_push(get_something(), x)) ...
Ergonomically they are roughly identical. The API is slightly simpler with the one-argument version; therefore, we reverted to the original one-argument version.