ISO/IEC JTC1 SC22 WG21 N3434 = 12-0124 - 2012-09-23
Lawrence Crowl, crowl@google.com, Lawrence@Crowl.org
Introduction
Conceptual Interface
Basic Operations
Trying Operations
Non-Blocking Operations
Push Back Operations
Closed Queues
Empty Queues
Queue Ordering
Queue Names
A Concrete Buffer Queue
Conceptual Tools
Fronts and Backs
Iterators
Binary Interfaces
Managed Indirection
Implementation
This paper revises N3353 = 12-0043 - 2012-01-14.
The existing deque
in the standard library
is an inherently sequential data structure.
Its reference-returning element access operations
cannot synchronize access to those elements
with other queue operations.
Concurrent pushes and pops on queues require a different interface to the queues.
We provide basic queue operations, and then extend those operations to cover other important issues.
The essential solution to the problem is to shift to value-based operations, rather than reference-based operations.
The waiting operations are:
void queue::push(const Element&);
void queue::push(Element&&);
Push the Element
onto the queue.
If the operation cannot be completed immediately,
wait until it can be completed.
Element queue::value_pop();
Pop the Element
from the queue.
If the operation cannot be completed immediately,
wait until it can be completed.
The element will be moved out of the queue, if possible.
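As a usage sketch only, the following example pairs these waiting operations in a bounded producer/consumer arrangement. It assumes the fixed-size buffer_queue described later in this paper; the header path, the ten-element bound, and the function names are illustrative.
#include <thread>
#include "buffer_queue.h" // illustrative path; see the Implementation section

void producer( buffer_queue<int>& q )
{
    for ( int i = 0; i != 100; ++i )
        q.push( i ); // waits whenever the buffer is full
}

void consumer( buffer_queue<int>& q )
{
    for ( int i = 0; i != 100; ++i ) {
        int value = q.value_pop(); // waits whenever the buffer is empty
        // ... use value ...
    }
}

int main()
{
    buffer_queue<int> q( 10, "pipe" ); // at most ten buffered elements
    std::thread write( [&q]{ producer( q ); } );
    std::thread read( [&q]{ consumer( q ); } );
    write.join();
    read.join();
}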
Of course, there are operations to attempt a push or pop, and execute that operation only if it can be completed soon.
queue_op_status queue::try_push(const Element&);
queue_op_status queue::try_push(Element&&);
Try to immediately push the Element
onto the queue.
If successful, return queue_op_status::success.
If the queue was full, return queue_op_status::full.
queue_op_status queue::try_pop(Element&);
Try to immediately pop the Element
from the queue.
If successful, return queue_op_status::success.
If the queue was empty, return queue_op_status::empty.
The element will be moved out of the queue, if possible.
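As a sketch of typical polling uses of these operations (the names drain_available and offer are illustrative, as is the header path):
#include <vector>
#include "buffer_queue.h" // illustrative path

// Pop whatever is currently available without waiting for producers.
void drain_available( buffer_queue<int>& q, std::vector<int>& out )
{
    int value;
    while ( q.try_pop( value ) == queue_op_status::success )
        out.push_back( value );
    // The loop ends when try_pop no longer reports success,
    // e.g. on queue_op_status::empty.
}

// Offer a value, dropping it rather than waiting when the queue is full.
bool offer( buffer_queue<int>& q, int value )
{
    return q.try_push( value ) == queue_op_status::success;
}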
The above try_push operations will avoid waiting only when the queue is full.
If the queue is not full, but contention requires the operation to wait for a mutex to discover that, the operation will block.
The above try_pop operations behave similarly.
We could change these operations to also report failure to acquire the mutex, perhaps by returning queue_op_status::busy.
These operations could then form the core of a non-blocking conceptual interface.
That non-blocking interface could be extended with the basic operations, provided that the basic operations are also permitted to throw queue_op_status::busy.
Occasionally, one may wish to return a popped item to the queue.
We can provide for this with push_back operations.
These operations may have performance implications for some queues, so they should be carefully considered.
void queue::push_back(const Element&);
void queue::push_back(Element&&);
Push the Element
onto the back of the queue, i.e. at the end of the queue that is normally popped.
If the operation cannot be completed immediately, wait until it can be completed.
queue_op_status queue::try_push_back(const Element&);
queue_op_status queue::try_push_back(Element&&);
Try to immediately push the Element
onto the back of the queue, i.e. at the end of the queue that is normally popped.
If successful, return queue_op_status::success.
If the queue was full, return queue_op_status::full.
Should these operations be adopted, renaming the ...push operations to ...push_front operations seems most consistent with the current library.
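As an illustration only, one plausible use of push_back is to defer an element that cannot yet be handled; ready_to_process and process are hypothetical helpers, and the header path is illustrative.
#include "buffer_queue.h" // illustrative path

bool ready_to_process( int ); // hypothetical predicate
void process( int );          // hypothetical consumer

void process_or_defer( buffer_queue<int>& q )
{
    int value = q.value_pop();  // waits for an element
    if ( ready_to_process( value ) )
        process( value );
    else
        q.push_back( value );   // re-queue at the pop end, so it is
                                // popped before the other queued elements
}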
Threads using a queue for communication need some mechanism to signal when the queue is no longer needed. Rather than require an out-of-band signal, we chose to directly support such a signal in the queue itself, which considerably simplified coding.
To achieve this signal, a thread may close a queue.
Once closed, no new elements may be pushed onto the queue.
Push operations on a closed queue
will either return queue_op_status::closed
(when they have a status return)
or throw queue_op_status::closed
(when they do not).
Elements already on the queue may be popped off.
When a queue is empty and closed,
pop operations will either
return queue_op_status::closed
(when they have a status return)
or throw queue_op_status::closed
(when they do not).
The operations are as follows.
void queue::close();
Close the queue.
bool queue::is_closed();
Return true iff the queue is closed.
queue_op_status queue::wait_push(const Element&);
queue_op_status queue::wait_push(Element&&);
Push the Element
onto the queue.
If successful, return queue_op_status::success.
If the queue was closed, return queue_op_status::closed.
Otherwise, wait for either of the above to become true.
queue_op_status queue::wait_push_back(const Element&);
queue_op_status queue::wait_push_back(Element&&);
Push the Element
onto the back of the queue, i.e. at the end of the queue that is normally popped.
If successful, return queue_op_status::success.
If the queue was closed, return queue_op_status::closed.
Otherwise, wait for either of the above to become true.
Note that this operation
implies a less efficient single-writer single-reader queue.
queue_op_status queue::wait_pop(Element&);
Try to immediately pop the Element
from the queue.
If successful, return queue_op_status::success.
If the queue was empty and closed, return queue_op_status::closed.
Otherwise, wait for either of the above to become true.
The element will be moved out of the queue, if possible.
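The intended shutdown pattern might look like the following sketch, in which the producer closes the queue when it is done and the consumer drains it until wait_pop reports closed. It assumes the buffer_queue described later in this paper; the header path and names are illustrative.
#include "buffer_queue.h" // illustrative path

void produce( buffer_queue<int>& q )
{
    for ( int i = 0; i != 100; ++i )
        q.push( i );
    q.close(); // signal that no more elements will be pushed
}

void consume( buffer_queue<int>& q )
{
    int value;
    while ( q.wait_pop( value ) == queue_op_status::success ) {
        // ... use value ...
    }
    // wait_pop returned queue_op_status::closed:
    // the queue is closed and all elements have been drained.
}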
It is sometimes desirable to know if a queue is empty.
bool queue::is_empty();
Return true iff the queue is empty.
This operation is useful only during intervals when the queue is known not to be subject to pushes and pops from other threads. Its primary use case is assertions on the state of the queue at the end of its lifetime.
The conceptual queue interface does not specify an order of elements. Concrete queues should specify this ordering, if any.
A push operation synchronizes with the pop operation that obtains that element.
It is sometimes desirable for queues to be able to identify themselves. This feature is particularly helpful for run-time diagnostics, especially when 'ends' are passed dynamically between threads. See Managed Indirection below. There is some debate on this facility, but I see no way to replicate it effectively otherwise.
const char* queue::name();
Return the name string provided as a parameter to queue construction.
We provide a concrete concurrent queue
in the form of a fixed-size buffer_queue.
It provides the conceptual operations above,
construction of an empty queue,
and construction of a queue from a pair of iterators.
Constructors take a parameter
specifying the maximum number of elements in the buffer.
Constructors may also take a parameter
specifying the name of the queue.
If the name is not present, it defaults to the empty string.
The buffer_queue
deletes the default constructor, the copy constructor,
and the copy assignment operator.
We feel that their benefit might not justify their potential confusion.
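The following constructions are illustrative sketches; in particular, the parameter order of the iterator-pair constructor is an assumption based on the reference implementation, and the header path is illustrative.
#include <vector>
#include "buffer_queue.h" // illustrative path

std::vector<int> seed{ 1, 2, 3 };

buffer_queue<int> q1( 10, "stage-one" );              // empty, named, at most ten elements
buffer_queue<int> q2( 10, seed.begin(), seed.end() ); // initialized from an iterator pair;
                                                      // the name defaults to the empty string
// buffer_queue<int> q3;       // error: default constructor is deleted
// buffer_queue<int> q4 = q1;  // error: copy constructor is deleted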
There are a number of tools that support use of the conceptual interface.
Restricting an interface to one side of a queue
is a valuable code structuring tool.
This restriction is accomplished with
the classes generic_queue_front<Queue> and generic_queue_back<Queue>.
These act as pointers
with access to only the front or the back of a queue.
void send( int number, generic_queue_front<buffer_queue<int>> f );
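One possible body for such a function, shown only as a sketch, pushes the given value through the front it was handed; the callee can push, but cannot pop.
void send( int number, generic_queue_front<buffer_queue<int>> f )
{
    f.push( number ); // waits if the underlying buffer_queue is full
}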
These fronts and backs
are also able to provide begin
and end
operations that produce iterators
because there is no ambiguity
on whether the iterators should be input or output iterators.
In order to enable the use of existing algorithms with concurrent queues, they need to support iterators. Output iterators will push to a queue and input iterators will pop from a queue. Stronger forms of iterators are in general not possible with concurrent queues.
void iterate(
    generic_queue_back<buffer_queue<int>>::iterator bitr,
    generic_queue_back<buffer_queue<int>>::iterator bend,
    generic_queue_front<buffer_queue<int>>::iterator fitr,
    generic_queue_front<buffer_queue<int>>::iterator fend,
    int (*compute)( int ) )
{
    while ( bitr != bend && fitr != fend )
        *fitr++ = compute(*bitr++);
}
Note that with suitable renaming, the existing standard front insert and back insert iterators could work as is. However, there is nothing like a pop iterator adapter.
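As a further sketch, and assuming the front's iterator meets the output iterator requirements as the example above suggests, an existing algorithm such as std::copy can feed a queue; fill_from is an illustrative name, and the headers are those of the reference implementation.
#include <algorithm>
#include <vector>
#include "buffer_queue.h" // illustrative paths
#include "queue_base.h"

void fill_from(
    const std::vector<int>& src,
    generic_queue_front<buffer_queue<int>> f )
{
    // Each assignment through the output iterator pushes onto the queue,
    // waiting whenever the underlying buffer_queue is full.
    std::copy( src.begin(), src.end(), f.begin() );
}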
The standard library is template based, but it is often desirable to have a binary interface that shields the client from the concrete queue implementation. We achieve this capability with type erasure.
We provide a queue_base
class template
parameterized by the value type.
Its operations are virtual.
This class provides the essential independence
from the queue representation.
We also provide queue_front
and queue_back
class templates parameterized by the value type.
These are essentially generic_queue_front<queue_base<Value_type>> and generic_queue_back<queue_base<Value_type>>, respectively.
Indeed, they probably could be template aliases.
To obtain a pointer to queue_base
from a non-virtual concurrent queue,
construct an instance of the queue_wrapper
class template,
which is parameterized on the queue
and derived from queue_base.
Upcasting a pointer to the queue_wrapper instance to a pointer to queue_base thus erases the concrete queue type.
extern void seq_fill( int count, queue_front<int> f );
buffer_queue<int> body(1, "body");
queue_wrapper<buffer_queue<int>> wrap(&body);
seq_fill(1, &wrap);
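A possible body for seq_fill, sketched here for illustration, is written once against queue_front<int> and never names the concrete queue behind the queue_base.
void seq_fill( int count, queue_front<int> f )
{
    for ( int i = 1; i <= count; ++i )
        f.push( i ); // dispatches through the virtual queue_base interface
}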
Long running servers may have the need to reconfigure the relationship between queues and threads. The ability to pass 'ends' of queues between threads with automatic memory management eases programming.
To this end, we provide
shared_queue_front
and
shared_queue_back
template classes.
These act as reference-counted versions
of the queue_front
and
queue_back
template classes.
These shared versions
act on a queue_counted
class template,
which is a counted version of queue_base.
The concrete counted queues are
the queue_owner
class template,
which takes ownership of a raw pointer to a queue,
and the queue_object
class template,
which constructs an instance of the implementation queue within itself.
Both class templates are derived from queue_counted.
queue_owner<buffer_queue<int>> own( new buffer_queue<int>(1, "body") );
seq_fill(1, &own);
The share_queue_ends(Args ... args)
template function
will provide a pair of
shared_queue_front
and shared_queue_back
to a dynamically allocated queue_object
instance
containing some implementation queue.
When the last of these fronts and backs is deleted,
the queue itself will be deleted.
Also, when the last of the fronts or the last of the backs is deleted,
the queue will be closed.
auto x = share_queue_ends<buffer_queue<int>>(kSmall);
shared_queue_front<int> f(x.first);
shared_queue_back<int> b(x.second);
f.push(3);
assert(3 == b.value_pop());
A free, open-source implementation of these interfaces
is available at the Google Concurrency Library project
at
http://code.google.com/p/google-concurrency-library/.
The concrete buffer_queue
is in
..../source/browse/include/buffer_queue.h.
The corresponding implementation of the conceptual tools is in
..../source/browse/include/queue_base.h.