constexpr atomic<T> and atomic_ref<T>
This paper proposes marking most atomic<T> member functions and associated free functions constexpr, so that atomic code can be used unchanged in constexpr and consteval contexts. The proposed changes will allow implementing other types (std::shared_ptr<T>, persistent data structures with atomic pointers) and algorithms (thread-safe data processing, such as scanning data with an atomic counter) by just sprinkling constexpr over their specifications.
- R1 → R2: Added clarification of the behaviour of the wait and notify functions.
- R0 → R1: Made the wait and notify functions constexpr as requested by SG1. Wording changed accordingly. Updated link to the implementation on Compiler Explorer.
SG1: Forward P3309 to LEWG with the following notes:
- Add constexpr to the wait and notify functions in the next revision of P3309.
- atomic<shared_ptr> should be supported in constexpr whenever shared_ptr is supported in constexpr (whichever paper lands second should have this change).
- is_lock_free() should not be made constexpr.
Intention for wording changes
Mark all functions in [atomics] constexpr, excluding all volatile overloads. All of them can be implemented either directly in the constant-expression evaluator or by using if consteval:
template<class T>
constexpr T atomic_fetch_add(atomic<T>* target, typename atomic<T>::difference_type diff) noexcept {
  if consteval {
    const auto previous = target->value;
    target->value += diff;
    return previous;
  } else {
    return __c11_atomic_fetch_add(&target->value, diff);
  }
}
Synchronization functions and helpers (std::kill_dependency, std::atomic_thread_fence) can be implemented as no-ops. Memory-order arguments should simply be ignored, as constant-evaluated code doesn't have multiple threads. An alternative implementation strategy is to allow the atomic builtins to work in the constant evaluator.
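A minimal sketch of this strategy applied to std::atomic_thread_fence; the builtin name __atomic_thread_fence and the enum-to-int mapping are GCC/Clang specifics and should be treated as assumptions, not as the proposed implementation:

#include <atomic>

// Sketch of a library implementation: a no-op during constant evaluation,
// the usual fence at run time.
constexpr void atomic_thread_fence(std::memory_order order) noexcept {
  if consteval {
    (void)order;  // constant evaluation is single-threaded; nothing to order
  } else {
    // Assumed GCC/Clang builtin; real implementations dispatch to
    // however their runtime fence is spelled.
    __atomic_thread_fence(static_cast<int>(order));
  }
}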
wait and notify operations should work during constant evaluation as expected in a single-threaded environment. notify will be a no-op, and wait-ing for a different value will be a deadlock, which should result in constant-evaluation failure according to [expr.const#5.7] (an expression that would exceed the implementation-defined limits), as every check of the value is [intro.progress#4] continuous execution of execution steps while waiting for the condition.
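For illustration, a sketch of the intended behavior (not proposed wording): a constant-evaluated wait on a value that differs from the stored one returns immediately, while waiting on the stored value itself can never be satisfied and is rejected as a constant expression.

constexpr bool wait_returns_immediately() {
  std::atomic<int> a{1};
  a.wait(0);  // load() == 1 differs from 0 on the first check, so this returns at once
  return true;
}
static_assert(wait_returns_immediately());

// Ill-formed as a constant expression: no other thread can ever change
// the value, so evaluation exceeds the implementation-defined limits.
// constexpr bool deadlocks() {
//   std::atomic<int> a{1};
//   a.wait(1);  // blocks forever during constant evaluation
//   return true;
// }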
Question answered by SG1
- Should we make the is_lock_free functions constexpr as well? No, keep them non-constexpr, as the answer can differ depending on the running environment.
Question for LEWG
This example shows how code can easily be reused between runtime and constant-evaluated contexts without duplication. Without this paper, multiple functions would need to be duplicated.
constexpr bool process_first_unprocessed(std::atomic<size_t> & counter, std::span<cell> subject) {
  // BEFORE: compile-time error when you try to evaluate this inside constant-evaluated code
  // AFTER: works sequentially in constant-evaluated code
  const size_t current = counter.fetch_add(1);
  if (current >= subject.size()) {
    return false;
  }
  process(subject[current]);
  return true;
}
constexpr void process_all(std::span<cell> subject, unsigned thread_count = 1) {
  // BEFORE: calling this function in constant-evaluated code will always fail, with any number of requested threads
  // AFTER: calling it with argument thread_count == 1 will succeed in constant-evaluated code
  std::atomic<size_t> counter{0};
  auto threads = std::vector<std::jthread>{};
  assert(thread_count >= 1);
  for (unsigned i = 1; i < thread_count; ++i) {
    threads.emplace_back([&]{
      while (process_first_unprocessed(counter, subject));
    });
  }
  while (process_first_unprocessed(counter, subject));
}
link to compiler-explorer.com
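With the proposal applied, the same functions can drive a compile-time computation. A sketch, assuming C++20 constexpr std::vector and a hypothetical is_processed predicate for cell:

constexpr bool processed_everything(std::size_t n) {
  auto cells = std::vector<cell>(n);
  process_all(cells);  // thread_count == 1: runs sequentially at compile time
  return std::ranges::all_of(cells, is_processed);  // hypothetical check
}
static_assert(processed_everything(16));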
33 Concurrency support library [thread]
Subclause [atomics] describes components for fine-grained atomic access. This access is provided via operations on atomic objects.
The type aliases atomic_intN_t, atomic_uintN_t, atomic_intptr_t, and atomic_uintptr_t are defined if and only if intN_t, uintN_t, intptr_t, and uintptr_t are defined, respectively.
The type aliases atomic_signed_lock_free and atomic_unsigned_lock_free name specializations of atomic whose template arguments are integral types, respectively signed and unsigned, and whose is_always_lock_free property is true. [Note 1: These aliases are optional in freestanding implementations ([compliance]). — end note] Implementations should choose for these aliases the integral specializations of atomic for which the atomic waiting and notifying operations ([atomics.wait]) are most efficient.
namespace std {
  enum class memory_order : unspecified {
    relaxed, consume, acquire, release, acq_rel, seq_cst
  };
}
The enumeration memory_order specifies the detailed regular (non-atomic) memory synchronization order as defined in [intro.multithread] and may provide for operation ordering. Its enumerated values and their meanings are as follows:
- memory_order::relaxed: no operation orders memory.
- memory_order::release, memory_order::acq_rel, and memory_order::seq_cst: a store operation performs a release operation on the affected memory location.
- memory_order::consume: a load operation performs a consume operation on the affected memory location. [Note 1: Prefer memory_order::acquire, which provides stronger guarantees than memory_order::consume. Implementations have found it infeasible to provide performance better than that of memory_order::acquire. Specification revisions are under consideration. — end note]
- memory_order::acquire, memory_order::acq_rel, and memory_order::seq_cst: a load operation performs an acquire operation on the affected memory location.
[Note 2: Atomic operations specifying memory_order::relaxed are relaxed with respect to memory ordering. Implementations must still guarantee that any given atomic access to a particular atomic object be indivisible with respect to all other atomic accesses to that object. — end note]
An atomic operation A that performs a release operation on an atomic object M synchronizes with an atomic operation B that performs an acquire operation on M and takes its value from any side effect in the release sequence headed by A.
An atomic operation A on some atomic object M is coherence-ordered before another atomic operation B on M if
- A is a modification, and B reads the value stored by A, or
- A precedes B in the modification order of M, or
- A and B are not the same atomic read-modify-write operation, and there exists an atomic modification X of M such that A reads the value stored by X and X precedes B in the modification order of M, or
- there exists an atomic modification X of M such that A is coherence-ordered before X and X is coherence-ordered before B.
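For intuition, a minimal release/acquire pairing (illustration only, not part of the proposed wording): the acquire load that reads true synchronizes with the release store, so the ordinary write to data is visible to the consumer.

#include <atomic>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
  data = 42;                                      // ordinary write
  ready.store(true, std::memory_order::release);  // release store
}

void consumer() {
  while (!ready.load(std::memory_order::acquire)) {}  // acquire load
  // The write to data happens before this point: data == 42.
}

int main() {
  std::jthread t1(producer), t2(consumer);
}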
There is a single total order S on all memory_order::seq_cst operations, including fences, that satisfies the following constraints. First, if A and B are memory_order::seq_cst operations and A strongly happens before B, then A precedes B in S. Second, for every pair of atomic operations A and B on an object M, where A is coherence-ordered before B, the following four conditions are required to be satisfied by S:
- if A and B are both memory_order::seq_cst operations, then A precedes B in S; and
- if A is a memory_order::seq_cst operation and B happens before a memory_order::seq_cst fence Y, then A precedes Y in S; and
- if a memory_order::seq_cst fence X happens before A and B is a memory_order::seq_cst operation, then X precedes B in S; and
- if a memory_order::seq_cst fence X happens before A and B happens before a memory_order::seq_cst fence Y, then X precedes Y in S.
[Note 3: This definition ensures that S is consistent with the modification order of any atomic object M. It also ensures that a memory_order::seq_cst load A of M gets its value either from the last modification of M that precedes A in S or from some non-memory_order::seq_cst modification of M that does not happen before any modification of M that precedes A in S. — end note]
[Note 4: We do not require that S be consistent with "happens before" ([intro.races]). This allows more efficient implementation of memory_order::acquire and memory_order::release on some machine architectures. It can produce surprising results when these are mixed with memory_order::seq_cst accesses. — end note]
[Note 5: memory_order::seq_cst ensures sequential consistency only for a program that is free of data races and uses exclusively memory_order::seq_cst atomic operations. Any use of weaker ordering will invalidate this guarantee unless extreme care is used. In many cases, memory_order::seq_cst atomic operations are reorderable with respect to other atomic operations performed by the same thread. — end note]
Implementations should ensure that no "out-of-thin-air" values are computed that circularly depend on their own computation.
[Note 6: For example, with x and y initially zero,
// Thread 1:
r1 = y.load(memory_order::relaxed);
x.store(r1, memory_order::relaxed);
// Thread 2:
r2 = x.load(memory_order::relaxed);
y.store(r2, memory_order::relaxed);
this recommendation discourages producing r1 == r2 == 42, since the store of 42 to y is only possible if the store to x stores 42, which circularly depends on the store to y storing 42. Note that without this restriction, such an execution is possible. — end note]
[Note 7: The recommendation similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:
// Thread 1:
r1 = x.load(memory_order::relaxed);
if (r1 == 42) y.store(42, memory_order::relaxed);
// Thread 2:
r2 = y.load(memory_order::relaxed);
if (r2 == 42) x.store(42, memory_order::relaxed);
— end note]
Atomic read-modify-write operations shall always read the last value (in the modification order) written before the write associated with the read-modify-write operation.
Recommended practice: The implementation should make atomic stores visible to atomic loads, and atomic loads should observe atomic stores, within a reasonable amount of time.

template<class T>
  constexpr T kill_dependency(T y) noexcept;
#define ATOMIC_BOOL_LOCK_FREE unspecified
#define ATOMIC_CHAR_LOCK_FREE unspecified
#define ATOMIC_CHAR8_T_LOCK_FREE unspecified
#define ATOMIC_CHAR16_T_LOCK_FREE unspecified
#define ATOMIC_CHAR32_T_LOCK_FREE unspecified
#define ATOMIC_WCHAR_T_LOCK_FREE unspecified
#define ATOMIC_SHORT_LOCK_FREE unspecified
#define ATOMIC_INT_LOCK_FREE unspecified
#define ATOMIC_LONG_LOCK_FREE unspecified
#define ATOMIC_LLONG_LOCK_FREE unspecified
#define ATOMIC_POINTER_LOCK_FREE unspecified
The ATOMIC_..._LOCK_FREE macros indicate the lock-free property of the corresponding atomic types, with the signed and unsigned variants grouped together. The properties also apply to the corresponding (partial) specializations of the atomic template. A value of 0 indicates that the types are never lock-free. A value of 1 indicates that the types are sometimes lock-free. A value of 2 indicates that the types are always lock-free.
On a hosted implementation ([compliance]), at least one signed integral specialization of the atomic template, along with the specialization for the corresponding unsigned type ([basic.fundamental]), is always lock-free. In any given program execution, the result of the lock-free query is the same for all atomic objects of the same type. Atomic operations that are not lock-free are considered to potentially block ([intro.progress]).
Recommended practice: Operations that are lock-free should also be address-free. The implementation of these operations should not depend on any per-process state. [Note 1: This restriction enables communication by memory that is mapped into a process more than once and by memory that is shared between two processes. — end note]
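For example, a program can refuse to build on targets where atomic<int> is not always lock-free (illustration only, using the macro value 2 described above):

#include <atomic>

static_assert(ATOMIC_INT_LOCK_FREE == 2,
              "this code assumes always-lock-free atomic<int>");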
An atomic waiting operation may block until it is unblocked by an atomic notifying operation, according to each function's effects. [Note 1: Programs are not guaranteed to observe transient atomic values, an issue known as the A-B-A problem, resulting in continued blocking if a condition is only temporarily met. — end note]
[Note 2: The following functions are atomic waiting operations:
- atomic<T>::wait,
- atomic_flag::wait,
- atomic_wait and atomic_wait_explicit,
- atomic_flag_wait and atomic_flag_wait_explicit, and
- atomic_ref<T>::wait.
— end note]
[Note 3: The following functions are atomic notifying operations:
- atomic<T>::notify_one and atomic<T>::notify_all,
- atomic_flag::notify_one and atomic_flag::notify_all,
- atomic_notify_one and atomic_notify_all,
- atomic_flag_notify_one and atomic_flag_notify_all, and
- atomic_ref<T>::notify_one and atomic_ref<T>::notify_all.
— end note]
A call to an atomic waiting operation on an atomic object M is eligible to be unblocked by a call to an atomic notifying operation on M if there exist side effects X and Y on M such that:
- the atomic waiting operation has blocked after observing the result of X,
- X precedes Y in the modification order of M, and
- Y happens before the call to the atomic notifying operation.
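A typical pairing of these operations (illustration only): the waiter blocks while the value is unchanged; the notifier modifies the object and then notifies, making the waiter eligible to be unblocked.

#include <atomic>
#include <thread>

std::atomic<int> state{0};

void waiter() {
  state.wait(0);  // blocks while the value is still 0; no lost wakeup,
                  // since wait re-checks the value before blocking
}

void notifier() {
  state.store(1);      // the side effect that changes the value
  state.notify_one();  // unblocks the waiter
}

int main() {
  std::jthread a(waiter), b(notifier);
}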
namespace std {
  template<class T> struct atomic_ref {
  private:
    T* ptr;                     // exposition only

  public:
    using value_type = T;
    static constexpr size_t required_alignment = implementation-defined;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(T&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T operator=(T) const noexcept;
    constexpr T load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator T() const noexcept;

    constexpr T exchange(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(T&, T,
                                         memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(T&, T,
                                           memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(T&, T,
                                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(T&, T,
                                           memory_order = memory_order::seq_cst) const noexcept;

    constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
An atomic_ref object applies atomic operations ([atomics.general]) to the object referenced by *ptr such that, for the lifetime ([basic.life]) of the atomic_ref object, the object referenced by *ptr is an atomic object ([intro.races]).
The program is ill-formed if is_trivially_copyable_v<T> is false.
The lifetime ([basic.life]) of an object referenced by *ptr shall exceed the lifetime of all atomic_refs that reference the object. While any atomic_ref instances exist that reference the *ptr object, all accesses to that object shall exclusively occur through those atomic_ref instances. No subobject of the object referenced by atomic_ref shall be concurrently referenced by any other atomic_ref object.
Atomic operations applied to an object through a referencing atomic_ref are atomic with respect to atomic operations applied through any other atomic_ref referencing the same object. [Note 1: Atomic operations or the atomic_ref constructor can acquire a shared resource, such as a lock associated with the referenced object, to enable atomic operations to be applied to the referenced object. — end note]
static constexpr size_t required_alignment;
The alignment required for an object to be referenced by an atomic reference, which is at least alignof(T).
[Note 1: Hardware could require an object referenced by an atomic_ref to have stricter alignment ([basic.align]) than other objects of type T. Further, whether operations on an atomic_ref are lock-free could depend on the alignment of the referenced object. For example, lock-free operations on std::complex<double> could be supported only if aligned to 2*alignof(double). — end note]
static constexpr bool is_always_lock_free;
The static data member is_always_lock_free is true if the atomic_ref type's operations are always lock-free, and false otherwise.

bool is_lock_free() const noexcept;
Returns: true if operations on all objects of the type atomic_ref<T> are lock-free, false otherwise.

constexpr atomic_ref(T& obj);
Preconditions: The referenced object is aligned to required_alignment.
Postconditions: *this references obj.

constexpr atomic_ref(const atomic_ref& ref) noexcept;
Postconditions: *this references the object referenced by ref.

constexpr void store(T desired, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value referenced by *ptr with the value of desired. Memory is affected according to the value of order.

constexpr T operator=(T desired) const noexcept;
Effects: Equivalent to:
store(desired);
return desired;

constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value referenced by *ptr.

constexpr operator T() const noexcept;
Effects: Equivalent to: return load();

constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) const noexcept;
Effects: Atomically replaces the value referenced by *ptr with desired. Memory is affected according to the value of order.
Returns: Atomically returns the value referenced by *ptr immediately before the effects.
. constexpr bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order::seq_cst) const noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order::seq_cst) const noexcept;
Preconditions:
failure is
memory_order::relaxed,
memory_order::consume,
memory_order::acquire, or
memory_order::seq_cst. Effects: Retrieves the value in
expected. It then atomically compares the value representation of
the value referenced by
*ptr for equality
with that previously retrieved from
expected,
and if
true, replaces the value referenced by
*ptr
with that in
desired. If and only if the comparison is
true,
memory is affected according to the value of
success, and
if the comparison is
false,
memory is affected according to the value of
failure. When only one
memory_order argument is supplied,
the value of
success is
order, and
the value of
failure is
order
except that a value of
memory_order::acq_rel shall be replaced by
the value
memory_order::acquire and
a value of
memory_order::release shall be replaced by
the value
memory_order::relaxed. If and only if the comparison is
false then,
after the atomic operation,
the value in
expected is replaced by
the value read from the value referenced by
*ptr
during the atomic comparison
. If the operation returns
true,
these operations are atomic read-modify-write operations (
[intro.races])
on the value referenced by
*ptr. Otherwise, these operations are atomic load operations on that memory
.Returns: The result of the comparison
. Remarks: A weak compare-and-exchange operation may fail spuriously
. That is, even when the contents of memory referred to
by
expected and
ptr are equal,
it may return
false and
store back to
expected the same memory contents
that were originally there
. [
Note 2:
This spurious failure enables implementation of compare-and-exchange
on a broader class of machines, e.g., load-locked store-conditional machines
. A consequence of spurious failure is
that nearly all uses of weak compare-and-exchange will be in a loop
. When a compare-and-exchange is in a loop,
the weak version will yield better performance on some platforms
. When a weak compare-and-exchange would require a loop and
a strong one would not, the strong one is preferable
. —
end note]
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions:
order is
memory_order::relaxed,
memory_order::consume,
memory_order::ac-
quire, or
memory_order::seq_cst. Effects: Repeatedly performs the following steps, in order:
Evaluates
load(order) and
compares its value representation for equality against that of
old.If they compare unequal, returns
.Blocks until it
is unblocked by an atomic notifying operation or is unblocked spuriously
.
Remarks: This function is an atomic waiting operation (
[atomics.wait])
on atomic object
*ptr. constexpr void notify_one() const noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation on
*ptr
that is eligible to be unblocked (
[atomics.wait]) by this call,
if any such atomic waiting operations exist
. Remarks: This function is an atomic notifying operation (
[atomics.wait])
on atomic object
*ptr. constexpr void notify_all() const noexcept;
Effects: Unblocks the execution of all atomic waiting operations on
*ptr
that are eligible to be unblocked (
[atomics.wait]) by this call
. Remarks: This function is an atomic notifying operation (
[atomics.wait])
on atomic object
*ptr. There are specializations of the
atomic_ref class template
for the integral types
char,
signed char,
unsigned char,
short,
unsigned short,
int,
unsigned int,
long,
unsigned long,
long long,
unsigned long long,
char8_t,
char16_t,
char32_t,
wchar_t,
and any other types needed by the typedefs in the header
. For each such type
integral-type,
the specialization
atomic_ref<integral-type> provides
additional atomic operations appropriate to integral types
namespace std {
  template<> struct atomic_ref<integral-type> {
  private:
    integral-type* ptr;         // exposition only

  public:
    using value_type = integral-type;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(integral-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type operator=(integral-type) const noexcept;
    constexpr integral-type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator integral-type() const noexcept;

    constexpr integral-type exchange(integral-type,
                                     memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type,
                                         memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type,
                                           memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type,
                                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type,
                                           memory_order = memory_order::seq_cst) const noexcept;

    constexpr integral-type fetch_add(integral-type,
                                      memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_sub(integral-type,
                                      memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_and(integral-type,
                                      memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_or(integral-type,
                                     memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_xor(integral-type,
                                      memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_max(integral-type,
                                      memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_min(integral-type,
                                      memory_order = memory_order::seq_cst) const noexcept;

    constexpr integral-type operator++(int) const noexcept;
    constexpr integral-type operator--(int) const noexcept;
    constexpr integral-type operator++() const noexcept;
    constexpr integral-type operator--() const noexcept;
    constexpr integral-type operator+=(integral-type) const noexcept;
    constexpr integral-type operator-=(integral-type) const noexcept;
    constexpr integral-type operator&=(integral-type) const noexcept;
    constexpr integral-type operator|=(integral-type) const noexcept;
    constexpr integral-type operator^=(integral-type) const noexcept;

    constexpr void wait(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 148.

constexpr integral-type fetch_key(integral-type operand,
                                  memory_order order = memory_order::seq_cst) const noexcept;
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type. [Note 2: There are no undefined results arising from the computation. — end note] For fetch_max and fetch_min, the maximum and minimum computation is performed as if by the max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.

constexpr integral-type operator op=(integral-type operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
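The unsigned-wraparound rule above means signed fetch_add cannot overflow into undefined behavior; a sketch:

#include <atomic>
#include <limits>

void wrap_demo() {
  alignas(std::atomic_ref<int>::required_alignment)
      int value = std::numeric_limits<int>::max();
  std::atomic_ref<int> ref(value);
  // Performed as if in unsigned int, then converted back: the result
  // wraps to INT_MIN instead of being undefined.
  int previous = ref.fetch_add(1);
  (void)previous;  // previous == INT_MAX; value is now INT_MIN
}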
There are specializations of the atomic_ref class template for all cv-unqualified floating-point types. For each such type floating-point-type, the specialization atomic_ref<floating-point-type> provides additional atomic operations appropriate to floating-point types.

namespace std {
  template<> struct atomic_ref<floating-point-type> {
  private:
    floating-point-type* ptr;   // exposition only

  public:
    using value_type = floating-point-type;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(floating-point-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(floating-point-type,
                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr floating-point-type operator=(floating-point-type) const noexcept;
    constexpr floating-point-type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator floating-point-type() const noexcept;

    constexpr floating-point-type exchange(floating-point-type,
                                           memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type,
                                         memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                           memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type,
                                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                           memory_order = memory_order::seq_cst) const noexcept;

    constexpr floating-point-type fetch_add(floating-point-type,
                                            memory_order = memory_order::seq_cst) const noexcept;
    constexpr floating-point-type fetch_sub(floating-point-type,
                                            memory_order = memory_order::seq_cst) const noexcept;

    constexpr floating-point-type operator+=(floating-point-type) const noexcept;
    constexpr floating-point-type operator-=(floating-point-type) const noexcept;

    constexpr void wait(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 148.

constexpr floating-point-type fetch_key(floating-point-type operand,
                                        memory_order order = memory_order::seq_cst) const noexcept;
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: If the result is not a representable value for its type ([expr.pre]), the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.

constexpr floating-point-type operator op=(floating-point-type operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
namespace std {
  template<class T> struct atomic_ref<T*> {
  private:
    T** ptr;                    // exposition only

  public:
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr size_t required_alignment = implementation-defined;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(T*&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* operator=(T*) const noexcept;
    constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator T*() const noexcept;

    constexpr T* exchange(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(T*&, T*,
                                         memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(T*&, T*,
                                           memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(T*&, T*,
                                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(T*&, T*,
                                           memory_order = memory_order::seq_cst) const noexcept;

    constexpr T* fetch_add(difference_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_sub(difference_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) const noexcept;

    constexpr T* operator++(int) const noexcept;
    constexpr T* operator--(int) const noexcept;
    constexpr T* operator++() const noexcept;
    constexpr T* operator--() const noexcept;
    constexpr T* operator+=(difference_type) const noexcept;
    constexpr T* operator-=(difference_type) const noexcept;

    constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 149.

constexpr T* fetch_key(difference_type operand, memory_order order = memory_order::seq_cst) const noexcept;
Mandates: T is a complete object type.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior. For fetch_max and fetch_min, the maximum and minimum computation is performed as if by the max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments. [Note 1: If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]). — end note]

constexpr T* operator op=(difference_type operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
33.5.7.6 Member operators common to integers and pointers to objects [atomics.ref.memop]

constexpr value_type operator++(int) const noexcept;
Effects: Equivalent to: return fetch_add(1);

constexpr value_type operator--(int) const noexcept;
Effects: Equivalent to: return fetch_sub(1);

constexpr value_type operator++() const noexcept;
Effects: Equivalent to: return fetch_add(1) + 1;

constexpr value_type operator--() const noexcept;
Effects: Equivalent to: return fetch_sub(1) - 1;
namespace std {
  template<class T> struct atomic {
    using value_type = T;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
    constexpr atomic(T) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    T load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T load(memory_order = memory_order::seq_cst) const noexcept;
    operator T() const volatile noexcept;
    constexpr operator T() const noexcept;
    void store(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T, memory_order = memory_order::seq_cst) noexcept;
    T operator=(T) volatile noexcept;
    constexpr T operator=(T) noexcept;

    T exchange(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T exchange(T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) noexcept;

    void wait(T, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The template argument for T shall meet the Cpp17CopyConstructible and Cpp17CopyAssignable requirements. The program is ill-formed if any of
- is_trivially_copyable_v<T>,
- is_copy_constructible_v<T>,
- is_move_constructible_v<T>,
- is_copy_assignable_v<T>, or
- is_move_assignable_v<T>
is false. [Note 1: Type arguments that are not also statically initializable can be difficult to use. — end note]
The specialization atomic<bool> is a standard-layout struct. It has a trivial destructor.
[Note 2: The representation of an atomic specialization need not have the same size and alignment requirement as its corresponding argument type. — end note]
constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
Mandates: is_default_constructible_v<T> is true.
Effects: Initializes the atomic object with the value of T().

constexpr atomic(T desired) noexcept;
Effects: Initializes the object with the value desired. [Note 1: It is possible to have an access to an atomic object A race with its construction, for example by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note]
static constexpr bool is_always_lock_free = implementation-defined;
The static data member is_always_lock_free is true if the atomic type's operations are always lock-free, and false otherwise. [Note 2: The value of is_always_lock_free is consistent with the value of the corresponding ATOMIC_..._LOCK_FREE macro, if defined. — end note]

bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;
Returns: true if the object's operations are lock-free, false otherwise. [Note 3: The return value of the is_lock_free member function is consistent with the value of is_always_lock_free for the same type. — end note]
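This distinction is visible in code (a sketch): is_always_lock_free stays usable at compile time, while is_lock_free, per SG1's guidance above, remains a run-time query:

#include <atomic>

// Compile-time constant (true on mainstream targets, but
// implementation-defined in general).
static_assert(std::atomic<int>::is_always_lock_free);

bool query() {
  std::atomic<long long> x{0};
  return x.is_lock_free();  // run-time answer; may depend on the environment
}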
void store(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired. Memory is affected according to the value of order.

T operator=(T desired) volatile noexcept;
constexpr T operator=(T desired) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to store(desired).

T load(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by this.

operator T() const volatile noexcept;
constexpr operator T() const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return load();

T exchange(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with desired. Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by this immediately before the effects.
bool compare_exchange_weak(T& expected, T desired,
                           memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
                                     memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
                                       memory_order success, memory_order failure) noexcept;
bool compare_exchange_weak(T& expected, T desired,
                           memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
                                     memory_order order = memory_order::seq_cst) noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
                                       memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: failure is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Retrieves the value in expected. It then atomically compares the value representation of the value pointed to by this for equality with that previously retrieved from expected, and if true, replaces the value pointed to by this with that in desired. If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected according to the value of failure. When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed. If and only if the comparison is false then, after the atomic operation, the value in expected is replaced by the value pointed to by this during the atomic comparison. If the operation returns true, these operations are atomic read-modify-write operations ([intro.multithread]) on the memory pointed to by this. Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
. [
Note 4:
For example, the effect of
compare_exchange_strong
on objects without padding bits (
[basic.types.general]) is
if (memcmp(this, &expected, sizeof(*this)) == 0)
memcpy(this, &desired, sizeof(*this));
else
memcpy(&expected, this, sizeof(*this));
—
end note]
[Example 1: The expected use of the compare-and-exchange operations is as follows. The compare-and-exchange operations will update expected when another iteration of the loop is needed.
expected = current.load();
do {
  desired = function(expected);
} while (!current.compare_exchange_weak(expected, desired));
— end example]
[Example 2: Because the expected value is updated only on failure, code releasing the memory containing the expected value on success will work. For example, list head insertion will act atomically and would not introduce a data race in the following code:
do {
  p->next = head;
} while (!head.compare_exchange_weak(p->next, p));
— end example]
Implementations should ensure that weak compare-and-exchange operations do not consistently return false unless either the atomic object has value different from expected or there are concurrent modifications to the atomic object.
Remarks: A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by expected and this are equal, it may return false and store back to expected the same memory contents that were originally there. [Note 5: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. — end note]
[Note 6: Under cases where the memcpy and memcmp semantics of the compare-and-exchange operations apply, the comparisons can fail for values that compare equal with operator== if the value representation has trap bits or alternate representations of the same value. Notably, on implementations conforming to ISO/IEC/IEEE 60559, floating-point -0.0 and +0.0 will not compare equal with memcmp but will compare equal with operator==, and NaNs with the same payload will compare equal with memcmp but will not compare equal with operator==. — end note]
[Note 7: Because compare-and-exchange acts on an object's value representation, padding bits that never participate in the object's value representation are ignored. As a consequence, the following code is guaranteed to avoid spurious failure:
struct padded {
  char clank = 0x42;
  unsigned biff = 0xC0DEFEFE;
};
atomic<padded> pad = {};
bool zap() {
  padded expected, desired{0, 0};
  return pad.compare_exchange_strong(expected, desired);
}
— end note]
[Note 8: For a union with bits that participate in the value representation of some members but not others, compare-and-exchange might always fail. This is because such padding bits have an indeterminate value when they do not participate in the value representation of the active member. As a consequence, the following code is not guaranteed to ever succeed:
union pony {
  double celestia = 0.;
  short luna;
};
atomic<pony> princesses = {};
bool party(pony desired) {
  pony expected;
  return princesses.compare_exchange_strong(expected, desired);
}
— end note]
void wait(T old, memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares its value representation for equality against that of old.
- If they compare unequal, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.

void notify_one() volatile noexcept;
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.

void notify_all() volatile noexcept;
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
There are specializations of the atomic class template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char8_t, char16_t, char32_t, wchar_t, and any other types needed by the typedefs in the header. For each such type integral-type, the specialization atomic<integral-type> provides additional atomic operations appropriate to integral types.

namespace std {
  template<> struct atomic<integral-type> {
    using value_type = integral-type;
    using difference_type = value_type;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(integral-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type operator=(integral-type) volatile noexcept;
    constexpr integral-type operator=(integral-type) noexcept;
    integral-type load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr integral-type load(memory_order = memory_order::seq_cst) const noexcept;
    operator integral-type() const volatile noexcept;
    constexpr operator integral-type() const noexcept;

    integral-type exchange(integral-type,
                           memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type exchange(integral-type,
                                     memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type,
                               memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type,
                                         memory_order, memory_order) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type,
                                 memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type,
                                           memory_order, memory_order) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type,
                                         memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type,
                                           memory_order = memory_order::seq_cst) noexcept;

    integral-type fetch_add(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_add(integral-type,
                                      memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_sub(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_sub(integral-type,
                                      memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_and(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_and(integral-type,
                                      memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_or(integral-type,
                           memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_or(integral-type,
                                     memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_xor(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_xor(integral-type,
                                      memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_max(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_max(integral-type,
                                      memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_min(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_min(integral-type,
                                      memory_order = memory_order::seq_cst) noexcept;

    integral-type operator++(int) volatile noexcept;
    constexpr integral-type operator++(int) noexcept;
    integral-type operator--(int) volatile noexcept;
    constexpr integral-type operator--(int) noexcept;
    integral-type operator++() volatile noexcept;
    constexpr integral-type operator++() noexcept;
    integral-type operator--() volatile noexcept;
    constexpr integral-type operator--() noexcept;
    integral-type operator+=(integral-type) volatile noexcept;
    constexpr integral-type operator+=(integral-type) noexcept;
    integral-type operator-=(integral-type) volatile noexcept;
    constexpr integral-type operator-=(integral-type) noexcept;
    integral-type operator&=(integral-type) volatile noexcept;
    constexpr integral-type operator&=(integral-type) noexcept;
    integral-type operator|=(integral-type) volatile noexcept;
    constexpr integral-type operator|=(integral-type) noexcept;
    integral-type operator^=(integral-type) volatile noexcept;
    constexpr integral-type operator^=(integral-type) noexcept;

    void wait(integral-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The atomic integral specializations are standard-layout structs. They each have a trivial destructor.
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 148.

T fetch_key(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T fetch_key(T operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type. [Note 2: There are no undefined results arising from the computation. — end note] For fetch_max and fetch_min, the maximum and minimum computation is performed as if by the max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.

T operator op=(T operand) volatile noexcept;
constexpr T operator op=(T operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
There are specializations of the atomic class template for all cv-unqualified floating-point types. For each such type floating-point-type, the specialization atomic<floating-point-type> provides additional atomic operations appropriate to floating-point types.

namespace std {
  template<> struct atomic<floating-point-type> {
    using value_type = floating-point-type;
    using difference_type = value_type;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(floating-point-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type operator=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator=(floating-point-type) noexcept;
    floating-point-type load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr floating-point-type load(memory_order = memory_order::seq_cst) const noexcept;
    operator floating-point-type() const volatile noexcept;
    constexpr operator floating-point-type() const noexcept;

    floating-point-type exchange(floating-point-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type exchange(floating-point-type,
                                           memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type,
                               memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type,
                                         memory_order, memory_order) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                 memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                           memory_order, memory_order) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type,
                                         memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                           memory_order = memory_order::seq_cst) noexcept;

    floating-point-type fetch_add(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_add(floating-point-type,
                                            memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_sub(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_sub(floating-point-type,
                                            memory_order = memory_order::seq_cst) noexcept;

    floating-point-type operator+=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator+=(floating-point-type) noexcept;
    floating-point-type operator-=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator-=(floating-point-type) noexcept;

    void wait(floating-point-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The atomic floating-point specializations are standard-layout structs. They each have a trivial destructor. Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic addition and subtraction computations. The correspondence among key, operator, and computation is specified in Table 148.
T fetch_key(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T fetch_key(T operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
T operator op=(T operand) volatile noexcept;
constexpr T operator op=(T operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
Remarks: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
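For illustration (not part of the proposed wording), here is a minimal sketch of what the constexpr floating-point operations above enable, assuming an implementation of this paper:

#include <atomic>

constexpr double accumulate() {
    std::atomic<double> total{0.0};  // constexpr constructor
    total.fetch_add(1.5);            // sequential read-modify-write during constant evaluation
    total += 2.5;                    // operator+= is: return fetch_add(2.5) + 2.5;
    return total.load();
}
static_assert(accumulate() == 4.0);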
namespace std {
template<class T> struct atomic<T*> {
using value_type = T*;
using difference_type = ptrdiff_t;
static constexpr bool is_always_lock_free = implementation-defined;
bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;
constexpr atomic() noexcept;
constexpr atomic(T*) noexcept;
atomic(const atomic&) = delete;
atomic& operator=(const atomic&) = delete;
atomic& operator=(const atomic&) volatile = delete;
void store(T*, memory_order = memory_order::seq_cst) volatile noexcept;
constexpr void store(T*, memory_order = memory_order::seq_cst) noexcept;
T* operator=(T*) volatile noexcept;
constexpr T* operator=(T*) noexcept;
T* load(memory_order = memory_order::seq_cst) const volatile noexcept;
constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
operator T*() const volatile noexcept;
constexpr operator T*() const noexcept;
T* exchange(T*, memory_order = memory_order::seq_cst) volatile noexcept;
constexpr T* exchange(T*, memory_order = memory_order::seq_cst) noexcept;
bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
constexpr bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
constexpr bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
bool compare_exchange_weak(T*&, T*,
memory_order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_weak(T*&, T*,
memory_order = memory_order::seq_cst) noexcept;
bool compare_exchange_strong(T*&, T*,
memory_order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_strong(T*&, T*,
memory_order = memory_order::seq_cst) noexcept;
T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
T* fetch_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) noexcept;
T* fetch_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) noexcept;
T* operator++(int) volatile noexcept;
constexpr T* operator++(int) noexcept;
T* operator--(int) volatile noexcept;
constexpr T* operator--(int) noexcept;
T* operator++() volatile noexcept;
constexpr T* operator++() noexcept;
T* operator--() volatile noexcept;
constexpr T* operator--() noexcept;
T* operator+=(ptrdiff_t) volatile noexcept;
constexpr T* operator+=(ptrdiff_t) noexcept;
T* operator-=(ptrdiff_t) volatile noexcept;
constexpr T* operator-=(ptrdiff_t) noexcept;
void wait(T*, memory_order = memory_order::seq_cst) const volatile noexcept;
constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
void notify_one() volatile noexcept;
constexpr void notify_one() noexcept;
void notify_all() volatile noexcept;
constexpr void notify_all() noexcept;
};
}
There is a partial specialization of the atomic class template for pointers. Specializations of this partial specialization are standard-layout structs. They each have a trivial destructor. Descriptions are provided below only for members that differ from the primary template.
The following operations perform pointer arithmetic. The correspondence among key, operator, and computation is specified in Table 149.
T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Mandates: T is a complete object type.
[Note 1: Pointer arithmetic on void* or function pointers is ill-formed. — end note]
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior. For fetch_max and fetch_min, the maximum and minimum computation is performed as if by the max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
[Note 2: If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]). — end note]
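For illustration (not part of the proposed wording), a small sketch of fetch_max on the pointer specialization combined with the constexpr proposed here; the computation follows the max algorithm as described above:

#include <atomic>

constexpr bool max_demo() {
    int arr[4]{};
    std::atomic<int*> high{arr};
    high.fetch_max(arr + 3);   // stored value becomes arr + 3, returns arr
    high.fetch_max(arr + 1);   // no change: arr + 3 already compares greater
    return high.load() == arr + 3;
}
static_assert(max_demo());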
T* operator op=(ptrdiff_t operand) volatile noexcept;
constexpr T* operator op=(ptrdiff_t operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
33.5.8.6 Member operators common to integers and pointers to objects [atomics.types.memop]
value_type operator++(int) volatile noexcept;
constexpr value_type operator++(int) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_add(1);
value_type operator--(int) volatile noexcept;
constexpr value_type operator--(int) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_sub(1);
value_type operator++() volatile noexcept;
constexpr value_type operator++() noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_add(1) + 1;
value_type operator--() volatile noexcept;
constexpr value_type operator--() noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_sub(1) - 1;
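For illustration (not part of the proposed wording), the member operators above make constexpr pointer stepping behave as it does at runtime; a minimal sketch assuming an implementation of this paper:

#include <atomic>

constexpr bool pointer_ops() {
    int data[3]{1, 2, 3};
    std::atomic<int*> ptr{data};
    ptr++;                          // equivalent to fetch_add(1), yields the old value
    ++ptr;                          // equivalent to fetch_add(1) + 1
    return ptr.load() == data + 2;
}
static_assert(pointer_ops());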
The library provides partial specializations of the atomic template for shared-ownership smart pointers ([util.sharedptr]).
[Note 1: The partial specializations are declared in header <memory>. — end note]
The template parameter T of these partial specializations may be an incomplete type.
All changes to an atomic smart pointer in [util.smartptr.atomic], and all associated use_count increments, are guaranteed to be performed atomically. Associated use_count decrements are sequenced after the atomic operation, but are not required to be part of it. Any associated deletion and deallocation are sequenced after the atomic update step and are not part of the atomic operation.
[Note 2: If the atomic operation uses locks, locks acquired by the implementation will be held when any use_count adjustments are performed, and will not be held when any destruction or deallocation resulting from this is performed. — end note]
[Example 1:
template<typename T> class atomic_list {
struct node {
T t;
shared_ptr<node> next;
};
atomic<shared_ptr<node>> head;
public:
shared_ptr<node> find(T t) const {
auto p = head.load();
while (p && p->t != t)
p = p->next;
return p;
}
void push_front(T t) {
auto p = make_shared<node>();
p->t = t;
p->next = head;
while (!head.compare_exchange_weak(p->next, p)) {}
}
};
— end example]
namespace std {
template<class T> struct atomic<shared_ptr<T>> {
using value_type = shared_ptr<T>;
static constexpr bool is_always_lock_free = implementation-defined;
bool is_lock_free() const noexcept;
constexpr atomic() noexcept;
constexpr atomic(nullptr_t) noexcept : atomic() { }
constexpr atomic(shared_ptr<T> desired) noexcept;
atomic(const atomic&) = delete;
void operator=(const atomic&) = delete;
constexpr shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
constexpr operator shared_ptr<T>() const noexcept;
constexpr void store(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
constexpr void operator=(shared_ptr<T> desired) noexcept;
constexpr void operator=(nullptr_t) noexcept;
constexpr shared_ptr<T> exchange(shared_ptr<T> desired,
memory_order order = memory_order::seq_cst) noexcept;
constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
memory_order order = memory_order::seq_cst) noexcept;
constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
memory_order order = memory_order::seq_cst) noexcept;
constexpr void wait(shared_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
constexpr void notify_one() noexcept;
constexpr void notify_all() noexcept;
private:
shared_ptr<T> p;
};
}
constexpr atomic() noexcept;
Effects: Initializes p{}.
constexpr atomic(shared_ptr<T> desired) noexcept;
Effects: Initializes the object with the value desired.
[Note 1: It is possible to have an access to an atomic object A race with its construction, for example, by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note]
constexpr void store(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired as if by p.swap(desired). Memory is affected according to the value of order.
constexpr void operator=(shared_ptr<T> desired) noexcept;
Effects: Equivalent to store(desired).
constexpr void operator=(nullptr_t) noexcept;
Effects: Equivalent to store(nullptr).
constexpr shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns p.
constexpr operator shared_ptr<T>() const noexcept;
Effects: Equivalent to: return load();
constexpr shared_ptr<T> exchange(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Effects: Atomically replaces p with desired as if by p.swap(desired). Memory is affected according to the value of order.
Returns: Atomically returns the value of p immediately before the effects.
constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                                     memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
memory_order success, memory_order failure) noexcept;
Preconditions: failure is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: If p is equivalent to expected, assigns desired to p and has synchronization semantics corresponding to the value of success, otherwise assigns p to expected and has synchronization semantics corresponding to the value of failure.
Returns: true if p was equivalent to expected, false otherwise.
Remarks: Two shared_ptr objects are equivalent if they store the same pointer value and either share ownership or are both empty. The weak form may fail spuriously. If the operation returns true, expected is not accessed after the atomic update and the operation is an atomic read-modify-write operation ([intro.multithread]) on the memory pointed to by this. Otherwise, the operation is an atomic load operation on that memory, and expected is updated with the existing value read from the atomic object in the attempted atomic update. The use_count update corresponding to the write to expected is part of the atomic operation. The write to expected itself is not required to be part of the atomic operation.
constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                                     memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to:
return compare_exchange_weak(expected, desired, order, fail_order);
where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                                       memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to:
return compare_exchange_strong(expected, desired, order, fail_order);
where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr void wait(shared_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares it to old.
- If the two are not equivalent, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: Two shared_ptr objects are equivalent if they store the same pointer and either share ownership or are both empty.
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
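For illustration (not part of the proposed wording), the equivalence rule above means a compare-exchange whose expected value was just loaded from the atomic object succeeds; this sketch runs at runtime today and would additionally work in constant evaluation once shared_ptr itself is usable there:

#include <atomic>
#include <memory>

void cas_demo() {
    std::atomic<std::shared_ptr<int>> slot{std::make_shared<int>(1)};
    std::shared_ptr<int> expected = slot.load();  // shares ownership: "equivalent" to the stored value
    auto desired = std::make_shared<int>(2);
    bool ok = slot.compare_exchange_strong(expected, desired);  // succeeds
    (void)ok;
}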
namespace std {
template<class T> struct atomic<weak_ptr<T>> {
using value_type = weak_ptr<T>;
static constexpr bool is_always_lock_free = implementation-defined;
bool is_lock_free() const noexcept;
constexpr atomic() noexcept;
constexpr atomic(weak_ptr<T> desired) noexcept;
atomic(const atomic&) = delete;
void operator=(const atomic&) = delete;
constexpr weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
constexpr operator weak_ptr<T>() const noexcept;
constexpr void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
constexpr void operator=(weak_ptr<T> desired) noexcept;
constexpr weak_ptr<T> exchange(weak_ptr<T> desired,
memory_order order = memory_order::seq_cst) noexcept;
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order order = memory_order::seq_cst) noexcept;
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order order = memory_order::seq_cst) noexcept;
constexpr void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
constexpr void notify_one() noexcept;
constexpr void notify_all() noexcept;
private:
weak_ptr<T> p;
};
}
constexpr atomic() noexcept;
Effects: Initializes p{}.
constexpr atomic(weak_ptr<T> desired) noexcept;
Effects: Initializes the object with the value desired.
[Note 1: It is possible to have an access to an atomic object A race with its construction, for example, by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note]
constexpr void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired as if by p.swap(desired). Memory is affected according to the value of order.
constexpr void operator=(weak_ptr<T> desired) noexcept;
Effects: Equivalent to store(desired).
constexpr weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns p.
constexpr operator weak_ptr<T>() const noexcept;
Effects: Equivalent to: return load();
constexpr weak_ptr<T> exchange(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Effects: Atomically replaces p with desired as if by p.swap(desired). Memory is affected according to the value of order.
Returns: Atomically returns the value of p immediately before the effects.
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                     memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
memory_order success, memory_order failure) noexcept;
Preconditions: failure is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: If p is equivalent to expected, assigns desired to p and has synchronization semantics corresponding to the value of success, otherwise assigns p to expected and has synchronization semantics corresponding to the value of failure.
Returns: true if p was equivalent to expected, false otherwise.
Remarks: Two weak_ptr objects are equivalent if they store the same pointer value and either share ownership or are both empty. The weak form may fail spuriously. If the operation returns true, expected is not accessed after the atomic update and the operation is an atomic read-modify-write operation ([intro.multithread]) on the memory pointed to by this. Otherwise, the operation is an atomic load operation on that memory, and expected is updated with the existing value read from the atomic object in the attempted atomic update. The use_count update corresponding to the write to expected is part of the atomic operation. The write to expected itself is not required to be part of the atomic operation.
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                     memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to:
return compare_exchange_weak(expected, desired, order, fail_order);
where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                       memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to:
return compare_exchange_strong(expected, desired, order, fail_order);
where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares it to old.
- If the two are not equivalent, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: Two weak_ptr objects are equivalent if they store the same pointer and either share ownership or are both empty.
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
A non-member function template whose name matches the pattern atomic_f or the pattern atomic_f_explicit invokes the member function f, with the value of the first parameter as the object expression and the values of the remaining parameters (if any) as the arguments of the member function call, in order. An argument for a parameter of type atomic<T>::value_type* is dereferenced when passed to the member function call. If no such member function exists, the program is ill-formed.
[Note 1: The non-member functions enable programmers to write code that can be compiled as either C or C++, for example in a shared header file. — end note]
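For illustration (not part of the proposed wording), the naming pattern above maps the C-compatible non-member functions onto member calls:

#include <atomic>

std::atomic<int> counter{0};

int pattern_demo() {
    // atomic_fetch_add(&counter, 5) invokes counter.fetch_add(5)
    int a = std::atomic_fetch_add(&counter, 5);
    // the _explicit form forwards the trailing memory_order argument
    int b = std::atomic_fetch_add_explicit(&counter, 1, std::memory_order::relaxed);
    return a + b;
}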
namespace std {
struct atomic_flag {
constexpr atomic_flag() noexcept;
atomic_flag(const atomic_flag&) = delete;
atomic_flag& operator=(const atomic_flag&) = delete;
atomic_flag& operator=(const atomic_flag&) volatile = delete;
bool test(memory_order = memory_order::seq_cst) const volatile noexcept;
constexpr bool test(memory_order = memory_order::seq_cst) const noexcept;
bool test_and_set(memory_order = memory_order::seq_cst) volatile noexcept;
constexpr bool test_and_set(memory_order = memory_order::seq_cst) noexcept;
void clear(memory_order = memory_order::seq_cst) volatile noexcept;
constexpr void clear(memory_order = memory_order::seq_cst) noexcept;
void wait(bool, memory_order = memory_order::seq_cst) const volatile noexcept;
constexpr void wait(bool, memory_order = memory_order::seq_cst) const noexcept;
void notify_one() volatile noexcept;
constexpr void notify_one() noexcept;
void notify_all() volatile noexcept;
constexpr void notify_all() noexcept;
};
}
The atomic_flag type provides the classic test-and-set functionality. It has two states, set and clear.
Operations on an object of type atomic_flag shall be lock-free. The operations should also be address-free.
The atomic_flag type is a standard-layout struct. It has a trivial destructor.
constexpr atomic_flag::atomic_flag() noexcept;
Effects: Initializes *this to the clear state.
bool atomic_flag_test(const volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test(const atomic_flag* object) noexcept;
bool atomic_flag_test_explicit(const volatile atomic_flag* object,
memory_order order) noexcept;
constexpr bool atomic_flag_test_explicit(const atomic_flag* object,
memory_order order) noexcept;
bool atomic_flag::test(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr bool atomic_flag::test(memory_order order = memory_order::seq_cst) const noexcept;
For atomic_flag_test, let order be memory_order::seq_cst.
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by object or this.
bool atomic_flag_test_and_set(volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test_and_set(atomic_flag* object) noexcept;
bool atomic_flag_test_and_set_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr bool atomic_flag_test_and_set_explicit(atomic_flag* object, memory_order order) noexcept;
bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) noexcept;
Effects: Atomically sets the value pointed to by object or by this to true. Memory is affected according to the value of order.
Returns: Atomically, the value of the object immediately before the effects.
void atomic_flag_clear(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_clear(atomic_flag* object) noexcept;
void atomic_flag_clear_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr void atomic_flag_clear_explicit(atomic_flag* object, memory_order order) noexcept;
void atomic_flag::clear(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void atomic_flag::clear(memory_order order = memory_order::seq_cst) noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically sets the value pointed to by object or by this to false. Memory is affected according to the value of order.
void atomic_flag_wait(const volatile atomic_flag* object, bool old) noexcept;
constexpr void atomic_flag_wait(const atomic_flag* object, bool old) noexcept;
void atomic_flag_wait_explicit(const volatile atomic_flag* object,
bool old, memory_order order) noexcept;
constexpr void atomic_flag_wait_explicit(const atomic_flag* object,
bool old, memory_order order) noexcept;
void atomic_flag::wait(bool old, memory_order order =
memory_order::seq_cst) const volatile noexcept;
constexpr void atomic_flag::wait(bool old, memory_order order =
memory_order::seq_cst) const noexcept;
For atomic_flag_wait, let order be memory_order::seq_cst. Let flag be object for the non-member functions and this for the member functions.
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates flag->test(order) != old.
- If the result of that evaluation is true, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
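For illustration (not part of the proposed wording), these steps give wait a natural single-threaded meaning in constant evaluation, assuming an implementation of this paper:

#include <atomic>

constexpr bool wait_demo() {
    std::atomic_flag f;   // default-constructed: clear state
    f.test_and_set();     // now set
    f.wait(false);        // returns immediately: test() != old
    // f.wait(true) would never be unblocked here; in constant evaluation
    // the endless re-checking makes the evaluation fail.
    return true;
}
static_assert(wait_demo());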
void atomic_flag_notify_one(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_one(atomic_flag* object) noexcept;
void atomic_flag::notify_one() volatile noexcept;
constexpr void atomic_flag::notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
void atomic_flag_notify_all(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_all(atomic_flag* object) noexcept;
void atomic_flag::notify_all() volatile noexcept;
constexpr void atomic_flag::notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
#define ATOMIC_FLAG_INIT see below
Remarks: The macro ATOMIC_FLAG_INIT is defined in such a way that it can be used to initialize an object of type atomic_flag to the clear state. The macro can be used in the form:
atomic_flag guard = ATOMIC_FLAG_INIT;
It is unspecified whether the macro can be used in other initialization contexts. For a complete static-duration object, that initialization shall be static.
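For illustration (not part of the proposed wording), the constexpr test-and-set interface above, assuming an implementation of this paper:

#include <atomic>

constexpr bool flag_demo() {
    std::atomic_flag f;               // clear state
    bool was_set = f.test_and_set();  // false: flag was previously clear
    bool now = f.test();              // true: flag is now set
    f.clear();                        // back to the clear state
    return !was_set && now && !f.test();
}
static_assert(flag_demo());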
This subclause introduces synchronization primitives called fences. Fences can have acquire semantics, release semantics, or both. A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y, both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
A release fence A synchronizes with an atomic operation B that performs an acquire operation on an atomic object M if there exists an atomic operation X such that A is sequenced before X, X modifies M, and B reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
An atomic operation A that is a release operation on an atomic object M synchronizes with an acquire fence B if there exists some atomic operation X on M such that X is sequenced before B and reads the value written by A or a value written by any side effect in the release sequence headed by A.
extern "C" constexpr void atomic_thread_fence(memory_order order) noexcept;
Effects: Depending on the value of order, this operation:
- has no effects, if order == memory_order::relaxed;
- is an acquire fence, if order == memory_order::acquire or order == memory_order::consume;
- is a release fence, if order == memory_order::release;
- is both an acquire fence and a release fence, if order == memory_order::acq_rel;
- is a sequentially consistent acquire and release fence, if order == memory_order::seq_cst.
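For illustration (not part of the proposed wording), the classic fence-to-fence pairing described in this subclause, shown at runtime:

#include <atomic>
#include <cassert>

int data = 0;
std::atomic<bool> flag{false};

void producer() {                                          // thread 1
    data = 42;
    std::atomic_thread_fence(std::memory_order::release);  // release fence A
    flag.store(true, std::memory_order::relaxed);          // X: modifies flag
}

void consumer() {                                          // thread 2
    while (!flag.load(std::memory_order::relaxed)) {}      // Y: reads the value written by X
    std::atomic_thread_fence(std::memory_order::acquire);  // acquire fence B; A synchronizes with B
    assert(data == 42);                                    // the write to data is visible
}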
extern "C" constexpr void atomic_signal_fence(memory_order order) noexcept;
Effects: Equivalent to atomic_thread_fence(order), except that the resulting ordering constraints are established only between a thread and a signal handler executed in the same thread.
[Note 1: atomic_signal_fence can be used to specify the order in which actions performed by the thread become visible to the signal handler. Compiler optimizations and reorderings of loads and stores are inhibited in the same way as with atomic_thread_fence, but the hardware fence instructions that atomic_thread_fence would have inserted are not emitted. — end note]
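For illustration (not part of the proposed wording), a sketch of the common signal-handler idiom atomic_signal_fence exists for; this is an idiomatic usage example, not text from this paper:

#include <atomic>
#include <csignal>

volatile std::sig_atomic_t ready = 0;
int payload = 0;

void handler(int) {
    if (ready) {
        std::atomic_signal_fence(std::memory_order::acquire);  // pairs with the release fence below
        // the write to payload is visible here, with no hardware fence emitted
    }
}

int main() {
    std::signal(SIGINT, handler);
    payload = 42;
    std::atomic_signal_fence(std::memory_order::release);  // orders payload before ready
    ready = 1;                                             // for a handler run on this same thread
}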
The header <stdatomic.h> provides the following definitions:
Each using-declaration for some name A in the synopsis above makes available the same entity as std::A declared in <atomic>. Each macro listed above other than _Atomic(T) is defined as in <atomic>. It is unspecified whether <stdatomic.h> makes available any declarations in namespace std.
Neither the _Atomic macro, nor any of the non-macro global namespace declarations, are provided by any C++ standard library header other than <stdatomic.h>.
Recommended practice: Implementations should ensure that C and C++ representations of atomic objects are compatible, so that the same object can be accessed as both an _Atomic(T) from C code and an atomic<T> from C++ code. The representations should be the same, and the mechanisms used to ensure atomicity and memory ordering should be compatible.
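For illustration (not part of the proposed wording), a sketch of the shared-header use case this compatibility recommendation enables (hypothetical file name):

/* shared_counter.h -- compiled as both C and C++ */
#include <stdatomic.h>

static inline int bump(atomic_int* c) {
    /* In C this operates on an _Atomic int; in C++ the same call resolves
       to the <stdatomic.h> definitions described above. */
    return atomic_fetch_add(c, 1);
}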
Feature test macro
#define __cpp_lib_constexpr_atomic 2024??L
This was implemented in libc++ and Clang by adding constexpr in the needed places and by implementing the atomic builtins in the constant evaluator.
None; currently std::atomic and std::atomic_ref cannot be used in constant-evaluated code.