Document Number: | |
---|---|
Date: | |
Editor: | Microsoft Corp. |
Note: this is an early draft. It is known to be incomplete and incorrect, and it has lots of bad formatting.
Since the extensions described in this technical specification are experimental and not part of the C++ standard library, they should not be declared directly within namespace std. Unless otherwise specified, all components described in this technical specification either:

- are declared in namespace std::experimental::concurrency_v1, or
- are declared as additions to a namespace defined in the C++ Standard Library, such as std.
Each header described in this technical specification shall import the contents of std::experimental::concurrency_v1 into std::experimental as if by
namespace std {
namespace experimental {
inline namespace concurrency_v1 {}
}
}
Unless otherwise specified, references to other entities described in this technical specification are assumed to be qualified with std::experimental::concurrency_v1::, and references to entities described in the standard are assumed to be qualified with std::.
Extensions that are expected to eventually be added to an existing header <meow> are provided inside the <experimental/meow> header, which shall include the standard contents of <meow> as if by
#include <meow>
New headers are also provided in the <experimental/> directory, but without such an #include.
This section describes tentative plans for future versions of this technical specification and plans for moving content into future versions of the C++ Standard.
The C++ committee intends to release a new version of this technical specification approximately every year, containing the library extensions we hope to add to a near-future version of the C++ Standard. Future versions will define their contents in std::experimental::concurrency_v2, std::experimental::concurrency_v3, etc., with the most recent implemented version inlined into std::experimental.
When an extension defined in this or a future version of this technical specification represents enough existing practice, it will be moved into the next version of the C++ Standard by removing the experimental::concurrency_vN segment of its namespace and by removing the experimental/ prefix from its header's path.
For the sake of improved portability between partial implementations of various C++ standards,
WG21 (the ISO technical committee for the C++ programming language) recommends
that implementers and programmers follow the guidelines in this section concerning feature-test macros.
Implementers who provide a new standard feature should define a
macro with the recommended name,
in the same circumstances under which the feature is available
(for example, taking into account relevant command-line options),
to indicate the presence of support for that feature.
Implementers should define that macro with the value specified in
the most recent version of this technical specification that they
have implemented.
The recommended macro name is "__cpp_lib_experimental_" followed by the string in the "Macro Name Suffix" column.
Programmers who wish to determine whether a feature is available in an implementation should base that determination on the presence of the header (determined with __has_include(<header/name>)) and the state of the macro with the recommended name.
(The absence of a tested feature may result in a program with
decreased functionality, or the relevant functionality may be provided
in a different way.
A program that strictly depends on support for a feature can just
try to use the feature unconditionally;
presumably, on an implementation lacking necessary support,
translation will fail.)
Doc. No. | Title | Primary Section | Macro Name Suffix | Value | Header |
---|---|---|---|---|---|
N4399 | Improvements to std::future<T> and Related APIs | | future_continuations | 201505 | <experimental/future> |
N4204 | C++ Latches and Barriers | | latch | 201505 | <experimental/latch> |
N4204 | C++ Latches and Barriers | | barrier | 201505 | <experimental/barrier> |
N4260 | Atomic Smart Pointers | | atomic_smart_pointers | 201505 | <experimental/atomics> |
std::future<T> and Related APIs
The extensions proposed here are an evolution of the functionality of std::future and std::shared_future. The extensions enable wait-free composition of asynchronous operations. Class templates std::promise and std::packaged_task are also updated to be compatible with the updated std::future.
#include <future>
namespace std {
namespace experimental {
inline namespace concurrency_v1 {
template <class R> class promise;
template <class R> class promise<R&>;
template <> class promise<void>;
template <class R>
void swap(promise<R>& x, promise<R>& y) noexcept;
template <class R> class future;
template <class R> class future<R&>;
template <> class future<void>;
template <class R> class shared_future;
template <class R> class shared_future<R&>;
template <> class shared_future<void>;
template <class> class packaged_task; // undefined
template <class R, class... ArgTypes>
class packaged_task<R(ArgTypes...)>;
template <class R, class... ArgTypes>
void swap(packaged_task<R(ArgTypes...)>&, packaged_task<R(ArgTypes...)>&) noexcept;
template <class T>
see below make_ready_future(T&& value);
future<void> make_ready_future();
template <class T>
future<T> make_exceptional_future(exception_ptr ex);
template <class T, class E>
future<T> make_exceptional_future(E ex);
template <class InputIterator>
see below when_all(InputIterator first, InputIterator last);
template <class... Futures>
see below when_all(Futures&&... futures);
template <class Sequence>
struct when_any_result;
template <class InputIterator>
see below when_any(InputIterator first, InputIterator last);
template <class... Futures>
see below when_any(Futures&&... futures);
} // namespace concurrency_v1
} // namespace experimental
template <class R, class Alloc>
struct uses_allocator<experimental::promise<R>, Alloc>;
template <class R, class Alloc>
struct uses_allocator<experimental::packaged_task<R>, Alloc>;
} // namespace std
future

The specifications of all declarations within this subclause and its subclauses are the same as the corresponding declarations, as specified in C++14 §30.6.6, unless otherwise specified.
namespace std {
namespace experimental {
inline namespace concurrency_v1 {
template <class R>
class future {
public:
future() noexcept;
future(future &&) noexcept;
future(const future&) = delete;
future(future<future<R>>&&) noexcept;
~future();
future& operator=(const future&) = delete;
future& operator=(future&&) noexcept;
shared_future<R> share();
// retrieving the value
see below get();
// functions to check state
bool valid() const noexcept;
bool is_ready() const;
void wait() const;
template <class Rep, class Period>
future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;
template <class Clock, class Duration>
future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;
// continuations
template <class F>
see below then(F&& func);
};
} // namespace concurrency_v1
} // namespace experimental
} // namespace std
future(future<future<R>>&& rhs) noexcept;

Effects: Constructs a future object from the shared state referred to by rhs. The future becomes ready when one of the following occurs:

- Both rhs and rhs.get() are ready. The value or the exception from rhs.get() is stored in the future's shared state.
- rhs is ready but rhs.get() is invalid. An exception of type std::future_error, with an error condition of std::future_errc::broken_promise, is stored in the future's shared state.

Postconditions: valid() == true. rhs.valid() == false.
The member function template then provides a mechanism for attaching a continuation to a future object, which will be executed as specified below.

template <class F>
see below then(F&& func);

Requires: INVOKE(DECAY_COPY(std::forward<F>(func)), std::move(*this)) shall be a valid expression.

Effects: The function creates a shared state that is associated with the returned future object. Additionally, INVOKE(DECAY_COPY(std::forward<F>(func)), std::move(*this)) is called on an unspecified thread of execution with the call to DECAY_COPY() being evaluated in the thread that called then. Any value returned from the continuation is stored as the result in the shared state of the resulting future. Any exception propagated from the execution of the continuation is stored as the exceptional result in the shared state of the resulting future.

Returns: When result_of_t<decay_t<F>(future<R>)> is future<R2>, for some type R2, the function returns future<R2>. Otherwise, the function returns future<result_of_t<decay_t<F>(future<R>)>>. [Note: Without this rule, the return type of then taking a callable returning a future<R> would have been future<future<R>>. This rule avoids such nested future objects. [Example: The type of f2 below is future<int> and not future<future<int>>:

future<int> f1 = g();
future<int> f2 = f1.then([](future<int> f) {
  future<int> f3 = h();
  return f3;
});

— end example] — end note]

Postconditions: valid() == false on the original future. valid() == true on the future returned from then. [Note: In case of implicit unwrapping, the validity of the future returned from then cannot be established until after the completion of the continuation. If it is not valid, the resulting future becomes ready with an exception of type std::future_error, with an error condition of std::future_errc::broken_promise. — end note]
bool is_ready() const;

Returns: true if the shared state is ready, otherwise false.

shared_future
namespace std {
namespace experimental {
inline namespace concurrency_v1 {
template <class R>
class shared_future {
public:
shared_future() noexcept;
shared_future(const shared_future&) noexcept;
shared_future(future<R>&&) noexcept;
shared_future(future<shared_future<R>>&& rhs) noexcept;
~shared_future();
shared_future& operator=(const shared_future&);
shared_future& operator=(shared_future&&) noexcept;
// retrieving the value
see below get();
// functions to check state
bool valid() const noexcept;
bool is_ready() const;
void wait() const;
template <class Rep, class Period>
future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;
template <class Clock, class Duration>
future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;
// continuations
template <class F>
see below then(F&& func) const;
};
} // namespace concurrency_v1
} // namespace experimental
} // namespace std
shared_future(future<shared_future<R>>&& rhs) noexcept;

Effects: Constructs a shared_future object from the shared state referred to by rhs. The shared_future becomes ready when one of the following occurs:

- Both rhs and rhs.get() are ready. The value or the exception from rhs.get() is stored in the shared_future's shared state.
- rhs is ready but rhs.get() is invalid. The shared_future stores an exception of type std::future_error, with an error condition of std::future_errc::broken_promise.

Postconditions: valid() == true. rhs.valid() == false.
template <class F>
see below then(F&& func) const;

Requires: INVOKE(DECAY_COPY(std::forward<F>(func)), *this) shall be a valid expression.

Effects: The function creates a shared state that is associated with the returned future object. Additionally, INVOKE(DECAY_COPY(std::forward<F>(func)), *this) is called on an unspecified thread of execution with the call to DECAY_COPY() being evaluated in the thread that called then. Any value returned from the continuation is stored as the result in the shared state of the resulting future. Any exception propagated from the execution of the continuation is stored as the exceptional result in the shared state of the resulting future.

Returns: When result_of_t<decay_t<F>(const shared_future&)> is future<R2>, for some type R2, the function returns future<R2>. Otherwise, the function returns future<result_of_t<decay_t<F>(const shared_future&)>>. [Note: This implicit unwrapping is the same as for future. See the notes on the return type of future::then. — end note]

Postconditions: valid() == true on the original shared_future object. valid() == true on the future returned from then. [Note: In case of implicit unwrapping, the validity of the future returned from then cannot be established until after the completion of the continuation. In such case, the resulting future becomes ready with an exception of type std::future_error, with an error condition of std::future_errc::broken_promise. — end note]
bool is_ready() const;

Returns: true if the shared state is ready, otherwise false.

promise
The specifications of all declarations within this subclause and its subclauses are the same as the corresponding declarations, as specified in C++14 §30.6.5, unless otherwise specified. The future returned by the function get_future is the one defined in the experimental namespace.

packaged_task
The specifications of all declarations within this subclause and its subclauses are the same as the corresponding declarations, as specified in C++14 §30.6.9, unless otherwise specified. The future returned by the function get_future is the one defined in the experimental namespace.
when_all
The function template when_all creates a future object that becomes ready when all elements in a set of future and shared_future objects become ready.
template <class InputIterator>
future<vector<typename iterator_traits<InputIterator>::value_type>>
when_all(InputIterator first, InputIterator last);
template <class... Futures>
future<tuple<decay_t<Futures>...>> when_all(Futures&&... futures);
Requires: All futures and shared_futures passed into when_all must be in a valid state (i.e. valid() == true).

Remarks: The first overload shall not participate in overload resolution unless iterator_traits<InputIterator>::value_type is future<R> or shared_future<R> for some type R. For the second overload, let Di be decay_t<Fi>, and let Ui be remove_reference_t<Fi> for each Fi in Futures. This function shall not participate in overload resolution unless for each i either Di is a shared_future<Ri> or Ui is a future<Ri>.
Effects: A new shared state containing a Sequence is created, where Sequence is either vector or tuple based on the overload, as specified above. A new future object that refers to that shared state is created and returned from when_all.

- When first == last, when_all returns a future with an empty vector that is immediately ready.
- When when_all is called with no arguments, it returns a future<tuple<>> that is immediately ready.
- Otherwise, any futures are moved, and any shared_futures are copied into, correspondingly, futures or shared_futures of the Sequence in the shared state.
- The order of the futures in the Sequence matches the order of the arguments supplied to when_all.
- Once all the futures and shared_futures supplied to the call to when_all are ready, the resulting future, as well as the futures and shared_futures of the Sequence, are ready.
- The future returned by when_all will not store an exception, but the shared states of the futures and shared_futures held in the shared state may.

Postconditions:

- For the returned future, valid() == true.
- For input futures, valid() == false.
- For input shared_futures, valid() == true.

Returns: A future object that becomes ready when all of the input futures and shared_futures are ready.
when_any_result
The library provides a template for storing the result of when_any.
template<class Sequence>
struct when_any_result {
size_t index;
Sequence futures;
};
when_any
The function template when_any creates a future object that becomes ready when at least one element in a set of future and shared_future objects becomes ready.
template <class InputIterator>
future<when_any_result<vector<typename iterator_traits<InputIterator>::value_type>>>
when_any(InputIterator first, InputIterator last);
template <class... Futures>
future<when_any_result<tuple<decay_t<Futures>...>>> when_any(Futures&&... futures);
Requires: All futures and shared_futures passed into when_any must be in a valid state (i.e. valid() == true).

Remarks: The first overload shall not participate in overload resolution unless iterator_traits<InputIterator>::value_type is future<R> or shared_future<R> for some type R. For the second overload, let Di be decay_t<Fi>, and let Ui be remove_reference_t<Fi> for each Fi in Futures. This function shall not participate in overload resolution unless for each i either Di is a shared_future<Ri> or Ui is a future<Ri>.
Effects: A new shared state containing a when_any_result<Sequence> is created, where Sequence is a vector for the first overload and a tuple for the second overload. A new future object that refers to that shared state is created and returned from when_any.

- When first == last, when_any returns a future that is immediately ready. The value of the index field of the when_any_result is static_cast<size_t>(-1). The futures field is an empty vector.
- When when_any is called with no arguments, it returns a future that is immediately ready. The value of the index field of the when_any_result is static_cast<size_t>(-1). The futures field is tuple<>.
- Otherwise, any futures are moved, and any shared_futures are copied into, correspondingly, futures or shared_futures of the futures member of when_any_result<Sequence> in the shared state.
- The order of the futures in the futures member of the shared state matches the order of the arguments supplied to when_any.
- Once at least one of the futures or shared_futures supplied to the call to when_any is ready, the resulting future is ready. Given the result future f, f.get().index is the position of the ready future or shared_future in the futures member of when_any_result<Sequence> in the shared state.
- The future returned by when_any will not store an exception, but the shared states of the futures and shared_futures held in the shared state may.

Postconditions:

- For the returned future, valid() == true.
- For input futures, valid() == false.
- For input shared_futures, valid() == true.

Returns: A future object that becomes ready when any of the input futures and shared_futures are ready.
make_ready_future
template <class T>
future<V> make_ready_future(T&& value);
future<void> make_ready_future();
Let U be decay_t<T>. Then V is X& if U equals reference_wrapper<X>; otherwise V is U.

Effects: The function creates a shared state that is immediately ready and a future associated with that shared state. For the first overload, the type of the shared state is V and the result is constructed from std::forward<T>(value). For the second overload, the type of the shared state is void.

Postconditions: For the returned future, valid() == true and is_ready() == true.
make_exceptional_future
template <class T>
future<T> make_exceptional_future(exception_ptr ex);

Effects: Equivalent to:

promise<T> p;
p.set_exception(ex);
return p.get_future();

template <class T, class E>
future<T> make_exceptional_future(E ex);

Effects: Equivalent to:

promise<T> p;
p.set_exception(make_exception_ptr(ex));
return p.get_future();
This section describes various concepts related to thread coordination, and defines the latch, barrier, and flex_barrier classes.
In this subclause, a synchronization point represents a point at which a thread may block until a given condition has been reached.
Latches are a thread coordination mechanism that allow one or more threads to block until an operation is completed. An individual latch is a single-use object; once the operation has been completed, the latch cannot be reused.
namespace std {
namespace experimental {
inline namespace concurrency_v1 {
class latch {
public:
explicit latch(ptrdiff_t count);
latch(const latch&) = delete;
latch& operator=(const latch&) = delete;
~latch();
void count_down_and_wait();
void count_down(ptrdiff_t n);
bool is_ready() const noexcept;
void wait() const;
private:
ptrdiff_t counter_; // exposition only
};
} // namespace concurrency_v1
} // namespace experimental
} // namespace std
latch
A latch maintains an internal counter_ that is initialized when the latch is created. Threads may block at a synchronization point waiting for counter_ to be decremented to 0. When counter_ reaches 0, all such blocked threads are released.

Calls to count_down_and_wait(), count_down(), wait(), and is_ready() behave as atomic operations.
explicit latch(ptrdiff_t count);

Requires: count >= 0.

Postconditions: counter_ == count.

~latch();

Requires: No threads are blocked at the synchronization point.

Remarks: May be called even if some threads have not yet returned from wait() or count_down_and_wait(), provided that counter_ is 0. [Note: The destructor might not return until all threads have exited wait() or count_down_and_wait(). — end note]

void count_down_and_wait();

Requires: counter_ > 0.

Effects: Decrements counter_ by 1. Blocks at the synchronization point until counter_ reaches 0.

Synchronization: Synchronizes with all calls that block on this latch and with all is_ready calls on this latch that return true.

void count_down(ptrdiff_t n);

Requires: counter_ >= n and n >= 0.

Effects: Decrements counter_ by n. Does not block.

Synchronization: Synchronizes with all calls that block on this latch and with all is_ready calls on this latch that return true.

void wait() const;

Effects: If counter_ is 0, returns immediately. Otherwise, blocks the calling thread at the synchronization point until counter_ reaches 0.

bool is_ready() const noexcept;

Returns: counter_ == 0. Does not block.

Barriers are a thread coordination mechanism that allow a set of participating threads to block until an operation is completed. Unlike a latch, a barrier is reusable: once the participating threads are released from a barrier's synchronization point, they can re-use the same barrier. It is thus useful for managing repeated tasks, or phases of a larger task, that are handled by multiple threads.
The barrier types are the standard library types barrier and flex_barrier. They shall meet the requirements set out in this subclause. In this description, b denotes an object of a barrier type.

Each barrier type defines a completion phase as a (possibly empty) set of effects. When the member functions defined in this subclause arrive at the barrier's synchronization point, they have the following effects:

1. The function blocks until all threads in the barrier's set of participating threads have arrived at the synchronization point.
2. When all such threads have arrived, the barrier's completion phase is executed.
3. When the completion phase finishes, all blocked threads are unblocked.
The expression b.arrive_and_wait() shall be well-formed and have the following semantics:

void arrive_and_wait();

Effects: Arrives at the barrier's synchronization point and blocks until the completion phase finishes. [Note: Threads may call arrive_and_wait() or arrive_and_drop() again immediately. It is not necessary to ensure that all blocked threads have exited arrive_and_wait() before one thread calls it again. — end note]

Synchronization: The call to arrive_and_wait() synchronizes with the start of the completion phase.
The expression b.arrive_and_drop() shall be well-formed and have the following semantics:

void arrive_and_drop();

Effects: Arrives at the barrier's synchronization point and removes the current thread from the barrier's set of participating threads.

Synchronization: The call to arrive_and_drop() synchronizes with the start of the completion phase.

Remarks: If all threads in the set of participating threads have called arrive_and_drop(), any further operations on the barrier are undefined, apart from calling the destructor. If a thread that has called arrive_and_drop() calls another method on the same barrier, other than the destructor, the results are undefined.

Calls to arrive_and_wait() and arrive_and_drop() never introduce data races with themselves or each other.
namespace std {
namespace experimental {
inline namespace concurrency_v1 {
class barrier;
class flex_barrier;
} // namespace concurrency_v1
} // namespace experimental
} // namespace std
barrier
barrier

barrier is a barrier type whose completion phase has no effects. Its constructor takes a parameter representing the initial size of its set of participating threads.
class barrier {
public:
explicit barrier(ptrdiff_t num_threads);
barrier(const barrier&) = delete;
barrier& operator=(const barrier&) = delete;
~barrier();
void arrive_and_wait();
void arrive_and_drop();
};
explicit barrier(ptrdiff_t num_threads);

Requires: num_threads >= 0. [Note: If num_threads is zero, the barrier may only be destroyed. — end note]

Effects: Initializes the barrier for num_threads participating threads. [Note: The barrier requires num_threads threads to arrive at the synchronization point. — end note]

~barrier();
flex_barrier
flex_barrier is a barrier type whose completion phase can be controlled by a function object.
class flex_barrier {
public:
template <class F>
flex_barrier(ptrdiff_t num_threads, F completion);
explicit flex_barrier(ptrdiff_t num_threads);
flex_barrier(const flex_barrier&) = delete;
flex_barrier& operator=(const flex_barrier&) = delete;
~flex_barrier();
void arrive_and_wait();
void arrive_and_drop();
private:
function<ptrdiff_t()> completion_; // exposition only
};
The completion phase calls completion_(). If this returns -1, then the set of participating threads is unchanged. Otherwise, the set of participating threads becomes a new set with a size equal to the returned value. [Note: If completion_() returns 0 then the set of participating threads becomes empty, and this object may only be destroyed. — end note]
template <class F>
flex_barrier(ptrdiff_t num_threads, F completion);

Requires: num_threads >= 0. F shall be CopyConstructible. completion shall be Callable (C++14 §[func.wrap.func]) with no arguments and return type ptrdiff_t. Invoking completion shall return a value greater than or equal to -1 and shall not exit via an exception.

Effects: Initializes the flex_barrier for num_threads participating threads, and initializes completion_ with std::move(completion). [Note: The flex_barrier requires num_threads threads to arrive at the synchronization point. — end note] [Note: If num_threads is 0, the set of participating threads is empty, and this object may only be destroyed. — end note]

explicit flex_barrier(ptrdiff_t num_threads);

Requires: num_threads >= 0.

Effects: Has the same effect as creating a flex_barrier with num_threads and with a callable object whose invocation returns -1 and has no side effects.

~flex_barrier();
This section provides alternatives to raw pointers for thread-safe atomic pointer operations, and defines the atomic_shared_ptr and atomic_weak_ptr class templates.

The class templates atomic_shared_ptr<T> and atomic_weak_ptr<T> have the corresponding non-atomic types shared_ptr<T> and weak_ptr<T>. The template parameter T of these class templates may be an incomplete type.
The behavior of all operations is as specified in the C++ Standard's atomics clause, unless otherwise stated in this section.
#include <atomic>
namespace std {
namespace experimental {
inline namespace concurrency_v1 {
template <class T> struct atomic_shared_ptr;
template <class T> struct atomic_weak_ptr;
} // namespace concurrency_v1
} // namespace experimental
} // namespace std
atomic_shared_ptr
namespace std {
namespace experimental {
inline namespace concurrency_v1 {
template <class T> struct atomic_shared_ptr {
bool is_lock_free() const noexcept;
void store(shared_ptr<T>, memory_order = memory_order_seq_cst) noexcept;
shared_ptr<T> load(memory_order = memory_order_seq_cst) const noexcept;
operator shared_ptr<T>() const noexcept;
shared_ptr<T> exchange(shared_ptr<T>,
memory_order = memory_order_seq_cst) noexcept;
bool compare_exchange_weak(shared_ptr<T>&, const shared_ptr<T>&,
memory_order, memory_order) noexcept;
bool compare_exchange_weak(shared_ptr<T>&, shared_ptr<T>&&,
memory_order, memory_order) noexcept;
bool compare_exchange_weak(shared_ptr<T>&, const shared_ptr<T>&,
memory_order = memory_order_seq_cst) noexcept;
bool compare_exchange_weak(shared_ptr<T>&, shared_ptr<T>&&,
memory_order = memory_order_seq_cst) noexcept;
bool compare_exchange_strong(shared_ptr<T>&, const shared_ptr<T>&,
memory_order, memory_order) noexcept;
bool compare_exchange_strong(shared_ptr<T>&, shared_ptr<T>&&,
memory_order, memory_order) noexcept;
bool compare_exchange_strong(shared_ptr<T>&, const shared_ptr<T>&,
memory_order = memory_order_seq_cst) noexcept;
bool compare_exchange_strong(shared_ptr<T>&, shared_ptr<T>&&,
memory_order = memory_order_seq_cst) noexcept;
constexpr atomic_shared_ptr() noexcept = default;
atomic_shared_ptr(shared_ptr<T>) noexcept;
atomic_shared_ptr(const atomic_shared_ptr&) = delete;
atomic_shared_ptr& operator=(const atomic_shared_ptr&) = delete;
atomic_shared_ptr& operator=(shared_ptr<T>) noexcept;
};
} // namespace concurrency_v1
} // namespace experimental
} // namespace std
constexpr atomic_shared_ptr() noexcept = default;

Effects: Initializes the atomic object to an empty value.
atomic_weak_ptr
namespace std {
namespace experimental {
inline namespace concurrency_v1 {
template <class T> struct atomic_weak_ptr {
bool is_lock_free() const noexcept;
void store(weak_ptr<T>, memory_order = memory_order_seq_cst) noexcept;
weak_ptr<T> load(memory_order = memory_order_seq_cst) const noexcept;
operator weak_ptr<T>() const noexcept;
weak_ptr<T> exchange(weak_ptr<T>,
memory_order = memory_order_seq_cst) noexcept;
bool compare_exchange_weak(weak_ptr<T>&, const weak_ptr<T>&,
memory_order, memory_order) noexcept;
bool compare_exchange_weak(weak_ptr<T>&, weak_ptr<T>&&,
memory_order, memory_order) noexcept;
bool compare_exchange_weak(weak_ptr<T>&, const weak_ptr<T>&,
memory_order = memory_order_seq_cst) noexcept;
bool compare_exchange_weak(weak_ptr<T>&, weak_ptr<T>&&,
memory_order = memory_order_seq_cst) noexcept;
bool compare_exchange_strong(weak_ptr<T>&, const weak_ptr<T>&,
memory_order, memory_order) noexcept;
bool compare_exchange_strong(weak_ptr<T>&, weak_ptr<T>&&,
memory_order, memory_order) noexcept;
bool compare_exchange_strong(weak_ptr<T>&, const weak_ptr<T>&,
memory_order = memory_order_seq_cst) noexcept;
bool compare_exchange_strong(weak_ptr<T>&, weak_ptr<T>&&,
memory_order = memory_order_seq_cst) noexcept;
constexpr atomic_weak_ptr() noexcept = default;
atomic_weak_ptr(weak_ptr<T>) noexcept;
atomic_weak_ptr(const atomic_weak_ptr&) = delete;
atomic_weak_ptr& operator=(const atomic_weak_ptr&) = delete;
atomic_weak_ptr& operator=(weak_ptr<T>) noexcept;
};
} // namespace concurrency_v1
} // namespace experimental
} // namespace std
constexpr atomic_weak_ptr() noexcept = default;

Effects: Initializes the atomic object to an empty value.
When any operation on an atomic_shared_ptr
or atomic_weak_ptr
causes an object to be destroyed or memory to be deallocated, that destruction or deallocation
shall be sequenced after the changes to the atomic object's state.