Document number: P3111R5.
Date: 2025-02-20.
Authors: Gonzalo Brito Gadeschi, Simon Cooksey, Daniel Lustig.
Reply to: Gonzalo Brito Gadeschi <gonzalob _at_ nvidia.com>.
Audience: CWG, LWG.

Table of Contents

Changelog

Atomic Reduction Operations

Atomic Reduction Operations are atomic read-modify-write (RMW) operations (like fetch_add) that do not "fetch" the old value and are not reads for the purpose of building synchronization with acquire fences. This enables implementations to leverage hardware acceleration available in modern CPU and GPU architectures.

Furthermore, we propose to allow atomic memory operations that aren't reads in unsequenced execution, and to extend atomic arithmetic reduction operations for floating-point types with operations that assume floating-point arithmetic is associative.

Introduction

Concurrent algorithms performing atomic RMW operations that discard the old fetched value are very common in high-performance computing, e.g., finite-element matrix assembly, data analytics (e.g. building histograms), etc.

Consider the following parallel algorithm to build a histogram (full implementation):

// Example 0: Histogram.
span<T> data;
array<atomic<unsigned>, N> buckets;
constexpr T bucket_sz = numeric_limits<T>::max() / (T)N;
unsigned nthreads = thread::hardware_concurrency();
for_each_n(execution::par_unseq, views::iota(0).begin(), nthreads,
  [&](int thread) {
    unsigned data_per_thread = data.size() / nthreads;
    T* data_thread = data.data() + data_per_thread * thread;
    for (auto e : span<T>(data_thread, data_per_thread))
      buckets[e / bucket_sz].fetch_add(1, memory_order_relaxed);
});

This program has two main issues:

  1. Calling fetch_add, a vectorization-unsafe operation, from an element access function of a par_unseq parallel algorithm is undefined behavior.
  2. The old value fetched by fetch_add is discarded, so the program pays for a full read-modify-write even though only the write is needed.

Atomic reduction operations address both shortcomings:

Before (compiler-explorer):

#include <algorithm>
#include <atomic>
#include <execution>
#include <vector>
using namespace std;
using execution::par_unseq;
int main() {
  size_t N = 10000;
  vector<int> v(N, 0);
  atomic<int> atom = 0;
  for_each_n(par_unseq, v.begin(), N, [&](auto& e) {
    // UB+SLOW:
    atom.fetch_add(e);
  });
  return atom.load();
}

After:

#include <algorithm>
#include <atomic>
#include <execution>
#include <vector>
using namespace std;
using execution::par_unseq;
int main() {
  size_t N = 10000;
  vector<int> v(N, 0);
  atomic<int> atom = 0;
  for_each_n(par_unseq, v.begin(), N, [&](auto& e) {
    // OK+FAST
    atom.store_add(e);
  });
  return atom.load();
}

This new operation can then be used in the Histogram Example (Example 0) to replace the fetch_add with store_add.
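For instance, the inner loop of Example 0 becomes (a sketch using the proposed member function):

buckets[e / bucket_sz].store_add(1, memory_order_relaxed);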

Motivation

Hardware Exposure

The following ISAs provide Atomic Reduction Operations:

Architecture   Instructions
PTX            red
ARM            LDADD RZ, STADD, SWP RZ, CAS RZ
x86-64         Remote Atomic Operations (RAO): AADD, AAND, AOR, AXOR
RISC-V         None (note: AMOs are always loads and stores)
PPC64LE        None

Some of these instructions lack a destination operand (red, STADD, AADD). Others change their semantics when the destination register is the zero register, which discards the result (Arm's LDADD RZ, SWP RZ, CAS RZ).

All ISAs provide the same semantics: these operations are not loads from the point of view of the memory model, and therefore do not participate in acquire sequences, but they do participate in release sequences.

These architectures provide both "relaxed" and "release" orderings for the reductions (e.g. red.relaxed/red.release, STADD/STADDL).

Performance

On hardware architectures that implement these as far atomics, the exposed latency of Atomic Reduction Operations may be as low as half that of "fetch_<key>" operations.

Example: on an NVIDIA Hopper H100 GPU, replacing atomic.fetch_add with atomic.store_add on the Histogram Example (Example 0) improves throughput by 1.2x.

Furthermore, non-associative floating-point atomic operations, like fetch_add, are required to read the "latest value", which sequentializes their execution. In the following example, the outcome x == a + (b + c) is not allowed, because either the atomic operation of thread0 happens-before that of thread1, or vice-versa, and floating-point arithmetic is not associative:

// Litmus test 2:
atomic<float> x = a;
void thread0() { x.fetch_add(b, memory_order_relaxed); }
void thread1() { x.fetch_add(c, memory_order_relaxed); }

Allowing the x == a + (b + c) outcome enables implementations to perform a tree reduction, which improves complexity from O(N) to O(log(N)), at the cost of a negligible increase in non-determinism, which is already inherent to the use of atomic operations:

// Litmus test 3:
atomic<float> x = a;
x.store_add(b, memory_order_relaxed);
x.store_add(c, memory_order_relaxed);
// Sound to merge these two operations into one:
// x.store_add(b + c);

On GPU architectures, performing a horizontal reduction within a thread group and then issuing a single atomic operation per group reduces the number of atomic operations issued by up to a factor of the thread-group size.
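As a rough illustration of the same idea on a CPU, each chunk of work can be pre-reduced into a thread-local partial sum so that only one atomic reduction is issued per chunk. The helper sum_chunks and the chunking scheme below are illustrative assumptions, and the code assumes the proposed atomic<float>::store_add:

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <execution>
#include <numeric>
#include <span>
#include <vector>

// Sketch: pre-reduce locally, then issue one atomic reduction per chunk.
void sum_chunks(std::span<const float> data, std::atomic<float>& total,
                std::size_t chunk_size) {
  std::size_t nchunks = (data.size() + chunk_size - 1) / chunk_size;
  std::vector<std::size_t> chunks(nchunks);
  std::iota(chunks.begin(), chunks.end(), std::size_t{0});
  std::for_each(std::execution::par_unseq, chunks.begin(), chunks.end(),
                [&](std::size_t chunk) {
    std::size_t begin = chunk * chunk_size;
    std::size_t end = std::min(begin + chunk_size, data.size());
    float partial = 0.0f;                        // thread-local pre-reduction
    for (std::size_t i = begin; i != end; ++i) partial += data[i];
    total.store_add(partial, std::memory_order_relaxed);  // one atomic per chunk
  });
}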

Functionality

Currently, all atomic memory operations are vectorization-unsafe and therefore not allowed in element access functions of parallel algorithms when the unseq or par_unseq execution policies are used (see [algorithms.parallel.exec.5] and [algorithms.parallel.exec.7]). Atomic memory operations that "read" (e.g., load, fetch_<key>, exchange, compare_exchange) enable building synchronization edges that block, which within unseq/par_unseq leads to deadlocks. N4070 solved this by tightening the wording to disallow any synchronization API from being called from within unseq/par_unseq.

Allowing Atomic Writes and Atomic Reduction Operations in unsequenced execution increases the set of concurrent algorithms that can be implemented in the lowest-common denominator of hardware that C++ supports. In particular, many hardware architectures that can accelerate unseq/par_unseq but cannot accelerate par (e.g. most non-NVIDIA GPUs), provide acceleration for atomic reduction operations.

We propose to make lock-free atomic operations that are not reads vectorization-safe, to enable calling them from unsequenced execution. Atomic operations that read remain vectorization-unsafe and therefore UB, as illustrated by the example in the Design section below.

Design

For each atomic fetch_{OP} in the atomic<T> and atomic_ref<T> class templates and their specializations, we introduce new store_{OP} member functions that return void:

template <class T>
struct atomic_ref {
  T    fetch_add (T v, memory_order o) const noexcept;
  void store_add (T v, memory_order o) const noexcept;
};

The store_{OP} APIs are vectorization-safe if is_always_lock_free is true for the atomic, i.e., they are allowed in Element Access Functions ([algorithms.parallel.defns]) of parallel algorithms:

for_each(par_unseq, ..., [&](auto old, ...) {
    assert(atom.is_lock_free());           // Only for lock-free atomics
    atom.store(42);                        // OK: vectorization-safe
    atom.store_add(1);                     // OK: vectorization-safe
    atom.fetch_add(1);                     // UB: vectorization-unsafe
    atom.exchange(42);                     // UB: vectorization-unsafe
    atom.compare_exchange_weak(old, 42);   // UB: vectorization-unsafe
    atom.compare_exchange_strong(old, 42); // UB: vectorization-unsafe
    while (atom.load() < 42);              // UB: vectorization-unsafe
});

Furthermore, we specify non-associative floating-point atomic reduction operations to enable tree-reduction implementations that improve complexity from O(N) to O(log(N)) by minimally increasing the non-determinism which is already inherent to atomic operations. This allows producing the x == a + (b + c) outcome in the following example, which enables the optimization that merges the two store_add operations into a single one:

atomic<float> x = a;
x.store_add(b, memory_order_relaxed);
x.store_add(c, memory_order_relaxed);
// Sound to merge these two operations into one:
// x.store_add(b + c);

Applications that need the sequential semantics can use fetch_add instead.

Implementation impact

It is correct to conservatively implement store_{OP} as a call to fetch_{OP} whose result is discarded. The implementations of unsequenced execution policies that we evaluated are not impacted by this proposal.
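A minimal sketch of that conservative mapping, written as a hypothetical free function over std::atomic (this is not the proposed member API):

#include <atomic>

// Sketch: conservative fallback that maps the proposed operation onto the
// existing read-modify-write and discards the fetched value.
template <class T>
void store_add(std::atomic<T>& a, T v,
               std::memory_order o = std::memory_order_seq_cst) noexcept {
  (void)a.fetch_add(v, o);
}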

Design Alternatives

Enable Atomic Reductions as fetch_<key> optimizations

Attempting to improve application performance by implementing compiler optimizations that leverage Atomic Reduction Operations for fetch_<key> APIs whose result is unused has become a rite of passage for compiler engineers, e.g., GCC#509632, LLVM#68428, LLVM#72747. Unfortunately, "simple" optimization strategies break backward compatibility in the following litmus tests (among others).

Litmus Test 0: from Will Deacon. Performing the optimization of replacing the fetch_add (whose result is unused) with an atomic reduction operation introduces the y == 2 && r0 == 1 && r1 == 0 outcome:

void thread0(atomic_int* y, atomic_int* x) {
  atomic_store_explicit(x, 1, memory_order_relaxed);
  atomic_thread_fence(memory_order_release);
  atomic_store_explicit(y, 1, memory_order_relaxed);
}
void thread1(atomic_int* y, atomic_int* x) {
  atomic_fetch_add_explicit(y, 1, memory_order_relaxed);
  atomic_thread_fence(memory_order_acquire);
  int r0 = atomic_load_explicit(x, memory_order_relaxed);
}
void thread2(atomic_int* y) {
  int r1 = atomic_load_explicit(y, memory_order_relaxed);
}

Litmus Test 1: from Luke Geeson. Performing the optimization of replacing the exchange with a store introduces the r0 == 0 && y == 2 outcome:

void thread0(atomic_int* y, atomic_int* x) {
  atomic_store_explicit(x, 1, memory_order_relaxed);
  atomic_thread_fence(memory_order_release);
  atomic_store_explicit(y, 1, memory_order_relaxed);
}
void thread1(atomic_int* y, atomic_int* x) {
  atomic_exchange_explicit(y, 2, memory_order_release);
  atomic_thread_fence(memory_order_acquire);
  int r0 = atomic_load_explicit(x, memory_order_relaxed);
}

On some architectures, Atomic Reduction Operations can write to memory pages or memory locations that are not readable, e.g., MMIO registers on NVIDIA GPUs; such uses need a reliable programming model that does not depend on compiler optimizations for functionality.

Naming

We considered the following alternative names:

We considered providing separate versions of non-associative floating-point atomic reduction operations with and without support for tree reductions, e.g., atomic.store_add (no tree-reduction support) and atomic.store_add_generalized (tree-reduction support). We decided against it because atomic operations are inherently non-deterministic, this relaxation only minimally increases that non-determinism, and fetch_add already provides (and has to continue to provide) the sequential semantics without this relaxation.

Memory Ordering

We choose to support memory_order_relaxed, memory_order_release, and memory_order_seq_cst.

We may need a note stating that replacing the operations in a reduction sequence is only valid as long as the replacement maintains the other ordering properties of the operations as defined in [intro.races]. Examples:

Litmus test 2:

// T0:
M1.store_add(1, relaxed);
M2.store(1, release);
M1.store_add(1, relaxed);
// T1:
r0 = M2.load(acquire);
r1 = M1.load(relaxed);
// r0 == 0 || r1 >= 1

Litmus test 3:

// T0:
M1.store_add(1, seq_cst);
M2.store_add(1, seq_cst);
M1.store_add(1, seq_cst);
// T1:
r0 = M2.load(seq_cst);
r1 = M1.load(seq_cst);
// r0 == 0 || r1 >= 1

Unresolved question: Are tree reductions only supported for memory_order_relaxed? No, see litmus tests for release and seq_cst.

Formalization

Herd already supports these operations for STADD on Arm, and the NVIDIA Volta Memory Model supports them for red and multimem.red on PTX. If we decide to pursue this exposure direction, this proposal would benefit from extending Herd's RC11 model with reduction sequences for floating-point.

Wording

Do NOT modify [intro.execution.10]!

We do not need to modify [intro.execution.10] to enable using atomic reduction operations in unsequenced contexts, because this section does not prevent that: expressions like atomic.foo() are function calls, and the executions of two function calls are indeterminately sequenced, not unsequenced. That is, function calls never overlap, and this section does not impact that.
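A small illustration of that point, using a hypothetical helper bump (not part of the proposal): the two executions of bump below are indeterminately sequenced, so the atomic operations they perform never overlap, regardless of the expression they appear in.

#include <atomic>

int bump(std::atomic<int>& a) {
  a.fetch_add(1, std::memory_order_relaxed);  // the same reasoning applies to store_add
  return 0;
}
void g(int, int);
void caller(std::atomic<int>& a) {
  g(bump(a), bump(a));  // the two executions of bump are indeterminately sequenced
}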

Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced.
[Note 5: In an expression that is evaluated more than once during the execution of a program, unsequenced and indeterminately sequenced evaluations of its subexpressions need not be performed consistently in different evaluations. — end note]

The value computations of the operands of an operator are sequenced before the value computation of the result of the operator. If a side effect on a memory location ([intro.memory]) is unsequenced relative to either another side effect on the same memory location or a value computation using the value of any object in the same memory location, and they are not lock-free atomic read operations ([atomics]) or potentially concurrent ([intro.multithread]), the behavior is undefined.
[Note 6: The next subclause imposes similar, but more complex restrictions on potentially concurrent computations. — end note]

[Example 3:

void g(int i) {
 i = 7, i++, i++;              // i becomes 9

 i = i++ + 1;                  // the value of i is incremented
 i = i++ + i;                  // undefined behavior
 i = i + 1;                    // the value of i is incremented
}

— end example]

Do NOT modify [intro.races]!

We don't need to modify [intro.races] to allow tree-reduction implementations for floating-point. We handle this in the Remarks clause; all other alternatives (GENERALIZED_, "as if" integers, "reduction sequences") are much worse. See P3111R3 and older revisions.

Add to [algorithms.parallel.defns]:

Review note: the first bullet says which standard library functions are, in general, vectorization unsafe, and the second bullet carves out exceptions.
Review note: SG1 requested making "relaxed", "release", and "seq_cst" atomic reduction operations not be vectorization unsafe, so they are excluded in the second bullet.

A standard library function is vectorization-unsafe if:

[Note 2: Implementations must ensure that internal synchronization inside standard library functions does not prevent forward progress when those functions are executed by threads of execution with weakly parallel forward progress guarantees. — end note]

[Example 2:

int x = 0;
std::mutex m;
void f() {
  int a[] = {1,2};
  std::for_each(std::execution::par_unseq, std::begin(a), std::end(a), [&](int) {
    std::lock_guard<std::mutex> guard(m); // incorrect: lock_guard constructor calls m.lock()
    ++x;
  });
}

The above program may result in two consecutive calls to m.lock() on the same thread of execution (which may deadlock), because the applications of the function object are not guaranteed to run on different threads of execution. — end example]

Forward progress

Modify [intro.progress#1] as follows:

Review Note: SG1 design intent is that reduction operations are not a step, infinite loops around reduction operations are not UB (practically, implementations can assume reductions are finite), but such loops may cause all threads to lose forward progress.
Review Note: this change makes Atomic Reduction Operations and Atomic Stores/Fences asymmetric, but making this change for Atomic Stores/Fences is a breaking change to be pursued in a separate paper per SG1 feedback. When adding these new operations, it does make sense to make this change, to avoid introducing undesired semantics that would be a breaking change to fix.

The implementation may assume that any thread will eventually do one of the following:

  1. terminate,
  2. invoke the function std::this_thread::yield ([thread.thread.this]),
  3. make a call to a library I/O function,
  4. perform an access through a volatile glvalue,
  5. perform a synchronization operation or an atomic operation other than an atomic reduction operation [atomics.order], or
  6. continue execution of a trivial infinite loop ([stmt.iter.general]).

[Note 1: This is intended to allow compiler transformations such as removal of empty loops, even when termination cannot be proven. — end note]

Modify [intro.progress#3] as follows:

Review note: same note as above about exempting atomic stores in a separate paper.

During the execution of a thread of execution, each of the following is termed an execution step:

  1. termination of the thread of execution,
  2. performing an access through a volatile glvalue, or
  3. completion of a call to a library I/O function, a synchronization operation, or an atomic operation other than an atomic reduction operation [atomics.order].

No acquire sequences support

Modify [atomics.fences] as follows:

33.5.11 Fences [atomics.fences]

  1. This subclause introduces synchronization primitives called fences. Fences can have acquire semantics, release semantics, or both. A fence with acquire semantics is called an acquire fence. A fence with release semantics is called a release fence.
  2. A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y, where Y is not an atomic reduction operation [atomics.order], both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
  3. A release fence A synchronizes with an atomic operation B that performs an acquire operation on an atomic object M if there exists an atomic operation X such that A is sequenced before X, X modifies M, and B reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
  4. An atomic operation A that is a release operation on an atomic object M synchronizes with an acquire fence B if there exists some atomic operation X on M such that X is sequenced before B and reads the value written by A or a value written by any side effect in the release sequence headed by A.

Atomic Reduction Operation APIs

The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 145:

key   op   computation
add   +    addition
sub   -    subtraction
max        maximum
min        minimum
and   &    bitwise and
or    |    bitwise inclusive or
xor   ^    bitwise exclusive or

Add to [atomics.syn]:

namespace std {
// [atomics.nonmembers], non-member functions
...

template<class T>
 void atomic_store_add(volatile atomic<T>*,                       // freestanding
                       typename atomic<T>::difference_type) noexcept;
template<class T>
 constexpr void atomic_store_add(atomic<T>*,                      // freestanding
                                 typename atomic<T>::difference_type) noexcept;
template<class T>
 void atomic_store_add_explicit(volatile atomic<T>*,              // freestanding
                                typename atomic<T>::difference_type,
                                memory_order) noexcept;
template<class T>
 constexpr void atomic_store_add_explicit(atomic<T>*,             // freestanding
                                          typename atomic<T>::difference_type,
                                          memory_order) noexcept;
template<class T>
 void atomic_store_sub(volatile atomic<T>*,                       // freestanding
                       typename atomic<T>::difference_type) noexcept;
template<class T>
 constexpr void atomic_store_sub(atomic<T>*,                      // freestanding
                                 typename atomic<T>::difference_type) noexcept;
template<class T>
 void atomic_store_sub_explicit(volatile atomic<T>*,              // freestanding
                                 typename atomic<T>::difference_type,
                                 memory_order) noexcept;
template<class T>
 constexpr void atomic_store_sub_explicit(atomic<T>*,             // freestanding
                                 typename atomic<T>::difference_type,
                                 memory_order) noexcept;
template<class T>
 void atomic_store_and(volatile atomic<T>*,                       // freestanding
                       typename atomic<T>::value_type) noexcept;
template<class T>
 constexpr void atomic_store_and(atomic<T>*,                      // freestanding
                                 typename atomic<T>::value_type) noexcept;
template<class T>
 void atomic_store_and_explicit(volatile atomic<T>*,              // freestanding
                                 typename atomic<T>::value_type,
                                 memory_order) noexcept;
template<class T>
 constexpr void atomic_store_and_explicit(atomic<T>*,             // freestanding
                                          typename atomic<T>::value_type,
                                          memory_order) noexcept;
template<class T>
 void atomic_store_or(volatile atomic<T>*,                        // freestanding
                      typename atomic<T>::value_type) noexcept;
template<class T>
 constexpr void atomic_store_or(atomic<T>*,                       // freestanding
                                typename atomic<T>::value_type) noexcept;
template<class T>
 void atomic_store_or_explicit(volatile atomic<T>*, // freestanding
                               typename atomic<T>::value_type,
                               memory_order) noexcept;
template<class T>
 constexpr void atomic_store_or_explicit(atomic<T>*,              // freestanding
                                         typename atomic<T>::value_type,
                                         memory_order) noexcept;
template<class T>
 void atomic_store_xor(volatile atomic<T>*,                       // freestanding
                        typename atomic<T>::value_type) noexcept;
template<class T>
 constexpr void atomic_store_xor(atomic<T>*,                      // freestanding 
                        typename atomic<T>::value_type) noexcept;
template<class T>
 void atomic_store_xor_explicit(volatile atomic<T>*,              // freestanding
                                 typename atomic<T>::value_type,
                                 memory_order) noexcept;
template<class T>
 constexpr void atomic_store_xor_explicit(atomic<T>*,             // freestanding
                                 typename atomic<T>::value_type,
                                 memory_order) noexcept;
template<class T>
 void atomic_store_max(volatile atomic<T>*,                       // freestanding
                        typename atomic<T>::value_type) noexcept;
template<class T>
 constexpr void atomic_store_max(atomic<T>*,                      // freestanding
                                 typename atomic<T>::value_type) noexcept;
template<class T>
 void atomic_store_max_explicit(volatile atomic<T>*,              // freestanding
                                 typename atomic<T>::value_type,
                                 memory_order) noexcept;
template<class T>
 constexpr void atomic_store_max_explicit(atomic<T>*,             // freestanding
                                          typename atomic<T>::value_type,
                                          memory_order) noexcept;
template<class T>
 void atomic_store_min(volatile atomic<T>*,                       // freestanding
                       typename atomic<T>::value_type) noexcept;
template<class T>
 constexpr void atomic_store_min(atomic<T>*,                      // freestanding
                        typename atomic<T>::value_type) noexcept;
template<class T>
 void atomic_store_min_explicit(volatile atomic<T>*,              // freestanding
                                 typename atomic<T>::value_type,
                                 memory_order) noexcept;
template<class T>
 constexpr void atomic_store_min_explicit(atomic<T>*,             // freestanding
                                 typename atomic<T>::value_type,
                                 memory_order) noexcept;
}

Add to [atomics.ref.int]:


namespace std {
  template<> struct atomic_ref<integral-type> {
    ...
   public:
    ...
    constexpr void store_add(integral-type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_sub(integral-type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_and(integral-type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_or(integral-type,
                            memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_xor(integral-type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_max(integral-type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_min(integral-type,
                             memory_order = memory_order::seq_cst) const noexcept;
  
  };
}
constexpr void store_key(integral-type operand,
                         memory_order order = memory_order::seq_cst) const noexcept;

Add to [atomics.ref.float]:

namespace std {
  template<> struct atomic_ref<floating-point-type> {
    ...
  public:
    ... 
    constexpr void store_add(floating-point-type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_sub(floating-point-type,
                             memory_order = memory_order::seq_cst) const noexcept;
  };
}
constexpr void store_key(floating-point-type operand,
                         memory_order order = memory_order::seq_cst) const noexcept;

Since P3008 (Atomic floating-point min/max) is ready but blocked on P3348 (C++26 should refer to C23 not C17), either this paper or that paper should add to [atomics.ref.float]:

namespace std {
  template<> struct atomic_ref<floating-point-type> {
    ...
  public:
    ... 
    constexpr void store_max(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_min(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fmaximum(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fminimum(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fmaximum_num(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fminimum_num(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
  };
}

And then modify the Remarks clauses added by P3008 (Atomic floating-point min/max) as follows:

constexpr void store_key(floating-point-type operand,
                         memory_order order = memory_order::seq_cst) const noexcept;


Add to [atomics.ref.pointer]:

namespace std {
 template<class T> struct atomic_ref<T*> {
    ...
  public:
    ...
    constexpr void store_add(difference_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_sub(difference_type, 
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_max(T*,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_min(T*,
                             memory_order = memory_order::seq_cst) const noexcept;
  };
}
constexpr void store_key(difference_type operand,
                         memory_order order = memory_order::seq_cst) const noexcept;

Add to [atomics.types.int]:

namespace std {
  template<> struct atomic<integral-type> {
    ...

    void store_add(integral-type,
                    memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_sub(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_and(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_and(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_or(integral-type,
                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_or(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    void store_xor(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_xor(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_max(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_min(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;

  };
}
void store_key(integral-type operand,
               memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(integral-type operand,
                         memory_order order = memory_order::seq_cst) noexcept;

Add to [atomics.types.float]:

namespace std {
  template<> struct atomic<floating-point-type> {
    ...

    void store_add(floating-point-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(floating-point-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_sub(floating-point-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(floating-point-type,
                             memory_order = memory_order::seq_cst) noexcept;

  };
}
void store_key(floating-point-type operand,
               memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(floating-point-type operand,
                         memory_order order = memory_order::seq_cst) noexcept;

Since P3008 (Atomic floating-point min/max) is ready but blocked on P3348 (C++26 should refer to C23 not C17), either this paper or that paper should add to [atomics.types.float]:

namespace std {
  template<> struct atomic<floating-point-type> {
    ...
  public:
    ... 
    void store_max(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_min(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_fmaximum(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fmaximum(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_fminimum(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fminimum(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_fmaximum_num(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fmaximum_num(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_fminimum_num(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fminimum_num(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
  };
}

And then modify the Remarks clauses added by P3008 (Atomic floating-point min/max) as follows:

void store_key(floating-point-type operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(floating-point-type operand, memory_order order = memory_order::seq_cst) noexcept;


Add to [atomics.types.pointer]:

namespace std {
  template<class T> struct atomic<T*> {
    ...
    void store_add(ptrdiff_t, 
      memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(ptrdiff_t, 
      memory_order = memory_order::seq_cst) noexcept;
    void store_sub(ptrdiff_t, 
      memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(ptrdiff_t, 
      memory_order = memory_order::seq_cst) noexcept;
    void store_max(T*, 
      memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(T*, 
      memory_order = memory_order::seq_cst) noexcept;
    void store_min(T*, 
      memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(T*, 
      memory_order = memory_order::seq_cst) noexcept;
  };
}
void store_key(ptrdiff_t operand, 
     memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(ptrdiff_t operand,
     memory_order order = memory_order::seq_cst) noexcept;

Add __cpp_lib_atomic_store_key version macro to <version> synopsis [version.syn]:

#define __cpp_lib_atomic_store_key ______L // freestanding, also in <atomic>