Doc. No.: | WG21/N2334 | J16/07-0194
---|---|---
Date: | 2007-08-05 |
Reply to: | Clark Nelson | Hans-J. Boehm
Phone: | +1-503-712-8433 | +1-650-857-3406
Email: | clark.nelson@intel.com | Hans.Boehm@hp.com
This paper is a revision of N2300. Changes to N2300 include:
The existing discussion from N2300 was generally left in place.
The changes N2300 made to the corresponding section of its predecessor, N2171, include:
This avoids some concerns about synchronization elimination. In particular, the old formulation allowed (everything initially zero, atomic):
Thread1 | Thread2
---|---
r1 = x.load_relaxed(); // yields 1 | r2 = y.load_relaxed(); // yields 1
y.store_relaxed(1); | x.store_relaxed(1);
but disallowed the corresponding test case if the two statements in each thread were separated by acq_rel read-modify-write operations to dead variables, or a locked region. That interferes with the elimination of locks if the compiler decides to statically combine threads, which is likely to become important for certain programming styles.
This version has benefited from feedback from many people, including Sarita Adve, Paul McKenney, Raul Silvera, Lawrence Crowl, and Peter Dimov.
New paragraphs inserted as 1.7p3 et seq.:
A memory location is either an object of scalar type, or a maximal sequence of adjacent bit-fields all having non-zero width. Two threads of execution can update and access separate memory locations without interfering with each other.
[Note: Thus a bit-field and an adjacent non-bit-field are in separate memory locations, and therefore can be concurrently updated by two threads of execution without interference. The same applies to two bit-fields, if one is declared inside a nested struct declaration and the other is not, or if the two are separated by a zero-length bit-field declaration, or if they are separated by a non-bit-field declaration. It is not safe to concurrently update two bit-fields in the same struct if all fields between them are also bit-fields, no matter what the sizes of those intervening bit-fields happen to be. end note ]
[Example: A structure declared as
struct {char a; int b:5, c:11, :0, d:8; struct {int ee:8;} e;}
contains four separate memory locations: The field a, and bit-fields d and e.ee are each separate memory locations, and can be modified concurrently without interfering with each other. The bit-fields b and c together constitute the fourth memory location. The bit-fields b and c cannot be concurrently modified, but b and a, for example, can be. end example]
Insert a new section between 1.9 and 1.10, titled "Multi-threaded executions and data races".
1.10p1:
Under a hosted implementation, a C++ program can have more than one thread of execution (a.k.a. thread) running concurrently. Each thread executes a single function according to the rules expressed in this standard. The execution of the entire program consists of an execution of all of its threads. [Note: Usually the execution can be viewed as an interleaving of all its threads. However some kinds of atomic operations, for example, allow executions inconsistent with a simple interleaving, as described below. end note ] Under a freestanding implementation, it is implementation-defined whether a program can have more than one thread of execution.
1.10p2:
The execution of each thread proceeds as defined by the remainder of this standard. The value of an object visible to a thread T at a particular point might be the initial value of the object, a value assigned to the object by T, or a value assigned to the object by another thread, according to the rules below. [Note: In some cases, there may instead be undefined behavior. Much of this section is motivated by the desire to support atomic operations with explicit and detailed visibility constraints. However, it also implicitly supports a simpler view for more restricted programs. end note ]
1.10p3:
Two expression evaluations conflict if one of them modifies a memory location and the other one accesses or modifies the same memory location.
1.10p4:
The library defines a number of operations, such as operations on locks and atomic objects, that are specially identified as synchronization operations. These operations play a special role in making assignments in one thread visible to another. A synchronization operation is either an acquire operation or a release operation, or both, on one or more memory locations; the semantics of these are described below. In addition, there are relaxed atomic operations, which are not synchronization operations, and read-modify-write operations, which have special characteristics, also described below. [Note: For example, a call that acquires a lock will perform an acquire operation on the locations comprising the lock. Correspondingly, a call that releases the same lock will perform a release operation on those same locations. Informally, performing a release operation on A forces prior side effects on other memory locations to become visible to other threads that later perform an acquire operation on A. We do not include "relaxed" atomic operations as "synchronization" operations though, like synchronization operations, they cannot contribute to data races. end note ]
1.10p5 (previously 1.10p6):
All modifications to a particular atomic object M occur in some particular total order, called the modification order of M. If A and B are modifications of an atomic object M, and A happens before B, then A shall precede B in the modification order of M, which is defined below. [Note: This states that the modification orders must respect "happens before". end note ] [Note: There is a separate order for each scalar object. There is no requirement that these can be combined into a single total order for all objects. In general this will be impossible, since different threads may observe modifications to different variables in inconsistent orders. end note ]
1.10p6 (previously embedded in 1.10p7):
A release sequence on an atomic object M is a maximal contiguous sub-sequence of side effects in the modification order of M, where the first operation is a release, and every subsequent operation
- is performed by the same thread that performed the release, or
- is a non-relaxed atomic read-modify-write operation.
1.10p7:
This was weakened relative to N2171 so as not to require synchronizes-with for all later reads. Some weakening of the older specifications appears to be necessary to preserve efficient cross-platform implementability of low-level atomics. This is probably not the only possible such weakening, but all of them appear to either:
Without the special exemption for read-modify-write operations, we would allow the particularly counterintuitive outcome for one of Peter Dimov's examples: (x, y ordinary, v atomic, all initially zero)
Thread1 | Thread2 | Thread3
---|---|---
x = 1; | y = 1; | if (load_acquire(&v) == 2)
fetch_add_release(&v, 1); | fetch_add_release(&v, 1); | assert(x + y == 2);
Here the assertion could fail, since only the later fetch_add_release would ensure visibility of the preceding store. The value written by the earlier one might not be seen by thread 3. The special clause for RMW operations prevents the assertion from failing here and in similar examples.
An evaluation A that performs a release operation on an object M synchronizes with an evaluation B that performs an acquire operation on M and reads a value written by any side effect in the release sequence headed by A. [Note: Except in the specified cases, reading a later value does not necessarily ensure visibility as described below. Such a requirement would sometimes interfere with efficient implementation. end note ] [Note: The specifications of the synchronization operations define when one reads the value written by another. For atomic variables, the definition is clear. All operations on a given lock occur in a single total order. Each lock acquisition "reads the value written" by the last lock release. end note ]
1.10p8:
This has been strengthened since N2171 to include "sequenced before" in "happens before".
An evaluation A happens before an evaluation B if:
- A is sequenced before B or
- A synchronizes with B; or
- for some evaluation X, A happens before X and X happens before B.
1.10p9 (previously embedded in 1.10p10):
A visible side effect A on an object M with respect to a value computation B of M satisfies the conditions:
- A happens before B, and
- there is no other side effect X to M such that A happens before X and X happens before B.
The value of a non-atomic scalar object M, as determined by evaluation B, shall be the value stored by the visible side effect A. [ Note: If there is ambiguity about which side effect to a non-atomic object is visible, then there is a data race, and the behavior is undefined. end note ] [ Note: This states that operations on ordinary variables are not visibly reordered. This is not actually detectable without data races, but it is necessary to ensure that data races, as defined here, and with suitable restrictions on the use of atomics, correspond to data races in a simple interleaved (sequentially consistent) execution. end note ]
1.10p10 (previously embedded in 1.10p10):
The visible sequence of side effects on an atomic object M, with respect to a value computation B of M, is a maximal contiguous sub-sequence of side effects in the modification order of M, where the first operation is visible with respect to B, and B happens before no subsequent operation. The value of an atomic object M, as determined by evaluation B, shall be the value stored by some operation in the visible sequence of M with respect to B. Furthermore, if a value computation A of an atomic object M happens before a value computation B of M, and the value computed by A corresponds to the value stored by side effect X, then the value computed by B shall either equal the value computed by A, or be the value stored by side effect Y, where Y follows X in the modification order of M. [Note: This effectively disallows compiler reordering of atomic operations to a single object, even if both operations are "relaxed" loads. By doing so, we effectively make the "cache coherence" guarantee provided by essentially all hardware available to C++ atomic operations. end note ] [Note: The visible sequence depends on the "happens before" relation, which depends on the values observed by loads of atomics, which we are restricting here. The intended reading is that there must exist an association of atomic loads with modifications they observe that, together with suitably chosen modification orders, and the happens before relation derived as described above, satisfy the resulting constraints as imposed here. end note ]
1.10p11:
The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior. [Note: It can be shown that programs that correctly use simple locks to prevent all data races, and use no other synchronization operations, behave as though the executions of their constituent threads were simply interleaved, with each observed value of an object being the last value assigned in that interleaving. This is normally referred to as "sequential consistency". However, this applies only to race-free programs, and race-free programs cannot observe most program transformations that do not change single-threaded program semantics. In fact, most single-threaded program transformations continue to be allowed, since any program that behaves differently as a result must perform an undefined operation. end note ]
1.10p12:
[Note: Compiler transformations that introduce assignments to a potentially shared memory location that would not be modified by the abstract machine are generally precluded by this standard, since such an assignment might overwrite another assignment by a different thread in cases in which an abstract machine execution would not have encountered a data race. This includes implementations of data member assignment that overwrite adjacent members in separate memory locations. We also generally preclude reordering of atomic loads in cases in which the atomics in question may alias, since this may violate the last clause of 1.10p10. end note ]
1.10p13:
[Note: Transformations that introduce a speculative read of a shared variable may not preserve the semantics of the C++ program as defined in this standard, since they potentially introduce a data race. However, they are typically valid in the context of an optimizing compiler that targets a specific machine with well-defined semantics for data races. They would be invalid for a hypothetical machine that is not tolerant of races or provides hardware race detection. end note ]
It is generally felt that it is important to allow the transformation of potentially nonterminating loops (e.g. by merging two loops that iterate over the same potentially infinite set, or by eliminating a side-effect-free loop), even when that may not otherwise be justified in the case in which the first loop never terminates.
Existing compilers commonly assume that code immediately following a loop is executed if and only if code immediately preceding a loop is executed. This assumption is clearly invalid if the loop fails to terminate. Even if we wanted to prohibit this behavior, it is unclear that all relevant compilers could comply in a reasonable amount of time. The assumption appears both pervasive and hard to test for.
The treatment of nonterminating loops in the current standard is very unclear. We believe that some implementations already eliminate potentially nonterminating, side-effect-free, loops, probably based on 1.9p9, which appears to impose very weak requirements on conforming implementations for nonterminating programs. We had previously arrived at a tentative conclusion that nonterminating loops were already sufficiently weakly specified that no changes were needed. We no longer believe this, for the following reasons:
Assuming termination for a loop such as

    while (!please_self_destruct.load_acquire()) {}
    self_destruct();

is clearly hazardous, and a bit more clarity seems appropriate. Hence we propose the following addition:
6.5p5:
A nonterminating loop that, outside of the for-init-statement in the case of a for statement,
- performs no I/O operations, and
- does not access or modify volatile objects, and
- performs no synchronization or atomic operations
invokes undefined behavior. [Note: This is meant to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven. end note]
We had previously discussed limiting the "undefined" behavior to certain optimizations. But it is unclear how to do that in a way that any program could usefully take advantage of.
This formulation does have the advantage that it makes it possible to painlessly write nonterminating loops that cannot be eliminated by the compiler, even for single-threaded programs.
15.3p9:
[Beman Dawes' suggestion, reflecting an earlier discussion:] Change "a program" to "the current thread of execution" in
If no matching handler is found in the current thread of execution, the function std::terminate() is called; whether or not the stack is unwound before this call to std::terminate() is implementation-defined (15.5.1).