Introduction

We have been working for some time to clarify the memory object model of C: the behaviour of pointer operations, uninitialised values, padding, effective types, and so on.

This continues our previous work, with Batty et al., on the concurrency model of C++ and C. There, by developing a precise formal model for the behaviour of atomics, we identified problems in the C++0x draft standard, and we worked with the WG21 concurrency subgroup and with WG14 to propose fixes that were taken up in C++11 and C11. The final standard text for atomics is in close correspondence with our formal model, and this has enabled work on compiler testing, optimisation, and verification.

For the C memory object model, there are problems of several different kinds:

Some of these problems seem to arise from the fact that the ISO standard has been written to accommodate a very wide range of hardware platforms and compiler implementations, many of which are now obsolete, while much current systems software depends on stronger properties that hold for "mainstream" current common implementations. Others are real differences between the properties assumed by systems code and those that compilers aim to provide.

To investigate these problems, we summarise some of the most important questions in this note (N2012), also available at:

http://www.cl.cam.ac.uk/~pes20/cerberus/notes64-wg14.html

For each, we would very much like to get a clear understanding of what WG14 members think the ISO C11 view of each of these questions is, and how that relates to current practice. In many cases we have suggestions for possible clarifications or changes to reconcile the standard and current practice (both compiler behaviour and usage) that we would like to discuss. We have not attempted to draft specific proposals for changes to the standard text here, but we can do that too if there seems to be consensus on the desired intended semantics.

If you are prepared to go through each of the questions in detail (though beware that this may take some time), we have a Google form to record your responses in a convenient way:

http://www.cl.cam.ac.uk/users/pes20/cerberus/survey3.html

We want to distribute this only to particular focussed groups (not as a mass survey), as otherwise analysing the results becomes prohibitive, so please do not link to the Google form elsewhere.

To keep this note as brief as possible, we haven't included the semantic test-case programs for each question. They are available in our "notes30" (N2013), available from:

http://www.cl.cam.ac.uk/users/pes20/cerberus/notes30.pdf

which you should refer to while looking at this. We hope to attend some of the April 2016 London WG14 meeting to discuss these issues.

Clarifying the C memory object model: trap representations

[See Question 2/15 of our survey and Section 3 (Q47) of our N2013]

In ISO C11 (following C99) trap representations are particular object representations that do not represent values of the object type, for which merely reading a trap representation (except by an lvalue of character type) is undefined behaviour. See 6.2.6.1p5, 6.2.6.2p2, DR338. An "indeterminate value" is either a trap representation or an unspecified value.
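For example (a sketch of ours; the character-type exception is the key point):

  void f(void) {
    int x;                           /* uninitialised: might hold a trap representation */
    unsigned char *p = (unsigned char *)&x;
    unsigned char b = p[0];          /* allowed: read by an lvalue of character type    */
    int y = x;                       /* undefined behaviour if the representation of x
                                        is a trap representation (6.2.6.1p5)            */
  }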

Trap representations complicate the language, and it is not clear whether they are significant in practice for current C implementations.

Accordingly, we suggest either:

  1. If there are current implementations where trap representations are significant, that ISO C make the sets of trap representation values for each type be implementation-defined, thereby requiring implementations to document which representations are trap representations (and hence, in the common case that there are none, to document that), or, otherwise,

  2. To remove trap representations from the standard, and hence to coalesce the concepts of "indeterminate value" and "unspecified value" into one.

Clarifying the C memory object model: uninitialised values

[See Question 2/15 of our survey, Sections 3.1 and 3.2 (Q48-59) of our N2013, DR338, DR451, and N1793]

For reading uninitialised values, there are many possible semantics, ranging from a fully concrete semantics in which such a read is guaranteed to give the actual contents of memory through to one in which it gives undefined behaviour. Our survey of C as used and implemented in practice gave very mixed numerical responses:

Is reading an uninitialised variable or struct member (with a current mainstream compiler):

  (a) undefined behaviour (meaning that the compiler is free to arbitrarily miscompile the program, with or without a warning): 139 (43%)

  (b) going to make the result of any expression involving that value unpredictable: 42 (13%)

  (c) going to give an arbitrary and unstable value (maybe with a different value if you read again): 21 (6%)

  (d) going to give an arbitrary but stable value (with the same value if you read again): 112 (35%)

However, the comments were fairly clear on two points. First, this does arise in practice, e.g. when copying a partially initialised struct, (more rarely) when comparing against one, and in debugging. Second, it appears that some current mainstream compilers (including GCC and Clang) do optimise in ways that would make (d) unsound, while others (perhaps MSVC) may have more deterministic behaviour. None appeared to assume undefined behaviour in this situation.

This suggests that the most useful semantics (one that permits current implementation behaviour without being needlessly weak for programmers) gives a symbolic unspecified value for reads of uninitialised memory, roughly (b) above.
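A minimal sketch (ours) of the difference between these options:

  void g(void) {
    int x;          /* uninitialised (and its address is taken below)          */
    int *p = &x;
    int a = *p;
    int b = *p;     /* under (c), a == b need not hold; under (d), the two
                       reads give the same arbitrary value; under (b), any
                       expression involving a or b is itself unpredictable     */
  }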

In Sections 3.1 and 3.2 of our N2013 we collect a series of more specific questions (Q48-59), many of which are not directly addressed in ISO C11. We give concrete examples for each there, but for brevity here we just briefly discuss the current ISO position and (where we have one) our suggested choice.

In ISO C11, for types that (in the particular implementation in question) do not have any trap representations, reading an uninitialised object is undefined behaviour iff "the lvalue designates an object of automatic storage duration that could have been declared with the register storage class (never had its address taken)" (see 6.3.2.1p2 and DR338). This seems to have been added to cope with the Itanium NaT, and presumably has to be retained for such implementations. But for others it complicates and weakens the language for no purpose.
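For example (our sketch of the 6.3.2.1p2 distinction):

  int f(void) {
    int x;          /* never address-taken: could have been declared register  */
    return x;       /* undefined behaviour under 6.3.2.1p2                     */
  }

  int g(void) {
    int y;
    int *p = &y;    /* address taken: 6.3.2.1p2 does not apply                 */
    return y;       /* an indeterminate (here unspecified) value, not
                       undefined behaviour, absent trap representations        */
  }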

We suggest this be made an implementation-defined choice.

ISO C11 is unclear. The DR451 CR says "library functions will exhibit undefined behavior when used on indeterminate values" but here we are more specifically looking at unspecified values. We see no benefit from making this undefined behaviour, and we are not aware that compilers assume so (unless, conceivably, the Itanium NaT also requires this). It prevents (e.g.) debug printing of partially uninitialised structs.

We suggest "yes" (except for library functions which have an undefined behaviour for specific concrete values, which, similarly to Q54 below, should also have undefined behaviour if given unspecified values).

ISO C11 is unclear (it does not discuss this). We suggest "yes".

As mentioned above, current mainstream compiler optimisations seem to require these to both be "yes". The DR451 CR is "yes" for the analogous questions for indeterminate values. We suggest "yes" for these (note this would make the N1793 Fig.4 printhexdigit not useful when applied to an uninitialised structure member).

We suggest "yes" for this also, giving the simple semantics that all operations on unspecified values give unspecified values.

(Note that the LLVM documentation gives stronger guarantees for particular operations, as discussed in 3.2.4 of our notes30.pdf, but the utility of those is unclear to us, and they seem specific to LLVM.)

This seems forced by the above: if x has an unspecified value, then 1/x might in practice trap, and so should be considered as having undefined behaviour. We suggest "yes".

This seems to be relied on in practice, and is consistent with the "symbolic unspecified value" semantics we have so far, so we suggest "yes". The copy will have an unspecified value for the same member.
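For example (a sketch of ours):

  struct s { int a; int b; };

  void h(void) {
    struct s x, y;
    x.a = 1;        /* x.b is left uninitialised                               */
    y = x;          /* suggested semantics: y.a == 1, and y.b holds the same
                       (symbolic) unspecified value as x.b                     */
  }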

We suggest "yes".

The best answer to this is unclear from all points of view: ISO C11 doesn't address the question; we don't know whether existing compilers assume these are unspecified values, and we don't know whether existing code relies on them not being unspecified values.

For stylistic consistency one might take the answer to be "yes", but then (given the suggested answers above) a bytewise hash or checksum computation involving them would produce an unspecified value. In a more concrete semantics, it could produce different results in different invocations, even if the value is not mutated in the meantime.

We don't have sufficient grounds to suggest either answer at present.

This too is unclear. One could take the first such access as "freezing" the unspecified value and its representation bytes, but we don't know whether that would be sound with respect to current compiler behaviour. The simplest choice is "yes".

Again "yes" is the simplest choice, but one could argue instead that a read of the whole should give any nondeterministically chosen value consistent with the concretely written bytes.

Clarifying the C memory object model: padding in structures and unions

[See Question 1/15 of our survey, and Section 3.3 (Q60-68) of our N2013]

The standard discusses two quite different kinds of padding: padding bits within the representation of integer types (6.2.6.2), and padding bytes in structures and unions. Here we consider just the latter, together with the space between the end of a union's current member and the end of the maximally sized member of its union type (the standard does not refer to this as padding (6.2.6.1p7) but it behaves in a similar way).

Padding bytes might be needed either for alignment or to ensure that there is spare space that the implementation is free to overwrite with a "wide" write, where the hardware does not provide efficient store instructions for the native width of the value to be written.

There are several options for the semantics of padding, including:

- (a) regarding padding bytes as holding unspecified values throughout the lifetime of the object, irrespective of any writes to them;

- (b) when a member of a struct or union is written, deeming the semantics as also having written symbolic unspecified values to all its padding bytes;

- (c) when a member is written, deeming the semantics as also having written symbolic unspecified values to adjacent padding;

- (d) when a member is written, deeming the semantics as also having written symbolic unspecified values to subsequent padding;

- (e) when a member is written, nondeterministically either deeming the semantics as having written zeros to the adjacent padding or leaving it alone; or

- (f) when a member is written, nondeterministically either deeming the semantics as having written zeros to the subsequent padding or leaving it alone.

The standard is unclear about which semantics it chooses. On the one hand, we have 6.2.6.1p6: "When a value is stored in an object of structure or union type, including in a member object, the bytes of the object representation that correspond to any padding bytes take unspecified values.", suggesting option (b), and 6.7.9p10 says: "If an object that has static or thread storage duration is not initialized explicitly, then [...] any padding is initialized to zero bits", suggesting that padding can meaningfully hold concrete (non-unspecified) values, so not option (a). But then 7.24.4.1 (the memcmp function) implies that padding bytes within structures always hold unspecified values, which is option (a): Footnote 310 says "The contents of `holes' used as padding for purposes of alignment within structure objects are indeterminate." (Even in the standard there are no trap representations here, so indeterminate values are unspecified values.)
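A sketch (ours) of a test that separates these options; the layout assumed in the comments is typical but implementation-defined:

  #include <string.h>

  struct s { char c; int i; };     /* typically 3 padding bytes between c and i */

  unsigned char first_padding_byte(void) {
    struct s x;
    memset(&x, 0, sizeof x);       /* writes concrete zeros to the padding      */
    x.c = 'A';
    /* under (a), the padding bytes hold unspecified values throughout; under
       (b)-(d), the write to x.c also makes (some or all of) the padding
       unspecified; under (e)/(f), it is either zeroed or left alone - so
       whether this function is guaranteed to return 0 differs                  */
    return ((unsigned char *)&x)[1];
  }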

In practical usage this matters in several ways:

On the implementation side, we have not seen current implementations actually do wide writes for single member writes, though a few survey respondents say they have. Whether such implementations guarantee (or could reasonably be made to guarantee) that the extra bytes are zeroed is unknown to us. Rather more respondents believe that compilers will assume that padding contains unspecified values and will optimise away reads of it (effectively (a)), but we don't have a definite answer for that either. Multiple accesses of adjacent members, e.g. in a structure copy, might be aggregated into accesses that also read and write intervening padding.

On the platforms that we are familiar with, padding is determined by the ABI specification of type layout (together with compiler flags or pragmas that permit structs to be packed, but we ignore those here), and "Each member is assigned to the lowest available offset with the appropriate alignment" (AMD64 ABI, for example). That means one cannot know the padding following a member without knowing the subsequent member type, which means this interacts with the semantics for type punning between related struct types that share a prefix (undefined behaviour by ISO but apparently widely used nonetheless).
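For example, assuming a typical ABI in which int requires 4-byte alignment:

  struct s1 { char c; };           /* no padding after c is needed here         */
  struct s2 { char c; int i; };    /* 3 padding bytes after c, to align i       */
  /* the padding following the member c depends on the type of the member
     that follows it, not on c itself                                           */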

There is also an interaction with the concurrency semantics: it is legal for threads to write to adjacent members of a struct without any synchronisation (this does not constitute a data race), so if one chooses any of (b,c,e), one must take care to ensure that those notional writes to padding do not give rise to spurious data races in the semantics.

Our best suggestion at present is (d): allow padding to contain non-unspecified-value values, and, when a member is written, deem the semantics as also having written symbolic unspecified values to subsequent padding.

In Section 3.3 of our N2013 we collect 9 specific questions (Q60-68) which address some of these choices. We give concrete examples for each there, but for brevity just summarise here.

Implementations have to be allowed to do a structure copy by copying all the bytes of the structure, which will copy padding, or by copying just the members, which will not. Options (a,b,c,d) permit this implicitly as a consequence of the member-write semantics; options (e,f) would need structure writes to be special-cased (and that might cause problems w.r.t. aggregation of member writes into structure writes).

These discriminate between most of the above options (we could but have not yet added tests that discriminate between the "adjacent" and "subsequent" variants).

Our (currently preferred) Option (d) gives (yes, no, no, no, yes, yes) for these.

This is a corner case that is apparently used in practice but which could cause problems for (b-f): in general one could not know how much memory to write notional unspecified values (or zeros) to, so the answer would have to be "no". One could intersect with the allocation footprint to allow it in some cases.

This addresses the question of whether an implementation is allowed to use padding for its own purposes, to maintain metadata. We believe not, and hence that Q68 should be "yes".

Clarifying the C memory object model: pointer provenance

[See Questions 3/15, 4/15, and 5/15 of our survey, Section 2.1-2.9 (Q1-20) of our N2013, and DR260]

C pointer values could traditionally be considered to be concrete numeric values (our survey indicates that many still regard them this way). However, the DR260 Committee Response suggests otherwise, hinting at a notion of provenance carried by pointer values:

"Implementations are permitted to track the origins of a bit-pattern and treat those representing an indeterminate value as distinct from those representing a determined value. They may also treat pointers based on different origins as distinct even though they are bitwise identical."

Current compilers appear to follow this, using it to justify alias analysis based on provenance distinctions. However, DR260CR leaves many questions unclear. We enumerate those (with examples) in our notes30.pdf; here we suggest a specific proposal for a provenance-aware semantics and discuss how it addresses those questions.

The basic idea is to associate a provenance with every pointer value, essentially identifying the original allocation the pointer is derived from. This is for the "C abstract machine" as defined in the standard: compilers might rely on provenance, but one would not expect normal implementations to record or manipulate provenance at runtime (though dynamic or static analysis tools might).

Then there are many specific choices of how provenance is affected by arithmetic operations and suchlike. We first discuss the questions and then summarise our proposal.

Pointer provenance

Here DR260CR clearly says yes. Our experimental data shows cases where recent versions of GCC and ICC do assume non-aliasing of pointers with identical representation values but distinct provenance. This is incompatible with a concrete semantics of pointers (where they are fully characterised by their representation values). Tracking of provenance in the "abstract machine" is therefore clearly necessary to make these compilers sound with respect to the standard.
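The canonical example (in the style of the tests in our N2013; whether x and y are in fact allocated adjacently is implementation-dependent):

  #include <stdio.h>
  #include <string.h>

  int x = 1, y = 2;

  int main(void) {
    int *p = &x + 1;                      /* one-past-the-end of x             */
    int *q = &y;
    if (memcmp(&p, &q, sizeof p) == 0) {  /* identical representation values   */
      *p = 11;                            /* but distinct provenance: the
                                             compiler may assume this store
                                             cannot affect y                   */
      printf("%d\n", y);                  /* recent GCC/ICC have been observed
                                             to print 2 here rather than 11    */
    }
    return 0;
  }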

This is also allowed according to DR260CR. We have observed GCC regarding two pointers with different provenance as nonequal (with ==) even though they have the same representation value. This happens in some circumstances but not others, so we suggest that whether pointer equality takes provenance into account or not should be made indeterminate in the standard (again to make the observed compiler behaviour sound with respect to the standard). Note that requiring equality to always take provenance into account would require implementations to track provenance at runtime.

The ISO C11 standard text is too strong here: 6.5.9p6 says "Two pointers compare equal if and only if both are [...] or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space", which requires such pointers to compare equal (reasonable pre-DR260CR, but not after it). We don't expect programmers to rely on that behaviour and GCC does not satisfy it, so, to be consistent with DR260CR and with the indeterminate behaviour we suggest, it should permit them to compare equal or non-equal.
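For example (a variant of the previous sketch):

  int x = 1, y = 2;

  _Bool one_past_eq(void) {
    return &x + 1 == &y;   /* even if y happens to immediately follow x in
                              memory, GCC has been observed to give 0 (false)
                              here in some circumstances, despite 6.5.9p6
                              requiring the comparison to be true              */
  }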

Pointer provenance via integer types

ISO C11 optionally allows implementations to provide the type intptr_t (along with an unsigned variant) with guaranteed round-trip properties for pointer/integer casts. However it seems to be common practice (e.g. in Linux) to extend these properties to unsigned long, where that type is large enough to hold pointer values. We suggest that this be permitted iff unsigned long is indeed large enough.

Given the type intptr_t, this asks whether one can return to a concrete view of pointers, by casts to intptr_t followed by integer arithmetic and casting back to a pointer type. Here again, we observe GCC behaving the same as with Q1, reasoning that pointers obtained in this way cannot alias even if they have the same numerical values. This observation is reinforced by the GCC documentation, which mentions an "original pointer" associated with integer values cast to pointer type, so the answer seems to be "yes". This leads to many more questions regarding the specifics of how provenance information affects the semantics of each integer operator. Some of these are discussed in the next subsection and the remainder are given a complete treatment in the summary of our memory model proposal at the end.
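A sketch (ours) of the idiom in question:

  #include <stdint.h>

  int a[2];

  int *roundtrip(void) {
    intptr_t i = (intptr_t)&a[0];
    intptr_t j = i + sizeof(int);  /* plain integer arithmetic on the address   */
    return (int *)j;               /* numerically equal to &a[1], but GCC
                                      reasons about an "original pointer" for
                                      the cast, i.e. tracks provenance through
                                      the integer operations                    */
  }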

The standard leaves conversions between integer and pointer types implementation-defined (6.3.2.3p{5,6}), but it is common practice to use unused pointer bits (either low-order bits from alignment requirements or high-order bits beyond the maximum address range). We suggest that the set of unused bits for pointer types of each alignment should be made implementation-defined, to make this practice legal.

Moreover, where the standard does give a guarantee, e.g. for round-trips through intptr_t (7.20.1.4p1), it says only that the result "will compare equal". In a provenance-aware semantics, that may not be enough to make the result usable to reference memory; the standard text should be strengthened here to guarantee that.

DR260CR does not address this. GCC did at one point do this, but it was regarded as a bug and fixed. We have observed it in Clang. We believe that integer equality testing should not be affected by provenance, i.e. "no".

Pointers involving multiple provenances

DR260CR does not address this, but it is uncontroversially "yes": an intra-object pointer subtraction, say between the addresses of two elements of an array, should give a provenance-free integer offset that can then be used for indexing into this or other arrays.
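For example (a sketch):

  #include <stddef.h>

  int a[10], b[10];

  void f(void) {
    ptrdiff_t off = &a[7] - &a[2]; /* intra-object subtraction: the plain integer 5 */
    b[off] = a[off];               /* the offset carries no provenance and can be
                                      used to index this or other arrays            */
  }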

This is asking about pointers that have multiple provenances, which is not addressed in DR260CR or current GCC or Clang compiler documentation. Our experiments and our survey responses both suggest that compilers do not in general support it, and we imagine it is uncommon in practice. However, there do seem to be specific important use cases, including the Linux and FreeBSD per-CPU variable implementations - though it is unclear whether these are between multiple allocations in the C sense. These might be dealt with by an attribute such as the GCC may_alias - though the documentation for that refers only to type-based alias analysis, not to "provenance-based" alias analysis. This needs further discussion, but we tentatively suggest "no".

(Given that, Q10 is not useful.)

This is also a question about pointers with multiple provenances, which (in a provenance-aware semantics) are needed to make the idiom legal. While it may have been common practice when memory space was more limited, that seems no longer to be the case. We don't know whether current compiler alias analysis permits it or not. Our suggested semantics would not allow it.

For our suggested semantics, the answer is "yes", which seems the most intuitive for programmers. Again the status of current compiler implementation needs to be checked.

Pointer provenance via pointer representation copying

The ISO C11 text does not explicitly address this. In a pre-provenance semantics, before DR260, it did not need to, but now (as it surely should be allowed) one needs to guarantee that the result has the appropriate provenance to make it usable.

One could allow it by special-casing memcpy() to preserve provenance, but the following questions suggest a less ad hoc approach.

ISO C11 and DR260CR again do not mention this explicitly (though the 6.5p6 effective type text weakly implies it is allowed). We believe it is widely relied on.
Our proposed semantics makes it legal by regarding each representation byte (as an integer value) as having the provenance of the original pointer, and the result pointer, being composed of representation bytes with that provenance, as having the same.
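For example, a user-written byte-by-byte copy of a pointer value (a sketch; the function names are ours):

  #include <stddef.h>

  /* copy n bytes, as memcpy does */
  void user_copy(unsigned char *dst, const unsigned char *src, size_t n) {
    for (size_t i = 0; i < n; i++)
      dst[i] = src[i];           /* each representation byte, as an integer value,
                                    carries the provenance of the original pointer */
  }

  int x = 1;

  int deref_copied(void) {
    int *p = &x, *q;
    user_copy((unsigned char *)&q, (const unsigned char *)&p, sizeof p);
    return *q;                   /* q is composed of bytes with p's provenance,
                                    so this access to x is legal in our proposal   */
  }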

Whether this is supported by ISO C11 is unclear. Programs that explicitly swap out memory to disc and swap it back in would require this to work. Our proposal allows it to some extent.

One might imagine forging pointers via control flow, e.g. by testing an unprovenanced integer value for equality against a valid pointer and then using the integer as if it had the same provenance as the pointer. We don't expect that this is relied on in practice, and our proposed semantics does not permit it - we track provenance only through dataflow. This needs to be discussed with respect to current compiler analysis behaviour.

Pointer provenance and union type punning

The ISO standard says little about these questions, but our survey responses suggest that it is fairly common for implementations to satisfy them and for programmers to exploit them. Following the same choices as we make for provenance of representation bytes, our suggested model permits them.

Pointer provenance via IO

This is allowed in ISO C11 through the use of the %p conversion specifier for fprintf() and fscanf(). Our survey results are clear that such marshalling is used in practice. Given the following quote from the standard:

"If the input item is a value converted earlier during the same program execution, the pointer that results shall compare equal to that value"

We suggest that the pointers output during an execution should be recorded along with their provenance, in order to be reinjected when these representation values are input later during the execution.
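For example (a sketch using the in-memory snprintf/sscanf analogues of fprintf/fscanf):

  #include <stdio.h>

  int x = 1;

  int main(void) {
    char buf[64];
    void *p = &x, *q = NULL;
    snprintf(buf, sizeof buf, "%p", p);  /* marshal the pointer to text         */
    sscanf(buf, "%p", &q);               /* ...and back: the standard requires q
                                            to compare equal to p; we suggest it
                                            also be given p's provenance        */
    return 0;
  }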

ISO C11 makes this undefined behaviour, and this is consistent with an abstract view of pointers. However embedded programs and others dealing with memory-mapped devices do require this to work. Our suggestion is to introduce an implementation-defined set of addresses (which may depend on linking) for which the creation of such pointers is allowed.

Summary of our proposal for abstract (provenance-aware) pointers

To summarise: we propose associating a provenance with every pointer value, essentially identifying the original allocation the pointer is derived from, as part of the "C abstract machine" as defined in the standard. Compilers might rely on provenance, but one would not expect normal implementations to record or manipulate provenance at runtime (though dynamic or static analysis tools might).

Clarifying the C memory object model: null pointers

[See Questions 12/15 and 13/15 of our survey, and Section 2.13 (Q28-30) of our N2013]

ISO C11 permits the construction of null pointers by casting from integer constant zero expressions, but not from other integer values that happen to be zero (6.3.2.3p3): "An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function."

However, in practice it seems that code often does rely on being able to produce null pointers from other zero integer values, or from all-zero representation bytes, and a survey respondent suggests this is sound for all current GCC targets. The only exceptions we are aware of are now-obsolete segmented memory systems (IBM AS/400?) in which pointer representations included a non-zero segment selector, and perhaps some current embedded systems. Summarising, our notes30.pdf asks:

In ISO C11 the answers are (no, no, no). All these could be reconciled with practice simply by making the set of null pointer representations an implementation-defined set, thus requiring it to be documented, and allowing Q28 iff that set contains just a single element with all-zeros representation.

Another common idiom in practice is to take the addresses of members of a null struct pointer to calculate their offsets, as in Q36 below. It's unclear whether or not that is well-defined in ISO C11, but there seems no reason to forbid it, at least where the implementation-defined set of null pointer representations is just the singleton zero.
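The idiom is essentially a hand-rolled offsetof (a sketch):

  #include <stddef.h>

  struct s { char c; int i; };

  size_t offset_of_i(void) {
    return (size_t)&(((struct s *)0)->i);  /* the classic idiom; cf. the
                                              standard offsetof(struct s, i)   */
  }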

Clarifying the C memory object model: pointer relational comparison (with <, >, <=, or >=)

[See Question 7/15 of our survey, and Section 2.12 (Q25-27) of our N2013]

Here the ISO standard seems to be significantly more restrictive than common practice. In ISO C11 there is first a type constraint: 6.5.8p2 "both operands are pointers to qualified or unqualified versions of compatible object types."

Then 6.5.8p5 allows comparison of pointers only to the same object (or one-past) or to members of the same array, structure, or union: "When two pointers are compared, the result depends on the relative locations in the address space of the objects pointed to. If two pointers to object types both point to the same object, or both point one past the last element of the same array object, they compare equal. If the objects pointed to are members of the same aggregate object, pointers to structure members declared later compare greater than pointers to members declared earlier in the structure, and pointers to array elements with larger subscript values compare greater than pointers to elements of the same array with lower subscript values. All pointers to members of the same union object compare equal. If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P. In all other cases, the behavior is undefined."

(Similarly to 6.5.6p7 for pointer arithmetic, 6.5.8p4 treats all non-array element objects as arrays of size one for this: 6.5.8p4 "For the purposes of these operators, a pointer to an object that is not an element of an array behaves the same as a pointer to the first element of an array of length one with the type of the object as its element type.")

This rules out comparisons between pointers to two separately allocated objects, and comparisons between a pointer to a structure member and one to a sub-member of another member.

In practice, comparisons between separately allocated objects seem to be commonly relied on, e.g. for lock ordering, to build collections, and for functions like memmove. A survey respondent said it "is likely to work in practice" for GCC. The only case where it might not work that we are aware of is that of implementations on segmented architectures, where the obvious runtime comparison might ignore the segment ID. If that is still a real issue, we suggest it be made an implementation-defined question whether inter-object relational comparison is permitted, to bring the ISO standard in line with implementation and usage practice.
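For example, the lock-ordering idiom, sketched here with C11 mutexes (assuming a and b point to distinct objects):

  #include <threads.h>

  /* acquire two locks in a globally consistent order, determined by an
     inter-object relational comparison of their addresses - undefined
     behaviour in ISO C11, but widely relied on in practice */
  void lock_both(mtx_t *a, mtx_t *b) {
    if ((char *)a < (char *)b) { mtx_lock(a); mtx_lock(b); }
    else                       { mtx_lock(b); mtx_lock(a); }
  }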

For comparison between a pointer to a structure member and one to a sub-member of another member, of compatible object types, we see no reason to forbid it, and suggest the standard is changed accordingly.

For comparison between pointers to objects of incompatible types, the only cases we can imagine where it might not work are rather exotic implementations, though real code may normally cast to (void *).

Summarising, for the following three questions the current ISO C11 position is (no, no, no), while we suggest (implementation-defined, yes) for the first two and do not have a position on the last.

Clarifying the C memory object model: exploiting unused bits in pointers

[See Section 2.2.4 (Q6) of our N2013]

It is common in practice to use unused low- or high-order bits in pointers to store additional information, e.g. via casts to an integer type and bitwise operations, but the status of this in the ISO C11 standard is unclear.

We suggest that the set of unused bits of pointer types that can be used for such purposes be required to be implementation-defined (perhaps for each possible alignment, or in the limit for each such type).
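For example (a sketch; it assumes, implementation-specifically, that the pointers involved have at least 2-byte alignment, so that the low-order bit of their representation is otherwise always zero):

  #include <stdint.h>

  /* store a one-bit flag in the low-order bit of a suitably aligned pointer */
  void *tag(void *p, int flag) {
    return (void *)((uintptr_t)p | (uintptr_t)(flag & 1));
  }

  void *untag(void *p) {
    return (void *)((uintptr_t)p & ~(uintptr_t)1);
  }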

Clarifying the C memory object model: out-of-bounds pointer arithmetic

[See Question 9/15 of our survey, and Section 2.14 (Q31-33) of our N2013]

The ISO standard permits only very limited pointer arithmetic, restricting the formation of pointer values. First, there is arithmetic within an array: 6.5.6 Additive operators (6.5.6p8,9) permits one to add a pointer and integer (or subtract an integer from a pointer) only within the start and one past the end of an array object, inclusive. 6.5.6p7 adds "For the purposes of these operators, a pointer to an object that is not an element of an array behaves the same as a pointer to the first element of an array of length one with the type of the object as its element type". Subtraction of two pointers is permitted only if both are in a similar range (and only if the result is representable in the result type).

Second, 6.3.2.3p7 says that one can do pointer arithmetic on character-type pointers to access representation bytes: "[...] When a pointer to an object is converted to a pointer to a character type, the result points to the lowest addressed byte of the object. Successive increments of the result, up to the size of the object, yield pointers to the remaining bytes of the object."

In practice the survey responses make clear that there are real differences here. On the one hand, much real code does transiently construct out-of-bounds pointer values by pointer arithmetic, bringing them back into bounds before using them for accesses; most respondents (73%) assume this works, and Clang's -fsanitize=undefined deliberately doesn't check for it. On the other hand, others reply that it will not in general work with current compilers, e.g.: "this is not safe; compilers may optimise based on pointers being within bounds".

Possible tricky cases include (1) hardware that does bounds checking, (2) platforms where a transient insufficiently-aligned value cannot be represented at the given pointer type, (3) pointer wrapping at values less than the obvious word size, and (4) pointer arithmetic overflow. How much these matter in current practice is unclear to us.

Summarising, the ISO C11 answers to the following are (no, no, no); real code relies widely at least on the first; and it is not always guaranteed to work in current implementations - but it is unclear when it does. The question (for compiler authors) is thus to articulate when it is guaranteed to work, preferably in a way that can be codified in the standard.
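The transient idiom at issue is, roughly (our sketch):

  int a[10];

  int *f(void) {
    int *p = a - 1;    /* out of bounds: undefined behaviour in ISO C11        */
    p = p + 1;         /* transiently out of bounds, brought back into bounds
                          before any access                                    */
    return p;          /* most respondents (73%) assume this works             */
  }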

Clarifying the C memory object model: pointer lifetime end

[See Question 8/15 of our survey, and Section 2.17 (Q43) of our N2013]

The ISO C11 text makes all pointers to an object indeterminate at the end of its lifetime: 6.2.4 Storage durations of objects says (6.2.4p2) "If an object is referred to outside of its lifetime, the behavior is undefined. The value of a pointer becomes indeterminate when the object it points to (or just past) reaches the end of its lifetime."

This makes accesses via that pointer undefined behaviour. In the absence of trap representations at pointer types, it also means that comparisons, representation-byte accesses, pointer arithmetic, and member offset calculations will not have useful results - depending on the choices elsewhere, they may give unspecified values - but will not give undefined behaviour (other authors differ on this last, regarding all those as giving undefined behaviour).

(This side-effect of lifetime end on all pointer values that point to the object is a very unusual aspect of ISO C compared with other programming language definitions.)

However, in practice most survey respondents (66%) believe this will work, and they give a number of use-cases in real code.

Then one also has to consider what happens to integer values derived from pointers (e.g. intptr_t values cast from pointers, or pointer representation bytes) when the lifetime of the original object ends.

Summarising: for ISO C11 the following is "no", but for C in practice it seems to be commonly "yes". It's unclear what current implementations do. We suggest that this be made an implementation-defined property, expecting most implementations to support such equality tests (and also access to representation bytes etc.). And we suggest that all integer values are left unchanged at lifetime end.
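For example (a sketch of such an equality test):

  #include <stdlib.h>

  _Bool same_address(void) {
    char *p = malloc(1);
    free(p);               /* in ISO C11 the value of p is now indeterminate   */
    char *q = malloc(1);   /* q may well be allocated at the same address      */
    return p == q;         /* "no" in ISO C11, but such comparisons are
                              commonly assumed to work in practice             */
  }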

Clarifying the C memory object model: effective types

[See Question 11/15 of our survey, and Section 4 (Q73-81) of our N2013]

The survey makes clear that this (roughly, using memory with a character declared type as storage for values of other types) is widely relied on, but the ISO standard disallows it (6.5p7), and we also see, for GCC: "No, this is not safe (if it's visible to the compiler that the memory in question has unsigned char as its declared type)".

The question (for compiler authors) is thus to articulate when it is guaranteed to work, preferably in a way that can be codified in the standard.
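A sketch of the idiom at issue, on our reading of the GCC comment above:

  /* force suitable alignment, so that only the effective-type question remains */
  _Alignas(int) static unsigned char buf[sizeof(int)];

  void store_int(void) {
    int *p = (int *)buf;
    *p = 1;    /* disallowed by 6.5p7: the memory has unsigned char as its
                  declared type, which an int lvalue may not access            */
  }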

There are several more questions in Section 4 (Q73-81) of our N2013 that we postpone for now.