Document number: | WG21 N4192 |
Date: | 2014-10-13 |
Project: | Programming Language C++ |
Reference: | ISO/IEC IS 14882:2003 |
Reply to: | William M. Miller |
Edison Design Group, Inc. | |
wmm@edg.com |
This document contains the C++ core language issues on which the Committee (J16 + WG21) has not yet acted, that is, issues with status "Ready," "Tentatively Ready," "Review," "Drafting," and "Open."
This document is part of a group of related documents that together describe the issues that have been raised regarding the C++ Standard. The other documents in the group are:
Section references in this document reflect the section numbering of document WG21 N3936.
The purpose of these documents is to record the disposition of issues that have come before the Core Language Working Group of the ANSI (INCITS PL22.16) and ISO (WG21) C++ Standard Committee.
Some issues represent potential defects in the ISO/IEC IS 14882:2011 document and corrected defects in the earlier 2003 and 1998 documents; others refer to text in the working draft for the next revision of the C++ language and not to any Standard text. Issues are not necessarily formal ISO Defect Reports (DRs). While some issues will eventually be elevated to DR status, others will be disposed of in other ways. (See Issue Status below.)
The most current public version of this document can be found at http://www.open-std.org/jtc1/sc22/wg21. Requests for further information about these documents should include the document number, reference ISO/IEC 14882:2011, and be submitted to the InterNational Committee for Information Technology Standards (INCITS), 1250 Eye Street NW, Suite 200, Washington, DC 20005, USA.
Information regarding C++ standardization can be found at http://isocpp.org/std.
Revision 91, 2014-10-13: Incorporated deliberations of drafting review teleconferences held 2014-07-14 and 2014-10-06. Added new issue 1947.
Revision 90, 2014-07-07: Issue 1715 was returned to "drafting" status in light of an alternative suggestion for its resolution. Issue 1927 was closed as a duplicate of issue 1695. Reflected the deliberations of CWG at the 2014-06 (Rapperswil) meeting. Added new issues 1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939, 1940, 1941, 1942, 1943, 1944, 1945, and 1946.
Revision 89, 2014-05-27: Issues 1351, 1356, 1465, 1590, 1639, 1708, and 1810 were returned to "review" status for further discussion. Restored issue 1397 to "ready"; it had incorrectly been moved back to "drafting" because of a misunderstood comment. Added new issues 1866, 1867, 1868, 1869, 1870, 1871, 1872, 1873, 1874, 1875, 1876, 1877, 1878, 1879, 1880, 1881, 1882, 1883, 1884, 1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893, 1894, 1895, 1896, 1897, 1898, 1899, 1900, 1901, 1902, 1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918, 1919, 1920, 1921, 1922, 1923, 1924, 1925, 1926, 1927, 1928, 1929, 1930, and 1931.
Issues progress through various statuses as the Core Language Working Group and, ultimately, the full PL22.16 and WG21 committees deliberate and act. For ease of reference, issues are grouped in these documents by their status. Issues have one of the following statuses:
Open: The issue is new or the working group has not yet formed an opinion on the issue. If a Suggested Resolution is given, it reflects the opinion of the issue's submitter, not necessarily that of the working group or the Committee as a whole.
Drafting: Informal consensus has been reached in the working group and is described in rough terms in a Tentative Resolution, although precise wording for the change is not yet available.
Review: Exact wording of a Proposed Resolution is now available for an issue on which the working group previously reached informal consensus.
Ready: The working group has reached consensus that a change in the working draft is required, the Proposed Resolution is correct, and the issue is ready to forward to the full Committee for ratification.
Tentatively Ready: Like "ready" except that the resolution was produced and approved by a subset of the working group membership between meetings. Persons not participating in these between-meeting activities are encouraged to review such resolutions carefully and to alert the working group with any problems that may be found.
DR: The full Committee has approved the item as a proposed defect report. The Proposed Resolution in an issue with this status reflects the best judgment of the Committee at this time regarding the action that will be taken to remedy the defect; however, the current wording of the Standard remains in effect until such time as a Technical Corrigendum or a revision of the Standard is issued by ISO.
Accepted: Like a DR except that the issue concerns the wording of the current Working Paper rather than that of the current International Standard.
TC1: A DR issue included in Technical Corrigendum 1. TC1 is a revision of the Standard issued in 2003.
CD1: A DR issue not resolved in TC1 but included in Committee Draft 1. CD1 was advanced for balloting at the September, 2008 WG21 meeting.
CD2: A DR issue not resolved in CD1 but included in the Final Committee Draft advanced for balloting at the March, 2010 WG21 meeting.
C++11: A DR issue not resolved in CD2 but included in ISO/IEC 14882:2011.
CD3: A DR issue not resolved in C++11 but included in the Committee Draft advanced for balloting at the April, 2013 WG21 meeting.
DRWP: A DR issue whose resolution is reflected in the current Working Paper. The Working Paper is a draft for a future version of the Standard.
WP: An Accepted issue whose resolution is reflected in the current Working Paper.
Dup: The issue is identical to or a subset of another issue, identified in a Rationale statement.
NAD: The working group has reached consensus that the issue is not a defect in the Standard. A Rationale statement describes the working group's reasoning.
Extension: The working group has reached consensus that the issue is not a defect in the Standard but is a request for an extension to the language. The working group expresses no opinion on the merits of an issue with this status; however, the issue will be maintained on the list for possible future consideration as an extension proposal.
Concepts: The issue relates to the “Concepts” proposal that was removed from the working paper at the Frankfurt (July, 2009) meeting and hence is no longer under consideration.
Concurrency: The issue deals with concurrency and is to be handled by the Concurrency Working Group within WG21.
According to 2.3 [lex.charset] paragraph 3,
The basic execution character set and the basic execution wide-character set shall each contain all the members of the basic source character set, plus control characters representing alert, backspace, and carriage return, plus a null character (respectively, null wide character), whose representation has all zero bits.
It is not clear that a portable program can examine the bits of the representation; instead, it would appear to be limited to examining the bits of the numbers corresponding to the value representation (3.9.1 [basic.fundamental] paragraph 1). It might be more appropriate to require that the null character value compare equal to 0 or '\0' rather than specifying the bit pattern of the representation.
There is a similar issue for the definition of shift, bitwise and, and bitwise or operators: are those specifications constraints on the bit pattern of the representation or on the values resulting from the interpretation of those patterns as numbers?
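For example (an illustrative sketch, not part of the issue as submitted), the difference matters for code that inspects the object representation rather than the value:
#include <cstring>
wchar_t nul = L'\0';
unsigned char zeros[sizeof(wchar_t)] = {};
// The "all zero bits" phrasing suggests this comparison of object representations
// must succeed, but a portable program can only rely on the value comparison below.
bool all_zero_bits = std::memcmp(&nul, zeros, sizeof nul) == 0;
bool value_is_zero = (nul == 0);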
Proposed resolution (February, 2014):
Change 2.3 [lex.charset] paragraph 3 as follows:
The basic execution character set and the basic execution wide-character set shall each contain all the members of the basic source character set, plus control characters representing alert, backspace, and carriage return, plus a null character (respectively, null wide character), whose representation has all zero bits value is 0. For each basic execution character set...
The intent of char16_t string literals, as evident from 2.14.5 [lex.string] paragraph 9, is that they be encoded in UTF-16, that is, including surrogate pairs to represent code points outside the basic multi-lingual plane:
A single c-char may produce more than one char16_t character in the form of surrogate pairs.
Paragraph 15, however, is inconsistent with this approach, saying,
Escape sequences and universal-character-names in non-raw string literals have the same meaning as in character literals (2.14.3 [lex.ccon]), except that the single quote ' is representable either by itself or by the escape sequence \', and the double quote " shall be preceded by a \.
The reason is that code points outside the basic multi-lingual plane are ill-formed in char16_t character literals:
A character literal that begins with the letter u, such as u'y', is a character literal of type char16_t. The value of a char16_t literal containing a single c-char is equal to its ISO 10646 code point value, provided that the code point is representable with a single 16-bit code unit. (That is, provided it is a basic multi-lingual plane code point.) If the value is not representable within 16 bits, the program is ill-formed.
It should be clarified that this restriction does not apply to char16_t string literals.
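For example (illustrative only, using an arbitrary code point outside the basic multi-lingual plane):
char16_t c = u'\U0001F34C';         // ill-formed: not representable in a single 16-bit code unit
const char16_t* s = u"\U0001F34C";  // intended to be well-formed, encoded as a surrogate pair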
Proposed resolution (February, 2014):
Change 2.14.5 [lex.string] paragraph 16 as follows:
Escape sequences and universal-character-names in non-raw string literals have the same meaning as in character literals (2.14.3 [lex.ccon]), except that the single quote ' is representable either by itself or by the escape sequence \', and the double quote " shall be preceded by a \, and except that a universal-character-name in a char16_t string literal may yield a surrogate pair. In a narrow string literal...
In explaining the relationship between preprocessing tokens and tokens, 2.5 [lex.pptoken] paragraph 4 contains the following example:
[Example: The program fragment 1Ex is parsed as a preprocessing number token (one that is not a valid floating or integer literal token), even though a parse as the pair of preprocessing tokens 1 and Ex might produce a valid expression (for example, if Ex were a macro defined as +1).
This analysis does not take into account the addition of user-defined literals. In fact, 1Ex matches the rule for a user-defined-integer-literal, which is then ill-formed because it uses a reserved ud-suffix (2.14.8 [lex.ext] paragraph 10), as well as (presumably) because of a lookup failure for a matching literal operator, raw literal operator, or literal operator template.
More generally, it might be preferable to eliminate the restriction on the use of a reserved ud-suffix and rely simply on the fact that it is ill-formed to declare a literal operator, raw literal operator, or literal operator template with a reserved literal suffix identifier (17.6.4.3.5 [usrlit.suffix], cf 13.5.8 [over.literal] paragraph 1).
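For example (a sketch of the scenario described above, not wording from the issue):
#define Ex +1
int a = 1Ex;   // a single pp-number, so this is an (ill-formed) user-defined-integer-literal
               // with the reserved ud-suffix Ex, not the expression 1 +1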
Proposed resolution (June, 2014):
Change 2.5 [lex.pptoken] paragraph 4 as follows:
[Example: The program fragment 1Ex 0xe+foo is parsed as a preprocessing number token (one that is not a valid floating or integer literal token), even though a parse as the pair of three preprocessing tokens 1 0xe, +, and Ex foo might produce a valid expression (for example, if Ex foo were a macro defined as +1). Similarly, the program fragment 1E1 is parsed as a preprocessing number (one that is a valid floating literal token), whether or not E is a macro name. —end example]
Delete 2.14.8 [lex.ext] paragraph 10:
Some identifiers appearing as ud-suffixes are reserved for future standardization (17.6.4.3.5 [usrlit.suffix]). A program containing such a ud-suffix is ill-formed, no diagnostic required.
Change 13.5.8 [over.literal] paragraph 1 as follows:
The string-literal or user-defined-string-literal in a literal-operator-id shall have no encoding-prefix and shall contain no characters other than the implicit terminating '\0'. The ud-suffix of the user-defined-string-literal or the identifier in a literal-operator-id is called a literal suffix identifier. [Note: some Some literal suffix identifiers are reserved for future standardization; see 17.6.4.3.5 [usrlit.suffix]. —end note] A declaration whose literal-operator-id uses such a literal suffix identifier is ill-formed; no diagnostic required.
Change 17.6.4.3.5 [usrlit.suffix] paragraph 1 as follows:
Literal suffix identifiers (13.5.8 [over.literal]) that do not start with an underscore are reserved for future standardization.
Additional note, May, 2014:
It has been suggested that the change to 2.5 [lex.pptoken] paragraph 4 in the proposed resolution would be simpler and better if it did not venture into questions about user-defined literals but simply relied on a string that is a valid pp-number but not a valid floating-point number, as was the case before the introduction of user-defined literals, e.g., 1.2.3.4. The issue has been returned to "review" status for discussion of this suggestion.
According to 3.2 [basic.def.odr] paragraph 3,
A function whose name appears as a potentially-evaluated expression is odr-used if it is the unique lookup result or the selected member of a set of overloaded functions (3.4 [basic.lookup], 13.3 [over.match], 13.4 [over.over]), unless it is a pure virtual function and its name is not explicitly qualified.
In the following example, consequently, S::f is odr-used but not defined, and (because it is an undefined odr-used inline function) a diagnostic is required:
namespace {
  struct S {
    inline virtual void f() = 0;
  };
  void (S::*p)() = &S::f;
}
However, S::f cannot be called through such a pointer-to-member, so forming a pointer-to-member should not cause a pure virtual function to be odr-used. There is implementation divergence on this point.
Proposed resolution (April, 2013):
Change 3.2 [basic.def.odr] paragraph 3 as follows:
...A virtual member function is odr-used if it is not pure. A function whose name appears as a potentially-evaluated expression is odr-used if it is the unique lookup result or the selected member of a set of overloaded functions (3.4 [basic.lookup], 13.3 [over.match], 13.4 [over.over]), unless it is a pure virtual function and either its name is not explicitly qualified or the expression forms a pointer to member (5.3.1). [Note:...
The rules for class scope in 3.3.7 [basic.scope.class] paragraph 1 include the following:
A name N used in a class S shall refer to the same declaration in its context and when re-evaluated in the completed scope of S. No diagnostic is required for a violation of this rule.
If reordering member declarations in a class yields an alternate valid program under (1) and (2), the program is ill-formed, no diagnostic is required.
The need for rule #3 is not clear; it would seem that any otherwise-valid reordering would have to violate rule #2 in order to yield a different interpretation. Taken literally, rule #3 would also apply to simply reordering nonstatic data members with no name dependencies at all. Can it be simply removed?
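For example (an illustrative sketch, not taken from the issue as submitted), a reordering that changes the interpretation of a name already violates rule #2:
typedef float T;
struct S {
  T a;            // refers to ::T in its context but to S::T when re-evaluated
  typedef int T;  // in the completed scope of S, violating rule #2
};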
Proposed resolution (June, 2014):
Delete the third item of 3.3.7 [basic.scope.class] paragraph 1 and renumber the succeeding items:
If reordering member declarations in a class yields an alternate valid program under (1) and (2), the program is ill-formed, no diagnostic is required.
One of the forms of pseudo-destructor-name is
    nested-name-specifier_opt type-name :: ~ type-name
Presumably the intent of this form is to allow the nested-name-specifier to designate a namespace; otherwise the
    nested-name-specifier_opt ~ type-name
production would be used.
Since one of the forms of nested-name-specifier is
    decltype-specifier ::
one can write something like p->decltype(x)::~Y(). However, the lookup rules in 3.4.3 [basic.lookup.qual] paragraph 6 are inappropriate for the decltype-specifier case:
If a pseudo-destructor-name (5.2.4 [expr.pseudo]) contains a nested-name-specifier, the type-names are looked up as types in the scope designated by the nested-name-specifier.
Since this form appears to be useless (use of a decltype-specifier is permitted after a ~, but only with no nested-name-specifier — but see issue 1586), perhaps it should be made ill-formed.
Proposed resolution (February, 2014):
Change the grammar in 5.2 [expr.post] paragraph 1 as follows:
In C++03, all namespace-scope names had external linkage unless explicitly declared otherwise (via static, const, or as a member of an anonymous union). C++11 now specifies that members of an unnamed namespace have internal linkage (see issue 1113). This change invalidated a number of assumptions scattered throughout the Standard that need to be adjusted:
3.5 [basic.link] paragraph 5 says,
a member function, static data member, a named class or enumeration of class scope, or an unnamed class or enumeration defined in a class-scope typedef declaration such that the class or enumeration has the typedef name for linkage purposes (7.1.3 [dcl.typedef]), has external linkage if the name of the class has external linkage.
There is no specification for the linkage of such members of a class with internal linkage. Formally, at least, that leads to the statement in paragraph 8 that such members have no linkage. This omission also contradicts the note in 9.3 [class.mfct] paragraph 3:
[Note: Member functions of a class in namespace scope have external linkage. Member functions of a local class (9.8 [class.local]) have no linkage. See 3.5 [basic.link]. —end note]
as well as the statement in 9.4.2 [class.static.data] paragraph 5,
Static data members of a class in namespace scope have external linkage (3.5 [basic.link]).
The footnote in 3.5 [basic.link] paragraph 8 says,
A class template always has external linkage, and the requirements of 14.3.1 [temp.arg.type] and 14.3.2 [temp.arg.nontype] ensure that the template arguments will also have appropriate linkage.
This is incorrect, since templates in unnamed namespaces now have internal linkage and template arguments are no longer required to have external linkage.
The statement in 7.1.1 [dcl.stc] paragraph 7 is now false:
A name declared in a namespace scope without a storage-class-specifier has external linkage unless it has internal linkage because of a previous declaration and provided it is not declared const.
The entire treatment of unique in 7.3.1.1 [namespace.unnamed] is no longer necessary, and the footnote is incorrect:
Although entities in an unnamed namespace might have external linkage, they are effectively qualified by a name unique to their translation unit and therefore can never be seen from any other translation unit.
Names in unnamed namespaces never have external linkage.
According to 11.3 [class.friend] paragraph 4,
A function first declared in a friend declaration has external linkage (3.5 [basic.link]).
This presumably is incorrect for a class that is a member of an unnamed namespace.
According to 14 [temp] paragraph 4,
A non-member function template can have internal linkage; any other template name shall have external linkage.
Taken literally, this would mean that a template could not be a member of an unnamed namespace.
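For example (illustrative only), the quoted passages mis-describe entities such as the following:
namespace {
  struct A {                        // A has internal linkage
    void f();                       // linkage not covered by 3.5 [basic.link] paragraph 5
    static int n;                   // described as having external linkage in 9.4.2 [class.static.data]
  };
  template<typename T> struct B {}; // a class template with internal linkage
  struct C { friend void g(); };    // g described as having external linkage in 11.3 [class.friend]
}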
Proposed resolution (April, 2013):
Change 3.5 [basic.link] paragraph 5 as follows:
In addition, a member function, static data member, a named class or enumeration of class scope, or an unnamed class or enumeration defined in a class-scope typedef declaration such that the class or enumeration has the typedef name for linkage purposes (7.1.3 [dcl.typedef]), has external linkage if the name of the class has external linkage the same linkage, if any, as the name of the class of which it is a member.
Change the footnote in 3.5 [basic.link] paragraph 8 as follows:
33) A class template always has external linkage, and the requirements of 14.3.1 [temp.arg.type] and 14.3.2 [temp.arg.nontype] ensure that the template arguments will also have appropriate linkage has the linkage of the innermost enclosing class or namespace in which it is declared.
Change 7.3.1.1 [namespace.unnamed] paragraph 1 as follows:
An unnamed-namespace-definition behaves as if it were replaced by
inline_opt namespace unique { /* empty body */ }
using namespace unique ;
namespace unique { namespace-body }
where inline appears if and only if it appears in the unnamed-namespace-definition, and all occurrences of unique in a translation unit are replaced by the same identifier, and this identifier differs from all other identifiers in the entire program. [Footnote: Although entities in an unnamed namespace might have external linkage, they are effectively qualified by a name unique to their translation unit and therefore can never be seen from any other translation unit. —end footnote] translation unit. [Example:...
Change the note in 9.3 [class.mfct] paragraph 3 as follows:
[Note: Member functions of a class in namespace scope have external linkage the linkage of that class. Member functions of a local class (9.8 [class.local]) have no linkage. See 3.5 [basic.link]. —end note]
Change 9.4.2 [class.static.data] paragraph 5 as follows:
Static data members of a class in namespace scope have external linkage the linkage of that class (3.5 [basic.link]).
Change 11.3 [class.friend] paragraph 4 as follows:
A function first declared in a friend declaration has external linkage the linkage of the namespace of which it is a member (3.5 [basic.link]). Otherwise, the function retains its previous linkage (7.1.1 [dcl.stc]).
Change 14 [temp] paragraph 4 as follows:
A template name has linkage (3.5 [basic.link]). A non-member function template can have internal linkage; any other template name shall have external linkage. Specializations (explicit or implicit) of a template that has internal linkage are distinct from all specializations in other translation units...
According to 3.5 [basic.link] paragraph 3,
A name having namespace scope (3.3.6 [basic.scope.namespace]) has internal linkage if it is the name of
...
a non-volatile variable that is explicitly declared const or constexpr and neither explicitly declared extern nor previously declared to have external linkage; or
...
It would be more precise and less confusing if the phrase “explicitly declared const” were replaced by saying that its type is const-qualified. This change would also allow removal of the reference to constexpr, which was added by issue 1112 because constexpr variables are implicitly const-qualified but not covered by the “explicitly declared” phrasing.
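For example (a sketch of the distinction, not wording from the issue), a variable can have const-qualified type without being “explicitly declared const”:
typedef const int CI;
CI n = 0;   // const-qualified type, but const does not appear in the declaration;
            // internal linkage is presumably intended here as well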
Proposed resolution (September, 2013):
Change the second bullet of 3.5 [basic.link] paragraph 3 as follows:
a non-volatile variable that is explicitly declared const or constexpr and of non-volatile const-qualified type that is neither explicitly declared extern nor previously declared to have external linkage; or
According to 3.6.2 [basic.start.init] paragraph 2,
Definitions of explicitly specialized class template static data members have ordered initialization. Other class template static data members (i.e., implicitly or explicitly instantiated specializations) have unordered initialization.
It is not clear whether this refers to static data members of explicit specializations of class templates or to explicit specializations of static data members of class template specializations. The wording also does not apply to static data member templates and non-member variable templates.
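For example (illustrative only, assuming a function init() with side effects):
int init();
template<typename T> struct S { static int a; };
template<typename T> int S<T>::a = init();   // instantiated specializations: unordered
template<> int S<int>::a = init();           // explicit specialization of the member: ordered?
template<> struct S<char> { static int a; };
int S<char>::a = init();                     // member of an explicit specialization: ordered?
template<typename T> int v = init();         // variable template: not covered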
Proposed resolution (February, 2014):
Change 3.6.2 [basic.start.init] paragraph 2 as follows:
...Dynamic initialization of a non-local variable with static storage duration is either ordered or unordered. Definitions of explicitly specialized class template static data members have ordered initialization. Other class template static data members (i.e., implicitly or explicitly instantiated specializations) have unordered initialization. Other non-local variables with static storage duration have ordered initialization unordered if the variable is an implicitly or explicitly instantiated specialization, and otherwise is ordered [Note: an explicitly specialized static data member or variable template specialization has ordered initialization. —end note]. Variables with ordered initialization...
According to 3.6.2 [basic.start.init] paragraph 2,
Constant initialization is performed:
if each full-expression (including implicit conversions) that appears in the initializer of a reference with static or thread storage duration is a constant expression (5.19 [expr.const]) and the reference is bound to an lvalue designating an object with static storage duration or to a temporary (see 12.2 [class.temporary]);
...
This wording should also permit the reference to be bound to an xvalue, e.g., a subobject of a temporary, and not just to a complete temporary.
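For example (an illustrative sketch, not part of the issue as submitted), one way such a binding can arise:
struct A { int x, y; };
const A& a = A{1, 2};      // bound to a complete temporary: covered by the current wording
const int& r = A{1, 2}.y;  // bound to an xvalue designating a subobject of a
                           // lifetime-extended temporary: not clearly covered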
Proposed resolution (February, 2014):
Change 3.6.2 [basic.start.init] paragraph 2 as follows (note that this resolution incorporates the overlapping change from the resolution of issue 1299):
...Constant initialization is performed:
if each full-expression (including implicit conversions) that appears in the initializer of a reference with static or thread storage duration is a constant expression (5.19 [expr.const]) and the reference is bound to an lvalue a glvalue designating an object with static storage duration, to a temporary object (see 12.2 [class.temporary]) or subobject thereof, or to a function;
...
In 3.7.4.1 [basic.stc.dynamic.allocation] paragraph 2, allocation functions are constrained to return a pointer that is different from any previously returned pointer that has not been passed to a deallocation function. This does not, for instance, prohibit returning a pointer to storage that is part of another object, for example, a pool of storage. The potential implications of this for aliasing should be spelled out.
(See also issues 1027 and 1116.)
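For example (a sketch of the concern, using a hypothetical replacement allocation function; error handling and alignment are omitted):
#include <cstddef>
char pool[1024];        // a statically allocated pool
std::size_t used = 0;
void* operator new(std::size_t n) {
  void* p = pool + used;
  used += n;
  return p;             // distinct from previously returned pointers, yet part of
}                       // another object (the array pool)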
Additional note (March, 2013):
One possibility to allow reasonable optimizations would be to require that allocation packages hide their storage in file-static variables, perhaps by adding wording such as:
Furthermore, p0 shall point to an object distinct from any other object that is accessible outside the implementation of the allocation and deallocation functions.
Additional note, April, 2013:
Concern was expressed that a pool class might provide an interface for iterating over all the pointers that were given out from the pool, and this usage should be supported.
Notes from the September, 2013 meeting:
CWG agreed that changes for this issue should apply only to non-placement forms.
Proposed resolution (February, 2014):
Change 3.7.4.1 [basic.stc.dynamic.allocation] paragraph 2 as follows:
...If the request succeeds, the value returned shall be a non-null pointer value (4.10 [conv.ptr]) p0 different from any previously returned value p1, unless that value p1 was subsequently passed to an operator delete. Furthermore, for the library allocation functions in 18.6.1.1 [new.delete.single] and 18.6.1.2 [new.delete.array], p0 shall point to a block of storage disjoint from the storage for any other object accessible to the caller. The effect of indirecting through a pointer returned as a request for zero size is undefined.36
The description of is_trivially_constructible in 20.10.4.3 [meta.unary.prop] paragraph 3 says,
is_constructible<T, Args...>::value is true and the variable definition for is_constructible, as defined below, is known to call no operation that is not trivial ( 3.9 [basic.types], 12 [special]).
This risks confusion when compared with the wording in 3.8 [basic.life] paragraph 1,
An object is said to have non-trivial initialization if it is of a class or aggregate type and it or one of its members is initialized by a constructor other than a trivial default constructor. [Note: initialization by a trivial copy/move constructor is non-trivial initialization. —end note]
The latter was written long before “trivial” became an important concept in its own right and uses the term differently from how it is used elsewhere in the Standard (as evidenced by the note referring to copy/move construction). The intent is to capture the idea that there is some actual initialization occurring; it should be rephrased to avoid the potential of confusion with the usage of “trivial” elsewhere.
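For example (illustrative only), under the current phrasing an object initialized by a trivial copy constructor has “non-trivial initialization” even though every operation involved is trivial in the Clause 12 sense:
struct T { int i; };   // a trivial class
T a = {1};
T b = a;               // trivial copy constructor, yet "non-trivial initialization"
                       // per 3.8 [basic.life]; "non-vacuous" avoids the clash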
Proposed resolution (February, 2014):
Change 3.8 [basic.life] paragraph 1 as follows:
The lifetime of an object is a runtime property of the object. An object is said to have non-trivial initialization non-vacuous initialization if it is of a class or aggregate type and it or one of its members is initialized by a constructor other than a trivial default constructor. [Note: initialization by a trivial copy/move constructor is non-trivial non-vacuous initialization. —end note] The lifetime of an object of type T begins when:
storage with the proper alignment and size for type T is obtained, and
if the object has non-trivial non-vacuous initialization, its initialization is complete.
The lifetime of an object...
According to 3.9.1 [basic.fundamental] paragraph 1,
For unsigned narrow character types, all possible bit patterns of the value representation represent numbers.
Presumably the intent is that each distinct bit pattern represents a different number, but this should be made explicit.
Proposed resolution (February, 2014):
Change 3.9.1 [basic.fundamental] paragraph 1 as follows:
...For unsigned narrow character types, all each possible bit patterns pattern of the value representation represent numbers represents a distinct number. These requirements...
4.7 [conv.integral] paragraph 3 says, regarding integral conversions,
If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
The values that can be represented in a bit-field are not well specified, except for the correspondence with the values of an enumeration in 7.2 [dcl.enum]. In particular, it is not clear whether a bit-field has a sign bit and whether bit-fields may have padding bits.
Another point to note in this wording: paragraph 1 describes the context as
A prvalue of an integer type can be converted to a prvalue of another integer type.
However, prvalues cannot be bit-fields, so the applicability of the mention of “bit-field width” in paragraph 3 is unclear.
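For example (a sketch of the question, not from the issue as submitted):
struct S { int bf : 3; };   // does bf have a sign bit? may it have padding bits?
S s;
s.bf = 100;                 // 100 cannot be represented in the bit-field; what exactly
                            // is implementation-defined about the stored value?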
Proposed resolution (February, 2014):
Change 4.7 [conv.integral] paragraph 3 as follows:
If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
Change 5.2.6 [expr.post.incr] paragraph 1 as follows:
...The result is a prvalue. The type of the result is the cv-unqualified version of the type of the operand. If the operand is a bit-field that cannot represent the incremented value, the resulting value of the bit-field is implementation-defined. See also 5.7 [expr.add] and 5.17 [expr.ass].
Change 5.17 [expr.ass] paragraph 6 as follows:
When the left operand of an assignment operator denotes a reference to T, the operation assigns to the object of type T denoted by the reference is a bit-field that cannot represent the value of the expression, the resulting value of the bit-field is implementation-defined.
Change the final bullet of 8.5 [dcl.init] paragraph 17 as follows:
The semantics of initializers are as follows...
...
...no user-defined conversions are considered. If the conversion cannot be done, the initialization is ill-formed. When initializing a bit-field with a value that it cannot represent, the resulting value of the bit-field is implementation-defined. [Note: An expression of type...
Similarly to issue 1738, it is not clear whether it is permitted to explicitly instantiate or specialize the call operator of a polymorphic lambda (via decltype).
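For example (an illustrative sketch of the question; the names are hypothetical):
auto lam = [](auto x) { return x; };
using Closure = decltype(lam);
template auto Closure::operator()(int) const;   // explicit instantiation of the call
                                                // operator template via decltype: permitted?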
Proposed resolution (February, 2014):
Add the following as a new paragraph following 5.1.2 [expr.prim.lambda] paragraph 21:
The closure type associated with a lambda-expression has an implicitly-declared destructor (12.4 [class.dtor]).
A member of a closure type shall not be explicitly instantiated (14.7.1 [temp.inst]), explicitly specialized (14.7.2 [temp.explicit]), or named in a friend declaration (11.3 [class.friend]).
According to 5.2.3 [expr.type.conv] paragraph 1,
A simple-type-specifier (7.1.6.2 [dcl.type.simple]) or typename-specifier (14.6 [temp.res]) followed by a parenthesized expression-list constructs a value of the specified type given the expression list. If the expression list is a single expression, the type conversion expression is equivalent (in definedness, and if defined in meaning) to the corresponding cast expression (5.4 [expr.cast]). If the type specified is a class type, the class type shall be complete. If the expression list specifies more than a single value, the type shall be a class with a suitably declared constructor (8.5 [dcl.init], 12.1 [class.ctor]), and the expression T(x1, x2, ...) is equivalent in effect to the declaration T t(x1, x2, ...); for some invented temporary variable t, with the result being the value of t as a prvalue.
This does not cover the cases when the expression-list contains a single braced-init-list (which is neither an expression nor more than a single value) or if it contains no expressions as the result of an empty pack expansion.
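For example (illustrative only; the helper make is hypothetical):
struct A { int i, j; };
A a = A({1, 2});       // the expression list is a single braced-init-list
template<typename T, typename... Args>
T make(Args... args) {
  return T(args...);   // with an empty pack, the expression list contains no expressions
}
int n = make<int>();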
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1299.
The specification of casting to an enumeration type in 5.2.9 [expr.static.cast] paragraph 10 does not require that the enumeration type be complete. Should it? (There is variation among implementations.)
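For example (a sketch of one case in which an enumeration type is incomplete, not from the issue as submitted):
enum E {
  e = static_cast<E>(0)   // E is incomplete here, since its underlying type is not yet
                          // determined; implementations vary in accepting this
};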
Proposed resolution (February, 2014):
Change 5.2.9 [expr.static.cast] paragraph 10 as follows:
A value of integral or enumeration type can be explicitly converted to an a complete enumeration type. The value is...
According to 5.3.1 [expr.unary.op] paragraph 3,
The result of the unary & operator is a pointer to its operand. The operand shall be an lvalue or a qualified-id. If the operand is a qualified-id naming a non-static member m of some class C with type T, the result has type “pointer to member of class C of type T” and is a prvalue designating C::m.
It is not clear whether this wording applies to variant members of C (i.e., members of nested anonymous unions) or only to its non-variant members. For example, given
struct A { union { int n; }; }; auto x = &A::n;
should the type of x be int A::* or int A::anon::*? Current implementations choose the former.
Proposed resolution (February, 2014):
Change 5.3.1 [expr.unary.op] paragraph 3 as follows:
The result of the unary & operator is a pointer to its operand. The operand shall be an lvalue or a qualified-id. If the operand is a qualified-id naming a non-static or variant member m of some class C with type T, the result has type “pointer to member of class C of type T” and is a prvalue designating C::m. Otherwise...
According to 5.3.4 [expr.new] paragraph 15,
[Note: unless an allocation function is declared with a non-throwing exception-specification (15.4 [except.spec]), it indicates failure to allocate storage by throwing a std::bad_alloc exception (Clause 15 [except], 18.6.2.1 [bad.alloc]); it returns a non-null pointer otherwise. If the allocation function is declared with a non-throwing exception-specification, it returns null to indicate failure to allocate storage and a non-null pointer otherwise. —end note] If the allocation function returns null, initialization shall not be done, the deallocation function shall not be called, and the value of the new-expression shall be null.
This wording applies even to the non-replaceable placement forms defined in 18.6.1.3 [new.delete.placement] that simply return the supplied pointer as the result of the allocation function. Compilers are thus required to check for a null pointer and avoid the initialization if one is used. This test is unnecessary overhead; it should be the user's responsibility to ensure that a null pointer is not used in these forms of placement new, just as for other cases when a pointer is dereferenced.
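For example (illustrative only), under the current wording the implementation must test the supplied pointer for null before performing the initialization:
#include <new>
void construct_in(void* buf) {
  int* p = new (buf) int(42);   // the reserved placement allocation function simply
                                // returns buf, yet a null check is still required
}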
Proposed resolution (February, 2014):
Change 5.3.4 [expr.new] paragraph 15 as follows:
[Note: unless an allocation function is declared with a non-throwing exception-specification (15.4 [except.spec]), it indicates failure to allocate storage by throwing a std::bad_alloc exception (Clause 15 [except], 18.6.2.1 [bad.alloc]); it returns a non-null pointer otherwise. If the allocation function is declared with a non-throwing exception-specification, it returns null to indicate failure to allocate storage and a non-null pointer otherwise. —end note] If the allocation function is a reserved placement allocation function (18.6.1.3 [new.delete.placement]) that returns null, the behavior is undefined. Otherwise, if the allocation function returns null, initialization shall not be done, the deallocation function shall not be called, and the value of the new-expression shall be null.
7.1.6.4 [dcl.spec.auto] paragraph 5 says that a placeholder type (presumably including decltype(auto)) can appear in a new-expression. However, 5.3.4 [expr.new] mentions only auto, not decltype(auto).
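For example (a sketch of the question):
int n = 42;
auto p = new auto(n);            // explicitly covered by 5.3.4 [expr.new]
auto q = new decltype(auto)(n);  // permitted by 7.1.6.4 [dcl.spec.auto] but not
                                 // mentioned in 5.3.4 [expr.new]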
Proposed resolution (February, 2014):
Change 5.3.4 [expr.new] paragraph 2 as follows:
If the auto type-specifier a placeholder type (7.1.6.4 [dcl.spec.auto]) appears in the type-specifier-seq of a new-type-id or type-id of a new-expression, the new-expression shall contain...
The list of causes for a false result of the noexcept operator does not include a new-expression with a non-constant array bound, which could result in an exception even if the allocation function that would be called is declared not to throw (see 5.3.4 [expr.new] paragraph 7).
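For example (an illustrative sketch, using a hypothetical class-specific allocation function):
#include <cstddef>
struct X {
  static void* operator new[](std::size_t) noexcept;   // declared not to throw
};
void f(int n) {
  bool b = noexcept(new X[n]);   // true under the current list of causes, yet the
                                 // new-expression can throw std::bad_array_new_length
}                                // for an erroneous bound (5.3.4 [expr.new] paragraph 7)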
Proposed resolution (June, 2012):
This issue is resolved by the resolution of issue 1351.
The provision to treat non-array objects as if they were array objects with a bound of 1 is given only for pointer arithmetic in C++ (5.7 [expr.add] paragraph 4). C99 supplies similar wording for the relational and equality operators, explicitly allowing pointers resulting from such implicit-array treatment to be compared. C++ should follow suit.
Proposed resolution (August, 2013):
Change 5.3.1 [expr.unary.op] paragraph 3 as follows:
...Otherwise, if the type of the expression is T, the result has type “pointer to T” and is a prvalue that is the address of the designated object (1.7 [intro.memory]) or a pointer to the designated function. [Note: In particular, the address of an object of type “cv T” is “pointer to cv T”, with the same cv-qualification. —end note] For purposes of pointer arithmetic (5.7 [expr.add]) and comparison (5.9 [expr.rel], 5.10 [expr.eq]), an object that is not an array element whose address is taken in this way is considered to belong to an array with one element of type T. [Example:
struct A { int i; };
struct B : A { };
... &B::i ...      // has type int A::*
int a;
int* p1 = &a;
int* p2 = p1 + 1;  // Defined behavior
bool b = p2 > p1;  // Defined behavior, with value true
—end example] [Note: a pointer to member...
Delete 5.7 [expr.add] paragraph 4:
For the purposes of these operators, a pointer to a nonarray object behaves the same as a pointer to the first element of an array of length one with the type of the object as its element type.
Change 5.7 [expr.add] paragraph 5 as follows:
When an expression that has integral type is added to or subtracted from a pointer, the result has the type of the pointer operand. If the pointer operand points to an element of an array object [Footnote: An object that is not an array element is considered to belong to a single-element array for this purpose; see 5.3.1 [expr.unary.op] —end footnote], and the array is large enough, the result points to an element
Change 5.9 [expr.rel] paragraph 3 as follows:
Comparing pointers to objects [Footnote: An object that is not an array element is considered to belong to a single-element array for this purpose; see 5.3.1 [expr.unary.op] —end footnote] is defined as follows:...
[Drafting note: No change is proposed for 5.10 [expr.eq], since the comparison is phrased in terms of “same address”, not in terms of array elements, so the handling of one-past-the-end addresses falls out of the specification of pointer arithmetic.]
The final bullet of 5.16 [expr.cond] paragraph 3, describing the attempt to convert the operands of the conditional operator to the other operand's type as part of determining the type of the result, says,
Otherwise (i.e., if E1 or E2 has a nonclass type, or if they both have class types but the underlying classes are not either the same or one a base class of the other): E1 can be converted to match E2 if E1 can be implicitly converted to the type that expression E2 would have if E2 were converted to a prvalue (or the type it has, if E2 is a prvalue).
The phrase “if E2 were converted to a prvalue” is problematic if E2 has an array type. For example,
struct S {
S(const char *s);
operator const char *();
};
S s;
const char *f(bool b) {
return b ? s : ""; // #1
}
One might expect that the expression in #1 would be ambiguous, since S can be converted both to and from const char*. However, the target type for the conversion of s is const char[1], not const char*, so that conversion fails and the result of the conditional-expression has type S.
It might be better to specify the target type for this trial conversion to be the type after the usual lvalue-to-rvalue, array-to-pointer, and function-to-pointer conversions instead of simply the result of converting “to a prvalue.”
Proposed resolution (February, 2014):
Change the final subbullet of 5.16 [expr.cond] paragraph 3 as follows:
[Editorial note: this wording was approved by CWG, but I'd suggest an editorial change to “...or if both have class types but the underlying classes are not the same and neither is a base class of the other.” —wmm]...The process for determining whether an operand expression E1 of type T1 can be converted to match an operand expression E2 of type T2 is defined as follows:
...
If E2 is a prvalue or if neither of the conversions above can be done and at least one of the operands has (possibly cv-qualified) class type:
if E1 and E2 have class type...
Otherwise (i.e., if E1 or E2 has a nonclass type, or if they both have class types but neither are the underlying classes are not either the same or nor is one a base class of the other): E1 can be converted to match E2 if E1 can be implicitly converted to the type that expression E2 would have if E2 were converted to a prvalue (or the type it has, if E2 is a prvalue) after applying the lvalue-to-rvalue (4.1 [conv.lval]), array-to-pointer (4.2 [conv.array]), and function-to-pointer (4.3 [conv.func]) standard conversions.
Presumably the result of something like
b ? x : throw y
is a bit-field if x is, but the current wording does not say that.
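For example (illustrative only):
struct S { int bf : 3; };
S s;
int n = true ? s.bf : throw 0;   // presumably a bit-field result, as it would be if
                                 // both operands were the same bit-field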
Proposed resolution (February, 2014):
Change 5.16 [expr.cond] paragraph 2 as follows (this assumes the revised wording of the resolution of issue 1299 as the base text):
If either the second or the third operand has type void, one of the following shall hold:
The second or the third operand (but not both) is a (possibly parenthesized) throw-expression (15.1 [except.throw]); the result is of the type and value category of the other operand. The conditional-expression is a temporary expression if that operand is a temporary expression and is a bit-field if that operand is a bit-field.
...
We're missing a restriction on the value of a temporary which is bound to a static storage duration reference:
void f(int n) { static constexpr int *&&r = &n; }
This is currently valid, because &n is a core constant expression, and it is a constant expression because the reference binds to a temporary (of type int*) that has static storage duration (because it's lifetime-extended by the reference binding).
The value of r is constant here (it's a constant reference to a temporary with a non-constant initializer), but I don't think we should accept this. Generally, I think a temporary which is lifetime-extended by a constexpr variable should also be treated as if it were declared to be a constexpr object.
Proposed resolution (September, 2013) [SUPERSEDED]:
Change 5.19 [expr.const] paragraph 4 as follows:
A constant expression is either a glvalue core constant expression whose value refers to an object with static storage duration or to a function entity that is a permitted result of a constant expression, or a prvalue core constant expression whose value is an object where, for that object and its subobjects:
each non-static data member of reference type refers to an object with static storage duration or to a function entity that is a permitted result of a constant expression, and
if the object or subobject is of pointer type, it contains the address of an object with static storage duration, the address past the end of such an object (5.7 [expr.add]), the address of a function, or a null pointer value.
An entity is a permitted result of a constant expression if it is an object with static storage duration that is either not a temporary or is a temporary whose value satisfies the above constraints, or it is a function.
Proposed resolution (February, 2014):
Change 5.19 [expr.const] paragraph 4 as follows:
A constant expression is either a glvalue core constant expression whose value refers to an object with static storage duration or to a function entity that is a permitted result of a constant expression (as defined below), or a prvalue core constant expression whose value is an object where, for that object and its subobjects:
each non-static data member of reference type refers to an object with static storage duration or to a function entity that is a permitted result of a constant expression, and
if the object or subobject is of pointer type, it contains the address of an object with static storage duration, the address past the end of such an object (5.7), the address of a function, or a null pointer value.
An entity is a permitted result of a constant expression if it is an object with static storage duration that is either not a temporary object or is a temporary object whose value satisfies the above constraints, or it is a function.
The requirements for a constant expression in 5.19 [expr.const] permit an lvalue-to-rvalue conversion on
a non-volatile glvalue of integral or enumeration type that refers to a non-volatile const object with a preceding initialization, initialized with a constant expression
This does not exclude subobjects of objects that are not compile-time constants, for example:
int f();
struct S {
  S() : a(f()), b(5) {}
  int a, b;
};
const S s;
constexpr int k = s.b;
This rule is intended to provide backward compatibility with pre-constexpr C++, but it should be restricted to complete objects. Care should be taken in resolving this issue not to break the handling of string literals, since use of their elements in constant expressions depends on the current form of this rule.
Proposed resolution (February, 2014):
Change 5.19 [expr.const] paragraph 2 bullet 7 as follows:
A conditional-expression e is a core constant expression unless the evaluation of e, following the rules of the abstract machine (1.9 [intro.execution]), would evaluate one of the following expressions:
...
an lvalue-to-rvalue conversion (4.1 [conv.lval]) unless it is applied to
a non-volatile glvalue of integral or enumeration type that refers to a complete non-volatile const object with a preceding initialization, initialized with a constant expression [Note: a string literal (2.14.5 [lex.string]) corresponds to an array of such objects. —end note], or
a non-volatile glvalue that refers to a subobject of a string literal (2.14.5 [lex.string]), or
a non-volatile glvalue that refers to a non-volatile object defined with constexpr...
Although repeated type-specifiers such as const are forbidden, there is no such prohibition against repeated non-type specifiers like constexpr and virtual. Should there be?
On the “con” side, it's not clear that such a prohibition actually helps anyone; it could happen via macros, and a warning about non-macro use could be a QoI issue. Also, C99 moved in the opposite direction, removing the prohibition against repeated cv-qualifiers.
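For example (illustrative only):
constexpr constexpr int f();             // not currently prohibited
struct S { virtual virtual void g(); };  // likewise
const const int n = 0;                   // already ill-formed: repeated cv-qualifier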
Proposed resolution (February, 2014):
Add the following as a new paragraph before 7.1 [dcl.spec] paragraph 2:
Each decl-specifier shall appear at most once in the complete decl-specifier-seq of a declaration, except that long may appear twice.
If a type-name is encountered...
According to 7.1.1 [dcl.stc] paragraph 1,
...If thread_local appears in any declaration of a variable it shall be present in all declarations of that entity... A storage-class-specifier shall not be specified in an explicit specialization (14.7.3 [temp.expl.spec]) or an explicit instantiation (14.7.2 [temp.explicit]) directive.
These two requirements appear to be in conflict when an explicit instantiation or explicit specialization names a thread_local variable. For example,
template <class T> struct S {
  thread_local static int tlm;
};
template <> int S<int>::tlm = 0;
template <> thread_local int S<float>::tlm = 0;
which of the two explicit specializations is correct?
Proposed resolution (February, 2014):
Change 7.1.1 [dcl.stc] paragraph 1 as follows:
...A storage-class-specifier other than thread_local shall not be specified in an explicit specialization (14.7.3 [temp.expl.spec]) or an explicit instantiation (14.7.2 [temp.explicit]) directive.
According to 7.1.1 [dcl.stc] paragraph 9,
The mutable specifier can be applied only to names of class data members (9.2 [class.mem]) and cannot be applied to names declared const or static, and cannot be applied to reference members.
This is similar to issue 1686 in that the restriction appears to apply only to declarations in which the const keyword appears directly. It should instead apply to members with const-qualified types, regardless of how the qualification was achieved.
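For example (a sketch of the distinction):
typedef const int CI;
struct S {
  mutable CI m;   // not "declared const", yet its type is const-qualified;
                  // presumably intended to be ill-formed under the revised wording
};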
Proposed resolution (January, 2014) [SUPERSEDED]:
Change 7.1.1 [dcl.stc] paragraph 9 as follows:
The mutable specifier can be applied only to names of non-static class data members (9.2 [class.mem]) and cannot be applied to names declared const or static, and cannot be applied to reference members whose type is neither const-qualified nor a reference type. [Example:...
Proposed resolution (February, 2014):
Change 7.1.1 [dcl.stc] paragraph 9 as follows:
The mutable specifier can be applied shall appear only to names in the declaration of class a non-static data members member (9.2 [class.mem]) and cannot be applied to names declared const or static, and cannot be applied to reference members whose type is neither const-qualified nor a reference type. [Example:...
According to 7.1.2 [dcl.fct.spec] paragraph 4,
A string literal in the body of an extern inline function is the same object in different translation units.
The Standard does not otherwise specify when string literals are required to be the same object, and this requirement is not widely implemented. Should it be removed?
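For example (illustrative only), the requirement concerns cases such as:
// in a header included in multiple translation units
inline const char* tag() {
  return "tag";   // required by 7.1.2 [dcl.fct.spec] to be the same object in every
                  // translation unit, which is not widely implemented
}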
Proposed resolution (February, 2014):
Change 2.14.5 [lex.string] paragraph 1 as follows:
A string literal string-literal is a sequence of characters...
Change 2.14.5 [lex.string] paragraph 2 as follows:
A string literal string-literal that has an R in the prefix...
Change 2.14.5 [lex.string] paragraph 6 as follows:
After translation phase 6, a string literal string-literal that does not begin...
Change 2.14.5 [lex.string] paragraph 7 as follows:
A string literal string-literal that begins with u8...
Change 2.14.5 [lex.string] paragraph 10 as follows:
A string literal string-literal that begins with u, such as u"asdf", is a char16_t string literal. A char16_t string literal has type “array of n const char16_t”, where n is the size of the string as defined below; it has static storage duration and is initialized with the given characters. A single c-char may produce more than one char16_t character in the form of surrogate pairs.
Change 2.14.5 [lex.string] paragraph 11 as follows:
A string literal string-literal that begins with U, such as U"asdf", is a char32_t string literal. A char32_t string literal has type “array of n const char32_t”, where n is the size of the string as defined below; it has static storage duration and is initialized with the given characters.
Change 2.14.5 [lex.string] paragraph 12 as follows:
A string literal string-literal that begins with L, such as L"asdf", is a wide string literal. A wide string literal has type “array of n const wchar_t”, where n is the size of the string as defined below; it has static storage duration and is initialized with the given characters.
Delete 2.14.5 [lex.string] paragraph 13:
Whether all string literals are distinct (that is, are stored in nonoverlapping objects) is implementation-defined. The effect of attempting to modify a string literal is undefined.
Change 2.14.5 [lex.string] paragraph 14 as follows:
In translation phase 6 (2.2 [lex.phases]), adjacent string literals string-literals are concatenated. If both string literals string-literals have the same encoding-prefix, the resulting concatenated string literal has that encoding-prefix. If one string literal string-literal has no encoding-prefix, it is treated as a string literal string-literal of the same encoding-prefix as the other operand. If a UTF-8 string literal token is adjacent to a wide string literal token, the program is ill-formed. Any other concatenations are conditionally-supported with implementation-defined behavior. [Note: This concatenation is an interpretation, not a conversion. Because the interpretation happens in translation phase 6 (after each character from a literal has been translated into a value from the appropriate character set), a string literal string-literal's initial rawness has no effect on the interpretation or well-formedness of the concatenation. —end note] Table 8...
Add the following as a new paragraph at the end of 2.14.5 [lex.string]:
Evaluating a string-literal results in a string literal object with static storage duration, initialized from the given characters as specified above. Whether all string literals are distinct (that is, are stored in nonoverlapping objects) and whether successive evaluations of a string-literal yield the same or a different object is unspecified. [Note: The effect of attempting to modify a string literal is undefined. —end note]
Change 7.1.2 [dcl.fct.spec] paragraph 4 as follows:
...A static local variable in an extern inline function always refers to the same object. A string literal in the body of an extern inline function is the same object in different translation units. [Note: A string literal appearing in a default argument is not in the body of an inline function merely because the expression is used in a function call from that inline function. —end note] A type defined within the body of an extern inline function is the same type in every translation unit.
Additional note, February, 2014:
Two editorial changes have been made since CWG approved the proposed resolution:
The deletion of the requirement in 7.1.2 [dcl.fct.spec] paragraph 4 that string literals in inline functions be the same made the note following that requirement irrelevant, so the deletion has been extended to include the note as well.
The issue has been returned to "review" status to allow possible reconsideration of these editorial changes.
According to 7.1.5 [dcl.constexpr] paragraph 1,
If any declaration of a function, function template, or variable template has a constexpr specifier, then all its declarations shall contain the constexpr specifier.
This requirement does not make sense applied to variable templates. The constexpr specifier requires that there be an initializer, and a variable template declaration with an initializer is a definition, so there cannot be more than one declaration of a variable template with the constexpr specifier.
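For example (illustrative only):
template<typename T> constexpr T pi = T(3.1415926535897932385);
// constexpr requires an initializer, and a variable template declaration with an
// initializer is a definition, so there can be no second declaration to which the
// requirement could apply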
Proposed resolution (February, 2014):
Change 7.1.5 [dcl.constexpr] paragraph 1 as follows:
...If any declaration of a function, or function template, or variable template has a constexpr specifier, then all its declarations shall contain the constexpr specifier. [Note:...
An example like
struct X { std::unique_ptr<int> p; constexpr X() { } };
is ill-formed because the X constructor cannot be used in a constant expression, because a constant expression cannot construct an object of a non-literal type like unique_ptr. This prevents use of something like
X x;
to guarantee constant-initialization.
Proposed resolution (June, 2014):
Change 7.1.5 [dcl.constexpr] paragraph 5 as follows:
For a non-template, non-defaulted constexpr function or a non-template, non-defaulted, non-inheriting constexpr constructor, if no argument values exist such that an invocation of the function or constructor could be an evaluated subexpression of a core constant expression (5.19 [expr.const]), or, for a constructor, a constant initializer for some object (3.6.2 [basic.start.init]), the program is ill-formed; no diagnostic required.
The phrase “top-level cv-qualifier” is used numerous times in the Standard, but it is not defined. The phrase could be misunderstood to indicate that the const in something like const T& is at the “top level,” because where it appears is the highest level at which it is permitted: T& const is ill-formed.
Proposed resolution (February, 2014):
Change 3.9.3 [basic.type.qualifier] paragraph 5 as follows, splitting it into two paragraphs:
In this International Standard, the notation cv (or cv1, cv2, etc.), used in the description of types, represents an arbitrary set of cv-qualifiers, i.e., one of {const}, {volatile}, {const, volatile}, or the empty set. For a type cv T, the top-level cv-qualifiers of that type are those denoted by cv. [Example: The type corresponding to the type-id “const int&” has no top-level cv-qualifiers. The type corresponding to the type-id “volatile int * const” has the top-level cv-qualifier const. For a class type C, the type corresponding to the type-id “void (C::* volatile)(int) const” has the top-level cv-qualifier volatile. —end example]
Cv-qualifiers applied to an array type attach...
The example in 7.1.6.2 [dcl.type.simple] paragraph 4 reads, in part,
const int&& foo();
int i;
decltype(foo()) x1 = i; // type is const int&&
The initialization is an ill-formed attempt to bind an rvalue reference to an lvalue.
Proposed resolution (April, 2013):
Change the example in 7.1.6.2 [dcl.type.simple] paragraph 4 as follows:
const int&& foo();
int i;
struct A { double x; };
const A* a = new A();
decltype(foo()) x1 = i 17;   // type is const int&&
decltype(i) x2;              // type is int
decltype(a->x) x3;           // type is double
decltype((a->x)) x4 = x3;    // type is const double&
According to 7.1.6.2 [dcl.type.simple] paragraph 2,
The auto specifier is a placeholder for a type to be deduced (7.1.6.4 [dcl.spec.auto]).
This is not true when auto appears in the decltype(auto) construct.
On a slightly related wording issue, 7.1.6.4 [dcl.spec.auto] paragraph 2 says,
The auto and decltype(auto) type-specifiers designate a placeholder type that will be replaced later, either by deduction from an initializer or by explicit specification with a trailing-return-type.
This could be read as implying that decltype(auto) can be used to introduce a function with a trailing-return-type, contradicting 8.3.5 [dcl.fct] paragraph 2, which requires that a function declarator with a trailing-return-type must have auto as the sole type specifier.
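For illustration:

auto f() -> int;             // OK: auto with a trailing-return-type
decltype(auto) g() -> int;   // ill-formed per 8.3.5 [dcl.fct] paragraph 2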
Proposed resolution (February, 2014):
Change 7.1.6.2 [dcl.type.simple] paragraph 2 as follows:
The simple-type-specifier auto specifier is a placeholder for a type to be deduced (7.1.6.4 [dcl.spec.auto]). The other simple-type-specifiers...
Change 7.1.6.4 [dcl.spec.auto] paragraph 1 as follows:
The auto and decltype(auto) type-specifiers are used to designate a placeholder type that will be replaced later, either by deduction from an initializer or by explicit specification with a trailing-return-type. The auto type-specifier is also used to introduce a function type having a trailing-return-type or to signify that a lambda is a generic lambda.
Return type deduction from a return statement with no expression is described in 7.1.6.4 [dcl.spec.auto] paragraph 7 as follows:
When a variable declared using a placeholder type is initialized, or a return statement occurs in a function declared with a return type that contains a placeholder type, the deduced return type or variable type is determined from the type of its initializer. In the case of a return with no operand, the initializer is considered to be void(). Let T be the declared type of the variable or return type of the function. If the placeholder is the auto type-specifier, the deduced type is determined using the rules for template argument deduction. If the deduction is for a return statement and the initializer is a braced-init-list (8.5.4 [dcl.init.list]), the program is ill-formed. Otherwise, obtain P from T by replacing the occurrences of auto with either a new invented type template parameter U or, if the initializer is a braced-init-list, with std::initializer_list<U>. Deduce a value for U using the rules of template argument deduction from a function call (14.8.2.1 [temp.deduct.call]), where P is a function template parameter type and the initializer is the corresponding argument.
However, this does not work: the deduction for an argument of void() would give a parameter type of void and be ill-formed. It would be better simply to say that the deduced type in this case is void.
In a related example, consider
decltype(auto) f(void *p) { return *p; }
This is presumably an error because decltype(*p) would be void&, which is ill-formed. Perhaps this case should be mentioned explicitly.
Notes from the June, 2014 meeting:
The last part of the issue is not a defect, because the unary * operator requires its operand to be a pointer to an object or function type, and void is neither, so the expression is ill-formed and deduction does not occur for that case.
It was also observed during the discussion that the same deduction problem occurs when returning an expression of type void as when the expression is omitted, so the resolution should cover both cases.
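For illustration, both of the cases mentioned above:

void g();
auto f1() { return; }       // operand omitted
auto f2() { return g(); }   // operand of type void; same deduction problem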
Proposed resolution (June, 2014):
Change 7.1.6.4 [dcl.spec.auto] paragraph 7 as follows:
When a variable declared using a placeholder type is initialized, or a return statement occurs in a function declared with a return type that contains a placeholder type, the deduced return type or variable type is determined from the type of its initializer. In the case of a return with no operand or with an operand of type void, the initializer declared return type shall be auto and the deduced return type is void considered to be void(). Let Otherwise, let T be the declared type...
There appear to be no restrictions against using the auto specifier in examples like the following:
template<typename T> using X = T;
X<auto()> f_with_deduced_return_type;   // ok
std::vector<auto(*)()> v;               // ok?!
void f(auto (*)());                     // ok?!
Proposed resolution (June, 2014):
Change 7.1.6.4 [dcl.spec.auto] paragraph 2 as follows:
The placeholder type can appear with a function declarator in the decl-specifier-seq, type-specifier-seq, conversion-function-id, or trailing-return-type, in any context where such a declarator is valid. If the function declarator includes a trailing-return-type (8.3.5 [dcl.fct]), that specifies the declared return type of the function. Otherwise, the function declarator shall declare a function. If the declared return type of the function contains a placeholder type, the return type of the function is deduced from return statements in the body of the function, if any.
Although issue 1094 clarified that the value of an expression of enumeration type might not be within the range of the values of the enumeration after a conversion to the enumeration type (see 5.2.9 [expr.static.cast] paragraph 10), the result is simply an unspecified value. This should probably be strengthened to produce undefined behavior, in light of the fact that undefined behavior makes an expression non-constant. See also 9.6 [class.bit] paragraph 4.
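For illustration (a hypothetical example, assuming an enumeration without a fixed underlying type):

enum E { a, b, c };         // range of values is [0, 3]
E e1 = static_cast<E>(2);   // OK: within the range of the enumeration
E e2 = static_cast<E>(8);   // currently an unspecified value; undefined
                            // behavior under the proposed resolution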
Proposed resolution (February, 2014):
Change 5.2.9 [expr.static.cast] paragraph 10 as follows:
A value of integral or enumeration type can be explicitly converted to an enumeration type. The value is unchanged if the original value is within the range of the enumeration values (7.2 [dcl.enum]). Otherwise, the resulting value is unspecified (and might not be in that range) behavior is undefined. A value of floating-point type...
According to 7.5 [dcl.link] paragraph 6,
An entity with C language linkage shall not be declared with the same name as an entity in global scope, unless both declarations denote the same entity; no diagnostic is required if the declarations appear in different translation units.
This restriction is too broad; it does not allow for the so-called stat hack, where a C-linkage function and a class are both declared in global scope, and it does not allow for function overloading, either. It should be revised to apply only to variables.
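A sketch of the “stat hack” pattern referred to above, using the traditional POSIX names:

struct stat { /* ... */ };                        // class in global scope
extern "C" int stat(const char*, struct stat*);   // C-linkage function with the same name;
                                                  // ill-formed under the current wording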
Additional note (February, 2014):
See also issue 1838 for an interaction with using-declarations.
Proposed resolution (February, 2014):
Change 7.5 [dcl.link] paragraph 6 as follows:
...An entity with C language linkage shall not be declared with the same name as an entity a variable in global scope, unless both declarations denote the same entity; no diagnostic is required if the declarations appear in different translation units...
Additional note, May, 2014:
It was observed that this resolution would allow a definition of main as a C-linkage variable in a namespace. The issue is being returned to "review" status for further discussion.
According to 7 [dcl.dcl] paragraph 2,
Unless otherwise stated, utterances in Clause 7 [dcl.dcl] about components in, of, or contained by a declaration or subcomponent thereof refer only to those components of the declaration that are not nested within scopes nested within the declaration.
This contradicts the intent of 7.5 [dcl.link] paragraph 4, which says,
In a linkage-specification, the specified language linkage applies to the function types of all function declarators, function names with external linkage, and variable names with external linkage declared within the linkage-specification.
Also, one of the comments in the example in paragraph 4 is inconsistent with the intent:
extern "C" { static void f4(); // the name of the function f4 has // internal linkage (not C language // linkage) and the function's type // has C language linkage. } extern "C" void f5() { extern void f4(); // OK: Name linkage (internal) // and function type linkage (C // language linkage) gotten from // previous declaration. }
The language linkage for the block-scope declaration of f4 is presumably determined by the fact that it appears in a C-linkage function, not by the previous declaration.
Proposed resolution (February, 2014):
Change 7.5 [dcl.link] paragraph 4 as follows:
Linkage specifications nest. When linkage specifications nest, the innermost one determines the language linkage. A linkage specification does not establish a scope. A linkage-specification shall occur only in namespace scope (3.3 [basic.scope]). In a linkage-specification, the specified language linkage applies to the function types of all function declarators, function names with external linkage, and variable names with external linkage declared within the linkage-specification, including those appearing in scopes nested inside the linkage specification and not inside a nested linkage-specification. [Example:
...
extern "C" { static void f4(); // the name of the function f4 has // internal linkage (not C language // linkage) and the function's type // has C language linkage. } extern "C" void f5() { extern void f4(); // OK: Name linkage (internal) // and function type linkage (C // language linkage) gotten from // previous declaration.; function type // linkage (C language // linkage) gotten // from linkage specification }
According to 7.6.2 [dcl.align] paragraph 5,
The combined effect of all alignment-specifiers in a declaration shall not specify an alignment that is less strict than the alignment that would be required for the entity being declared if all alignment-specifiers were omitted (including those in other declarations).
Presumably the intent was “other declarations of the same entity,” but the wording as written could be read to make the following example well-formed (assuming alignof(int) is 4):
struct alignas(4) A { alignas(8) int n; };
struct alignas(8) B { char c; };
alignas(1) B b;
struct alignas(1) C : B {};
enum alignas(8) E : int { k };
alignas(4) E e = k;
Proposed resolution (February, 2014):
Change 7.6.2 [dcl.align] paragraph 5 as follows:
...if all alignment-specifiers appertaining to that entity were omitted (including those in other declarations). [Example:
struct alignas(8) S {};
struct alignas(1) U { S s; };   // Error: U specifies an alignment that is less strict than
                                // if the alignas(1) were omitted.
—end example]
EDG rejects this code:
template <typename T> struct S {};
void f (S<int (*)[]>);

G++ accepts it.
This is another case where the standard isn't very clear:
The language from 8.3.5 [dcl.fct] is:
If the type of a parameter includes a type of the form "pointer to array of unknown bound of T" or "reference to array of unknown bound of T," the program is ill-formed.

Since "includes a type" is not a term defined in the standard, we're left to guess what this means. (It would be better if this were a recursive definition, the way a type theoretician would do it:
Notes from April 2003 meeting:
We agreed that the example should be allowed.
Additional note (January, 2013):
Additional discussion of this issue has arisen. For example, the following is permissible:
T (*p) [] = (U(*)[])0;
but the following is not:
template<class T> void sp_assert_convertible( T* ) {} sp_assert_convertible<T[]>( (U(*)[])0 );
Proposed resolution (February, 2014):
Change 8.3.5 [dcl.fct] paragraph 8 as follows, including deleting the footnote:
If the type of a parameter includes a type of the form “pointer to array of unknown bound of T” or “reference to array of unknown bound of T,” the program is ill-formed.101 Functions shall not have a return type of type array or function, although...
Consider the following example:
template<typename T> struct A { T t; };
struct S { A<S> f() { return A<S>(); } };
According to 8.3.5 [dcl.fct] paragraph 9,
The type of a parameter or the return type for a function definition shall not be an incomplete class type (possibly cv-qualified) unless the function is deleted (8.4.3 [dcl.fct.def.delete]) or the definition is nested within the member-specification for that class (including definitions in nested classes defined within the class).
Thus type A<S> must be a complete type. The requirement for a complete type triggers the instantiation of the template, which requires that its template argument be complete in order to use it as the type of a non-static data member.
According to 14.6.4.1 [temp.point] paragraph 4, the point of instantiation of A<S> is “immediately preced[ing] the namespace scope declaration or definition that refers to the specialization.” Thus the point of instantiation precedes the definition of S, making this example ill-formed. Most or all current implementations accept the example, however.
Perhaps the specification in 8.3.5 [dcl.fct] ought to say that the completeness of the type is checked from the context of the function body (at which S is a complete type)?
Proposed resolution (February, 2014):
Change 8.3.5 [dcl.fct] paragraph 9 as follows:
Types shall not be defined in return or parameter types. The type of a parameter or the return type for a function definition shall not be an incomplete class type (possibly cv-qualified) in the context of the function definition unless the function is deleted (8.4.3 [dcl.fct.def.delete]) or the definition is nested within the member-specification for that class (including definitions in nested classes defined within the class).
The resolution for issue 974 permitting default arguments in lambda-expressions overlooked 8.3.6 [dcl.fct.default] paragraph 3:
A default argument shall be specified only in the parameter-declaration-clause of a function declaration or in a template-parameter (14.1 [temp.param])...
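For illustration, the kind of lambda permitted by the resolution of issue 974:

auto lam = [](int i = 42) { return i; };   // default argument in a lambda-declarator
int r = lam();                             // r == 42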
Proposed resolution (February, 2014):
Change 8.3.6 [dcl.fct.default] paragraph 3 as follows:
A default argument shall be specified only in the parameter-declaration-clause of a function declaration or lambda-declarator or in a template-parameter (14.1 [temp.param]); in the latter case, the initializer-clause shall be an assignment-expression. A default argument shall not be specified for a parameter pack. If it is specified in a parameter-declaration-clause, it shall not occur within a declarator or abstract-declarator of a parameter-declaration.103
Paragraph 5 of 8.4.1 [dcl.fct.def.general] says,
A cv-qualifier-seq or a ref-qualifier (or both) can be part of a non-static member function declaration, non-static member function definition, or pointer to member function only (8.3.5 [dcl.fct]); see 9.3.2 [class.this].
This is redundant with the specification in 8.3.5 [dcl.fct] paragraph 6 and is factually incorrect, since the list there contains other permissible constructs. It should be at most a note or possibly removed altogether.
Proposed resolution (February, 2014):
Change 8.4.1 [dcl.fct.def.general] paragraph 5 as follows:
A cv-qualifier-seq or a ref-qualifier (or both) can be part of a non-static member function declaration, non-static member function definition, or pointer to member function only (8.3.5 [dcl.fct]); see 9.3.2 [class.this]. [Note: a cv-qualifier-seq affects the type of this in the body of a member function; see 8.3.2 [dcl.ref]. —end note]
The current wording of 8.4.2 [dcl.fct.def.default] paragraph 2 has some surprising implications:
An explicitly-defaulted function may be declared constexpr only if it would have been implicitly declared as constexpr, and may have an explicit exception-specification only if it is compatible (15.4 [except.spec]) with the exception-specification on the implicit declaration.
In an example like
struct A { A& operator=(A&); }; A& A::operator=(A&) = default;
presumably the exception-specification of A::operator=(A&) is noexcept(false). However, attempting to make that exception-specification explicit,
A& A::operator=(A&) noexcept(false) = default;
is an error. Is this intentional?
Proposed resolution (February, 2014):
Change 15.4 [except.spec] paragraph 4 as follows:
...If any declaration of a pointer to function, reference to function, or pointer to member function has an exception-specification, all occurrences of that declaration shall have a compatible exception-specification. If a declaration of a function has an implicit exception-specification, other declarations of the function shall not specify an exception-specification. In an explicit instantiation...
(This resolution also resolves issue 1492.)
Additional note (January, 2013):
The resolution conflicts with the current specification of operator delete: in 3.7.4 [basic.stc.dynamic] paragraph 2, the two operator delete overloads are declared with an implicit exception specification, while in 18.6 [support.dynamic] paragraph 1, they are declared as noexcept.
Additional note (February, 2014):
The overloads cited in the preceding note have been independently changed in N3936 to include a noexcept specification, making the proposed resolution correct as it stands.
According to 8.4.2 [dcl.fct.def.default] paragraph 2,
An explicitly-defaulted function may be declared constexpr only if it would have been implicitly declared as constexpr.
However, the rules for determining whether a function is constexpr and its exception-specification depend on the definition of function and do not apply to deleted functions. Can an explicitly-defaulted implicitly-deleted function be declared constexpr or have an exception-specification, and if so, how is its correctness to be determined?
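A hypothetical example of the situation described:

struct A { A(const A&) = delete; };
struct B {
  A a;
  constexpr B(const B&) = default;   // explicitly defaulted, defined as deleted;
                                     // may it be declared constexpr?
};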
Proposed resolution (February, 2014):
Change 8.4.2 [dcl.fct.def.default] paragraph 2 as follows:
An explicitly-defaulted function that is not defined as deleted may be declared constexpr only if it would have been implicitly declared as constexpr.
According to 8.5 [dcl.init] paragraph 16,
The initialization that occurs in the forms
T x(a);
T x{a};
as well as in new expressions (5.3.4 [expr.new]), static_cast expressions (5.2.9 [expr.static.cast]), functional notation type conversions (5.2.3 [expr.type.conv]), and base and member initializers (12.6.2 [class.base.init]) is called direct-initialization.
This wording was overlooked when brace-or-equal-initializers were added to the language, permitting copy-initialization of members by use of the = form.
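For illustration:

struct S {
  int i = 42;   // the = form of a brace-or-equal-initializer; intended to be
                // copy-initialization under the proposed wording
};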
Proposed resolution (April, 2013):
Change 8.5 [dcl.init] paragraphs 15-16 as follows, removing the example in paragraph 15 and making it a single running sentence:
The initialization that occurs in the = form of a brace-or-equal-initializer or condition (6.4 [stmt.select]),
T x = a;
as well as in argument passing, function return, throwing an exception (15.1 [except.throw]), handling an exception (15.3 [except.handle]), and aggregate member initialization (8.5.1 [dcl.init.aggr]), is called copy-initialization. [Note: Copy-initialization may invoke a move (12.8). —end note]
The initialization that occurs in the forms
T x(a);
T x{a};
as well as in new expressions (5.3.4 [expr.new]), static_cast expressions (5.2.9), functional notation type conversions (5.2.3 [expr.type.conv]), and base and member initializers mem-initializers (12.6.2 [class.base.init]), and the braced-init-list form of a condition is called direct-initialization.
According to 8.5 [dcl.init] paragraph 14,
The form of initialization (using parentheses or =) is generally insignificant, but does matter when the initializer or the entity being initialized has a class type; see below.
This does not consider conversions from std::nullptr_t to bool, which are permitted only for direct-initialization (4.12 [conv.bool]).
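For illustration of the distinction described above:

bool b1(nullptr);    // OK: direct-initialization
bool b2 = nullptr;   // ill-formed: the conversion is permitted only for direct-initialization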
Proposed resolution (February, 2014):
Change 8.5 [dcl.init] paragraph 14 as follows:
The form of initialization (using parentheses or =) is generally insignificant, but does matter when the initializer or the entity being initialized has a class type; see below. If the entity being initialized...
In the case of indirect reference binding, 8.5.3 [dcl.init.ref] paragraph 5 only requires that the cv-qualification of the referred-to type be the same or greater than that of the initializer expression when the types are reference-related. This leads to the following anomaly:
class A {
public:
  operator volatile int &();
};
A a;

const int & ir1a = a.operator volatile int&();   // error!
const int & ir2a = a;                            // allowed! ir = a.operator volatile int&();
Is this intended?
Notes from the April, 2013 meeting:
CWG felt that the declaration of ir2a should also be an error.
Proposed resolution (February, 2014):
Change 8.5.3 [dcl.init.ref] paragraph 5 as follows:
A reference to type “cv1 T1” is initialized by an expression of type “cv2 T2” as follows:
If the reference is an lvalue reference...
Otherwise, the reference shall be an lvalue reference to a non-volatile const type...
If the initializer expression
is an xvalue (but not a bit-field), class prvalue, array prvalue or function lvalue and “cv1 T1” is reference-compatible with “cv2 T2”, or
has a class type (i.e., T2 is a class type), where T1 is not reference-related to T2, and can be converted to an xvalue, class prvalue, or function lvalue of type “cv3 T3”, where “cv1 T1” is reference-compatible with “cv3 T3” (see 13.3.1.6 [over.match.ref]),
then the reference is bound to the value of the initializer expression in the first case and to the result of the conversion in the second case (or, in either case, to an appropriate base class subobject). In the second case, if the reference is an rvalue reference and the second standard conversion sequence of the user-defined conversion sequence includes an lvalue-to-rvalue conversion, the program is ill-formed. [Example:
struct A { };
struct B : A { } b;
extern B f();
const A& rca2 = f();   // bound to the A subobject of the B rvalue.
A&& rra = f();         // same as above
struct X { operator B(); operator int&(); } x;
const A& r = x;        // bound to the A subobject of the result of the conversion
int i2 = 42;
int&& rri = static_cast<int&&>(i2);   // bound directly to i2
B&& rrb = x;           // bound directly to the result of operator B
int&& rri2 = X();      // error: lvalue-to-rvalue conversion applied to the
                       // result of operator int&
—end example]
Otherwise:
If T1 or T2 is a class type and T1 is not reference-related to T2, user-defined conversions are considered using the rules for copy-initialization of an object of type “cv1 T1” by user-defined conversion (8.5 [dcl.init], 13.3.1.4 [over.match.copy], 13.3.1.5 [over.match.conv]); the program is ill-formed if the corresponding non-reference copy-initialization would be ill-formed. The result of the call to the conversion function, as described for the non-reference copy-initialization, is then used to direct-initialize the reference. The program is ill-formed if the direct-initialization does not result in a direct binding or if it involves a user-defined conversion. For this direct-initialization, user-defined conversions are not considered.
If T1 is a non-class type Otherwise, a temporary of type “cv1 T1” is created and copy-initialized (8.5 [dcl.init]) from the initializer expression. The reference is then bound to the temporary.
If T1 is reference-related to T2:
cv1 shall be the same cv-qualification as, or greater cv-qualification than, cv2; and
if the reference is an rvalue reference, the initializer expression shall not be an lvalue.
[Example:
struct Banana { };
struct Enigma { operator const Banana(); };
struct Alaska { operator Banana&(); };
void enigmatic() {
  typedef const Banana ConstBanana;
  Banana &&banana1 = ConstBanana();   // ill-formed
  Banana &&banana2 = Enigma();        // ill-formed
  Banana &&banana2 = Alaska();        // ill-formed
}
const double& rcd2 = 2;       // rcd2 refers to temporary with value 2.0
double&& rrd = 2;             // rrd refers to temporary with value 2.0
const volatile int cvi = 1;
const int& r2 = cvi;          // error: type qualifiers dropped
struct A { operator volatile int&(); } a;
const int& r3 = a;            // error: type qualifiers dropped from result of conversion function
double d2 = 1.0;
double&& rrd2 = d2;           // error: copying initializer is lvalue of related type
struct X { operator int&(); };
int && rri2 = X();            // error: result of conversion function is lvalue of related type
int i3 = 2;
double&& rrd3 = i3;           // rrd3 refers to temporary with value 2.0
—end example]
This resolution also resolves issue 1572.
The example just before the final bullet of 8.5.4 [dcl.init.list] paragraph 5 is incorrect. It reads, in part,
struct X { operator int&(); } x;
int&& rri2 = X();   // error: lvalue-to-rvalue conversion applied to the
                    // result of operator int&
In fact, according to 13.3.1.6 [over.match.ref] (as clarified by the proposed resolution of issue 1328, although the intent was arguably the same for the previous wording), X::operator int&() is not a candidate for the initialization of rri2, so the case falls into the last bullet, creating an int temporary.
It is not clear whether the lvalue-to-rvalue conversion whose prohibition is intended to be illustrated by that example could actually occur, given the specification of candidate functions in 13.3.1.6 [over.match.ref].
Proposed resolution (February, 2014):
This issue is resolved by the resolution of issue 1571.
The current list-initialization rules do not provide for list-initialization of an aggregate from an object of the same type:
struct X {
  X() = default;
  X(const X&) = default;
#ifdef OK
  X(int) { }
#endif
};
X x;
X x2{x};   // error, {x} is not a valid aggregate initializer for X
Suggested resolution:
Change 8.5.4 [dcl.init.list] paragraph 3 as follows:
List-initialization of an object or reference of type T is defined as follows:
If T is a class type and the initializer list has a single element of type cv T or a class type derived from T, the object is initialized from that element.
If Otherwise, if T is an aggregate...
Additional notes (September, 2012):
(See messages 22368, 22371 through 22373, 22388, and 22494.)
It appears that 13.3.3.1.5 [over.ics.list] will also need to be updated in parallel with this change. Alternatively, it may be better to change 8.5.1 [dcl.init.aggr] instead of 8.5.4 [dcl.init.list] and 13.3.3.1.5 [over.ics.list].
In a related note, given
struct NonAggregate {
NonAggregate() {}
};
struct WantsIt {
WantsIt(NonAggregate);
};
void f(NonAggregate n);
void f(WantsIt);
int main() {
NonAggregate n;
// ambiguous!
f({n});
}
13.3.3.1.5 [over.ics.list] paragraph 3 says that the call to f(NonAggregate) is a user-defined conversion, the same as the call to f(WantsIt) and thus ambiguous. Also,
NonAggregate n;        // #1 (n -> NonAggregate = Identity conversion)
NonAggregate m{n};     // #2 ({n} -> NonAggregate = User-defined conversion}
                       // (copy-ctor not considered according to 13.3.3.1 [over.best.ics] paragraph 4)
NonAggregate m{{n}};
Finally, the suggested resolution simply says “initialized from,” without specifying whether that means direct initialization or copy initialization. It should be explicit about which is intended, e.g., if it reflects the kind of list-initialization being done.
Proposed resolution (February, 2014) [SUPERSEDED]:
Change 8.5.4 [dcl.init.list] paragraph 3 as follows:
List-initialization of an object or reference of type T is defined as follows:
If T is a class type and the initializer list has a single element of type cv U, where U is T or a class derived from T, the object is initialized from that element (by copy-initialization for copy-list-initialization, or by direct-initialization for direct-list-initialization).
Otherwise, if T is a character array and the initializer list has a single element that is an appropriately typed string literal (8.5.2 [dcl.init.string]), initialization is done as described in that section.
If Otherwise, if T is an aggregate...
Delete the final bullet of 13.3.3.1 [over.best.ics] paragraph 4, as follows:
However, if the target is
the first parameter of a constructor or
the implicit object parameter of a user-defined conversion function
and the constructor or user-defined conversion function is a candidate by
13.3.1.3 [over.match.ctor], when the argument is the temporary in the second step of a class copy-initialization, or
13.3.1.4 [over.match.copy], 13.3.1.5 [over.match.conv], or 13.3.1.6 [over.match.ref] (in all cases), or
the second phase of 13.3.1.7 [over.match.list] when the initializer list has exactly one element, and the target is the first parameter of a constructor of class X, and the conversion is to X or reference to (possibly cv-qualified) X,
user-defined conversion sequences are not considered. [Note:...
Insert the following two paragraphs between 13.3.3.1.5 [over.ics.list] paragraphs 1 and 2, moving the footnote from the current paragraph 3 to the second inserted paragraph:
When an argument is an initializer list (8.5.4 [dcl.init.list]), it is not an expression and special rules apply for converting it to a parameter type.
If the parameter type is a class C and the initializer list has a single element of type cv U, where U is C or a class derived from C, the implicit conversion sequence is the one required to convert the element to the parameter type.
Otherwise, if the parameter type is a character array [Footnote: Since there are no parameters of array type, this will only occur as the underlying type of a reference parameter. —end footnote] and the initializer list has a single element that is an appropriately typed string literal (8.5.2 [dcl.init.string]), the implicit conversion is the identity conversion.
If Otherwise, if the parameter type is std::initializer_list<X> and...
Otherwise, if the parameter type is “array of N X” [Footnote: ... —end footnote], if the initializer list has...
Change 13.3.3.1.5 [over.ics.list] paragraph 7 as follows:
Otherwise, if the parameter type is not a class:
if the initializer list has one element that is not itself an initializer list, the implicit conversion sequence is the one required to convert the element to the parameter type; [Example:...
...
Change 13.3.3.2 [over.ics.rank] paragraph 3 as follows:
Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of the following rules applies:
...
List-initialization sequence L1 is a better conversion sequence than list-initialization sequence L2 if
L1 converts to std::initializer_list<X> for some X and L2 does not, or, if not that,
L1 converts to type “array of N1 T”, L2 converts to type “array of N2 T”, and N1 is smaller than N2.,
even if one of the above rules would otherwise apply. [Example:
void f1(int);                                   // #1
void f1(std::initializer_list<long>);           // #2
void g1() { f1({42}); }                         // chooses #2

void f2(std::pair<const char*, const char*>);   // #3
void f2(std::initializer_list<std::string>);    // #4
void g2() { f2({"foo","bar"}); }                // chooses #4
—end example]
This resolution also resolves issues 1490, 1589, and 1631.
Notes from the February, 2014 meeting:
The resolution above does not adequately address the related issue 1758. It appears that conversion functions and constructors must be handled separately.
Proposed resolution (June, 2014):
Change 8.5.4 [dcl.init.list] paragraph 3 as follows:
List-initialization of an object or reference of type T is defined as follows:
If T is a class type and the initializer list has a single element of type cv U, where U is T or a class derived from T, the object is initialized from that element (by copy-initialization for copy-list-initialization, or by direct-initialization for direct-list-initialization).
Otherwise, if T is a character array and the initializer list has a single element that is an appropriately-typed string literal (8.5.2 [dcl.init.string]), initialization is performed as described in that section.
If Otherwise, if T is an aggregate,
Otherwise, if the initializer list has no elements...
Otherwise, if T is a specialization of std::initializer_list<E>...
Otherwise, if T is a class type...
Otherwise, if the initializer list has a single element of type E and either T is not a reference type or its referenced type is reference-related to E, the object or reference is initialized from that element (by copy-initialization for copy-list-initialization, or by direct-initialization for direct-list-initialization); if a narrowing conversion (see below) is required to convert the element to T, the program is ill-formed. [Example:...
Otherwise...
Change 13.3.1.7 [over.match.list] paragraph 1 as follows:
When objects of non-aggregate class type T are list-initialized (8.5.4 [dcl.init.list]) such that 8.5.4 [dcl.init.list] specifies that overload resolution is performed according to the rules in this section, overload resolution selects the constructor...
Change 13.3.3.1 [over.best.ics] paragraph 4 as follows:
...and the constructor or user-defined conversion function is a candidate by
13.3.1.3 [over.match.ctor], when the argument is the temporary in the second step of a class copy-initialization, or
13.3.1.4 [over.match.copy], 13.3.1.5 [over.match.conv], or 13.3.1.6 [over.match.ref] (in all cases), or
the second phase of 13.3.1.7 [over.match.list] when the initializer list has exactly one element, and the target is the first parameter of a constructor of class X, and the conversion is to X or reference to (possibly cv-qualified) X,
user-defined conversion sequences are not considered.
Change 13.3.3.1.5 [over.ics.list] paragraphs 1-2 as follows, moving the footnote from paragraph 3:
When an argument is an initializer list (8.5.4 [dcl.init.list]), it is not an expression and special rules apply for converting it to a parameter type.
If the parameter type is a class X and the initializer list has a single element of type cv U, where U is X or a class derived from X, the implicit conversion sequence is the one required to convert the element to the parameter type.
Otherwise, if the parameter type is a character array [Footnote: Since there are no parameters of array type, this will only occur as the underlying type of a reference parameter. —end footnote] and the initializer list has a single element that is an appropriately-typed string literal (8.5.2 [dcl.init.string]), the implicit conversion sequence is the identity conversion.
If Otherwise, if the parameter type is std::initializer_list<X> and...
Change 13.3.3.1.5 [over.ics.list] paragraph 7 as follows:
Otherwise, if the parameter type is not a class:
if the initializer list has one element that is not itself an initializer list, the implicit conversion sequence is the one required to convert the element to the parameter type; [Example:...
Move the final bullet of 13.3.3.2 [over.ics.rank] paragraph 3 to the beginning of the list and change it as follows:
List-initialization sequence L1 is a better conversion sequence than list-initialization sequence L2 if
L1 converts to std::initializer_list<X> for some X and L2 does not, or, if not that,
L1 converts to type “array of N1 T”, L2 converts to type “array of N2 T”, and N1 is smaller than N2.,
even if one of the other rules in this paragraph would otherwise apply. [Example:
void f1(int); // #1
void f1(std::initializer_list<long>); // #2
void g1() { f1({42}); } // chooses #2
void f2(std::pair<const char*, const char*>); // #3
void f2(std::initializer_list<std::string>); // #4
void g2() { f2({"foo","bar"}); } // chooses #4
—end example]
This resolution also resolves issues 1490, 1589, 1631, 1756, and 1758.
Initialization of an array of characters from a string literal is handled by the third bullet of 8.5 [dcl.init] paragraph 16, branching off to 8.5.2 [dcl.init.string]. However, list initialization is handled by the first bullet, branching off to 8.5.4 [dcl.init.list], and there is no corresponding special case in 8.5.4 [dcl.init.list] paragraph 3 for an array of characters initialized by a brace-enclosed string literal. That is, an initialization like
char s[4]{"abc"};
is ill-formed, which could be surprising. Similarly,
std::initializer_list<char>{"abc"};
is plausible but also not permitted.
Notes from the October, 2012 meeting:
CWG agreed that the first example should be permitted, but not the second.
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1467.
The wording of 8.5.4 [dcl.init.list] paragraph 3,
if the initializer list has a single element of type E and either T is not a reference type or its referenced type is reference-related to E, the object or reference is initialized from that element
does not specify whether the initialization is direct-initialization, copy-initialization, or the same kind of initialization that applied to the list-initialization. This matters when E is a class type with an explicit conversion function. (Note that aggregate initialization performs copy-initialization on its subobjects, but it's not clear whether that should be the pattern followed for this case.)
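A hypothetical example in which the distinction matters:

struct E { explicit operator int(); };
E e;
int i{e};      // direct-list-initialization: well-formed only if the element is direct-initialized
int j = {e};   // copy-list-initialization: ill-formed if the element is copy-initialized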
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1467.
In Bloomington there was general agreement that given a class that uses non-static data member initializers, the exception-specification for the default constructor depends on whether those initializers are noexcept. However, according to 9.2 [class.mem] paragraph 2, the class is regarded as complete within the brace-or-equal-initializers for non-static data members.
This suggests that we need to finish processing the class before parsing the NSDMI, but our direction on issue 1351 suggests that we need to parse the NSDMI in order to finish processing the class. Can't have both...
Additional note (March, 2013):
A specific example:
struct A {
  void *p = A{};
  operator void*() const { return nullptr; }
};
Perhaps the best way of addressing this would be to make it ill-formed for a non-static data member initializer to use a defaulted constructor of its class.
See also issue 1360.
Notes from the September, 2013 meeting:
One approach that might be considered would be to parse deferred portions lazily, on demand, and then issue an error if this results in a cycle.
Proposed resolution (February, 2014):
Change 9.2 [class.mem] paragraph 4 as follows:
A brace-or-equal-initializer shall appear only in the declaration of a data member. (For static data members, see 9.4.2 [class.static.data]; for non-static data members, see 12.6.2 [class.base.init]). A brace-or-equal-initializer for a non-static data member shall not directly or indirectly cause the implicit definition of a defaulted default constructor for the enclosing class or the exception-specification of that constructor.
C++ allows only non-static data member declarations in an anonymous union, but C and several C++ implementations permit static_assert declarations. Should the C++ Standard be changed accordingly?
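For illustration, the kind of declaration in question (permitted by the proposed change below):

void f() {
  union {
    int i;
    static_assert(sizeof(int) >= 2, "check");   // currently ill-formed in C++
  };
}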
Proposed resolution (June, 2014):
Change 9.5 [class.union] paragraph 5 as follows:
A union of the form
union { member-specification } ;
is called an anonymous union; it defines an unnamed object of unnamed type. The member-specification of an anonymous union shall only define non-static data members Each member-declaration in the member-specification of an anonymous union shall either define a non-static data member or be a static_assert-declaration. [Note:...
A class-or-decltype is used as a base-specifier and as a mem-initializer-id that names a base class. It is specified in 10 [class.derived] paragraph 1 as:
Consequently, a declaration like
template<typename T> struct D : T::template B<int>::template C<int> {};
is ill-formed, although most implementations accept it; some actually require the use of the template keyword, although the relevant wording in 14.2 [temp.names] paragraph 4 only requires it in a qualified-id, not in a class-or-decltype. It would probably be good to add a production like
to the definition of class-or-decltype and explicitly mention those contexts in 14.2 [temp.names] as not requiring use of the template keyword.
Additional note (January, 2014):
This is effectively issues 314 and 343.
See also issue 1812.
Proposed resolution (February, 2014):
Change 9 [class] paragraph 3 as follows:
If a class is marked with the class-virt-specifier final and it appears as a base-type-specifier class-or-decltype in a base-clause (Clause 10 [class.derived]), the program is ill-formed. Whenever a class-key is followed...
Change the grammar in 10 [class.derived] paragraph 1 as follows:
Delete paragraph 4 and change paragraph 5 of 14.2 [temp.names] as follows, splitting paragraph 5 into two paragraphs and moving the example from paragraph 4 into paragraph 5:
When the name of a member template specialization appears after . or -> in a postfix-expression or after a nested-name-specifier in a qualified-id, and the object expression of the postfix-expression is type-dependent or the nested-name-specifier in the qualified-id refers to a dependent type, but the name is not a member of the current instantiation (14.6.2.1 [temp.dep.type]), the member template name must be prefixed by the keyword template. Otherwise the name is assumed to name a non-template. [Example: ... —end example]
A name prefixed by the keyword template shall be a template-id or the name shall refer to a class template or alias template. [Note: The keyword template may not be applied to non-template members of class templates. —end note] The nested-name-specifier (5.1.1 [expr.prim.general]) of
a class-head-name (Clause 9 [class]) or enum-head (7.2 [dcl.enum]) (if any) or
a qualified-id in a declarator-id (Clause 8 [dcl.decl]),
or a nested-name-specifier directly contained in such a nested-name-specifier (recursively), shall not be of the form
nested-name-specifier template simple-template-id ::
[Note: That is, a simple-template-id shall not be prefixed by the keyword template in these cases. —end note]
The keyword template is optional in a typename-specifier (14.6 [temp.res]), elaborated-type-specifier (7.1.6.3 [dcl.type.elab]), using-declaration (7.3.3 [namespace.udecl]), or class-or-decltype (Clause 10 [class.derived]), and in recursively directly-contained nested-name-specifiers thereof. In these contexts, a < token is always assumed to introduce a template-argument-list. [Note: Thus, if the preceding name is not a template-name, the program is ill-formed. —end note] In other contexts, when the name of a member template specialization appears after a nested-name-specifier that denotes a dependent type, but the name is not a member of the current instantiation, the member template name shall be prefixed by the keyword template. Similarly, when the name of a member template specialization appears after . or -> in a postfix-expression (5.2 [expr.post]) and the object expression of the postfix-expression is type-dependent, but the name is not a member of the current instantiation (14.6.2.1 [temp.dep.type]), the member template name shall be prefixed by the keyword template. Otherwise, the name is assumed to name a non-template. [Example:
<From original paragraph 4>—end example] [Note: As is the case with the typename prefix...
This resolution also resolves issues 314, 343, 1794, and 1812.
The Standard is insufficiently precise in dealing with temporaries. It is not always clear when the term “temporary” is referring to an expression whose result is a prvalue and when it is referring to a temporary object.
(See also issue 1568.)
Proposed resolution (February, 2014):
The resolution is contained in document N3918.
This resolution also resolves issues 1651 and 1893.
The resolution of issues 616 and 1213, making the result of a member access or subscript expression applied to a prvalue an xvalue, means that binding a reference to such a subobject of a temporary does not extend the temporary's lifetime. 12.2 [class.temporary] should be revised to ensure that it does.
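For illustration of the intended behavior:

struct S { int a[3]; };
const int& r = S{{1, 2, 3}}.a[0];   // member access and subscript on a prvalue yield an xvalue;
                                    // the temporary's lifetime should be extended to match r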
Proposed resolution (February, 2014):
This issue is resolved by the resolution of issue 1299.
According to 12.4 [class.dtor] paragraph 3,
A declaration of a destructor that does not have an exception-specification is implicitly considered to have the same exception-specification as an implicit declaration (15.4 [except.spec]).
The implications of this are not clear for the destructor of a class template. For example,
template <class T> struct B: T {
  ~B();
};
template <class T> B<T>::~B() noexcept {}
The implicit exception-specification of the in-class declaration of the destructor depends on the characteristics of the template argument. Does this mean that the out-of-class definition of the destructor is ill-formed, or will it be ill-formed only in specializations where the template argument causes the implicit exception-specification to be other than noexcept?
Proposed resolution (February, 2014):
This issue is resolved by the resolution of issue 1552.
Notes from the April, 2013 meeting:
This issue was approved as a DR at the April, 2013 (Bristol) meeting, but it was not noticed that issue 1552 was not being moved at that time. It is being returned to "drafting" status pending the resolution of that issue.
The grammar for mem-initializer-list in 12.6.2 [class.base.init] paragraph 1 (after the resolution of issue 1649) is right-recursive:
In general, however, such lists elsewhere in the Standard are described using a left-recursive grammar, e.g., for initializer-list in 8.5 [dcl.init] paragraph 1:
It would be better to be consistent in the definition of mem-initializer-list.
Proposed resolution (February, 2014):
Change the grammar in 12.6.2 [class.base.init] paragraph 1 as follows:
The proposed resolution for issue 1402 overlooked some needed changes in 12.8 [class.copy] paragraph 28.
Proposed resolution (February, 2014):
...It is unspecified whether subobjects representing virtual base classes are assigned more than once by the implicitly-defined copy/move assignment operator. [Example:
struct V { };
struct A : virtual V { };
struct B : virtual V { };
struct C : B, A { };

It is unspecified whether the virtual base class subobject V is assigned twice by the implicitly-defined copy/move assignment operator for C. —end example] [Note: This does not apply to move assignment, as a defaulted move assignment operator is deleted if the class has virtual bases. —end note]
Issue 1350 clarified that the exception-specification for an inheriting constructor is determined as for defaulted functions, but we should also say something similar for deleted functions and perhaps for constexpr.
Also, the description of the semantics of inheriting constructors does not seem to allow for C-style variadic functions, so the text should be clearer that such constructors are inherited only without their ellipsis.
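A hypothetical sketch of the intended treatment of the ellipsis:

struct B { B(int, ...); };
struct D : B {
  using B::B;   // intended: D inherits B(int), with the ellipsis
                // parameter specification omitted
};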
Proposed resolution (February, 2014):
Change 12.9 [class.inhctor] paragraph 1 as follows:
A using-declaration (7.3.3 [namespace.udecl]) that names a constructor implicitly declares a set of inheriting constructors. The candidate set of inherited constructors from the class X named in the using-declaration consists of actual constructors and notional constructors that result from the transformation of defaulted parameters and ellipsis parameter specifications as follows:
all non-template constructors for each non-template constructor of X, the constructor that results from omitting any ellipsis parameter specification, and
for each non-template constructor of X that has at least one parameter with a default argument, the set of constructors that results from omitting any ellipsis parameter specification and successively omitting parameters with a default argument from the end of the parameter-type-list, and
all constructor templates for each constructor template of X, the constructor template that results from omitting any ellipsis parameter specification, and
for each constructor template of X that has at least one parameter with a default argument, the set of constructor templates that results from omitting any ellipsis parameter specification and successively omitting parameters with a default argument from the end of the parameter-type-list.
Change 12.9 [class.inhctor] paragraph 2 as follows:
The constructor characteristics of a constructor or constructor template are
the template parameter list (14.1 [temp.param]), if any,
the parameter-type-list parameter-type-list (8.3.5 [dcl.fct]), and
absence or presence of explicit (12.3.1 [class.conv.ctor]), and.
absence or presence of constexpr (7.1.5 [dcl.constexpr]).
Change 12.9 [class.inhctor] paragraph 4 as follows:
A constructor so declared has the same access as the corresponding constructor in X. It is constexpr if the user-written constructor (see below) would satisfy the requirements of a constexpr constructor (7.1.5 [dcl.constexpr]). It is deleted if the corresponding constructor in X is deleted (8.4 [dcl.fct.def] 8.4.3 [dcl.fct.def.delete]) or if a defaulted default constructor (12.1 [class.ctor]) would be deleted, except that the construction of the direct base class X is not considered in the determination. An inheriting constructor shall not be explicitly instantiated (14.7.2 [temp.explicit]) or explicitly specialized (14.7.3 [temp.expl.spec]).
The current wording of the second bullet of paragraph 1 of 13.3.1.4 [over.match.copy] contains the phrase,
When initializing a temporary to be bound to the first parameter of a constructor that takes a reference to possibly cv-qualified T as its first argument...
Presumably “argument” should be “parameter.”
Proposed resolution (February, 2014):
Change 13.3.1.4 [over.match.copy] paragraph 1 as follows:
...the candidate functions are selected as follows:
The converting constructors (12.3.1 [class.conv.ctor]) of T are candidate functions.
When the type of the initializer expression is a class type “cv S”, the non-explicit conversion functions of S and its base classes are considered. When initializing a temporary to be bound to the first parameter of a constructor that takes a where parameter is of type “reference to possibly cv-qualified T” as its first argument, and the constructor is called with a single argument in the context of direct-initialization of an object of type “cv2 T”, explicit conversion functions are also considered. Those that are not hidden...
Consider the following example:
struct X { X(); };
struct Y { explicit operator X(); } y;
X x{y};
This appears to be ill-formed, although the corresponding case with parentheses is well-formed. There seem to be two factors that prevent this from being accepted:
First, the special provision allowing an explicit conversion function to be used when initializing the parameter of a copy/move constructor is in 13.3.1.4 [over.match.copy], and this case takes us to 13.3.1.7 [over.match.list] instead.
Second, 13.3.3.1 [over.best.ics] paragraph 4 says that in this case, because we are in 13.3.1.7 [over.match.list], and we have a single argument, and we are calling a copy/move constructor, we are not allowed to consider a user-defined conversion sequence for the argument.
Similarly, in an example like
struct A {
  A() {}
  A(const A &) {}
};
struct B { operator A() { return A(); } } b;
A a{b};
the wording in 13.3.3.1 [over.best.ics] paragraph 4 with regard to 13.3.1.7 [over.match.list] prevents considering B's conversion function when initializing the first parameter of A's copy constructor, thereby making this code ill-formed.
Notes from the February, 2014 meeting:
This issue should be addressed by the eventual resolution of issue 1467.
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1467.
According to bullet 1 of 13.3.3.1.5 [over.ics.list] paragraph 6,
Otherwise, if the parameter type is not a class:
if the initializer list has one element, the implicit conversion sequence is the one required to convert the element to the parameter type;
...
This wording ignores the possibility that the element might be an initializer list (as opposed to an expression with a type, as illustrated in the example). This oversight affects an example like:
struct A { int a[1]; };
struct B { B(int); };
void f(B, int);
void f(int, A);
int main() {
  f({0}, {{1}});
}
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1467.
The interpretation of the following example is unclear in the current wording:
void f(long);
void f(initializer_list<int>);
int main() {
  f({1L});
}
The problem is that a list-initialization sequence can also be a standard conversion sequence, depending on the types of the elements and the type of the parameter, so more than one bullet in the list in 13.3.3.2 [over.ics.rank] paragraph 3 applies:
Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of the following rules applies:
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if
...
the rank of S1 is better than the rank of S2, or S1 and S2 have the same rank and are distinguishable by the rules in the paragraph below, or, if not that,
...
...
List-initialization sequence L1 is a better conversion sequence than list-initialization sequence L2 if L1 converts to std::initializer_list<X> for some X and L2 does not.
These bullets give opposite results for the example above, and there is implementation variance in which is selected.
Notes from the April, 2013 meeting:
CWG determined that the latter bullet should apply only if the first one does not.
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1467.
The EDG front-end accepts:
template <typename T> struct A {
  template <typename U> struct B {};
};
template <typename T> struct C : public A<T>::template B<T> { };
It rejects this code if the base-specifier is spelled A<T>::B<T>.
However, the grammar for a base-specifier does not allow the template keyword.
Suggested resolution:
It seems to me that a consistent approach to the solution that looks like it will be adopted for issue 180 (which deals with the typename keyword in similar contexts) would be to assume that B is a template if it is followed by a "<". After all, an expression cannot appear in this context.

Notes from the 4/02 meeting:
We agreed that template must be allowed in this context. The syntax needs to be changed. We also opened the related issue 343.
Additional note (August, 2010):
The same considerations apply to mem-initializer-ids, as noted in issue 1019.
Additional note (January, 2014):
See also issue 1710.
Proposed resolution (February, 2014):
This issue is resolved by the resolution of issue 1710.
By analogy with typename, the keyword template used to indicate that a dependent name will be a template name should be optional in contexts where a type is required, e.g., base class lists. We could also consider member and parameter declarations.
This was suggested by issue 314.
Additional note (January, 2014):
See also issue 1710.
Proposed resolution (February, 2014):
This issue is resolved by the resolution of issue 1710.
The current wording of 14.2 [temp.names] paragraph 5 is:
A name prefixed by the keyword template shall be a template-id or the name shall refer to a class template.
Presumably this should also allow template before alias templates. For example,
template<template<typename> class Template> struct Internal {
  template<typename Arg> using Bind = Template<Arg>;
};

template<template<typename> class Template, typename Arg>
  using Instantiate = Template<Arg>;

template<template<typename> class Template, typename Argument>
  using Bind = Instantiate<Internal<Template>::template Bind, Argument>;
Proposed resolution (February, 2014):
This issue is resolved by the resolution of issue 1710.
According to 14.2 [temp.names] paragraph 4,
When the name of a member template specialization appears after . or -> in a postfix-expression or after a nested-name-specifier in a qualified-id, and the object expression of the postfix-expression is type-dependent or the nested-name-specifier in the qualified-id refers to a dependent type, but the name is not a member of the current instantiation (14.6.2.1 [temp.dep.type]), the member template name must be prefixed by the keyword template. Otherwise the name is assumed to name a non-template.
This does not seem necessary in a typename-specifier; a < following a qualified-id in a typename-specifier could safely be assumed to begin a template argument list, so the template keyword should be optional in this case. Some implementations already do not enforce this requirement.
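A non-normative sketch of the case described (the names here are illustrative only):
template <typename U> struct X {
  template <typename V> struct Y { typedef V type; };
};

template <typename T> void f() {
  // Today the member template name must be written with the keyword:
  typedef typename X<T>::template Y<int>::type A;

  // In a typename-specifier a following "<" could only begin a template
  // argument list, so the suggestion is that this spelling could also be
  // accepted:
  // typedef typename X<T>::Y<int>::type B;
}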
See also issue 1710.
Proposed resolution (February, 2014):
This issue is resolved by the resolution of issue 1710.
According to 14.5.4 [temp.friend] paragraph 5,
A member of a class template may be declared to be a friend of a non-template class. In this case, the corresponding member of every specialization of the class template is a friend of the class granting friendship. For explicit specializations the corresponding member is the member (if any) that has the same name, kind (type, function, class template, or function template), template parameters, and signature as the member of the class template instantiation that would otherwise have been generated.
Should this treatment of members of explicit specializations also apply to members of partial specializations?
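A non-normative sketch of the question (the classes here are hypothetical):
template <typename T> struct A {
  void f();
};

template <typename T> struct A<T*> {   // partial specialization
  void f();
};

class C {
  int m;
  // Friendship is granted to the corresponding member of specializations
  // generated from the primary template; the question is whether A<X*>::f,
  // generated from the partial specialization, is covered as well.
  template <typename T> friend void A<T>::f();
};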
Proposed resolution (February, 2014):
Change 14.5.4 [temp.friend] paragraph 5 as follows:
A member of a class template may be declared to be a friend of a non-template class. In this case, the corresponding member of every specialization of the primary class template and class template partial specializations thereof is a friend of the class granting friendship. For explicit specializations and specializations of partial specializations, the corresponding member is the member (if any) that has the same name, kind (type, function, class template, or function template), template parameters, and signature as the member of the class template instantiation that would otherwise have been generated. [Example:...
A member function with no ref-qualifier can be called for a class prvalue, as can a non-member function whose first parameter is an rvalue reference to that class type. However, 14.5.6.2 [temp.func.order] does not handle this case.
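A non-normative sketch of the situation (the declarations are illustrative only):
struct W { };

template <typename T> struct A {
  template <typename U> int operator*(U);                  // #1: member, no ref-qualifier
};

template <typename T, typename U> int operator*(T&&, U);   // #2: non-member

void g() {
  // Both #1 and #2 can be called for the class prvalue A<W>(); the current
  // wording of 14.5.6.2 [temp.func.order] does not say how the first
  // parameter synthesized for #1 compares with the rvalue reference
  // parameter of #2 during partial ordering.
  A<W>() * 42;
}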
Proposed resolution (February, 2014):
Change 14.5.6.2 [temp.func.order] paragraph 3 as follows:
...If only one of the function templates M is a non-static member of some class A, that function template M is considered to have a new first parameter inserted in its function parameter list. Given cv as the cv-qualifiers of the function template M (if any), the new parameter is of type “rvalue reference to cv A” if the optional ref-qualifier of the function template M is &&, or if M has no ref-qualifier and the first parameter of the other template has rvalue reference type. Otherwise, the new parameter is of type “lvalue reference to cv A” otherwise. [Note: This allows a non-static member to be ordered with respect to a nonmember function and for the results to be equivalent to the ordering of two equivalent nonmembers. —end note] [Example:...
The treatment of unused arguments in an alias template specialization is not specified by the current wording of 14.5.7 [temp.alias]. For example:
#include <iostream>

template <class T, class...> using first_of = T;
template <class T> first_of<void, typename T::type> f(int) { std::cout << "1\n"; }
template <class T> void f(...) { std::cout << "2\n"; }

struct X { typedef void type; };

int main() {
  f<X>(0);
  f<int>(0);
}
Is the reference to first_of<void, T::type> with T being int equivalent to simply void, or is it a substitution failure?
(See also issues 1430, 1520, and 1554.)
Notes from the October, 2012 meeting:
The consensus of CWG was to treat this case as substitution failure.
Proposed resolution (February, 2014):
Add the following as a new paragraph before 14.5.7 [temp.alias] paragraph 3:
When a template-id refers to the specialization of an alias template, it is equivalent...
However, if the template-id is dependent, subsequent template argument substitution still applies to the template-id. [Example:
template<typename...> using void_t = void;
template<typename T> void_t<typename T::foo> f();
f<int>();   // error, int does not have a nested type foo
—end example]
The type-id in an alias template declaration shall not refer...
Various characteristics of entities referred to by a non-dependent reference in a template can change between the definition context and the point of instantiation of a specialization of that template. These include initialization (which affects whether an object can be used in a constant expression), function and template default arguments, and the completeness of types. There is implementation divergence as to whether these are checked in the definition context or at the point of instantiation. Presumably a rule is needed to make it ill-formed, no diagnostic required, if the validity of such a reference changes between the two contexts.
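For illustration, a non-normative sketch of the completeness case mentioned in the note below:
struct X;                       // incomplete when the template is defined

template <typename T>
int f() { return sizeof(X); }   // non-dependent construct using an incomplete type

struct X { int m; };            // complete by the point of instantiation

int n = f<int>();               // hypothetical instantiation of f immediately after its
                                // definition would have been ill-formed, so under the
                                // proposed rule the program is ill-formed, no diagnostic
                                // required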
Proposed resolution (February, 2014):
Change 14.6 [temp.res] paragraph 8 as follows:
...If a type used in a non-dependent name is incomplete at the point at which a template is defined but is complete at the point at which an instantiation is done, and if the completeness of that type affects whether or not the program is well-formed or affects the semantics of the program, hypothetical instantiation of a template immediately following its definition would be ill-formed due to a construct that does not depend on a template parameter, the program is ill-formed; no diagnostic is required. If the interpretation of such a construct in the hypothetical instantiation is different from the interpretation of the corresponding construct in any actual instantiation of the template, the program is ill-formed; no diagnostic is required. [Note: This can happen in situations including the following:
a type used in a non-dependent name is incomplete at the point at which a template is defined but is complete at the point at which an instantiation is performed, or
an instantiation uses a default argument or default template argument that had not been defined at the point at which the template was defined, or
constant expression evaluation (5.19 [expr.const]) within the template instantiation uses
the value of a const object of integral or unscoped enumeration type or
the value of a constexpr object or
the value of a reference or
the definition of a constexpr function,
and that entity was not defined when the template was defined, or
a class template specialization or variable template specialization that is specified by a non-dependent simple-template-id is used by the template, and either it is instantiated from a partial specialization that was not defined when the template was defined or it names an explicit specialization that was not declared when the template was defined.
—end note] [Note: If a template is instantiated...
Is the following example well-formed?
template<class T> struct A {
  typedef int M;
  struct B {
    typedef void M;
    struct C;
  };
};

template<class T> struct A<T>::B::C : A<T> {
  M       // A<T>::M or A<T>::B::M?
    p[2];
};
14.6.2 [temp.dep] paragraph 3 says the use of M should refer to A<T>::B::M because the base class A<T> is not searched because it's dependent. But in this case A<T> is also the current instantiation (14.6.2.1 [temp.dep.type]) so it seems like it should be searched.
Notes from the August, 2011 meeting:
The recent changes to the handling of the current instantiation may have sufficiently addressed this issue.
Additional note (September, 2012):
See also issue 1526 for additional analysis demonstrating that this issue is still current despite the changes to the description of the current instantiation. The status has consequently been changed back to "open" for further consideration.
Proposed resolution (February, 2014):
Add the following as a new paragraph before 14.6.2.1 [temp.dep.type] paragraph 4:
A dependent base class is a base class that is a dependent type and is not the current instantiation. [Note: a base class can be the current instantiation in the case of a nested class naming an enclosing class as a base. —end note] [Example:
template<class T> struct A {
  typedef int M;
  struct B {
    typedef void M;
    struct C;
  };
};

template<class T> struct A<T>::B::C : A<T> {
  M m;    // OK, A<T>::M
};
—end example]
A name is a member of the current instantiation if...
Change 14.6.1 [temp.local] paragraph 9 as follows:
In the definition of a class template or in the definition of a member of such a template that appears outside of the template definition, for each non-dependent base class (14.6.2.1 [temp.dep.type]) which does not depend on a template-parameter (14.6.2 [temp.dep]), if the name of the base class or the name of a member of the base class is the same as the name of a template-parameter, the base class name or member name hides the template-parameter name (3.3.10 [basic.scope.hiding]). [Example:...
Change 14.6.2 [temp.dep] paragraph 3 as follows:
In the definition of a class or class template, if a base class depends on a template-parameter, the base class scope the scope of a dependent base class (14.6.2.1 [temp.dep.type]) is not examined during unqualified name lookup either at the point of definition of the class template or member or during an instantiation of the class template or member. [Example:...
The discussion of issue 1233 revealed that the dependency of function calls involving a braced-init-list containing a pack expansion is not adequately addressed by the existing wording.
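A minimal sketch of the kind of call in question (h is a hypothetical name, found only at instantiation if the call is dependent):
template <typename... T>
void g(T... t) {
  // The braced-init-list contains a pack expansion; whether that makes the
  // call to h dependent is the gap the wording below addresses.
  h({t...});
}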
Proposed resolution (February, 2014):
Change 14.6.2 [temp.dep] paragraph 1 as follows:
...In an expression of the form:
postfix-expression ( expression-listopt )
where the postfix-expression is an unqualified-id, the unqualified-id denotes a dependent name if
any of the expressions in the expression-list is a pack expansion (14.5.3 [temp.variadic]),
any of the expressions or braced-init-lists in the expression-list is a type-dependent expression (14.6.2.2 [temp.dep.expr]), or
if the unqualified-id is...
Add the following as a new paragraph at the end of 14.6.2.2 [temp.dep.expr]:
A class member access expression (5.2.5 [expr.ref]) is type-dependent if...
A braced-init-list is type-dependent if any element is type-dependent or is a pack expansion.
The length of the __func__ array is implementation-defined but potentially depends on the signature of the function in which it occurs. However, __func__ is not listed among the type-dependent id-expressions in 14.6.2.2 [temp.dep.expr] paragraph 3.
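A non-normative sketch of why __func__ should be treated as type-dependent here:
template <typename T>
void f(T) {
  // __func__ names an array of const char whose length is
  // implementation-defined and can vary with the enclosing function's
  // signature, and hence with T.
  auto& a = __func__;
  (void)a;
}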
Proposed resolution (February, 2014):
Change 14.6.2.2 [temp.dep.expr] paragraph 3 as follows:
An id-expression is type-dependent if it contains
an identifier associated by name lookup with one or more declarations declared with a dependent type,
an identifier associated by name lookup with one or more declarations of member functions of the current instantiation declared with a return type that contains a placeholder type (7.1.6.4 [dcl.spec.auto]),
the identifier __func__ (8.4.1 [dcl.fct.def.general]) where any enclosing function is a template, a member of a class template, or a generic lambda,
a template-id that is dependent,
...
Do local classes of function templates get the same treatment as member classes of class templates? In particular, is their definition only instantiated when they are required? For example,
template<typename T> void f() {
  struct B {
    T t;
  };
}

int main() {
  f<void>();
}
Implementations vary on this question.
(This question is superficially similar to the one in issue 1253. However, the entities in view in that issue can be named and defined outside the containing template and thus can be explicitly specialized, none of which is true for local classes of function templates.)
It should also be noted that the resolution of this issue should apply as well to local enumeration types.
Proposed resolution (October, 2012):
Change 14.7.1 [temp.inst] paragraph 1 as follows:
Unless a class template specialization has been explicitly instantiated (14.7.2 [temp.explicit]) or explicitly specialized (14.7.3 [temp.expl.spec]), the class template specialization is implicitly instantiated when the specialization is referenced in a context that requires a completely-defined object type or when the completeness of the class type affects the semantics of the program. [Note: Within a template declaration, a local class or enumeration and the members of a local class are never considered to be entities that can be separately instantiated (this includes their default arguments, exception-specifications, and non-static data member initializers, if any). As a result, the dependent names are looked up, the semantic constraints are checked, and any templates used are instantiated as part of the instantiation of the entity within which the local class or enumeration is declared. —end note] The implicit instantiation of a class template specialization...
Notes from the April, 2013 meeting:
The proposed resolution interacts with N3649 (generic lambdas), adopted at this meeting, and this issue has returned to "review" status to allow any necessary changes to be made.
14.8.2 [temp.deduct] paragraph 9 reads,
Except as described above, the use of an invalid value shall not cause type deduction to fail. [Example: In the following example 1000 is converted to signed char and results in an implementation-defined value as specified in (4.7 [conv.integral]). In other words, both templates are considered even though 1000, when converted to signed char, results in an implementation-defined value.
template <int> int f(int);
template <signed char> int f(int);
int i1 = f<1>(0);     // ambiguous
int i2 = f<1000>(0);  // ambiguous
—end example]
This is no longer correct, even ignoring the fact that some implementations may be able to represent the value 1000 as a signed char: integral and enumeration non-type template arguments are now converted constant expressions (14.3.2 [temp.arg.nontype] paragraph 1), and converted constant expressions disallow narrowing conversions (5.19 [expr.const] paragraph 3).
Proposed resolution (February, 2014):
Change 14.8.2 [temp.deduct] paragraph 9 as follows:
Except as described above, the use of an invalid value shall not cause type deduction to fail. [Example: In the following example, 1000 is converted to signed char and results in an implementation-defined value as specified in (4.7 [conv.integral]). In other words, both templates are considered even though 1000, when converted to signed char, results in an implementation-defined value assuming a signed char cannot represent the value 1000, a narrowing conversion would be required to convert the template-argument of type int to signed char, therefore substitution fails for the second template (14.3.2 [temp.arg.nontype])..
template <int> int f(int);
template <signed char> int f(int);
int i1 = f<1000>(0);  // ambiguous OK
int i2 = f<1000>(0);  // ambiguous; not narrowing
—end example]
It is not clear whether the following is well-formed or not:
void foo() {}
template<class T> void deduce(const T*) { }
int main() {
  deduce(foo);
}
Implementations vary in their treatment of this example.
Proposed resolution (April, 2013):
Change 14.8.2.5 [temp.deduct.type] paragraph 18 as follows:
A template-argument can be deduced from a function, pointer to function, or pointer to member function type. [Note: cv-qualification of a deduced function type is ignored; see 8.3.5 [dcl.fct]. —end note] [Example:
template<class T> void f(void(*)(T,int));
template<class T> void f2(const T*);
template<class T> void foo(T,int);
void g(int,int);
void g(char,int);
void g2();
void h(int,int,int);
void h(char,int);
int m() {
  f(&g);    // error: ambiguous
  f(&h);    // OK: void h(char,int) is a unique match
  f(&foo);  // error: type deduction fails because foo is a template
  f2(g2);   // OK: cv-qualification of deduced function type ignored
}
—end example]
Currently, 14.8.2.1 [temp.deduct.call] paragraph 1 says,
Template argument deduction is done by comparing each function template parameter type (call it P) with the type of the corresponding argument of the call (call it A) as described below. If removing references and cv-qualifiers from P gives std::initializer_list<P'> for some P' and the argument is an initializer list (8.5.4 [dcl.init.list]), then deduction is performed instead for each element of the initializer list, taking P' as a function template parameter type and the initializer element as its argument. Otherwise, an initializer list argument causes the parameter to be considered a non-deduced context (14.8.2.5 [temp.deduct.type]).
It would seem reasonable, however, to allow an array bound to be deduced from the number of elements in the initializer list, e.g.,
template<int N> void g(int const (&)[N]);
void f() {
  g( { 1, 2, 3, 4 } );
}
Additional note (March, 2013):
The element type should also be deducible.
Proposed resolution (February, 2014):
Change 14.8.2.1 [temp.deduct.call] paragraph 1 as follows:
Template argument deduction is done by comparing each function template parameter type (call it P) with the type of the corresponding argument of the call (call it A) as described below. If P is a dependent type, removing references and cv-qualifiers from P gives std::initializer_list<P'> or P'[N] for some P' and N, and the argument is an a non-empty initializer list (8.5.4 [dcl.init.list]), then deduction is performed instead for each element of the initializer list, taking P' as a function template parameter type and the initializer element as its argument, and in the P'[N] case, if N is a non-type template parameter, N is deduced from the length of the initializer list. Otherwise, an initializer list argument causes the parameter to be considered a non-deduced context (14.8.2.5 [temp.deduct.type]). [Example:
template<class T> void f(std::initializer_list<T>);
f({1,2,3});               // T deduced to int
f({1,"asdf"});            // error: T deduced to both int and const char*

template<class T> void g(T);
g({1,2,3});               // error: no argument deduced for T

template<class T, int N> void h(T const(&)[N]);
h({1,2,3});               // T deduced to int, N deduced to 3

template<class T> void j(T const(&)[3]);
j({42});                  // T deduced to int, array bound not considered

struct Aggr { int i; int j; };
template<int N> void k(Aggr const(&)[N]);
k({1,2,3});               // error: deduction fails, no conversion from int to Aggr
k({{1},{2},{3}});         // OK, N deduced to 3

template<int M, int N> void m(int const(&)[M][N]);
m({{1,2},{3,4}});         // M and N both deduced to 2

template<class T, int N> void n(T const(&)[N], T);
n({{1},{2},{3}},Aggr());  // OK, T is Aggr, N is 3
—end example] For a function parameter pack...
Change the penultimate bullet of 14.8.2.5 [temp.deduct.type] paragraph 5 as follows:
The non-deduced contexts are:
...
A function parameter for which the associated argument is an initializer list (8.5.4 [dcl.init.list]) but the parameter does not have std::initializer_list or reference to possibly cv-qualified std::initializer_list type a type for which deduction from an initializer list is specified (14.8.2.1 [temp.deduct.call]). [Example:...
A function parameter pack that does not occur at the end of the parameter-declaration-list.
The current wording of 14.8.2.4 [temp.deduct.partial] paragraph 10 is:
If for each type being considered a given template is at least as specialized for all types and more specialized for some set of types and the other template is not more specialized for any types or is not at least as specialized for any types, then the given template is more specialized than the other template. Otherwise, neither template is more specialized than the other.
This is confusing and needs to be clarified.
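As a reminder of the kind of determination this paragraph governs, a small non-normative example:
template <typename T> void f(T&);    // #1
template <typename T> void f(T&&);   // #2

void g() {
  int i = 0;
  f(i);   // both templates deduce an exact match for the lvalue i; #1 is chosen
          // because the reference tie-breaker above makes #1 more specialized
}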
Proposed resolution (September, 2013) [SUPERSEDED]:
Change 14.8.2.4 [temp.deduct.partial] paragraphs 9 and 10 as follows:
If, for a given type, deduction succeeds in both directions (i.e., the types are identical after the transformations above) and both P and A were reference types (before being replaced with the type referred to above):
if the type from the argument template was an lvalue reference and the type from the parameter template was not, the argument type is considered to be more specialized than the other the other type is not considered to be at least as specialized as the argument type; otherwise,
if the type from the argument template is more cv-qualified than the type from the parameter template (as described above), the argument type is considered to be more specialized than the other; otherwise, the other type is not considered to be at least as specialized as the argument type.
neither type is more specialized than the other.
If for each type being considered a given template is at least as specialized for all types and more specialized for some set of types and the other template is not more specialized for any types or is not at least as specialized for any types, then the given template is more specialized than the other template. Otherwise, neither template is more specialized than the other. A given template is at least as specialized as another template if it is at least as specialized as the other template for all types being considered. A given template is more specialized than another template if it is at least as specialized as the other template for all types being considered, and the other template is not at least as specialized as the given template for any type being considered.
Proposed resolution (February, 2014):
Change 14.8.2.4 [temp.deduct.partial] paragraphs 9-10 as follows:
If, for a given type, deduction succeeds in both directions (i.e., the types are identical after the transformations above) and both P and A were reference types (before being replaced with the type referred to above):
if the type from the argument template was an lvalue reference and the type from the parameter template was not, the argument type is considered to be more specialized than the other the parameter type is not considered to be at least as specialized as the argument type; otherwise,
if the type from the argument template is more cv-qualified than the type from the parameter template (as described above), the argument type is considered to be more specialized than the other; otherwise, the parameter type is not considered to be at least as specialized as the argument type.
neither type is more specialized than the other.
If for each type being considered a given template is at least as specialized for all types and more specialized for some set of types and the other template is not more specialized for any types or is not at least as specialized for any types, then the given template is more specialized than the other template. Otherwise, neither template is more specialized than the other. Function template F is at least as specialized as function template G if, for each pair of types used to determine the ordering, the type from F is at least as specialized as the type from G. F is more specialized than G if F is at least as specialized as G and G is not at least as specialized as F.
The determination of the exception-specification for an implicitly-declared special member function, as described in 15.4 [except.spec] paragraph 14, does not take into account the fact that nonstatic data member initializers and default arguments in default constructors can contain throw-expressions, which are not part of the exception-specification of any function that is “directly invoked” by the implicit definition. Also, the reference to “directly invoked” functions is not restricted to potentially-evaluated expressions, thus possibly including irrelevant exception-specifications.
Additional note (August, 2012):
The direction established by CWG for resolving this issue was to consider functions called from default arguments and non-static data member initializers in determining the exception-specification. This leads to a problem with ordering: because non-static data member initializers can refer to members declared later, their effect cannot be known until the end of the class. However, a non-static data member initializer could possibly refer to an implicitly-declared constructor, either its own or that of an enclosing class.
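A non-normative sketch of a brace-or-equal-initializer containing a throw-expression (Y is a hypothetical exception type):
struct Y { };

struct A {
  // The brace-or-equal-initializer contains a throw-expression, but it is not
  // part of any function "directly invoked" by the implicit default
  // constructor, which is the gap described above.
  int i = (throw Y(), 0);
};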
Proposed resolution (October, 2012) [SUPERSEDED]:
Add the following two new paragraphs and make the indicated changes to 15.4 [except.spec] paragraph 14:
A set of potential exceptions may contain types and the special value “any.” The set of potential exceptions of an expression is the union of all sets of potential exceptions of each potentially-evaluated subexpression e:
If e is a call to a function, member function, function pointer, or member function pointer (including implicit calls, such as a call to the allocation function in a new-expression):
if it has a non-throwing exception-specification or the call is a core constant expression (5.19 [expr.const]), the set is empty;
otherwise, if it has a dynamic-exception-specification, the set consists of every type in that dynamic-exception-specification;
otherwise, the set consists of “any.”
If e is a throw-expression (15.1 [except.throw]), the set consists of the type of the exception object that would be initialized by the operand if present, or “any” otherwise.
If e is a dynamic_cast expression that casts to a reference type and requires a run-time check (5.2.7 [expr.dynamic.cast]), the set consists of the type std::bad_cast.
If e is a typeid expression applied to a glvalue expression whose type is a polymorphic class type (5.2.8 [expr.typeid]), the set consists of the type std::bad_typeid.
If e is a new-expression with a non-constant expression in the noptr-new-declarator (5.3.4 [expr.new]), the set also includes the type std::bad_array_new_length.
Otherwise, the set is the empty set.
The set of potential exceptions of a function f of some class X, where f is an inheriting constructor or an implicitly-declared special member function, is defined as follows:
If f is a constructor, the set is the union of the sets of potential exceptions of the constructor invocations for X's non-variant non-static data members, for X's direct base classes, and, if X is non-abstract (10.4 [class.abstract]), for X's virtual base classes, as selected by overload resolution for the implicit definition of f (12.1 [class.ctor]), including default argument expressions used in such invocations. [Note: Even though destructors for fully constructed subobjects are invoked when an exception is thrown during the execution of a constructor (15.2 [except.ctor]), their exception-specifications do not contribute to the exception-specification of the constructor, because an exception thrown from such a destructor could never escape the constructor (15.1 [except.throw], 15.5.1 [except.terminate]). —end note]
If f is a default constructor or inheriting constructor, the set also contains all members of the sets of potential exceptions of the initialization of non-static data members from brace-or-equal-initializers.
If f is an assignment operator, the set is the union of the sets of potential exceptions of the assignment operator invocations for X's non-variant non-static data members and for X's virtual and direct base classes, as selected by overload resolution for the implicit definition of f (12.8 [class.copy]), including default argument expressions used in such invocations.
If f is a destructor, the set is the union of the sets of potential exceptions of the destructor invocations for X's non-variant non-static data members and for X's virtual and direct base classes.
An inheriting constructor (12.9 [class.inhctor]) and an implicitly declared implicitly-declared special member function (Clause 12 [special]) have an are considered to have an implicit exception-specification. If f is an inheriting constructor or an implicitly declared default constructor, copy constructor, move constructor, destructor, copy assignment operator, or move assignment operator, its implicit exception-specification specifies the type-id T if and only if T is allowed by the exception-specification of a function directly invoked by f's implicit definition; f allows all exceptions if any function it directly invokes allows all exceptions, and f has the exception-specification noexcept(true) if every function it directly invokes allows no exceptions. The implicit exception-specification is noexcept(false) if the set of potential exceptions of the function contains “any;” otherwise, if that set contains at least one type, the implicit exception-specification specifies each type T contained in the set; otherwise, the implicit exception-specification is noexcept(true). [Note: An instantiation of an inheriting constructor template has an implied exception-specification as if it were a non-template inheriting constructor. —end note] [Example:
struct A {
  A();
  A(const A&) throw();
  A(A&&) throw();
  ~A() throw(X);
};
struct B {
  B() throw();
  B(const B&) throw();
  B(B&&, int = (throw Y(), 0)) throw(Y) noexcept;
  ~B() throw(Y);
};
struct D : public A, public B {
  // Implicit declaration of D::D();
  // Implicit declaration of D::D(const D&) noexcept(true);
  // Implicit declaration of D::D(D&&) throw(Y);
  // Implicit declaration of D::~D() throw(X, Y);
};
Furthermore, if...
Change 5.3.7 [expr.unary.noexcept] paragraph 3 as follows:
The result of the noexcept operator is false if in a potentially-evaluated context the set of potential exceptions of the expression (15.4 [except.spec]) would contain contains “any” or at least one type and true otherwise.
a potentially evaluated call80 to a function, member function, function pointer, or member function pointer that does not have a non-throwing exception-specification (15.4 [except.spec]), unless the call is a constant expression (5.19 [expr.const]),
a potentially evaluated throw-expression (15.1 [except.throw]),
a potentially evaluated dynamic_cast expression dynamic_cast<T>(v), where T is a reference type, that requires a run-time check (5.2.7 [expr.dynamic.cast]), or
a potentially evaluated typeid expression (5.2.8 [expr.typeid]) applied to a glvalue expression whose type is a polymorphic class type (10.3 [class.virtual]).
Otherwise, the result is true.
(This resolution also resolves issues 1356 and 1465.)
Additional note (October, 2012):
The preceding wording has been modified from the version that was reviewed following the October, 2012 meeting and thus has been returned to "review" status.
Additional note (March, 2013):
It has been suggested that it might be more consistent with other parts of the language, and particularly in view of the deprecation of dynamic-exception-specifications, if a potentially-throwing non-static data member initializer simply made an implicit constructor noexcept(false) instead of giving it a set of potential exception types.
Additional note, April, 2013:
One problem with the approach suggested in the preceding note would be something like the following example:
struct S {
  virtual ~S() throw(int);
};
struct D: S { };
This approach would make the example ill-formed, because the derived class destructor would be declared to throw types not permitted by the base class destructor's exception-specification. A further elaboration on the suggestion above that would not have this objection would be to define all dynamic-exception-specifications as simply equivalent to noexcept(false).
(See also issue 1639.)
Additional note, April, 2013:
The version of this resolution approved in Bristol assumed the underlying text of the C++11 IS; however, the wording of 15.4 [except.spec] paragraph 14 has been changed by previous resolutions, so this and the related issues are being returned to "review" status.
Proposed resolution, February, 2014 [SUPERSEDED]:
Change 15.4 [except.spec] paragraph 5 as follows:
If a virtual function has an exception-specification, all declarations, including the definition, of any function that overrides that virtual function in any derived class shall only allow exceptions that are allowed by the exception-specification of the base class virtual function, unless the overriding function is defined as deleted. [Example:...
Add the following two new paragraphs and change 15.4 [except.spec] paragraph 14 as indicated:
A set of potential exceptions may contain types and the special value “any”. The set of potential exceptions of an expression is the union of all sets of potential exceptions of each potentially-evaluated subexpression e:
If e is a core constant expression (5.19 [expr.const]), the set is empty.
Otherwise, if e is a function call (5.2.2 [expr.call]) whose postfix-expression is not a (possibly parenthesized) id-expression (5.1.1 [expr.prim.general]) or class member access (5.2.5 [expr.ref]), the set consists of “any”.
Otherwise, if e invokes a function, member function, or function pointer (including implicit calls, such as to an overloaded operator or to an allocation function in a new-expression):
if its declaration has a non-throwing exception-specification, the set is empty;
otherwise, if its declaration has a dynamic-exception-specification, the set consists of every type in that dynamic-exception-specification;
otherwise, the set consists of “any”.
If e is a throw-expression (15.1 [except.throw]), the set consists of the type of the exception object that would be initialized by the operand if present, or “any” otherwise.
If e is a dynamic_cast expression that casts to a reference type and requires a run-time check (5.2.7 [expr.dynamic.cast]), the set consists of the type std::bad_cast.
If e is a typeid expression applied to a glvalue expression whose type is a polymorphic class type (5.2.8 [expr.typeid]), the set consists of the type std::bad_typeid.
If e is a new-expression with a non-constant expression in the noptr-new-declarator (5.3.4 [expr.new]), the set also includes the type std::bad_array_new_length.
If none of the previous items applies, the set is the empty set.
The set of potential exceptions of an implicitly-declared special member function f of some class X is defined as follows:
If f is a constructor, the set is the union of the sets of potential exceptions of the constructor invocations for X's non-variant non-static data members, for X's direct base classes, and, if X is non-abstract (10.4 [class.abstract]), for X's virtual base classes, as selected by overload resolution for the implicit definition of f (12.1 [class.ctor]), including default argument expressions used in such invocations. [Note: Even though destructors for fully constructed subobjects are invoked when an exception is thrown during the execution of a constructor (15.2 [except.ctor]), their exception-specifications do not contribute to the exception-specification of the constructor, because an exception thrown from such a destructor could never escape the constructor (15.1 [except.throw], 15.5.1 [except.terminate]). —end note]
If f is a default constructor, the set also contains all members of the sets of potential exceptions of the initialization of non-static data members from brace-or-equal-initializers.
If f is an assignment operator, the set is the union of the sets of potential exceptions of the assignment operator invocations for X's non-variant non-static data members and for X's virtual and direct base classes, as selected by overload resolution for the implicit definition of f (12.8 [class.copy]), including default argument expressions used in such invocations.
If f is a destructor, the set is the union of the sets of potential exceptions of the destructor invocations for X's non-variant non-static data members and for X's virtual and direct base classes.
An inheriting constructor (12.9 [class.inhctor]) and an implicitly-declared special member function (Clause 12 [special]) have are considered to have an implicit exception-specification. If f is an inheriting constructor or an implicitly declared default constructor, copy constructor, move constructor, destructor, copy assignment operator, or move assignment operator, its implicit exception-specification specifies the type-id T if and only if T is allowed by the exception-specification of a function directly invoked by f's implicit definition; f allows all exceptions if any function it directly invokes allows all exceptions, and f has the exception-specification noexcept(true) if every function it directly invokes allows no exceptions. [Note: It follows that f has the exception-specification noexcept(true) if it invokes no other functions. —end note] [Note: An instantiation of an inheriting constructor template has an implied exception-specification as if it were a non-template inheriting constructor. —end note] The implicit exception-specification is noexcept(false) if the set of potential exceptions of the special member function contains “any”; otherwise, if that set contains at least one type, the implicit exception-specification specifies each type T contained in the set; otherwise, the implicit exception-specification is noexcept(true). [Example:
struct A {
  A();
  A(const A&) throw();
  A(A&&) throw();
  ~A() throw(X);
};
struct B {
  B() throw();
  B(const B&) = default;   // Declaration of B::B(const B&) noexcept(true) throw();
  B(B&&, int = (throw Y(), 0)) throw(Y) noexcept;
  ~B() throw(Y);
};
struct D : public A, public B {
  // Implicit declaration of D::D();
  // Implicit declaration of D::D(const D&) noexcept(true);
  // Implicit declaration of D::D(D&&) throw(Y);
  // Implicit declaration of D::~D() throw(X, Y);
};
Furthermore...
Change 5.3.7 [expr.unary.noexcept]paragraph 3 as follows:
The result of the noexcept operator is false true if in a potentially-evaluated context the set of potential exceptions of the expression (15.4 [except.spec]) would contain is empty, and false otherwise.
a potentially-evaluated call [[Footnote: This includes implicit calls such as the call to an allocation function in a new-expression. —end footnote] to a function, member function, function pointer, or member function pointer that does not have a non-throwing exception-specification (15.4 [except.spec]), unless the call is a constant expression (5.19 [expr.const]),
a potentially-evaluated throw-expression (15.1 [except.throw]),
a potentially-evaluated dynamic_cast expression dynamic_cast<T>(v), where T is a reference type, that requires a run-time check (5.2.7 [expr.dynamic.cast]), or
a potentially-evaluated typeid expression (5.2.8 [expr.typeid]) applied to a glvalue expression whose type is a polymorphic class type (10.3 [class.virtual]).
Otherwise, the result is true.
(This resolution also resolves issues 1356, 1465, and 1639.)
Additional note, May, 2014:
The current version of the proposed resolution only defines the set of potential exceptions for special member functions; since an inheriting constructor is not a special member function, the exception-specification for an inheriting constructor is no longer specified.
In addition, the structure of the specification of the set of potential exceptions of an expression is unclear. If the bulleted list is intended to be the definition of the general statement (“union of all sets of potential exceptions...”), it's incomplete because it doesn't consider exceptions thrown by the evaluation of function arguments in a call, just the exceptions thrown by the function itself; if it's intended to be a list of exceptions to the general rule, the rule about core constant expressions doesn't exclude unselected subexpressions that might throw, so those exceptions are incorrectly included in the union.
The issue has been returned to "review" status to allow discussion of these points.
See also the discussion in messages 25290 through 25293.
Proposed resolution (June, 2014):
Change 15.4 [except.spec] paragraph 5 as follows:
If a virtual function has an exception-specification, all declarations, including the definition, of any function that overrides that virtual function in any derived class shall only allow exceptions that are allowed by the exception-specification of the base class virtual function, unless the overriding function is defined as deleted. [Example:...
Add the following new paragraphs following 15.4 [except.spec] paragraph 13:
An exception-specification is not considered part of a function's type.
A potential exception of a given context is either a type that might be thrown as an exception or a pseudo-type, denoted by “any”, that represents the situation where an exception of an arbitrary type might be thrown. A subexpression e1 of an expression e is an immediate subexpression if there is no subexpression e2 of e such that e1 is a subexpression of e2.
The set of potential exceptions of a function, function pointer, or member function pointer f is defined as follows:
If the declaration of f has a non-throwing exception-specification, the set is empty.
Otherwise, if the declaration of f has a dynamic-exception-specification, the set consists of every type in that dynamic-exception-specification.
Otherwise, the set consists of the pseudo-type “any”.
The set of potential exceptions of an expression e is empty if e is a core constant expression (5.19 [expr.const]). Otherwise, it is the union of the sets of potential exceptions of the immediate subexpressions of e, including default argument expressions used in a function call, combined with a set S defined by the form of e, as follows:
If e is a function call (5.2.2 [expr.call]):
If its postfix-expression is a (possibly parenthesized) id-expression (5.1.1 [expr.prim.general]), class member access (5.2.5 [expr.ref]), or pointer-to-member operation (5.5 [expr.mptr.oper]) whose cast-expression is an id-expression, S is the set of potential exceptions of the entity selected by the contained id-expression (after overload resolution, if applicable).
Otherwise, S contains the pseudo-type “any”.
If e implicitly invokes a function (such as an overloaded operator, an allocation function in a new-expression, or a destructor if e is a full-expression), S is the set of potential exceptions of the function.
if e is a throw-expression (15.1 [except.throw]), S consists of the type of the exception object that would be initialized by the operand, if present, or the pseudo-type “any” otherwise.
if e is a dynamic_cast expression that casts to a reference type and requires a run-time check (5.2.7 [expr.dynamic.cast]), S consists of the type std::bad_cast.
if e is a typeid expression applied to a glvalue expression whose type is a polymorphic class type (5.2.8 [expr.typeid]), S consists of the type std::bad_typeid.
if e is a new-expression with a non-constant expression in the noptr-new-declarator (5.3.4 [expr.new]), S consists of the type std::bad_array_new_length.
[Example: Given the following declarations
void f() throw(int);
void g();
struct A { A(); };
struct B { B() noexcept; };
struct D { D() throw (double); };
the set of potential exceptions for some sample expressions is:
for f(), the set consists of int;
for g(), the set consists of “any”;
for new A, the set consists of “any”;
for B(), the set is empty;
for new D, the set consists of “any” and double.
—end example]
Given a member function f of some class X, where f is an inheriting constructor (12.9 [class.inhctor]) or an implicitly-declared special member function, the set of potential exceptions of the implicitly-declared member function f consists of all the members from the following sets:
if f is a constructor,
the sets of potential exceptions of the constructor invocations
for X's non-variant non-static data members,
for X's direct base classes, and
if X is non-abstract (10.4 [class.abstract]), for X's virtual base classes,
(including default argument expressions used in such invocations) as selected by overload resolution for the implicit definition of f (12.1 [class.ctor]). [Note: Even though destructors for fully-constructed subobjects are invoked when an exception is thrown during the execution of a constructor (15.2 [except.ctor]), their exception-specifications do not contribute to the exception-specification of the constructor, because an exception thrown from such a destructor could never escape the constructor (15.1 [except.throw], 15.5.1 [except.terminate]). —end note]
the sets of potential exceptions of the initialization of non-static data members from brace-or-equal-initializers that are not ignored (12.6.2 [class.base.init]);
if f is an assignment operator, the sets of potential exceptions of the assignment operator invocations for X's non-variant non-static data members and for X's direct base classes (including default argument expressions used in such invocations), as selected by overload resolution for the implicit definition of f (12.8 [class.copy]);
if f is a destructor, the sets of potential exceptions of the destructor invocations for X's non-variant non-static data members and for X's virtual and direct base classes.
Change 15.4 [except.spec] paragraph 14 as follows:
An inheriting constructor (12.9 [class.inhctor]) and an implicitly declared implicitly-declared special member function (Clause 12 [special]) are considered to have an implicit exception-specification, as follows, where f is the member function and S is the set of potential exceptions of the implicitly-declared member function f:.
if S contains the pseudo-type “any”, the implicit exception-specification is noexcept(false);
otherwise, if S contains at least one type, the implicit exception-specification specifies each type T contained in S;
otherwise, the implicit exception-specification is noexcept(true).
If f is an inheriting constructor or an implicitly declared default constructor, copy constructor, move constructor, destructor, copy assignment operator, or move assignment operator, its implicit exception-specification specifies the type-id T if and only if T is allowed by the exception-specification of a function directly invoked by f's implicit definition; f allows all exceptions if any function it directly invokes allows all exceptions, and f has the exception-specification noexcept(true) if every function it directly invokes allows no exceptions. [Note: It follows that f has the exception-specification noexcept(true) if it invokes no other functions. —end note] [Note: An instantiation of an inheriting constructor template has an implied exception-specification as if it were a non-template inheriting constructor. —end note] [Example:
struct A {
  A(int = (A(5), 0)) noexcept;
  A(const A&) throw();
  A(A&&) throw();
  ~A() throw(X);
};
struct B {
  B() throw();
  B(const B&) = default;   // Declaration of B::B(const B&) noexcept(true)
  B(B&&, int = (throw Y(), 0)) throw(Y) noexcept;
  ~B() throw(Y);
};
int n = 7;
struct D : public A, public B {
  int * p = new (std::nothrow) int[n];
  // Implicit declaration of D::D() throw(X, std::bad_array_new_length);
  // Implicit declaration of D::D();
  // Implicit declaration of D::D(const D&) noexcept(true);
  // Implicit declaration of D::D(D&&) throw(Y);
  // Implicit declaration of D::~D() throw(X, Y);
};
Change 5.3.7 [expr.unary.noexcept] paragraph 3 as follows:
The result of the noexcept operator is false true if in a potentially-evaluated context the set of potential exceptions of the expression would contain is empty, and false otherwise.
a potentially-evaluated call83 to a function, member function, function pointer, or member function pointer that does not have a non-throwing exception-specification (15.4 [except.spec]), unless the call is a constant expression (5.19 [expr.const]),
a potentially-evaluated throw-expression (15.1 [except.throw]),
a potentially-evaluated dynamic_cast expression dynamic_cast<T>(v), where T is a reference type, that requires a run-time check (5.2.7 [expr.dynamic.cast]), or
a potentially-evaluated typeid expression (5.2.8 [expr.typeid]) applied to a glvalue expression whose type is a polymorphic class type (10.3 [class.virtual]).
Otherwise, the result is true.
This resolution also resolves issues 1356, 1465, and 1639.
It is unspecified if an implicitly-defined copy assignment operator directly invokes the copy assignment operators of virtual bases. The exception-specification of such a copy assignment operator is thus also unspecified. The specification in 15.4 [except.spec] paragraph 14 should explicitly include the exceptions from the copy assignment operators of virtual base classes, regardless of whether the implicit definition actually invokes the virtual base assignment operators or not.
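A non-normative sketch of the situation (the dynamic-exception-specification here is illustrative only):
struct V {
  V& operator=(const V&) throw(int);   // potentially-throwing
};

struct D : virtual V { };
// Whether the implicit D::operator= directly invokes V::operator= is
// unspecified, so its implicit exception-specification is likewise
// unspecified under the "directly invoked" formulation; the suggestion
// above is to account for V's copy assignment operator regardless.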
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1351.
The current specification is not clear whether the exception-specification for a function is propagated to the result of taking its address. For example:
template<class T> struct A {
  void f() noexcept(false) {}
  void g() noexcept(true) {}
};
int main() {
  if (noexcept((A<short>().*(&A<short>::f))())) return 1;
  if (!noexcept((A<long>().*(&A<long>::g))())) return 1;
  return 0;
}
There is implementation variance on whether main returns 0 or 1 for this example. (It also appears that taking the address of a member function of a class template requires instantiating its exception-specification, but it is not clear whether the Standard currently specifies this or not.)
(See also issues 92 and 1351.)
Proposed resolution (June, 2013):
This issue is resolved by the proposed resolution of issue 1351.
According to 14.5.3 [temp.variadic] paragraph 6, describing an empty pack expansion,
When N is zero, the instantiation of the expansion produces an empty list. Such an instantiation does not alter the syntactic interpretation of the enclosing construct, even in cases where omitting the list entirely would otherwise be ill-formed or would result in an ambiguity in the grammar.
This leaves open the question of whether something like
template<typename...T> void f() throw(T...);
should be considered to have a non-throwing exception-specification when T... is empty. The definition in 15.4 [except.spec] paragraph 12 appears to be syntactic regarding dynamic-exception-specifications:
An exception-specification is non-throwing if it is of the form throw(), noexcept, or noexcept(constant-expression ) where the constant-expression yields true. A function with a non-throwing exception-specification does not allow any exceptions.
It seems evident, however, that a dynamic-exception-specification with an empty pack expansion “does not allow any exceptions.”
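A non-normative sketch of that intent (assuming the resolution below):
template <typename... T> void f() throw(T...);

// With an empty pack the dynamic-exception-specification lists no types, so
// the intent is that f<>() be treated as non-throwing:
static_assert(noexcept(f<>()), "empty pack expansion allows no exceptions");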
Proposed resolution (February, 2014):
Change 15.4 [except.spec] paragraph 12 as follows:
A function with no exception-specification or with an exception-specification of the form noexcept(constant-expression ) where the constant-expression yields false allows all exceptions. An exception-specification is non-throwing if it is of the form throw(), noexcept, or noexcept(constant-expression ) where the a dynamic-exception-specification whose set of adjusted types is empty (after any packs are expanded) or a noexcept-specification whose constant-expression is either absent or yields true. A function with a non-throwing exception-specification does not allow any exceptions.
Sections 14.7.2 [temp.explicit] and 14.7.3 [temp.expl.spec] describe cases of explicit instantiation directives and explicit specializations, respectively, that are not definitions. However, the description in 3.1 [basic.def] does not include these distinctions, classifying all declarations other than those listed as definitions. These should be harmonized.
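A non-normative sketch of the declarations in question:
template <typename T> struct S { void f(); };
template <typename T> void S<T>::f() { }

extern template struct S<int>;      // explicit instantiation declaration: not a definition
template <> void S<char>::f();      // declaration of an explicit specialization: not a definition
template <> void S<char>::f() { }   // this explicit specialization is a definition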
Proposed Resolution (July, 2014):
Change 3.1 [basic.def] paragraph 2 as follows:
A declaration is a definition unless it... an empty-declaration (Clause 7 [dcl.dcl]), or a using-directive (7.3.4 [namespace.udir]), an explicit instantiation declaration (14.7.2 [temp.explicit]), or an explicit specialization (14.7.3 [temp.expl.spec]) whose declaration is not a definition.
According to 5.1.2 [expr.prim.lambda] paragraph 20,
The closure type associated with a lambda-expression has a deleted (8.4.3 [dcl.fct.def.delete]) default constructor and a deleted copy assignment operator. It has an implicitly-declared copy constructor (12.8 [class.copy]) and may have an implicitly-declared move constructor (12.8 [class.copy]).
However, according to 12.8 [class.copy] paragraph 9,
If the definition of a class X does not explicitly declare a move constructor, one will be implicitly declared as defaulted if and only if
X does not have a user-declared copy constructor,
X does not have a user-declared copy assignment operator,
X does not have a user-declared move assignment operator, and
X does not have a user-declared destructor.
It is not clear how this applies to the closure class. Would it be better to state that the closure class has a defaulted move constructor and a defaulted move assignment operator? There is already wording that handles the case if they are ultimately defined as deleted.
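A non-normative sketch of how the closure type's special member functions are used (assuming the clarified wording below):
#include <string>
#include <utility>

int main() {
  std::string s = "hello";
  auto lam = [s] { return s.size(); };

  auto copied = lam;              // copy constructor of the closure type
  auto moved = std::move(lam);    // move constructor whose status is clarified above
  // lam = copied;                // ill-formed: copy assignment remains deleted
  (void)copied; (void)moved;
}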
Proposed resolution (October, 2014):
Change 5.1.2 [expr.prim.lambda] paragraph 20 as follows:
The closure type associated with a lambda-expression has a deleted (8.4.3 [dcl.fct.def.delete]) no default constructor and a deleted copy assignment operator. It has an implicitly-declared a defaulted copy constructor (12.8 [class.copy]) and may have an implicitly-declared and a defaulted move constructor (12.8 [class.copy]). [Note: The copy/move constructor is implicitly defined in the same way as any other implicitly declared copy/move constructor would be implicitly defined These special member functions are implicitly defined as usual, and might therefore be defined as deleted. —end note]
The intent is that a function call is a temporary expression whose result is a temporary, but that appears not to be said anywhere. It should also be clarified that a return statement in a function with a class return type copy-initializes the temporary that is the result. The sequencing of the initialization of the returned temporary, destruction of temporaries in the return expression, and destruction of automatic variables should be made explicit.
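For illustration, a non-normative sketch of the sequencing being made explicit (Tracer is a hypothetical type with an observable destructor):
#include <string>

struct Tracer {
  std::string name;
  ~Tracer() { /* observable effects assumed for this illustration */ }
};

Tracer make(const std::string& base) {
  Tracer local{base + "-local"};
  // Under the clarified wording, the returned object is copy-initialized from
  // the operand, then the temporaries of the operand's full-expression are
  // destroyed, and only then the local variables of the enclosing block
  // (here, "local").
  return Tracer{base + "-result"};
}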
Proposed resolution (October, 2014):
Change 6.6.3 [stmt.return] paragraphs 2-3 as follows:
A return statement with neither an expression nor a braced-init-list can be used only in functions that do not return a value, that is, The expression or braced-init-list of a return statement is called its operand. A return statement with no operand shall be used only in a function with the whose return type is cv void, a constructor (12.1 [class.ctor]), or a destructor (12.4 [class.dtor]). A return statement with an operand of type void shall be used only in a function whose return type is cv void. A return statement with an expression of non-void type can be used only any other operand shall be used only in functions returning a value; the value of the expression is returned to the caller of the function. The value of the expression is implicitly converted to the return type of the function in which it appears a function whose return type is not cv void; the return statement initializes the object or reference to be returned by copy-initialization (8.5 [dcl.init]) from the operand. [Note: A return statement can involve the construction and copy or move of a temporary object (12.2 [class.temporary]). [Note: A copy or move operation associated with a return statement may be elided or considered as an rvalue for the purpose of overload resolution in selecting a constructor (12.8 [class.copy]). —end note] A return statement with a braced-init-list initializes the object or reference to be returned from the function by copy-list-initialization (8.5.4 [dcl.init.list]) from the specified initializer list. [Example:
std::pair<std::string,int> f(const char* p, int x) {
  return {p,x};
}
—end example] Flowing off the end of a function is equivalent to a return with no value; this results in undefined behavior in a value-returning function.
A return statement with an expression of type void can be used only in functions with a return type of cv void; the expression is evaluated just before the function returns to its caller. The copy-initialization of the returned entity is sequenced before the destruction of temporaries at the end of the full-expression established by the operand of the return statement, which, in turn, is sequenced before the destruction of local variables (6.6 [stmt.jump]) of the block enclosing the return statement.
(See also the related changes in the resolution of issue 1299.)
The changes from N3778 require use of a sized deallocator for a case like
char *p = new char[32];
void f() {
  delete [] p;
}
That is unimplementable under current ABIs, which do not store the array size for such allocations. It should instead be unspecified or implementation-defined whether the sized form of operator delete[] is used for a pointer to a type other than a class with a non-trivial destructor or array thereof.
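For reference, the two forms of usual array deallocation function involved are sketched below (declarations only; not part of the proposed wording):

#include <cstddef>

void operator delete[](void* ptr) noexcept;                    // single-parameter form
void operator delete[](void* ptr, std::size_t size) noexcept;  // sized form added by N3778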
Proposed resolution (February, 2014) [SUPERSEDED]:
Change 5.3.5 [expr.delete] paragraph 10 as follows:
If the type is complete and if deallocation function lookup finds both a usual deallocation function with only a pointer parameter and a usual deallocation function with both a pointer parameter and a size parameter, then
- for the first alternative (delete object), if the type of the object to be deleted is complete, and for the second alternative (delete array), if the type of the object to be deleted is a complete class type with a non-trivial destructor, then the selected deallocation function shall be the one with two parameters;
otherwise, it is implementation-defined which deallocation function is selected.
Otherwise, the selected deallocation function shall be the function with one parameter.
Additional note, February, 2014:
It is not clear that this resolution accurately reflects the intent of the issue. In particular, it changes deletion of a pointer to incomplete type from requiring use of the single-parameter version to being implementation-defined. Also, the “type of the object to be deleted” in the array case is always an array type and thus cannot be “a complete class type with a non-trivial destructor.” The issue has consequently been returned to "review" status.
Proposed resolution (June, 2014):
Change 5.3.5 [expr.delete] paragraph 10 as follows:
If the type is complete and if deallocation function lookup finds both a usual deallocation function with only a pointer parameter and a usual deallocation function with both a pointer parameter and a size parameter, then the selected deallocation function shall be the one with two parameters. Otherwise, the selected deallocation function shall be the function with one parameter. the function to be called is selected as follows:
If the type is complete and if, for the second alternative (delete array) only, the operand is a pointer to a class type with a non-trivial destructor or a (possibly multi-dimensional) array thereof, the function with two parameters is selected.
Otherwise, it is unspecified which of the two deallocation functions is selected.
The resolution of issue 1504 added 5.7 [expr.add] paragraph 7:
For addition or subtraction, if the expressions P or Q have type “pointer to cv T”, where T is different from the cv-unqualified array element type, the behavior is undefined.
This wording was intended to address derived-base conversion in pointer arithmetic, but it inadvertently categorized as undefined behavior previously well-defined pointer arithmetic on pointers that are the result of multi-level qualification conversions. For example:
#include <cassert>

void f() {
  int i = 0;
  int* arr[3] = {&i, &i, &i};
  int const * const * aptr = arr;
  assert(aptr[2] == &i);
}
This now has undefined behavior because the type of *aptr is “pointer to const int,” which is different from the cv-unqualified array element type, “pointer to int.”
See also issue 330.
Proposed Resolution (July, 2014):
Change 5.7 [expr.add] paragraph 7 as follows:
For addition or subtraction, if the expressions P or Q have type “pointer to cv T”, where T is different from the cv-unqualified and the array element type are not similar (4.4 [conv.qual]), the behavior is undefined. [Note: In particular, a pointer to a base class cannot be used for pointer arithmetic when the array contains objects of a derived class type. —end note]
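For illustration only, a sketch of the case the added note addresses; the class names are invented:

struct Base { int b; };
struct Derived : Base { int d; };

void g() {
  Derived arr[3] = {};
  Base* p = arr;   // points to the Base subobject of arr[0]
  ++p;             // undefined behavior: Base and the element type Derived are not similar
}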
Comparison of pointers to members is currently specified in 5.10 [expr.eq] paragraph 3 as,
two pointers to members compare equal if they would refer to the same member of the same most derived object (1.8 [intro.object]) or the same subobject if indirection with a hypothetical object of the associated class type were performed, otherwise they compare unequal.
The “same member” requirement could be interpreted as incorrect for union members. The wording should be clarified in this regard.
Proposed Resolution (July, 2014):
Insert the following before bullet 5 of 5.10 [expr.eq] paragraph 3:
...
If both refer to (possibly different) members of the same union (9.5 [class.union]), they compare equal.
Otherwise, two pointers to members compare equal if...
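For illustration only, a sketch of the union case the inserted bullet covers; the names are invented:

union U1 {
  int a;
  int b;
};

int U1::*pa = &U1::a;
int U1::*pb = &U1::b;
// Under the inserted bullet, pa == pb is true: both refer to (possibly different)
// members of the same union, and all such members have the same address.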
The Standard currently appears to allow something like
struct S { template<class T> operator auto() { return 42; } };
This is of very limited utility and presents difficulties for some implementations. It might be good to prohibit such constructs.
Proposed resolution (October, 2014):
Add the following as the last paragraph of 12.3.2 [class.conv.fct]:
A conversion function template shall not have a deduced return type (7.1.6.4 [dcl.spec.auto]).
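For illustration only, a sketch of the effect of the added paragraph; the class names are invented:

struct S1 {
  operator auto() { return 42; }                     // still OK: not a template
};

struct S2 {
  template<class T> operator auto() { return 42; }   // ill-formed under the added paragraph
};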
According to 7.3.1 [namespace.def] paragraph 2,
The identifier in an original-namespace-definition shall not have been previously defined in the declarative region in which the original-namespace-definition appears.
Apparently the intent of this requirement is to say that, given the declarations
namespace N { } namespace N { }
the second declaration is to be taken as an extension-namespace-definition and not an original-namespace-definition, since the general rules in 3.3.1 [basic.scope.declarative] cover the case in which the identifier has been previously declared as something other than a namespace.
This use of “shall” for disambiguation is novel, however, and it would be better to replace it with a specific statement addressing disambiguation in paragraphs 2 and 3.
Proposed Resolution (July, 2014):
Change 3.3.6 [basic.scope.namespace] paragraph 1 as follows:
The declarative region of a namespace-definition is its namespace-body. The potential scope denoted by an original-namespace-name is the concatenation of the declarative regions established by each of the namespace-definitions in the same declarative region with that original-namespace-name. Entities declared in a namespace-body...
Change 7.3.1 [namespace.def] paragraphs 1-4 as follows:
The grammar for a namespace-definition is
namespace-name:
original-namespace-name identifier
namespace-alias
original-namespace-name:
identifier
namespace-definition:
named-namespace-definition
unnamed-namespace-definition
named-namespace-definition:
original-namespace-definition
extension-namespace-definition
original-namespace-definition:
inlineopt namespace identifier { namespace-body }
extension-namespace-definition:
inlineopt namespace original-namespace-name { namespace-body }
unnamed-namespace-definition:
inlineopt namespace { namespace-body }
namespace-body:
declaration-seqopt
The identifier in an original-namespace-definition shall not have been previously defined in the declarative region in which the original-namespace-definition appears. The identifier in an original-namespace-definition is the name of the namespace. Subsequently in that declarative region, it is treated as an original-namespace-name.
The original-namespace-name in an extension-namespace-definition shall have previously been defined in an original-namespace-definition in the same declarative region.
Every namespace-definition shall appear in the global scope or in a namespace scope (3.3.6 [basic.scope.namespace]).
In a named-namespace-definition, the identifier is the name of the namespace. If the identifier, when looked up (3.4.1 [basic.lookup.unqual]), refers to a namespace-name (but not a namespace-alias) introduced in the declarative region in which the named-namespace-definition appears, the namespace-definition extends the previously-declared namespace. Otherwise, the identifier is introduced as a namespace-name into the declarative region in which the named-namespace-definition appears.
Change 7.3.1 [namespace.def] paragraph 7 as follows:
If the optional initial inline keyword appears in a namespace-definition for a particular namespace, that namespace is declared to be an inline namespace. The inline keyword may be used on an extension-namespace-definition a namespace-definition that extends a namespace only if it was previously used on the original-namespace-definition namespace-definition that initially declared the namespace-name for that namespace.
Delete 7.3.2 [namespace.alias] paragraph 4:
A namespace-name or namespace-alias shall not be declared as the name of any other entity in the same declarative region. A namespace-name defined at global scope shall not be declared as the name of any other entity in any global scope of the program. No diagnostic is required for a violation of this rule by declarations in different translation units.
Change 7.3.4 [namespace.udir] paragraph 5 as follows:
If a namespace is extended by an extension-namespace-definition after a using-directive for that namespace is given, the additional members of the extended namespace and the members of namespaces nominated by using-directives in the extension-namespace-definition extending namespace-definition can be used after the extension-namespace-definition extending namespace-definition.
According to 7.3.1.2 [namespace.memdef] paragraphs 1 and 2,
Members (including explicit specializations of templates (14.7.3 [temp.expl.spec])) of a namespace can be defined within that namespace.
Members of a named namespace can also be defined outside that namespace by explicit qualification (3.4.3.2 [namespace.qual]) of the name being defined, provided that the entity being defined was already declared in the namespace and the definition appears after the point of declaration in a namespace that encloses the declaration's namespace.
It is not clear what these specifications mean for the following pair of examples:
namespace N {
  struct A;
}
using N::A;
struct A { };
Although this does not satisfy the “by explicit qualification” requirement, it is accepted by major implementations.
struct S;
namespace A {
  using ::S;
  struct S { };
}
Is this a definition “within that namespace,” or should that wording be interpreted as “directly within” the namespace?
See also issue 1838.
Proposed Resolution (July, 2014):
This issue is resolved by the resolution of issue 1838.
The Standard is not clear about what happens when an entity is declared but not defined in an inner namespace and declared via a using-declaration in an outer namespace, and a definition of an entity with that name as an unqualified-id appears in the outer namespace. Is this a legitimate definition of the inner-namespace entity, as it would be if the definition used a qualified-id, or is the definition a member of the outer namespace and thus in conflict with the using-declaration? There is implementation divergence on the treatment of such definitions.
See also issues 1708 and 1021.
Notes from the February, 2014 meeting:
CWG agreed that the definition in such cases is a member of the outer namespace, not a redeclaration of the name introduced in that namespace by the using-declaration.
Proposed Resolution (July, 2014):
Change 7.3.1.2 [namespace.memdef] paragraph 1 as follows:
Members (including explicit specializations of templates (14.7.3 [temp.expl.spec])) of a namespace can be defined within that namespace. A declaration in a namespace N (excluding declarations in nested scopes) whose declarator-id is an unqualified-id declares (or redeclares) a member of N, and may be a definition. [Note: An explicit instantiation (14.7.2 [temp.explicit]) or explicit specialization (14.7.3 [temp.expl.spec]) of a template does not introduce a name and thus may be declared using an unqualified-id in a member of the enclosing namespace set, if the primary template is declared in an inline namespace. —end note] [Example:
namespace X {
  void f() { /* ... */ }   // OK: introduces X::f()
  namespace M {
    void g();              // OK: introduces X::M::g()
  }
  using M::g;
  void g();                // error: conflicts with X::M::g()
}
—end example]
Change 7.3.1.2 [namespace.memdef] paragraph 3 as follows:
Every name first declared in a namespace is a member of that namespace. If a friend declaration...
This resolution also resolves issue 1021.
Issue 1411 added :: as a production for nested-name-specifier. However, the grammar for using-declarations should have been updated but was overlooked:
In addition, there is some verbiage in 3.4.3.2 [namespace.qual] paragraph 1 and 7.3.3 [namespace.udecl] paragraph 9 that should probably be revised.
Proposed resolution (October, 2014):
Change the grammar in 7.3.3 [namespace.udecl] paragraph 1 as follows:
Change 3.4.3.2 [namespace.qual] paragraph 1 as follows:
If the nested-name-specifier of a qualified-id nominates a namespace (including the case where the nested-name-specifier is ::, i.e., nominating the global namespace), the name specified after the nested-name-specifier is looked up in the scope of the namespace. If a qualified-id starts with ::, the name after the :: is looked up in the global namespace. In either case, the The names in a template-argument of a template-id are looked up in the context in which the entire postfix-expression occurs.
Change 5.1.1 [expr.prim.general] paragraph 10 as follows:
A ::, or a The nested-name-specifier :: names the global namespace. A nested-name-specifier that names a namespace (7.3 [basic.namespace]), in either case followed by the name of a member of that namespace (or the name of a member of a namespace made visible by a using-directive), is a qualified-id; 3.4.3.2 [namespace.qual] describes name lookup for namespace members that appear in qualified-ids. The result is...
Change 7.3.3 [namespace.udecl] paragraph 9 as follows:
Members declared by a using-declaration can be referred to by explicit qualification just like other member names (3.4.3.2 [namespace.qual]). In a using-declaration, a prefix :: refers to the global namespace. [Example:
It is unclear whether code like the following is supposed to be supported or not:
#include <iostream>
#include <type_traits>

#define ENABLE_IF(...) \
  typename std::enable_if<__VA_ARGS__, int>::type = 0
#define PRINT_VALUE(...) \
  std::cout << #__VA_ARGS__ " = " << __VA_ARGS__ << std::endl

struct undefined {};

template <class T>
undefined special_default_value(T *);

template <class T>
struct has_special_default_value
  : std::integral_constant<
      bool,
      !std::is_same<decltype(special_default_value((T *)0)), undefined>{}
    > {};

template <class T>
struct X {
  template <class U = T, ENABLE_IF(!has_special_default_value<U>{})>
  X() : value() {}
  template <class U = T, ENABLE_IF(has_special_default_value<U>{})>
  X() : value(special_default_value((T *)0)) {}
  T value;
};

enum E { e1 = 1, e2 = 2 };
E special_default_value(E *) { return e1; }

int main() {
  X<int> x_int;
  X<E> x_E;
  PRINT_VALUE(x_int.value);
  PRINT_VALUE(x_E.value);
  PRINT_VALUE(X<int>().value);
  PRINT_VALUE(X<E>().value);
}
The intent is that X<int> should call the first default constructor and X<E> should call the second.
If this is intended to work, the rules for making it do so are not clear; current wording reads as if a class can have only a single default constructor, and there appears to be no mechanism for using overload resolution to choose between variants.
Proposed resolution (June, 2014):
Change 3.2 [basic.def.odr] paragraph 3 as follows:
...An assignment operator function in a class is odr-used by an implicitly-defined copy-assignment or move-assignment function for another class as specified in 12.8 [class.copy]. A default constructor for a class is odr-used by default initialization or value initialization as specified in 8.5 [dcl.init]. A constructor for a class is odr-used as specified in 8.5 [dcl.init]. A destructor for a class is odr-used if it is potentially invoked (12.4 [class.dtor]).
Change 8.5 [dcl.init] paragraph 7 as follows:
To default-initialize an object of type T means:
if If T is a (possibly cv-qualified) class type (Clause 9 [class]), the default constructor (12.1 [class.ctor]) for T is called (and the initialization is ill-formed if T has no default constructor or overload resolution (13.3 [over.match]) results in an ambiguity or in a function that is deleted or inaccessible from the context of the initialization); constructors are considered. The applicable constructors are enumerated (13.3.1.3 [over.match.ctor]), and the best one for the initializer () is chosen through overload resolution (13.3 [over.match]). The constructor thus selected is called, with an empty argument list, to initialize the object.
if If T is an array type, each element is default-initialized;.
otherwise Otherwise, no initialization is performed.
Change 12.1 [class.ctor] paragraph 4 as follows:
A default constructor for a class X is a constructor of class X that can be called without an argument either has no parameters or else each parameter that is not a function parameter pack has a default argument. If there is no user-declared constructor...
Change 13.3 [over.match] paragraph 2 bullet 4 as follows:
Overload resolution selects the function to call in seven distinct contexts within the language:
...
invocation of a constructor for default- or direct-initialization (8.5 [dcl.init]) of a class object (13.3.1.3 [over.match.ctor]);
...
Change 13.3.1.3 [over.match.ctor] paragraph 1 as follows:
When objects of class type are direct-initialized (8.5 [dcl.init]), or copy-initialized from an expression of the same or a derived class type (8.5 [dcl.init]), or default-initialized, overload resolution selects the constructor. For direct-initialization or default-initialization, the candidate functions are all the constructors of the class of the object being initialized. For copy-initialization, the candidate functions are all the converting constructors (12.3.1 [class.conv.ctor]) of that class. The argument list is the expression-list or assignment-expression of the initializer.
With the recent addition of brace-or-equal-initializers to aggregates and the presumed resolution for issue 1696, it is not clear how lifetime extension of temporaries should work in aggregate initialization. For example:
struct A { };
struct B { A&& a { A{} }; };
B b;          // #1
B b{ A{} };   // #2
B b{};        // #3
#1 is default initialization, so (presumably) the lifetime of the temporary persists only until B's default constructor exits. #2 is aggregate initialization, which binds B::a to the temporary in the initializer for b and thus extends its lifetime to that of b. #3 is aggregate initialization, but it is not clear whether the lifetime of the temporary in the non-static data member initializer for B::a should be lifetime-extended like #2 or not, like #1.
One possibility might be to extend the lifetime in #3 but to give B a deleted default constructor since it would extend the lifetime of a temporary.
See also issue 1696.
Notes from the February, 2014 meeting:
CWG agreed with the suggested direction, which would treat #3 in the example like #2 and make the default constructor deleted, resulting in #1 being ill-formed.
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1696.
One of the criteria for a standard-layout class in 9 [class] paragraph 7 is:
either has no non-static data members in the most derived class and at most one base class with non-static data members, or has no base classes with non-static data members,
In an example like
struct B { int i; };
struct C : B { };
struct D : C { };
this could be read as indicating that D is not a standard-layout class, since it has two base classes, one direct and one indirect, that each have a non-static data member. The intent should be clarified.
See also issue 1881 for a related question about standard-layout classes.
Proposed resolution (June, 2014):
Change 9 [class] paragraph 7 as follows:
A standard-layout class is a class that:
has no non-static data members of type non-standard-layout class (or array of such types) or reference,
has no virtual functions (10.3 [class.virtual]) and no virtual base classes (10.1 [class.mi]),
has the same access control (Clause 11 [class.access]) for all non-static data members,
has no non-standard-layout base classes,
has at most one base class subobject of any given type,
either has no non-static data members in the most derived class and at most one base class with non-static data members, or has no base classes with non-static data members has all non-static data members and bit-fields in the class and its base classes first declared in the same class, and
has no base classes of the same type as the first non-static data member.109
[Example:
struct B { int i; };          // standard-layout class
struct C : B { };             // standard-layout class
struct D : C { };             // standard-layout class
struct E : D { char : 4; };   // not a standard-layout class
struct Q {};
struct S : Q { };
struct T : Q { };
struct U : S, T { };          // not a standard-layout class
—end example]
This resolution also resolves issue 1881.
(See also the related changes in the resolution of issue 1672.)
According to 9.6 [class.bit] paragraph 2,
Unnamed bit-fields are not members and cannot be initialized.
However, the rules defining standard-layout classes in 9 [class] paragraph 7 do not account for the fact that a class containing an unnamed bit-field has associated storage.
See also issue 1813 for a related question about standard-layout classes.
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1813.
The layout compatibility rules of 9.2 [class.mem] paragraph 16 are phrased only in terms of non-static data members, ignoring the existence of base classes:
Two standard-layout struct (Clause 9 [class]) types are layout-compatible if they have the same number of non-static data members and corresponding non-static data members (in declaration order) have layout-compatible types (3.9 [basic.types]).
However, this means that in an example like
struct empty {};
struct A { char a; };
struct also_empty : empty {};
struct C : empty, also_empty { char c; };

union U {
  struct X { A a1, a2; } x;
  struct Y { C c1, c2; } y;
} u;
u.x.a2.a and u.y.c2.c must have the same address, even though sizeof(A) would typically be 1 and sizeof(C) would need to be at least 2 to give the empty subobjects different addresses.
Proposed resolution (October, 2014):
Change 9 [class] paragraph 7 as indicated and add the following as a new paragraph:
A class S is a standard-layout class is a class that if it:
...
has no base classes of the same type as the first non-static data member element of the set M(S) of types (defined below) as a base class.109
M(X) is defined as follows:
If X is a non-union class type, the set M(X) is empty if X has no (possibly inherited (Clause 10 [class.derived])) non-static data members; otherwise, it consists of the type of the first non-static data member of X (where said member may be an anonymous union), X0, and the elements of M(X0).
If X is a union type, the set M(X), where each Ui is the type of the ith non-static data member of X, is the union of all M(Ui) and the set containing all Ui.
If X is a non-class type, the set M(X) is empty.
[Note: M(X) is the set of the types of all non-base-class subobjects that are guaranteed in a standard-layout class to be at a zero offset in X. —end note]
(See also the related changes in the resolution for issue 1813.)
When the effect of cv-qualification on layout compatibility was previously discussed (see issue 1334), the question was resolved by reference to the historical origin of layout compatibility: it was a weakening of type correctness that was added for C compatibility, mimicking exactly the corresponding C specification of compatible types in this context and going no further. Because cv-qualified and cv-unqualified types are not compatible in C, they were not made layout-compatible in C++.
Because of specific use-cases involving std::pair and the like, however, and in consideration of the fact that cv-qualified and cv-unqualified versions of types are aliasable by the rules of 3.10 [basic.lval], the outcome of that question is worthy of reconsideration.
Proposed resolution (June, 2014):
Change 3 [basic] paragraph 3 as follows:
An entity is a value, object, reference, function, enumerator, type, class member, bit-field, template, template specialization, namespace, parameter pack, or this.
Change 3.9 [basic.types] paragraph 11 as follows:
If two types T1 and T2 are the same type, then T1 and T2 Two types cv1 T1 and cv2 T2 are layout-compatible types if T1 and T2 are the same type, layout-compatible enumerations (7.2 [dcl.enum]), or layout-compatible standard-layout class types (9.2 [class.mem]). [Note: Layout-compatible enumerations are described in 7.2 [dcl.enum]. Layout-compatible standard-layout structs and standard-layout unions are described in 9.2 [class.mem]. —end note]
Change 3.9.2 [basic.compound] paragraph 3 as follows:
...The value representation of pointer types is implementation-defined. Pointers to cv-qualified and cv-unqualified versions (3.9.3 [basic.type.qualifier]) of layout-compatible types shall have the same value representation and alignment requirements (3.11 [basic.align]). [Note:...
Insert the following as a new paragraph before 9.2 [class.mem] paragraph 16 and change paragraphs 16 through 18 as follows:
The common initial sequence of two standard-layout struct (Clause 9 [class]) types is the longest sequence of non-static data members and bit-fields in declaration order, starting with the first such entity in each of the structs, such that corresponding entities have layout-compatible types and either neither entity is a bit-field or both are bit-fields with the same width. [Example:
struct A { int a; char b; };
struct B { const int b1; volatile char b2; };
struct C { int c; unsigned : 0; char b; };
struct D { int d; char b : 4; };
struct E { unsigned int e; char b; };
The common initial sequence of A and B comprises all members of either class. The common initial sequence of A and C and of A and D comprises the first member in each case. The common initial sequence of A and E is empty. —end example]
Two standard-layout struct (Clause 9 [class]) types are layout-compatible if they have the same number of non-static data members and corresponding non-static data members (in declaration order) have layout-compatible types their common initial sequence comprises all members and bit-fields of both classes (3.9 [basic.types]).
Two standard-layout union (Clause 9 [class]) types unions are layout-compatible if they have the same number of non-static data members and corresponding non-static data members (in any order) have layout-compatible types (3.9 [basic.types]).
If a standard-layout union contains two or more standard-layout structs that share a common initial sequence, and if the standard-layout union object currently contains one of these standard-layout structs, it is permitted to inspect the common initial part of any of them. Two standard-layout structs share a common initial sequence if corresponding members have layout-compatible types and either neither member is a bit-field or both are bit-fields with the same width for a sequence of one or more initial members. In a standard-layout union with an active member (9.5 [class.union]) of struct type T1, it is permitted to read a non-static data member m of another union member of struct type T2 provided m is part of the common initial sequence of T1 and T2. [Note: Reading a volatile object through a non-volatile glvalue has undefined behavior (7.1.6.1 [dcl.type.cv]). —end note]
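For illustration only, a sketch of the permission granted by the last paragraph above, reusing the A and B of the example (under the proposed wording their common initial sequence comprises all members of either class); the union and variable names are invented:

struct A { int a; char b; };
struct B { const int b1; volatile char b2; };

union AB { A a; B b; };

AB u = { { 1, 'x' } };   // u.a is the active member
int x = u.b.b1;          // OK under the proposed wording: b1 is part of the common
                         // initial sequence of A and B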
The list in 9.2 [class.mem] paragraph 14 of the kinds of class members whose names must differ from that of the class does not include an entry for a member class template. Presumably it should.
Proposed resolution (October, 2014):
Change 9.2 [class.mem] paragraph 14 as follows:
If T is the name of a class, then each of the following shall have a name different from T:
...
every member of class T that is itself a type;
every member template of class T; ...
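For illustration only, a sketch of the kind of declaration the added bullet presumably rules out; the names are invented:

struct C1 {
  template<class T> struct C1 { };   // ill-formed under the added bullet: member
                                     // template with the same name as its class
};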
Presumably a temporary bound to a reference in a non-static data member initializer should be treated analogously with what happens in a ctor-initializer, but the current wording of 12.2 [class.temporary] paragraph 5 is not clear on this point.
See also issue 1815 for similar questions regarding aggregate initialization.
Proposed resolution (June, 2014):
Add the following after 8.5.1 [dcl.init.aggr] paragraph 7:
If a reference member is initialized from its brace-or-equal-initializer and a potentially-evaluated subexpression thereof is an aggregate initialization that would use that brace-or-equal-initializer, the program is ill-formed. [Example:
struct A;
extern A a;
struct A {
  const A& a1 { A{a,a} };   // OK
  const A& a2 { A{} };      // error
};
A a{a,a};                   // OK
If an aggregate class C contains a subaggregate...
Delete the first bullet of 12.2 [class.temporary] paragraph 5:
The second context is when a reference is bound to a temporary.117 The temporary to which the reference is bound or the temporary that is the complete object of a subobject to which the reference is bound persists for the lifetime of the reference except:
A temporary bound to a reference member in a constructor's ctor-initializer (12.6.2 [class.base.init]) persists until the constructor exits.
...
Insert the following as a new paragraph after 12.6.2 [class.base.init] paragraph 7:
A temporary expression bound to a reference member in a mem-initializer is ill-formed. [Example:
struct A {
  A() : v(42) { }   // error
  const int& v;
};
—end example]
In a non-delegating constructor, if a given potentially constructed subobject...
Insert the following as a new paragraph after 12.6.2 [class.base.init] paragraph 9:
A temporary expression bound to a reference member from a brace-or-equal-initializer is ill-formed. [Example:
struct A {
  A() = default;        // OK
  A(int v) : v(v) { }   // OK
  const int& v = 42;    // OK
};
A a1;      // error: ill-formed binding of temporary to reference
A a2(1);   // OK, unfortunately
—end example]
In a non-delegating constructor, the destructor for each potentially constructed subobject...
This resolution also resolves issue 1815.
According to 12.4 [class.dtor] paragraph 12,
At the point of definition of a virtual destructor (including an implicit definition (12.8 [class.copy])), the non-array deallocation function is looked up in the scope of the destructor's class (10.2 [class.member.lookup]), and, if no declaration is found, the function is looked up in the global scope. If the result of this lookup is ambiguous or inaccessible, or if the lookup selects a placement deallocation function or a function with a deleted definition (8.4 [dcl.fct.def]), the program is ill-formed. [Note: This assures that a deallocation function corresponding to the dynamic type of an object is available for the delete-expression (12.5 [class.free]). —end note]
This specification is not sufficiently clear regarding the nature of the lookup. Presumably the intent is that the processing be parallel to that described in 5.3.5 [expr.delete], but that should be made explicit.
Proposed resolution (February, 2014):
Change 12.4 [class.dtor] paragraph 12 as follows:
At the point of definition of a virtual destructor (including an implicit definition (12.8 [class.copy])), the non-array deallocation function is looked up in the scope of the destructor's class (10.2 [class.member.lookup]), and, if no declaration is found, the function is looked up in the global scope determined as if for the expression delete this appearing in a non-virtual destructor of the destructor's class (see 5.3.5 [expr.delete]). If the result of this lookup is ambiguous or inaccessible, lookup fails or if the lookup selects a placement deallocation function or a function with deallocation function has a deleted definition (8.4 [dcl.fct.def]), the program is ill-formed. [Note: This assures that a deallocation function corresponding to the dynamic type of an object is available for the delete-expression (12.5 [class.free]). —end note]
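For illustration only, a sketch of the situation the wording addresses; the class name is invented:

struct D1 {
  void operator delete(void*) = delete;
  virtual ~D1() { }   // ill-formed: the deallocation function selected as if for
                      // "delete this" has a deleted definition
};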
The description of the required syntax for declaring a destructor in 12.4 [class.dtor] paragraph 1 says,
A declaration of a destructor uses a function declarator (8.3.5 [dcl.fct]) of the form
ptr-declarator ( parameter-declaration-clause ) exception-specificationopt attribute-specifier-seqopt
where the ptr-declarator...
A declaration such as
(~S())
arguably “uses” a declarator of the required form, since the cited wording does not forbid placing a declarator of that form inside parentheses. (Similar considerations apply to the syntax of constructors in 12.1 [class.ctor] paragraph 1.) There is implementation divergence on this point. The wording should be clarified as to whether parentheses surrounding a declarator of the required form are permitted or not.
Proposed Resolution (July, 2014):
Change 12.1 [class.ctor] paragraph 1 as follows:
Constructors do not have names. A In a declaration of a constructor, uses the declarator is a function declarator (8.3.5 [dcl.fct]) of the form...
Change 12.4 [class.dtor] paragraph 1 as follows:
A In a declaration of a destructor, uses the declarator is a function declarator (8.3.5 [dcl.fct]) of the form...
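For illustration only, a sketch of the parenthesized declarator in question (on which implementations currently diverge); under the revised wording the extra parentheses presumably do not match the required form:

struct S {
  (~S());   // is this a declaration of the destructor?
};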
According to 13.3.3.1 [over.best.ics] paragraph 9,
If no sequence of conversions can be found to convert an argument to a parameter type or the conversion is otherwise ill-formed, an implicit conversion sequence cannot be formed.
However, compare this with 13.3.3.1 [over.best.ics] paragraph 2:
Implicit conversion sequences are concerned only with the type, cv-qualification, and value category of the argument and how these are converted to match the corresponding properties of the parameter. Other properties, such as the lifetime, storage class, alignment, or accessibility of the argument and whether or not the argument is a bit-field are ignored. So, although an implicit conversion sequence can be defined for a given argument-parameter pair, the conversion from the argument to the parameter might still be ill-formed in the final analysis.
It is not clear what cases are in view in paragraph 9.
Proposed resolution (October, 2014):
Change 13.3.3.1 [over.best.ics] paragraph 2 as follows:
Implicit conversion sequences are concerned only with the type, cv-qualification, and value category of the argument and how these are converted to match the corresponding properties of the parameter. Other properties, such as the lifetime, storage class, alignment, or accessibility of the argument, and whether or not the argument is a bit-field, and whether a function is deleted (8.4.3 [dcl.fct.def.delete]), are ignored. So, although an implicit conversion sequence can be defined for a given argument-parameter pair, the conversion from the argument to the parameter might still be ill-formed in the final analysis.
Change 13.3.3.1 [over.best.ics] paragraph 9 as follows:
If no sequence of conversions can be found to convert an argument to a parameter type or the conversion is otherwise ill-formed, an implicit conversion sequence cannot be formed.
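For illustration only, a sketch of the consequence of ignoring deletedness when forming an implicit conversion sequence; the names are invented:

struct Z {
  Z(double) = delete;
  Z(int);
};

void callee(Z);

void caller() {
  callee(3.14);   // ill-formed: Z(double) is selected by overload resolution
                  // (exact match for the argument) even though it is deleted
}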
The Standard is not clear enough that a template parameter like class T is to be interpreted as a type parameter and not an ill-formed non-type parameter of class type T.
Proposed resolution (October, 2014):
Change 14.1 [temp.param] paragraph 2 as follows, moving the example from paragraph 3 to paragraph 2:
There is no semantic difference between class and typename in a template-parameter. typename followed by an unqualified-id names a template type parameter. typename followed by a qualified-id denotes the type in a non-type137 parameter-declaration. A template-parameter of the form class identifier is a type-parameter. [Example:
class T { /* ... */ };
int i;

template<class T, T i> void f(T t) {
  T t1 = i;       // template-parameters T and i
  ::T t2 = ::i;   // global namespace members T and i
}
Here, the template f has a type-parameter called T, rather than an unnamed non-type template-parameter of class T. —end example]. A storage class shall not be specified in a template-parameter declaration. Types shall not be defined in a template-parameter declaration. [Note: A template parameter may be a class template. For example... —end note]
Change 14.1 [temp.param] paragraph 3 as follows, moving the example from paragraph 2 to paragraph 3:
A type-parameter whose identifier does not follow an ellipsis defines its identifier to be a typedef-name (if declared with class or typename) or template-name (if declared with template) in the scope of the template declaration. [Note: Because of the name lookup rules, a template-parameter that could be interpreted as either a non-type template-parameter or a type-parameter (because its identifier is the name of an already existing class) is taken as a type-parameter. For example... —end note] [Note: A template parameter may be a class template. For example,
template<class T> class myarray { /* ... */ };

template<class K, class V, template<class T> class C = myarray>
class Map {
  C<K> key;
  C<V> value;
};
—end note]
According to 14.5.5 [temp.class.spec] paragraph 5,
A class template partial specialization may be declared or redeclared in any namespace scope in which its definition may be defined (14.5.1 [temp.class] and 14.5.2 [temp.mem]).
However, there is nothing in those referenced sections specifying where the definition may appear. Should this have referred to the definition of the primary template?
Also, the cross-reference to 14.5.1 [temp.class] is suspect; the actual rules for where non-member class templates may be defined are found in 7.3.1.2 [namespace.memdef] paragraphs 1-2, 8.3 [dcl.meaning] paragraph 1, and 7.3.1 [namespace.def] paragraph 8.
(Apropos of 7.3.1 [namespace.def], the description in paragraph 8 mentions explicit instantiation and explicit specialization, but presumably inadvertently omits partial specializations.)
Proposed resolution (February, 2014) [SUPERSEDED]:
Change 14.5.5 [temp.class.spec] paragraph 5 as follows:
A class template partial specialization may be declared or redeclared in any namespace scope in which its definition the corresponding primary template may be defined (14.5.1 [temp.class] and 14.5.2 [temp.mem]). [Example:...
Additional note, February, 2014:
The proposed resolution approved by CWG at the February, 2014 meeting does not address the additional points raised in the issue, specifically the cross-reference to 14.5.1 [temp.class] and the omission of partial specializations from 7.3.1 [namespace.def]. The issue has been returned to "review" status to consider amending the resolution to include these items.
Proposed Resolution (July, 2014):
Change 7.3.1 [namespace.def] paragraph 8 as follows:
...Furthermore, each member of the inline namespace can subsequently be partially specialized (14.5.5 [temp.class.spec]), explicitly instantiated (14.7.2 [temp.explicit]), or explicitly specialized (14.7.3 [temp.expl.spec]) as though it were a member of the enclosing namespace. Finally, looking up a name...
Change 14.5.5 [temp.class.spec] paragraph 5 as follows:
A class template partial specialization may be declared or redeclared in any namespace scope in which its definition the corresponding primary template may be defined (14.5.1 [temp.class] 7.3.1.2 [namespace.memdef] and 14.5.2 [temp.mem]). [Example:...
The current wording of 15.5.1 [except.terminate] paragraph 2 affords implementations a significant degree of freedom when exception handling results in a call to std::terminate:
In the situation where no matching handler is found, it is implementation-defined whether or not the stack is unwound before std::terminate() is called. In the situation where the search for a handler (15.3 [except.handle]) encounters the outermost block of a function with a noexcept-specification that does not allow the exception (15.4 [except.spec]), it is implementation-defined whether the stack is unwound, unwound partially, or not unwound at all before std::terminate() is called. In all other situations, the stack shall not be unwound before std::terminate() is called.
This contrasts with the treatment of subobjects and objects constructed via delegating constructors in 15.2 [except.ctor] paragraph 2:
An object of any storage duration whose initialization or destruction is terminated by an exception will have destructors executed for all of its fully constructed subobjects (excluding the variant members of a union-like class), that is, for subobjects for which the principal constructor (12.6.2 [class.base.init]) has completed execution and the destructor has not yet begun execution. Similarly, if the non-delegating constructor for an object has completed execution and a delegating constructor for that object exits with an exception, the object's destructor will be invoked.
Here the destructors must be called. It would be helpful if these requirements were harmonized.
Notes from the September, 2013 meeting:
Although the Canadian NB comment principally was a request to reconsider the resolution of issue 1424, which CWG decided to retain, the comment also raised the question above, which CWG felt merited its own issue.
Proposed resolution (June, 2014):
Change all of 15.2 [except.ctor], reparagraphing as follows:
As control passes from the point where an exception is thrown to a handler, destructors are invoked by a process, specified in this section, called stack unwinding. If a destructor directly invoked by stack unwinding exits with an exception, std::terminate is called (15.5.1 [except.terminate]). [Note: Consequently, destructors should generally catch exceptions and not let them propagate out of the destructor. —end note]
The destructor is invoked for all automatic objects each automatic object of class type constructed since the try block was entered. The automatic objects are destroyed in the reverse order of the completion of their construction.
An For an object of class type of any storage duration whose initialization or destruction is terminated by an exception will have destructors executed, the destructor is invoked for all each of its the object's fully constructed subobjects (excluding the variant members of a union-like class), that is, for subobjects each subobject for which the principal constructor (12.6.2 [class.base.init]) has completed execution and the destructor has not yet begun execution. The subobjects are destroyed in the reverse order of the completion of their construction. Such destruction is sequenced before entering a handler of the function-try-block of the constructor or destructor, if any.
Similarly, if the non-delegating constructor for an object has completed execution and a delegating constructor for that object exits with an exception, the object's destructor will be is invoked. Such destruction is sequenced before entering a handler of the function-try-block of a delegating constructor for that object, if any.
[Note: If the object was allocated in by a new-expression (5.3.4 [expr.new]), the matching deallocation function (3.7.4.2 [basic.stc.dynamic.deallocation], 5.3.4 [expr.new], 12.5 [class.free]), if any, is called to free the storage occupied by the object. —end note]
The process of calling destructors for automatic objects constructed on the path from a try block to the point where an exception is thrown is called “stack unwinding.” If a destructor called during stack unwinding exits with an exception, std::terminate is called (15.5.1 [except.terminate]). [Note: So destructors should generally catch exceptions and not let them propagate out of the destructor. —end note]
Delete 15.3 [except.handle] paragraph 11:
The fully constructed base classes and members of an object shall be destroyed before entering the handler of a function-try-block of a constructor for that object. Similarly, if a delegating constructor for an object exits with an exception after the non-delegating constructor for that object has completed execution, the object's destructor shall be executed before entering the handler of a function-try-block of a constructor for that object. The base classes and non-variant members of an object shall be destroyed before entering the handler of a function-try-block of a destructor for that object (12.4 [class.dtor]).
This resolution also resolves issue 1807.
The destruction of fully-constructed array elements when array initialization is terminated by an exception is required by 15.2 [except.ctor] paragraph 2, but the order in which they are to be destroyed is not specified. Presumably it should be in reverse order of construction.
Proposed resolution (June, 2014):
This issue is resolved by the resolution of issue 1774.
According to 15.2 [except.ctor] paragraph 2,
An object of any storage duration whose initialization or destruction is terminated by an exception will have destructors executed for all of its fully constructed subobjects (excluding the variant members of a union-like class), that is, for subobjects for which the principal constructor (12.6.2 [class.base.init]) has completed execution and the destructor has not yet begun execution.
This introduces a potential leak if a variant member is initialized and has a non-trivial destructor. If the assumption can't be made that such an initialized member is the active member at the time an exception occurs so that it can be destroyed, perhaps variant members of types having a non-trivial destructor should be prohibited.
Notes from the June, 2014 meeting:
CWG favored removing the exclusion of variant members from the destruction following an exception during construction, though not during destruction. If the active member of the union has changed between the initialization and destruction, the behavior is undefined.
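For illustration only, a sketch of the leak described above (the names are invented); under the current wording the variant member s is not destroyed when the constructor exits with an exception, while under the favored direction it would be:

#include <string>

union W {
  std::string s;
  W() : s("hello") { throw 0; }   // s is fully constructed when the exception is thrown
  ~W() { }                        // user-provided; does not destroy s
};

void use() {
  try { W w; } catch (...) { }    // whether s is destroyed here is the question at issue
}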
Proposed Resolution (July, 2014):
Change 15.2 [except.ctor] paragraph 2 as follows:
An object of any storage duration whose initialization or destruction is terminated by an exception will have destructors executed for all of its fully constructed subobjects (excluding the variant members of a union-like class), that is, for subobjects for which the principal constructor (12.6.2 [class.base.init]) has completed execution and the destructor has not yet begun execution, except that in the case of destruction, the variant members of a union-like class are not destroyed. Similarly, if the non-delegating constructor...
The section of the Standard reserving names that begin with two underscores or an underscore and a capital letter, 17.6.4.3.2 [global.names], applies only to “programs that use the facilities of the C++ standard library” (17.6.4.1 [constraints.overview]). However, implementations rely on this restriction for mangling, even when no standard library facilities are used. Should this requirement be moved to the core language section?
(There is a related issue with user-defined literal suffixes, 17.6.4.3.5 [usrlit.suffix]. However, these are already mentioned normatively in the core language section, so it could be argued that the question of library usage does not apply.)
Proposed resolution (October, 2014):
Change 2.11 [lex.name] paragraph 3 as follows:
In addition, some identifiers are reserved for use by C++ implementations and standard libraries (17.6.4.3.2 [global.names]) and shall not be used otherwise; no diagnostic is required.
Each identifier that contains a double underscore __ or begins with an underscore followed by an uppercase letter is reserved to the implementation for any use.
Each identifier that begins with an underscore is reserved to the implementation for use as a name in the global namespace.
Change the footnote in 8.4.1 [dcl.fct.def.general] paragraph 8 as follows:
[Footnote: Implementations are permitted to provide additional predefined variables with names that are reserved to the implementation (17.6.4.3.2 [global.names] 2.11 [lex.name]). If a predefined variable is not odr-used (3.2 [basic.def.odr]), its string value need not be present in the program image. —end footnote]
Change the example in 13.5.8 [over.literal] paragraph 8 as follows:
double operator""_Bq(double); // OK: does not use the reserved name identifier _Bq (17.6.4.3.2 [global.names] 2.11 [lex.name]) double operator"" _Bq(double); // uses the reserved name identifier _Bq (17.6.4.3.2 [global.names] 2.11 [lex.name])
Delete 17.6.4.3.2 [global.names]:
Certain sets of names and function signatures are always reserved to the implementation:
Each name that contains a double underscore __ or begins with an underscore followed by an uppercase letter (2.12 [lex.key]) is reserved to the implementation for any use.
Each name that begins with an underscore is reserved to the implementation for use as a name in the global namespace.
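For illustration only, a sketch of what the relocated rules reserve; the identifiers are invented:

int _Foo;       // reserved for any use: underscore followed by an uppercase letter
int foo__bar;   // reserved for any use: contains a double underscore
int _bar;       // reserved only as a name in the global namespace
namespace ns { int _bar; }   // not reserved: not in the global namespace and
                             // contains no double underscore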
Section 4.4 [conv.qual] covers the case of multi-level pointers, but does not appear to cover the case of pointers to arrays of pointers. The effect is that arrays are treated differently from simple scalar values.
Consider, for example, the following code (from the thread "Pointer to array conversion question" begun in comp.lang.c++.moderated):
int main()
{
  double *array2D[2][3];

  double * (*array2DPtr1)[3] = array2D;                   // Legal
  double * const (*array2DPtr2)[3] = array2DPtr1;         // Legal
  double const * const (*array2DPtr3)[3] = array2DPtr2;   // Illegal
}

and compare this code with:
int main()
{
  double *array[2];

  double * *ppd1 = array;              // legal
  double * const *ppd2 = ppd1;         // legal
  double const * const *ppd3 = ppd2;   // certainly legal (4.4/4)
}
The problem appears to be that the pointed-to types in the first example are unrelated, since nothing in the relevant section of the standard covers them: 4.4 [conv.qual] does not mention conversions of the form "cv array of N pointer to T" into "cv array of N pointer to cv T".
It appears that reinterpret_cast is the only way to perform the conversion.
Suggested resolution:
Artem Livshits proposed a resolution:
"I suppose if the definition of "similar" pointer types in 4.4 [conv.qual] paragraph 4 was rewritten like this:
T1 is cv1,0 P0 cv1,1 P1 ... cv1,n-1 Pn-1 cv1,n T
and
T2 is cv2,0 P0 cv2,1 P1 ... cv2,n-1 Pn-1 cv2,n T
where Pi is either a "pointer to" or a "pointer to an array of Ni"; besides P0 may be also a "reference to" or a "reference to an array of N0" (in the case of P0 of T2 being a reference, P0 of T1 may be nothing),
it would address the problem.
In fact I guess Pi in this notation may be also a "pointer to member", so 4.4 [conv.qual]/{4,5,6,7} would be nicely wrapped in one paragraph."
Additional note, February, 2014:
Geoffrey Romer: LWG plans to resolve US 16/LWG 2118, which concerns qualification-conversion of unique_ptr for array types, by effectively punting the issue to core: unique_ptr<T[]> will be specified to be convertible to unique_ptr<U[]> only if T(*)[] is convertible to U(*)[]. LWG and LEWG have jointly decided to adopt the same approach for shared_ptr<T[]> and shared_ptr<T[N]> in the Fundamentals TS. This will probably substantially raise the visibility of core issue 330, which concerns the fact that array types support only top-level qualification conversion of the element type, so it'd be nice if CWG could bump up the priority of that issue.
See also issue 1865.
Proposed resolution (October, 2014):
The resolution is contained in paper N4178.
The example in 5.19 [expr.const] paragraph 6,
struct A {
  constexpr A(int i) : val(i) { }
  constexpr operator int() { return val; }
  constexpr operator long() { return 43; }
private:
  int val;
};
template<int> struct X { };
constexpr A a = 42;
X<a> x;       // OK: unique conversion to int
int ary[a];   // error: ambiguous conversion
is no longer correct now that constexpr does not imply const for member functions, since the conversion functions cannot be invoked for the constant a.
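Presumably the editorial fix adds const to the conversion functions, along the following lines (a sketch, not the adopted wording):

struct A {
  constexpr A(int i) : val(i) { }
  constexpr operator int() const { return val; }
  constexpr operator long() const { return 43; }
private:
  int val;
};
template<int> struct X { };
constexpr A a = 42;
X<a> x;       // OK: unique conversion to int
int ary[a];   // error: ambiguous conversion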
Notes from the September, 2013 meeting:
This issue is being handled editorially and is being placed in "review" status to ensure that the change has been made.
Whether an implementation is hosted or freestanding is only required to be documented by the value of the __STDC_HOSTED__ macro (16.8 [cpp.predefined]). Should this characteristic be classified as implementation-defined, thus requiring documentation?
The current wording does not indicate that initialization of a non-class object is a full-expression, but presumably should do so.
Additional note, April, 2013:
There is implementation variance in the treatment of the following example:
struct A { A() { puts("ctor"); } A(const A&) { puts("copy"); } const A&get() const { return *this; } ~A() { puts("dtor"); } }; struct B { B(A, A) {} }; typedef A A2[2]; A2 a = { A().get(), A().get() }; B b = { A().get(), A().get() }; int c = (A2{ A().get(), A().get() }, 0); int d = (B{ A().get(), A().get() }, 0); int main() {}
Additional note (February, 2014):
Aggregate initialization could also involve more than one full-expression, so the limitation above to “initialization of a non-class object” is not correct.
According to 2.3 [lex.charset] paragraph 2,
The character designated by the universal-character-name \UNNNNNNNN is that character whose character short name in ISO/IEC 10646 is NNNNNNNN; the character designated by the universal-character-name \uNNNN is that character whose character short name in ISO/IEC 10646 is 0000NNNN. If the hexadecimal value for a universal-character-name corresponds to a surrogate code point (in the range 0xD800-0xDFFF, inclusive), the program is ill-formed. Additionally, if the hexadecimal value for a universal-character-name outside the c-char-sequence, s-char-sequence, or r-char-sequence of a character or string literal corresponds to a control character (in either of the ranges 0x00-0x1F or 0x7F-0x9F, both inclusive) or to a character in the basic source character set, the program is ill-formed.
It is not specified what should happen if the hexadecimal value does not designate a Unicode code point: is that undefined behavior or does it make the program ill-formed?
As an aside, a note should be added explaining why these requirements apply to an r-char-sequence when, as the footnote at the end of the paragraph explains,
A sequence of characters resembling a universal-character-name in an r-char-sequence (2.14.5 [lex.string]) does not form a universal-character-name.
2.5 [lex.pptoken] paragraph 2 specifies that there are 5 categories of tokens in phases 3 to 6. With 2.13 [lex.operators] paragraph 1, it is unclear whether new is an identifier or a preprocessing-op-or-punc; likewise for delete. This is relevant to answer the question whether
#define delete foo
is a well-formed control-line, since that requires an identifier after the define token.
(See also issue 189.)
According to 2.5 [lex.pptoken] paragraph 3,
If the input stream has been parsed into preprocessing tokens up to a given character:
If the next character begins a sequence of characters that could be the prefix and initial double quote of a raw string literal, such as R", the next preprocessing token shall be a raw string literal. Between the initial and final double quote characters of the raw string, any transformations performed in phases 1 and 2 (trigraphs, universal-character-names, and line splicing) are reverted; this reversion shall apply before any d-char, r-char, or delimiting parenthesis is identified.
However, phase 1 is defined as:
Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set (introducing new-line characters for end-of-line indicators) if necessary. The set of physical source file characters accepted is implementation-defined. Trigraph sequences (2.4 [lex.trigraph]) are replaced by corresponding single-character internal representations. Any source file character not in the basic source character set (2.3 [lex.charset]) is replaced by the universal-character-name that designates that character.
The reversion described in 2.5 [lex.pptoken] paragraph 3 specifically does not mention the replacement of physical end-of-line indicators with new-line characters. Is it intended that, for example, a CRLF in the source of a raw string literal is to be represented as a newline character or as the original characters?
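A sketch of the question (assuming a source file whose physical lines end in CR LF; the variable name is invented):

const char* s = R"(line one
line two)";
// Is the line break in s the new-line character introduced in translation phase 1,
// or is it reverted to the original CR LF end-of-line indicator?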
The syntactic nonterminal punctuator appears in the grammar for token in 2.7 [lex.token], but it is nowhere defined. It should be merged with operator and given an appropriate list of tokens as a definition for the merged term.
The nonterminals operator and punctuator in 2.7 [lex.token] are not defined. There is a definition of the nonterminal operator in 13.5 [over.oper] paragraph 1, but it is apparent that the two nonterminals are not the same: the latter includes keywords and multi-token operators and does not include the nonoverloadable operators mentioned in paragraph 3.
There is a definition of preprocessing-op-or-punc in 2.13 [lex.operators] , with the notation that
Each preprocessing-op-or-punc is converted to a single token in translation phase 7 (2.1).

However, this list does not distinguish between operators and punctuators; it includes digraphs and keywords (can a given token be both a keyword and an operator at the same time?), etc.
Suggested resolution:
Additional note (April, 2005):
The resolution for this problem should also address the fact that sizeof and typeid (and potentially others like decltype that may be added in the future) are described in some places as “operators” but are not listed in 13.5 [over.oper] paragraph 3 among the operators that cannot be overloaded.
(See also issue 369.)
The term “literal” is used without definition except the implicit connection with the syntactic nonterminal literal. The relationships of English terms to syntactic nonterminals (such as “integer literal” and integer-literal) should be examined throughout 2.14 [lex.literal] and its subsections.
According to 2.14.3 [lex.ccon] paragraph 4,
The escape \ooo consists of the backslash followed by one, two, or three octal digits that are taken to specify the value of the desired character. The escape \xhhh consists of the backslash followed by x followed by one or more hexadecimal digits that are taken to specify the value of the desired character. There is no limit to the number of digits in a hexadecimal sequence. A sequence of octal or hexadecimal digits is terminated by the first character that is not an octal digit or a hexadecimal digit, respectively. The value of a character literal is implementation-defined if it falls outside of the implementation-defined range defined for char (for literals with no prefix), char16_t (for literals prefixed by 'u'), char32_t (for literals prefixed by 'U'), or wchar_t (for literals prefixed by 'L').
It is not clearly stated whether the “desired character” being specified reflects the source or the target encoding. This particularly affects UTF-8 string literals (2.14.5 [lex.string] paragraph 7):
A string literal that begins with u8, such as u8"asdf", is a UTF-8 string literal and is initialized with the given characters as encoded in UTF-8.
For example, assuming the source encoding is Latin-1, is u8"\xff" supposed to specify a three-byte string whose first two bytes are 0xc3 0xbf (the UTF-8 encoding of \u00ff) or a two-byte string whose first byte has the value 0xff? (At least some current implementations assume the latter interpretation.)
Notes from the September, 2013 meeting:
The second interpretation (that the escape sequence specifies the execution-time code unit) is intended.
The resolution of issue 1802 clarified that char16_t string literals can contain surrogate pairs, in contrast to char16_t character literals. However, there is no explicit requirement that char16_t literals be encoded as UTF-16, although that is explicitly stated for char16_t character literals, so it's not clear what the value is required to be in the surrogate-pair case.
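For example (a sketch; U+10000 lies outside the basic multilingual plane and therefore requires two char16_t code units):

const char16_t* s = u"\U00010000";   // must the two code units be the UTF-16 surrogate pair 0xD800, 0xDC00?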
According to 2.14.3 [lex.ccon] paragraph 1, a multicharacter literal like 'ab' is conditionally-supported and has type int.
According to 2.14.8 [lex.ext] paragraph 6,
If L is a user-defined-character-literal, let ch be the literal without its ud-suffix. S shall contain a literal operator (13.5.8 [over.literal]) whose only parameter has the type of ch and the literal L is treated as a call of the form
operator "" X(ch)
A user-defined-character-literal like 'ab'_foo would thus require a literal operator
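Presumably, given that 'ab' has type int, that operator would have to be of the form

int operator "" _foo(int);   // hypothetical signature; the return type is immaterial here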
However, that is not one of the signatures permitted by 13.5.8 [over.literal] paragraph 3.
Should multicharacter user-defined-character-literals be conditionally-supported? If so, 13.5.8 [over.literal] paragraph 3 should be adjusted accordingly. If not, a note in 2.14.8 [lex.ext] paragraph 6 saying explicitly that they are not supported would be helpful.
The description of the numeric literals occurring as part of user-defined-integer-literals and user-defined-floating-literals in 2.14.8 [lex.ext] says nothing about whether they are required to satisfy the same constraints as literals that are not part of a user-defined-literal. In particular, because it is the spelling, not the value, of the literal that is used for raw literal operators and literal operator templates, there is no particular reason that they should be restricted to the maximum values and precisions that apply to ordinary literals (and one could imagine that this would be a good notation for allowing literals of extended-precision types).
Is this relaxation of limits intended to be required, or is it a quality-of-implementation issue? Should something be said, either normatively or non-normatively, about this question?
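A sketch of the kind of code in question, using a hypothetical suffix _big bound to a raw literal operator (which receives only the spelling of the literal, not its value):

const char* operator "" _big(const char*);
auto x = 340282366920938463463374607431768211455_big;   // exceeds every standard integer type:
                                                         // required to be accepted, or a quality-of-implementation matter?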
According to 3 [basic] paragraph 6,
A variable is introduced by the declaration of a reference other than a non-static data member or of an object.
In other words, non-static data members of reference type are not variables. This complicates the wording in a number of places, where the text refers to “variable or data member,” presumably to cover the reference case, but that phrasing could lead to the mistaken impression that all data members are not variables. It would be better if either there were a term for the current phrase “variable or data member” or if there were a less-unwieldy term for “non-static data member of reference type” that could be used in place of “data member” in the current phrasing.
Clause 12 [special] is perfectly clear that special member functions are only implicitly defined when they are odr-used. This creates a problem for constant expressions in unevaluated contexts:
struct duration {
constexpr duration() {}
constexpr operator int() const { return 0; }
};
// duration d = duration(); // #1
int n = sizeof(short{duration(duration())});
The issue here is that we are not permitted to implicitly define constexpr duration::duration(duration&&) in this program, so the expression in the initializer list is not a constant expression (because it invokes a constexpr function which has not been defined), so the braced initializer contains a narrowing conversion, so the program is ill-formed.
If we uncomment line #1, the move constructor is implicitly defined and the program is valid. This spooky action at a distance is extremely unfortunate. Implementations diverge on this point.
There are also similar problems with implicit instantiation of constexpr functions. It is not clear which contexts require their instantiation. For example:
template<int N> struct U {};
int g(int);
template<typename T> constexpr int h(T) { return T::error; }
template<typename T> auto f(T t) -> U<g(T()) + h(T())> {}
int f(...);
int k = f(0);
There are at least two different ways of modeling the current rules:
constexpr function instantiation is triggered by constant expression evaluation. In that case, the validity of the above program depends on the order in which that evaluation proceeds:
If the LHS of the + is evaluated first, the program might be valid, because the implementation might bail out evaluation before triggering the ill-formed instantiation of h<int>.
If the RHS is evaluated first, the program is invalid, because the instantiation fails.
constexpr function instantiation is triggered whenever a constexpr function is referenced from an expression which could be evaluated (note that this is not the same as being potentially-evaluated)
These two approaches can be distinguished by code like this:
int k = sizeof(U<0 && h(0)>);
Under the first approach, this code is valid; under the second, it is ill-formed.
A possible approach to resolving this issue would be to change the definition of “potentially-evaluated” such that template arguments, array bounds, and braced-init-lists (and any other expressions which are constant evaluated) are always potentially-evaluated, even if they appear within an unevaluated context, and to change 14.7.1 [temp.inst] paragraph 3 to say simply that function template specializations are implicitly instantiated when they are odr-used.
A related question is whether putatively constexpr constructors must be instantiated in order to determine whether their class is a literal type or not. See issue 1358.
Jason Merrill:
I'm concerned about unintended side-effects of such a large change to “potentially-evaluated;” I would prefer something that only affects constexpr.
It occurs to me that this is parallel to issue 1330: just like we want to instantiate exception specifiers for calls in unevaluated context, we also want to instantiate constexpr functions. I think we should define some other term to say when there's a reference to a declaration, and then say that the declaration is odr-used when that happens in potentially-evaluated context.
Notes from the April, 2013 meeting:
An additional question was raised about whether constexpr functions should be instantiated as a result of appearing within unevaluated subexpressions of constant expressions. For example:
#include <type_traits>

template <class T> constexpr T f(T t) { return +t; }
struct A { };
template <class T> decltype(std::is_scalar<T>::value ? T::fail : f(T())) g() { }
template <class T> void g(...);
int main() {
  g<A>();
}
If constexpr instantiation happens during constant expression evaluation, f<A> is never instantiated and the program is well-formed. If constexpr instantiation happens during parsing, f<A> is instantiated and the program is ill-formed.
The description in 3.2 [basic.def.odr] paragraph 6 of when entities can be multiply-declared in a program does not, but should, discuss variable templates.
According to 2.6 [lex.digraph] paragraph 2,
In all respects of the language, each alternative token behaves the same, respectively, as its primary token, except for its spelling.
However, the primary and alternative tokens are different tokens, which runs afoul of the ODR requirement in 3.2 [basic.def.odr] paragraph 6 that the definitions consist of the “same sequence of tokens.” This wording should be amended to allow for use of primary and alternative tokens.
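For example (a sketch of the kind of code affected):

// translation unit 1
inline int f() <% return 0; %>

// translation unit 2
inline int f() { return 0; }   // under a strict reading, not the “same sequence of tokens”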
The definition of the potential results of an expression in 3.2 [basic.def.odr] paragraph 2 does not, but should, include the subscript operator.
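A sketch of the kind of example the subscript case affects (the question is whether S::arr is odr-used here, requiring a namespace-scope definition):

struct S { static constexpr int arr[2] = { 1, 2 }; };
int f(int i) { return S::arr[i]; }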
The various uses of the term “declarative region” throughout the Standard indicate that the term is intended to refer to the entire block, class, or namespace that contains a given declaration. For example, 3.3 [basic.scope] paragraph 2 says, in part:
[Example: in
int j = 24;
int main() {
  int i = j, j;
  j = 42;
}

The declarative region of the first j includes the entire example... The declarative region of the second declaration of j (the j immediately before the semicolon) includes all the text between { and }...
However, the actual definition given for “declarative region” in 3.3 [basic.scope] paragraph 1 does not match this usage:
Every name is introduced in some portion of program text called a declarative region, which is the largest part of the program in which that name is valid, that is, in which that name may be used as an unqualified name to refer to the same entity.
Because (except in class scope) a name cannot be used before it is declared, this definition contradicts the statement in the example and many other uses of the term throughout the Standard. As it stands, this definition is identical to that of the scope of a name.
The term “scope” is also misused. The scope of a declaration is defined in 3.3 [basic.scope] paragraph 1 as the region in which the name being declared is valid. However, there is frequent use of the phrase “the scope of a class,” not referring to the region in which the class's name is valid but to the declarative region of the class body, and similarly for namespaces, functions, exception handlers, etc. There is even a mention of looking up a name “in the scope of the complete postfix-expression” (3.4.5 [basic.lookup.classref] paragraph 3), which is the exact inverse of the scope of a declaration.
This terminology needs a thorough review to make it logically consistent. (Perhaps a discussion of the scope of template parameters could also be added to section 3.3 [basic.scope] at the same time, as all other kinds of scopes are described there.)
Proposed resolution (November, 2006):
Change 3.3 [basic.scope] paragraph 1 as follows:
Every name is introduced in some portion of program text called a declarative region, which is the largest part of the program in which that name is valid, that is, in which that name may be used as an unqualified name to refer to the same entity a statement, block, function declarator, function-definition, class, handler, template-declaration, template-parameter-list of a template template-parameter, or namespace. In general, each particular name is valid may be used as an unqualified name to refer to the entity of its declaration or to the label only within some possibly discontiguous portion of program text called its scope. To determine the scope of a declaration...
Change 3.3 [basic.scope] paragraph 3 as follows:
The names declared by a declaration are introduced into the scope in which the declaration occurs declarative region that directly encloses the declaration, except that declaration-statements, function parameter names in the declarator of a function-definition, exception-declarations (3.3.3 [basic.scope.block]), the presence of a friend specifier (11.3 [class.friend]), certain uses of the elaborated-type-specifier (7.1.6.3 [dcl.type.elab]), and using-directives (7.3.4 [namespace.udir]) alter this general behavior.
Change 3.3.3 [basic.scope.block] paragraphs 1-3 and add a new paragraph 4 before the existing paragraph 4 as follows:
A name declared in a block (6.3 [stmt.block]) is local to that block. Its potential scope begins at its point of declaration (3.3.2 [basic.scope.pdecl]) and ends at the end of its declarative region. The declarative region of a name declared in a declaration-statement is the directly enclosing block (6.3 [stmt.block]). Such a name is local to the block.
The potential scope declarative region of a function parameter name (including one appearing in the declarator of a function-definition or in a lambda-parameter-declaration-clause) or of a function-local predefined variable in a function definition (8.4 [dcl.fct.def]) begins at its point of declaration. If the function has a function-try-block the potential scope of a parameter or of a function-local predefined variable ends at the end of the last associated handler, otherwise it ends at the end of the outermost block of the function definition. A parameter name is the entire function definition or lambda-expression. Such a name is local to the function definition and shall not be redeclared in the any outermost block of the function definition nor in the outermost block of any handler associated with a function-try-block function-body (including handlers of a function-try-block) or lambda-expression.
The name in a catch exception-declaration The declarative region of a name declared in an exception-declaration is its entire handler. Such a name is local to the handler and shall not be redeclared in the outermost block of the handler.
The potential scope of any local name begins at its point of declaration (3.3.2 [basic.scope.pdecl]) and ends at the end of its declarative region.
Change 3.3.5 [basic.funscope] as indicated:
Labels (6.1 [stmt.label]) have function scope and may be used anywhere in the function in which they are declared except in members of local classes (9.8 [class.local]) of that function. Only labels have function scope.
Change 6.7 [stmt.dcl] paragraph 1 as follows:
A declaration statement introduces one or more new identifiers names into a block; it has the form
declaration-statement:
block-declaration
[Note: If an identifier a name introduced by a declaration was previously declared in an outer block, the outer declaration is hidden for the remainder of the block, after which it resumes its force (3.3.10 [basic.scope.hiding]). —end note]
[Drafting notes: This resolution deals almost exclusively with the unclear definition of “declarative region.” I've left the ambiguous use of “scope” alone for now. However, sections 3.3.x all have headings reading “xxx scope,” but they refer not to the scope of a declaration but to the different kinds of declarative regions and their effects on the scope of declarations contained therein. To me, it looks like most of 3.4 should refer to “declarative region” and not to “scope.”
The change to 6.7 fixes an “identifier” misuse (e.g., extern T operator+(T,T); at block scope introduces a name but not an identifier) and removes normative redundancy.]
The Standard does not completely specify how to look up the type-name(s) in a pseudo-destructor-name (5.2 [expr.post] paragraph 1, 5.2.4 [expr.pseudo]), and what information it does have is incorrect and/or in the wrong place. Consider, for instance, 3.4.5 [basic.lookup.classref] paragraphs 2-3:
If the id-expression in a class member access (5.2.5 [expr.ref]) is an unqualified-id, and the type of the object expression is of a class type C (or of pointer to a class type C), the unqualified-id is looked up in the scope of class C. If the type of the object expression is of pointer to scalar type, the unqualified-id is looked up in the context of the complete postfix-expression.
If the unqualified-id is ~type-name, and the type of the object expression is of a class type C (or of pointer to a class type C), the type-name is looked up in the context of the entire postfix-expression and in the scope of class C. The type-name shall refer to a class-name. If type-name is found in both contexts, the name shall refer to the same class type. If the type of the object expression is of scalar type, the type-name is looked up in the scope of the complete postfix-expression.
There are at least three things wrong with this passage with respect to pseudo-destructors:
A pseudo-destructor call (5.2.4 [expr.pseudo]) is not a “class member access”, so the statements about scalar types in the object expressions are vacuous: the object expression in a class member access is required to be a class type or pointer to class type (5.2.5 [expr.ref] paragraph 2).
On a related note, the lookup for the type-name(s) in a pseudo-destructor name should not be described in a section entitled “Class member access.”
Although the class member access object expressions are carefully allowed to be either a class type or a pointer to a class type, paragraph 2 mentions only a “pointer to scalar type” (disallowing references) and paragraph 3 deals only with a “scalar type,” presumably disallowing pointers (although it could possibly be a very subtle way of referring to both non-class pointers and references to scalar types at once).
The other point at which lookup of pseudo-destructors is mentioned is 3.4.3 [basic.lookup.qual] paragraph 5:
If a pseudo-destructor-name (5.2.4 [expr.pseudo]) contains a nested-name-specifier, the type-names are looked up as types in the scope designated by the nested-name-specifier.
Again, this specification is in the wrong location (a pseudo-destructor-name is not a qualified-id and thus should not be treated in the “Qualified name lookup” section).
Finally, there is no place in the Standard that describes the lookup for pseudo-destructor calls of the form p->T::~T() and r.T::~T(), where p and r are a pointer and reference to scalar, respectively. To the extent that it gives any guidance at all, 3.4.5 [basic.lookup.classref] deals only with the case where the ~ immediately follows the . or ->, and 3.4.3 [basic.lookup.qual] deals only with the case where the pseudo-destructor-name contains a nested-name-specifier that designates a scope in which names can be looked up.
See document J16/06-0008 = WG21 N1938 for further discussion of this and related issues, including 244, 305, 399, and 414.
Proposed resolution (June, 2008):
Add a new paragraph following 5.2 [expr.post] paragraph 2 as follows:
When a postfix-expression is followed by a dot . or arrow -> operator, the interpretation depends on the type T of the expression preceding the operator. If the operator is ., T shall be a scalar type or a complete class type; otherwise, T shall be a pointer to a scalar type or a pointer to a complete class type. When T is a (pointer to) a scalar type, the postfix-expression to which the operator belongs shall be a pseudo-destructor call (5.2.4 [expr.pseudo]); otherwise, it shall be a class member access (5.2.5 [expr.ref]).
Change 5.2.4 [expr.pseudo] paragraph 2 as follows:
The left-hand side of the dot operator shall be of scalar type. The left-hand side of the arrow operator shall be of pointer to scalar type. This scalar type The type of the expression preceding the dot operator, or the type to which the expression preceding the arrow operator points, is the object type...
Change 5.2.5 [expr.ref] paragraph 2 as follows:
For the first option (dot) the type of the first expression (the object expression) shall be “class object” (of a complete type) is a class type. For the second option (arrow) the type of the first expression (the pointer expression) shall be “pointer to class object” (of a complete type) is a pointer to a class type. In these cases, the id-expression shall name a member of the class or of one of its base classes.
Add a new paragraph following 3.4 [basic.lookup] paragraph 2 as follows:
In a pseudo-destructor-name that does not include a nested-name-specifier, the type-names are looked up as types in the context of the complete expression.
Delete the last sentence of 3.4.5 [basic.lookup.classref] paragraph 2:
If the id-expression in a class member access (5.2.5 [expr.ref]) is an unqualified-id, and the type of the object expression is of a class type C, the unqualified-id is looked up in the scope of class C. If the type of the object expression is of pointer to scalar type, the unqualified-id is looked up in the context of the complete postfix-expression.
Notes from the August, 2011 meeting:
The proposed resolution must be updated with respect to the current wording of the WP.
The description of name lookup in the parameter-declaration-clause of member functions in 3.4.1 [basic.lookup.unqual] paragraphs 7-8 is flawed in at least two regards.
First, both paragraphs 7 and 8 apply to the parameter-declaration-clause of a member function definition and give different rules for the lookup. Paragraph 7 applies to names "used in the definition of a class X outside of a member function body...," which includes the parameter-declaration-clause of a member function definition, while paragraph 8 applies to names following the function's declarator-id (see the proposed resolution of issue 41), including the parameter-declaration-clause.
Second, paragraph 8 appears to apply to the type names used in the parameter-declaration-clause of a member function defined inside the class definition. That is, it appears to allow the following code, which was not the intent of the Committee:
struct S {
  void f(I i) { }
  typedef int I;
};
Additional note, January, 2012:
brace-or-equal-initializers for non-static data members are intended effectively as syntactic sugar for mem-initializers in constructor definitions; the lookup should be the same.
According to 3.4.1 [basic.lookup.unqual] paragraph 10,
In a friend declaration naming a member function, a name used in the function declarator and not part of a template-argument in the declarator-id is first looked up in the scope of the member function's class (10.2 [class.member.lookup]). If it is not found, or if the name is part of a template-argument in the declarator-id, the look up is as described for unqualified names in the definition of the class granting friendship.
The corresponding specification for non-friend declarations in paragraph 8 applies the class-scope lookup only to names that follow the declarator-id. The same should be true in friend declarations.
Issue 125 concerned an example like
friend A::B::C();
which might be parsed as either
friend A (::B::C)();
or
friend A::B (::C)();
Its resolution attempted to make such constructs unambiguously ill-formed by allowing any identifier, not just namespaces and types, to appear in a nested-name-specifier, apparently on the assumption that C in this case would become part of an ill-formed nested-name-specifier instead of being taken as the unqualified-id in a qualified-id. Unfortunately, the current specification does not implement that intent, leaving both parses as valid possibilities.
A different approach might be to adjust the specification of the lookup of names appearing in nested-name-specifiers from
If a :: scope resolution operator in a nested-name-specifier is not preceded by a decltype-specifier, lookup of the name preceding that :: considers only namespaces, types, and templates whose specializations are types. If the name found does not designate a namespace or a class, enumeration, or dependent type, the program is ill-formed.
to
Lookup of an identifier followed by a :: scope resolution operator considers only namespaces, types, and templates whose specializations are types. If an identifier, template-id, or decltype-specifier is followed by a :: scope resolution operator, the name shall designate a namespace, class, enumeration, or dependent type, and shall form part of a nested-name-specifier.
This approach would also remove the need for deferred lookup for template-ids and thus resolve issue 1771.
Paragraph 7 of 3.4.5 [basic.lookup.classref] says,
If the id-expression is a conversion-function-id, its conversion-type-id shall denote the same type in both the context in which the entire postfix-expression occurs and in the context of the class of the object expression (or the class pointed to by the pointer expression).

Does this mean that the following example is ill-formed?
struct A { operator int(); } a;
void foo() {
  typedef int T;
  a.operator T();   // 1) error T is not found in the context
                    //    of the class of the object expression?
}

The second bullet in paragraph 1 of 3.4.3.1 [class.qual] says,
a conversion-type-id of an operator-function-id is looked up both in the scope of the class and in the context in which the entire postfix-expression occurs and shall refer to the same type in both contexts

How about:
struct A { typedef int T; operator T(); };
struct B : A { operator T(); } b;
void foo() {
  b.A::operator T();   // 2) error T is not found in the context
                       //    of the postfix-expression?
}

Is this interpretation correct? Or was the intent for this to be an error only if T was found in both scopes and referred to different entities?
If the intent was for these to be errors, how do these rules apply to template arguments?
template <class T1> struct A {
  operator T1();
};

template <class T2> struct B : A<T2> {
  operator T2();
  void foo() {
    T2 a = A<T2>::operator T2();            // 3) error? when instantiated T2 is not
                                            //    found in the scope of the class
    T2 b = ((A<T2>*)this)->operator T2();   // 4) error when instantiated?
  }
};
(Note bullets 2 and 3 in paragraph 1 of 3.4.3.1 [class.qual] refer to postfix-expression. It would be better to use qualified-id in both cases.)
Erwin Unruh: The intent was that you look in both contexts. If you find it only once, that's the symbol. If you find it in both, both symbols must be "the same" in some respect. (If you don't find it, it's an error.)
Mike Miller: What's not clear to me in these examples is whether what is being looked up is T or int. Clearly the T has to be looked up somehow, but the "name" of a conversion function clearly involves the base (non-typedefed) type, not typedefs that might be used in a definition or reference (cf 3 [basic] paragraph 7 and 12.3 [class.conv] paragraph 5). (This is true even for types that must be written using typedefs because of the limited syntax in conversion-type-ids — e.g., the "name" of the conversion function in the following example
typedef void (*pf)();
struct S { operator pf(); };

is S::operator void(*)(), even though you can't write its name directly.)
My guess is that this means that in each scope you look up the type named in the reference and form the canonical operator name; if the name used in the reference isn't found in one or the other scope, the canonical name constructed from the other scope is used. These names must be identical, and the conversion-type-id in the canonical operator name must not denote different types in the two scopes (i.e., the type might not be found in one or the other scope, but if it's found in both, they must be the same type).
I think this is all very vague in the current wording.
3.4.5 [basic.lookup.classref] does not mention template aliases as the possible result of the lookup but should do so.
In an example like
template<typename T> void f(T p)->decltype(p.T::x);
The nested-name-specifier T:: looks like it refers to the template parameter. However, if this is instantiated with a type like
struct T { int x; }; struct S: T { };
the reference will be ambiguous, since it is looked up in both the context of the expression, finding the template parameter, and in the class, finding the base class injected-class-name, and this could be a deduction failure. As a result, the same declaration with a different parameter name
template<typename U> void f(U p)->decltype(p.U::x);
is, in fact, not a redeclaration because the two can be distinguished by SFINAE.
It would be better to add a new lookup rule that says that if a name in a template definition resolves to a template parameter, that name is not subject to further lookup at instantiation time.
The Standard talks about looking up a conversion-type-id as if it were an identifier (3.4.5 [basic.lookup.classref] paragraph 7), but that is not exactly accurate. Presumably it should talk instead about looking up names (if any) appearing in the type-specifier-seq of the conversion-type-id.
According to 3.4.5 [basic.lookup.classref] paragraph 1,
In a class member access expression (5.2.5 [expr.ref]), if the . or -> token is immediately followed by an identifier followed by a <, the identifier must be looked up to determine whether the < is the beginning of a template argument list (14.2 [temp.names]) or a less-than operator. The identifier is first looked up in the class of the object expression. If the identifier is not found, it is then looked up in the context of the entire postfix-expression and shall name a class template.
Given
template<typename T> T end(T);

template<typename T> bool Foo(T it) {
  return it->end < it->end;
}
since it has a dependent type, end cannot be looked up in the class of the object expression and is instead looked up in the context of the postfix-expression. This lookup finds the function template, making the expression ill-formed.
One possibility might be to limit the lookup to the class of the object expression when the object expression is dependent.
According to 3.4.5 [basic.lookup.classref] paragraph 3,
If the unqualified-id is ~type-name, the type-name is looked up in the context of the entire postfix-expression. If the type T of the object expression is of a class type C, the type-name is also looked up in the scope of class C. At least one of the lookups shall find a name that refers to (possibly cv-qualified) T.
This would apply to an example like
namespace K {
  template <typename T, typename U = char> struct A { };
  A<short> *a;
}

template <typename T> using A = K::A<short, T>;

int main() {
  K::a->~A<char>();
}
Current implementations, however, only apply the dual lookup when the type-name is not a template-id. The specification should be changed to reflect current practice.
An example in 3.5 [basic.link] paragraph 6 creates two file-scope variables with the same name, one with internal linkage and one with external.
static void f();
static int i = 0;          // 1
void g() {
  extern void f();         // internal linkage
  int i;                   // 2: i has no linkage
  {
    extern void f();       // internal linkage
    extern int i;          // 3: external linkage
  }
}
Is this really what we want? C99 has 6.2.2 paragraph 7, which gives undefined behavior for an identifier that appears with both internal and external linkage in the same translation unit. C++ doesn't seem to have an equivalent.
Notes from October 2003 meeting:
We agree that this is an error. We propose to leave the example but change the comment to indicate that line //3 has undefined behavior, and elsewhere add a normative rule giving such a case undefined behavior.
Proposed resolution (October, 2005):
Change 3.5 [basic.link] paragraph 6 as indicated:
...Otherwise, if no matching entity is found, the block scope entity receives external linkage. If, within a translation unit, the same entity is declared with both internal and external linkage, the behavior is undefined.
[Example:
static void f();
static int i = 0;          // 1
void g () {
  extern void f ();        // internal linkage
  int i;                   // 2: i has no linkage
  {
    extern void f ();      // internal linkage
    extern int i;          // 3: external linkage
  }
}

There are three objects named i in this program: the object with internal linkage introduced by the declaration in global scope (line //1), the object with automatic storage duration and no linkage introduced by the declaration on line //2, and the object with static storage duration and external linkage introduced by the declaration on line //3. Without the declaration at line //2, the declaration at line //3 would link with the declaration at line //1. But because the declaration with internal linkage is hidden, //3 is given external linkage, resulting in a linkage conflict. —end example]
Notes from the April 2006 meeting:
According to 3.5 [basic.link] paragraph 9, the two variables with linkage in the proposed example are not “the same entity” because they do not have the same linkage. Some other formulation will be needed to describe the relationship between those two variables.
Notes from the October 2006 meeting:
The CWG decided that it would be better to make a program with this kind of linkage mismatch ill-formed instead of having undefined behavior.
According to 3.5 [basic.link] paragraph 6,
The name of a function declared in block scope and the name of a variable declared by a block scope extern declaration have linkage. If there is a visible declaration of an entity with linkage having the same name and type, ignoring entities declared outside the innermost enclosing namespace scope, the block scope declaration declares that same entity and receives the linkage of the previous declaration. If there is more than one such matching entity, the program is ill-formed.
It is not clear how declarations that are in the lexical scope of the block-scope declaration but are not members of the nearest enclosing namespace (see 7.3.1 [namespace.def] paragraph 6) should be treated. (For example, the function in which the block extern appears might be defined in an enclosing namespace that contains a visible declaration of the name, or it might be a member function of a class that itself declares a member function with the name being declared.) Should such declarations produce an error, or should the lexically-nearer declaration simply be ignored? There is implementation divergence on this point.
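A sketch of one of the cases in question:

extern int n;       // a member of the global namespace, visible below
namespace N {
  void f() {
    extern int n;   // does this match ::n, or is ::n ignored because it is declared
                    // outside the innermost enclosing namespace N?
  }
}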
According to 3.5 [basic.link] paragraph 9,
Two names that are the same (Clause 3 [basic]) and that are declared in different scopes shall denote the same variable, function, type, enumerator, template or namespace if
both names have external linkage or else both names have internal linkage and are declared in the same translation unit; and
both names refer to members of the same namespace or to members, not by inheritance, of the same class; and
when both names denote functions, the parameter-type-lists of the functions (8.3.5 [dcl.fct]) are identical; and
when both names denote function templates, the signatures (14.5.6.1 [temp.over.link]) are the same.
This is not as clear as it should be. The intent is that this rule prevents declaring a name with external linkage to be, for instance, a type in one translation unit and a namespace in a different translation unit. It has instead been read as begging the question of what it means for two entities to be the same. The wording should be tweaked to make the intention clear. Among other things, it should be clarified that “declared in” refers to the namespace of which the name is a member, not the lexical scope in which the declaration appears (which affects friend declarations, block-scope extern declarations, and elaborated-type-specifiers).
There is a similar restriction in 3.3.1 [basic.scope.declarative] paragraph 4 dealing with declarations within a single declarative region, while 3.5 [basic.link] paragraph 9 deals with names that are associated via linkage. The relationship between these complementary requirements may need to be clarified as well.
There does not appear to be any restriction on giving main() an explicit language linkage, but it should probably be either ill-formed or conditionally-supported.
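That is, a declaration such as the following does not currently appear to be restricted:

extern "C" int main() { }   // ill-formed, or conditionally-supported?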
According to 3.6.2 [basic.start.init] paragraph 3,
An implementation is permitted to perform the initialization of a non-local variable with static storage duration as a static initialization even if such initialization is not required to be done statically, provided that
the dynamic version of the initialization does not change the value of any other object of namespace scope prior to its initialization, and
the static version of the initialization produces the same value in the initialized variable as would be produced by the dynamic initialization if all variables not required to be initialized statically were initialized dynamically.
This determination does not take into account side effects of the initialization; it considers only the values of namespace-scope variables.
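A sketch of the concern (noisy is a hypothetical function whose only side effect is on a block-scope static):

int noisy() { static int calls = 0; ++calls; return 0; }
int x = noisy();   // the dynamic initialization changes no namespace-scope object and produces 0,
                   // the same value as static (zero-)initialization, so the side effect on calls
                   // could apparently be elided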
The resolution of issue 1489 added wording regarding value initialization to 3.6.2 [basic.start.init] paragraph 2 in an attempt to clarify the status of an example like
int a[1000]{};
However, this example is aggregate initialization, not value initialization. Also, now that brace-or-equal-initializers are allowed in aggregates, the wording needs to be updated so that an aggregate with constant non-static data member initializers can qualify for constant initialization.
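For example (a sketch of the aggregate case with a constant non-static data member initializer):

struct A { int i = 42; };   // aggregate whose brace-or-equal-initializer is a constant expression
A a{};                      // presumably intended to qualify for constant initialization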
According to 3.7 [basic.stc] paragraph 2,
Static, thread, and automatic storage durations are associated with objects introduced by declarations (3.1 [basic.def]) and implicitly created by the implementation (12.2 [class.temporary]).
The apparent intent of the reference to 12.2 [class.temporary] is that a temporary whose lifetime is extended to be that of a reference with one of those storage durations is considered also to have that storage duration. This interpretation is buttressed by use of the phrase “an object with the same storage duration as the temporary” (twice) in 12.2 [class.temporary] paragraph 5.
There are two problems, however: first, the specification of lifetime extension of temporaries (also in 12.2 [class.temporary] paragraph 5) does not say anything about storage duration. Also, nothing is said in either of these locations about the storage duration of a temporary whose lifetime is not extended.
The latter point is important because 3.8 [basic.life] makes a distinction between the lifetime of an object and the acquisition and release of the storage the object occupies, at least for objects with non-trivial initialization and/or a non-trivial destructor. The assumption is made in 12.2 [class.temporary] and elsewhere that the storage in which a temporary is created is no longer available for reuse, as specified in 3.8 [basic.life], after the lifetime of the temporary has ended, but this assumption is not explicitly stated. One way to make that assumption explicit would be to define a storage duration for temporaries whose lifetime is not extended.
Do we need explicit language to forbid auto as the return type of allocation and deallocation functions?
(See also issue 1669.)
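A sketch of the kind of declaration in question (std::malloc is used only so that the deduced return type is void*):

#include <cstdlib>
auto operator new(std::size_t n) { return std::malloc(n); }   // return type deduced as void*:
                                                              // should this be explicitly forbidden?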
According to 3.7.4.1 [basic.stc.dynamic.allocation] paragraph 3,
If an allocation function declared with a non-throwing exception-specification (15.4 [except.spec]) fails to allocate storage, it shall return a null pointer. Any other allocation function that fails to allocate storage shall indicate failure only by throwing an exception (15.1 [except.throw]) of a type that would match a handler (15.3 [except.handle]) of type std::bad_alloc (18.6.2.1 [bad.alloc]).
The use of the word “shall” to constrain runtime behavior is inappropriate, as it normally identifies cases requiring a compile-time diagnostic.
Is the following well-formed?
int f() {
  int i = 3;
  new (&i) float(1.2);
  return i;
}
The wording that is intended to prevent such shenanigans, 3.8 [basic.life] paragraphs 7-9, doesn't quite apply here. In particular, paragraph 7 reads,
If, after the lifetime of an object has ended and before the storage which the object occupied is reused or released, a new object is created at the storage location which the original object occupied, a pointer that pointed to the original object, a reference that referred to the original object, or the name of the original object will automatically refer to the new object and, once the lifetime of the new object has started, can be used to manipulate the new object, if:
the storage for the new object exactly overlays the storage location which the original object occupied, and
the new object is of the same type as the original object (ignoring the top-level cv-qualifiers), and...
The problem here is that this wording only applies “after the lifetime of an object has ended and before the storage which the object occupied is reused;” for an object of a scalar type, its lifetime only ends when the storage is reused or released (paragraph 1), so it appears that these restrictions cannot apply to such objects.
(See also issues 1116 and 1338.)
Proposed resolution (August, 2010):
This issue is resolved by the resolution of issue 1116.
Related to issue 1027, consider:
int f() {
  union U { double d; } u1, u2;
  (int&)u1.d = 1;
  u2 = u1;
  return (int&)u2.d;
}
Does this involve undefined behavior? 3.8 [basic.life] paragraph 4 seems to say that it's OK to clobber u1 with an int object. Then union assignment copies the object representation, possibly creating an int object in u2 and making the return statement well-defined. If this is well-defined, compilers are significantly limited in the assumptions they can make about type aliasing. On the other hand, the variant where U has an array of unsigned char member must be well-defined in order to support std::aligned_storage.
Suggested resolution: Clarify that this case is undefined, but that adding an array of unsigned char to union U would make it well-defined — if a storage location is allocated with a particular type, it should be undefined to create an object in that storage if it would be undefined to access the stored value of the object through the allocated type.
(See also issues 1027 and 1338.)
Proposed resolution (August, 2010):
Change 3.8 [basic.life] paragraph 1 as follows:
...The lifetime of an object of type T begins when storage with the proper alignment and size for type T is obtained, and either:
- storage with the proper alignment and size for type T is obtained, and
if the object has non-trivial initialization, its initialization is complete., or
if T is trivially copyable, the object representation of another T object is copied into the storage.
The lifetime of an object of type T ends...
Change 3.8 [basic.life] paragraph 4 as follows:
A program may end the lifetime of any object by reusing the storage which the object occupies or by explicitly calling the destructor for an object of a class type with a non-trivial destructor. For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released; however, if there is no explicit call to the destructor or if a delete-expression (5.3.5 [expr.delete]) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior. If a program obtains storage for an object of a particular type A (e.g. with a variable definition or new-expression) and later reuses that storage for an object of another type B such that accessing the stored value of the B object through a glvalue of type A would have undefined behavior (3.10 [basic.lval]), the behavior is undefined. [Example:
int i;
(double&)i = 1.0;   // undefined behavior

struct S { unsigned char alignas(double) ar[sizeof (double)]; } s;
(double&)s = 1.0;   // OK, can access stored double through s because it has an unsigned char subobject
—end example]
Change 3.10 [basic.lval] paragraph 10 as follows:
If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined52:
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type similar (as defined in 4.4 [conv.qual]) to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
a char or unsigned char type,
an aggregate or union type that includes one of the aforementioned types among its elements, bases, or non-static data members (including, recursively, an element, base, or non-static data member of a subaggregate, base, or contained union),.
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
This resolution also resolves issue 1027.
Additional note (August, 2012):
Concerns have been raised regarding the interaction of this change with facilities like std::aligned_storage and memory pools. Care must be taken to achieve the proper balance between supporting type-based optimization techniques and allowing practical storage management.
Additional note (January, 2013):
Several questions have been raised about the wording above. In particular:
Since aggregates and unions cannot have base classes, why are base classes mentioned?
Since unions can now have special member functions, is it still valid to assume that they alias all their member types?
Shouldn't standard-layout classes also be considered and not just aggregates?
Additional note, February, 2014:
According to 1.8 [intro.object] paragraph 1, an object (i.e., a “region of storage”) is created by one of only three means:
An object is created by a definition (3.1 [basic.def]), by a new-expression (5.3.4 [expr.new]) or by the implementation (12.2 [class.temporary]) when needed. The properties of an object are determined when the object is created.
This does not allow for obtaining the storage in other ways, such as via malloc, in determining the lifetime of an object with vacuous initialization (3.8 [basic.life] paragraph 1).
In addition, 3.8 [basic.life] paragraph 1 does not require the storage obtained for an object of type T to be accessed via an lvalue of type T in order to be considered an object of that type. The treatment of “effective type” by C may be helpful here.
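A sketch of the malloc case:

#include <cstdlib>
void h() {
  int* p = static_cast<int*>(std::malloc(sizeof(int)));   // storage obtained by none of the three listed means
  *p = 42;   // has the lifetime of an int object (with vacuous initialization) begun here?
}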
The note in 3.8 [basic.life] paragraph 2 reads,
[Note: The lifetime of an array object starts as soon as storage with proper size and alignment is obtained, and its lifetime ends when the storage which the array occupies is reused or released. 12.6.2 [class.base.init] describes the lifetime of base and member subobjects. —end note]
This wording reflects an earlier version of paragraph 1 that deferred the start of an object's lifetime only for initialization of objects of class type. The note simply emphasized the implication that the lifetime of a POD type or an array began immediately, even if the lifetime of an array's elements began later.
The decomposition of POD types removed the mention of PODs, leaving only the array types, and when the normative text was changed to include aggregates whose members have non-trivial initialization, the note was overlooked.
It is not clear whether it would be better to update the note to emphasize the distinction between aggregates with non-trivial initialization and those without or to delete it entirely.
A possible related normative change to consider is whether the specification of paragraph 1 is sufficiently clear with respect to multidimensional arrays. The current definition of “non-trivial initialization” is:
An object is said to have non-trivial initialization if it is of a class or aggregate type and it or one of its members is initialized by a constructor other than a trivial default constructor.
Presumably the top-level array of an N-dimensional array whose ultimate element type is a class type with non-trivial initialization would also have non-trivial initialization, but it's not clear that this wording says that.
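For example (a sketch):

struct C { C(); };   // non-trivial default constructor
C arr[2][3];         // the elements have non-trivial initialization; does the outer array
                     // object of type C[2][3] also have non-trivial initialization?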
A more radical change that came up in the discussion was whether the undefined behavior resulting from an lvalue-to-rvalue conversion of an uninitialized object in 4.1 [conv.lval] paragraph 1 would be better dealt with as a lifetime violation instead.
According to 3.8 [basic.life] paragraphs 5 and 6, a program has undefined behavior if a pointer or glvalue designating an out-of-lifetime object
is used to access a non-static data member or call a non-static member function of the object
It is not clear what the word “access” means in this context. A reasonable interpretation might be using the pointer or glvalue as the left operand of a class member access expression; alternatively, it might mean to read or write the value of that member, allowing a class member access expression that is used only to form an address or bind a reference.
This needs to be clarified. A relevant consideration is the recent adoption of the resolution of issue 597, which eased the former restriction on simple address manipulations involving out-of-lifetime objects: if base-class offset calculations are now allowed, why not non-static data member offset calculations?
(See also issue 1531 for other uses of the term “access.”)
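A sketch of the distinction (the non-trivial destructor is present only so that the explicit destructor call ends the object's lifetime):

struct S { ~S(); int m; };
void g(S* p) {
  p->~S();
  int* q = &p->m;   // member access used only to form an address: is this an “access”?
}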
Additional note (January, 2013):
A related question is what the phrase “before the constructor begins execution” in 12.7 [class.cdtor] paragraph 1 means:
For an object with a non-trivial constructor, referring to any non-static member or base class of the object before the constructor begins execution results in undefined behavior.
For example:
struct DerivedMember { /* ... */ };

struct Base {
  Base(DerivedMember const&);
};

struct Derived : Base {
  DerivedMember x;
  Derived() : Base(x) {}
};

Derived a;
Is the reference to Derived::x in the mem-initializer valid?
Additional note (March, 2013):
This clause is phrased in terms of the execution of the constructor. However, it is possible for an aggregate to have a non-trivial default constructor and be initialized without executing a constructor. The wording needs to be updated to allow for non-constructor initialization to avoid appearing to imply undefined behavior for an example like:
struct X { std::string s; } x = {};
std::string t = x.s;   // No constructor called for x: undefined behavior?
The rules given in 3.8 [basic.life] paragraph 7 for when an object's lifetime can be ended and a new object created in its storage include the following restriction:
the type of the original object is not const-qualified, and, if a class type, does not contain any non-static data member whose type is const-qualified or a reference type
The intent of this restriction is to permit optimizers to rely upon the original values of const and reference members in their analysis of subsequent code. However, this makes it difficult to implement certain desirable functionality such as optional<T>; see this discussion for more details.
This rule should be reconsidered, at least as far as it applies to unions. If it is decided to keep the rule, acceptable programming techniques for writing safe code when replacing such objects should be outlined in a note.
(See also issue 1404, which will become moot if the restriction is lifted.)
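A sketch of the union case that motivates the reconsideration (the placement new-expression reuses the storage of u.c):

#include <new>
struct C { const int k; };
union U { C c; int i; };
int g() {
  U u = { { 1 } };
  new (&u.c) C{ 3 };   // because C has a const member, the quoted bullet means the name u.c
                       // is not guaranteed to refer to the new object
  return u.c.k;        // well-defined?
}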
The term “allocated storage” is used in several places in the Standard to refer to memory in which an object may be created (dynamic, static, or automatic storage), but it has no formal definition.
According to 3.9 [basic.types] paragraph 4,
The object representation of an object of type T is the sequence of N unsigned char objects taken up by the object of type T, where N equals sizeof(T).
However, it is not clear that a “sequence” can be indexed, as an array can and as is required for the implementation of memcpy and similar code.
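A sketch of the kind of code that relies on indexing the object representation (copy_bytes is a hypothetical helper):

#include <cstddef>
template<typename T>
void copy_bytes(const T& from, unsigned char (&to)[sizeof(T)]) {
  const unsigned char* p = reinterpret_cast<const unsigned char*>(&from);
  for (std::size_t i = 0; i != sizeof(T); ++i)
    to[i] = p[i];   // treats the N unsigned char objects as an indexable array
}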
The aliasing rules given in 3.10 [basic.lval] paragraph 10 rely on the concept of “dynamic type.” The problem is that the dynamic type of an object often cannot be determined (or even sufficiently constrained) at the point at which an optimizer needs to be able to determine whether aliasing might occur or not. For example, consider the function
void foo(int* p, double* q) {
  *p = 42;
  *q = 3.14;
}
An optimizer, on the basis of the existing aliasing rules, might decide that an int* and a double* cannot refer to the same object and reorder the assignments. This reordering, however, could result in undefined behavior if the function foo is called as follows:
void goo() {
  union { int i; double d; } t;
  t.i = 12;
  foo(&t.i, &t.d);
  cout << t.d << endl;
}
Here, the reference to t.d after the call to foo will be valid only if the assignments in foo are executed in the order in which they were written; otherwise, the union will contain an int object rather than a double.
One possibility would be to require that if such aliasing occurs, it be done only via member names and not via pointers.
Notes from the July, 2007 meeting:
This is the same issue as C's DR236. The CWG expressed a desire to address the issue the same way C99 does. The issue also occurs in C++ when placement new is used to end the lifetime of one object and start the lifetime of a different object occupying the same storage.
3.11 [basic.align] speaks of “alignment requirements,” and 3.7.4.1 [basic.stc.dynamic.allocation] requires the result of an allocation function to point to “suitably aligned” storage, but there is no explicit statement of what happens when these requirements are violated (presumably undefined behavior).
According to 4.1 [conv.lval] paragraph 1, applying the lvalue-to-rvalue conversion to any uninitialized object results in undefined behavior. However, character types are intended to allow any data, including uninitialized objects and padding, to be copied (hence the statements in 3.9.1 [basic.fundamental] paragraph 1 that “For character types, all bits of the object representation participate in the value representation” and in 3.10 [basic.lval] paragraph 15 that char and unsigned char types can alias any object). The lvalue-to-rvalue conversion should be permitted on uninitialized objects of character type without evoking undefined behavior.
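A sketch of the case intended to be valid:

void g() {
  unsigned char a;       // indeterminate value
  unsigned char b = a;   // lvalue-to-rvalue conversion of an uninitialized object of character type
}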
The descriptions of explicit (5.2.9 [expr.static.cast] paragraph 9) and implicit (4.11 [conv.mem] paragraph 2) pointer-to-member conversions differ in two significant ways:
(This situation cannot arise in an implicit pointer-to-member conversion where the source value is something like &X::f, since you can only implicitly convert from pointer-to-base-member to pointer-to-derived-member. However, if the source value is the result of an explicit "up-cast," the target type of the conversion might still not contain the member referred to by the source value.)
The first difference seems like an oversight. It is not clear whether the latter difference is intentional or not.
(See also issue 794.)
There are at least a couple of problems in the description of the various id-expressions in 5.1.1 [expr.prim.general]:
Paragraph 4 embodies an incorrect assumption about the syntax of qualified-ids:
The operator :: followed by an identifier, a qualified-id, or an operator-function-id is a primary-expression.
The problem here is that the :: is actually part of the syntax of qualified-id; consequently, “:: followed by... a qualified-id” could be something like “:: ::i,” which is ill-formed. Presumably this should say something like, “A qualified-id with no nested-name-specifier is a primary-expression.”
More importantly, some kinds of id-expressions are not described by 5.1.1 [expr.prim.general]. The structure of this section is that the result, type, and lvalue-ness are specified for each of the cases it covers:
paragraph 4 deals with qualified-ids that have no nested-name-specifier
paragraph 7 deals with bare identifiers and with qualified-ids containing a nested-name-specifier that names a class
paragraph 8 deals with qualified-ids containing a nested-name-specifier that names a namespace
This treatment leaves unspecified all the non-identifier unqualified-ids (operator-function-id, conversion-function-id, and template-id), as well as (perhaps) “:: template-id” (it's not clear whether the “:: followed by a qualified-id” case is supposed to apply to template-ids or not). Note also that the proposed resolution of issue 301 slightly exacerbates this problem by removing the form of operator-function-id that contains a template-argument-list; as a result, references like “::operator+<X>” are no longer covered in 5.1.1 [expr.prim.general].
According to 5.1.1 [expr.prim.general] paragraph 3,
Unlike the object expression in other contexts, *this is not required to be of complete type for purposes of class member access (5.2.5 [expr.ref]) outside the member function body.
Is this special treatment of member access expressions intended to apply only to *this, or does it apply to other ways of specifying the class being defined in the object expression? For example,
struct S {
  int i;
  auto f1() -> decltype((*this).i);     // okay
  auto f2(S& This) -> decltype(This.i); // okay?
  auto f3() -> decltype(((S*)0)->i);    // okay?
};
There is implementation divergence on this question.
If the intent is to allow object expressions other than *this to have the current class type, this specification should be moved from 5.1.1 [expr.prim.general] to 5.2.5 [expr.ref] paragraph 2, which is where the general requirement for complete object expression types is found.
On a related point, the note immediately following the above-cited passage is not quite correct:
[Note: only class members declared prior to the declaration are visible. —end note]
This does not apply when the member is a “member of an unknown specialization,” per 14.6.2.1 [temp.dep.type] paragraph 5 bullet 3 sub-bullet 1; for example,
template<typename T> struct S : T { auto f() -> decltype(this->x); };
Here x is presumed to be a member of the dependent base T and is not “declared prior to the declaration” that refers to it.
The description of the use of this found in 5.1.1 [expr.prim.general] paragraphs 3 and 4 allows it to appear in the declaration of a non-static member function following the optional cv-qualifier-seq and in the brace-or-equal-initializer of a non-static data member; all other uses are prohibited. These restrictions appear to allow questionable uses of this in several contexts. For example:
template <typename T> struct Fish { static const bool value = true; };

struct Other {
  int p();
  auto q() -> decltype(p()) *;
};

class Outer {
  // The following declares a member function of class Other.
  // Is this interpreted as Other* or Outer*?
  friend auto Other::q() -> decltype(this->p()) *;

  int g();
  int f() {
    extern void f(decltype(this->g()) *);
    struct Inner {
      // The following are all within the declaration of Outer::f().
      // Is this Outer* or Inner*?
      static_assert(Fish<decltype(this->g())>::value, "");
      enum { X = Fish<decltype(this->f())>::value };
      struct Inner2 : Fish<decltype(this->g())> { };
      friend void f(decltype(this->g()) *);
      friend auto Other::q() -> decltype(this->p()) *;
    };
    return 0;
  }
};
struct A {
  int f();
  bool b = [] {
    struct Local {
      static_assert(sizeof this->f() == sizeof(int), "");  // A or Local?
    };
  };
};
There is implementation divergence on the treatment of these examples.
It is not clear whether the template keyword should be accepted in an example like
template<typename> struct s {};
::template s<void> q; // innocuous disambiguation?
Although it is accepted by the grammar, the verbiage in 5.1.1 [expr.prim.general] paragraph 10 does not mention the possibility, while the preceding paragraph dealing with class qualification calls it out explicitly.
Notes from the June, 2014 meeting:
CWG agreed that this usage should be accepted.
Consider the following example:
void f(int i) {
  auto l1 = [i] {
    auto l2 = [&i] {
      ++i;   // Well-formed?
    };
  };
}
Because the l1 lambda is not marked as mutable, its operator() is const; however, it is not clear from the wording of 5.1.2 [expr.prim.lambda] paragraph 16 whether the captured member of the enclosing lambda is considered const or not.
According to 5.1.2 [expr.prim.lambda] paragraph 11, a variable is implicitly captured if it is odr-used. In the following example,
struct P {
  virtual ~P();
};

P &f(int&);
int f(const int&);

void g(int x) {
  [=] { typeid(f(x)); };
}
x is only odr-used if the operand of typeid is a polymorphic lvalue; otherwise, the operand is unevaluated (5.2.8 [expr.typeid] paragraphs 2-3). Whether the operand is a polymorphic lvalue depends on overload resolution in this case, which depends on whether x is captured or not: if x is captured, since the lambda is not mutable, the type of x in the body of the lambda is const int, while if it is not captured, it is just int. However, the const int version of f returns int and the int version of f returns a polymorphic lvalue, leading to a conundrum: x is only captured if it is not captured, and vice versa.
Notes from the October, 2012 meeting:
The approach favored by CWG was to specify that the operand of typeid is considered to be odr-used for the purpose of determining capture.
According to 5.1.2 [expr.prim.lambda] paragraph 6,
The closure type for a non-generic lambda-expression with no lambda-capture has a public non-virtual non-explicit const conversion function to pointer to function with C++ language linkage (7.5 [dcl.link]) having the same parameter and return types as the closure type's function call operator.
This does not specify whether the conversion function is noexcept(true) or noexcept(false). It might be helpful to nail that down.
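For example (illustrative only), the question is observable through the noexcept operator:

void h() {
  auto l = [](int i) { return i; };
  using F = int (*)(int);
  bool b = noexcept(static_cast<F>(l));   // noexcept(true) or noexcept(false)?
  (void)b;
}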
According to 5.1.2 [expr.prim.lambda] paragraph 19,
Every occurrence of decltype((x)) where x is a possibly parenthesized id-expression that names an entity of automatic storage duration is treated as if x were transformed into an access to a corresponding data member of the closure type that would have been declared if x were an odr-use of the denoted entity.
This formulation is problematic because it assumes that x can be captured and, if captured, would result in a member of the closure class. The former is not true if the lambda has no capture-default, and the latter is not guaranteed if the capture-default is &.
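A sketch of the two problematic cases (the names are invented for illustration):

void f() {
  int x = 0;
  [&] {
    using T = decltype((x));  // capture-default &: no closure member for x is guaranteed
  };
  [] {
    using T = decltype((x));  // no capture-default: x cannot be captured at all, so
                              // there is no member the rule could refer to
  };
}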
According to 5.1.2 [expr.prim.lambda] paragraph 6,
The closure type for a non-generic lambda-expression with no lambda-capture has a public non-virtual non-explicit const conversion function to pointer to function with C++ language linkage (7.5 [dcl.link]) having the same parameter and return types as the closure type's function call operator. The value returned by this conversion function shall be the address of a function that, when invoked, has the same effect as invoking the closure type's function call operator.
This does not mention the object for which the function call operator would be invoked (although since there is no capture, presumably the function call operator makes no use of the object pointer). This could be addressed by relating the behavior of the function call operator to a notional temporary, or the function call operator for such closure classes could be made static.
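For example (illustrative only):

void g() {
  auto l = [](int i) { return i + 1; };
  int (*fp)(int) = l;   // uses the conversion function
  int r = fp(41);       // "same effect as invoking the closure type's function
                        // call operator", but invoked on what object?
  (void)r;
}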
According to 5.2.2 [expr.call] paragraph 11,
If a function call is a prvalue of object type:
if the function call is either
the operand of a decltype-specifier or
the right operand of a comma operator that is the operand of a decltype-specifier,
a temporary object is not introduced for the prvalue. The type of the prvalue may be incomplete. [Note: as a result, storage is not allocated for the prvalue and it is not destroyed; thus, a class type is not instantiated as a result of being the type of a function call in this context. This is true regardless of whether the expression uses function call notation or operator notation (13.3.1.2 [over.match.oper]). —end note] [Note: unlike the rule for a decltype-specifier that considers whether an id-expression is parenthesized (7.1.6.2 [dcl.type.simple]), parentheses have no special meaning in this context. —end note]
otherwise, the type of the prvalue shall be complete.
Thus, an example like
template <class T> struct A: T { };
template <class T> A<T> f(T) { return A<T>(); };
decltype(f(42)) *p;
is well-formed. However, a function template specialization in which the return type is an abstract class should be a deduction failure, per 14.8.2 [temp.deduct] paragraph 8, last bullet:
...
Attempting to create a function type in which a parameter type or the return type is an abstract class type (10.4 [class.abstract]).
The requirement that the return type in a function call in a decltype-specifier not be instantiated prevents the detection of this deduction failure in an example like:
template <class T> struct A { virtual void f() = 0; };
template <class T> A<T> f(T) { return A<T>(); };
decltype(f(42)) *p;
It is not clear how this should be resolved.
(See also issue 1640.)
According to 5.2.2 [expr.call] paragraph 4,
The lifetime of a parameter ends when the function in which it is defined returns. The initialization and destruction of each parameter occurs within the context of the calling function.
This presumably means that the destruction of the parameter object occurs before the end of the full-expression, unlike temporaries. This is not what current implementations do, however. It is not clear that a change to treat parameter objects like temporaries, to match existing practice, would be an improvement, as it would result in ABI breakage for implementations that destroy parameters in the called function.
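The difference is observable in examples like the following (the names are invented for illustration):

struct Tracer { ~Tracer(); };
void g(Tracer);
void h();

void f() {
  (g(Tracer()), h());   // is g's parameter destroyed before h() runs, or only
                        // at the end of the full-expression?
}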
See also issue 1935 for a related question regarding the handling of arguments to a placement allocation function and placement deallocation function.
Notes from the June, 2014 meeting:
CWG decided to make it unspecified whether parameter objects are destroyed immediately following the call or at the end of the full-expression to which the call belongs. This approach also resolves issue 1935.
According to 5.2.3 [expr.type.conv] paragraph 4,
Similarly, a simple-type-specifier or typename-specifier followed by a braced-init-list creates a temporary object of the specified type direct-list-initialized (8.5.4 [dcl.init.list]) with the specified braced-init-list, and its value is that temporary object as a prvalue.
This wording does not handle the case where T is a reference type: it is not possible to create a temporary object of that type, and presumably the result would be an xvalue, not a prvalue.
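For example (illustrative only):

void f() {
  int i = 0;
  using R = int&;
  R{i};   // the specified type is a reference type: no temporary object of that
          // type can be created, and the result is presumably an xvalue
}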
An example like
typedef int T; typedef const T CT; void blah2(T *a) { a->CT::~T(); }
is ill-formed, because 5.2.4 [expr.pseudo] paragraph 2 requires that the two type-names in the qualified-id be the same type. The corresponding case for a real destructor, however, is allowed because of the provision in 9.1 [class.name] paragraph 5 ignoring cv-qualifiers in a typedef-name referring to a class type. The specification for pseudo-destructors should be adjusted accordingly.
At least a couple of places in the IS state that indirection through a null pointer produces undefined behavior: 1.9 [intro.execution] paragraph 4 gives "dereferencing the null pointer" as an example of undefined behavior, and 8.3.2 [dcl.ref] paragraph 4 (in a note) uses this supposedly undefined behavior as justification for the nonexistence of "null references."
However, 5.3.1 [expr.unary.op] paragraph 1, which describes the unary "*" operator, does not say that the behavior is undefined if the operand is a null pointer, as one might expect. Furthermore, at least one passage gives dereferencing a null pointer well-defined behavior: 5.2.8 [expr.typeid] paragraph 2 says
If the lvalue expression is obtained by applying the unary * operator to a pointer and the pointer is a null pointer value (4.10 [conv.ptr]), the typeid expression throws the bad_typeid exception (18.7.3 [bad.typeid]).
This is inconsistent and should be cleaned up.
Bill Gibbons:
At one point we agreed that dereferencing a null pointer was not undefined; only using the resulting value had undefined behavior.
For example:
char *p = 0; char *q = &*p;
Similarly, dereferencing a pointer to the end of an array should be allowed as long as the value is not used:
char a[10]; char *b = &a[10]; // equivalent to "char *b = &*(a+10);"
Both cases come up often enough in real code that they should be allowed.
Mike Miller:
I can see the value in this, but it doesn't seem to be well reflected in the wording of the Standard. For instance, presumably *p above would have to be an lvalue in order to be the operand of "&", but the definition of "lvalue" in 3.10 [basic.lval] paragraph 2 says that "an lvalue refers to an object." What's the object in *p? If we were to allow this, we would need to augment the definition to include the result of dereferencing null and one-past-the-end-of-array.
Tom Plum:
Just to add one more recollection of the intent: I was very happy when (I thought) we decided that it was only the attempt to actually fetch a value that creates undefined behavior. The words which (I thought) were intended to clarify that are the first three sentences of the lvalue-to-rvalue conversion, 4.1 [conv.lval]:
An lvalue (3.10 [basic.lval]) of a non-function, non-array type T can be converted to an rvalue. If T is an incomplete type, a program that necessitates this conversion is ill-formed. If the object to which the lvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, a program that necessitates this conversion has undefined behavior.
In other words, it is only the act of "fetching", of lvalue-to-rvalue conversion, that triggers the ill-formed or undefined behavior. Simply forming the lvalue expression, and then for example taking its address, does not trigger either of those errors. I described this approach to WG14 and it may have been incorporated into C 1999.
Mike Miller:
If we admit the possibility of null lvalues, as Tom is suggesting here, that significantly undercuts the rationale for prohibiting "null references" -- what is a reference, after all, but a named lvalue? If it's okay to create a null lvalue, as long as I don't invoke the lvalue-to-rvalue conversion on it, why shouldn't I be able to capture that null lvalue as a reference, with the same restrictions on its use?
I am not arguing in favor of null references. I don't want them in the language. What I am saying is that we need to think carefully about adopting the permissive approach of saying that it's all right to create null lvalues, as long as you don't use them in certain ways. If we do that, it will be very natural for people to question why they can't pass such an lvalue to a function, as long as the function doesn't do anything that is not permitted on a null lvalue.
If we want to allow &*(p=0), maybe we should change the definition of "&" to handle dereferenced null specially, just as typeid has special handling, rather than changing the definition of lvalue to include dereferenced nulls, and similarly for the array_end+1 case. It's not as general, but I think it might cause us fewer problems in the long run.
Notes from the October 2003 meeting:
See also issue 315, which deals with the call of a static member function through a null pointer.
We agreed that the approach in the standard seems okay: p = 0; *p; is not inherently an error. An lvalue-to-rvalue conversion would give it undefined behavior.
Proposed resolution (October, 2004):
(Note: the resolution of issue 453 also resolves part of this issue.)
Add the indicated words to 3.10 [basic.lval] paragraph 2:
An lvalue refers to an object or function or is an empty lvalue (5.3.1 [expr.unary.op]).
Add the indicated words to 5.3.1 [expr.unary.op] paragraph 1:
The unary * operator performs indirection: the expression to which it is applied shall be a pointer to an object type, or a pointer to a function type and the result is an lvalue referring to the object or function to which the expression points, if any. If the pointer is a null pointer value (4.10 [conv.ptr]) or points one past the last element of an array object (5.7 [expr.add]), the result is an empty lvalue and does not refer to any object or function. An empty lvalue is not modifiable. If the type of the expression is “pointer to T,” the type of the result is “T.” [Note: a pointer to an incomplete type (other than cv void) can be dereferenced. The lvalue thus obtained can be used in limited ways (to initialize a reference, for example); this lvalue must not be converted to an rvalue, see 4.1 [conv.lval].—end note]
Add the indicated words to 4.1 [conv.lval] paragraph 1:
If the object to which the lvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, or if the lvalue is an empty lvalue (5.3.1 [expr.unary.op]), a program that necessitates this conversion has undefined behavior.
Change 1.9 [intro.execution] as indicated:
Certain other operations are described in this International Standard as undefined (for example, the effect of dereferencing the null pointer division by zero).
Note (March, 2005):
The 10/2004 resolution interacts with the resolution of issue 73. We added wording to 3.9.2 [basic.compound] paragraph 3 to the effect that a pointer containing the address one past the end of an array is considered to “point to” another object of the same type that might be located there. The 10/2004 resolution now says that it would be undefined behavior to use such a pointer to fetch the value of that object. There is at least the appearance of conflict here; it may be all right, but it at least needs to be discussed further.
Notes from the April, 2005 meeting:
The CWG agreed that there is no contradiction between this direction and the resolution of issue 73. However, “not modifiable” is a compile-time concept, while in fact this deals with runtime values and thus should produce undefined behavior instead. Also, there are other contexts in which lvalues can occur, such as the left operand of . or .*, which should also be restricted. Additional drafting is required.
(See also issue 1102.)
The preincrement (5.3.2 [expr.pre.incr]) and postincrement (5.2.6 [expr.post.incr]) operators can be applied to operands of type bool, setting the operand to true, but this use is deprecated. Can it now be removed altogether?
It is not clear from 5.3.4 [expr.new] whether a deleted operator delete is referenced by a new-expression in which there is no initialization or in which the initialization cannot throw an exception, rendering the program ill-formed. (The question also arises as to whether such a new-expression constitutes a “use” of the deallocation function in the sense of 3.2 [basic.def.odr].)
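For example (illustrative only):

struct S {
  void operator delete(void*) = delete;
};

S* p = new S;   // the initialization cannot throw; is the deleted operator delete
                // nevertheless referenced (and "used"), making the program ill-formed?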
Notes from the July, 2009 meeting:
The rationale for defining a deallocation function as deleted would presumably be to prevent such objects from being freed. Treating the new-expression as a use of such a deallocation function would mean that such objects could not be created in the first place. There is already an exemption from freeing an object if “a suitable deallocation function [cannot] be found;” a deleted deallocation function should be treated similarly.
The resolution of issue 1796 addressed only the relationship of “bits” with the null character value. The values and arrangements of bits within an object are also mentioned in other contexts; these should also be considered for revision. For example, 5.8 [expr.shift] paragraph 2 says,
The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled.
This appears to place constraints on the bit representation, which (as noted in issue 1796) is not accessible to the program. A similar statement appears in paragraph 3 for >>.
The specification of the bitwise operations in 5.11 [expr.bit.and], 5.12 [expr.xor], and 5.13 [expr.or] uses the undefined term “bitwise” in describing the operations, without specifying whether it is the value or object representation that is in view.
Part of the resolution of this might be to define “bit” (which is otherwise currently undefined in C++) as a value of a given power of 2.
Notes from the June, 2014 meeting:
CWG decided to reformulate the description of the operations themselves to avoid references to bits, splitting off the larger questions of defining “bit” and the like to issue 1943 for further consideration.
Pointer equality is defined by reference to the addresses of the objects designated by the pointer values, reflecting the implementation technique of most/all compilers. However, this definition is intrinsically a runtime property, and such a description is inappropriate with respect to constexpr expressions, which must deal with pointer comparisons without necessarily knowing the runtime layout of the objects involved. A better definition usable at compile time is needed.
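For example, a comparison such as the following must be evaluated during translation, where no runtime addresses are available (the names are invented for illustration):

constexpr int a = 1, b = 2;
constexpr bool eq = &a == &b;   // must be evaluated without knowing the runtime
                                // addresses of a and b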
In an example like,
struct B;
struct A {
  A();
  A(B&) = delete;
  operator B&();
};
struct B : A {} b;
B &c = true ? A() : b;
the rules of 5.16 [expr.cond] paragraph 3 make this ambiguous: A() can be implicitly converted to the type “lvalue reference to B,” and b satisfies the constraints to be converted to an A prvalue (it's of a type derived from A and the cv-qualifiers are okay). The first sub-bullet of the third bullet is clear that we do not actually try to create an A temporary from b, so we don't notice that it invokes a deleted constructor and rule out that conversion.
If the deleted conversion is in the other sense, the result is unambiguous:
struct B;
struct A {
  A();
  A(B&);
  operator B&() = delete;
};
struct B : A {} b;
B &c = true ? A() : b;
A() can no longer be implicitly converted to the type “lvalue reference to B”: since the declaration B &t = A(); is not well formed (it invokes a deleted function), there is no implicit conversion. So we unambiguously convert the third operand to an A prvalue.
These should presumably either both be valid or both invalid. EDG and gcc call both ambiguous.
Notes from the June, 2014 meeting:
The wording should be changed to handle the convertibility test more like overload resolution: the conversion "exists" if the conversion function is declared, but is ill-formed if it would actually be used.
According to 5.16 [expr.cond] paragraph 3,
if the second and third operand have different types and either has (possibly cv-qualified) class type, or if both are glvalues of the same value category and the same type except for cv-qualification, an attempt is made to convert each of those operands to the type of the other. The process for determining whether an operand expression E1 of type T1 can be converted to match an operand expression E2 of type T2 is defined as follows:
If E2 is an lvalue: E1 can be converted to match E2 if E1 can be implicitly converted (Clause 4 [conv]) to the type “lvalue reference to T2”, subject to the constraint that in the conversion the reference must bind directly (8.5.3 [dcl.init.ref]) to an lvalue.
If two bit-field glvalues have exactly the same scalar type, paragraph 3 does not apply (two non-class operands must differ in at least cv-qualification). For an example like
struct S {
  int i:3;
  const int j:4;
} s;
int k = true ? s.i : s.j;
the condition is satisfied. The intent is that S::i can be converted to const int but S::j cannot be converted to int, so the result should be a bit-field lvalue of type const int. However, the test for convertibility is phrased in terms of direct reference binding, which is inapplicable to bit-fields, resulting in neither conversion succeeding, leading to categorizing the expression as ambiguous.
The specification of 5.17 [expr.ass] paragraph 9 is presumably intended to allow use of a braced-init-list as the operand of a compound assignment operator as well as a simple assignment operator, although the normative wording does not explicitly say so. (The example in that paragraph does include
complex<double> z;
z += { 1, 2 }; // meaning z.operator+=({1,2})
for instance, which could be read to imply compound assignment operators for scalar types as well.)
However, the details of how this is to be implemented are not clear. Paragraph 7 says,
The behavior of an expression of the form E1 op = E2 is equivalent to E1 = E1 op E2 except that E1 is evaluated only once.
Applying this pattern literally to a braced-init-list yields invalid code: x += {1} would become x = x + {1}, which is non-syntactic.
Another problem is how to apply the prohibition against narrowing conversions to a compound assignment. For example,
char c; c += {1};
would presumably always be a narrowing error, because after integral promotions, the type of c+1 is int. The similar issue 1078 was classified as "NAD" because the workaround was simply to add a cast to suppress the error; however, there is no place to put a similar cast in a compound assignment.
Notes from the October, 2012 meeting:
The incorrect description of the meaning of a compound assignment with a braced-init-list should be fixed by CWG. The question of whether it makes sense to apply narrowing rules to such assignments is better addressed by EWG.
According to 5.18 [expr.comma] paragraph 1,
The type and value of the result are the type and value of the right operand; the result is of the same value category as its right operand, and is a bit-field if its right operand is a glvalue and a bit-field.
The description of a bit-field result seems to indicate that the operand might not be a glvalue but could still be a bit-field. There doesn't appear to be a normative prohibition against prvalue bit-fields, so one should presumably be added, and this wording should be adjusted to remove the suggestion that such a thing might exist.
The current wording of the Standard is not sufficiently clear regarding the interaction of class scope (which treats the bodies of member functions as effectively appearing after the class definition is complete) and the use of constexpr member functions within the class definition in contexts requiring constant expressions. For example, an array bound cannot use a constexpr member function that relies on the completeness of the class or on members that have not yet been declared, but the current wording does not appear to state that.
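A sketch of the questionable case (the names are invented for illustration):

struct S {
  static constexpr int size() { return sizeof(S); }   // relies on S being complete
  char buf[size()];   // a constant expression is required here, while S is still
                      // incomplete; presumably ill-formed, but the wording is silent
};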
Additional note (October, 2013):
This question also affects function return type deduction (the auto specifier) in member functions. For example, the following should presumably be prohibited, but the current wording is not clear:
struct S {
  static auto f() { return 42; }
  auto g() -> decltype(f()) { return f(); }
};
Some classes that would produce a constant when initialized by value-initialization are not considered literal types. For example:
struct A { int a; };   // non-constexpr default constructor
struct B : A {};       // non-literal type
The Standard should make clear that a constexpr member function cannot be used in a constant expression until its class is complete. For example:
template<typename T> struct C {
  template<typename T2> static constexpr bool _S_chk() {
    return false;
  }
  static const bool __value = _S_chk<int>();
};
C<double> c;
Current implementations accept this, although they reject the corresponding non-template case:
struct C {
  static constexpr bool _S_chk() {
    return false;
  }
  static const bool __value = _S_chk();
};
C c;
Presumably the template case should be handled consistently with the non-template case.
It would be helpful to have a single grammar term for expression and braced-init-list, which often occur together in the text. In particular, 6.5.4 [stmt.ranged] paragraph 1 allows both, but the description of __RangeT refers only to the expression case; such errors would be less likely if the common term were available.
According to the general rule for declarations in 3.3.2 [basic.scope.pdecl] paragraph 1,
The point of declaration for a name is immediately after its complete declarator (Clause 8 [dcl.decl]) and before its initializer (if any), except as noted below.
However, the rewritten expansion of the range-based for statement in 6.5.4 [stmt.ranged] paragraph 1 contradicts this general rule, so that the index variable is not visible in the range-init:
for (int i : {i}) ; // error: i not in scope
(See also issue 1498 for another question regarding the rewritten form of the range-based for.)
Notes from the October, 2012 meeting:
EWG is discussing issue 900 and the outcome of that discussion should be taken into consideration in addressing this issue.
Notes from the April, 2013 meeting:
The approach favored by CWG for resolving this issue is to change the point of declaration of the variable in the for-range-declaration to be after the ).
A simple example like
int main() {
  int k = 0;
  for (auto x : { 1, 2, 3 })
    k += x;
  return k;
}
requires that the <initializer_list> header be included, because the expansion of the range-based for involves a declaration of the form
auto &&__range = { 1, 2, 3 };
and a braced-init-list causes auto to be deduced as a specialization of std::initializer_list. This seems unnecessary and could be eliminated by specifying that __range has an array type for cases like this.
(It should be noted that EWG is considering a proposal to change auto deduction for cases involving braced-init-lists, so resolution of this issue should be coordinated with that effort.)
Notes from the September, 2013 meeting:
CWG felt that this issue should be resolved by using the array variant of the range-based for implementation.
Because the restriction that a trailing-return-type can appear only in a declaration with “the single type-specifier auto” (8.3.5 [dcl.fct] paragraph 2) is a semantic, not a syntactic, restriction, it does not influence disambiguation, which is “purely syntactic” (6.8 [stmt.ambig] paragraph 3). Consequently, some previously unambiguous expressions are now ambiguous. For example:
struct A {
  A(int *);
  A *operator()(void);
  int B;
};

int *p;
typedef struct BB { int C[2]; } *B, C;

void foo() {
  // The following line becomes invalid under C++0x:
  A (p)()->B;           // ill-formed function declaration

  // In the following,
  //  - B()->C is either type-id or class member access expression
  //  - B()->C[1] is either type-id or subscripting expression
  // N3126 subclause 8.2 [dcl.ambig.res] does not mention an ambiguity
  // with these forms of expression
  A a(B ()->C);         // function declaration or object declaration
  sizeof(B ()->C[1]);   // sizeof(type-id) or sizeof on an expression
}
Notes from the March, 2011 meeting:
CWG agreed that the presence of auto should be considered in disambiguation, even though it is formally handled semantically rather than syntactically.
According to 6.8 [stmt.ambig] paragraph 3,
The disambiguation is purely syntactic; that is, the meaning of the names occurring in such a statement, beyond whether they are type-names or not, is not generally used in or changed by the disambiguation. Class templates are instantiated as necessary to determine if a qualified name is a type-name. Disambiguation precedes parsing, and a statement disambiguated as a declaration may be an ill-formed declaration. If, during parsing, a name in a template parameter is bound differently than it would be bound during a trial parse, the program is ill-formed. No diagnostic is required. [Note: This can occur only when the name is declared earlier in the declaration. —end note]
The statement about template parameters is confusing (and not helped by the fact that the example that follows illustrates the general rule for declarations and does not involve any template parameters). It is attempting to say that a program is ill-formed if a template argument of a class template specialization has a different value in the two parses. With decltype this can now apply to other kinds of templates as well, so the wording should be clarified and made more general.
According to 7.1.1 [dcl.stc] paragraph 1,
If a storage-class-specifier appears in a decl-specifier-seq, there can be no typedef specifier in the same decl-specifier-seq and the init-declarator-list of the declaration shall not be empty...
This obviously should apply to mutable but does not because mutable applies to member-declarator-lists, not init-declarator-lists. Similarly, in 7.1.6.1 [dcl.type.cv] paragraph 1,
If a cv-qualifier appears in a decl-specifier-seq, the init-declarator-list of the declaration shall not be empty.
this restriction should apply to member declarations as well.
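For illustration, member declarations of the following kind are apparently not covered by the quoted rules:

struct S {
  mutable;      // no member-declarator; the quoted rules do not cover this case
  const int;    // cv-qualifier with no member-declarator; likewise
};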
With the resolution of issue 1044, there is no need to say that the name of the alias cannot appear in the type-id of the declaration.
According to 7.1.5 [dcl.constexpr] paragraph 6,
If the instantiated template specialization of a constexpr function template or member function of a class template would fail to satisfy the requirements for a constexpr function or constexpr constructor, that specialization is still a constexpr function or constexpr constructor, even though a call to such a function cannot appear in a constant expression.
The restriction on appearing in a constant expression assumes the previous wording that made such a specialization non-constexpr, and a call to a non-constexpr function cannot appear in a constant expression. With the current wording, however, there is no normative restriction against calls to such specializations. 5.19 [expr.const] should be updated to include such a prohibition.
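For illustration (the names are invented), a specialization of the following kind remains a constexpr function under the quoted wording:

struct NonLiteral { NonLiteral(); };          // not a literal type

template<typename T> constexpr T f(T t) { return t; }

// f<NonLiteral> cannot satisfy the requirements for a constexpr function, yet
// nothing in 5.19 [expr.const] currently prohibits a call to it from appearing
// in a constant expression:
// constexpr int k = (f(NonLiteral()), 0);    // presumably should be ill-formed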
It is not clear whether the auto specifier can appear in a trailing-return-type.
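For example, it is unclear whether a declaration like the following is intended to be well-formed:

auto f() -> auto { return 42; }   // auto within the trailing-return-type?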
The current wording allows something like
struct S { operator auto() { return 0; } } s;
If it is intended to be permitted, the details of its handling are not clear. Also, a similar syntax has been discussed as a possible future extension for dealing with proxy types in deduction which, if adopted, could cause confusion.
Additional note, November, 2013:
Doubt was expressed during the 2013-11-25 drafting review teleconference as to the usefulness of this provision. It is therefore being left open for further consideration after C++14 is finalized.
Notes from the February, 2014 meeting:
CWG continued to express doubt as to the usefulness of this construct but felt that if it is permitted, the rules need clarification.
7.1.6 [dcl.type] paragraph 2 describes the auto specifier as “a placeholder for a type to be deduced.” Elsewhere, the Standard refers to the type represented by the auto specifier as a “placeholder type.” This usage has been deemed confusing by some, requiring either a definition of one or both terms or rewording to avoid them.
The scope in which the names of enumerators are entered for a member unscoped opaque enumeration is not clear. According to 7.2 [dcl.enum] paragraph 10,
Each enum-name and each unscoped enumerator is declared in the scope that immediately contains the enum-specifier.
In the case of a member opaque enumeration defined outside its containing class, however, it is not clear whether the enumerator names are declared in the class scope or in the lexical scope containing the definition. Declaring them in the class scope would be a violation of 9.2 [class.mem] paragraph 1:
The member-specification in a class definition declares the full set of members of the class; no member can be added elsewhere.
Declaring the names in the lexical scope containing the definition would be contrary to the example in 14.5.1.4 [temp.mem.enum] paragraph 1:
template<class T> struct A {
  enum E : T;
};
A<int> a;
template<class T> enum A<T>::E : T { e1, e2 };
A<int>::E e = A<int>::e1;
There also appear to be problems with the rules for dependent types and members of the current instantiation.
According to 7.2 [dcl.enum] paragraph 7,
For an enumeration whose underlying type is fixed, the values of the enumeration are the values of the underlying type. Otherwise, for an enumeration where emin is the smallest enumerator and emax is the largest, the values of the enumeration are the values in the range bmin to bmax, defined as follows: Let K be 1 for a two's complement representation and 0 for a one's complement or sign-magnitude representation. bmax is the smallest value greater than or equal to max(|emin| - K, |emax|) and equal to 2^M - 1, where M is a non-negative integer. bmin is zero if emin is non-negative and -(bmax + K) otherwise. The size of the smallest bit-field large enough to hold all the values of the enumeration type is max(M, 1) if bmin is zero and M + 1 otherwise.
The result of these calculations is that the number of bits required for
enum { N = -1, Z = 0 }
is 1, but the number required for
enum { N = -1 }
is 2. This is surprising. This could be fixed by changing |emax| to emax.
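Working through the computation with K = 1 (two's complement) makes the asymmetry explicit: for the first enumeration, max(|emin| - K, |emax|) = max(0, 0) = 0, so bmax = 0 with M = 0, bmin = -1, and the required width is M + 1 = 1 bit; for the second, max(|emin| - K, |emax|) = max(0, 1) = 1, so bmax = 1 with M = 1, bmin = -2, and the required width is M + 1 = 2 bits. With emax in place of |emax|, the second case would also yield M = 0 and a width of 1 bit.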
There is no syntax currently for declaring an explicit specialization of a member scoped enumeration. A declaration (not a definition) of such an explicit specialization most resembles an opaque-enum-declaration, but the grammar for that requires that the name be a simple identifier, which will not be the case for an explicit specialization of a member enumeration. This could be remedied by adding a nested-name-specifier to the grammar with a restriction that a nested-name-specifier only appear in an explicit specialization.
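For illustration (the names are invented), the declaration in question would be of this form:

template<typename T> struct S {
  enum class E : int;
};

template<> enum class S<int>::E : int;   // intended as a declaration (not a definition)
                                         // of an explicit specialization; the grammar for
                                         // opaque-enum-declaration does not permit the
                                         // nested-name-specifier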
8.3 [dcl.meaning] paragraph 1 and 9 [class] paragraph 11 prohibit decltype-qualified declarators and class names, respectively. There is no such prohibition in 7.2 [dcl.enum] for enumeration names. Presumably that is an oversight that should be rectified.
7.3.1.2 [namespace.memdef] paragraph 3 says,
If a friend declaration in a non-local class first declares a class or function the friend class or function is a member of the innermost enclosing namespace... When looking for a prior declaration of a class or a function declared as a friend, scopes outside the innermost enclosing namespace scope are not considered.

It is not clear from this passage how to determine whether an entity is "first declared" in a friend declaration. One question is whether a using-declaration influences this determination. For instance:
void foo();
namespace A {
  using ::foo;
  class X {
    friend void foo();
  };
}

Is the friend declaration a reference to ::foo or a different foo?
Part of the question involves determining the meaning of the word "synonym" in 7.3.3 [namespace.udecl] paragraph 1:
A using-declaration introduces a name into the declarative region in which the using-declaration appears. That name is a synonym for the name of some entity declared elsewhere.

Is "using ::foo;" the declaration of a function or not?
More generally, the question is how to describe the lookup of the name in a friend declaration.
John Spicer: When a declaration specifies an unqualified name, that name is declared, not looked up. There is a mechanism in which that declaration is linked to a prior declaration, but that mechanism is not, in my opinion, via normal name lookup. So, the friend always declares a member of the nearest namespace scope regardless of how that name may or may not already be declared there.
Mike Miller: 3.4.1 [basic.lookup.unqual] paragraph 7 says:
A name used in the definition of a class X outside of a member function body or nested class definition shall be declared in one of the following ways:... [Note: when looking for a prior declaration of a class or function introduced by a friend declaration, scopes outside of the innermost enclosing namespace scope are not considered.]

The presence of this note certainly implies that this paragraph describes the lookup of names in friend declarations.
John Spicer: It most certainly does not. If that section described the friend lookup it would yield the incorrect results for the friend declarations of f and g below. I don't know why that note is there, but it can't be taken to mean that that is how the friend lookup is done.
void f(){}
void g(){}

class B {
  void g();
};

class A : public B {
  void f();
  friend void f();   // ::f not A::f
  friend void g();   // ::g not B::g
};
Mike Miller: If so, the lookups for friend functions and classes behave differently. Consider the example in 3.4.4 [basic.lookup.elab] paragraph 3:
struct Base {
  struct Data;         // OK: declares nested Data
  friend class Data;   // OK: nested Data is a friend
};
If the friend declaration is not a reference to ::foo, there is a related but separate question: does the friend declaration introduce a conflicting (albeit "invisible") declaration into namespace A, or is it simply a reference to an as-yet undeclared (and, in this instance, undeclarable) A::foo? Another part of the example in 3.4.4 [basic.lookup.elab] paragraph 3 is related:
struct Data {
  friend struct Glob;   // OK: Refers to (as yet) undeclared Glob
                        // at global scope.
};
John Spicer: You can't refer to something that has not yet been declared. The friend is a declaration of Glob, it just happens to declare it in a such a way that its name cannot be used until it is redeclared.
(A somewhat similar question has been raised in connection with issue 36. Consider:
namespace N { struct S { }; } using N::S; struct S; // legal?
According to 9.1 [class.name] paragraph 2,
A declaration consisting solely of class-key identifier ; is either a redeclaration of the name in the current scope or a forward declaration of the identifier as a class name.
Should the elaborated type declaration in this example be considered a redeclaration of N::S or an invalid forward declaration of a different class?)
(See also issues 95, 136, 139, 143, 165, and 166, as well as paper J16/00-0006 = WG21 N1229.)
The following came up recently on comp.lang.c++.moderated (edited for brevity):
namespace N1 {
  template<typename T> void f( T* x ) {
    // ... other stuff ...
    delete x;
  }
}

namespace N2 {
  using N1::f;

  template<> void f<int>( int* );   // A: ill-formed

  class Test {
    ~Test() { }
    friend void f<>( Test* x );     // B: ill-formed?
  };
}
I strongly suspect, but don't have standardese to prove, that the friend declaration in line B is ill-formed. Can someone show me the text that allows or disallows line B?
Here's my reasoning: Writing "using" to pull the name into namespace N2 merely allows code in N2 to use the name in a call without qualification (per 7.3.3 [namespace.udecl]). But just as declaring a specialization must be done in the namespace where the template really lives (hence line A is ill-formed), I suspect that declaring a specialization as a friend must likewise be done using the original namespace name, not obliquely through a "using". I see nothing in 7.3.3 [namespace.udecl] that would permit this use. Is there?
Andrey Tarasevich: 14.5.4 [temp.friend] paragraph 2 seems to get pretty close: "A friend declaration that is not a template declaration and in which the name of the friend is an unqualified 'template-id' shall refer to a specialization of a function template declared in the nearest enclosing namespace scope".
Herb Sutter: OK, thanks. Then the question in this is the word "declared" -- in particular, we already know we cannot declare a specialization of a template in any other namespace but the original one.
John Spicer: This seems like a simple question, but it isn't.
First of all, I don't think the standard comments on this usage one way or the other.
A similar example using a namespace qualified name is ill-formed based on 8.3 [dcl.meaning] paragraph 1:
namespace N1 {
  void f();
}

namespace N2 {
  using N1::f;
  class A {
    friend void N2::f();
  };
}
Core issue 138 deals with this example:
void foo();
namespace A {
  using ::foo;
  class X {
    friend void foo();
  };
}
The proposed resolution (not yet approved) for issue 138 is that the friend declares a new foo that conflicts with the using-declaration and results in an error.
Your example is different than this though because the presence of the explicit argument list means that this is not declaring a new f but is instead using a previously declared f.
One reservation I have about allowing the example is the desire to have consistent rules for all of the "declaration like" uses of template functions. Issue 275 (in DR status) addresses the issue of unqualified names in explicit instantiation and explicit specialization declarations. It requires that such declarations refer to templates from the namespace containing the explicit instantiation or explicit specialization. I believe this rule is necessary for those directives but is not really required for friend declarations -- but there is the consistency issue.
Notes from April 2003 meeting:
This is related to issue 138. John Spicer is supposed to update his paper on this topic. This is a new case not covered in that paper. We agreed that the B line should be allowed.
Given a namespace-scope declaration like
template<typename T> T var = T();
should var<const int> have internal linkage by virtue of its const-qualified type? Or should it inherit the linkage of the template?
Notes from the February, 2014 meeting:
CWG noted that linkage is by name, and a specialization of a variable template does not have a name separate from that of the variable template, thus the specialization will have the linkage of the template.
The grammar for alignment-specifier in 7.6.1 [dcl.attr.grammar] paragraph 1 is:

alignment-specifier:
    alignas ( type-id ...opt )
    alignas ( assignment-expression ...opt )

where the ellipsis indicates pack expansion. Naively, one would expect that the expansion would result in forms like
alignas() alignas(1, 2) alignas(int, double)
but none of those forms is given any meaning by the current wording. Instead, 14.5.3 [temp.variadic] paragraph 4 says,
In an alignment-specifier (7.6.2 [dcl.align]); the pattern is the alignment-specifier without the ellipsis.
Presumably this means that something like alignas(T...) would expand to something like
alignas(int) alignas(double)
This is counterintuitive and should be reexamined.
See also messages 24016 through 24021.
Notes from the February, 2014 meeting:
CWG decided to change the pack expansion of alignas so that the type-id or assignment-expression is repeated inside the parentheses and to change the definition of alignas to accept multiple arguments with the same meaning as multiple alignas specifiers.
It is not clear what, if anything, in the existing specification requires that the initialization of multiple init-declarators within a single declaration be performed in declaration order.
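For example, given (the names are invented for illustration)

int f();
int g();
int a = f(), b = g();

it is not obvious what wording, if any, requires that a be initialized (and f called) before b.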
The grammar for type-id in 8.1 [dcl.name] paragraph 1 has two problems. First, the fact that we allow an abstract-pack-declarator makes some uses of type-id (template arguments, alignment specifiers, exception-specifications) ambiguous: T... could be parsed either as a type-id, including the ellipsis, or as the type-id T with a following ellipsis. There does not appear to be any rule to disambiguate these parses.
The other problem is that we do not allow parentheses in an abstract-pack-declarator, which makes
template<typename...Ts> void f(Ts (&...)[4]);
ill-formed because (&...)[4] is not an abstract-pack-declarator. There is implementation variance on this point.
Issue 1477 assumes that a name declared only in a friend declaration can be defined outside its namespace using a qualified-id, but the normative passages in 8.3 [dcl.meaning] paragraph 1 and 7.3.1.2 [namespace.memdef] paragraph 2 do not settle the question definitively, and there is implementation variance. A clearer statement of intent is needed.
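For example (illustrative only):

namespace N {
  struct S {
    friend void f();   // first and only declaration of N::f
  };
}
void N::f() { }        // can the friend-declared function be defined like this?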
8.3.2 [dcl.ref] paragraph 4 says:
A reference shall be initialized to refer to a valid object or function. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the "object" obtained by dereferencing a null pointer, which causes undefined behavior ...]
What is a "valid" object? In particular the expression "valid object" seems to exclude uninitialized objects, but the response to Core Issue 363 clearly says that's not the intent. This is an example (overloading construction on constness of *this) by John Potter, which I think is supposed to be legal C++ though it binds references to objects that are not initialized yet:
struct Fun {
  int x, y;
  Fun (int x, Fun const&) : x(x), y(42) { }
  Fun (int x, Fun&) : x(x), y(0) { }
};

int main () {
  const Fun f1 (13, f1);
  Fun f2 (13, f2);
  cout << f1.y << " " << f2.y << "\n";
}
Suggested resolution: Changing the final part of 8.3.2 [dcl.ref] paragraph 4 to:
A reference shall be initialized to refer to an object or function. From its point of declaration on (see 3.3.2 [basic.scope.pdecl]) its name is an lvalue which refers to that object or function. The reference may be initialized to refer to an uninitialized object but, in that case, it is usable in limited ways (3.8 [basic.life], paragraph 6) [Note: On the other hand, a declaration like this:

int & ref = *(int*)0;

is ill-formed because ref will not refer to any object or function ]
I also think a "No diagnostic is required." would better be added (what about something like int& r = r; ?)
Proposed Resolution (October, 2004):
(Note: the following wording depends on the proposed resolution for issue 232.)
Change 8.3.2 [dcl.ref] paragraph 4 as follows:
A reference shall be initialized to refer to a valid object or function. If an lvalue to which a reference is directly bound designates neither an existing object or function of an appropriate type (8.5.3 [dcl.init.ref]), nor a region of memory of suitable size and alignment to contain an object of the reference's type (1.8 [intro.object], 3.8 [basic.life], 3.9 [basic.types]), the behavior is undefined. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” empty lvalue obtained by dereferencing a null pointer, which causes undefined behavior. As does not designate an object or function. Also, as described in 9.6 [class.bit], a reference cannot be bound directly to a bit-field. ]
The name of a reference shall not be used in its own initializer. Any other use of a reference before it is initialized results in undefined behavior. [Example:
int& f(int&);
int& g();
extern int& ir3;
int* ip = 0;
int& ir1 = *ip;      // undefined behavior: null pointer
int& ir2 = f(ir3);   // undefined behavior: ir3 not yet initialized
int& ir3 = g();
int& ir4 = f(ir4);   // ill-formed: ir4 used in its own initializer

—end example]
Rationale: The proposed wording goes beyond the specific concerns of the issue. It was noted that, while the current wording makes cases like int& r = r; ill-formed (because r in the initializer does not "refer to a valid object"), an inappropriate initialization can only be detected, if at all, at runtime and thus "undefined behavior" is a more appropriate treatment. Nevertheless, it was deemed desirable to continue to require a diagnostic for obvious compile-time cases.
It was also noted that the current Standard does not say anything about using a reference before it is initialized. It seemed reasonable to address both of these concerns in the same wording proposed to resolve this issue.
Notes from the April, 2005 meeting:
The CWG decided that whether to require an implementation to diagnose initialization of a reference to itself should be handled as a separate issue (504) and also suggested referring to “storage” instead of “memory” (because 1.8 [intro.object] defines an object as a “region of storage”).
Proposed Resolution (April, 2005):
(Note: the following wording depends on the proposed resolution for issue 232.)
Change 8.3.2 [dcl.ref] paragraph 4 as follows:
A reference shall be initialized to refer to a valid object or function. If an lvalue to which a reference is directly bound designates neither an existing object or function of an appropriate type (8.5.3 [dcl.init.ref]), nor a region of storage of suitable size and alignment to contain an object of the reference's type (1.8 [intro.object], 3.8 [basic.life], 3.9 [basic.types]), the behavior is undefined. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” empty lvalue obtained by dereferencing a null pointer, which causes undefined behavior. As does not designate an object or function. Also, as described in 9.6 [class.bit], a reference cannot be bound directly to a bit-field. ]
Any use of a reference before it is initialized results in undefined behavior. [Example:
int& f(int&);
int& g();
extern int& ir3;
int* ip = 0;
int& ir1 = *ip;      // undefined behavior: null pointer
int& ir2 = f(ir3);   // undefined behavior: ir3 not yet initialized
int& ir3 = g();
int& ir4 = f(ir4);   // undefined behavior: ir4 used in its own initializer

—end example]
Note (February, 2006):
The word “use” in the last paragraph of the proposed resolution was intended to refer to the description in 3.2 [basic.def.odr] paragraph 2. However, that section does not define what it means for a reference to be “used,” dealing only with objects and functions. Additional drafting is required to extend 3.2 [basic.def.odr] paragraph 2 to apply to references.
Additional note (May, 2008):
The proposed resolution for issue 570 adds wording to define “use” for references.
Note, January, 2012:
The resolution should also probably deal with the fact that the “one-past-the-end” address of an array does not designate a valid object (even if such a pointer might “point to” an object of the correct type, per 3.9.2 [basic.compound]) and thus is not suitable for the lvalue-to-rvalue conversion.
According to 8.3.4 [dcl.array] paragraph 1, an array declarator whose element type is an abstract class is ill-formed. However, if the element type is a class template specialization, it may not be known that the class is abstract; because forming an array of an incomplete type is permitted (3.9 [basic.types] paragraphs 5-6), the class template is not required to be instantiated in order to use it as an element type. The expected handling if the class template is later instantiated is unclear; should the compiler issue an error about the earlier array type at the point at which the class template is instantiated?
This also affects overload resolution:
template<typename> struct Abstract { virtual void f() = 0; typedef int type; };

template<typename T> char &abstract_test(T[1]);        // #1
template<typename T> char (&abstract_test(...))[2];    // #2

// Abstract<int>::type n;
static_assert(sizeof(abstract_test<Abstract<int>>(nullptr)) == 2, "");
Overload resolution will select #1 and fail the assertion; if the commented line is uncommented, there is implementation variance, but presumably #2 should be selected and satisfy the assertion.
These effects of completing the type are troublesome. Would it be better to allow array types of abstract element type and simply prohibit creation of objects of such arrays?
(See also issue 1646.)
According to 8.3.5 [dcl.fct] paragraph 5, top-level cv-qualifiers on parameter types are deleted when determining the function type. It is not clear how or whether this adjustment should be applied to parameters of function templates when the parameter has a dependent type, however. For example:
template<class T> struct A { typedef T arr[3]; };
template<class T> void f(const typename A<T>::arr) { }   // #1

template void f<int>(const A<int>::arr);

template <class T> struct B {
  void g(T);
};
template <class T> void B<T>::g(const T) { }              // #2
If the const in #1 is dropped, f<int> has a parameter type of int* rather than the const int* specified in the explicit instantiation. If the const in #2 is not dropped, we fail to match the definition of B::g to its declaration.
Rationale (November, 2010):
The CWG agreed that this behavior is intrinsic to the different ways cv-qualification applies to array types and non-array types.
Notes, January, 2012:
Additional discussion of this issue arose regarding the following example:
template<class T> struct A {
  typedef double Point[2];
  virtual double calculate(const Point point) const = 0;
};

template<class T> struct B : public A<T> {
  virtual double calculate(const typename A<T>::Point point) const {
    return point[0];
  }
};

int main() {
  B<int> b;
  return 0;
}
The question is whether the member function in B<int> has the same type as that in A<int>: is the parameter-type-list instantiated directly (i.e., using the adjusted types) or regenerated from the individual parameter types?
(See also issue 1322.)
According to 8.3.5 [dcl.fct] paragraph 5,
The type of a function is determined using the following rules. The type of each parameter (including function parameter packs) is determined from its own decl-specifier-seq and declarator. After determining the type of each parameter, any parameter of type “array of T” or “function returning T” is adjusted to be “pointer to T” or “pointer to function returning T,” respectively. After producing the list of parameter types, any top-level cv-qualifiers modifying a parameter type are deleted when forming the function type. The resulting list of transformed parameter types and the presence or absence of the ellipsis or a function parameter pack is the function's parameter-type-list. [Note: This transformation does not affect the types of the parameters. For example, int(*)(const int p, decltype(p)*) and int(*)(int, const int*) are identical types. —end note]
This is not sufficiently clear to specify the intended handling of an example like
void f(int a[10], decltype(a) *p );
Should the type of p be int(*)[10] or int**? The latter is the intended result, but the phrase “after determining the type of each parameter” makes it sound as if the adjustments are performed after all the parameter types have been determined from the decl-specifier-seq and declarator instead of for each parameter individually.
See also issue 1444.
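For illustration only (not part of the issue as submitted), the intended per-parameter reading can be expressed as follows; under the alternative reading, the second assertion would not hold:

#include <type_traits>

void f(int a[10], decltype(a) *p) {
  static_assert(std::is_same<decltype(a), int*>::value,
                "the parameter a is adjusted to int*");
  static_assert(std::is_same<decltype(p), int**>::value,
                "so decltype(a) in the parameter list is int*, making p an int**");
}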
The standard is not precise enough about when the default arguments of member functions are parsed. This leads to confusion over whether certain constructs are legal or not, and the validity of certain compiler implementation algorithms.
8.3.6 [dcl.fct.default] paragraph 5 says "names in the expression are bound, and the semantic constraints are checked, at the point where the default argument expression appears"
However, further on at paragraph 9 in the same section there is an example, where the salient parts are
int b;
class X {
  int mem2 (int i = b);   // OK use X::b
  static int b;
};

which appears to contradict the former constraint. At the point the default argument expression appears in the definition of X, X::b has not been declared, so one would expect ::b to be bound. This of course appears to violate 3.3.7 [basic.scope.class] paragraph 1(2) "A name N used in a class S shall refer to the same declaration in its context and when reevaluated in the complete scope of S. No diagnostic is required."
Furthermore, 3.3.7 [basic.scope.class] paragraph 1(1) specifies the scope of names declared in a class to "consist not only of the declarative region following the name's declarator, but also of ... default arguments ...". This implies that X::b is in scope in the default argument of X::mem2 above.
That previous paragraph hints at an implementation technique of saving the token stream of a default argument expression and parsing it at the end of the class definition (much like the bodies of functions defined in the class). This is the technique employed by GCC and, judging from its behaviour, by the EDG front end. The standard leaves two things unspecified. First, is a default argument expression permitted to call a static member function declared later in the class in such a way as to require evaluation of that function's default arguments? That is, is the following well-formed?
class A {
  static int Foo (int i = Baz ());
  static int Baz (int i = Bar ());
  static int Bar (int i = 5);
};

If that is well-formed, at what point is the nonsensical nature of
class B {
  static int Foo (int i = Baz ());
  static int Baz (int i = Foo());
};

detected? Is it when B is complete? Is it when B::Foo or B::Baz is called in such a way as to require default argument expansion? Or is no diagnostic required?
The other problem is with collecting the tokens that form the default argument expression. Default arguments which contain template-ids with more than one parameter present a difficulty in determining when the default argument finishes. Consider,
template <int A, typename B> struct T { static int i; };
class C {
  int Foo (int i = T<1, int>::i);
};

The default argument contains a non-parenthesized comma. Is it required that this comma be seen as part of the default argument expression and not as the beginning of another parameter declaration? To accept this as part of the default argument would require name lookup of T (to determine that the '<' begins a template argument list and is not a less-than operator) before C is complete. Furthermore, the more pathological
class D {
  int Foo (int i = T<1, int>::i);
  template <int A, typename B> struct T { static int i; };
};

would be very hard to accept. Even though T is declared after Foo, T is in scope within Foo's default argument expression.
Suggested resolution:
Append the following text to 8.3.6 [dcl.fct.default] paragraph 8.
The default argument expression of a member function declared in the class definition consists of the sequence of tokens up until the next non-parenthesized, non-bracketed comma or close parenthesis. Furthermore such default argument expressions shall not require evaluation of a default argument of a function declared later in the class.
This would make the above A, B, C, and D ill-formed and is in line with the existing compiler practice that I am aware of.
Notes from the October, 2005 meeting:
The CWG agreed that the first example (A) is currently well-formed and that it is not unreasonable to expect implementations to handle it by processing default arguments recursively.
Additional notes, May, 2009:
Presumably the following is ill-formed:
int f(int = f());
However, it is not clear what in the Standard makes it so. Perhaps there needs to be a statement to the effect that a default argument only becomes usable after the complete declarator of which it is a part.
Notes from the August, 2011 meeting:
In addition to default arguments, commas in template argument lists also cause problems in initializers for nonstatic data members:
struct S {
  int n = T<a,b>(c);   // ill-formed declarator for member b
                       // or template argument?
};
(This is from #16 of the IssuesFoundImplementingC0x.pdf document on the Bloomington wiki.)
Additional notes (August, 2011):
Notes from the February, 2012 meeting:
It was decided to handle the question of parsing an initializer like T<a,b>(c) (a template-id or two declarators) in this issue and the remaining questions in issue 361. For this issue, a template-id will only be recognized if there is a preceding declaration of a template.
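For illustration only (hypothetical names, not part of the issue as submitted), under the approach described above the presence of a prior template declaration determines the parse:

template<int, int> struct T {
  explicit T(int) {}
  operator int() const { return 0; }
};
constexpr int a = 1, b = 2, c = 3;

struct S {
  int n = T<a,b>(c);   // T was previously declared as a template, so this is parsed as a
                       // template-id and a functional cast, not as declarators for n and b
};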
It is not clear, either from 8.3.6 [dcl.fct.default] or 14.7.2 [temp.explicit], whether it is permitted to add a default argument in an explicit instantiation of a function template:
template<typename T> void f(T, int) { }
template void f<int>(int, int=0); // Permitted?
Notes from the April, 2013 meeting:
The intent is to prohibit default arguments in explicit instantiations.
8.4.2 [dcl.fct.def.default] paragraph 1 specifies that an explicitly-defaulted function shall
have the same declared function type (except for possibly differing ref-qualifiers and except that in the case of a copy constructor or copy assignment operator, the parameter type may be “reference to non-const T”, where T is the name of the member function's class) as if it had been implicitly declared...
This allows an example like
struct A { A& operator=(A const&) && = default; };
but forbids
struct B { B&& operator=(B const&) && = default; };
which seems backward.
In addition, 12.8 [class.copy] paragraph 22 only specifies the return value for implicitly-declared copy/move assignment operators, not for explicitly-defaulted ones.
The resolution of issue 1778 means that whether an explicitly-defaulted function is deleted or not cannot be known until the end of the class definition. As a result, new rules are required to disallow references (in, e.g., decltype) to explicitly-defaulted functions that might later become deleted.
Notes from the June, 2014 meeting:
The approach favored by CWG was to make any reference to an explicitly-defaulted function ill-formed if it occurs prior to the end of the class definition.
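For illustration only (hypothetical names, not part of the issue as submitted), the kind of case at issue looks like this; under the approach favored by CWG, the decltype reference would be ill-formed simply because it precedes the end of the class definition:

struct NoAssign { NoAssign& operator=(const NoAssign&) = delete; };

struct S {
  S& operator=(const S&) = default;      // explicitly defaulted
  typedef decltype(&S::operator=) Ptr;   // refers to the defaulted function before the
                                         // end of the class definition
  NoAssign na;                           // this later member causes the defaulted
                                         // operator= to be defined as deleted
};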
Paragraph 9 of 8.5 [dcl.init] says:
If no initializer is specified for an object, and the object is of (possibly cv-qualified) non-POD class type (or array thereof), the object shall be default-initialized; if the object is of const-qualified type, the underlying class type shall have a user-declared default constructor. Otherwise, if no initializer is specified for an object, the object and its subobjects, if any, have an indeterminate initial value; if the object or any of its subobjects are of const-qualified type, the program is ill-formed.
What if a const POD object has no non-static data members? This wording requires an empty initializer for such cases:
struct Z {
  // no data members
  operator int() const { return 0; }
};

void f() {
  const Z z1;        // ill-formed: no initializer
  const Z z2 = { };  // well-formed
}
Similar comments apply to a non-POD const object, all of whose non-static data members and base class subobjects have default constructors. Why should the class of such an object be required to have a user-declared default constructor?
(See also issue 78.)
Additional note (February, 2011):
This issue should be brought up again in light of constexpr constructors and non-static data member initializers.
Notes from the August, 2011 meeting:
If the implicit default constructor initializes all subobjects, no initializer should be required.
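For illustration only (hypothetical type, not part of the issue as submitted), under that direction a const object whose implicit default constructor initializes every subobject would need no initializer:

struct W {
  int i = 0;   // every subobject is initialized by the implicit default constructor
};
void h() {
  const W w;   // ill-formed today (no user-provided default constructor);
               // intended to be well-formed under the direction described above
}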
According to 8.5.1 [dcl.init.aggr] paragraph 15,
When a union is initialized with a brace-enclosed initializer, the braces shall only contain an initializer-clause for the first non-static data member of the union.
This would appear to preclude using {} as the initializer for a union, which would otherwise have reasonable semantics. Is there a reason for this restriction?
Also, paragraph 7 reads,
If there are fewer initializer-clauses in the list than there are members in the aggregate, then each member not explicitly initialized shall be initialized from an empty initializer list (8.5.4 [dcl.init.list]).
There should presumably be special treatment for unions, so that only a single member is initialized in such cases.
(See also issue 1460.)
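For illustration only (hypothetical union, not part of the issue as submitted), the two points above amount to:

union U { int a; char c; };

U u1 = { };    // apparently precluded by the quoted wording, although value-initializing
               // just the first member would be a reasonable meaning
U u2 = { 1 };  // initializes a; per the suggestion above, no other member should be
               // treated as initialized from an empty initializer list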
The example in 8.5.2 [dcl.init.string] paragraph 1 says,
char msg[] = "Syntax error on line %s\n";

shows a character array whose members are initialized with a string-literal. Note that because '\n' is a single character and because a trailing '\0' is appended, sizeof(msg) is 25.
However, there appears to be no normative specification of how the size of the array is to be calculated.
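For illustration only (not normative wording), the computation the example presupposes is the number of characters in the string-literal, counting '\n' as one, plus one for the appended '\0':

char msg[] = "Syntax error on line %s\n";
static_assert(sizeof(msg) == 25, "24 characters in the string-literal plus the trailing '\\0'");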
Currently an attempt to bind an rvalue reference to a reference-unrelated lvalue succeeds, binding the reference to a temporary initialized from the lvalue by copy-initialization. This appears to be intentional, as the accompanying example contains the lines
int i3 = 2;
double&& rrd3 = i3;   // rrd3 refers to temporary with value 2.0
This violates the expectations of some who expect that rvalue references can be initialized only with rvalues. On the other hand, it is parallel with the handling of an lvalue reference-to-const (and is handled by the same wording). It also can add efficiency without requiring existing code to be rewritten: the implicitly-created temporary can be moved from, just as if the call had been rewritten to create a prvalue temporary from the lvalue explicitly.
On a related note, assuming the binding is permitted, the intent of the overload tiebreaker found in 13.3.3.2 [over.ics.rank] paragraph 3 is not clear:
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if
...
S1 and S2 are reference bindings (8.5.3 [dcl.init.ref]) and neither refers to an implicit object parameter of a non-static member function declared without a ref-qualifier, and S1 binds an rvalue reference to an rvalue and S2 binds an lvalue reference.
At question is what “to an rvalue” means here. If it is referring to the value category of the initializer itself, before conversions, then the supposed performance advantage of the binding under discussion does not occur because the competing rvalue and lvalue reference overloads will be ambiguous:
void f(int&&);        // #1
void f(const int&);
void g(double d) {
  f(d);               // ambiguous: #1 does not bind to an rvalue
}
On the other hand, if “to an rvalue” refers to the actual object to which the reference is bound, i.e., to the temporary in the case under discussion, the phrase would seem to be vacuous because an rvalue reference can never bind directly to an lvalue.
Notes from the February, 2012 meeting:
CWG agreed that the binding rules are correct, allowing creation of a temporary when binding an rvalue reference to a non-reference-related lvalue. The phrase “to an rvalue” in 13.3.3.2 [over.ics.rank] paragraph 3 is a leftover from before binding an rvalue reference to an lvalue was prohibited and should be removed. A change is also needed to handle the following case:
void f(const char (&)[1]);          // #1
template<typename T> void f(T&&);   // #2
void g() {
  f("");   // calls #2, should call #1
}
Additional note (October, 2012):
Removing “to an rvalue,” as suggested, would have the effect of negating the preference for binding a function lvalue to an lvalue reference instead of an rvalue reference because the case would now fall under the preceding bullet of 13.3.3.2 [over.ics.rank] paragraph 3 bullet 1, sub-bullets 4 and 5:
Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of the following rules applies:
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if
...
S1 and S2 are reference bindings (8.5.3 [dcl.init.ref]) and neither refers to an implicit object parameter of a non-static member function declared without a ref-qualifier, and S1 binds an rvalue reference to an rvalue and S2 binds an lvalue reference... or, if not that,
S1 and S2 are reference bindings (8.5.3 [dcl.init.ref]) and S1 binds an lvalue reference to a function lvalue and S2 binds an rvalue reference to a function lvalue.
Presumably if the suggested resolution is adopted, the order of these two bullets should be inverted.
In the following case,
struct A {
  operator int &&() const;
  operator int &&() volatile;
  operator long();
};
int main() {
  int &&x = A();
}
the conversion for direct binding cannot be used because of the ambiguity, so indirect binding is used, which allows the use of the conversion to long in creating the temporary.
Is this intended? There is implementation variation.
Notes from the February, 2014 meeting:
CWG agreed that an ambiguity like this should make the initialization ill-formed instead of falling through to do indirect binding.
Consider the following example:
struct A { explicit A() = default; };
struct B : A { explicit B() = default; };
struct C { explicit C(); };
struct D : A { C c; explicit D() = default; };

template<typename T> void f() {
  T t = {};
}
template<typename T> void g() {
  void x(T t);
  x({});
}
The question is whether f<B>, f<C>, f<D>, g<B>, g<C>, and g<D> are well-formed or ill-formed.
The crux here is whether 13.3.1.7 [over.match.list] is the governing law in each of these cases. If it is, the initialization is ill-formed, because copy-list-initialization has selected an explicit constructor. The standard seems clear that f<A> and g<A> are valid (because A is an aggregate, so 13.3.1.7 [over.match.list] is not reached nor applicable), f<B> is valid (because value-initialization does not call the default constructor, so 13.3.1.7 [over.match.list] is not reached), and that g<B>, g<C>, and g<D> are ill-formed (because 13.3.1.7 [over.match.list] is reached from 13.3.3.1.5 [over.ics.list] and selects an explicit constructor). The difference between f<B> and g<B> is troubling.
For f<C> and f<D>, it's not clear whether the default constructor call within value-initialization within list-initialization uses 13.3.1.7 [over.match.list] — but some form of overload resolution seems to be implied, since presumably we want to apply SFINAE to variadic constructor templates, diagnose classes which have multiple default constructors through the addition of default arguments, and the like.
It has been suggested that perhaps we are supposed to reach 13.3.1.7 [over.match.list] for an empty initializer list for a non-aggregate class with a default constructor only when we're coming from 13.3.3.1.5 [over.ics.list], and not when 8.5.4 [dcl.init.list] delegates to value-initialization. That would make all the fs valid, but g<B>, g<C>, and g<D> ill-formed.
12.3.1 [class.conv.ctor] paragraph 2 says explicit constructors are only used for direct-initialization or casts, which argues for at least f<C>, f<D>, g<C> and g<D> being ill-formed.
If an initializer_list object is copied and the copy is elided, is the lifetime of the underlying array object extended? E.g.,
void f() {
std::initializer_list<int> L =
std::initializer_list<int>{1, 2, 3}; // Lifetime of array extended?
}
The current wording is not clear.
Notes from the October, 2012 meeting:
The consensus of CWG was that the behavior should be the same, regardless of whether the copy is elided or not.
A default constructor that is defined as deleted is trivial, according to 12.1 [class.ctor] paragraph 5. This means that, according to 9 [class] paragraph 6, such a class can be trivial. If, however, the class has no default constructor because it has a user-declared constructor, the class is not trivial. Since both cases prevent default construction of the class, it is not clear why there is a difference in triviality between the cases.
Notes from the October, 2012 meeting:
It was observed that this issue was related to issue 1344, as the current specification allows adding a default constructor by adding default arguments to the definition of a constructor. The resolution of that issue should also resolve this one.
Notes from the September, 2013 meeting:
It was decided to resolve issue 1344 separately from this issue, so this issue now requires its own resolution.
The grammar for member-declarator (9.2 [class.mem]) does not, but should, allow for a brace-or-equal-initializer on a bit-field declarator.
According to 9.2 [class.mem] paragraph 1,
A member shall not be declared twice in the member-specification, except that a nested class or member class template can be declared and then later defined, and except that an enumeration can be introduced with an opaque-enum-declaration and later redeclared with an enum-specifier.
However, the grammar for member-declaration does not have a production that allows an opaque-enum-declaration.
Consider an example like:
struct A {
  struct B {
    auto foo() { return 0; }
  };
  decltype(B().foo()) x;
};
There does not appear to be a prohibition of cases like this, where the type of a member depends on the definition of a member function.
(See also issues 1360 and 1397.)
According to 9.4.2 [class.static.data] paragraph 4,
Unnamed classes and classes contained directly or indirectly within unnamed classes shall not contain static data members.
There is no such restriction on member functions, and there is no rationale for this difference, given that both static data members and member functions can be defined outside an unnamed class with a typedef name for linkage purposes. (Issue 406 acknowledged the lack of rationale by removing the specious note in 9.4.2 [class.static.data] that attempted to explain the restriction but left the normative prohibition in place.)
It would be more consistent to remove the restriction for classes with a typedef name for linkage purposes.
Additional note (August, 2012):
It was observed that, since no definition of a const static data member is required if it is not odr-used, there is no reason to prohibit such members in an unnamed class even without a typedef name for linkage purposes.
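For illustration only (hypothetical names, not part of the issue as submitted), the observation above covers cases like the following, where the member's value is used without odr-using it and thus no namespace-scope definition is needed:

static struct {                   // unnamed class, no typedef name for linkage purposes
  static const int limit = 10;    // currently prohibited by 9.4.2 [class.static.data] paragraph 4
} config;

int space_left(int used) {
  return config.limit - used;     // reads the value only; limit is not odr-used
}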
Describing the handling of static data members with brace-or-equal-initializers, 9.4.2 [class.static.data] paragraph 3 says,
The member shall still be defined in a namespace scope if it is odr-used (3.2 [basic.def.odr]) in the program and the namespace scope definition shall not contain an initializer.
The word “shall” implies a required diagnostic, but this is describing an ODR violation (the static data member might be defined in a different translation unit) and thus should be “no diagnostic required.”
According to 9.5 [class.union] paragraph 4,
[Note: In general, one must use explicit destructor calls and placement new operators to change the active member of a union. —end note] [Example: Consider an object u of a union type U having non-static data members m of type M and n of type N. If M has a non-trivial destructor and N has a non-trivial constructor (for instance, if they declare or inherit virtual functions), the active member of u can be safely switched from m to n using the destructor and placement new operator as follows:
u.m.~M();
new (&u.n) N;

—end example]
This pattern is only “safe” if the original object that is being destroyed does not involve any const-qualified or reference types, i.e., satisfies the requirements of 3.8 [basic.life] paragraph 7, bullet 3:
the type of the original object is not const-qualified, and, if a class type, does not contain any non-static data member whose type is const-qualified or a reference type
Although paragraph 4 of 9.5 [class.union] is a note and an example, it should at least refer to the lifetime issues described in 3.8 [basic.life].
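For illustration only (hypothetical types, not part of the issue as submitted), a case in which the note's pattern is not covered by 3.8 [basic.life] paragraph 7:

#include <new>

struct M  { const int k; M(int k) : k(k) {} };   // contains a const-qualified member
struct N2 { int j; };
union U   { M m; N2 n; };

void g(U& u) {      // assumes u.m is the active member
  u.m.~M();
  new (&u.n) N2;    // because M has a const member, the conditions of 3.8 [basic.life]
                    // paragraph 7 are not satisfied, so this switch of active member is
                    // not guaranteed to be safe
}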
Additional note (October, 2013):
See also issue 1776, which suggests possibly changing the restriction in 3.8 [basic.life]. If such a change is made, this issue may become moot.
9.5 [class.union] paragraph 5 defines an anonymous union as follows:
A union of the form
union { member-specification } ;
is called an anonymous union; it defines an unnamed object of unnamed type.
It is obviously intended that a declaration like
static union { int i; float f; };
is a declaration of that form (cf paragraph 6, which requires the static keyword for anonymous unions declared in namespace scope). However, it would be clearer if the definition were recast in more descriptive terms, e.g.,
An anonymous union is an unnamed class that is defined with the class-key union in a simple-declaration in which the init-declarator-list is omitted. Such a simple-declaration is treated as if it contained a single declarator declaring an unnamed variable of the union's type.
(Note that this definition would require some additional tweaking to apply to class member anonymous union declarations, since simple-declarations are not included as member-declarations.)
As a related point, it is not clear how the following examples are to be treated, and there is implementation variance on some:
void f() {
  thread_local union { int a; };
}
void g() {
  extern union { int b; };
}
thread_local union { int c; };          // static is implied by thread_local
static thread_local union { int d; };
static const union { int e = 0; };      // is e const? Clang says yes, gcc says no
static constexpr union { int f = 0; };
It is not clear whether naming a member of a global anonymous union should be considered an id-expression or implicitly a member access expression. For example, given
static union { int i; };
template <int &> struct S {};
S<i> V;
is the last line well-formed? There is implementation variance on this question.
Notes from the February, 2014 meeting:
CWG agreed that the example should be ill-formed.
The term “direct member” is used in 9.5 [class.union] paragraph 8 but is not defined. It might be better to refer to the class's member-specification instead.
The resolution of issue 1816 left many aspects of bit-fields unspecified, including whether a signed bit-field has a sign bit and the meaning of the bit-field width. Also, the requirement in 3.9.1 [basic.fundamental] paragraph 1 that
For narrow character types, all bits of the object representation participate in the value representation.
should not apply to oversize character-typed bit-fields.
Notes from the June, 2014 meeting:
CWG decided to address only the issue of oversized bit-fields of narrow character types at this time, splitting off the more general questions regarding bit-fields to issue 1943.
The access rules in 11.2 [class.access.base] do not appear to handle references in nested classes and outside of nonstatic member functions correctly. For example,
struct A {
  typedef int I;   // public
};
struct B: private A { };
struct C: B {
  void f() {
    I i1;          // error: access violation
  }
  I i2;            // OK
  struct D {
    I i3;          // OK
    void g() {
      I i4;        // OK
    }
  };
};
The reason for this discrepancy is that the naming class in the reference to I is different in these cases. According to 11.2 [class.access.base] paragraph 5,
The access to a member is affected by the class in which the member is named. This naming class is the class in which the member name was looked up and found.
In the case of i1, the reference to I is subject to the transformation described in 9.3.1 [class.mfct.non-static] paragraph 3:
Similarly during name lookup, when an unqualified-id (5.1 [expr.prim]) used in the definition of a member function for class X resolves to a static member, an enumerator or a nested type of class X or of a base class of X, the unqualified-id is transformed into a qualified-id (5.1 [expr.prim]) in which the nested-name-specifier names the class of the member function.
As a result, the reference to I in the declaration of i1 is transformed to C::I, so that the naming class is C, and I is inaccessible in C. In the remaining cases, however, the transformation does not apply. Thus, the naming class of I in these references is A, and I is publicly accessible in A.
Presumably either the definition of “naming class” must be changed or the transformation of unqualified-ids must be broadened to include all uses within the scope of a class and not just within nonstatic member functions (and following the declarator-id in the definition of a static member, per 9.4 [class.static] paragraph 4).
According to 11.2 [class.access.base] paragraph 5,
A member m is accessible at the point R when named in class N if
...
m as a member of N is protected, and R occurs in a member or friend of class N, or in a member or friend of a class P derived from N, where m as a member of P is public, private, or protected, or
...
The granting of access via class P is troubling. At the least, there should be a restriction that P be visible at R. Alternatively, this portion of the rule could be removed altogether; this provision does not appear to be widely used in existing code and such references can be easily converted to use P instead of N for naming the member.
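For illustration only (hypothetical names, not part of the issue as submitted), the provision in question grants access in cases like the following, where the reference itself does not mention P at all:

class N { protected: static int m; };
int N::m = 0;
class P : public N { friend void r(); };

void r() {
  N::m = 1;   // accessible: r is a friend of P, P is derived from N,
              // and m as a member of P is protected
}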
Notes from the June, 2014 meeting:
CWG noted that removing the friend provision would introduce an undesirable asymmetry between member functions of P and its friends. Instead, the intent is to require P to be a complete type at R.

According to 11.3 [class.friend] paragraph 2,
Declaring a class to be a friend implies that the names of private and protected members from the class granting friendship can be accessed in the base-specifiers and member declarations of the befriended class.
A friend declaration is a member-declaration, but it is not clear how far the granting of friendship goes in a friend declaration. For example:
class c {
  class n {};
  friend struct s;
};
struct s {
  friend class c::n;            // #1
  friend c::n g();              // #2
  friend void f() { c::n(); }   // #3
};
In particular, if a friend function is defined inside the class definition, as in #3, does its definition have access to the private and protected members of the befriending class? Implementations vary on this point.
Does the restriction in 11.4 [class.protected] apply to upcasts across protected inheritance, too? For instance,
struct B {
  int i;
};
struct I: protected B { };
struct D: I {
  void f(I* ip) {
    B* bp = ip;   // well-formed?
    bp->i = 5;    // aka "ip->i = 5;"
  }
};
I think the rationale for the 11.4 [class.protected] restriction applies equally well here — you don't know whether ip points to a D object or not, so D::f can't be trusted to treat the protected B subobject consistently with the policies of its actual complete object type.
The current treatment of “accessible base class” in 11.2 [class.access.base] paragraph 4 clearly makes the conversion from I* to B* well-formed. I think that's wrong and needs to be fixed. The rationale for the accessibility of a base class is whether “an invented public member” of the base would be accessible at the point of reference, although we obscured that a bit in the reformulation; it seems to me that the invented member ought to be considered a non-static member for this purpose and thus subject to 11.4 [class.protected].
(See also issues 385 and 471.)

Notes from the October, 2004 meeting:
The CWG tentatively agreed that casting across protected inheritance should be subject to the additional restriction in 11.4 [class.protected].
Proposed resolution (April, 2011)
Change 11.2 [class.access.base] paragraph 4 as follows:
A base class B of N is accessible at R, if
an invented public member of B would be a public member of N, or
R occurs in a member or friend of class N, and an invented public member of B would be a private or protected member of N, or
R occurs in a member or friend of a class P derived from N, and an invented public member of B would be a private or (but not a protected [Footnote: A protected invented member is disallowed here for the same reason the additional check of 11.4 [class.protected] is applied to member access: it would allow casting a pointer to a derived class to a protected base class that might be a subobject of an object of a class that is different from the class context in which the reference occurs. —end footnote]) member of P, or
there exists a class S such that B is a base class of S accessible at R and S is a base class of N accessible at R.
[Example:
class B {
public:
  int m;
};

class S: private B {
  friend class N;
};

class N: private S {
  void f() {
    B* p = this;   // OK because class S satisfies the fourth condition
                   // above: B is a base class of N accessible in f() because
                   // B is an accessible base class of S and S is an accessible
                   // base class of N.
  }
};

class N2: protected B { };

class P2: public N2 {
  void f2(N2* n2p) {
    B* bp = n2p;   // error: invented member would be protected and naming
                   // class N2 not the same as or derived from the referencing
                   // class P2
    n2p->m = 0;    // error (cf 11.4 [class.protected]) for the same reason
  }
};

—end example]
According to 11.4 [class.protected] paragraph 1, except when forming a pointer to member,
All other accesses involve a (possibly implicit) object expression (5.2.5 [expr.ref]).
It is not clear that this is strictly true for the invocation of a base class constructor from a mem-initializer. A wording tweak may be advisable.
The specification of when a defaulted special member function is to be defined as deleted sometimes overlooks variant and array members.
According to 12.1 [class.ctor] paragraph 6, a defaulted default constructor is constexpr if the corresponding user-written constructor would satisfy the constexpr requirements. However, the requirements apply to the definition of a constructor, and a defaulted constructor is defined only if it is odr-used, leaving it indeterminate at declaration time whether the defaulted constructor is constexpr or not.
(See also issue 1358.)
Additional notes (February, 2013):
As an example of this issue, consider:
struct S { int i = sizeof(S); };
You can't determine the value of the initializer, and thus whether the initializer is a constant expression, until the class is complete, but you can't complete the class without declaring the default constructor, and whether that constructor is constexpr or not depends on whether the member initializer is a constant expression.
A similar issue arises with the following example:
struct A {
  int x = 37;
  struct B {
    int x = 37;
  } b;
  B b2[2][3] = { { } };
};
This introduces an order dependency that is not specified in the current text: determining whether the default constructor of A is constexpr requires first determining the characteristics of the initializer of B::x and whether B::B() is constexpr or not.
The problem is exacerbated with class templates, since the current direction of CWG is to instantiate member initializers only when they are needed (see issue 1396). For a specific example:
struct S;
template<class T> struct X {
int i = T().i;
};
unsigned n = sizeof(X<S>); // Error?
struct S { int i; };
This also affects determining whether a class template specialization is a literal type or not; presumably getting the right answer to that requires instantiating the class and all its nonstatic data member initializers.
See also issues 1397 and 1594.
Notes from the September, 2013 meeting:
This issue should be resolved together with issue 1397.
Proposed resolution (May, 2014):
Change 12.1 [class.ctor] paragraphs 4-5 as follows:
A defaulted default constructor for class X is defined as deleted if:
...
any potentially constructed subobject has a type with a destructor that is deleted or inaccessible from the defaulted default constructor.
An implicitly-declared default constructor is constexpr if:
X has no virtual bases; and
for each non-variant non-static data member or base class subobject M, either M is initialized via brace-or-equal-initializer or default-initialization of M uses a constexpr constructor; and
if X is a union having variant members, or, if X is a non-union-class, for each anonymous union member having variant members, exactly one non-static data member is initialized via brace-or-equal-initializer.
A default constructor is trivial if it is not user-provided and if:
...
for all the non-static data members of its class that are of class type (or array thereof), each such class has a trivial default constructor.
Otherwise, the default constructor is non-trivial.
A default constructor that is defaulted and not defined as deleted is implicitly defined when it is odr-used (3.2 [basic.def.odr]) to create an object of its class type (1.8 [intro.object]) or when it is explicitly defaulted after its first declaration. The implicitly-defined default constructor performs the set of initializations of the class that would be performed by a user-written default constructor for that class with no ctor-initializer (12.6.2 [class.base.init]) and an empty compound-statement. If that user-written default constructor would be ill-formed, the program is ill-formed. If that user-written default constructor would satisfy the requirements of a constexpr constructor (7.1.5 [dcl.constexpr]), the implicitly-defined default constructor is constexpr. Before the defaulted default constructor for a class is implicitly defined, all the non-user-provided default constructors for its base classes and its non-static data members shall have been implicitly defined. [Note:...
Additional notes, May, 2014:
The proposed resolution inadvertently allows a defaulted default constructor of a class with virtual bases to be constexpr. It has been updated with a change addressing that oversight and returned to "review" status.
See also issue 1890.
According to 12.1 [class.ctor] paragraph 5,
A defaulted default constructor for class X is defined as deleted if:
X is a union-like class that has a variant member with a non-trivial default constructor,
...
X is a union and all of its variant members are of const-qualified type (or array thereof),
X is a non-union class and all members of any anonymous union member are of const-qualified type (or array thereof),
...
Because the presence of a non-static data member initializer is the moral equivalent of a mem-initializer, these rules should probably be modified not to define the generated constructor as deleted when a union member has a non-static data member initializer. (Note the non-normative references in 9.5 [class.union] paragraphs 2-3 and 7.1.6.1 [dcl.type.cv] paragraph 2 that would also need to be updated if this restriction is changed.)
It would also be helpful to add a requirement to 9.5 [class.union] requiring either a non-static data member initializer or a user-provided constructor if all the members of the union have const-qualified types.
On a more general note, why is the default constructor defined as deleted just because a member has a non-trivial default constructor? The union itself doesn't know which member is the active one, and default construction won't initialize any members (assuming no brace-or-equal-initializer). It is up to the “owner” of the union to control the lifetime of the active member (if any), and requiring a user-provided constructor is forcing a design pattern that doesn't make sense. Along the same lines, why is the default destructor defined as deleted just because a member has a non-trivial destructor? I would agree with this restriction if it only applied when the union also has a user-provided constructor.
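For illustration only (not part of the issue as submitted), the complaint applies to cases like the following, in which default construction would not initialize any member in any event:

#include <string>

union U {
  int i;
  std::string s;   // non-trivial default constructor (and destructor)
};
// U u;            // ill-formed: U's default constructor (and destructor) are defined as
                   // deleted, even though the enclosing code could manage s's lifetime itself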
See also issues 1460, 1562, 1587, and 1621.
It used to be clear that an implicitly-declared default constructor is not explicit. That has been inadvertently lost due to other changes, so this specification should be added to 12.1 [class.ctor] in parallel with the similar statement in 12.8 [class.copy] paragraph 3.
In an example like,
struct S { ~S(); };
struct X { X(); X(const X&); };
struct T {
  S &&s;
  X x;
};
void f();
void g() {
  T t = T{ {}, {} };
  f();
}
it appears that the current wording allows two ways of handling this:

The copy to t in g is elided. The lifetime of the S temporary bound to the reference member is extended to that of t, so X(const X&) is not called and ~S() is called after f().

The copy to t in g is not elided. X(const X&) is called, then ~S() is called, then f() is called.
However, EDG and g++ produce a third behavior: they do not call X(const X&), but they destroy the S() temporary at the end of its full-expression. The current wording does not appear to permit this behavior, but it seems preferable that lifetime extension does not depend on whether copy elision is done.
Presumably the following example is intended to be ill-formed:
struct A {
(*operator int*());
};
A a;
int *x = a; // Ok?
It is not clear, however, which rule is supposed to reject such a member-declaration.
Mark Mitchell raised a number of issues related to the resolution of issue 244 and of destructor lookup in general.
Issue 244 says:
... in a qualified-id of the form:

::opt nested-name-specifieropt class-name :: ~ class-name
the second class-name is looked up in the same scope as the first.
But if the reference is "p->X::~X()", the first class-name is looked up in two places (normal lookup and a lookup in the class of p). Does the new wording mean:
This is a test case that illustrates the issue:
struct A { typedef A C; };
typedef A B;
void f(B* bp) {
  bp->B::~B();   // okay B found by normal lookup
  bp->C::~C();   // okay C found by class lookup
  bp->B::~C();   // B found by normal lookup C by class -- okay?
  bp->C::~B();   // C found by class lookup B by normal -- okay?
}
A second issue concerns destructor references when the class involved is a template class.
namespace N {
  template <typename T> struct S { ~S(); };
}
void f(N::S<int>* s) {
  s->N::S<int>::~S();
}
The issue here is that the grammar uses "~class-name" for destructor names, but in this case S is a template name when looked up in N.
Finally, what about cases like:
template <typename T> void f () {
  typename T::B x;
  x.template A<T>::template B<T>::~B();
}
When parsing the template definition, what checks can be done on "~B"?
Sandor Mathe adds:
The standard correction for issue 244 (now in DR status) is still incomplete.
Paragraph 5 of 3.4.3 [basic.lookup.qual] is not applicable for p->T::~T since there is no nested-name-specifier. Section 3.4.5 [basic.lookup.classref] describes the lookup of p->~T but p->T::~T is still not described. There are examples (which are non-normative) that illustrate this sort of lookup but they still leave questions unanswered. The examples imply that the name after ~ should be looked up in the same scope as the name before the :: but it is not stated. The problem is that the name to the left of the :: can be found in two different scopes. Consider the following:
struct S {
  struct C { ~C() { } };
};
typedef S::C D;
int main() {
  D* p;
  p->C::~D();   // valid?
}
Should the destructor call be valid? If there were a nested name specifier, then D should be looked for in the same scope as C. But here, C is looked for in two different ways. First, it is searched for in the type of the left hand side of -> and it is also looked for in the lexical context. It must be found in at least one of them, and if it is found in both, the results must match. So, C is found in the scope of what p points at. Do you only look for D there? If so, this is invalid. If not, you would then look for D in the context of the expression and find it. They refer to the same underlying destructor so this is valid. The intended resolution of the original defect report of the standard was that the name before the :: did not imply a scope and you did not look for D inside of C. However, it was not made clear whether this was to be resolved by using the same lookup mechanism or by introducing a new form of lookup which is to look in the left hand side if that is where C was found, or in the context of the expression if that is where C was found. Of course, this begs the question of what should happen when it is found in both. Consider the modification to the above case when C is also found in the context of the expression. If you only look where you found C, is this now valid because it is in one of the two scopes, or is it invalid because C was in both and D is only in one?
struct S {
  struct C { ~C() { } };
};
typedef S::C D;
typedef S::C C;
int main() {
  D* p;
  p->C::~D();   // valid?
}
I agree that the intention of the committee is that the original test case in this defect is broken. The standard committee clearly thinks that the last name before the last :: does not induce a new scope which is our current interpretation. However, how this is supposed to work is not defined. This needs clarification of the standard.
Martin Sebor adds this example (September 2003), along with errors produced by the EDG front end:
namespace N {
  struct A { typedef A NA; };
  template <class T> struct B { typedef B NB; typedef T BT; };
  template <template <class> class T> struct C { typedef C NC; typedef T<A> CA; };
}

void foo (N::A *p) {
  p->~NA ();
  p->NA::~NA ();
}

template <class T> void foo (N::B<T> *p) {
  p->~NB ();
  p->NB::~NB ();
}

template <class T> void foo (typename N::B<T>::BT *p) {
  p->~BT ();
  p->BT::~BT ();
}

template <template <class> class T> void foo (N::C<T> *p) {
  p->~NC ();
  p->NC::~NC ();
}

template <template <class> class T> void foo (typename N::C<T>::CA *p) {
  p->~CA ();
  p->CA::~CA ();
}

Edison Design Group C/C++ Front End, version 3.3 (Sep 3 2003 11:54:55)
Copyright 1988-2003 Edison Design Group, Inc.

"t.cpp", line 16: error: invalid destructor name for type "N::B<T>"
    p->~NB ();
       ^
"t.cpp", line 17: error: qualifier of destructor name "N::B<T>::NB" does not match type "N::B<T>"
    p->NB::~NB ();
        ^
"t.cpp", line 30: error: invalid destructor name for type "N::C<T>"
    p->~NC ();
       ^
"t.cpp", line 31: error: qualifier of destructor name "N::C<T>::NC" does not match type "N::C<T>"
    p->NC::~NC ();
        ^
4 errors detected in the compilation of "t.cpp".
John Spicer: The issue here is that we're unhappy with the destructor names when doing semantic analysis of the template definitions (not during an instantiation).
My personal feeling is that this is reasonable. After all, why would you call p->~NB for a class that you just named as N::B<T> when you could just say p->~B?
Additional note (September, 2004)
The resolution for issue 244 removed the discussion of p->N::~S, where N is a namespace-name. However, the resolution did not make this construct ill-formed; it simply left the semantics undefined. The meaning should either be defined or the construct made ill-formed.
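For illustration only (not part of the issue as submitted), the construct in question is the following; implementations generally reject it, but the current wording neither defines its meaning nor makes it ill-formed:

namespace N { struct S { ~S(); }; }

void f(N::S* p) {
  p->N::~S();   // the qualifier N is a namespace-name, not a class-name
}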
Paragraph 4 of 12.5 [class.free] speaks of looking up a deallocation function. While it is an error if a placement deallocation function alone is found by this lookup, there seems to be an assumption that a placement deallocation function and a usual deallocation function can both be declared in a given class scope without creating an ambiguity. The normal mechanism by which ambiguity is avoided when functions of the same name are declared in the same scope is overload resolution; however, there is no mention of overload resolution in the description of the lookup. In fact, there appears to be nothing in the current wording that handles this case. That is, the following example appears to be ill-formed, according to the current wording:
struct S {
  void operator delete(void*);
  void operator delete(void*, int);
};
void f(S* p) {
  delete p;   // ill-formed: ambiguous operator delete
}
Suggested resolution (Mike Miller, March 2002):
I think you might get the right effect by replacing the last sentence of 12.5 [class.free] paragraph 4 with something like:
After removing all placement deallocation functions, the result of the lookup shall contain an unambiguous and accessible deallocation function.
Additional notes (October, 2012):
This issue should be reconsidered in light of paper N3396, as it would add additional overloads for allocation and deallocation functions.
The term “usual deallocation function” is defined in 3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 2; perhaps it could be used to good effect in 5.3.5 [expr.delete] paragraph 7. The specifications in 12.5 [class.free] paragraphs 4 and 5 should probably also be moved into 5.3.5 [expr.delete].
The effect of a non-static data member initializer in an anonymous union is not clearly described in the current wording. Consider the following example:
struct A {
struct B {
union {
int x = 37;
};
union {
int y = x + 47; // Well-formed?
};
} a;
};
Does an anonymous union have a constructor that applies a non-static data member initializer? Or is the initialization performed by the constructor of the class in which the anonymous union appears? In particular, is the reference to x in the initializer for y well-formed or not? If the initialization of y is performed by B's constructor, there is no problem because B::x is a member of the object being initialized. If an anonymous union has its own constructor, B::x is just a member of the containing class and is a reference to a non-static data member without an object, which is ill-formed. Implementations currently appear to take the latter interpretation and report an error for that initializer.
As a further example, consider:
union {          // #1
  union {        // #2
    union {      // #3
      int y = 32;
    };
  };
} a { };
One interpretation might be that union #3 has a non-trivial default constructor because of the initializer of y, which would give union #2 a deleted default constructor, which would make the example ill-formed.
As yet another example, consider:
union {
  union { int x; };
  union { int y = 3; };
  union { int z; };
} a { };
Assuming the current proposed resolution of issue 1502, what is the correct interpretation of this code? Is it well-formed, and if so, what initialization is performed?
Finally, consider
struct S {
  union { int x = 1; };
  union { int y = 2; };
} s{};
Does this violate the prohibition of aggregates containing member initializers in 8.5.1 [dcl.init.aggr] paragraph 1?
See also issues 1460, 1562, 1587, and 1623.
The current wording of 12.7 [class.cdtor] paragraph 4 does not describe the behavior of calling a virtual function in a mem-initializer for a base class, only for a non-static data member. Also, the changes for issue 1202 should have been, but were not, applied to the description of the behavior of typeid and dynamic_cast in paragraphs 5 and 6.
In addition, the resolution of issue 597, allowing the out-of-lifetime conversion of pointers/lvalues to non-virtual base classes, should have been, but was not, applied to paragraph 3.
Proposed resolution (August, 2013):
Change 12.7 [class.cdtor] paragraph 1 as follows:
For an object with a non-trivial constructor, referring to any non-static member or virtual base class of the object before the constructor begins execution results in undefined behavior. For an object with a non-trivial destructor, referring to any non-static member or virtual base class of the object after the destructor finishes execution results in undefined behavior. [Example:

struct X { int i; };
struct Y : X { Y(); };                         // non-trivial
struct A { int a; };
struct B : public virtual A { int j; Y y; };   // non-trivial

extern B bobj;
B* pb = &bobj;         // OK
int* p1 = &bobj.a;     // undefined, refers to base class member
int* p2 = &bobj.y.i;   // undefined, refers to member's member
A* pa = &bobj;         // undefined, upcast to a virtual base class type
B bobj;                // definition of bobj

extern X xobj;
int* p3 = &xobj.i;     // OK, X is a trivial class
X xobj;

—end example]
Change 12.7 [class.cdtor] paragraphs 3-6 as follows:
To explicitly or implicitly convert a pointer (a glvalue) referring to an object of class X to a pointer (reference) to a direct or indirect virtual base class B of X, the construction of X and the construction of all of its direct or indirect bases that directly or indirectly derive from for which B is a direct or indirect virtual base shall have started and the destruction of these classes shall not have completed, otherwise the conversion results in undefined behavior. To form a pointer to (or access the value of) a direct non-static member...
Member functions, including virtual functions (10.3 [class.virtual]), can be called during construction or destruction (12.6.2 [class.base.init]). When a virtual function is called directly or indirectly from a constructor or from a destructor, including during the construction or destruction of the class's non-static data members, and the object to which the call applies is the object (call it x) under construction or destruction, the function called is the final overrider in the constructor's or destructor's class and not one overriding it in a more-derived class. If the virtual function call uses an explicit class member access (5.2.5 [expr.ref]) and the object expression refers to the complete object of x or one of that object's base class subobjects but not to x or one of its base class subobjects, the behavior is undefined. The period of construction of an object or subobject whose type is a class type C begins immediately after the construction of all its base class subobjects is complete and concludes when the last constructor of class C exits. The period of destruction of an object or subobject whose type is a class type C begins when the destructor for C begins execution and concludes immediately before beginning the destruction of its base class subobjects. A polymorphic operation is a virtual function call (5.2.2 [expr.call]), the typeid operator (5.2.8 [expr.typeid]) when applied to a glvalue of polymorphic type, or the dynamic_cast operator (5.2.7 [expr.dynamic.cast]) when applied to a pointer to or glvalue of a polymorphic type. A polymorphic operand is the object expression in a virtual function call or the operand of a polymorphic typeid or dynamic_cast.
During the period of construction or period of destruction of an object or subobject whose type is a class type C (call it x), the effect of performing a polymorphic operation in which the polymorphic operand designates x or a base class subobject thereof is as if the dynamic type of the object were class C. [Footnote: This is true even if C is an abstract class, which cannot be the type of a most-derived object. —end footnote] If a polymorphic operand refers to an object or subobject having class type C before its period of construction begins or after its period of destruction is complete, the behavior is undefined. [Note: This includes the evaluation of an expression appearing in a mem-initializer of C in which the mem-initializer-id designates C or one of its base classes. —end note] [Example:
struct V {
  V();
  V(int);
  virtual void f();
  virtual void g();
};

struct A : virtual V {
  virtual void f();
  virtual int h();
  A() : V(h()) { }        // undefined behavior: virtual function h called
                          // before A's period of construction begins
};

struct B : virtual V {
  virtual void g();
  B(V*, A*);
};

struct D : A, B {
  virtual void f();
  virtual void g();
  D() : B((A*)this, this) { }
};

B::B(V* v, A* a) {
  f();                    // calls V::f, not A::f
  g();                    // calls B::g, not D::g
  v->g();                 // v is base of B, the call is well-defined, calls B::g
  a->f();                 // undefined behavior, a's type not a base of B
  typeid(*this);          // type_info for B
  typeid(*v);             // well-defined: *v has type V, a base of B,
                          // so its period of construction is complete;
                          // yields type_info for B
  typeid(*a);             // undefined behavior: A is not a base of B,
                          // so its period of construction has not begun
  dynamic_cast<B*>(v);    // well-defined: v has type V*, V is a base of B,
                          // so its period of construction is complete;
                          // results in this
  dynamic_cast<B*>(a);    // undefined behavior: A is not a base of B,
                          // so its period of construction has not begun
}

—end example]
The typeid operator (5.2.8 [expr.typeid]) can be used during construction or destruction (12.6.2 [class.base.init]). When typeid is used in a constructor (including the mem-initializer or brace-or-equal-initializer for a non-static data member) or in a destructor, or used in a function called (directly or indirectly) from a constructor or destructor, if the operand of typeid refers to the object under construction or destruction, typeid yields the std::type_info object representing the constructor or destructor's class. If the operand of typeid refers to the object under construction or destruction and the static type of the operand is neither the constructor or destructor's class nor one of its bases, the result of typeid is undefined.
dynamic_casts (5.2.7 [expr.dynamic.cast]) can be used during construction or destruction (12.6.2 [class.base.init]). When a dynamic_cast is used in a constructor (including the mem-initializer or brace-or-equal-initializer for a non-static data member) or in a destructor, or used in a function called (directly or indirectly) from a constructor or destructor, if the operand of the dynamic_cast refers to the object under construction or destruction, this object is considered to be a most derived object that has the type of the constructor or destructor's class. If the operand of the dynamic_cast refers to the object under construction or destruction and the static type of the operand is not a pointer to or object of the constructor or destructor's own class or one of its bases, the dynamic_cast results in undefined behavior. [Example:
struct V { virtual void f(); };
struct A : virtual V { };
struct B : virtual V { B(V*, A*); };
struct D : A, B { D() : B((A*)this, this) { } };

B::B(V* v, A* a) {
  typeid(*this);          // type_info for B
  typeid(*v);             // well-defined: *v has type V, a base of B
                          // yields type_info for B
  typeid(*a);             // undefined behavior: type A not a base of B
  dynamic_cast<B*>(v);    // well-defined: v of type V*, V base of B
                          // results in B*
  dynamic_cast<B*>(a);    // undefined behavior,
                          // a has type A*, A not a base of B
}
—end example]
Moving to always doing overload resolution for determining exception specifications and implicit deletion creates some unfortunate cycles:
template<typename T> struct A { T t; };
template <typename T> struct B { typename T::U u; };
template <typename T> struct C { C(const T&); };
template <typename T> struct D { C<B<T> > v; };
struct E { typedef A<D<E> > U; };
extern A<D<E> > a;
A<D<E> > a2(a);
If declaring the copy constructor for A<D<E>> is part of instantiating the class, then we need to do overload resolution on D<E>, and thus C<B<E>>. We consider C(const B<E>&), and therefore look to see if there's a conversion from C<B<E>> to B<E>, which instantiates B<E>, which fails because it has a field of type A<D<E>> which is already being instantiated.
Even if we wait until A<D<E>> is considered complete before finalizing the copy constructor declaration, declaring the copy constructor for B<E> will want to look at the copy constructor for A<D<E>>, so we still have the cycle.
I think that to avoid this cycle we need to short-circuit consideration of C(const T&) somehow. But I don't see how we can do that without breaking
struct F { F(F&); };
struct G;
struct G2 { G2(const G&); };
struct G { G(G&&); G(const G2&); };
struct H: F, G { };
extern H h;
H h2(h);
Here, since G's move constructor suppresses the implicit copy constructor, the defaulted H copy constructor calls G(const G2&) instead. If the move constructor did not suppress the implicit copy constructor, I believe the implicit copy constructor would always be viable, and therefore a better match than a constructor taking a reference to another type.
So perhaps the answer is to reconsider that suppression and then disqualify any constructor taking (a reference to) a type other than the constructor's class from consideration when looking up a subobject constructor in an implicitly defined constructor. (Or assignment operator, presumably.)
Another possibility would be that when we're looking for a conversion from C<B<E>> to B<E> we could somehow avoid considering, or even declaring, the B<E> copy constructor. But that seems a bit dodgy.
Additional note (October, 2010):
An explicitly declared move constructor/op= should not suppress the implicitly declared copy constructor/op=; it should cause it to be deleted instead. This should prevent a member function taking a (reference to) an un-reference-related type from being chosen by overload resolution in a defaulted member function.
And we should clarify that member functions taking un-reference-related types are not even considered during overload resolution in a defaulted member function, to avoid requiring their parameter types to be complete.
Bullet 4 of 12.8 [class.copy] paragraph 23 says that a defaulted copy/move assignment operator is defined as deleted if the class has
a non-static data member of class type M (or array thereof) that cannot be copied/moved because overload resolution (13.3 [over.match]), as applied to M's corresponding assignment operator, results in an ambiguity or a function that is deleted or inaccessible from the defaulted assignment operator
The intent of this is that if overload resolution fails to find a corresponding copy/move assignment operator that can validly be called to copy/move a member, the class's assignment operator will be defined as deleted. However, this wording does not cover an example like the following:
struct A { A(); };
struct B { B(); const A a; };

typedef B& (B::*pmf)(B&&);
pmf p = &B::operator=;
Here, the problem is simply that overload resolution failed to find a callable function, which is not one of the cases listed in the current wording. A similar problem exists for base classes in the fifth bullet.
Additional note (January, 2013):
A similar omission exists in paragraph 11 for copy constructors.
The current wording of 12.8 [class.copy] paragraph 31 refers only to constructors and destructors:
When certain criteria are met, an implementation is allowed to omit the copy/move construction of a class object, even if the constructor selected for the copy/move operation and/or the destructor for the object have side effects.
However, in some cases (e.g., auto_ptr) a conversion function is also involved in the copying, and it could presumably also have visible side effects that would be eliminated by copy elision. (Some additional contexts that may also require changes in this regard are mentioned in the resolution of issue 535.)
Additional note (September, 2012):
The default arguments of an elided constructor can also have side effects and should be mentioned, as well; however, the elision should not change the odr-use status of functions and variables appearing in those default arguments.
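For instance (a hypothetical sketch; the class T and the functions make and noisy are invented for illustration):

#include <cstdio>

int noisy() { std::printf("default argument evaluated\n"); return 0; }

struct T {
  T();
  T(const T&, int = noisy());  // copy constructor whose default argument has a side effect
};

T make();
T t = make();  // if the copy/move is elided, noisy() is not called; per the note above,
               // the elision should nonetheless not change whether noisy() is odr-used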
Copy initialization in some cases uses constructors that are not copy/move constructors (e.g., a specialization of a constructor template might be selected by overload resolution, or in copy-list-initialization, any constructor could be selected). Some ABIs require that an object of certain class types be passed in a register (effectively using the trivial copy/move constructor), even if the class has a non-trivial constructor that would be selected to do the copy. The Standard should be changed to permit this usage.
Proposed resolution (April, 2013):
Add the following as a new paragraph following 12.8 [class.copy] paragraph 1:
When an object of class type X is passed to or returned from a function, if X has a trivial, accessible copy or move constructor that is not deleted, and X has no non-trivial copy constructors, move constructors, or destructors, implementations are permitted to perform an additional copy or move of the object using the trivial constructor (even if it would not be selected by overload resolution to perform a copy or move of the object). [Note: This latitude is granted to allow objects of class type to be passed to or returned from functions in registers. —end note]
See also issue 1928.
Additional note, May, 2014:
Questions have been raised regarding this resolution. In particular, the interaction of the “extra copy” with copy elision, lifetime, and access checking context is not specified. In addition, concern has been expressed regarding the requirement that the trivial copy/move constructor be accessible. The issue is being returned to "review" status for discussion of these points.
Notes from the June, 2014 meeting:
CWG felt that the requirements for accessibility should be removed, in line with the idea that making all accesses public in a program should not change its semantics. Similarly, the prohibition of non-trivial functions was not desirable. The approach should be to recognize the extra copy as a temporary object and deal explicitly with its lifetime.
The implicit declaration of a special member function sometimes requires overload resolution in order to select a special member to use for base classes and non-static data members. This can be required to determine whether the member is or would be deleted, and whether the member is trivial, for instance. The standard appears to require that such overload resolution be performed at the end of the definition of the class, but in practice, implementations perform it lazily. This optimization appears to be non-conforming in cases where overload resolution would hit an error. In order to enable this optimization, such errors should be “no diagnostic required.”
Additional note (March, 2013):
See also issue 1360.
Notes from the September, 2013 meeting:
The problem with this approach is that hard errors (not in the immediate context) can occur, affecting portability. There are some cases, such as a virtual assignment operator in the base class, where lazy evaluation cannot be done, so it cannot be mandated.
The intent was for PODs in C++11 to be a superset of C++03 PODs. Consequently, in the following example, C should be a POD but isn't:
struct A {
  const int m;
  A& operator=(A const&) = default;    // deleted and trivial, so A is a
                                       // POD, as it would be in 2003
                                       // without this explicit op= decl
};
static_assert(__is_trivially_copyable(A), "");

struct B {
  int i;
  B& operator=(B &) & = default;       // non-trivial
  B& operator=(B const&) & = default;  // trivial
};

struct C {
  const B m;
  C& operator=(C const& r) = default;  // deleted (apparently), but non-trivial (apparently)
  /* Notionally:
  C& operator=(C const& r) {
    (*this).m.operator=(r.m);
    return *this;
  }
  */
};
static_assert(!__is_trivially_copyable(C), "");
This is because of the following text from 12.8 [class.copy] paragraph 25:
for each non-static data member of X that is of class type (or array thereof), the assignment operator selected to copy/move that member is trivial;
In this case, overload resolution fails, so no assignment operator is selected, so C::operator=(const C&) is non-trivial.
12.8 [class.copy] paragraph 31 uses the phrase, “same cv-unqualified type,” twice. This is ambiguous, potentially either requiring that the types not be cv-qualified or meaning that cv-qualification should be ignored. The latter meaning is intended and the phrase should be replaced accordingly.
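A possible illustration of the intended meaning (the class T and function f are invented):

struct T { };

const T f() {
  T t;
  return t;  // t and the return type const T have the same type apart from cv-qualification;
             // the intended reading of "same cv-unqualified type" is that this copy may
             // still be elided
}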
For an example like
struct A {
  constexpr A(int, float = 0);
  explicit A(int, int = 0);
  A(int, int, int = 0) = delete;
};

struct B : A {
  using A::A;
};
it is not clear from 12.9 [class.inhctor] what should happen: is B::B(int) constexpr and/or explicit? Is B::B(int, int) explicit and/or deleted? Although the rationale given in the note in paragraph 7,
If two using-declarations declare inheriting constructors with the same signatures, the program is ill-formed (9.2 [class.mem], 13.1 [over.load]), because an implicitly-declared constructor introduced by the first using-declaration is not a user-declared constructor and thus does not preclude another declaration of a constructor with the same signature by a subsequent using-declaration.
might be thought to apply, paragraph 1 talks about a set of candidate constructors based on their parameter types, so presumably such a set would contain only a single declaration of A::A(int) and one for A::A(int, int). The constructor characteristics of that declaration, however, are not specified.
One possibility might be to declare such a constructor, resulting from the transformation of more than one base class constructor, to be deleted, so there would be an error only if it were used.
Notes from the April, 2013 meeting:
CWG agreed with the direction of defining such constructors as deleted.
Additional note, June, 2014:
See issue 1941 for an alternative approach to this problem.
Consider the following example:
template<class T> struct S {
private:
  typedef int X;
  friend struct B;
};

struct B {
  template<class T> B(T, typename T::X);
};

struct D: B {
  using B::B;
};

S<int> s;
B b(s, 2);  // Okay, thanks to friendship.
D d(s, 2);  // Error: friendship is not inherited.
My understanding is that the construction of d fails because typename T::X expands to S<int>::X in this case, and that is not accessible from D.
However, I'm not sure that makes sense from a usability perspective. The user of D just wanted to be able to wrap class B, and the fact that friendship was granted to B to enable its constructor parameter seems like just an implementation detail that D shouldn't have to cope with.
Would it perhaps be better to suspend access checking during the instantiation of inheriting member function template declarations (not definitions), since real access problems (e.g., the selection of a private constructor) would presumably be revealed when doing the full instantiation?
Proposed resolution (February, 2014):
Change 12.9 [class.inhctor] paragraph 4 as follows:
A constructor so declared has the same access as the corresponding constructor in X. It is deleted if the corresponding constructor in X is deleted (8.4 [dcl.fct.def] 8.4.3 [dcl.fct.def.delete]). While performing template argument substitution (14.8.2 [temp.deduct]) for constructor templates so declared, name lookup, overload resolution, and access checking are performed in the context of the corresponding constructor template of X. [Example:
struct B {
  template<class T> B(T, typename T::Q);
};

class S {
  using Q = int;
  template<class T> friend B::B(T, typename T::Q);
};

struct D : B {
  using B::B;
};

B b(S(), 1);  // OK: B::B is a friend of S
D d(S(), 2);  // OK: access control is in the context of B::B
—end example] An inheriting constructor shall not be explicitly instantiated (14.7.2 [temp.explicit]) or explicitly specialized (14.7.3 [temp.expl.spec]).
Additional note (June, 2014):
This issue is being returned to "review" status in light of a suggestion for an alternative approach to the problem; see issue 1941.
A local class cannot, according to 14.5.2 [temp.mem] paragraph 2, have member templates. Presumably, then, an example like the following is ill-formed:
struct S {
  template<class T> S(T) {
    struct L: S {
      using S::S;
    };
  }
};
It is accepted by current implementations, however. Does something need to be said about this case in 12.9 [class.inhctor], either to explicitly allow or forbid it, or is the restriction in 14.5.2 [temp.mem] sufficient?
Default arguments are a common mechanism for applying SFINAE to constructors. However, default arguments are not carried over when base class constructors are inherited; instead, an overload set of constructors with various numbers of arguments is created in the derived class. This seems problematic.
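For instance (a hypothetical sketch, not taken from the issue), a base class constructor that uses a defaulted parameter for SFINAE loses that constraint in the synthesized single-parameter form:

#include <type_traits>

struct B {
  template<class T>
  B(T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr);
};

struct D : B {
  using B::B;  // under the synthesized-constructor model this introduces both
               // D(T, enable_if<...>::type*) and D(T); the single-parameter form
               // carries no default argument and hence no SFINAE constraint
};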
One possibility would be to change the mechanism for how constructors are inherited; a using-declaration might actually introduce the base class constructors into the derived class, as other using-declarations do, and if a base class constructor were selected, the remaining derived class members would be default-initialized.
This approach would also address issues 1645, as duplicated constructors would simply fail during overload resolution, and 1715, since there would be no synthesized constructors for which access checking would be needed.
The effect of such an approach on ABIs, including mangling, would need to be considered.
The Standard does not allow overloading of member functions that differ only in their return type (cf enable_if).
The normative text of 13.2 [over.dcl] relies on the term “equivalent,” for which it refers to 13.1 [over.load], but the term appears there only in non-normative text. The resolution of this issue should be coordinated with that of issue 1668.
Footnote 127 of 13.3.1.1.1 [over.call.func] paragraph 3 reads,
An implied object argument must be contrived to correspond to the implicit object parameter attributed to member functions during overload resolution. It is not used in the call to the selected function. Since the member functions all have the same implicit object parameter, the contrived object will not be the cause to select or reject a function.
It is not true that “the member functions all have the same implicit object parameter.” This statement does not take into account member functions brought into the class by using-declarations or cv-qualifiers and ref-qualifiers on the non-static member functions:
struct B {
  char f();        // B &
};

struct D : B {
  using B::f;
  long f();        // D &

  char g() const;  // D const &
  long g();        // D &

  char h() &;      // D &
  long h() &&;     // D &&
};

int main() {
  // D::f() has better match than B::f()
  decltype(D().f()) *p1 = (long *)0;

  // D::g() has better match than D::g() const
  decltype(D().g()) *p2 = (long *)0;

  // D::h() & is not viable function
  // D::h() && is viable function
  decltype(D().h()) *p3 = (long *)0;
}
The value category of a contrived object expression is not specified by the rules and, probably, cannot be properly specified in presence of ref-qualifiers, so the statement “the contrived object will not be the cause to select or reject a function” should be normative rather than informative:
struct X
{
static void f(double) {}
void f(int) & {}
void f(int) && {}
};
int main()
{
X::f(0); // ???
}
It's not clear how overloading and partial ordering handle non-deduced pairs of corresponding arguments. For example:
template<typename T> struct A { typedef char* type; };

template<typename T> char* f1(T, typename A<T>::type);    // #1
template<typename T> long* f1(T*, typename A<T>::type*);  // #2

long* p1 = f1(p1, 0);  // #3
I thought that #3 is ambiguous, but different compilers disagree on that. Comeau C/C++ 4.3.3 (EDG 3.0.3) accepted the code, GCC 3.2 and BCC 5.5 selected #1, while VC7.1+ reports an ambiguity.
I intuitively thought that the second pair should prevent overload resolution from triggering partial ordering, since both arguments are non-deduced and have different types, (char*, char**), just like in the following:
template<typename T> char* f2(T, char*);    // #3
template<typename T> long* f2(T*, char**);  // #4

long* p2 = f2(p2, 0);  // #5
In this case all the compilers I checked found #5 to be ambiguous. The Standard and DR 214 are not clear about how partial ordering handles such cases.
I think that overload resolution should not trigger partial ordering (in step 13.3.3 [over.match.best]/1/5) if some candidates have non-deduced pairs with different (specialized) types. At this stage the arguments have already been adjusted (e.g., array to pointer), so there is no need to mention that. In case one of the arguments is non-deduced, partial ordering should consider only the type from the specialization:
template<typename T> struct B { typedef T type; };

template<typename T> char* f3(T, T);                    // #7
template<typename T> long* f3(T, typename B<T>::type);  // #8

char* p3 = f3(p3, p3);  // #9
According to my reasoning, #9 should yield an ambiguity, since the second pair is (T, long*). The second type (i.e., long*) was taken from the specialization candidate of #8. EDG and GCC accepted the code; VC and BCC found an ambiguity.
John Spicer: There may (or may not) be an issue concerning whether nondeduced contexts are handled properly in the partial ordering rules. In general, I think nondeduced contexts work, but we should walk through some examples to make sure we think they work properly.
Rani's description of the problem suggests that he believes that partial ordering is done on the specialized types. This is not correct. Partial ordering is done on the templates themselves, independent of type information from the specialization.
Notes from October 2004 meeting:
John Spicer will investigate further to see if any action is required.
(See also issue 885.)
In determining the implicit conversion sequence for an initializer list argument passed to a reference parameter, the intent is that a temporary of the appropriate type will be created and bound to the reference, as reflected in 13.3.3.1.5 [over.ics.list] paragraph 5:
Otherwise, if the parameter is a reference, see 13.3.3.1.4 [over.ics.ref]. [Note: The rules in this section will apply for initializing the underlying temporary for the reference. —end note]
However, 13.3.3.1.4 [over.ics.ref] deals only with expression arguments, not initializer lists:
When a parameter of reference type binds directly (8.5.3 [dcl.init.ref]) to an argument expression, the implicit conversion sequence is the identity conversion, unless the argument expression has a type that is a derived class of the parameter type, in which case the implicit conversion sequence is a derived-to-base Conversion (13.3.3.1 [over.best.ics])... If the parameter binds directly to the result of applying a conversion function to the argument expression, the implicit conversion sequence is a user-defined conversion sequence (13.3.3.1.2 [over.ics.user]), with the second standard conversion sequence either an identity conversion or, if the conversion function returns an entity of a type that is a derived class of the parameter type, a derived-to-base Conversion.
When a parameter of reference type is not bound directly to an argument expression, the conversion sequence is the one required to convert the argument expression to the underlying type of the reference according to 13.3.3.1 [over.best.ics]. Conceptually, this conversion sequence corresponds to copy-initializing a temporary of the underlying type with the argument expression. Any difference in top-level cv-qualification is subsumed by the initialization itself and does not constitute a conversion.
(Note in particular that the reference binding refers to 8.5.3 [dcl.init.ref], which also does not handle initializer lists, and not to 8.5.4 [dcl.init.list].)
Either 13.3.3.1.4 [over.ics.ref] needs to be revised to handle binding references to initializer list arguments or 13.3.3.1.5 [over.ics.list] paragraph 5 needs to be clearer on how the expression specification is intended to be applied to initializer lists.
The current rules make an example like
template<class T, size_t N> void foo(T (&)[N]);
template<class T> void foo(T *t);

int arr[3]{1, 2, 3};
foo(arr);
ambiguous, even though the first is an identity match and the second requires an lvalue transformation. Is this desirable?
Static data members of template classes and of nested classes of template classes are not themselves templates but receive much the same treatment as templates. For instance, 14 [temp] paragraph 1 says that templates are only "classes or functions" but implies that "a static data member of a class template or of a class nested within a class template" is defined using the template-declaration syntax.
There are many places in the clause, however, where static data members of one sort or another are overlooked. For instance, 14 [temp] paragraph 6 allows static data members of class templates to be declared with the export keyword. I would expect that static data members of (non-template) classes nested within class templates could also be exported, but they are not mentioned here.
Paragraph 8, however, overlooks static data members altogether and deals only with "templates" in defining the effect of the export keyword; there is no description of the semantics of defining a static data member of a template to be exported.
These are just two instances of a systematic problem. The entire clause needs to be examined to determine which statements about "templates" apply to static data members, and which statements about "static data members of class templates" also apply to static data members of non-template classes nested within class templates.
(The question also applies to member functions of template classes; see issue 217, where the phrase "non-template function" in 8.3.6 [dcl.fct.default] paragraph 4 is apparently intended not to include non-template member functions of template classes. See also issue 108, which would benefit from understanding nested classes of class templates as templates. Also, see issue 249, in which the usage of the phrase "member function template" is questioned.)
Notes from the 4/02 meeting:
Daveed Vandevoorde will propose appropriate terminology.
The type adjustment of template non-type parameters described in 14.1 [temp.param] paragraph 8 appears to be underspecified. For example, implementations vary in their treatment of
template<typename T, T[T::size]> struct A {};
int dummy;
A<int, &dummy> a;
and
template<typename T, T[1]> struct A;
template<typename T, T*> struct A {};
int dummy;
A<int, &dummy> a;
See also issues 1322 and 1668.
Default function arguments are instantiated only when needed. Is the same true of default template arguments? For example, is the following well-formed?
#include <type_traits>

template<class T>
struct X {
  template<class U = typename T::type>
  static void foo(int){}
  static void foo(...){}
};

int main(){
  X<std::enable_if<false>>::foo(0);
}
Also, is the effect on lookup the same? E.g.,
struct S {
  template<typename T = U> void f();
  struct U {};
};
According to 14.2 [temp.names] paragraph 4,
When the name of a member template specialization appears after . or -> in a postfix-expression or after a nested-name-specifier in a qualified-id, and the object expression of the postfix-expression is type-dependent or the nested-name-specifier in the qualified-id refers to a dependent type, but the name is not a member of the current instantiation (14.6.2.1 [temp.dep.type]), the member template name must be prefixed by the keyword template.
In other words, the template keyword is only required when forming a template-id. However, current compilers reject an example like:
template<typename T, template<typename> class U = T::X> struct A;
and require the template keyword before X. Should the rule be amended to require the template keyword in cases like this?
The relationship between declarations and definitions of variable templates is not clear. For example:
template<typename T> auto var0 = T();    // #1a.
template<typename T> extern T var0;      // #1b.

template<typename T> T var1;             // #2a.
template<typename T> extern auto var1;   // #2b.
template<typename T> extern T var1;      // #2c.
template<typename T> T var1;             // #2d.
Questions:
When is a variable template declaration a definition and when a non-defining declaration?
What declarations are valid?
Should auto declarations be allowed?
To what extent, if any, do these involve type matching?
How are types matched, especially in the presence of auto?
Is it permitted for a variable template to have an unnamed type?
With the new core rules regarding variadic pack expansions, the library specification of the traits template common_type is now broken; the reason is that it is defined as a series of specializations of the primary template
template <class ...T> struct common_type;
The broken one is this pair:
template <class T, class U>
struct common_type<T, U> {
  typedef decltype(true ? declval<T>() : declval<U>()) type;
};

template <class T, class U, class... V>
struct common_type<T, U, V...> {
  typedef typename common_type<typename common_type<T, U>::type, V...>::type type;
};
With the new rules both specializations would now be ambiguous for an instantiation like common_type<X, Y>.
(See also issue 1395.)
Notes from the October, 2012 meeting:
It is possible that 14.5.5.2 [temp.class.order] may resolve this problem.
The status of an example like the following is not clear:
template<class> struct x {
  template<class T> friend bool operator==(x<T>, x<T>) { return false; }
};

int main() {
  x<int> x1;
  x<double> x2;
  x1 == x1;
  x2 == x2;
}
Such a friend definition is permitted by 14.5.4 [temp.friend] paragraph 2:
A friend function template may be defined within a class or class template...
Paragraph 4 appears to be related, but deals only with friend functions, not friend function templates:
When a function is defined in a friend function declaration in a class template, the function is instantiated when the function is odr-used. The same restrictions on multiple declarations and definitions that apply to non-template function declarations and definitions also apply to these implicit definitions.
During the discussion of issue 1804, it was noted that the process of determining whether a member of an explicit or partial specialization corresponds to a member of the primary template is not well specified. In particular, it should be clarified that the primary template should not be instantiated during this process; instead, the template arguments from the specialization should simply be substituted into the member declaration.
The rationale for the restriction in 14.5.5 [temp.class.spec] paragraph 8, first bullet is not clear:
A partially specialized non-type argument expression shall not involve a template parameter of the partial specialization except when the argument expression is a simple identifier. [Example:
template <int I, int J> struct A {};
template <int I> struct A<I+5, I*2> {};  // error

template <int I, int J> struct B {};
template <int I> struct B<I, I> {};      // OK
—end example]
In the example, it's clear that I is non-deducible, but this rule prevents plausible uses like:
template <int I, int J> struct A {};
template <int I> struct A<I, I*2> {};
The Standard appears to be silent on whether the types of non-type template arguments in a partial specialization must be the same as those of the primary template or whether conversions are permitted. For example,
template<char...> struct char_values {};

template<int C1, char C3> struct char_values<C1, 12, C3> {
  static const unsigned value = 1;
};

int check0[char_values<1, 12, 3>::value == 1? 1 : -1];
The closest the current wording comes to dealing with this question is 14.5.5 [temp.class.spec] paragraph 8 bullet 1:
A partially specialized non-type argument expression shall not involve a template parameter of the partial specialization except when the argument expression is a simple identifier.
In this example, one might think of the first template argument in the partial specialization as (char)C1, which would violate the requirement, but that reasoning is tenuous.
It would be reasonable to require the types to match in cases like this. If this kind of usage is allowed it could get messy if the primary template were int... and the partial specialization had a parameter that was char because not all of the possible values from the primary template could be represented in the parameter of the partial specialization. A similar issue exists if the primary template takes signed char and the partial specialization takes unsigned int.
There is implementation variance in the treatment of this example.
It appears that partial specializations of variable templates are intended to be supported, as 14.3.3 [temp.arg.template] paragraph 2 says,
Any partial specializations (14.5.5 [temp.class.spec]) associated with the primary class template or primary variable template are considered when a specialization based on the template template-parameter is instantiated.
However, there is no explicit specification for how they are to be handled, and the wording in 14.5.5 [temp.class.spec] and its subsections explicitly applies only to partial specializations of class templates.
In the following example, the template parameter in the partial specialization is non-deducible:
template <class T> struct A { typedef T U; };
template <class T> struct C { };
template <class T> struct C<typename A<T>::U> { };
Several compilers issue errors for this case, but there appears to be nothing in the Standard that would make this ill-formed; it simply seems that the partial specialization will never be matched, so the primary template will be used for all specializations. Should it be ill-formed?
(See also issue 1246.)
Notes from the April, 2006 meeting:
It was noted that there are similar issues for constructors and conversion operators with non-deducible parameters, and that they should probably be dealt with similarly.
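For instance (a hypothetical sketch), a constructor whose template parameter appears only in a non-deduced context can never be used, since constructors cannot be given explicit template arguments:

template<typename T> struct Id { typedef T type; };

struct X {
  template<typename T> X(typename Id<T>::type);  // T appears only in a non-deduced context,
                                                 // and a constructor cannot be supplied
                                                 // explicit template arguments, so this
                                                 // constructor can never be invoked
};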
According to 14.5.5.3 [temp.class.spec.mfunc] paragraph 2,
If a member template of a class template is partially specialized, the member template partial specializations are member templates of the enclosing class template; if the enclosing class template is instantiated (14.7.1 [temp.inst], 14.7.2 [temp.explicit]), a declaration for every member template partial specialization is also instantiated as part of creating the members of the class template specialization.
Does this imply that only partial specializations of member templates that are declared before the enclosing class is instantiated are considered? For example, in
template<typename A> struct X {
  template<typename B> struct Y;
};

template struct X<int>;

template<typename A> template<typename B> struct X<A>::Y<B*> { int n; };

int k = X<int>::Y<int*>().n;
is the last line valid? There is implementation variance on this point. Similarly, for an example like
template<typename A> struct Outer {
  template<typename B, typename C> struct Inner;
};

Outer<int> outer;

template<typename A> template<typename B> struct Outer<A>::Inner<typename A::error, B> {};
at what point, if at all, is the declaration of the partial specialization instantiated? Again, there is implementation variance in the treatment of this example.
Notes from the February, 2014 meeting:
CWG decided that partial specialization declarations should be instantiated only when needed to determine whether the partial specialization matches or not.
Issue 1244 was resolved by changing the example in 14.4 [temp.type] paragraph 1 from
template<template<class> class TT> struct X { };
template<class> struct Y { };
template<class T> using Z = Y<T>;
X<Y> y;
X<Z> z;
to
template<class T> struct X { };
template<class> struct Y { };
template<class T> using Z = Y<T>;
X<Y<int> > y;
X<Z<int> > z;
In fact, the original intent was that the example should have been correct as written; however, the normative wording to make it so was missing. The current wording of 14.5.7 [temp.alias] deals only with the equivalence of a specialization of an alias template with the type-id after substitution. Wording needs to be added specifying under what circumstances an alias template itself is equivalent to a class template.
Proposed resolution (September, 2012):
Add the following as a new paragraph following 14.5.7 [temp.alias] paragraph 2:
When the type-id in the declaration of alias template (call it A) consists of a simple-template-id in which the template-argument-list consists of a list of identifiers naming each template-parameter of A exactly once in the same order in which they appear in A's template-parameter-list, the alias template is equivalent to the template named in the simple-template-id (call it T) if A and T have the same number of template-parameters. [Footnote: This rule is transitive: if an alias template A is equivalent to another alias template B that is equivalent to a class template C, then A is also equivalent to C, and A and B are also equivalent to each other. —end footnote] [Example:
template<typename T, typename U = T> struct A;

template<typename V, typename W> using B = A<V, W>;         // equivalent to A
template<typename V, typename W> using C = A<V>;            // not equivalent to A:
                                                             //   not all parameters used
template<typename V> using D = A<V>;                        // not equivalent to A:
                                                             //   different number of parameters
template<typename V, typename W> using E = A<W, V>;         // not equivalent to A:
                                                             //   template-arguments in wrong order
template<typename V, typename W = int> using F = A<V, W>;   // equivalent to A:
                                                             //   default arguments not considered
template<typename V, typename W> using G = A<V, W>;         // equivalent to A and B
template<typename V, typename W> using H = E<V, W>;         // equivalent to E
template<typename V, typename W> using I = A<V, typename W::type>;
                                                             // not equivalent to A:
                                                             //   argument not identifier
—end example]
Change 14.4 [temp.type] paragraph 1 as follows:
Two template-ids refer to the same class or function if
...
their corresponding template template-arguments refer to the same or equivalent (14.5.7 [temp.alias]) templates.
[Example:
...declares x2 and x3 to be of the same type. Their type differs from the types of x1 and x4.
template<class T template<class> class TT> struct X { };
template<class> struct Y { };
template<class T> using Z = Y<T>;
X<Y<int> Y> y;
X<Z<int> Z> z;

declares y and z to be of the same type. —end example]
There appears to be no requirement that a redeclaration of an alias template must be equivalent to the earlier one. An alias-declaration is not a definition (3.1 [basic.def] paragraph 2), so presumably an alias template declaration is also not a definition and thus the ODR does not apply.
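A minimal illustration of the question (the alias A is invented):

template<typename T> using A = T*;
template<typename T> using A = T&;  // is anything in the Standard violated by this
                                    // non-equivalent redeclaration of the alias template?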
Originally, a pack expansion could not expand into a fixed-length template parameter list, but this was changed in N2555. This works fine for most templates, but causes issues with alias templates.
In most cases, an alias template is transparent; when it's used in a template we can just substitute in the dependent template arguments. But this doesn't work if the template-id uses a pack expansion for non-variadic parameters. For example:
template<class T, class U, class V> struct S {};
template<class T, class V> using A = S<T, int, V>;
template<class... Ts> void foo(A<Ts...>);
There is no way to express A<Ts...> in terms of S, so we need to hold onto the A until we have the Ts to substitute in, and therefore it needs to be handled in mangling.
Currently, EDG and Clang reject this testcase, complaining about too few template arguments for A. G++ did as well, but I thought that was a bug. However, on the ABI list John Spicer argued that it should be rejected.
(See also issue 1558.)
Notes from the October, 2012 meeting:
The consensus of CWG was that this usage should be prohibited, disallowing use of an alias template when a dependent argument can't simply be substituted directly into the type-id.
Additional note, April, 2013:
For another example, consider:
template<class... x> class list{};
template<class a, class... b> using tail=list<b...>;
template <class...T> void f(tail<T...>);

int main() {
  f<int,int>({});
}
There is implementation variance in the handling of this example.
The interaction of alias templates and access control is not clear from the current wording of 14.5.7 [temp.alias]. For example:
template <class T> using foo = typename T::foo;
class B {
typedef int foo;
friend struct C;
};
struct C {
foo<B> f; // Well-formed?
};
Is the substitution of B::foo for foo<B> done in the context of the befriended class C, making the reference well-formed, or is the access determined independently of the context in which the alias template specialization appears?
If the answer to this question is that the access is determined independently from the context, care must be taken to ensure that an access failure is still considered to be “in the immediate context of the function type” (14.8.2 [temp.deduct] paragraph 8) so that it results in a deduction failure rather than a hard error.
Notes from the October, 2012 meeting:
The consensus of CWG was that instantiation (lookup and access) for alias templates should be as for other templates, in the definition context rather than in the context where they are used. They should still be expanded immediately, however.
Additional note (February, 2014):
A related problem is raised by the definition of std::enable_if_t (20.10.2 [meta.type.synop]):
template <bool b, class T = void> using enable_if_t = typename enable_if<b,T>::type;
If b is false, there will be no type member. The intent is that such a substitution failure is to be considered as being “in the immediate context” where the alias template specialization is used, but the existing wording does not seem to accomplish that goal.
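For instance (a hedged sketch; the functions f are invented), the intended behavior is that the failure acts as an ordinary deduction failure:

#include <type_traits>

template <class T>
std::enable_if_t<std::is_integral<T>::value> f(T);  // #1
void f(...);                                        // #2

int main() {
  f(1.0);  // intended: the substitution failure inside enable_if_t removes #1 and #2 is
           // called; the concern above is that the failure occurs in the alias template's
           // type-id rather than "in the immediate context" of the deduction
}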
Consider the following example:
template <class T> struct Outer {
  struct Inner {
    Inner* self();
  };
};

template <class T> Outer<T>::Inner* Outer<T>::Inner::self() { return this; }
According to 14.6 [temp.res] paragraph 3 (before the salient wording was inadvertently removed, see issue 559),
A qualified-id that refers to a type and in which the nested-name-specifier depends on a template-parameter (14.6.2 [temp.dep]) but does not refer to a member of the current instantiation (14.6.2.1 [temp.dep.type]) shall be prefixed by the keyword typename to indicate that the qualified-id denotes a type, forming a typename-specifier.
Because Outer<T>::Inner is a member of the current instantiation, the Standard does not currently require that it be prefixed with typename when it is used in the return type of the definition of the self() member function. However, it is difficult to parse this definition correctly without knowing that the return type is, in fact, a type, which is what the typename keyword is for. Should the Standard be changed to require typename in such contexts?
According to 14.6 [temp.res] paragraph 3,
When a qualified-id is intended to refer to a type that is not a member of the current instantiation (14.6.2.1 [temp.dep.type]) and its nested-name-specifier refers to a dependent type, it shall be prefixed by the keyword typename, forming a typename-specifier. If the qualified-id in a typename-specifier does not denote a type, the program is ill-formed.
The intent of the programmer cannot form the basis for a compiler determining whether to issue a diagnostic or not.
Suggested resolution: Let N be a qualified-id with a nested-name-specifier that denotes a dependent type. If N is not prefixed by the keyword typename, N shall refer to a member of the current instantiation or it shall not refer to a type.
typename-specifier:
typename nested-name-specifier identifier
typename nested-name-specifier templateopt simple-template-id

If the qualified-id in a typename-specifier does not denote a type, the program is ill-formed.
(See also issues 590 and 591.)
According to 14.6 [temp.res] paragraph 8,
No diagnostic shall be issued for a template for which a valid specialization can be generated.
One sentence later, it says,
If every valid specialization of a variadic template requires an empty template parameter pack, the template is ill-formed, no diagnostic required.
This appears to be a contradiction: in the latter case, there is postulated to exist a “valid” specialization (with an empty pack expansion), for which a diagnostic might or might not be issued. The first quoted sentence, however, forbids issuing a diagnostic for a template that has at least one valid specialization.
According to 14.6.1 [temp.local] paragraph 1,
Like normal (non-template) classes, class templates have an injected-class-name (Clause 9 [class]). The injected-class-name can be used as a template-name or a type-name. When it is used with a template-argument-list, as a template-argument for a template template-parameter, or as the final identifier in the elaborated-type-specifier of a friend class template declaration, it refers to the class template itself. Otherwise, it is equivalent to the template-name followed by the template-parameters of the class template enclosed in <>.
The intent is that a < following such an injected-class-name is to be interpreted as the start of a template-argument-list (and an error if the following tokens do not constitute a valid template-argument-list), but that is not said explicitly.
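A possible illustration of the intended interpretation (the example is invented):

template<int N> struct A {
  void f() {
    A<N> a1;     // injected-class-name followed by a template-argument-list
    A < N > a2;  // intended: the '<' still begins a template-argument-list, so this is a
                 // declaration of a variable of type A<N>, not a comparison expression
  }
};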
Use of the injected-class-name of a class template with a template-argument-list that relies on default arguments is not clearly specified in the current wording of the Standard. In particular, according to 14.1 [temp.param] paragraph 10,
The set of default template-arguments available for use with a template declaration or definition is obtained by merging the default arguments from the definition (if in scope) and all declarations in scope in the same way default function arguments are (8.3.6 [dcl.fct.default]).
However, the injected-class-name hides the template declarations, so it is not clear whether the default arguments are available at that point or not.
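For instance (an invented example):

template<typename T, typename U = int> struct A {
  A<T>* p;  // the injected-class-name is used with a template-argument-list that relies on
            // the default argument for U; the question is whether that default is visible
            // here, given that the injected-class-name hides the template declaration
};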
The resolution of issue 1321 changed the term “dependent name” to apply only to unqualified-ids, presumably on the basis that only unqualified-ids affect the lookup set. However, the rule from 14.5.6.1 [temp.over.link] paragraph 5,
For determining whether two dependent names (14.6.2 [temp.dep]) are equivalent, only the name itself is considered, not the result of name lookup in the context of the template. If multiple declarations of the same function template differ in the result of this name lookup, the result for the first declaration is used.
should apply to non-dependent qualified-ids naming functions called with dependent arguments, as well.
There should also be a statement that the name of a member of an unknown specialization is a dependent name and so should fall under the rules of 14.6.4 [temp.dep.res] and not 14.6.3 [temp.nondep].
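A hedged sketch of the qualified-id case described above (the namespace and functions are invented):

namespace N { void g(int); }

template<typename T> auto f(T t) -> decltype(N::g(t));  // #1

namespace N { void g(double); }

template<typename T> auto f(T t) -> decltype(N::g(t));  // #2: under the rule quoted above,
                                                         // applied to the qualified-id N::g,
                                                         // #2 would redeclare #1 even though
                                                         // the results of name lookup differ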
The note in 14.6.2.1 [temp.dep.type] paragraph 7 reads,
[Note: the result of name lookup differs only when the member of the current instantiation was found in a non-dependent base class of the current instantiation and a member with the same name is also introduced by the substitution for a dependent base class of the current instantiation. —end note]
However, this is not correct. Consider the following example:
struct Y { int X; };

template<typename T> struct A : Y {
  enum B : int;
  void f() { A::X; }  // finds Y::X here!
};

template<typename T> enum A<T>::B : int {
  X                   // introduces member A::X into A<T>!
};

void g() {
  A<int> a;
  a.f();
}
A::X is a member of the current instantiation, so paragraph 7 requires it to be looked up again when instantiating and to give a diagnostic if the lookup differs from the lookup in the definition context. The note incorrectly indicates that this can only happen if the conflicting name was introduced by a dependent base class.
Proposed resolution (August, 2011):
Change 14.6.2.1 [temp.dep.type] paragraph 7 as follows:
...If the result of this lookup differs from the result of name lookup in the template definition context, name lookup is ambiguous. [Note: the result of name lookup differs only when the member of the current instantiation was found in a non-dependent base class of the current instantiation and a member with the same name is also introduced by the substitution for a dependent base class of the current instantiation. —end note] [Example:
struct A { int m; };
struct B { int m; };

template<typename T> struct C : A, T {
  int f() { return this->m; }  // finds A::m in the template definition context
};

int g(C<B> cb) {
  return cb.f();  // error: finds both A::m and B::m in the template instantiation context
}
—end example]
Notes from the December, 2011 teleconference:
Changes to the exposition were suggested and the issue returned to "drafting" status.
According to 14.6.2.1 [temp.dep.type] paragraph 8, a type is dependent (among other things) if it is
a simple-template-id in which either the template name is a template parameter or any of the template arguments is a dependent type or an expression that is type-dependent or value-dependent
This applies to alias template specializations, even if the resulting type does not depend on the template argument:
struct B { typedef int type; };
template<typename> using foo = B;

template<typename T> void f() {
  foo<T>::type * x;  // error: typename required
}
Is a change to the rules for cases like this warranted?
Notes from the October, 2012 meeting:
CWG agreed that no typename should be required in this case. In some ways, an alias template specialization is like the current instantiation and can be known at template definition time.
The correct handling of an example like the following is unclear:
template<typename T> struct A {
  struct B: A { };
};
A type used as a base must be complete (10 [class.derived] paragraph 2). The fact that the base class in this example is the current instantiation could be interpreted as indicating that it should be available for lookup, and thus the normal rule should apply, as members declared after the nested class would not be visible.
On the other hand, 14.6.2 [temp.dep] paragraph 3 says,
In the definition of a class or class template, if a base class depends on a template-parameter, the base class scope is not examined during unqualified name lookup either at the point of definition of the class template or member or during an instantiation of the class template or member.
This wording refers not to a dependent type, which would permit lookup in the current instantiation, but simply to a type that “depends on a template-parameter,” and the current instantiation is such a type.
Implementations vary on the handling of this example.
(See also issue 1526 for another case related to the distinction between a “dependent type” and a “type that depends on a template-parameter.”)
Notes from the October, 2012 meeting:
CWG determined that the example should be ill-formed.
14.6.2.3 [temp.dep.constexpr] paragraph 1 begins,
Except as described below, a constant expression is value-dependent if...
However, this terminology is misleading, because “constant expression” is now defined in terms of evaluation, and a value-dependent expression cannot be evaluated.
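For instance (an invented example):

template<int N> struct A {
  static const int value = N + 1;  // N + 1 is value-dependent and so cannot be evaluated
                                   // before instantiation, which is why calling it a
                                   // "constant expression" at that point is misleading
};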
template <class T> class Foo {
public:
  typedef int Bar;
  Bar f();
};

template <class T> typename Foo<T>::Bar Foo<T>::f() { return 1; }

In the class template definition, the declaration of the member function is interpreted as:

int Foo<T>::f();

In the definition of the member function that appears outside of the class template, the return type is not known until the member function is instantiated. Must the return type of the member function be known when this out-of-line definition is seen (in which case the definition above is ill-formed)? Or is it OK to wait until the member function is instantiated to see whether the return type matches the return type in the class template definition (in which case the definition above is well-formed)?
Suggested resolution: (John Spicer)
My opinion (which I think matches several posted on the reflector recently) is that the out-of-class definition must match the declaration in the template. In your example they do match, so it is well formed.
I've added some additional cases that illustrate cases that I think either are allowed or should be allowed, and some cases that I don't think are allowed.
template <class T> class A { typedef int X; };

template <class T> class Foo {
public:
  typedef int Bar;
  typedef typename A<T>::X X;
  Bar f();
  Bar g1();
  int g2();
  X h();
  X i();
  int j();
};

// Declarations that are okay
template <class T> typename Foo<T>::Bar Foo<T>::f() { return 1; }
template <class T> typename Foo<T>::Bar Foo<T>::g1() { return 1; }
template <class T> int Foo<T>::g2() { return 1; }
template <class T> typename Foo<T>::X Foo<T>::h() { return 1; }

// Declarations that are not okay
template <class T> int Foo<T>::i() { return 1; }
template <class T> typename Foo<T>::X Foo<T>::j() { return 1; }

In general, if you can match the declarations up using only information from the template, then the declaration is valid.
Declarations like Foo::i and Foo::j are invalid because for a given instance of A<T>, A<T>::X may not actually be int if the class is specialized.
This is not a problem for Foo::g1 and Foo::g2 because for any instance of Foo<T> that is generated from the template you know that Bar will always be int. If an instance of Foo is specialized, the template member definitions are not used so it doesn't matter whether a specialization defines Bar as int or not.
Implementations differ in their treatment of the following code:
template <class T> struct A {
  typename T::X x;
};

template <class T> struct B {
  typedef T* X;
  A<B> a;
};

int main () {
  B<int> b;
}
Some implementations accept it. At least one rejects it because the instantiation of A<B<int> > requires that B<int> be complete, and it is not at the point at which A<B<int> > is being instantiated.
Erwin Unruh:
In my view the program is ill-formed. My reasoning:
So each class needs the other to be complete.
The problem can be seen much easier if you replace the typedef with
typedef T (*X) [sizeof(B::a)];
Now you have a true recursion. The compiler cannot easily distinguish between a true recursion and a potential recursion.
John Spicer:
Using a class to form a qualified name does not require the class to be complete, it only requires that the named member already have been declared. In other words, this kind of usage is permitted:
class A { typedef int B; A::B ab; };
In the same way, once B has been declared in A, it is also visible to any template that uses A through a template parameter.
The standard could be more clear in this regard, but there are two notes that make this point. Both 3.4.3.1 [class.qual] and 5.1.1 [expr.prim.general] paragraph 7 contain a note that says "a class member can be referred to using a qualified-id at any point in its potential scope (3.3.7 [basic.scope.class])." A member's potential scope begins at its point of declaration.
In other words, a class has three states: incomplete, being completed, and complete. The standard permits a qualified name to be used once a name has been declared. The quotation of the notes about the potential scope was intended to support that.
So, in the original example, class A does not require the type of T to be complete, only that it have already declared a member X.
Bill Gibbons:
The template and non-template cases are different. In the non-template case the order in which the members become declared is clear. In the template case the members of the instantiation are conceptually all created at the same time. The standard does not say anything about trying to mimic the non-template case during the instantiation of a class template.
Mike Miller:
I think the relevant specification is 14.6.4.1 [temp.point] paragraph 3, dealing with the point of instantiation:
For a class template specialization... if the specialization is implicitly instantiated because it is referenced from within another template specialization, if the context from which the specialization is referenced depends on a template parameter, and if the specialization is not instantiated previous to the instantiation of the enclosing template, the point of instantiation is immediately before the point of instantiation of the enclosing template. Otherwise, the point of instantiation for such a specialization immediately precedes the namespace scope declaration or definition that refers to the specialization.
That means that the point of instantiation of A<B<int> > is before that of B<int>, not in the middle of B<int> after the declaration of B::X, and consequently a reference to B<int>::X from A<B<int> > is ill-formed.
To put it another way, I believe John's approach requires that there be an instantiation stack, with the results of partially-instantiated templates on the stack being available to instantiations above them. I don't think the Standard mandates that approach; as far as I can see, simply determining the implicit instantiations that need to be done, rewriting the definitions at their respective points of instantiation with parameters substituted (with appropriate "forward declarations" to allow for non-instantiating references), and compiling the result normally should be an acceptable implementation technique as well. That is, the implicit instantiation of the example (using, e.g., B_int to represent the generated name of the B<int> specialization) could be something like
struct B_int;

struct A_B_int {
  B_int::X x;  // error, incomplete type
};

struct B_int {
  typedef int* X;
  A_B_int a;
};
Notes from 10/01 meeting:
This was discussed at length. The consensus was that the template case should be treated the same as the non-template class case in terms of the order in which members get declared/defined and classes get completed.
Proposed resolution:
In 14.6.4.1 [temp.point] paragraph 3 change:
the point of instantiation is immediately before the point of instantiation of the enclosing template. Otherwise, the point of instantiation for such a specialization immediately precedes the namespace scope declaration or definition that refers to the specialization.
To:
the point of instantiation is the same as the point of instantiation of the enclosing template. Otherwise, the point of instantiation for such a specialization immediately precedes the nearest enclosing declaration. [Note: The point of instantiation is still at namespace scope but any declarations preceding the point of instantiation, even if not at namespace scope, are considered to have been seen.]
Add following paragraph 3:
If an implicitly instantiated class template specialization, class member specialization, or specialization of a class template references a class, class template specialization, class member specialization, or specialization of a class template containing a specialization reference that directly or indirectly caused the instantiation, the requirements of completeness and ordering of the class reference are applied in the context of the specialization reference.
and the following example
template <class T> struct A {
  typename T::X x;
};

struct B {
  typedef int X;
  A<B> a;
};

template <class T> struct C {
  typedef T* X;
  A<C> a;
};

int main () {
  C<int> c;
}
Notes from the October 2002 meeting:
This needs work. Moved back to drafting status.
C++11 expanded the lookup rules for dependent function calls (14.6.4.2 [temp.dep.candidate] paragraph 1 bullet 2) to include functions with internal linkage; previously only functions with external linkage were considered. However, 14.6.4.1 [temp.point] paragraph 6 still says,
The instantiation context of an expression that depends on the template arguments is the set of declarations with external linkage declared prior to the point of instantiation of the template specialization in the same translation unit.
Presumably this wording was overlooked and should be harmonized with the new specification.
The current wording of 14.6.4.1 [temp.point] does not define the point of instantiation of a variable template specialization. Presumably replacing the references to “static data member of a class template” with “variable template” in paragraphs 1 and 8 would be sufficient.
Many statements in the Standard apply only to templates, for example, 14.6 [temp.res] paragraph 8:
If no valid specialization can be generated for a template definition, and that template is not instantiated, the template definition is ill-formed, no diagnostic required.
This clearly should apply to non-template member functions of class templates, not just to templates per se. Terminology should be established to refer to these generic entities that are not actually templates.
Additional notes (August, 2012):
Among the generic entities that should be covered by such a term are default function arguments, as they can be instantiated independently. If issue 1330 is resolved as expected, exception-specifications should also be covered by the same term.
See also issue 1484.
Three points have been raised where the wording in 14.7.1 [temp.inst] may not be sufficiently clear.
A class template specialization is implicitly instantiated... if the completeness of the class type affects the semantics of the program...
It is not clear what it means for the "completeness... [to affect] the semantics." Consider the following example:
template<class T> struct A;
extern A<int> a;

void *foo() {
  return &a;
}

template<class T> struct A {
#ifdef OPTION
  void *operator &() { return 0; }
#endif
};
The question here is whether it is necessary for template class A to declare an operator & for the semantics of the program to be affected. If it does not do so, the meaning of &a will be the same whether the class is complete or not and thus arguably the semantics of the program are not affected.
Presumably what was intended is whether the presence or absence of certain member declarations in the template class might be relevant in determining the meaning of the program. A clearer statement may be desirable.
If the overload resolution process can determine the correct function to call without instantiating a class template definition, it is unspecified whether that instantiation actually takes place.
The intent of this wording, as illustrated in the example in that paragraph, is to allow a "smart" implementation not to instantiate class templates if it can determine that such an instantiation will not affect the result of overload resolution, even though the algorithm described in clause 13 [over] requires that all the viable functions be enumerated, including functions that might be found as members of specializations.
Unfortunately, the looseness of the wording allowing this latitude for implementations makes it unclear what "the overload resolution process" is — is it the algorithm in 13 [over] or something else? — and what "the correct function" is.
If an implicit instantiation of a class template specialization is required and the template is declared but not defined, the program is ill-formed.
Here, it is not clear what conditions "require" an implicit instantiation. From the context, it would appear that the intent is to refer to the conditions in paragraph 4 that cause a specialization to be instantiated.
This interpretation, however, leads to different treatment of template and non-template incomplete classes. For example, by this interpretation,
class A;
template <class T> struct TA;
extern A a;
extern TA<int> ta;
void f(A*);
void f(TA<int>*);
int main() {
  f(&a);   // well-formed; undefined if A
           // has operator &() member
  f(&ta);  // ill-formed: cannot instantiate
}
A different approach would be to understand "required" in paragraph 6 to mean that a complete type is required in the expression. In this interpretation, if an incomplete type is acceptable in the context and the class template definition is not visible, the instantiation is not attempted and the program is well-formed.
The meaning of "required" in paragraph 6 must be clarified.
Notes on 10/01 meeting:
It was felt that item 1 is solved by addition of the word "might" in the resolution for issue 63; item 2 is not much of a problem; and item 3 could be solved by changing "required" to "required to be complete".
Non-static data member initializers get the same late parsing as member functions and default arguments, but are they also instantiated as needed like them? And when is their validity checked?
Notes from the October, 2012 meeting:
CWG agreed that non-static data member initializers should be handled like default arguments.
Additional note (March, 2013):
Determining whether a defaulted constructor is constexpr or not requires parsing the class's non-static data member initializers; see also issue 1360.
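A hypothetical example of the questions involved (the names A, B, and value are illustrative only):

template<typename T> struct A {
  int x = T::value;   // when is this initializer instantiated and checked?
  A() = default;      // whether this defaulted constructor is constexpr
                      // cannot be known without parsing the initializer
};

struct B { static const int value = 42; };

A<B> ab;                 // uses the initializer; valid here
int n = sizeof(A<int>);  // instantiates A<int>; if initializers are handled
                         // like default arguments, the initializer that
                         // would be ill-formed for T = int is presumably
                         // not instantiated here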
Consider a case like
struct X {
  template<typename T> void f(T);
  void f(int);
};
template void X::f(int);
or
template<typename T> void f(T) {}
void f(int);
template void f(int);
Presumably in both these cases the explicit instantiation should refer to the template and not to the non-template; however, 14.5.2 [temp.mem] paragraph 2 says,
A normal (non-template) member function with a given name and type and a member function template of the same name, which could be used to generate a specialization of the same type, can both be declared in a class. When both exist, a use of that name and type refers to the non-template member unless an explicit template argument list is supplied.
This would appear to give the wrong answer for the first example. It's not clearly stated, but consistency would suggest a similar wrong answer for the second. Presumably a statement is needed somewhere that an explicit instantiation directive applies to a template and not a non-template function if both are visible.
Additional note, January, 2014:
A related example has been raised:
template<typename T> class Matrix {
public:
  Matrix(){}
  Matrix(const Matrix&){}
  template<typename U> Matrix(const Matrix<U>&);
};
template Matrix<int>::Matrix(const Matrix&);
Matrix<int> m;
Matrix<int> mm(m);
If the explicit instantiation directive applies to the constructor template, there is no way to explicitly instantiate the copy constructor.
It is not clear whether the following is well-formed or not:
template<typename T> int arr[sizeof(T)] = {};
template int arr<int>[];
Are we supposed to instantiate the specialization and treat the explicit instantiation declaration as if it were a redeclaration (in which case the omitted array bound would presumably be OK), or is the type of the explicit instantiation declaration required to exactly match the type that the instantiated specialization has (in which case the omitted bound would presumably not be OK)? Or something else?
(See also issue 1728.)
It is not clear to what extent the type in an explicit instantiation must match that of a variable template. For example:
template<typename T> T var = T();
template float var<float>;  // #1.
template int* var<int>;     // #2.
template auto var<char>;    // #3.
(See also issue 1704.)
Paragraph 17 of 14.7.3 [temp.expl.spec] says,
A member or a member template may be nested within many enclosing class templates. In an explicit specialization for such a member, the member declaration shall be preceded by a template<> for each enclosing class template that is explicitly specialized.
This is curious, because paragraph 3 only allows explicit specialization of members of implicitly-instantiated class specializations, not explicit specializations. Furthermore, paragraph 4 says,
Definitions of members of an explicitly specialized class are defined in the same manner as members of normal classes, and not using the explicit specialization syntax.
Paragraph 18 provides a clue for resolving the apparent contradiction:
In an explicit specialization declaration for a member of a class template or a member template that appears in namespace scope, the member template and some of its enclosing class templates may remain unspecialized, except that the declaration shall not explicitly specialize a class member template if its enclosing class templates are not explicitly specialized as well. In such explicit specialization declaration, the keyword template followed by a template-parameter-list shall be provided instead of the template<> preceding the explicit specialization declaration of the member.
It appears from this and the following example that the phrase “explicitly specialized” in paragraphs 17 and 18, when referring to enclosing class templates, does not mean that explicit specializations have been declared for them but that their names in the qualified-id are followed by template argument lists. This terminology is confusing and should be changed.
Proposed resolution (October, 2005):
Change 14.7.3 [temp.expl.spec] paragraph 17 as indicated:
A member or a member template may be nested within many enclosing class templates. In an explicit specialization for such a member, the member declaration shall be preceded by a template<> for each enclosing class template that is explicitly specialized specialization. [Example:...
Change 14.7.3 [temp.expl.spec] paragraph 18 as indicated:
In an explicit specialization declaration for a member of a class template or a member template that appears in namespace scope, the member template and some of its enclosing class templates may remain unspecialized, except that the declaration shall not explicitly specialize a class member template if its enclosing class templates are not explicitly specialized as well that is, the template-id naming the template may be composed of template parameter names rather than template-arguments. In For each unspecialized template in such an explicit specialization declaration, the keyword template followed by a template-parameter-list shall be provided instead of the template<> preceding the explicit specialization declaration of the member. The types of the template-parameters in the template-parameter-list shall be the same as those specified in the primary template definition. In such declarations, an unspecialized template-id shall not precede the name of a template specialization in the qualified-id naming the member. [Example:...
Notes from the April, 2006 meeting:
The revised wording describing “unspecialized” templates needs more work to ensure that the parameter names in the template-id are in the correct order; the distinction between template arguments and parameters is also probably not clear enough. It might be better to replace this paragraph completely and avoid the “unspecialized” wording altogether.
Proposed resolution (February, 2010):
Change 14.7.3 [temp.expl.spec] paragraph 17 as follows:
A member or a member template may be nested within many enclosing class templates. In an explicit specialization for such a member, the member declaration shall be preceded by a template<> for each enclosing class template that is explicitly specialized specialization. [Example:...
Change 14.7.3 [temp.expl.spec] paragraph 18 as follows:
In an explicit specialization declaration for a member of a class template or a member template that appears in namespace scope, the member template and some of its enclosing class templates may remain unspecialized, except that the declaration shall not explicitly specialize a class member template if its enclosing class templates are not explicitly specialized as well. In such explicit specialization declaration, the keyword template followed by a template-parameter-list shall be provided instead of the template<> preceding the explicit specialization declaration of the member. The types of the template-parameters in the template-parameter-list shall be the same as those specified in the primary template definition. that is, the corresponding template prefix may specify a template-parameter-list instead of template<> and the template-id naming the template be written using those template-parameters as template-arguments. In such a declaration, the number, kinds, and types of the template-parameters shall be the same as those specified in the primary template definition, and the template-parameters shall be named in the template-id in the same order that they appear in the template-parameter-list. An unspecialized template-id shall not precede the name of a template specialization in the qualified-id naming the member. [Example:...
14.7.3 [temp.expl.spec] paragraph 2 requires that explicit specializations of member templates be declared in namespace scope, not in the class definition. This restriction does not apply to partial specializations of member templates; that is,
struct A {
  template<class T> struct B;
  template <class T> struct B<T*> { };  // well-formed
  template <> struct B<int*> { };       // ill-formed
};
There does not seem to be a good reason for this inconsistency.
Additional note (October, 2013):
EWG has requested CWG to consider resolving this issue. See EWG issue 41.
The resolution of issue 941 permits a non-deleted explicit specialization of a deleted function template. For example:
template<typename T> void f() = delete;
decltype(f<int>()) *p;
template<> void f<int>();
However, the existing normative wording is not adequate to handle this usage. For one thing, =delete is formally, at least, a function definition, and an implementation is not permitted to instantiate a function definition unless it is used; presumably, then, an implementation could not reject the decltype above as a reference to a deleted specialization. Furthermore, there should be a requirement that a non-deleted explicit specialization of a deleted function template must precede any reference to that specialization. (I.e., the example should be ill-formed as written but well-formed if the last two lines were interchanged.)
According to 14.8.1 [temp.arg.explicit] paragraph 6,
Implicit conversions (Clause 4 [conv]) will be performed on a function argument to convert it to the type of the corresponding function parameter if the parameter type contains no template-parameters that participate in template argument deduction. [Note: Template parameters do not participate in template argument deduction if they are explicitly specified...
But this isn't clear about when these conversions are done. Consider
template<class T> struct A { typename T::N n; };
template<class T> struct B { };
template<class T, class T2> void foo(const A<T>& r);  // #1
template<class T> void foo(const B<T>& r);            // #2
void baz() {
  B<char> b;
  foo(b);        // OK
  foo<char>(b);  // error
}
With the explicit template argument, the first parameter of #1 no longer participates in template argument deduction, so implicit conversions are done. If we check for the implicit conversion during the deduction process, we end up instantiating A<char>, resulting in a hard error. If we wait until later to check the conversion, we can reject #1 because T2 is not deduced and never need to consider the conversion.
But if we just accept the parameter and leave it up to normal overload resolution to reject an unsuitable candidate, that breaks this testcase:
template<class T> struct A { typename T::N n; };
template<class T> struct B { };
template <class T, class... U> typename A<T>::value_t bar(int, T, U...);
template <class T> T bar(T, T);
void baz() {
  B<char> b;
  bar(b, b);
}
Here, if deduction succeeds, we substitute in the deduced arguments of T = B<char>, U = { }, and end up instantiating A<B<char>>, which fails.
EDG and GCC currently reject the first testcase and accept the second; clang accepts both.
Notes from the October, 2012 meeting:
The position initially favored by CWG was that implicit conversions are not considered during deduction but are only applied afterwards, so the second example is ill-formed, and that the normative wording of the referenced paragraph should be moved into the note. This approach does not handle some examples currently accepted by some implementations, however; for example:
template <class T> struct Z {
  typedef T::x xx;
};
template <class T> Z<T>::xx f(void *, T);
template <class T> void f(int, T);
struct A {} a;
int main() {
  f(1, a);  // If the implementation rules out the first overload
            // because of the invalid conversion from int to void*,
            // the error instantiating Z<A> will be avoided
}
Additional discussion is required.
Notes from the April, 2013 meeting:
The approach needed to accept this code appears to be doing the convertibility check between deduction and substitution.
There are certain constructs that are not covered by the existing categories of “type dependent” and “value dependent.” For example, the expression sizeof(sizeof(T())) is neither type-dependent nor value-dependent, but its validity depends on whether T can be value-constructed. We should be able to overload on such characteristics and select via deduction failure, but we need a term like “instantiation-dependent” to describe these cases in the Standard. The phrase “expression involving a template parameter” seems to come pretty close to capturing this idea.
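A rough sketch of overloading on such a characteristic and selecting via deduction failure (the probe functions and the NoDefault class are illustrative assumptions, not taken from the issue):

struct NoDefault { NoDefault(int); };

// #1 is usable only when T() is a valid expression; otherwise substituting
// into its return type should be a deduction failure rather than a hard error.
template<typename T> char (&probe(int))[sizeof(sizeof(T()))];  // #1
template<typename T> char (&probe(...))[1];                    // #2

// assuming sizeof(std::size_t) != 1:
const bool int_ok       = sizeof(probe<int>(0)) != 1;        // true: #1 chosen
const bool nodefault_ok = sizeof(probe<NoDefault>(0)) != 1;  // false: #2 chosen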
Notes from the November, 2010 meeting:
The CWG favored extending the concepts of “type-dependent” and “value-dependent” to cover these additional cases, rather than adding a new concept.
Notes from the March, 2011 meeting:
The CWG reconsidered the direction from the November, 2010 meeting, as it would make more constructs dependent, thus requiring more template and typename keywords, resulting in worse error messages, etc.
Notes from the August, 2011 meeting:
The following example (from issue 1273) was deemed relevant for this issue:
template <class T> struct C;
class A { int i; friend struct C<int>; } a;
class B { int i; friend struct C<float>; } b;
template <class T> struct C {
  template <class U> decltype (a.i) f() { }  // #1
  template <class U> decltype (b.i) f() { }  // #2
};
int main() {
  C<int>().f<int>();      // calls #1
  C<float>().f<float>();  // calls #2
}
The discussion of issue 1001 seemed to have settled on the approach of doing the 8.3.5 [dcl.fct] transformations immediately to the function template declaration, so that the original form need not be remembered. However, the example in 14.8.2 [temp.deduct] paragraph 8 suggests otherwise:
template <class T> int f(T[5]);
int I = f<int>(0);
int j = f<void>(0); // invalid array
One way that might be addressed would be to separate the concepts of the type of the template that participates in overload resolution and function matching from the type of the template that is the source for template argument substitution. (See also the example in paragraph 3 of the same section.)
Notes, January, 2012:
According to 14.8.2 [temp.deduct] paragraph 8,
If a substitution results in an invalid type or expression, type deduction fails. An invalid type or expression is one that would be ill-formed, with a diagnostic required, if written using the substituted arguments.
Presumably the phrase “if written” refers to rewriting the template declaration in situ with the substituted arguments, rather than writing that type or expression at some arbitrary location, e.g.,
void g(double) = delete;
template<class T> auto f(T t) -> decltype(g(t));
void g(int);
void h() {
  typedef int T;
  T t = 42;
  g(t);   // Ok (I “wrote the substituted arguments”, and it seems fine)
  f(42);  // Presumably substitution is meant to fail.
}
Perhaps a clearer formulation could be used?
The handling of an example like
template<typename T, std::size_t S = sizeof(T)> struct X {};
template<typename T> X<T> foo(T*);
void foo(...);
void test() {
  struct S *s;
  foo(s);
}
varies among implementations, presumably because the meaning of “immediate context” in determining whether an error is a substitution failure or a hard error is not clearly defined.
According to 14.8.2.1 [temp.deduct.call] paragraph 1,
If removing references and cv-qualifiers from P gives std::initializer_list<P'> for some P' and the argument is an initializer list (8.5.4 [dcl.init.list]), then deduction is performed instead for each element of the initializer list, taking P' as a function template parameter type and the initializer element as its argument. Otherwise, an initializer list argument causes the parameter to be considered a non-deduced context (14.8.2.5 [temp.deduct.type]).
It is not entirely clear whether the deduction performed when an initializer list argument is matched against a std::initializer_list<T> parameter is a recursive subcase or part of the primary deduction. A relevant question is: if the deduction on that part fails, does the entire deduction fail, or is the parameter instead considered non-deduced?
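For example (a hypothetical case in which the two readings give different answers):

#include <initializer_list>

template<typename T> void f(std::initializer_list<T>, T);

void g() {
  f({1, 2, 3.0}, 42);  // element-wise deduction from the initializer list
                       // produces conflicting results for T; if that makes
                       // the entire deduction fail, the call is ill-formed,
                       // but if the first parameter were instead treated as
                       // a non-deduced context, T would be deduced as int
                       // from the second argument
}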
Notes from the October, 2012 meeting:
CWG determined that the entire deduction fails in this case.
The intent of the resolution of issue 1184 appears not to have been completely realized. In particular, the phrase, “contains no template-parameters that participate in template argument deduction” in both the note in 14.8.2.1 [temp.deduct.call] paragraph 4 and the normative wording in 14.8.1 [temp.arg.explicit] paragraph 6 is potentially misleading and probably should say something like, “contains no template-parameters outside non-deduced contexts.” Also, the normative wording should be moved to 14.8.2.1 [temp.deduct.call] paragraph 4, since it applies when there are no explicitly-specified template arguments. For example,
template<typename T> void f(T, typename identity<T>::type*);
Presumably the second parameter should allow pointer conversions, even though it does contain a template-parameter that participates in deduction (via the first function parameter).
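For instance (a sketch using a locally defined identity template; the derived-to-base pointer conversion is the kind of conversion presumably intended to be permitted):

struct B {};
struct D : B {};
template<typename T> struct identity { typedef T type; };
template<typename T> void f(T, typename identity<T>::type*);

void g(B b, D* pd) {
  f(b, pd);   // T is deduced as B from the first argument only; the second
              // parameter is a non-deduced context, so the D* -> B* pointer
              // conversion should presumably be allowed here
}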
The rules for deducing template arguments when taking the address of a function template in 14.8.2.2 [temp.deduct.funcaddr] do not appear to allow for a base-to-derived conversion in a case like:
struct Base {
  template<class U> void f(U);
};
struct Derived : Base { };
int main() {
  void (Derived::*pmf)(int) = &Derived::f;
}
Most implementations appear to allow this adjustment, however.
Given
template<class C> void foo(const C* val) {}
template<int N> void foo(const char (&t)[N]) {}
it is intuitive that the second template is more specialized than the first. However, the current rules make them unordered. In 14.8.2.4 [temp.deduct.partial] paragraph 4, we have P as const C* and A as const char (&)[N]. Paragraph 5 transforms A to const char[N]. Finally, paragraph 7 removes top-level cv-qualification; since a cv-qualified array element type is considered to be cv-qualification of the array (3.9.3 [basic.type.qualifier] paragraph 5, cf issue 1059), A becomes char[N]. P remains const C*, so deduction fails because of the missing const in A.
Notes from the April, 2013 meeting:
CWG agreed that the const should be preserved in the array type.
Given the following example,
template <class ...T> int f(T*...) { return 1; }
template <class T> int f(const T&) { return 2; }
void g() {
  f((int*)0);
}
the current specification makes the call ambiguous because deduction fails in both directions: with A being T and P being T* in one direction and A being T* and P being T in the other, because 14.8.2.4 [temp.deduct.partial] paragraph 8 says,
If A was transformed from a function parameter pack and P is not a parameter pack, type deduction fails.
It is not clear whether this is the best outcome, however; it might be better to consider the first template more specialized, with the variadic/non-variadic test being a tie-breaker if there is no other reason to prefer one over the other based on the parameter types.
Notes from the February, 2014 meeting:
CWG felt that the best approach would be, when comparing P and A, that if A is a pack and P is not, A should be repeated for each remaining instance of P, with the variadic/non-variadic criterion then used as a late tie-breaker if the result is still ambiguous. This would apply in the general case (including 14.8.2.4 [temp.deduct.partial]), not just in function calls.
The resolution of issue 692 (found in document N3281) made the following example ambiguous and thus ill-formed:
template<class T> void print(ostream &os, const T &t) {
  os << t;
}
template <class T, class... Args> void print(ostream &os, const T &t, const Args&... rest) {
  os << t << ", ";
  print(os, rest...);
}
int main() {
  print(cout, 42);
  print(cout, 42, 1.23);
}
This pattern seems fairly intuitive; is it reason to reconsider or modify the outcome of issue 692?
(See also issue 1432.)
Notes from the October, 2012 meeting:
CWG agreed that the example should be accepted, handling this case as a late tiebreaker, preferring an omitted parameter over a parameter pack.
Additional note (March, 2013):
For another example:
template<typename ...T> int f(T*...) { return 1; }
template<typename T> int f(const T&) { return 2; }
int main() {
  if (f((int*)0) != 1) {
    return 1;
  }
  return 0;
}
This worked as expected prior to the resolution of issue 692.
There is implementation divergence in the handling of an example like
template<typename D> struct A { };
template<typename T> struct Wrap1 { typedef T type; };
template<typename T> struct Wrap2 { typedef T type; };
template<typename T1> A<typename Wrap1<T1>::type> fn(const A<T1>& x, const A<T1>& y);
template<typename T2, typename U> A<typename Wrap2<T2>::type> fn(const A<T2>& x, const A<U>& y);
A<int> (*p)(const A<int>&, const A<int>&) = fn;
The implementations that accept this example do so by not comparing the return types of the two templates during partial ordering, which seems to make sense given that partial ordering would not have been performed if the candidate specializations were not indistinguishable from the perspective of overload resolution. However, the existing wording does not make clear that this is how such types are to be handled.
It appears that some of the recent changes to the description of constant expressions have allowed constructs into preprocessor expressions that do not belong there. Some changes are required to restrict the current capabilities of constant expressions to what is intended to be allowed in preprocessor expressions.
Proposed resolution (February, 2012):
Change 16.1 [cpp.cond] paragraph 2 as follows:
Each preprocessing token that remains (in the list of preprocessing tokens that will become the controlling expression) after all macro replacements have occurred shall be in the lexical form of a token (2.7 [lex.token]). Any such token that is a literal (2.14.1 [lex.literal.kinds]) shall be an integer-literal, a character-literal, or a boolean-literal.
Change 16.1 [cpp.cond] paragraph 4 as follows:
...using arithmetic that has at least the ranges specified in 18.3 [support.limits]. The only operators permitted in the controlling constant expression are ?:, ||, &&, |, ^, &, ==, !=, <, <=, >, >=, <<, >>, -, +, *, /, %, !, and ~. For the purposes of this token conversion...
Although it seems to be common implementation practice to reject a macro invocation that begins in a header file and whose closing right parenthesis appears in the file that included it, there does not seem to be a prohibition of this case in the specification of function-style macros. Should this be accepted?
Notes from the February, 2014 meeting:
CWG agreed that macro invocations spanning file boundaries should be prohibited. Resolution of this issue should be coordinated with WG14.
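The situation in question is along these lines (the file names and macro are illustrative):

// header.h:
#define ID( X ) X
int i = ID( 1

// source.cpp:
#include "header.h"
);   // the invocation of ID that began in header.h is closed here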
When a string literal containing an extended character is stringized (16.3.2 [cpp.stringize]), the result contains a universal-character-name instead of the original extended character. The reason is that the extended character is translated to a universal-character-name in translation phase 1 (2.2 [lex.phases]), so that the string literal "@" (where @ represents an extended character) becomes "\uXXXX". Because the preprocessing token is a string literal, when the stringizing occurs in translation phase 4, the \ is doubled, and the resulting string literal is "\"\\uXXXX\"". As a result, the universal-character-name is not recognized as such when the translation to the execution character set occurs in translation phase 5. (Note that phase 5 translation does occur if the stringized extended character does not appear in a string literal.) Existing practice appears to ignore these rules and preserve extended characters in stringized string literals, however.
See also issue 578.
Additional note (August, 2013):
Implementations are granted substantial latitude in their handling of extended characters and universal-character-names in 2.2 [lex.phases] paragraph 1 phase 1, i.e.,
(An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name (i.e., using the \uXXXX notation), are handled equivalently except where this replacement is reverted in a raw string literal.)
However, this freedom is mostly nullified by the requirements of stringizing in 16.3.2 [cpp.stringize] paragraph 2:
If, in the replacement list, a parameter is immediately preceded by a # preprocessing token, both are replaced by a single character string literal preprocessing token that contains the spelling of the preprocessing token sequence for the corresponding argument.
This means that, in order to handle a construct like
#define STRINGIZE_LITERAL( X ) # X
#define STRINGIZE( X ) STRINGIZE_LITERAL( X )
STRINGIZE( STRINGIZE( identifier_\u00fC\U000000Fc ) )
an implementation must recall the original spelling, including the form of the universal-character-name and the capitalization of any non-numeric hexadecimal digits, rather than simply translating the characters into a convenient internal representation.
To effect the freedom asserted in 2.2 [lex.phases], the description of stringizing should make the spelling of a universal-character-name implementation-defined.
Stringizing a raw string literal containing a newline produces an invalid (unterminated) string literal and hence results in undefined behavior. It should be specified that a newline in a string literal is transformed to the two characters '\' 'n' in the resulting string literal.
A slightly related case involves stringizing a bare backslash character: because backslashes are only escaped within a string or character literal, a stringized bare backslash becomes "\", which is invalid and hence results in undefined behavior.
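A sketch of the two problematic cases (the macro name is illustrative; neither result is a valid string literal under the current rules):

#define STR( X ) # X

const char* a = STR(R"(line1
line2)");      // stringizing a raw string literal containing a new-line
               // yields an unterminated string literal: undefined behavior

const char* b = STR(\);   // stringizing a bare backslash yields "\", which
                          // is likewise invalid: undefined behavior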
A number of differences between C++03 and C++11 were omitted from C.2 [diff.cpp03]:
- New keywords. Although these are said in C.2.1 [diff.cpp03.lex] only to invalidate C++03 code, they can also change the meaning, e.g., thread_local x(y), which would declare a variable thread_local initialized with y in C++03 and a thread-local variable y in C++11.
- New deduction rules.
- Removal of the deprecated string literal conversion.
- When a friend function defined in a class template is actually defined (i.e., with each instantiation or only when odr-used).
- Removal of access declarations.
- Use of the injected-class-name of a class template as a template template-argument.
Additional note (January, 2012):
In addition to the items previously mentioned, access declarations were removed from C++11 but are not mentioned in C.2 [diff.cpp03].
Proposed (partial) resolution (February, 2012):
Add the following as a new section in C.2 [diff.cpp03]:
C.2.5 Clause 11 [class.access]: member access control [diff.cpp03.class.access]

Change: Remove access declarations.
Rationale: Removal of feature deprecated since C++ 1998.
Effect on original feature: Valid C++ 2003 code that uses access declarations is ill-formed in this International Standard. Instead, using-declarations (7.3.3 [namespace.udecl]) can be used.
Is the following well-formed?
auto concept HasDestructor<typename T> {
  T::~T();
}
concept_map HasDestructor<int&> { }
According to _N2914_.14.10.2.1 [concept.map.fct] paragraph 4, the destructor requirement in the concept map results in an expression x.~X(), where X is the type int&. According to 5.2.4 [expr.pseudo], this expression is ill-formed because the object type and the type-name must be the same type, but the object type cannot be a reference type (references are dropped from types used in expressions, 5 [expr] paragraph 5).
It is not clear whether this should be addressed by changing 5.2.4 [expr.pseudo] or _N2914_.14.10.2.1 [concept.map.fct].
The definition of an argument does not seem to cover many assumed use cases, and we believe that is not intentional. There should be answers to questions such as: Are lambda-captures arguments? Are type names in a throw-spec arguments? Is the operand of a cast, typeid, alignof, alignas, decltype, or sizeof an “argument?” Why is arg in x[arg] not an argument, while the value forwarded to operator[]() is? Does the definition not apply to operators, since their call points are not bounded by parentheses? Similarly for copy initialization and conversion? What are deduced template “arguments?” What are “default arguments?” Can attributes have arguments? What about concepts, requires clauses, and concept_map instantiations? What about user-defined literals, where parentheses are not used?
According to 1.4 [intro.compliance] paragraph 7,
A freestanding implementation is one in which execution may take place without the benefit of an operating system, and has an implementation-defined set of libraries that includes certain language-support libraries (17.6.1.3 [compliance]).
This definition links two relatively separate topics: the lack of an operating system and the minimal set of libraries. Furthermore, 3.6.1 [basic.start.main] paragraph 1 says:
[Note: in a freestanding environment, start-up and termination is implementation-defined; start-up contains the execution of constructors for objects of namespace scope with static storage duration; termination contains the execution of destructors for objects with static storage duration. —end note]
It would be helpful if the two characteristics (lack of an operating system and restricted set of libraries) were named separately and if these statements were clarified to identify exactly what is implementation-defined.
Notes from the October, 2009 meeting:
The CWG felt that it needed a specific proposal in a paper before attempting to resolve this issue.
According to 1.9 [intro.execution] paragraph 14, “sequenced before” is a relation between “evaluations.” However, 3.6.3 [basic.start.term] paragraph 3 says,
If the completion of the initialization of a non-local object with static storage duration is sequenced before a call to std::atexit (see <cstdlib>, 18.5 [support.start.term]), the call to the function passed to std::atexit is sequenced before the call to the destructor for the object. If a call to std::atexit is sequenced before the completion of the initialization of a non-local object with static storage duration, the call to the destructor for the object is sequenced before the call to the function passed to std::atexit. If a call to std::atexit is sequenced before another call to std::atexit, the call to the function passed to the second std::atexit call is sequenced before the call to the function passed to the first std::atexit call.
Except for the calls to std::atexit, these events do not correspond to “evaluation” of expressions that appear in the program. If the “sequenced before” relation is to be applied to them, a more comprehensive definition is needed.
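For concreteness, a minimal example (all names illustrative) of the events ordered by the first sentence of the quoted passage:

#include <cstdio>
#include <cstdlib>

void cleanup() { std::puts("cleanup"); }

struct A { ~A() { std::puts("~A"); } };

A a;                            // completion of a's initialization is
int rc = std::atexit(cleanup);  // sequenced before this call, so at program
                                // termination cleanup() is called before
                                // ~A(); neither the completion of the
                                // initialization nor the destructor call is
                                // the evaluation of an expression appearing
                                // in the program

int main() {}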
According to 2.2 [lex.phases] paragraph 1, in translation phase 1,
Any source file character not in the basic source character set (2.3 [lex.charset]) is replaced by the universal-character-name that designates that character.
If a character that is not in the basic character set is preceded by a backslash character, for example
"\á"
the result is equivalent to
"\\u00e1"
that is, a backslash character followed by the spelling of the universal-character-name. This is different from the result in C99, which accepts characters from the extended source character set without replacing them with universal-character-names.
See also issue 1335.
The description of how to handle a file that does not end in a newline, in 2.2 [lex.phases] paragraph 1, phase 2, is:
Each instance of a backslash character (\) immediately followed by a new-line character is deleted, splicing physical source lines to form logical source lines. Only the last backslash on any physical source line shall be eligible for being part of such a splice. If, as a result, a character sequence that matches the syntax of a universal-character-name is produced, the behavior is undefined. A source file that is not empty and that does not end in a new-line character, or that ends in a new-line character immediately preceded by a backslash character before any such splicing takes place, shall be processed as if an additional new-line character were appended to the file.
This is not clear regarding what happens if the last character in the file is a backslash. In such a case, presumably the result of adding the newline should not be a line splice but rather a backslash preprocessing-token (that will be diagnosed as an invalid token in phase 7), but that should be spelled out.
According to 2.3 [lex.charset] paragraph 2,
If the hexadecimal value for a universal-character-name corresponds to a surrogate code point (in the range 0xD800-0xDFFF, inclusive), the program is ill-formed. Additionally, if the hexadecimal value for a universal-character-name outside the c-char-sequence, s-char-sequence, or r-char-sequence of a character or string literal corresponds to a control character (in either of the ranges 0x00-0x1F or 0x7F-0x9F, both inclusive) or to a character in the basic source character set, the program is ill-formed.
These restrictions should not apply to comment text. Arguably the prohibitions of control characters and characters in the basic character set already do not apply, as they require that the preprocessing tokens for literals have already been recognized; this occurs in phase 3, which also replaces comments with single spaces. However, the prohibition of surrogate code points is not so limited and might conceivably be applied within comments.
Probably the most straightforward way of addressing this problem would be simply to state in 2.8 [lex.comment] that character sequences that resemble universal-character-names are not recognized as such within comment text.
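For example, the intent is presumably that a comment like the following should be harmless:

// The sequence \uD800 here merely resembles a universal-character-name
// naming a surrogate code point; it presumably should not make the
// program ill-formed.
int main() {}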
(From item JP 03 of the Japanese National Body comments on the C++14 DIS ballot.)
A digit separator is allowed immediately following the prefix for an octal literal but not for a binary or hexadecimal literal. For example, 0'01 is permitted but 0b'01 and 0x'01 are not. This asymmetry makes tools such as automatic code generators more complicated than necessary. The digit separator should be consistently allowed or disallowed immediately following the prefix in all non-decimal integer literals.
2.14.5 [lex.string] paragraph 5 reads
Escape sequences and universal-character-names in string literals have the same meaning as in character literals, except that the single quote ' is representable either by itself or by the escape sequence \', and the double quote " shall be preceded by a \. In a narrow string literal, a universal-character-name may map to more than one char element due to multibyte encoding.
The first sentence refers us to 2.14.3 [lex.ccon], where we read in the first paragraph that "An ordinary character literal that contains a single c-char has type char [...]." Since the grammar shows that a universal-character-name is a c-char, something like '\u1234' must have type char (and thus be a single char element); in paragraph 5, we read that "A universal-character-name is translated to the encoding, in the execution character set, of the character named. If there is no such encoding, the universal-character-name is translated to an implementation-defined encoding."
This is in obvious contradiction with the second sentence. In addition, I'm not really clear what is supposed to happen in the case where the execution (narrow-)character set is UTF-8. Consider the character \u0153 (the oe in the French word oeuvre). Should '\u0153' be a char, with an "error" value, say '?' (in conformance with the requirement that it be a single char), or an int, with the two char values 0xC5, 0x93, in an implementation defined order (in conformance with the requirement that a character representable in the execution character set be represented). Supposing the former, should "\u0153" be the equivalent of "?" (in conformance with the first sentence), or "\xC5\x93" (in conformance with the second).
Notes from October 2003 meeting:
We decided we should forward this to the C committee and let them resolve it. Sent via e-mail to John Benito on November 14, 2003.
Reply from John Benito:
I talked this over with the C project editor, we believe this was handled by the C committee before publication of the current standard.
WG14 decided there needed to be a more restrictive rule for one-to-one mappings: rather than saying "a single c-char" as C++ does, the C standard says "a single character that maps to a single-byte execution character"; WG14 fully expect some (if not many or even most) UCNs to map to multiple characters.
Because of the fundamental differences between C and C++ character types, I am not sure the C committee is qualified to answer this satisfactorily for WG21. WG14 is willing to review any decision reached for compatibility.
I hope this helps.
(See also issue 912 for a related question.)
The decimal-literal in a user-defined-integer-literal might be too large for an unsigned long long to represent (in implementations with extended integer types). In such cases, the original intent appears to have been to call a raw literal operator or a literal operator template; however, the existing wording of 2.14.8 [lex.ext] paragraph 3 always calls the unsigned long long literal operator if it exists, regardless of the value of the decimal-literal.
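For example (a sketch; the suffix and operator declarations are illustrative):

unsigned long long operator"" _big(unsigned long long);  // #1
int operator"" _big(const char*);                        // #2 (raw)

auto v = 18446744073709551616_big;  // 2^64 does not fit in unsigned long
                                    // long; the apparent intent was that the
                                    // raw operator #2 be used, but the
                                    // current wording always selects #1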
Consider the following complete program:
void f();
template<typename T> void g() {
  f();
}
int main() { }
Must f() be defined to make this program well-formed? The current wording of 3.2 [basic.def.odr] does not make any special provision for expressions that appear only in uninstantiated template definitions.
(See also issue 1254.)

The current description of unqualified name lookup in 3.4.1 [basic.lookup.unqual] paragraph 8 does not correctly handle complex cases of nesting. The Standard currently reads,

A name used in the definition of a function that is a member function (9.3) of a class X shall be declared in one of the following ways:
- before its use in the block in which it is used or in an enclosing block (6.3), or
- shall be a member of class X or be a member of a base class of X (10.2), or
- if X is a nested class of class Y (9.7), shall be a member of Y, or shall be a member of a base class of Y (this lookup applies in turn to Y's enclosing classes, starting with the innermost enclosing class), or
- if X is a local class (9.8) or is a nested class of a local class, before the definition of class X in a block enclosing the definition of class X, or
- if X is a member of namespace N, or is a nested class of a class that is a member of N, or is a local class or nested class within a local class of a function that is a member of N, before the member function definition, in namespace N or in one of N's enclosing namespaces.

In particular, this formulation does not handle the following example:

struct outer {
  static int i;
  struct inner {
    void f() {
      struct local {
        void g() {
          i = 5;
        }
      };
    }
  };
};

Here the reference to i is from a member function of a local class of a member function of a nested class. Nothing in the rules allows outer::i to be found, although intuitively it should be found.
A more comprehensive formulation is needed that allows traversal of any combination of blocks, local classes, and nested classes. Similarly, the final bullet needs to be augmented so that a function need not be a (direct) member of a namespace to allow searching that namespace when the reference is from a member function of a class local to that function. That is, the current rules do not allow the following example:
int j;  // global namespace
struct S {
  void f() {
    struct local2 {
      void g() {
        j = 5;
      }
    };
  }
};
There seems to be some confusion in the Standard regarding the relationship between 3.4.1 [basic.lookup.unqual] (Unqualified name lookup) and 3.4.2 [basic.lookup.argdep] (Argument-dependent lookup). For example, 3.4.1 [basic.lookup.unqual] paragraph 3 says,
The lookup for an unqualified name used as the postfix-expression of a function call is described in 3.4.2 [basic.lookup.argdep].
In other words, nothing in 3.4.1 [basic.lookup.unqual] applies to function names; the entire lookup is described in 3.4.2 [basic.lookup.argdep].
3.4.2 [basic.lookup.argdep] does not appear to share this view of its responsibility. The closest it comes is in 3.4.2 [basic.lookup.argdep] paragraph 2a:
...the set of declarations found by the lookup of the function name is the union of the set of declarations found using ordinary unqualified lookup and the set of declarations found in the namespaces and classes associated with the argument types.
Presumably, "ordinary unqualified lookup" is a reference to the processing described in 3.4.1 [basic.lookup.unqual], but, as noted above, 3.4.1 [basic.lookup.unqual] explicitly precludes applying that processing to function names. The details of "ordinary unqualified lookup" of function names are not described anywhere.
The other clauses that reference 3.4.2 [basic.lookup.argdep], clauses 13 [over] and 14 [temp], are split over the question of the relationship between 3.4.1 [basic.lookup.unqual] and 3.4.2 [basic.lookup.argdep]. 13.3.1.1.1 [over.call.func] paragraph 3, for instance, says
The name is looked up in the context of the function call following the normal rules for name lookup in function calls (3.4.2 [basic.lookup.argdep]).
I.e., this reference assumes that 3.4.2 [basic.lookup.argdep] is self-contained. The same is true of 13.3.1.2 [over.match.oper] paragraph 3, second bullet:
The set of non-member candidates is the result of the unqualified lookup of operator@ in the context of the expression according to the usual rules for name lookup in unqualified function calls (3.4.2 [basic.lookup.argdep]), except that all member functions are ignored.
On the other hand, however, 14.6.4.2 [temp.dep.candidate] paragraph 1 explicitly assumes that 3.4.1 [basic.lookup.unqual] and 3.4.2 [basic.lookup.argdep] are both involved in function name lookup and do different things:
For a function call that depends on a template parameter, if the function name is an unqualified-id but not a template-id, the candidate functions are found using the usual lookup rules (3.4.1 [basic.lookup.unqual], 3.4.2 [basic.lookup.argdep]) except that:
- For the part of the lookup using unqualified name lookup (3.4.1 [basic.lookup.unqual]), only function declarations with external linkage from the template definition context are found.
- For the part of the lookup using associated namespaces (3.4.2 [basic.lookup.argdep]), only function declarations with external linkage found in either the template definition context or the template instantiation context are found.
Suggested resolution:
Change 3.4.1 [basic.lookup.unqual] paragraph 1 from
...name lookup ends as soon as a declaration is found for the name.
to
...name lookup ends with the first scope containing one or more declarations of the name.
Change the first sentence of 3.4.1 [basic.lookup.unqual] paragraph 3 from
The lookup for an unqualified name used as the postfix-expression of a function call is described in 3.4.2 [basic.lookup.argdep].
to
An unqualified name used as the postfix-expression of a function call is looked up as described below. In addition, argument-dependent lookup (3.4.2 [basic.lookup.argdep]) is performed on this name to complete the resulting set of declarations.
Although 3.3.9 [basic.scope.temp] now describes the scope of a template parameter, the description of unqualified name lookup in 3.4.1 [basic.lookup.unqual] does not cover uses of template parameter names. The note in 3.4.1 [basic.lookup.unqual] paragraph 16 says,
the rules for name lookup in template definitions are described in 14.6 [temp.res].
but the rules there cover dependent and non-dependent names, not template parameters themselves.
Consider the following example:
template <typename T> struct B { };
namespace N {
  namespace L { template <int> void A(); }
  namespace M { template <int> struct A { typedef int y; }; }
  using namespace L;
  using namespace M;
}
B<N::/*template */A<0>::y> (x);
Which A is referenced in the last line? According to 3.4.3 [basic.lookup.qual] paragraph 1,
If a :: scope resolution operator in a nested-name-specifier is not preceded by a decltype-specifier, lookup of the name preceding that :: considers only namespaces, types, and templates whose specializations are types.
It is not clear whether this applies to the example or not, and the interpretation of the < token depends on the result of the lookup.
Notes from the September, 2013 meeting:
The restricted lookup mentioned in 3.4.3 [basic.lookup.qual] paragraph 1 is based on a one-token lookahead; because the next token following A in the example is not ::, the restricted lookup does not apply, and the result is ambiguous. Uncommenting the template keyword in the example does not affect the lookup.
Both 3.4.3.1 [class.qual] and 3.4.3.2 [namespace.qual] specify that some lookups are to be performed “in the context of the entire postfix-expression,” ignoring the fact that qualified-ids can appear outside of expressions.
It was suggested in document J16/05-0156 = WG21 N1896 that these uses be changed to “the context in which the qualified-id occurs,” but it isn't clear that this formulation adequately covers all the places a qualified-id can occur.
It is unclear to what extent entities without names match across translation units. For example,
struct S {
  int :2;
  enum { a, b, c } x;
  static class {} *p;
};
If this declaration appears in multiple translation units, are all these members "the same" in each declaration?
A similar question can be asked about non-member declarations:
// Translation unit 1:
extern enum { d, e, f } y;

// Translation unit 2:
extern enum { d, e, f } y;

// Translation unit 3:
enum { d, e, f } y;
Is this valid C++? Is it valid C?
James Kanze: S::p cannot be defined, because to do so requires a type specifier and the type cannot be named. ::y is valid C because C only requires compatible, not identical, types. In C++, it appears that there is a new type in each declaration, so it would not be valid. This differs from S::x because the unnamed type is part of a named type — but I don't know where or if the Standard says that.
John Max Skaller: It's not valid C++, because the type is a synthesised, unique name for the enumeration type which differs across translation units, as if:
extern enum _synth1 { d, e, f } y;
...
extern enum _synth2 { d, e, f } y;
had been written.
However, within a class, the ODR implies the types are the same:
class X { enum { d } y; };
in two translation units ensures that the type of member y is the same: the two X's obey the ODR and so denote the same class, and it follows that there's only one member y and one type that it has.
(See also issues 132 and 216.)
The standard says that an unnamed class or enum definition can be given a "name for linkage purposes" through a typedef. E.g.,
typedef enum {} E; extern E *p;
can appear in multiple translation units.
How about the following combination?
// Translation unit 1:
struct S;
extern S *q;

// Translation unit 2:
typedef struct {} S;
extern S *q;
Is this valid C++?
Also, if the answer is "yes", consider the following slight variant:
// Translation unit 1:
struct S {};   // <<-- class has definition
extern S *q;

// Translation unit 2:
typedef struct {} S;
extern S *q;
Is this a violation of the ODR because two definitions of type S consist of differing token sequences?
The following declarations are allowed within a translation unit:
struct S; enum { S };
However, 3.5 [basic.link] paragraph 9 seems to say these two declarations cannot appear in two different translation units. That also would mean that the inclusion of a header containing the above in two different translation units is not valid C++.
I suspect this is an oversight and that users should be allowed to have the declarations above appear in different translation units. (It is a fairly common thing to do, I think.)
Mike Miller: I think you meant "enum E { S };" -- enumerators only have external linkage if the enumeration does (3.5 [basic.link] paragraph 4), and 3.5 [basic.link] paragraph 9 only applies to entities with external linkage.
I don't remember why enumerators were given linkage; I don't think it's necessary for mangling non-type template arguments. In any event, I can't think why cross-TU name collisions between enumerators and other entities would cause a problem, so I guess a change here would be okay. I can think of three changes that would have that effect:
Daveed Vandevoorde: I don't think any of these are sufficient in the sense that the problem isn't limited to enumerators. E.g.:
struct X;
extern void X();

shouldn't create cross-TU collisions either.
Mike Miller: So you're saying that cross-TU collisions should only be prohibited if both names denote entities of the same kind (both functions, both objects, both types, etc.), or if they are both references (regardless of what they refer to, presumably)?
Daveed Vandevoorde: Not exactly. Instead, I'm saying that if two entities (with external linkage) can coexist when they're both declared in the same translation unit (TU), then they should also be allowed to coexist when they're declared in two different translation units.
For example:
int i;
void i();   // Error

This is an error within a TU, so I don't see a reason to make it valid across TUs.
However, "tag names" (class/struct/union/enum) can sometimes coexist with identically named entities (variables, functions & enumerators, but not namespaces, templates or type names).
Is a compiler allowed to interleave constructor calls when performing dynamic initialization of nonlocal objects? What I mean by interleaving is: beginning to execute a particular constructor, then going off and doing something else, then going back to the original constructor. I can't find anything explicit about this in clause 3.6.2 [basic.start.init].
I'll present a few different examples, some of which get a bit wild. But a lot of what this comes down to is exactly what the standard means when it talks about the order of initialization. If it says that some object x must be initialized before a particular event takes place, does that mean that x's constructor must be entered before that event, or does it mean that it must be exited before that event? If object x must be initialized before object y, does that mean that x's constructor must exit before y's constructor is entered?
(The answer to that question might just be common sense, but I couldn't find an answer in clause 3.6.2 [basic.start.init]. Actually, when I read 3.6.2 [basic.start.init] carefully, I find there are a lot of things I took for granted that aren't there.)
OK, so a few specific scenarios.
<runtime gunk>
<Enter A's constructor>
<Enter f>
<runtime gunk>
<Enter B's constructor>
<Enter f>
<Leave f>
<Leave B's constructor>
<Leave f>
<Leave A's constructor>

The implication of a 'yes' answer for users is that any function called by a constructor, directly or indirectly, must be reentrant.
At this point, you might be thinking we could avoid all of this nonsense by removing compilers' freedom to defer initialization until after the beginning of main(). I'd resist that, for two reasons. First, it would be a huge change to make after the standard has been out. Second, that freedom is necessary if we want to have support for dynamic libraries. I realize we don't yet say anything about dynamic libraries, but I'd hate to make decisions that would make such support even harder.
According to 3.6.2 [basic.start.init] paragraph 5,
It is implementation-defined whether the dynamic initialization of a non-local variable with static or thread storage duration is done before the first statement of the initial function of the thread. If the initialization is deferred to some point in time after the first statement of the initial function of the thread, it shall occur before the first odr-use (3.2 [basic.def.odr]) of any variable with thread storage duration defined in the same translation unit as the variable to be initialized.
This doesn't consider that initialization of instantiations of static data members of class templates (which can be thread_local) are unordered. Presumably odr-use of such a static data member should not trigger the initialization of any thread_local variable other than that one?
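A sketch of the concern (the functions f, g, and h are hypothetical):

int f();
int g();

template<typename T> struct X {
  static thread_local int tls;
};
template<typename T> thread_local int X<T>::tls = f();  // unordered dynamic
                                                         // initialization

thread_local int t = g();   // ordered within this translation unit

void h() {
  X<int>::tls = 0;   // odr-use of an unordered thread_local variable:
                     // presumably this should trigger the initialization of
                     // X<int>::tls itself but not necessarily that of t
}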
3.6.3 [basic.start.term] paragraph 2 says,
If a function contains a local object of static storage duration that has been destroyed and the function is called during the destruction of an object with static storage duration, the program has undefined behavior if the flow of control passes through the definition of the previously destroyed local object.
I would like to turn this behavior from undefined to well-defined behavior for the purpose of achieving a graceful shutdown, especially in a multi-threaded world.
Background: Alexandrescu describes the “phoenix singleton” in Modern C++ Design. This is a class used as a function local static, that will reconstruct itself, and reapply itself to the atexit chain, if the program attempts to use it after it is destructed in the atexit chain. It achieves this by setting a “destructed flag” in its own state in its destructor. If the object is later accessed (and a member function is called on it), the member function notes the state of the “destructed flag” and does the reconstruction dance. The phoenix singleton pattern was designed to address issues only in single-threaded code where accesses among static objects can have a non-scoped pattern. When we throw in multi-threading, and the possibility that threads can be running after main returns, the chances of accessing a destroyed static significantly increase.
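A rough sketch of the pattern being described (the class and member names are illustrative; details such as re-registration ordering are glossed over). Note that re-entering instance() after obj has been destroyed is precisely the case that 3.6.3 [basic.start.term] paragraph 2 currently makes undefined, and the reconstruction relies on the destroyed object's storage remaining usable, which is exactly the guarantee being requested:

#include <cstdlib>
#include <new>

class Singleton {
public:
  static Singleton& instance() {
    static Singleton obj;            // destroyed via the atexit chain
    if (destroyed) {                 // accessed again after destruction:
      new (&obj) Singleton;          // reconstruct in the same storage and
      std::atexit(destroy);          // reapply itself to the atexit chain
      destroyed = false;
    }
    return obj;
  }
private:
  Singleton() {}
  ~Singleton() { destroyed = true; } // the "destructed flag"
  static void destroy() { instance().~Singleton(); }
  static bool destroyed;
};
bool Singleton::destroyed = false;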
The very least that I would like to see happen is to standardize what I believe is existing practice: When an object is destroyed in the atexit chain, the memory the object occupied is left in whatever state the destructor put it in. If this can only be reliably done for objects with standard layout, that would be an acceptable compromise. This would allow objects to set “I'm destructed” flags in their state and then do something well-defined if accessed, such as throw an exception.
A possible refinement of this idea is to have the compiler set up a 3-state flag around function-local statics instead of the current 2-state flag:
- not yet constructed
- constructed
- destroyed
We have the first two states today. We might choose to add the third state, and if execution passes over a function-local static with “destroyed” state, an exception could be thrown. This would mean that we would not have to guarantee memory stability in destroyed objects of static duration.
This refinement would break phoenix singletons, and is not required for the ~mutex()/~condition() I've described and prototyped. But it might make it easier for Joe Coder to apply this kind of guarantee to his own types.
There are several problems with 3.7 [basic.stc]:
3.7 [basic.stc] paragraph 2 says that "Static and automatic storage durations are associated with objects introduced by declarations (3.1 [basic.def]) and implicitly created by the implementation (12.2 [class.temporary])."
In fact, objects "implicitly created by the implementation" are the temporaries described in (12.2 [class.temporary]), and have neither static nor automatic storage duration, but a totally different duration, described in 12.2 [class.temporary].
3.7 [basic.stc] uses the expression "local object" in several places, without ever defining it. Presumably, what is meant is "an object declared at block scope", but this should be said explicitly.
In a recent discussion in comp.lang.c++.moderated, one poster interpreted "local objects" as including temporaries. This would require them to live until the end of the block in which they are created, which contradicts 12.2 [class.temporary]. If temporaries are covered by this section, as the statement in 3.7 [basic.stc] seems to suggest, and they aren't local objects, then they must have static storage duration, which isn't right either.
I propose adding a fourth storage duration to the list after 3.7 [basic.stc] paragraph 1:
And rewriting the second paragraph of this section as follows:
Temporary storage duration is associated with objects implicitly created by the implementation, and is described in 12.2 [class.temporary]. Static and automatic storage durations are associated with objects defined by declarations; in the following, an object defined by a declaration with block scope is a local object. The dynamic storage duration is associated with objects created by the operator new.
Steve Adamczyk: There may well be an issue here, but one should bear in mind the difference between storage duration and object lifetime. As far as I can see, there is no particular problem with temporaries having automatic or static storage duration, as appropriate. The point of 12.2 [class.temporary] is that they have an unusual object lifetime.
Notes from October 2002 meeting:
It might be desirable to shorten the storage duration of temporaries to allow reuse of them. The as-if rule allows some reuse, but such reuse requires analysis, including noting whether the addresses of such temporaries have been taken.
Notes from the August, 2011 meeting:
The CWG decided that further consideration of this issue would be deferred until someone produces a paper explaining the need for action and proposing specific changes.
The global allocation functions are implicitly declared in every translation unit with exception-specifications (3.7.4 [basic.stc.dynamic] paragraph 2). It is not clear what should happen if a replacement allocation function is declared without an exception-specification. Is that a conflict with the implicitly-declared function (as it would be with explicitly-declared functions, and presumably is if the <new> header is included)? Or does the new declaration replace the implicit one, including the lack of an exception-specification? Or does the implicit declaration prevail? (Regardless of the exception-specification or lack thereof, it is presumably undefined behavior for an allocation function to exit with an exception that cannot be caught by a handler of type std::bad_alloc (3.7.4.1 [basic.stc.dynamic.allocation] paragraph 3).)
Requirements for allocation functions are given in 3.7.4.1 [basic.stc.dynamic.allocation] paragraph 1:
An allocation function can be a function template. Such a template shall declare its return type and first parameter as specified above (that is, template parameter types shall not be used in the return type and first parameter type). Template allocation functions shall have two or more parameters.
There are a couple of problems with this description. First, it is instances of function templates that can be allocation functions, not the templates themselves (cf 3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 2, which uses the correct terminology regarding deallocation functions).
More importantly, this specification was written before template metaprogramming was understood and hence prevents use of SFINAE on the return type or parameter type to select among function template specializations. (The parallel passage for deallocation functions in 3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 2 shares this deficit.)
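For instance, a class-scope allocation function template along the following lines (a constructed example, not taken from the issue) is ruled out by the quoted wording merely because its return type is expressed through enable_if:

#include <cstddef>
#include <type_traits>

struct Arena { void* allocate(std::size_t); };

struct X {
  // Ill-formed under the quoted rule: the return type mentions the template
  // parameter A (through enable_if), even though every valid specialization
  // still returns void*.
  template<typename A>
  static typename std::enable_if<std::is_same<A, Arena>::value, void*>::type
  operator new(std::size_t size, A& arena) {
    return arena.allocate(size);
  }
};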
(See also issue 1628.)
When an object is deleted, 3.7.4.2 [basic.stc.dynamic.deallocation] says that the deallocation “[renders] invalid all pointers referring to any part of the deallocated storage.” According to 3.9.2 [basic.compound] paragraph 3, a pointer whose address is one past the end of an array is considered to point to an unrelated object that happens to reside at that address. Does this need to be clarified to specify that the one-past-the-end pointer of an array is not invalidated by deleting the following object? (See also 5.3.5 [expr.delete] paragraph 4, which also mentions that the system deallocation function renders a pointer invalid.)
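A constructed example of the scenario in question (the adjacency is assumed, not guaranteed by the language):

#include <cstddef>

void f(int* arr, std::size_t n, int* obj) {
  // Assume arr came from new int[n], obj came from new int, and obj happens
  // to compare equal to arr + n.
  int* past = arr + n;   // valid one-past-the-end pointer (3.9.2 [basic.compound])
  delete obj;            // renders invalid "all pointers referring to any part of
                         // the deallocated storage" -- does that include past,
                         // which merely happens to hold the same address?
}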
Consider
extern "C" int printf (const char *,...); struct Base { Base();}; struct Derived: virtual public Base { Derived() {;} }; Derived d; extern Derived& obj = d; int i; Base::Base() { if ((Base *) &obj) i = 4; printf ("i=%d\n", i); } int main() { return 0; }
12.7 [class.cdtor] paragraph 2 makes this valid, but 3.8 [basic.life] paragraph 5 implies that it isn't valid.
Steve Adamczyk: A second issue:
extern "C" int printf(const char *,...); struct A { virtual ~A(); int x; }; struct B : public virtual A { }; struct C : public B { C(int); }; struct D : public C { D(); }; int main() { D t; printf("passed\n");return 0; } A::~A() {} C::C(int) {} D::D() : C(this->x) {}
Core issue 52 almost, but not quite, says that in evaluating "this->x" you do a cast to the virtual base class A, which would be an error according to 12.7 [class.cdtor] paragraph 2 because the base class B constructor hasn't started yet. 5.2.5 [expr.ref] should be clarified to say that the cast does need to get done.
James Kanze submitted the same issue via comp.std.c++ on 11 July 2003:
Richard Smith: Nonsense. You can use "this" perfectly happily in a constructor, just be careful that (a) you're not using any members that are not fully initialised, and (b) if you're calling virtual functions you know exactly what you're doing.
In practice, and I think in intent, you are right. However, the standard makes some pretty stringent restrictions in 3.8 [basic.life]. To start with, it says (in paragraph 1):
The lifetime of an object is a runtime property of the object. The lifetime of an object of type T begins when:
- storage with the proper alignment and size for type T is obtained, and
- if T is a class type with a non-trivial constructor, the constructor call has COMPLETED.
(Emphasis added.) Then when we get down to paragraph 5, it says:
The lifetime of an object of type T ends when:
- if T is a class type with a non-trivial destructor, the destructor call STARTS, or
- the storage which the object occupies is reused or released.
Before the lifetime of an object has started but after the storage which the object will occupy has been allocated [which sounds to me like it would include in the constructor, given the text above] or, after the lifetime of an object has ended and before the storage which the object occupied is reused or released, any pointer that refers to the storage location where the object will be or was located may be used but only in limited ways. [...] If the object will be or was of a non-POD class type, the program has undefined behavior if:
[...]
- the pointer is implicitly converted to a pointer to a base class type, or [...]
I can't find any exceptions for the this pointer.
Note that calling a non-static function in the base class, or even constructing the base class in the initializer list, involves an implicit conversion of this to a pointer to the base class. Thus undefined behavior. I'm sure that this wasn't the intent, but it would seem to be what this paragraph is saying.
The Standard is self-contradictory regarding which destructor calls end the lifetime of an object. 3.8 [basic.life] paragraph 1 says,
The lifetime of an object of type T ends when:
if T is a class type with a non-trivial destructor (12.4 [class.dtor]), the destructor call starts, or
the storage which the object occupies is reused or released.
i.e., the lifetime of an object of a class type with a trivial destructor persists until its storage is reused or released. However, 12.4 [class.dtor] paragraph 15 says,
Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8 [basic.life]).
implying that invoking any destructor, even a trivial one, ends the lifetime of the associated object. Similarly, 12.7 [class.cdtor] paragraph 1 says,
For an object with a non-trivial destructor, referring to any non-static member or base class of the object after the destructor finishes execution results in undefined behavior.
A similar question arises for pseudo-destructors for non-class types.
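A small constructed example of the contradiction:

struct T { int x; };   // trivial destructor

void f() {
  T t = { 1 };
  t.~T();              // per 3.8 [basic.life], t's lifetime has not ended (trivial
                       // destructor, storage neither reused nor released), but
                       // 12.4 [class.dtor] paragraph 15 says t "no longer exists"
  int y = t.x;         // well-defined or undefined?
  (void)y;
}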
Notes from the August, 2011 meeting:
CWG will need a paper exploring this topic before it can act on the issue.
Sent in by David Abrahams:
Yes, and to add to this tangent, 3.9.1 [basic.fundamental] paragraph 1 states "Plain char, signed char, and unsigned char are three distinct types." Strangely, 3.9 [basic.types] paragraph 2 talks about how "... the underlying bytes making up the object can be copied into an array of char or unsigned char. If the content of the array of char or unsigned char is copied back into the object, the object shall subsequently hold its original value." I guess there's no requirement that this copying work properly with signed chars!
Notes from October 2002 meeting:
We should do whatever C99 does. 6.5p6 of the C99 standard says "array of character type", and "character type" includes signed char (6.2.5p15), and 6.5p7 says "character type". But see also 6.2.6.1p4, which mentions (only) an array of unsigned char.
Proposed resolution (April 2003):
Change 3.8 [basic.life] paragraph 5 bullet 3 from
to
Change 3.8 [basic.life] paragraph 6 bullet 3 from
to
Change the beginning of 3.9 [basic.types] paragraph 2 from
For any object (other than a base-class subobject) of POD type T, whether or not the object holds a valid value of type T, the underlying bytes (1.7 [intro.memory]) making up the object can be copied into an array of char or unsigned char.
to
For any object (other than a base-class subobject) of POD type T, whether or not the object holds a valid value of type T, the underlying bytes (1.7 [intro.memory]) making up the object can be copied into an array of byte-character type.
Add the indicated text to 3.9.1 [basic.fundamental] paragraph 1:
Objects declared as characters (char) shall be large enough to store any member of the implementation's basic character set. If a character from this set is stored in a character object, the integral value of that character object is equal to the value of the single character literal form of that character. It is implementation-defined whether a char object can hold negative values. Characters can be explicitly declared unsigned or signed. Plain char, signed char, and unsigned char are three distinct types, called the byte-character types. A char, a signed char, and an unsigned char occupy the same amount of storage and have the same alignment requirements (3.9 [basic.types]); that is, they have the same object representation. For byte-character types, all bits of the object representation participate in the value representation. For unsigned byte-character types, all possible bit patterns of the value representation represent numbers. These requirements do not hold for other types. In any particular implementation, a plain char object can take on either the same values as a signed char or an unsigned char; which one is implementation-defined.
Change 3.10 [basic.lval] paragraph 15 last bullet from
to
Notes from October 2003 meeting:
It appears that in C99 signed char may have padding bits but no trap representation, whereas in C++ signed char has no padding bits but may have -0. A memcpy in C++ would have to copy the array preserving the actual representation and not just the value.
March 2004: The liaisons to the C committee have been asked to tell us whether this change would introduce any unnecessary incompatibilities with C.
Notes from October 2004 meeting:
The C99 Standard appears to be inconsistent in its requirements. For example, 6.2.6.1 paragraph 4 says:
The value may be copied into an object of type unsigned char [n] (e.g., by memcpy); the resulting set of bytes is called the object representation of the value.
On the other hand, 6.2 paragraph 6 says,
If a value is copied into an object having no declared type using memcpy or memmove, or is copied as an array of character type, then the effective type of the modified object for that access and for subsequent accesses that do not modify the value is the effective type of the object from which the value is copied, if it has one.
Mike Miller will investigate further.
Proposed resolution (February, 2010):
Change 3.8 [basic.life] paragraph 5 bullet 4 as follows:
...The program has undefined behavior if:
...
the pointer is used as the operand of a static_cast (5.2.9 [expr.static.cast]) (except when the conversion is to cv void*, or to cv void* and subsequently to char*, or unsigned char* a pointer to a cv-qualified or cv-unqualified byte-character type (3.9.1 [basic.fundamental])), or
...
Change 3.8 [basic.life] paragraph 6 bullet 4 as follows:
...The program has undefined behavior if:
...
the lvalue is used as the operand of a static_cast (5.2.9 [expr.static.cast]) except when the conversion is ultimately to cv char& or cv unsigned char& a reference to a cv-qualified or cv-unqualified byte-character type (3.9.1 [basic.fundamental]) or an array thereof, or
...
Change 3.9 [basic.types] paragraph 2 as follows:
For any object (other than a base-class subobject) of trivially copyable type T, whether or not the object holds a valid value of type T, the underlying bytes (1.7 [intro.memory]) making up the object can be copied into an array of char or unsigned char a byte-character type (3.9.1 [basic.fundamental]).39 If the content of the that array of char or unsigned char is copied back into the object, the object shall subsequently hold its original value. [Example:...
Change 3.9.1 [basic.fundamental] paragraph 1 as follows:
...Characters can be explicitly declared unsigned or signed. Plain char, signed char, and unsigned char are three distinct types, called the byte-character types. A char, a signed char, and an unsigned char occupy the same amount of storage and have the same alignment requirements (3.11 [basic.align]); that is, they have the same object representation. For byte-character types, all bits of the object representation participate in the value representation. For unsigned character types unsigned char, all possible bit patterns of the value representation represent numbers...
Change 3.10 [basic.lval] paragraph 15 final bullet as follows:
If a program attempts to access the stored value of an object through an lvalue of other than one of the following types the behavior is undefined 52
...
a char or unsigned char byte-character type (3.9.1 [basic.fundamental]).
Change 3.11 [basic.align] paragraph 6 as follows:
The alignment requirement of a complete type can be queried using an alignof expression (5.3.6 [expr.alignof]). Furthermore, the byte-character types (3.9.1 [basic.fundamental]) char, signed char, and unsigned char shall have the weakest alignment requirement. [Note: this enables the byte-character types to be used as the underlying type for an aligned memory area (7.6.2 [dcl.align]). —end note]
Change 5.3.4 [expr.new] paragraph 10 as follows:
...For arrays of char and unsigned char a byte-character type (3.9.1 [basic.fundamental]), the difference between the result of the new-expression and the address returned by the allocation function shall be an integral multiple of the strictest fundamental alignment requirement (3.11 [basic.align]) of any object type whose size is no greater than the size of the array being created. [Note: Because allocation functions are assumed to return pointers to storage that is appropriately aligned for objects of any type with fundamental alignment, this constraint on array allocation overhead permits the common idiom of allocating byte-character arrays into which objects of other types will later be placed. —end note]
Notes from the March, 2010 meeting:
The CWG was not convinced that there was a need to change the existing specification at this time. Some were concerned that there might be implementation difficulties with giving signed char the requisite semantics; implementations for which that is true can currently make char equivalent to unsigned char and avoid those problems, but the suggested change would undermine that strategy.
3.9.1 [basic.fundamental] does not impose a requirement on the floating point types that there be an exact representation of the value zero. This omission is significant in 4.12 [conv.bool] paragraph 1, in which any non-zero value converts to the bool value true.
Suggested resolution: require that all floating point types have an exact representation of the value zero.
3.9.1 [basic.fundamental] paragraph 2 says that
There are four signed integer types: "signed char", "short int", "int", and "long int."
This would indicate that const int is not a signed integer type.
The relationship between the values representable by corresponding signed and unsigned integer types is not completely described, but 3.9 [basic.types] paragraph 4 says,
The value representation of an object is the set of bits that hold the value of type T.
and 3.9.1 [basic.fundamental] paragraph 3 says,
The range of nonnegative values of a signed integer type is a subrange of the corresponding unsigned integer type, and the value representation of each corresponding signed/unsigned type shall be the same.
I.e., the maximum value of each unsigned type must be larger than the maximum value of the corresponding signed type.
C90 doesn't have this restriction, and C99 explicitly says (6.2.6.2, paragraph 2),
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M <= N).
Unlike C++, the sign bit is not part of the value, and on an architecture that does not have native support of unsigned types, an implementation can emulate unsigned integers by simply ignoring what would be the sign bit in the signed type and be conforming.
The question is whether we intend to make a conforming implementation on such an architecture impossible. More generally, what range of architectures do we intend to support? And to what degree do we want to follow C99 in its evolution since C89?
(See paper J16/08-0141 = WG21 N2631.)
The taxonomy of value categories in 3.10 [basic.lval] classifies temporaries as prvalues. However, some temporaries are explicitly referred to as lvalues (cf 15.1 [except.throw] paragraph 3).
It is not clear what constraints are placed on a floating point implementation by the wording of the Standard. For instance, is an implementation permitted to generate a "fused multiply-add" instruction if the result would be different from what would be obtained by performing the operations separately? To what extent does the "as-if" rule allow the kinds of optimizations (e.g., loop unrolling) performed by FORTRAN compilers?
Although the note in 3.10 [basic.lval] paragraph 1 states that
The discussion of each built-in operator in Clause 5 [expr] indicates the category of the value it yields and the value categories of the operands it expects
in fact, many of the operators that take prvalue operands do not make that requirement explicit. Possible approaches to address this failure could be a blanket statement that an operand whose value category is not stated is assumed to be a prvalue; adding prvalue requirements to each operand description for which it is missing; or changing the description of the usual arithmetic conversions to state that they imply the lvalue-to-rvalue conversion, which would cover the majority of the omissions.
(See also issue 1685, which deals with an inaccurately-specified value category.)
According to 5.1.2 [expr.prim.lambda] paragraph 9,
A lambda-expression whose smallest enclosing scope is a block scope (3.3.3 [basic.scope.block]) is a local lambda expression; any other lambda-expression shall not have a capture-list in its lambda-introducer. The reaching scope of a local lambda expression is the set of enclosing scopes up to and including the innermost enclosing function and its parameters.
Consequently, lambdas appearing in mem-initializers and brace-or-equal-initializers cannot have a capture-list. However, these expressions are evaluated in the context of the constructor and are permitted to access this and non-static data members.
Should the definition of a local lambda be modified to permit capturing lambdas within these contexts?
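For example, under the current wording both lambdas below are rejected because their smallest enclosing scope is not a block scope, even though each could access this and non-static data members (constructed example):

struct S {
  int n;
  int m = [this] { return n + 1; }();     // capture-list in a brace-or-equal-initializer
  S() : n([x = 42] { return x; }()) { }   // capture-list in a mem-initializer
};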
Notes from the April, 2013 meeting:
CWG agreed with the intent of this issue.
According to 5.1.2 [expr.prim.lambda] paragraph 11,
No entity is captured by an init-capture.
It should be made clearer that a variable, odr-used by an init-capture in a nested lambda, is still captured by the containing lambda as a result of the init-capture.
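For example (constructed illustration):

void f() {
  int x = 1;
  auto outer = [=] {
    auto inner = [y = x + 1] {   // the init-capture odr-uses x, so x must be
      return y;                  // captured by the containing (outer) lambda,
    };                           // even though the init-capture itself
    return inner();              // "captures no entity"
  };
  outer();
}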
According to 5.1.2 [expr.prim.lambda] paragraph 7, names appearing in the compound-statement of a lambda-expression are looked up in the context of the lambda-expression, ignoring the fact that the compound-statement will be transformed into the body of the closure type's function operator. This leaves unspecified how the lambda-expression's parameters are found by name lookup. Presumably the parameters hide the corresponding names from the surrounding scope, but this needs to be specified.
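For example (constructed illustration), presumably the parameter i below hides the enclosing local variable i within the compound-statement, but the current wording does not say so:

void g() {
  int i = 1;
  auto lam = [](int i) { return i; };   // which i is found by lookup in the body?
  lam(2);
  (void)i;
}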
According to 5.1.2 [expr.prim.lambda] paragraph 4,
If a lambda-expression does not include a lambda-declarator, it is as if the lambda-declarator were (). The lambda return type is auto, which is replaced by the trailing-return-type if provided...
trailing-return-type is a syntactic nonterminal that includes the -> and thus cannot be used directly to refer to the type. It should instead say something like "the type specified by the trailing-return-type."
The reference in 8.3.5 [dcl.fct] paragraph 2, “...returning trailing-return-type” should be similarly adjusted.
According to 5.2.3 [expr.type.conv] paragraphs 1 and 3 (stated directly or by reference to another section of the Standard), all the following expressions create temporaries:
T(1) T(1, 2) T{1} T{}
However, paragraph 2 says,
The expression T(), where T is a simple-type-specifier or typename-specifier for a non-array complete object type or the (possibly cv-qualified) void type, creates an rvalue of the specified type, which is value-initialized (8.5 [dcl.init]; no initialization is done for the void() case).
This does not say that the result is a temporary, which means that the lifetime of the result is not specified by 12.2 [class.temporary]. Presumably this is just an oversight.
Notes from the October, 2009 meeting:
The specification in 5.2.3 [expr.type.conv] is in error, not because it fails to state that T() is a temporary but because it requires a temporary for the other cases with fewer than two operands. The case where T is a class type is covered by 12.2 [class.temporary] paragraph 1 (“a conversion that creates an rvalue”), and a temporary should not be created when T is not a class type.
Given the following declarations:
struct S { signed long long sll: 3; };
S s = { -1 };
the expressions s.sll-- < 0u and s.sll < 0u have different results. The reason for this is that s.sll-- is an rvalue of type signed long long (5.2.6 [expr.post.incr]), which means that the usual arithmetic conversions (5 [expr] paragraph 10) convert 0u to signed long long and the result is true. s.sll, on the other hand, is a bit-field lvalue, which is promoted (4.5 [conv.prom] paragraph 3) to int; both operands of < have the same rank, so s.sll is converted to unsigned int to match the type of 0u and the result is false. This disparity seems undesirable.
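Concretely (assuming, as above, an implementation where signed long long can represent every value of unsigned int):

bool b1 = s.sll-- < 0u;   // usual arithmetic conversions: 0u becomes signed long long,
                          // so -1 < 0 and b1 is true (s.sll is now -2)
bool b2 = s.sll < 0u;     // bit-field promoted to int, then converted to unsigned int
                          // to match 0u, so b2 is false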
The original proposed resolution for issue 160 included changing extended_type_info (5.2.8 [expr.typeid] paragraph 1, footnote 61) to std::extended_type_info. There was no consensus on whether this name ought to be part of namespace std or in a vendor-specific namespace, so the question was moved into a separate issue.
5.2.8 [expr.typeid] paragraph 4 says,
When typeid is applied to a type-id, the result refers to a std::type_info object representing the type of the type-id. If the type of the type-id is a reference type, the result of the typeid expression refers to a std::type_info object representing the referenced type. If the type of the type-id is a class type or a reference to a class type, the class shall be completely-defined.
I'm wondering whether this is not overly restrictive. I can't think of a reason to require that T be completely-defined in typeid(T) when T is a class type. In fact, several popular compilers enforce that restriction for typeid(T), but not for typeid(T&). Can anyone explain this?
Nathan Sidwell: I think this restriction is so that whenever the compiler has to emit a typeid object of a class type, it knows what the base classes are, and can therefore emit an array of pointers-to-base-class typeids. Such a tree is necessary to implement dynamic_cast and exception catching (in a commonly implemented and obvious manner). If the class could be incomplete, the compiler might have to emit a typeid for incomplete Foo in one object file and a typeid for complete Foo in another object file. The compilation system will then have to make sure that (a) those compare equal and (b) the complete Foo gets priority, if that is applicable.
Unfortunately, there is a problem with exceptions that means there still can be a need to emit typeids for an incomplete class. Namely, one can throw a pointer-to-pointer-to-incomplete. To implement the matching of pointer-to-derived being caught by pointer-to-base, it is necessary for the typeid of a pointer type to contain a pointer to the typeid of the pointed-to type. In order to do the qualification matching on a multi-level pointer type, one has a chain of pointer typeids that can terminate in the typeid of an incomplete type. You cannot simply NULL-terminate the chain, because one must distinguish between different incomplete types.
Dave Abrahams: So if implementations are still required to be able to do it, for all practical purposes, why aren't we letting the user have the benefits?
Notes from the April, 2006 meeting:
There was some concern expressed that this might be difficult under the IA64 ABI. It was also observed that while it is necessary to handle exceptions involving incomplete types, there is no requirement that the RTTI data structures be used for exception handling.
During the discussion of issue 799, which specified the result of using reinterpret_cast to convert an operand to its own type, it was observed that it is probably reasonable to allow reinterpret_cast between any two types that have the same size and alignment.
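For example, a conversion like the following (constructed illustration) is not among the conversions currently enumerated for reinterpret_cast, but would become permissible under that observation:

struct A1 { int n; };
struct B1 { int m; };   // same size and alignment as A1 on the assumed implementation

void h(A1 a) {
  B1 b = reinterpret_cast<B1>(a);   // currently ill-formed
  (void)b;
}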
According to 5.3.1 [expr.unary.op] paragraph 10,
There is an ambiguity in the unary-expression ~X(), where X is a class-name or decltype-specifier. The ambiguity is resolved in favor of treating ~ as a unary complement rather than treating ~X as referring to a destructor.
It is not clear whether this is intended to apply to an expression like (~S)(). In large measure, that depends on whether a class-name is an id-expression or not. If it is, the ambiguity described in 5.3.1 [expr.unary.op] paragraph 10 does apply; if not, the expression is an unambiguous reference to the destructor for class S. There are several places in the Standard that indicate that the name of a type is an id-expression, but that might be more confusing than helpful.
Requirements for the alignment of pointers returned by new-expressions are given in 5.3.4 [expr.new] paragraph 10:
For arrays of char and unsigned char, the difference between the result of the new-expression and the address returned by the allocation function shall be an integral multiple of the most stringent alignment requirement (3.9 [basic.types]) of any object type whose size is no greater than the size of the array being created.
The intent of this wording is that the pointer returned by the new-expression will be suitably aligned for any data type that might be placed into the allocated storage (since the allocation function is constrained to return a pointer to maximally-aligned storage). However, there is an implicit assumption that each alignment requirement is an integral multiple of all smaller alignment requirements. While this is probably a valid assumption for all real architectures, there's no reason that the Standard should require it.
For example, assume that int has an alignment requirement of 3 bytes and double has an alignment requirement of 4 bytes. The current wording only requires that a buffer that is big enough for an int or a double be aligned on a 4-byte boundary (the more stringent requirement), but that would allow the buffer to be allocated on an 8-byte boundary — which might not be an acceptable location for an int.
Suggested resolution: Change "of any object type" to "of every object type."
A similar assumption can be found in 5.2.10 [expr.reinterpret.cast] paragraph 7:
...converting an rvalue of type "pointer to T1" to the type "pointer to T2" (where ... the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value...
Suggested resolution: Change the wording to
...converting an rvalue of type "pointer to T1" to the type "pointer to T2" (where ... the alignment requirements of T1 are an integer multiple of those of T2) and back to its original type yields the original pointer value...
The same change would also be needed in paragraph 9.
Looking up operator new in a new-expression uses a different mechanism from ordinary lookup. According to 5.3.4 [expr.new] paragraph 9,
If the new-expression begins with a unary :: operator, the allocation function's name is looked up in the global scope. Otherwise, if the allocated type is a class type T or array thereof, the allocation function's name is looked up in the scope of T. If this lookup fails to find the name, or if the allocated type is not a class type, the allocation function's name is looked up in the global scope.
Note in particular that the scope in which the new-expression occurs is not considered. For example,
void f() {
  void* operator new(std::size_t, void*);
  int* i = new int;   // okay?
}
In this example, the implicit reference to operator new(std::size_t) finds the global declaration, even though the block-scope declaration of operator new with a different signature would hide it from an ordinary reference.
This seems strange; either the block-scope declaration should be ill-formed or it should be found by the lookup.
Notes from October 2004 meeting:
The CWG agreed that the block-scope declaration should not be found by the lookup in a new-expression. It would, however, be found by ordinary lookup if the allocation function were invoked explicitly.
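A constructed illustration of that distinction:

#include <new>        // declares the library placement operator new
#include <cstddef>

void g(void* buf) {
  void* operator new(std::size_t, void*);     // block-scope declaration
  void* p = operator new(sizeof(int), buf);   // explicit call: ordinary lookup finds
                                              // the block-scope declaration
  int* i = new int;                           // new-expression: the lookup of 5.3.4
                                              // paragraph 9 does not consider it
  (void)p;
  delete i;
}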
According to 5.3.4 [expr.new] paragraphs 18-20, an exception thrown during the initialization of an object allocated by a new-expression will cause a deallocation function to be called for the object's storage if a matching deallocation function can be found. The rules deal only with functions, however; nothing is said regarding a mechanism by which a deallocation function template might be instantiated to free the storage, although 3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 2 indicates that a deallocation function can be an instance of a function template.
One possibility for this processing might be to perform template argument deduction on any deallocation function templates; if there is a specialization that matches the allocation function, by the criteria listed in paragraph 20, that function template would be instantiated and used, although a matching non-template function would take precedence as is the usual outcome of overloading between function template specializations and non-template functions.
Another possibility might be to match non-template deallocation functions with non-template allocation functions and template deallocation functions with template allocation functions.
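For instance, in an example along these lines (names invented for illustration), the question is whether a specialization of the member operator delete template may be instantiated and used to release the storage if W's constructor throws:

#include <cstddef>

struct Pool {
  void* get(std::size_t);
  void put(void*);
};

struct W {
  W();   // may throw
  template<typename P> static void* operator new(std::size_t n, P& pool) {
    return pool.get(n);
  }
  template<typename P> static void operator delete(void* p, P& pool) {
    pool.put(p);
  }
};

void h(Pool& pool) {
  W* w = new (pool) W;   // if W() throws, is operator delete<Pool> called?
  (void)w;               // (cleanup after successful construction omitted)
}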
There is a slightly related wording problem in 5.3.4 [expr.new] paragraph 21:
If a placement deallocation function is called, it is passed the same additional arguments as were passed to the placement allocation function, that is, the same arguments as those specified with the new-placement syntax.
This wording ignores the possibility of default arguments in the allocation function, in which case the arguments passed to the deallocation function might be a superset of those specified in the new-placement.
(See also issue 1682.)
The description in 5.3.4 [expr.new] paragraph 23 regarding calling a deallocation function following an exception during the initialization of an object resulting from a placement new-expression says,
If a placement deallocation function is called, it is passed the same additional arguments as were passed to the placement allocation function, that is, the same arguments as those specified with the new-placement syntax. If the implementation is allowed to make a copy of any argument as part of the call to the allocation function, it is allowed to make a copy (of the same original value) as part of the call to the deallocation function or to reuse the copy made as part of the call to the allocation function. If the copy is elided in one place, it need not be elided in the other.
This seems curious, as it allows reuse of a parameter object that presumably is destroyed immediately upon the return of the allocation function (but see issue 1880 for a question about the timing of such destructions).
5.3.4 [expr.new] paragraph 10 says that the result of an array allocation function and the value of the array new-expression from which it was invoked may be different, allowing for space preceding the array to be used for implementation purposes such as saving the number of elements in the array. However, there is no corresponding description of the relationship between the operand of an array delete-expression and the argument passed to its deallocation function.
3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 3 does state that
the value supplied to operator delete[](void*) in the standard library shall be one of the values returned by a previous invocation of either operator new[](std::size_t) or operator new[](std::size_t, const std::nothrow_t&) in the standard library.
This statement might be read as requiring an implementation, when processing an array delete-expression and calling the deallocation function, to perform the inverse of the calculation applied to the result of the allocation function to produce the value of the new-expression. (5.3.5 [expr.delete] paragraph 2 requires that the operand of an array delete-expression "be the pointer value which resulted from a previous array new-expression.") However, it is not completely clear whether the "shall" expresses an implementation requirement or a program requirement (or both). Furthermore, there is no direct statement about user-defined deallocation functions.
Suggested resolution: A note should be added to 5.3.5 [expr.delete] to clarify that any offset added in an array new-expression must be subtracted in the array delete-expression.
The meaning of an old-style cast is described in terms of const_cast, static_cast, and reinterpret_cast in 5.4 [expr.cast] paragraph 5. Ignoring const_cast for the moment, it basically says that if the conversion performed by a given old-style cast is one of those performed by static_cast, the conversion is interpreted as if it were a static_cast; otherwise, it's interpreted as if it were a reinterpret_cast, if possible. The following example is given in illustration:
struct A {};
struct I1 : A {};
struct I2 : A {};
struct D : I1, I2 {};
A *foo( D *p ) {
  return (A*)( p );   // ill-formed static_cast interpretation
}
The obvious intent here is that a derived-to-base pointer conversion is one of the conversions that can be performed using static_cast, so (A*)(p) is equivalent to static_cast<A*>(p), which is ill-formed because of the ambiguity.
Unfortunately, the description of static_cast in 5.2.9 [expr.static.cast] does NOT support this interpretation. The problem is in the way 5.2.9 [expr.static.cast] lists the kinds of casts that can be performed using static_cast. Rather than saying something like "All standard conversions can be performed using static_cast," it says
An expression e can be explicitly converted to a type T using a static_cast of the form static_cast<T>(e) if the declaration "T t(e);" is well-formed, for some invented temporary variable t.
Given the declarations above, the hypothetical declaration
A* t(p);
is NOT well-formed, because of the ambiguity. Therefore the old-style cast (A*)(p) is NOT one of the conversions that can be performed using static_cast, and (A*)(p) is equivalent to reinterpret_cast<A*>(p), which is well-formed under 5.2.10 [expr.reinterpret.cast] paragraph 7.
Other situations besides ambiguity which might raise similar questions include access violations, casting from virtual base to derived, and casting pointers-to-members when virtual inheritance is involved.
The current definition of constant expressions appears to make unevaluated operands constant expressions; for example, new char[10] would seem to be a constant expression if it appears as the operand of sizeof. This seems wrong.
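For example (constructed illustration):

#include <cstddef>

constexpr std::size_t n = sizeof(new char[10]);   // the new-expression appears only as
                                                  // an unevaluated operand; is it a
                                                  // constant expression?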
7 [dcl.dcl] paragraph 3 reads,
In a simple-declaration, the optional init-declarator-list can be omitted only when... the decl-specifier-seq contains either a class-specifier, an elaborated-type-specifier with a class-key (9.1 [class.name]), or an enum-specifier. In these cases and whenever a class-specifier or enum-specifier is present in the decl-specifier-seq, the identifiers in those specifiers are among the names being declared by the declaration... In such cases, and except for the declaration of an unnamed bit-field (9.6 [class.bit]), the decl-specifier-seq shall introduce one or more names into the program, or shall redeclare a name introduced by a previous declaration. [Example:
enum { };           // ill-formed
typedef class { };  // ill-formed
—end example]
In the absence of any explicit restrictions in 7.1.3 [dcl.typedef], this paragraph appears to allow declarations like the following:
typedef struct S { };    // no declarator
typedef enum { e1 };     // no declarator
In fact, the final example in 7 [dcl.dcl] paragraph 3 would seem to indicate that this is intentional: since it is illustrating the requirement that the decl-specifier-seq must introduce a name in declarations in which the init-declarator-list is omitted, presumably the addition of a class name would have made the example well-formed.
On the other hand, there is no good reason to allow such declarations; the only reasonable scenario in which they might occur is a mistake on the programmer's part, and it would be a service to the programmer to require that such errors be diagnosed.
Suppose we've got this class definition:
struct X {
  void f();
  static int n;
};
I think I can deduce from the existing standard that the following member definitions are ill-formed:
static void X::f() { }
static int X::n;
To come to that conclusion, however, I have to put together several things in different parts of the standard. I would have expected to find an explicit statement of this somewhere; in particular, I would have expected to find it in 7.1.1 [dcl.stc]. I don't see it there, or anywhere.
Gabriel Dos Reis: Or in 3.5 [basic.link], which is about linkage. I would have expected that paragraph to say that members of class types have external linkage when the enclosing class has external linkage. Otherwise 3.5 [basic.link] paragraph 8:
Names not covered by these rules have no linkage.
might imply that such members do not have linkage.
Notes from the April, 2005 meeting:
The question about the linkage of class members is already covered by 3.5 [basic.link] paragraph 5.
The resolution of issue 482 allows a typedef to be redeclared in the same or a containing scope using a qualified declarator-id. This was not the principal goal of the issue and is not supported by current implementations. Should the prohibition of qualified declarator-ids be reinstated for typedefs?
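For example (constructed illustration), the resolution appears to allow:

namespace N {
  typedef int T;
}
typedef int N::T;   // redeclaration of the typedef with a qualified declarator-id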
The resolution of issue 407 does not cover cases involving using-declarations. For example:
namespace A { struct S {}; }
namespace B {
  // This is valid per issue 407
  using A::S;
  typedef A::S S;
  struct S s;
}
namespace C {
  // The typedef does not redefine the name S in this
  // scope, so issue 407's resolution does not apply.
  typedef A::S S;
  using A::S;
  // The name lookup here isn't ambiguous, because it only finds one
  // entity, but it finds both a typedef-name and a non-typedef-name referring
  // to that entity, so the standard doesn't appear to say whether this is valid.
  struct S s;
}
The same issue appears with using-directives:
namespace D { typedef A::S S; }
namespace E {
  using namespace A;
  using namespace D;
  struct S s;   // ok? issue 407 doesn't apply here either
}
One possibility might be to remove the rule that a typedef-name declaration redefines an already-defined name and instead rely on struct stat-style hiding, taking the non-typedef-name if name lookup finds both and they refer to the same type.
Notes from the June, 2014 meeting:
CWG felt that these examples should be well-formed.
7.1.6.3 [dcl.type.elab] paragraph 1 seems to impose an ordering constraint on the elements of friend class declarations. However, the general rule is that declaration specifiers can appear in any order. Should
class C friend;
be well-formed?
According to 7.3 [basic.namespace] paragraph 1,
The name of a namespace can be used to access entities declared in that namespace; that is, the members of the namespace.
implying that all declarations in a namespace, including definitions of members of nested namespaces, explicit instantiations, and explicit specializations, introduce members of the containing namespace. 7.3.1.2 [namespace.memdef] paragraph 3 clarifies the intent somewhat:
Every name first declared in a namespace is a member of that namespace.
However, current changes to clarify the behavior of deleted functions (which must be deleted on their “first declaration”) state that an explicit specialization of a function template is its first declaration.
Section 7.3.3 [namespace.udecl] paragraph 8 says:
A using-declaration is a declaration and can therefore be used repeatedly where (and only where) multiple declarations are allowed.
It contains the following example:
namespace A { int i; }
namespace A1 {
  using A::i;
  using A::i;   // OK: double declaration
}
void f() {
  using A::i;
  using A::i;   // error: double declaration
}
However, if "using A::i;" is really a declaration, and not a definition, it is far from clear that repeating it should be an error in either context. Consider:
namespace A { int i; void g(); }
void f() {
  using A::g;
  using A::g;
}
Surely the definition of f should be analogous to
void f() {
  void g();
  void g();
}
which is well-formed because "void g();" is a declaration and not a definition.
Indeed, if the double using-declaration for A::i is prohibited in f, why should it be allowed in namespace A1?
Proposed Resolution (04/99): Change the comment "// error: double declaration" to "// OK: double declaration". (This should be reviewed against existing practice.)
Notes from 04/00 meeting:
The core language working group was unable to come to consensus over what kind of declaration a using-declaration should emulate. In a straw poll, 7 members favored allowing using-declarations wherever a non-definition declaration could appear, while 4 preferred to allow multiple using-declarations only in namespace scope (the rationale being that the permission for multiple using-declarations is primarily to support its use in multiple header files, which are seldom included anywhere other than namespace scope). John Spicer pointed out that friend declarations can appear multiple times in class scope and asked if using-declarations would have the same property under the "like a declaration" resolution.
As a result of the lack of agreement, the issue was returned to "open" status.
See also issues 56, 85, and 138.
Additional notes (January, 2005):
Some related issues have been raised concerning the following example (modified from a C++ validation suite test):
struct A { int i; static int j; };
struct B : A { };
struct C : A { };
struct D : virtual B, virtual C {
  using B::i;
  using C::i;
  using B::j;
  using C::j;
};
Currently, it appears that the using-declarations of i are ill-formed, on the basis of 7.3.3 [namespace.udecl] paragraph 10:
Since a using-declaration is a declaration, the restrictions on declarations of the same name in the same declarative region (3.3 [basic.scope]) also apply to using-declarations.
Because the using-declarations of i refer to different objects, declaring them in the same scope is not permitted under 3.3 [basic.scope]. It might, however, be preferable to treat this case as many other ambiguities are: allow the declaration but make the program ill-formed if a name reference resolves to the ambiguous declarations.
The status of the using-declarations of j, however, is less clear. They both declare the same entity and thus do not violate the rules of 3.3 [basic.scope]. This might (or might not) violate the restrictions of 9.2 [class.mem] paragraph 1:
Except when used to declare friends (11.3 [class.friend]) or to introduce the name of a member of a base class into a derived class (7.3.3 [namespace.udecl], _N3225_.11.3 [class.access.dcl]), member-declarations declare members of the class, and each such member-declaration shall declare at least one member name of the class. A member shall not be declared twice in the member-specification, except that a nested class or member class template can be declared and then later defined.
Do the using-declarations of j repeatedly declare the same member? Or is the preceding sentence an indication that a using-declaration is not a declaration of a member?
7.3.3 [namespace.udecl] paragraph 20 says,
If a using-declaration uses the keyword typename and specifies a dependent name (14.6.2 [temp.dep]), the name introduced by the using-declaration is treated as a typedef-name (7.1.3 [dcl.typedef]).
This wording does not address use of typename in a using-declaration with a non-dependent name; the primary specification of the typename keyword in 14.6 [temp.res] does not appear to describe this case, either.
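For example (constructed illustration):

struct B { typedef int type; };
struct D : B {
  using typename B::type;   // typename with a non-dependent name: neither
                            // 7.3.3 paragraph 20 nor 14.6 describes this case
};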
The status of an example like the following is unclear in the current Standard:
struct B { void f(); };
template<typename T> struct S: T {
  using B::f;
};
7.3.3 [namespace.udecl] does not deal explicitly with dependent base classes, but does say in paragraph 3,
In a using-declaration used as a member-declaration, the nested-name-specifier shall name a base class of the class being defined. If such a using-declaration names a constructor, the nested-name-specifier shall name a direct base class of the class being defined; otherwise it introduces the set of declarations found by member name lookup (10.2 [class.member.lookup], 3.4.3.1 [class.qual]).
In the definition of S, B::f is not a dependent name but resolves to an apparently unrelated class. However, because S could be instantiated as S<B>, presumably 14.6 [temp.res] paragraph 8 would apply:
No diagnostic shall be issued for a template definition for which a valid specialization can be generated.
Note also the resolution of issue 515, which permitted a similar use of a dependent base class named with a non-dependent name.
A using-declaration cannot name a scoped enumerator, according to 7.3.3 [namespace.udecl] paragraph 7. This is presumably because a scoped enumerator belongs to an enumeration scope and thus logically cannot belong to the non-enumeration scope in which the using-declaration appears. It seems inconsistent, however, to permit using-declarations to name unscoped enumerators but not scoped enumerators.
Also, 7.3.3 [namespace.udecl] paragraph 3 says,
In a using-declaration used as a member-declaration, the nested-name-specifier shall name a base class of the class being defined.
The consequence of this is that
enum E { e0 };
void f() {
  using E::e0;
}
is well-formed, but
struct B { enum E { e0 }; };
struct D : B {
  using B::E::e0;
};
is not. Again, this seems inconsistent. Should these rules be relaxed?
The set of declarations introduced by a using-declaration that is a member-declaration is specified by 7.3.3 [namespace.udecl] paragraph 3. However, there is no corresponding specification for a non-member using-declaration.
The status of an example like the following is not clear:
void f(int, int);
template<typename T> void g(T t) {
  f(t);
}
void f(int, int = 0);
void h() {
  g(0);
}
According to 14.6.4 [temp.dep.res] paragraph 1,
In resolving dependent names, names from the following sources are considered:
- Declarations that are visible at the point of definition of the template.
- ...
If this is to be interpreted as meaning that only the declarations that are visible at the point of definition can be used in overload resolution for dependent calls, the call g(0) is ill-formed. If, however, it is the names, not the declarations, that are captured, then presumably the second declaration of f should be considered, making the call well-formed. There is implementation divergence for this example.
The resolution of issue 1551 recently clarified the requirements in similar cases involving using-declarations:
namespace N { void f(int, int); }
using N::f;
template<typename T> void g(T t) {
  f(t);
}
namespace N { void f(int, int = 0); }
void h() {
  g(0);
}
The note added to 7.3.3 [namespace.udecl] paragraph 11 makes clear that the call g(0) is well-formed in this example.
This outcome results in an unfortunate discrepancy between how default arguments and overloaded functions are treated, even though default arguments could conceptually be viewed as simply adding extra overloads for the additional arguments.
Notes from the June, 2014 meeting:
CWG was unable to come to consensus regarding the desired outcome, with an approximately equal split between desiring the first example to be well-formed or ill-formed. It was noted that the resolution of issue 1850 makes the corresponding case for non-dependent references ill-formed, with no diagnostic required. Similar questions also apply to completing an array type, which also involves a modification to an existing entity declaration in a given scope.
It is not clear whether some of the wording in 7.5 [dcl.link] that applies only to function types and names ought also to apply to object names. In particular, paragraph 3 says,
Every implementation shall provide for linkage to functions written in the C programming language, "C", and linkage to C++ functions, "C++".
Nothing is said about variable names, apparently meaning that implementations need not provide C (or even C++!) linkage for variable names. Also, paragraph 5 says,
Except for functions with C++ linkage, a function declaration without a linkage specification shall not precede the first linkage specification for that function. A function can be declared without a linkage specification after an explicit linkage specification has been seen; the linkage explicitly specified in the earlier declaration is not affected by such a function declaration.
There doesn't seem to be a good reason for these provisions not to apply to variable names, as well.
Does the language linkage of a block-scope declaration determine the language linkage of a subsequent declaration of the same name in a different scope? For example,
extern "C" void f() { void g(); // Implicitly extern "C" } void g() { } // Also extern "C" or linkage mismatch?
In other contexts, inheritance of linkage requires that the earlier declaration be visible, as in 3.5 [basic.link] paragraph 6:
The name of a function declared in block scope and the name of a variable declared by a block scope extern declaration have linkage. If there is a visible declaration of an entity with linkage having the same name and type, ignoring entities declared outside the innermost enclosing namespace scope, the block scope declaration declares that same entity and receives the linkage of the previous declaration.
The specification for language linkage in 7.5 [dcl.link] paragraph 5, however, makes no mention of visibility:
A function can be declared without a linkage specification after an explicit linkage specification has been seen; the linkage explicitly specified in the earlier declaration is not affected by such a function declaration.
According to 7.6.2 [dcl.align] paragraph 6,
If the defining declaration of an entity has an alignment-specifier, any non-defining declaration of that entity shall either specify equivalent alignment or have no alignment-specifier. Conversely, if any declaration of an entity has an alignment-specifier, every defining declaration of that entity shall specify an equivalent alignment. No diagnostic is required if declarations of an entity have different alignment-specifiers in different translation units.
Because this is phrased in terms of the definition of an entity, an example like the following is presumably well-formed (even though there can be no definition of n):
alignas(8) extern int n;
alignas(16) extern int n;
Is this intentional?
Split off from issue 453.
It is in general not possible to determine at compile time whether a reference is used before it is initialized. Nevertheless, there is some sentiment to require a diagnostic in the obvious cases that can be detected at compile time, such as the name of a reference appearing in its own initializer. The resolution of issue 453 originally made such uses ill-formed, but the CWG decided that this question should be a separate issue.
Rationale (October, 2005):
The CWG felt that this error was not likely to arise very often in practice. Implementations can warn about such constructs, and the resolution for issue 453 makes executing such code undefined behavior; that seemed to address the situation adequately.
Note (February, 2006):
Recent discussions have suggested that undefined behavior be reduced. One possibility (broadening the scope of this issue to include object declarations as well as references) was to require a diagnostic if the initializer uses the value, but not just the address, of the object or reference being declared:
int i = i;       // Ill-formed, diagnostic required
void* p = &p;    // Okay
The current wording of 8.3.5 [dcl.fct] paragraph 6 encompasses more than it should:
If the type of a parameter includes a type of the form “pointer to array of unknown bound of T” or “reference to array of unknown bound of T,” the program is ill-formed. [Footnote: This excludes parameters of type “ptr-arr-seq T2” where T2 is “pointer to array of unknown bound of T” and where ptr-arr-seq means any sequence of “pointer to” and “array of” derived declarator types. This exclusion applies to the parameters of the function, and if a parameter is a pointer to function or pointer to member function then to its parameters also, etc. —end footnote]
The normative wording (contrary to the intention expressed in the footnote) excludes declarations like
template<class T> struct S {};
void f(S<int (*)[]>);
and
struct S {};
void f(int(*S::*)[]);
but not
struct S {};
void f(int(S::*)[]);
Is this program well-formed?
struct S {
  static int f2(int = f1());   // OK?
  static int f1(int = 2);
};
int main() {
  return S::f2();
}
A class member function can in general refer to class members that are declared lexically later. But what about referring to default arguments of member functions that haven't yet been declared?
It seems to me that if f2 can refer to f1, it can also refer to the default argument of f1, but at least one compiler disagrees.
Notes from the February, 2012 meeting:
Implementations seem to have come to agreement that this example is ill-formed.
Additional note (March, 2013):
Additional discussion has occurred suggesting the following examples as illustrations of this issue:
struct B {
  struct A { int a = 0; };
  B(A = A());   // Not permitted?
};
as well as
struct C {
  struct A { int a = C().n; };            // can we use the default argument here?
  C(int k = 0);
  int n;
};

bool f();
struct D {
  struct A { bool a = noexcept(B()); };   // can we use the default initializer here?
  struct B { int b = f() ? throw 0 : 0; };
};
(See also issue 325.)
Additional note (October, 2013):
Issue 1330 treats exception-specifications like default arguments, evaluated in the completed class type. That raises the same questions regarding self-referential noexcept clauses that apply to default arguments.
It is not clear from 8.3.6 [dcl.fct.default] whether the following is well-formed or not:
template<typename... T> void f2(int a = 0, T... b, int c = 1);
f2<>();   // parameter a has the value 0 and parameter c has the value 1
(T... b is a non-deduced context per 14.8.2.5 [temp.deduct.type] paragraph 5, so the template arguments must be specified explicitly.)
Notes from the April, 2013 meeting:
CWG agreed that the example should be ill-formed.
Additional note (August, 2013):
8.3.6 [dcl.fct.default] paragraph 4 explicitly allows for a function parameter pack to follow a parameter with a default argument:
In a given function declaration, each parameter subsequent to a parameter with a default argument shall have a default argument supplied in this or a previous declaration or shall be a function parameter pack.
However, any instantiation of such a function template with a non-empty pack expansion would result in a function declaration in which one or more parameters without default arguments (from the pack expansion) would follow a parameter with a default argument and thus would be ill-formed. Such a function template declaration thus violates 14.6 [temp.res] paragraph 8:
If every valid specialization of a variadic template requires an empty template parameter pack, the template is ill-formed, no diagnostic required.
Although the drafting review teleconference of 2013-08-26 suggested closing the issue as NAD, it is being kept open to discuss and resolve this apparent contradiction.
Notes from the September, 2013 meeting:
CWG agreed that this example should be accepted; the restriction on default arguments applies to the template declaration itself, not to its specializations.
In this example:
struct A {}; struct B: A { B(int); B(B&); B(A); }; void foo(B); void bar() { foo(0); }
we are copy-initializing a B from 0. So by 13.3.1.4 [over.match.copy] we consider all the converting constructors of B, and choose B(int) to create a B. Then, by 8.5 [dcl.init] paragraph 15, we direct-initialize the parameter from that temporary B. By 13.3.1.3 [over.match.ctor] we consider all constructors. The copy constructor cannot be called with a temporary, but B(A) is callable.
As far as I can tell, the Standard says that this example is well-formed, and calls B(A). EDG and G++ have rejected this example with a message about the copy constructor not being callable, but I have been unsuccessful in finding anything in the Standard that says that we only consider the copy constructor in the second step of copy-initialization. I wouldn't mind such a rule, but it doesn't seem to be there. And implementing issue 391 causes G++ to start accepting the example.
This question came up before in a GCC bug report; in the discussion of that bug Nathan Sidwell said that some EDG folks explained to him why the testcase is ill-formed, but unfortunately didn't provide that explanation in the bug report.
I think the resolution of issue 391 makes this example well-formed; it was previously ill-formed because in order to bind the temporary B(0) to the argument of A(const A&) we needed to make another temporary B, and that's what made the example ill-formed. If we want this example to stay ill-formed, we need to change something else.
Steve Adamczyk:
I tracked down my response to Nathan at the time, and it related to my paper N1232 (on the auto_ptr problem). The change that came out of that paper is in 13.3.3.1 [over.best.ics] paragraph 4:
However, when considering the argument of a user-defined conversion function that is a candidate by 13.3.1.3 [over.match.ctor] when invoked for the copying of the temporary in the second step of a class copy-initialization, or by 13.3.1.4 [over.match.copy], 13.3.1.5 [over.match.conv], or 13.3.1.6 [over.match.ref] in all cases, only standard conversion sequences and ellipsis conversion sequences are allowed.
This is intended to prevent use of more than one implicit user-defined conversion in an initialization.
I told Nathan B(A) can't be called because its argument would require yet another user-defined conversion, but I was wrong. I saw the conversion from B to A and immediately thought “user-defined,” but in fact because B is a derived class of A the conversion according to 13.3.3.1 [over.best.ics] paragraph 6 is a derived-to-base Conversion (even though it will be implemented by calling a copy constructor).
So I agree with you: with the analysis above and the change for issue 391 this example is well-formed. We should discuss whether we want to make a change to keep it ill-formed.
There is an inconsistency in the handling of references vs pointers in user defined conversions and overloading. The reason for that is that the combination of 8.5.3 [dcl.init.ref] and 4.4 [conv.qual] circumvents the standard way of ranking conversion functions, which was probably not the intention of the designers of the standard.
Let's start with some examples, to show what it is about:
struct Z { Z(){} };

struct A {
  Z x;
  operator Z *() { return &x; }
  operator const Z *() { return &x; }
};

struct B {
  Z x;
  operator Z &() { return x; }
  operator const Z &() { return x; }
};

int main() {
  A a;
  Z *a1=a;
  const Z *a2=a;   // not ambiguous
  B b;
  Z &b1=b;
  const Z &b2=b;   // ambiguous
}
So while both classes A and B are structurally equivalent, there is a difference in operator overloading. I want to start with the discussion of the pointer case (const Z *a2=a;): 13.3.3 [over.match.best] is used to select the best viable function. Rule 4 selects A::operator const Z*() as best viable function using 13.3.3.2 [over.ics.rank] since the implicit conversion sequence const Z* -> const Z* is a better conversion sequence than Z* -> const Z*.
So what is different in the reference case? A qualification conversion applies only to pointers, according to 4.4 [conv.qual]. According to 8.5.3 [dcl.init.ref] paragraphs 4-7, references are initialized by binding, using the concept of reference-compatibility. The problem is that in this binding context there is no conversion, and therefore no comparison of conversion sequences. More precisely, all such bindings are considered identity conversions according to 13.3.3.1.4 [over.ics.ref] paragraph 1, so they compare equal. Thus, for overload-resolution purposes, binding a const Z& to a const Z& is no better than binding a const Z& to a Z&, and const Z &b2=b; is therefore ambiguous. [13.3.3.1.4 [over.ics.ref] paragraph 5 and 13.3.3.2 [over.ics.rank] paragraph 3 rule 3 (S1 and S2 are reference bindings...) do not seem to apply to this case.]
There are other ambiguities that result from the special treatment of references. Example:
struct A {int a;};
struct B: public A { B() {}; int b;};

struct X {
  B x;
  operator A &() { return x; }
  operator B &() { return x; }
};

int main() {
  X x;
  A &g=x;   // ambiguous
}
Since references to both class A and class B are reference-compatible with a reference to class A, and since from the point of view of ranking implicit conversion sequences both bindings are identity conversions, the initialization is ambiguous.
So why should this be a defect?
So overall I think this was not the intention of the authors of the standard.
So how could this be fixed? For comparing conversion sequences (and only for comparing) reference binding should be treated as if it was a normal assignment/initialization and cv-qualification would have to be defined for references. This would affect 8.5.3 [dcl.init.ref] paragraph 6, 4.4 [conv.qual] and probably 13.3.3.2 [over.ics.rank] paragraph 3.
Another fix could be to add a special case in 13.3.3 [over.match.best] paragraph 1.
The normative wording of 8.5.4 [dcl.init.list] regarding the lifetime of the array underlying an initializer_list object does not match the intent as specified in the example in paragraph 6 of that section, even after application of the resolution of issue 1290. That example contains the lines:
void f() { std::initializer_list<int> i3 = { 1, 2, 3 }; }
The commentary indicates that the lifetime of the array created for the initialization of i3 “persists for the lifetime of the variable.” However, that is not the effect of the normative wording. According to paragraph 3,
if T is a specialization of std::initializer_list<E>, an initializer_list object is constructed as described below and used to initialize the object according to the rules for initialization of an object from a class of the same type (8.5 [dcl.init]).
In other words, the underlying array for {1,2,3} in the example is associated with the temporary and shares its lifetime; its lifetime is not extended to that of the variable.
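Under that reading, a use such as the following (an illustrative sketch) would be left dangling, contrary to the intent expressed in the example:

#include <initializer_list>

void g() {
  std::initializer_list<int> i3 = { 1, 2, 3 };
  const int *p = i3.begin();   // intended to remain valid as long as i3 is alive, but the
                               // normative wording ties the array's lifetime to the temporary
}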
Notes from the February, 2014 meeting:
The resolution of issue 1299, clarifying the relationship between temporary expressions and temporary objects, is relevant to this issue.
A POD-struct is not permitted to have a user-declared copy assignment operator (9 [class] paragraph 4). However, a template assignment operator is not considered a copy assignment operator, even though its specializations can be selected by overload resolution for performing copy operations (12.8 [class.copy] paragraph 9 and especially footnote 114). Consequently, X in the following code is a POD, notwithstanding the fact that copy assignment (for a non-const operand) is a member function call rather than a bitwise copy:
struct X {
template<typename T> const X& operator=(T&);
};
void f() {
X x1, x2;
x1 = x2; // calls X::operator=<X>(X&)
}
Is this intentional?
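One way the answer becomes observable is via the type traits; under the analysis above, something like the following would presumably hold (a sketch, not part of the original report):

#include <type_traits>

struct X {
  template<typename T> const X& operator=(T&);
};

// The assignment operator template is not a copy assignment operator, so X retains its
// implicitly-declared (trivial) copy assignment operator and remains a POD:
static_assert(std::is_pod<X>::value, "X is a POD under the current rules");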
9.2 [class.mem] paragraph 1 allows nested classes, class templates, and enumerations to be declared and then later defined in the class member-specification. There does not appear to be a restriction on using a qualified-id in that definition. Should such a restriction be added?
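The kind of usage in question might be sketched as follows (a hypothetical illustration of one reading of the question):

struct S {
  struct N;         // declared here...
  struct S::N { };  // ...and later defined in the member-specification using a
                    // qualified-id; should this be prohibited?
};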
There doesn't seem to be a prohibition in 9.5 [class.union] against a declaration like
union { int : 0; } x;

Should that be valid? If so, 8.5 [dcl.init] paragraph 5 third bullet, which deals with default-initialization of unions, should say that no initialization is done if there are no data members.
What about:
union { } x;
static union { };

If the first example is well-formed, should either or both of these cases be well-formed as well?
(See also the resolution for issue 151.)
Notes from 10/00 meeting: The resolution to issue 178, which was accepted as a DR, addresses the first point above (default initialization). The other questions have not yet been decided, however.
CWG decided at the 2014-06 (Rapperswil) meeting to address only a limited subset of the questions raised by issues 1857 and 1861. This issue is a placeholder for the remaining questions, such as defining a “bit” in terms of a value of 2^n, specifying whether a bit-field has a sign bit, etc.
The term "ambiguous base class" doesn't seem to be actually defined anywhere. 10.2 [class.member.lookup] paragraph 7 seems like the place to do it.
Referring to a private member of a class, 11 [class.access] paragraph 1 says,
its name can be used only by members and friends of the class in which it is declared.
That wording does not appear to reflect the intent of access control, however. Consider the following:
struct S {
void f(int);
private:
void f(double);
};
void g(S* sp) {
sp->f(2); // Ill-formed?
}
The statement from 11 [class.access] paragraph 1 says that the name f can be used only by members and friends of S. Function g is neither, and it clearly contains a use of the name f. That appears to make it ill-formed, in spite of the fact that overload resolution will select the public member.
A related question is whether the use of the term “name” in the description of the effect of access control means that it does not apply to constructors and destructors, which do not have names.
Mike Miller: The phrase “its name can be used” should be understood as “it can be referred to by name.” Paragraph 4, among other places, makes it clear that access control is applied after overload resolution. The “name” phrasing is there to indicate that access control does not apply where the name is not used (in a call via a pointer, for example).
I have heard a claim that the following code is valid, but I don't see why.
struct A { int foo (); }; struct B: A { private: using A::foo; }; int main () { return B ().foo (); }
It seems to me that the using declaration in B should hide the public foo in A. Then the call to B::foo should fail because B::foo is not accessible in main.
Am I missing something?
Steve Adamczyk: This is similar to the last example in 11.2 [class.access.base]. In prose, the rule is that if you have access to cast to a base class and you have access to the member in the base class, you are given access in the derived class. In this case, A is a public base class of B and foo is public in A, so you can access foo through a B object. The actual permission for this is in the fourth bullet in 11.2 [class.access.base] paragraph 4.
The wording changes for issue 9 make this clearer, but I believe even without them this example could be discerned to be valid.
See my paper J16/96-0034, WG21/N0852 on this topic.
Steve Clamage: But a using-declaration is a declaration (7.3.3 [namespace.udecl]). Compare with
struct B : A { private: int foo(); };
In this case, the call would certainly be invalid, even though your argument about casting B to an A would make it OK. Your argument basically says that an access adjustment to make something less accessible has no effect. That also doesn't sound right.
Steve Adamczyk: I agree that is strange. I do think that's what 11.2 [class.access.base] says, but perhaps that's not what we want it to say.
With the change from a scope-based to an entity-based definition of friendship (see issues 372 and 580), it could well make sense to grant friendship to enumerations and variables, for example:
enum E: int;
class C {
  static const int i = 5;   // Private
  friend E;
  friend int x;
};
enum E: int { e = C::i };   // OK: E is a friend
int x = C::i;               // OK: x is a friend
According to the current wording of 11.3 [class.friend] paragraph 3, the friend declaration of E is well-formed but ignored, while the friend declaration of x is ill-formed.
Although it is not possible to specify a constructor's template arguments in a constructor invocation (because the constructor has no name but is invoked by use of the constructor's class's name), it is possible to “name” the constructor in declarative contexts: per 3.4.3.1 [class.qual] paragraph 2,
In a lookup in which the constructor is an acceptable lookup result, if the nested-name-specifier nominates a class C, and the name specified after the nested-name-specifier, when looked up in C, is the injected-class-name of C (clause 9 [class]), the name is instead considered to name the constructor of class C... Such a constructor name shall be used only in the declarator-id of a declaration that names a constructor.
Should it therefore be possible to specify template-arguments for a templated constructor in an explicit instantiation or specialization? For example,
template <int dim> struct T {}; struct X { template <int dim> X (T<dim> &) {}; }; template X::X<> (T<2> &);
If so, that should be clarified in the text. In particular, 12.1 [class.ctor] paragraph 1 says,
Constructors do not have names. A special declarator syntax using an optional sequence of function-specifiers (7.1.2 [dcl.fct.spec]) followed by the constructor's class name followed by a parameter list is used to declare or define the constructor.
This certainly sounds as if the parameter list must immediately follow the class name, with no allowance for a template argument list.
It would be worthwhile in any event to revise this wording to utilize the “considered to name” approach of 3.4.3.1 [class.qual]; as it stands, this wording sounds as if the following would be acceptable:
struct S {
S();
};
S() { } // qualified-id not required?
Notes from the October, 2006 meeting:
It was observed that explicitly specifying the template arguments in a constructor declaration is never actually necessary because the arguments are, by definition, all deducible and can thus be omitted.
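That is, the explicit instantiation in the example above could presumably be written without any template argument list, with dim deduced from the parameter type (a sketch):

template X::X(T<2> &);   // dim deduced as 2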
It is not clear when, if ever, a constructor template can be considered to provide a default constructor. For example:
struct A {
  template<typename ...T> A(T...);   // #1
  A(std::initializer_list<long>);    // #2
};
A a{};
According to 8.5.4 [dcl.init.list] paragraph 3, A will be value-initialized if it has a default constructor, and there is implementation divergence whether this example calls #1 or #2.
Similarly, for an example like
struct B { template<typename T=int> B(T = 0); };
it is not completely clear whether a default constructor should be implicitly declared or not.
More generally, do utterances in the Standard concerning “constructors” also apply to constructor templates?
Notes from the February, 2014 meeting:
One possibility discussed was that we may need to change places that explicitly refer to a default constructor to use overload resolution, similar to the change that was made a few years ago with regard to copy construction vs “copy constructor.” One additional use of “default constructor” is in determining the triviality of a class, but it might be a good idea to remove the concept of a trivial class altogether. This possibility will be explored.
In an example like,
struct Y {}; template <typename T> struct X : public virtual Y { }; template <typename T> class A : public X<T> { template <typename S> A (S) : S () { } }; template A<int>::A (Y);
Should S be found? (S is a dependent name, so if it resolves to a base class type in the instantiated template, it should satisfy the requirements.) All the compilers I tried allowed this example, but 12.6.2 [class.base.init] paragraph 2 says,
Names in a mem-initializer-id are looked up in the scope of the constructor's class and, if not found in that scope, are looked up in the scope containing the constructor's definition.
The name S is not declared in those scopes.
Mike Miller: Here's another example that is accepted by most/all compilers but not by the current wording:
namespace N { struct B { B(int); }; typedef B typedef_B; struct D: B { D(); }; } N::D::D(): typedef_B(0) { }
Except for the fact that the constructor function parameter names are ignored (see paragraph 7), what the compilers seem to be doing is essentially ordinary unqualified name lookup.
Notes from the October, 2009 meeting:
The eventual resolution of this issue should take into account the template parameter scope introduced by the resolution of issue 481.
The resolution of issue 597 and anticipated resolution of issue 1517 allow access to non-virtual base classes outside the lifetime of the object. However, for no apparent reason, references to nonstatic data members are still prohibited. This disparity should be rectified.
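A sketch of the disparity, assuming storage in which the object's lifetime has not yet started (illustrative only; the comments reflect the reading described above):

#include <new>

struct A { int a; };
struct B : A { int b; };

void f() {
  alignas(B) unsigned char buf[sizeof(B)];
  B *pb = reinterpret_cast<B*>(buf);   // lifetime of the B object has not started
  A *pa = static_cast<A*>(pb);         // conversion to a non-virtual base class,
                                       // permitted by the resolution of issue 597
  pb->b = 0;                           // still prohibited: refers to a nonstatic data member
  new (buf) B;                         // lifetime starts here
}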
[Picked up by evolution group at October 2002 meeting.]
(See also paper J16/99-0005 = WG21 N1182.)

At the London meeting, 12.8 [class.copy] paragraph 31 was changed to limit the optimization described to only the following cases:
Can we find an appropriate description for the desired cases?
Rationale (04/99): The absence of this optimization does not constitute a defect in the Standard, although the proposed resolution in the paper should be considered when the Standard is revised.
Note (March, 2008):
The Evolution Working Group has accepted the intent of this issue and referred it to CWG for action (not for C++0x). See paper J16/07-0033 = WG21 N2173.
Notes from the June, 2008 meeting:
The CWG decided to take no action on this issue until an interested party produces a paper with analysis and a proposal.
Consider the following example:
int c; struct A { A() { ++c; } A(const A&) { ++c; } }; struct B { A a; B(const A& a): a(a) { } }; int main() { (B(A())); return c - 1; }
Here we would like to be able to avoid the copy and just construct the A() directly into the A subobject of B. But we can't, because it isn't allowed by 12.8 [class.copy] paragraph 34 bullet 3:
when a temporary class object that has not been bound to a reference (12.2 [class.temporary]) would be copied/moved to a class object with the same cv-unqualified type, the copy/move operation can be omitted by constructing the temporary object directly into the target of the omitted copy/move
The part about not being bound to a reference was added for an unrelated reason by issue 185. If that resolution were recast to require that the temporary object is not accessed after the copy, rather than banning the reference binding, this optimization could be applied.
The similar example using pass by value is also not one of the allowed cases, which could be considered part of issue 6.
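The pass-by-value variant alluded to above might look like this (a sketch, assuming that is the intended reading):

int c;
struct A { A() { ++c; } A(const A&) { ++c; } };
struct B {
  A a;
  B(A a): a(a) { }   // the copy from the parameter into the member is not among the
                     // permitted elisions, since the source is a parameter rather than
                     // a temporary
};
int main() { (B(A())); return c - 1; }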
The current wording of the Standard does not make clear whether a special member function that is defaulted and implicitly deleted is trivial. Triviality is visible in various ways that don't involve invoking the function, such as determining whether a type is trivially copyable and determining the result of various type traits. It also factors into some ABI specifications.
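For instance (an illustrative sketch; the trait result mentioned is the open question, not a settled fact):

#include <type_traits>

struct NoCopy {
  NoCopy() = default;
  NoCopy(const NoCopy&) = delete;
};

struct A {
  NoCopy m;
  // The implicitly-declared copy constructor of A is defaulted and deleted. Whether it
  // is also trivial affects, among other things, whether A is a trivially copyable class
  // and hence the value of std::is_trivially_copyable<A>::value.
};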
Notes from the June, 2014 meeting:
CWG felt that deleted functions should be trivial. See also issue 1590.
Consider the following example:
class B1 {};
typedef void (B1::*PB1) ();   // memptr to B1

class B2 {};
typedef void (B2::*PB2) ();   // memptr to B2

class D1 : public B1, public B2 {};
typedef void (D1::*PD) ();    // memptr to D1

struct S {
  operator PB1();             // can be converted to PD
} s;

struct T {
  operator PB2();             // can be converted to PD
} t;

void foo() {
  s == t;                     // Is this an error?
}
According to 13.6 [over.built] paragraph 16, there is an operator== for PD (“For every pointer to member type...”), so why wouldn't it be used for this comparison?
Mike Miller: The problem, as I understand it, is that 13.3.1.2 [over.match.oper] paragraph 3, bullet 3, sub-bullet 3 is broader than it was intended to be. It says that candidate built-in operators must “accept operand types to which the given operand or operands can be converted according to 13.3.3.1 [over.best.ics].” 13.3.3.1.2 [over.ics.user] describes user-defined conversions as having a second standard conversion sequence, and there is nothing to restrict that second standard conversion sequence.
My initial thought on addressing this would be to say that user-defined conversion sequences whose second standard conversion sequence contains a pointer conversion or a pointer-to-member conversion are not considered when selecting built-in candidate operator functions. They would still be applicable after the hand-off to Clause 5 (e.g., in bringing the operands to their common type, 5.10 [expr.eq], or composite pointer type, 5.9 [expr.rel]), just not in constructing the list of built-in candidate operator functions.
I started to suggest restricting the second standard conversion sequence to conversions having Promotion or Exact Match rank, but that would exclude the Boolean conversions, which are needed for !, &&, and ||. (It would have also restricted the floating-integral conversions, though, which might be a good idea. They can't be used implicitly, I think, because there would be an ambiguity among all the promoted integral types; however, none of the compilers I tested even tried those conversions because the errors I got were not ambiguities but things like “floating point operands not allowed for %”.)
Bill Gibbons: I recall seeing this problem before, though possibly not in committee discussions. As written this rule makes the set of candidate functions dependent on what classes have been defined, including classes not otherwise required to have been defined in order for "==" to be meaningful. For templates this implies that the set is dependent on what templates have been instantiated, e.g.
template<class T> class U : public T { };
U<B1> u;   // changes the set of candidate functions to include
           // operator==(U<B1>,U<B1>)?
There may be other places where the existence of a class definition, or worse, a template instantiation, changes the semantics of an otherwise valid program (e.g. pointer conversions?) but it seems like something to be avoided.
(See also issue 954.)
Although the intent is that the ! operator should be usable with an operand that is a class object having an explicit conversion to bool (i.e., its operand is “contextually converted to bool”), the selection of the conversion operator is done via 13.3.1.2 [over.match.oper], 13.3.2 [over.match.viable], and 13.3.3 [over.match.best], which do not make specific allowance for this special characteristic of the ! operator and thus will not select the explicit conversion function.
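A minimal sketch of the intended usage (assuming the usual formulation of contextual conversion to bool):

struct S {
  explicit operator bool() const;
};

void f(S s) {
  if (!s) { }   // intended to be well-formed: the operand of ! is contextually converted
                // to bool, but the overload-resolution wording cited above does not find
                // the explicit conversion function
}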
Notes from the June, 2014 meeting:
CWG noted that this same issue affects && and ||.
According to 13.3.1.5 [over.match.conv] paragraph 1, when a class type S is used as an initializer for an object of type T,
The conversion functions of S and its base classes are considered. Those non-explicit conversion functions that are not hidden within S and yield type T or a type that can be converted to type T via a standard conversion sequence (13.3.3.1.1 [over.ics.scs]) are candidate functions.
Because conversion from std::nullptr_t to bool is only permitted in direct-initialization (4.12 [conv.bool]), it is not clear whether there is a standard conversion sequence from std::nullptr_t to bool, considering that an implicit conversion sequence is intended to model copy-initialization. Should 13.3.1.5 [over.match.conv] be understood to refer only to conversions permitted in copy-initialization, or should the form of the initialization be considered? For example,
struct SomeType {
operator std::nullptr_t();
};
bool b{ SomeType() }; // Well-formed?
Note also 13.3.3.2 [over.ics.rank] paragraph 4, which may bear on the intent (or, alternatively, might describe a situation that cannot arise):
A conversion that does not convert a pointer, a pointer to member, or std::nullptr_t to bool is better than one that does.
According to 13.3.3 [over.match.best] paragraph 4, the following program appears to be ill-formed:
void f(int, int=0); void f(int=0, int); void g() { f(); }
Though I do not expect this is the intent of this paragraph in the standard.
13.3.3 [over.match.best] paragraph 4:
If the best viable function resolves to a function for which multiple declarations were found, and if at least two of these declarations or the declarations they refer to in the case of using-declarations specify a default argument that made the function viable, the program is ill-formed. [Example:

namespace A { extern "C" void f(int = 5); }
namespace B { extern "C" void f(int = 5); }
using A::f;
using B::f;
void use() {
  f(3);   // OK, default argument was not used for viability
  f();    // Error: found default argument twice
}

end example]
Both paragraph 3 and paragraph 4 of 13.3.3.2 [over.ics.rank] have overload resolution tiebreakers for reference binding. It might be possible to merge those into a single treatment.
The Standard is not clear whether the following example is well-formed or not:
struct S { static void f(int); static void f(double); }; S s; void (*pf)(int) = &s.f;
According to 5.2.5 [expr.ref] paragraph 4 bullet 3, you do function overload resolution to determine whether x.f is a static or non-static member function. 5.3.1 [expr.unary.op] paragraph 6 says that you can only take the address of an overloaded function in a context that determines the overload to be chosen, and the initialization of a function pointer is such a context (13.4 [over.over] paragraph 1). The problem is that 13.4 [over.over] is phrased in terms of “an overloaded function name,” and this is a member access expression, not a name.
There is variability among implementations as to whether this example is accepted; some accept it as written, some only if the & is omitted, and some reject it in both forms.
Additional note (October, 2010):
A related question concerns an example like
struct S { static void g(int*) {} static void g(long) {} } s; void foo() { (&s.g)(0L); }
Because the address occurs in a call context and not in one of the contexts mentioned in 13.4 [over.over] paragraph 1, the call expression in foo is presumably ill-formed. Contrast this with the similar example
void g1(int*) {} void g1(long) {} void foo1() { (&g1)(0L); }
This call presumably is well-formed because 13.3.1.1 [over.match.call] applies to “the address of a set of overloaded functions.” (This was clearer in the wording prior to the resolution of issue 704: “...in this context using &F behaves the same as using the name F by itself.”) It's not clear that there's any reason to treat these two cases differently.
This question also bears on the original question of this issue, since the original wording of 13.3.1.1 [over.match.call] also described the case of an ordinary member function call like s.g(0L) as involving the “name” of the function, even though the postfix-expression is a member access expression and not a “name.” Perhaps the reference to “name” in 13.4 [over.over] should be similarly understood as applying to member access expressions?
Even though a function cannot take a parameter of type void, the current rules for overload resolution require consideration of overloaded operators when one operand has a user-defined or enumeration type and the other has type void. This can result in side effects and possibly errors, for example:
template <class T> struct A {
  T t;
  typedef T type;
};
struct X {
  typedef A<void> type;
};
template <class T> void operator ,(typename T::type::type, T) {}
int main() {
  X(), void();   // OK
  void(), X();   // error: A<void> is instantiated with a field of
                 // type void
}
Although numeric literals can have extended integer types, user-defined literal operators cannot have a parameter of an extended integer type. This seems like an oversight.
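For example, if an implementation provides an extended integer type (shown here hypothetically as __int128), a literal might have that type, but a literal operator cannot take it as a parameter (a sketch):

int operator"" _w(unsigned long long);   // OK: one of the permitted parameter lists
// int operator"" _x(__int128);          // not permitted: an extended integer type is not
                                         // among the allowed parameter types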
According to the Standard (although not implemented this way in most implementations), the following code exhibits non-intuitive behavior:
struct T {
  operator short() const;
  operator int() const;
};
short s;
void f(const T& t) {
  s = t;   // surprisingly calls T::operator int() const
}
The reason for this choice is 13.6 [over.built] paragraph 18:
For every triple (L, VQ, R), where L is an arithmetic type, VQ is either volatile or empty, and R is a promoted arithmetic type, there exist candidate operator functions of the form
VQ L& operator=(VQ L&, R);
Because R is a "promoted arithmetic type," the second argument to the built-in assignment operator is int, causing the unexpected choice of conversion function.
Suggested resolution: Provide built-in assignment operators for the unpromoted arithmetic types.
Related to the preceding, but not resolved by the suggested resolution, is the following problem. Given:
struct T { operator int() const; operator double() const; };
I believe the standard requires the following assignment to be ambiguous (even though I expect that would surprise the user):
double x; void f(const T& t) { x = t; }
The problem is that both of these built-in operator=()s exist (13.6 [over.built] paragraph 18):
double& operator=(double&, int); double& operator=(double&, double);
Both are an exact match on the first argument and a user conversion on the second. There is no rule that says one is a better match than the other.
The compilers that I have tried (even in their strictest setting) do not give a peep. I think they are not following the standard. They pick double& operator=(double&, double) and use T::operator double() const.
I hesitate to suggest changes to overload resolution, but a possible resolution might be to introduce a rule that, for built-in operator= only, also considers the conversion sequence from the second to the first type. This would also resolve the earlier question.
It would still leave x += t etc. ambiguous -- which might be the desired behavior and is the current behavior of some compilers.
Notes from the 04/01 meeting:
The difference between initialization and assignment is disturbing. On the other hand, promotion is ubiquitous in the language, and this is the beginning of a very slippery slope (as the second report above demonstrates).
Additional note (August, 2010):
See issue 507 for a similar example involving comparison operators.
Consider the following example:
struct NullClass {
  template<typename T> operator T () { return 0 ; }
};
int main() {
  NullClass n;
  n==5;        // #1
  return 0;
}
The comparison at #1 is, according to the current Standard, ambiguous. According to 13.6 [over.built] paragraph 12, the candidates for operator==(L, R) include functions “for every pair of promoted arithmetic types,” so L could be either int or long, and the conversion operator template will provide an exact match for either.
Some implementations unambiguously choose the int candidate. Perhaps the overload resolution rules could be tweaked to prefer candidates in which L and R are the same type?
(See also issue 545.)
According to 14 [temp] paragraph 5,
Except that a function template can be overloaded either by (non-template) functions with the same name or by other function templates with the same name (14.8.3 [temp.over]), a template name declared in namespace scope or in class scope shall be unique in that scope.

3.3.10 [basic.scope.hiding] paragraph 2 agrees that only functions, not function templates, can hide a class name declared in the same scope:

A class name (9.1 [class.name]) or enumeration name (7.2 [dcl.enum]) can be hidden by the name of an object, function, or enumerator declared in the same scope.

However, 3.3 [basic.scope] paragraph 4 treats functions and template functions together in this regard:
Given a set of declarations in a single declarative region, each of which specifies the same unqualified name,
- they shall all refer to the same entity, or all refer to functions and function templates; or
- exactly one declaration shall declare a class name or enumeration name that is not a typedef name and the other declarations shall all refer to the same object or enumerator, or all refer to functions and function templates; in this case the class name or enumeration name is hidden
John Spicer: You should be able to take an existing program and replace an existing function with a function template without breaking unrelated parts of the program. In addition, all of the compilers I tried allow this usage (EDG, Sun, egcs, Watcom, Microsoft, Borland). I would recommend that function templates be handled exactly like functions for purposes of name hiding.
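As an illustration of the kind of replacement being described (a sketch, not part of the original submission; the stat example merely echoes the C-compatibility motivation mentioned below):

struct stat { };
int stat(struct stat *);            // OK today: the function hides the class

struct lookup { };
template<class T> int lookup(T *);  // the question: may a function template hide the
                                    // class in the same way?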
Martin O'Riordan: I don't see any justification for extending the purview of what is decidedly a hack, just for the sake of consistency. In fact, I think we should go further and in the interest of consistency, we should deprecate the hack, scheduling its eventual removal from the C++ language standard.
The hack is there to allow old C programs and especially the 'stat.h' file to compile with minimum effort (also several other Posix and X headers). People changing such older programs have ample opportunity to "do it right". Indeed, if you are adding templates to an existing program, you should probably be placing your templates in a 'namespace', so the issue disappears anyway. The lookup rules should be able to provide the behaviour you need without further hacking.
The Standard does not normatively define which > and >> tokens are to be taken as closing a template-argument-list; instead, 14.2 [temp.names] paragraph 3 uses the undefined and imprecise term “non-nested:”
When parsing a template-id, the first non-nested > is taken as the end of the template-argument-list rather than a greater-than operator. Similarly, the first non-nested >> is treated as two consecutive but distinct > tokens, the first of which is taken as the end of the template-argument-list and completes the template-id.
The (non-normative) footnote clarifies that
A > that encloses the type-id of a dynamic_cast, static_cast, reinterpret_cast or const_cast, or which encloses the template-arguments of a subsequent template-id, is considered nested for the purpose of this description.
Aside from the questionable wording of this footnote (e.g., in what sense does a single terminating character “enclose” anything, and is a nested template-id “subsequent?”) and the fact that it is non-normative, it does not provide a complete definition of what “nesting” is intended to mean. For example, is the first > in this putative template-id “nested” or not?
X<a ? b > c : d>
Additional note (January, 2014):
A similar problem exists for an operator> template:
struct S; template<void (*)(S, S)> struct X {}; void operator>(S, S); X<operator> > x;
Somehow the specification must be written to avoid taking the > token in the operator name as the end of the template argument list for X.
None of my compilers accept this, which surprised me a little. Is the base-to-derived member function conversion considered to be a runtime-only thing?
template <class D> struct B { template <class X> void f(X) {} template <class X, void (D::*)(X) = &B<D>::f<X> > struct row {}; }; struct D : B<D> { void g(int); row<int,&D::g> r1; row<char*> r2; };
John Spicer: This is not among the permitted conversions listed in 14.3.
I'm not sure there is a terribly good reason for that. Some of the template argument rules for external entities were made conservatively because of concerns about issues of mangling template argument names.
David Abrahams: I'd really like to see that restriction loosened. It is a serious inconvenience because there appears to be no way to supply a usable default in this case. Zero would be an OK default if I could use the function pointer's equality to zero as a compile-time switch to choose an empty function implementation:
template <bool x> struct tag {}; template <class D> struct B { template <class X> void f(X) {} template <class X, void (D::*pmf)(X) = 0 > struct row { void h() { h(tag<(pmf == 0)>(), pmf); } void h(tag<1>, ...) {} void h(tag<0>, void (D::*q)(X)) { /*something*/} }; }; struct D : B<D> { void g(int); row<int,&D::g> r1; row<char*> r2; };
But there appears to be no way to get that effect either. The result is that you end up doing something like:
template <class X, void (D::*pmf)(X) = 0 > struct row { void h() { if (pmf) /*something*/ } };
which invariably makes compilers warn that you're switching on a constant expression.
[Picked up by evolution group at October 2002 meeting.]
How are default template arguments handled with respect to template template parameters? Two separate questions have been raised:
template <class T, class U = int> class ARG { };
template <class X, template <class Y> class PARM>
void f(PARM<X>) { }   // specialization permitted?

void g() {
  ARG<int> x;         // actually ARG<int, int>
  f(x);               // does ARG (2 parms, 1 with default)
                      // match PARM (1 parm)?
}

Template template parameters are deducible (14.8.2.5 [temp.deduct.type] paragraph 9), but 14.3.3 [temp.arg.template] does not specify how matching is done.
Jack Rouse: I implemented template template parameters assuming template signature matching is analogous to function type matching. This seems like the minimum reasonable implementation. The code in the example would not be accepted by this compiler. However, template default arguments are compile time entities so it seems reasonable to relax the matching rules to allow cases like the one in the example. But I would consider this to be an extension to the language.
Herb Sutter: An open issue in the LWG is that the standard doesn't explicitly permit or forbid implementations' adding additional template-parameters to those specified by the standard, and the LWG may be leaning toward explicitly permitting this. [Under this interpretation,] if the standard is ever modified to allow additional template-parameters, then writing "a template that takes a standard library template as a template template parameter" won't be just ugly because you have to mention the defaulted parameters; it would not be (portably) possible at all except possibly by defining entire families of overloaded templates to account for all the possible numbers of parameters vector<> (or anything else) might actually have. That seems unfortunate.
template <template <class T, class U = int> class PARM> class C { PARM<int> pi; };
Jack Rouse: I decided they could not in the compiler I support. This continues the analogy with function type matching. Also, I did not see a strong need to allow default arguments in this context.
A class template used as a template template argument can have default template arguments from its declarations. How are the two sources of default arguments to be reconciled? The default arguments from the template template formal could override. But it could be confusing if a template-id using the argument template, ARG<int>, behaves differently from a template-id using the template formal name, FORMAL<int>.
Rationale (10/99): Template template parameters are intended to be handled analogously to function parameters. Thus the number of parameters in a template template argument must match the number of parameters in a template template parameter, regardless of whether any of those parameters have default arguments or not. Default arguments are allowed for the parameters of a template template parameter, and those default arguments alone will be considered in a specialization of the template template parameter within a template definition; any default arguments for the parameters of a template template argument are ignored.
Note (Mark Mitchell, February, 2006):
Perhaps it is already obvious to all, but it seems worth noting that this extension would change the meaning of conforming programs:
struct Dense { static const unsigned int dim = 1; }; template <template <typename> class View, typename Block> void operator+(float, View<Block> const&); template <typename Block, unsigned int Dim = Block::dim> struct Lvalue_proxy { operator float() const; }; void test_1d (void) { Lvalue_proxy<Dense> p; float b; b + p; }
If Lvalue_proxy is allowed to bind to View, then the template operator+ will be used to perform addition; otherwise, Lvalue_proxy's implicit conversion to float, followed by the built-in addition on floats will be used.
Note (March, 2008):
The Evolution Working Group has accepted the intent of this issue and referred it to CWG for action (not for C++0x). See paper J16/07-0033 = WG21 N2173.
Notes from the June, 2008 meeting:
The CWG decided to take no action on this issue until an interested party produces a paper with analysis and a proposal.
It is not clear what should happen for an example like:
template<typename T> struct A { class B { class C {}; }; }; class X { static int x; template <typename T> friend class A<T>::B::C; }; template<> struct A<int> { typedef struct Q B; }; struct Q { class C { int f() { return X::x; } }; };
It appears that the friend template matches Q::C, because that class is also A<int>::B::C, but neither GCC nor EDG allow this code (saying X::x is inaccessible). (Clang doesn't support friend template declarations with a dependent scope.)
A strict reading of 14.5.4 [temp.friend] paragraph 5 might suggest that the friend declaration itself is ill-formed, because it does not declare a member of a class template, but I can't find any compiler that implements template friends that way.
During the discussion of issue 1918, it was decided that the last part of the issue should be split off into a separate issue. According to 14.5.4 [temp.friend] paragraph 5,
A member of a class template may be declared to be a friend of a non-template class.
Does this make the example from issue 1918,
template<typename T> struct A { class B { class C {}; }; }; class X { static int x; template <typename T> friend class A<T>::B::C; }; template<> struct A<int> { typedef struct Q B; }; struct Q { class C { int f() { return X::x; } }; };
ill-formed because the friend declaration does not refer to a member of a class template? This does not appear to be the interpretation chosen by most implementations.
The Standard does not appear to specify clearly the effect of a partial specialization of a member template of a class template. For example:
template<class T> struct B {
  template<class U> struct A {       // #1
    void h() {}
  };
  template<class U> struct A<U*> {   // #2
    void f() {}
  };
};

template<> template<class U> struct B<int>::A {   // #3
  void g() {}
};

void q(B<int>::A<char*>& p) {
  p.f();   // #4
}
The explicit specialization at #3 replaces the primary member template #1 of B<int>; however, it is not clear whether the partial specialization #2 should be considered to apply to the explicitly-specialized member template of A<int> (thus allowing the call to p.f() at #4) or whether the partial specialization will be used only for specializations of B that are implicitly instantiated (meaning that #4 could call p.g() but not p.f()).
I get the following error diagnostic [from the EDG front end]:
line 8: error: function template "example<T>::foo<R,A>(A)" has already been declared
      R foo(const A);
        ^

when compiling this piece of code:
template<class T> struct example {
  template<class R, class A>   // 1st member template
  R foo(A);
  template<class R, class A>   // 2nd member template
  const R foo(A&);
  template<class R, class A>   // 3rd member template
  R foo(const A);
};

/* template<> template<> int example<char>::foo(int&); */

int main() {
  int (example<char>::* pf)(int&) = &example<char>::foo;
}
The implementation complains that
template<class R, class A>   // 1st member template
R foo(A);

template<class R, class A>   // 3rd member template
R foo(const A);

cannot be overloaded, and I don't see any reason for that, since it is function template specializations that are treated like ordinary non-template functions, meaning that the transformation of a parameter-declaration-clause into the corresponding parameter-type-list is applied to specializations (when determining their type) and not to function templates.
What makes me think so is the contents of 14.5.6.1 [temp.over.link] and the following sentence from 14.8.2.1 [temp.deduct.call] "If P is a cv-qualified type, the top level cv-qualifiers of P are ignored for type deduction". If the transformation was to be applied to function templates, then there would be no reason for having that sentence in 14.8.2.1 [temp.deduct.call].
14.8.2.2 [temp.deduct.funcaddr], which my example is based upon, says nothing about ignoring the top level cv-qualifiers of the function parameters of the function template whose address is being taken.
As a result, I expect that template argument deduction will fail for the 2nd and 3rd member templates and that the 1st one will be used for the instantiation of the specialization.
Although 15.4 [except.spec] paragraph 3 says,
Two exception-specifications are compatible if:
...
both have the form noexcept(constant-expression) and the constant-expressions are equivalent, or
...
it is not clear whether “equivalent” in this context should be taken as a reference to the definition of equivalent given in 14.5.6.1 [temp.over.link] paragraph 5:
Two expressions involving template parameters are considered equivalent if two function definitions containing the expressions would satisfy the one definition rule (3.2 [basic.def.odr]), except that the tokens used to name the template parameters may differ as long as a token used to name a template parameter in one expression is replaced by another token that names the same template parameter in the other expression.
since the context there is expressions that appear in function template parameters and return types.
There is implementation variance on this question.
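A case that turns on the choice of definition might look like this (an illustrative sketch):

template<int N> void f() noexcept(N > 0);
template<int M> void f() noexcept(M > 0);   // the constant-expressions are equivalent under
                                            // 14.5.6.1 [temp.over.link] but are not identical
                                            // token sequences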
This was split off from issue 214 at the April 2003 meeting.
Nathan Sidwell: John Spicer's proposed resolution does not make the following well-formed.
template <typename T> int Foo (T const *) {return 1;}          //#1
template <unsigned I> int Foo (char const (&)[I]) {return 2;}  //#2
int main () {
  return Foo ("a") != 2;
}
Both #1 and #2 can deduce the "a" argument, #1 deduces T as char and #2 deduces I as 2. However, neither is more specialized because the proposed rules do not have any array to pointer decay.
#1 is only deducible because of the rules in 14.8.2.1 [temp.deduct.call] paragraph 2 that decay array and function type arguments when the template parameter is not a reference. Given that such behaviour happens in deduction, I believe there should be equivalent behaviour during partial ordering. #2 should be resolved as more specialized than #1. The following alteration to the proposed resolution of DR214 will do that.
Insert before,
the following
For the example above, this change results in deducing 'T const *' against 'char const *' in one direction (which succeeds), and 'char [I]' against 'T const *' in the other (which fails).
John Spicer: I don't consider this a shortcoming of my proposed wording, as I don't think this is part of the current rules. In other words, the resolution of 214 might make it clearer how this case is handled (i.e., clearer that it is not allowed), but I don't believe it represents a change in the language.
I'm not necessarily opposed to such a change, but I think it should be reviewed by the core group as a related change and not a defect in the proposed resolution to 214.
Notes from the October 2003 meeting:
There was some sentiment that it would be desirable to have this case ordered, but we don't think it's worth spending the time to work on it now. If we look at some larger partial ordering changes at some point, we will consider this again.
14.5.6.2 [temp.func.order] paragraph 3 says,
To produce the transformed template, for each type, non-type, or template template parameter (including template parameter packs (14.5.3 [temp.variadic]) thereof) synthesize a unique type, value, or class template respectively and substitute it for each occurrence of that parameter in the function type of the template.
The characteristics of the synthesized entities, and how they are determined, are not specified. For example, members of a dependent type referred to in non-deduced contexts are not specified to exist, even though the transformed function type would be invalid in their absence.
Example 1:
template<typename T, typename U> struct A;

template<typename T> void foo(A<T, typename T::u> *) { }       // #1
// synthetic T1 has member T1::u

template <typename T> void foo(A<T, typename T::u::v> *) { }   // #2
// synthetic T2 has member T2::u and member T2::u::v

// T in #1 deduces to synthetic T2 in partial ordering;
// deduced A for the parameter is A<T2, T2::u> * -- this is not necessarily compatible
// with A<T2, T2::u::v> * and it does not need to be. See Note 1. The effect is that
// (in the call below) the compatibility of B::u and B::u::v is respected.
// T in #2 cannot be successfully deduced in partial ordering from A<T1, T1::u> *;
// invalid type T1::u::v will be formed when T1 is substituted into non-deduced contexts.

struct B { struct u { typedef u v; }; };

int main() {
  foo((A<B, B::u> *)0);   // calls #2
}
Note 1: Template argument deduction is an attempt to match a P and a deduced A; however, template argument deduction is not specified to fail if the P and the deduced A are incompatible. This may occur in the presence of non-deduced contexts. Notwithstanding the parenthetical statement in 14.8.2.4 [temp.deduct.partial] paragraph 9, template argument deduction may succeed in determining a template argument for every template parameter while producing a deduced A that is not compatible with the corresponding P.
Example 2:
template <typename T, typename U, typename V> struct A;

template <typename T> void foo(A<T, struct T::u, struct T::u::u> *);   // #2.1
// synthetic T1 has member non-union class T1::u

template <typename T, typename U> void foo(A<T, U , U> *);             // #2.2
// synthetic T2 and U2 have no required properties

// T in #2.1 cannot be deduced in partial ordering from A<T2, U2, U2> *;
// invalid types T2::u and T2::u::u will be formed when T2 is substituted in nondeduced contexts.
// T and U in #2.2 deduce to, respectively, T1 and T1::u from A<T1, T1::u, struct T1::u::u> * unless
// struct T1::u::u does not refer to the injected-class-name of the class T1::u (if that is possible).

struct B { struct u { }; };

int main() {
  foo((A<B, B::u, struct B::u::u> *)0);   // calls #2.1
}
It is, however, unclear to what extent an implementation will have to go to determine these minimal properties.
The current wording of the Standard does not permit repeated alias template declarations within a scope, but some current implementations allow it, presumably by analogy with typedef declarations. Should the Standard be changed to permit this usage?
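For example (a sketch of the usage in question):

typedef int I;
typedef int I;                      // repeated typedef declarations are permitted

template<class T> using Ptr = T*;
template<class T> using Ptr = T*;   // not permitted by the current wording, but accepted
                                    // by some implementations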
The Standard does not appear to specify whether a non-dependent reference to a template specialization in a template definition that is never instantiated causes the implicit instantiation of the referenced specialization.
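For example (an illustrative sketch, assuming this is the intended reading of the question):

template<class T> struct A {
  typename T::type t;   // instantiating A<int> would be ill-formed
};

template<class T> void f() {
  A<int> a;   // non-dependent reference to the specialization A<int> in a template
              // definition; if f is never instantiated, is A<int> implicitly
              // instantiated anyway?
}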
The standard prohibits a class template from having the same name as one of its template parameters (14.6.1 [temp.local] paragraph 4). This prohibits
template <class X> class X;

for the reason that the template name would hide the parameter, and such hiding is in general prohibited.
Presumably, we should also prohibit
template <template <class T> class T> struct A;

for the same reason.
Currently, members of nondependent base classes hide references to template parameters in the definition of a derived class template.
Consider the following example:
class B {
  typedef void *It;   // (1)
  // ...
};

class M: B {};

template<typename> struct X {};

template<typename It> struct S   // (2)
  : M, X<It> {                   // (3)
  S(It, It);                     // (4)
  // ...
};
As the C++ language currently stands, the name "It" in line (3) refers to the template parameter declared in line (2), but the name "It" in line (4) refers to the typedef in the private base class (declared in line (1)).
This situation is both unintuitive and a hindrance to sound software engineering. (See also the Usenet discussion at http://tinyurl.com/32q8d .) Among other things, it implies that the private section of a base class may change the meaning of the derived class, and (unlike other cases where such things happen) there is no way for the writer of the derived class to defend the code against such intrusion (e.g., by using a qualified name).
Changing this can break code that is valid today. However, such code would have to:
It has been suggested to make situations like these ill-formed. That solution is unattractive however because it still leaves the writer of a derived class template without defense against accidental name conflicts with base members. (Although at least the problem would be guaranteed to be caught at compile time.) Instead, since just about everyone's intuition agrees, I would like to see the rules changed to make class template parameters hide members of the same name in a base class.
See also issue 458.
Notes from the March 2004 meeting:
We have some sympathy for a change, but the current rules fall straightforwardly out of the lookup rules, so they're not “wrong.” Making private members invisible also would solve this problem. We'd be willing to look at a paper proposing that.
Additional discussion (April, 2005):
John Spicer: Base class members are more-or-less treated as members of the class, [so] it is only natural that the base [member] would hide the template parameter.
Daveed Vandevoorde: Are base class members really “more or less” members of the class from a lookup perspective? After all, derived class members can hide base class members of the same name. So there is some pretty definite boundary between those two sets of names. IMO, the template parameters should either sit between those two sets, or they should (for lookup purposes) be treated as members of the class they parameterize (I cannot think of a practical difference between those two formulations).
John Spicer: How is [hiding template parameters] different from the fact that namespace members can be hidden by private parts of a base class? The addition of int C to N::A breaks the code in namespace M in this example:
namespace N { class A { private: int C; }; } namespace M { typedef int C; class B : public N::A { void f() { C c; } }; }
Daveed Vandevoorde: C++ has a mechanism in place to handle such situations: qualified names. There is no such mechanism in place for template parameters.
Nathan Myers: What I see as obviously incorrect ... is simply that a name defined right where I can see it, and directly attached to the textual scope of B's class body, is ignored in favor of something found in some other file. I don't care that C1 is defined in A, I have a C1 right here that I have chosen to use. If I want A::C1, I can say so.
I doubt you'll find any regular C++ coder who doesn't find the standard behavior bizarre. If the meaning of any code is changed by fixing this behavior, the overwhelming majority of cases will be mysterious bugs magically fixed.
John Spicer: I have not heard complaints that this is actually a cause of problems in real user code. Where is the evidence that the status quo is actually causing problems?
In this example, the T2 that is found is the one from the base class. I would argue that this is natural because base class members are found as part of the lookup in class B:
struct A { typedef int T2; }; template <class T2> struct B : public A { typedef int T1; T1 t1; T2 t2; };
This rule that base class members hide template parameters was formalized about a dozen years ago because it fell out of the principle that base class members should be found at the same stage of lookup as derived class members, and that to do otherwise would be surprising.
Gabriel Dos Reis: The bottom line is that:
Unless presented with real major programming problems the current rules exhibit, I do not think the simple rule “scopes nest” needs a change that silently mutates program meaning.
Mike Miller: The rationale for the current specification is really very simple:
That's it. Because template parameters are not members, they are hidden by member names (whether inherited or not). I don't find that “bizarre,” or even particularly surprising.
I believe these rules are straightforward and consistent, so I would be opposed to changing them. However, I am not unsympathetic toward Daveed's concern about name hijacking from base classes. How about a rule that would make a program ill-formed if a direct or inherited member hides a template parameter?
Unless this problem is a lot more prevalent than I've heard so far, I would not want to change the lookup rules; making this kind of collision a diagnosable error, however, would prevent hijacking without changing the lookup rules.
Erwin Unruh: I have a different approach that is consistent and changes the interpretation of the questionable code. At present lookup is done in this sequence:
If we change this order to
it is still consistent in that no lookup is placed between the base class and the derived class. However, it introduces another inconsistency: now scopes do not nest the same way as curly braces nest — but base classes are already inconsistent this way.
Nathan Myers: This looks entirely satisfactory. If even this seems like too big a change, it would suffice to say that finding a different name by this search order makes the program ill-formed. Of course, a compiler might issue only a portability warning in that case and use the name found Erwin's way, anyhow.
Gabriel Dos Reis: It is a simple fact, even without templates, that a writer of a derived class cannot protect himself against declaration changes in the base class.
Richard Corden: If a change is to be made, then making it ill-formed is better than just changing the lookup rules.
    struct B {
      typedef int T;
      virtual void bar (T const & );
    };

    template <typename T>
    struct D : public B {
      virtual void bar (T const & );
    };

    template class D<float>;
I think changing the semantics of the above code silently would result in very difficult-to-find problems.
Mike Miller: Another case that may need to be considered in deciding on Erwin's suggestion or the “ill-formed” alternative is the treatment of friend declarations described in 3.4.1 [basic.lookup.unqual] paragraph 10:
    struct A {
      typedef int T;
      void f(T);
    };

    template<typename T> struct B {
      friend void A::f(T);    // Currently T is A::T
    };
Notes from the October, 2005 meeting:
The CWG decided not to consider a change to the existing rules at this time without a paper exploring the issue in more detail.
The definition of the current instantiation, given in 14.6.2.1 [temp.dep.type] paragraph 1, is phrased in terms of the meaning of a name (“A name refers to the current instantiation if it is...”); it does not define when a type is the current instantiation. Thus the interpretation of *this and of phrases like “member of a class that is the current instantiation” is not formally specified.
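For illustration (this example is not part of the issue as submitted), the question arises in code such as:

    template <typename T> struct A {
      int m;
      void f() {
        (*this).m = 0;   // Is the type of *this “the current instantiation”?
                         // If so, m can be resolved at definition time; if not,
                         // it would be a member of an unknown specialization.
      }
    };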
The specification of dependent types in 14.6.2.1 [temp.dep.type] is given in terms of names. However, one might consider some unnamed types as dependent. Consider the following example:
    template <typename T> struct A {
      struct { } obj;
      void foo() {
        bar(obj);    // lookup for bar when/where?
      }
    };

    void bar(...);

    int main() {
      A<int> a;
      a.foo();       // calls bar(...)?
    }
If the type of A::obj had a name, it would be dependent. However, the rationale for making nested types dependent is that they are subject to explicit specialization and thus not knowable at the point of the template definition. An unnamed type, as in this example, cannot be explicitly specialized and thus could be considered as a member of the current instantiation. Which treatment is intended?
Notes from the February, 2014 meeting:
There are other cases in which a named entity is dependent, even though it cannot be explicitly specialized. CWG felt that the most consistent rule would be to make all nested classes dependent, whether named or not.
The current wording of 14.6.4 [temp.dep.res] seems to assume that dependent names can only appear in the definition of a template:
In resolving dependent names, names from the following sources are considered:
Declarations that are visible at the point of definition of the template.
Declarations from namespaces associated with the types of the function arguments both from the instantiation context (14.6.4.1 [temp.point]) and from the definition context.
However, dependent names can occur in non-defining declarations of the template as well; for instance,
template<typename T> T foo(T, decltype(bar(T())));
bar needs to be looked up, even though there is no definition of foo in the translation unit.
Additional note (February, 2011):
The resolution of this issue can't simply replace the word “definition” with the word “declaration,” mutatis mutandis, because there can be multiple declarations in a translation unit (which isn't true of “the definition”). As a result, the issue was moved back to "open" status for further consideration.
Consider the following example:
    template<typename T> struct A {
      operator int() { return 0; }
      void f() {
        operator T();
      }
    };

    int main() {
      A<int> a;
      a.f();
    }
One might expect this to call operator int when instantiating. But since operator T is a dependent name, it is looked up by unqualified lookup only in the definition context, where it will find no declaration. Argument-dependent lookup will not find anything in the instantiation context either, so this code is ill-formed. If we change operator int() to operator T(), which is a seemingly unrelated change, the code becomes well-formed.
There is implementation variability on this point.
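For reference, the seemingly unrelated change mentioned above would read as follows (a sketch, not part of the issue text):

    template<typename T> struct A {
      operator T() { return 0; }   // was: operator int()
      void f() {
        operator T();              // now found by unqualified lookup in the
                                   // definition context, so the call is well-formed
      }
    };

    int main() {
      A<int> a;
      a.f();
    }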
A template instantiation can be “required” without there being a need for it at link time if it can appear in a constant expression:
    template <class T> struct A {
      static const T t;
    };
    template <class T> const T A<T>::t = 0;

    template <int I> struct B { };

    int a = sizeof(B<A<int>::t>);

    template <class T> constexpr T f(T t) { return t; }

    int b = sizeof(B<f(42)>);
It seems like it might be useful to define a term other than odr-used for this sort of use, which is like odr-used but doesn't depend on potentially evaluated context or lvalue-rvalue conversions.
Nikolay Ivchenkov:
Another possibility would be to introduce the extension described in the closed issue 1272 and then change 3.2 [basic.def.odr] paragraph 2 as follows:
An expression E is potentially evaluated [deleted text: “unless it is an unevaluated operand (Clause 5 [expr]) or a subexpression thereof.”] if and only if
E is a full-expression, or
E appears in a context where a constant expression is required, or
E is a direct subexpression of a potentially-evaluated expression and E is not an unevaluated operand.
An expression S is a direct subexpression of an expression E if and only if S and E are different expressions, S is a subexpression of E, and there is no expression X such that X differs from both S and E, S is a subexpression of X, and X is a subexpression of E. A variable whose name appears as a potentially-evaluated expression is odr-used unless it is an object that satisfies the requirements for appearing in a constant expression (5.19 [expr.const]) and the lvalue-to-rvalue conversion (4.1) is immediately applied...
[Example:
    #include <iostream>

    template <class T>
    struct X {
      static int const m = 1;
      static int const n;
    };

    template <class T>
    int const X<T>::n = 2;

    int main() {
      // X<void>::m is odr-used,
      // X<void>::m is defined implicitly
      std::cout << X<void>::m << std::endl;

      // X<void>::n is odr-used,
      // X<void>::n is defined explicitly
      std::cout << X<void>::n << std::endl;

      // OK (issue 712 is not relevant here)
      std::cout << (1 ? X<void>::m : X<void>::n) << std::endl;
    }

—end example]
(See also issues 712 and 1254.)
The Standard does not appear to specify the linkage of a template specialization. 14.7.1 [temp.inst] paragraph 11 does say,
Implicitly instantiated class and function template specializations are placed in the namespace where the template is defined.
which could be read as implying that the specialization has the same linkage as the template itself. Implementation practice seems to be that the weakest linkage of the template and the arguments is used for the specialization.
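A hedged illustration of that implementation practice (the names here are invented):

    namespace {
      struct Local { };                    // a type with internal linkage
    }

    template <typename T> void f(T) { }    // the template itself has external linkage

    void g() {
      f(Local());   // What linkage does the specialization f<Local> have?
                    // Implementations appear to give it internal linkage,
                    // i.e., the weakest linkage among the template and its
                    // arguments.
    }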
During the discussion of issue 1484, it was observed that the current rules do not adequately address indirect nested classes of class templates (i.e., member classes of member classes of class templates) in regard to their potential separate instantiation.
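A sketch (hypothetical) of the kind of indirect nested class in question:

    template <typename T> struct Outer {
      struct Mid {
        struct Inner {                 // an indirect nested class of Outer
          void f() { T t; (void)t; }
        };
      };
    };

    void use() {
      Outer<int>::Mid::Inner i;   // at what point, and under what rules, is
      i.f();                      // Inner instantiated separately from Outer<int>
                                  // and from Outer<int>::Mid?
    }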
14.7.2 [temp.explicit] defines an explicit instantiation as
Syntactically, that allows things like:
template int S<int>::i = 5, S<int>::j = 7;
which isn't what anyone actually expects. As far as I can tell, nothing in the standard explicitly forbids this, as written. Syntactically, this also allows:
template namespace N { void f(); }
although perhaps the surrounding context is enough to suggest that this is invalid.
Suggested resolution:
I think we should say:
[Steve Adamczyk: presumably, this should have template at the beginning.]
and then say that:
There are similar problems in 14.7.3 [temp.expl.spec]:
Here, I think we want:
with similar restrictions as above.
[Steve Adamczyk: This also needs to have template <> at the beginning, possibly repeated.]
According to 14.7.2 [temp.explicit] paragraph 10,
An entity that is the subject of an explicit instantiation declaration and that is also used in the translation unit shall be the subject of an explicit instantiation definition somewhere in the program; otherwise the program is ill-formed, no diagnostic required.
The term “used” is too vague and needs to be defined. In particular, “use” of a class template specialization as an incomplete type — to form a pointer, for instance — should not require the presence of an explicit instantiation definition elsewhere in the program.
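For example (an illustrative sketch, not taken from the issue text):

    template <typename T> struct S { T t; };

    extern template struct S<int>;   // explicit instantiation declaration

    S<int>* p = nullptr;   // S<int> is "used" only as an incomplete type here;
                           // must an explicit instantiation definition of S<int>
                           // appear elsewhere in the program?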
The note in paragraph 5 of 14.8.1 [temp.arg.explicit] makes clear that explicit template arguments cannot be supplied in invocations of constructors and conversion functions because they are called without using a name. However, there is nothing in the current wording of the Standard that makes declaring a constructor or conversion operator that is unusable because of nondeduced parameters (i.e., that would need to be specified explicitly) ill-formed. It would be a service to the programmer to diagnose this useless construct as early as possible.
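A minimal illustration (hypothetical) of the useless construct described:

    struct X {
      // T appears in no function parameter, so it can never be deduced, and
      // there is no syntax for supplying explicit template arguments to a
      // constructor call: this constructor template can never be used.
      template <typename T> X(int);
    };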
Nicolai Josuttis sent me an example like the following:
    template <typename RET, typename T1, typename T2>
    const RET& min (const T1& a, const T2& b)
    {
      return (a < b ? a : b);
    }

    template const int& min<int>(const int&, const int&);   // #1
    template const int& min(const int&, const int&);        // #2
Among the questions was whether explicit instantiation #2 is valid, where deduction is required to determine the type of RET.
The first thing I realized when researching this is that the standard does not really spell out the rules for deduction in declarative contexts (friend declarations, explicit specializations, and explicit instantiations). For explicit instantiations, 14.7.2 [temp.explicit] paragraph 2 does mention deduction, but it doesn't say which set of deduction rules from 14.8.2 [temp.deduct] should be applied.
Second, Nicolai pointed out that 14.7.2 [temp.explicit] paragraph 6 says
A trailing template-argument can be left unspecified in an explicit instantiation provided it can be deduced from the type of a function parameter (14.8.2 [temp.deduct]).
This prohibits cases like #2, but I believe this was not considered in the wording as there is no reason not to include the return type in the deduction process.
I think there may have been some confusion because the return type is excluded when doing deduction on a function call. But there are contexts where the return type is included in deduction, for example, when taking the address of a function template specialization.
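For instance, a sketch of the context mentioned, in which the return type does participate in deduction:

    template <typename R> R g();

    int (*pf)() = &g;   // deduction against the target type int (*)() includes
                        // the return type, so R is deduced to be int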
Suggested resolution:
Andrei Iltchenko points out that the standard has no wording that defines how to determine which template is specialized by an explicit specialization of a function template. He suggests "template argument deduction in such cases proceeds in the same way as when taking the address of a function template, which is described in 14.8.2.2 [temp.deduct.funcaddr]."
John Spicer points out that the same problem exists for all similar declarations, i.e., friend declarations and explicit instantiation directives. Finding a corresponding placement operator delete may have a similar problem.
John Spicer: There are two aspects of "determining which template" is referred to by a declaration: determining the function template associated with the named specialization, and determining the values of the template arguments of the specialization.
    template <class T> void f(T);     // #1
    template <class T> void f(T*);    // #2
    template <> void f(int*);
In other words, which f is being specialized (#1 or #2)? And then, what are the deduced template arguments?
14.5.6.2 [temp.func.order] does say that partial ordering is done in contexts such as this. Is this sufficient, or do we need to say more about how the function template to be specialized is selected?
14.8.2 [temp.deduct] probably needs a new section to cover argument deduction for cases like this.
14.8.2 [temp.deduct] is all about function types, but these rules also apply, e.g., when matching a class template partial specialization. We should add a note stating that we could be doing substitution into the template-id for a class template partial specialization.
Additional note (August 2008):
According to 14.5.5.1 [temp.class.spec.match] paragraph 2, argument deduction is used to determine whether a given partial specialization matches a given argument list. However, there is nothing in 14.5.5.1 [temp.class.spec.match] nor in 14.8.2 [temp.deduct] and its subsections that describes exactly how argument deduction is to be performed in this case. It would seem that more than just a note is required to clarify this processing.
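The kind of processing in question, shown with an invented example:

    template <typename T> struct A { };        // primary template
    template <typename T> struct A<T*> { };    // partial specialization

    A<int*> a;   // determining that the partial specialization matches requires
                 // deducing T (here, T = int) against the argument list <int*>,
                 // yet the deduction rules for this case are not spelled out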
Consider the following program:
    template <typename T> int ref (T&)                 { return 0; }
    template <typename T> int ref (const T&)           { return 1; }
    template <typename T> int ref (const volatile T&)  { return 2; }
    template <typename T> int ref (volatile T&)        { return 4; }

    template <typename T> int ptr (T*)                 { return 0; }
    template <typename T> int ptr (const T*)           { return 8; }
    template <typename T> int ptr (const volatile T*)  { return 16; }
    template <typename T> int ptr (volatile T*)        { return 32; }

    void foo() {}

    int main() {
      return ref(foo) + ptr(&foo);
    }
The Standard appears to specify that the value returned from main is 2. The reason for this result is that references and pointers are handled differently in template argument deduction.
For the reference case, 14.8.2.1 [temp.deduct.call] paragraph 3 says that “If P is a reference type, the type referred to by P is used for type deduction.” Because of issue 295, all four of the types for the ref function parameters are the same, with no cv-qualification; overload resolution does not find a best match among the parameters and thus the most-specialized function is selected.
For the pointer type, argument deduction does not get as far as forming a cv-qualified function type; instead, argument deduction fails in the cv-qualified cases because of the cv-qualification mismatch, and only the cv-unqualified version of ptr survives as a viable function.
I think the choice of ignoring cv-qualifiers in the reference case but not the pointer case is very troublesome. The reason is that when one considers function objects as function parameters, it introduces a semantic difference whether the function parameter is declared a reference or a pointer. In all other contexts, it does not matter: a function name decays to a pointer and the resulting semantics are the same.
(See also issue 1584.)
The current partial ordering rules produce surprising results in the presence of reference collapsing.
Since partial ordering is currently based solely on the signatures of the function templates, it does not take into account that, after substitution of the template type parameter, the two templates in the example below do not differ.
Especially unsettling is that the allegedly “more specialized” template (#2) is not even a candidate in the first call, where template argument deduction fails for it despite the absence of non-deduced contexts.
    template <typename T> void foo(T&&);            // #1
    template <typename T> void foo(volatile T&&);   // #2

    int main(void) {
      const int x = 0;
      foo(x);                 // calls #1 with T='const int &'
      foo<const int &>(x);    // calls #2
    }
It is not clear how an example like the following is to be handled:
    template <typename U> struct A {
      template <typename V> operator A<V>();
    };

    template <typename T> void foo(A<void (T)>);
    void foo();

    int main() {
      A<void (int, char)> a;
      foo<int>(a);
      foo(a);    // deduces T to be int
    }
In subclause 14.8.2.5 [temp.deduct.type] paragraph 10, deduction from a function type considers P/A pairs from the parameter-type-list only where the "P" function type has a parameter. Deduction is not specified to fail if there are additional parameters in the corresponding "A" function type.
Notes from the September, 2013 meeting:
CWG agreed that this example should not be accepted. The existing rules seem to cover this case (deduction is not specified to “succeed,” so it's a reasonable conclusion that it fails), but it might be helpful to be clearer.
The specification of std::current_exception() in 18.8.5 [propagation] allows either referring to the exception object itself or to a copy thereof, implying that the exception object must be copyable. However, the specification of throw-expression allows throwing objects that cannot be copied, only moved. Presumably the requirements should include a non-deleted accessible copy constructor that is odr-used by a throw-expression, even if the object being thrown is moved to the exception object.
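A sketch of the kind of code affected, assuming a user-defined move-only exception type:

    #include <exception>

    struct E {
      E() = default;
      E(E&&) = default;
      E(const E&) = delete;   // movable but not copyable
    };

    void f() {
      try {
        throw E();   // well-formed: the exception object is move-constructed
      } catch (...) {
        // current_exception() is permitted to copy the exception object, but
        // E has no usable copy constructor; hence the suggested requirement above.
        std::exception_ptr p = std::current_exception();
      }
    }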
Additional note, February, 2014:
This issue was referred to CWG by EWG at the September, 2013 meeting but was overlooked at that time.
The resolution of issue 1351 results in the following:
void (*p)() throw(int);
void (&r)() throw(int) = *p; // ill-formed
The reason is that the set of potential exceptions for an indirection is “any” instead of maintaining the known potential exceptions of the operand. It would seem to be reasonable to propagate the set in such cases.
A similar issue arises with function template argument deduction:
template<typename T> T& f(T* p);
void (*p)() throw(int);
void (&r)() throw(int) = f(p); // ill-formed
When a function throws an exception that is not in its exception-specification, std::unexpected() is called. According to 15.5.2 [except.unexpected] paragraph 2,
If [std::unexpected()] throws or rethrows an exception that the exception-specification does not allow then the following happens: If the exception-specification does not include the class std::bad_exception (18.8.2 [bad.exception]) then the function std::terminate() is called, otherwise the thrown exception is replaced by an implementation-defined object of the type std::bad_exception, and the search for another handler will continue at the call of the function whose exception-specification was violated.
The “replaced by” wording is imprecise and undefined. For example, does this mean that the destructor is called for the existing exception object, or is it simply abandoned? Is the replacement in situ, so that a pointer to the existing exception object will now point to the std::bad_exception object?
Mike Miller: The call to std::unexpected() is not described as analogous to invoking a handler, but if it were, that would resolve this question; it is clearly specified what happens to the previous exception object when a new exception is thrown from a handler (15.1 [except.throw] paragraph 4).
This approach would also clarify other questions that have been raised regarding the requirements for stack unwinding. For example, 15.5.1 [except.terminate] paragraph 2 says that
In the situation where no matching handler is found, it is implementation-defined whether or not the stack is unwound before std::terminate() is called.
This requirement could be viewed as in conflict with the statement in 15.5.2 [except.unexpected] paragraph 1 that
If a function with an exception-specification throws an exception that is not listed in the exception-specification, the function std::unexpected() is called (D.11 [exception.unexpected]) immediately after completing the stack unwinding for the former function.
If it is implementation-defined whether stack unwinding occurs before calling std::terminate() and std::unexpected() is called only after doing stack unwinding, does that mean that it is implementation-defined whether std::unexpected() is called if there is ultimately no handler found?
Again, if invoking std::unexpected() were viewed as essentially invoking a handler, the answer to this would be clear, because unwinding occurs before invoking a handler.
According to 16.1 [cpp.cond] paragraph 4,
The resulting tokens comprise the controlling constant expression which is evaluated according to the rules of 5.19 [expr.const] using arithmetic that has at least the ranges specified in 18.3 [support.limits], except that all signed and unsigned integer types act as if they have the same representation as, respectively, intmax_t or uintmax_t (_N3035_.18.4.2 [stdinth]). This includes interpreting character literals, which may involve converting escape sequences into execution character set members.
Ordinary character literals with a single c-char have the type char, which is neither a signed nor an unsigned integer type. Although 4.5 [conv.prom] paragraph 1 is clear that char values promote to int, regardless of whether the implementation treats char as having the values of signed char or unsigned char, 16.1 [cpp.cond] paragraph 4 isn't clear on whether character literals should be treated as signed or unsigned values. In C99, such literals have type int, so the question does not arise. If an implementation in which plain char has the values of unsigned char were to treat character literals as unsigned, an expression like '0'-'1' would thus have different values in C and C++, namely -1 in C and some large unsigned value in C++.
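For instance (illustrative only):

    #if '0' - '1' < 0
      // C, and C++ implementations that evaluate the literals as signed values,
      // take this branch: '0' - '1' is -1.
    #else
      // An implementation that treated the character literals as unsigned in the
      // controlling expression would take this branch instead.
    #endif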
Given the following input,
    #define F(A, B, C) A ## x.B ## y.C ## z
    #define STRINGIFY(x) #x
    #define EXPAND_AND_STRINGIFY(x) STRINGIFY(x)

    char v[] = EXPAND_AND_STRINGIFY(F(a, b, c));
there is implementation variance in the value of v: some produce the string "ax.by.cz" and others produce the string "ax. by. cz". Although 16.3.2 [cpp.stringize] paragraph 2 is explicit in its treatment of leading and trailing white space, it is not clear whether there is latitude for inserting spaces between tokens, as some implementations do, since the description otherwise is written solely in terms of preprocessing tokens. There may be cases in which such spaces would be needed to preserve the original tokenization, but it is not clear whether the result of stringization needs to produce something that would lex to the same tokens.
Notes from the April, 2013 meeting:
Because the preprocessor specification is primarily copied directly from the C Standard, this issue has been referred to the C liaison for consultation with WG14.
It is not clear from the Standard what the result of the following example should be:
    #define NIL(xxx) xxx
    #define G_0(arg) NIL(G_1)(arg)
    #define G_1(arg) NIL(arg)

    G_0(42)
The relevant text from the Standard is found in 16.3.4 [cpp.rescan] paragraph 2:
If the name of the macro being replaced is found during this scan of the replacement list (not including the rest of the source file's preprocessing tokens), it is not replaced. Further, if any nested replacements encounter the name of the macro being replaced, it is not replaced. These nonreplaced macro name preprocessing tokens are no longer available for further replacement even if they are later (re)examined in contexts in which that macro name preprocessing token would otherwise have been replaced.
The sequence of expansion of G_0(42) is as follows:
    G_0(42)
    NIL(G_1)(42)
    G_1(42)
    NIL(42)
The question is whether the use of NIL in the last line of this sequence qualifies for non-replacement under the cited text. If it does, the result will be NIL(42). If it does not, the result will be simply 42.
The original intent of the J11 committee in this text was that the result should be 42, as demonstrated by the original pseudo-code description of the replacement algorithm provided by Dave Prosser, its author. The English description, however, omits some of the subtleties of the pseudo-code and thus arguably gives an incorrect answer for this case.
Suggested resolution (Mike Miller): Replace the cited paragraph with the following:
As long as the scan involves only preprocessing tokens from a given macro's replacement list, or tokens resulting from a replacement of those tokens, an occurrence of the macro's name will not result in further replacement, even if it is later (re)examined in contexts in which that macro name preprocessing token would otherwise have been replaced.
Once the scan reaches the preprocessing token following a macro's replacement list — including as part of the argument list for that or another macro — the macro's name is once again available for replacement. [Example:
    #define NIL(xxx) xxx
    #define G_0(arg) NIL(G_1)(arg)
    #define G_1(arg) NIL(arg)

    G_0(42)    // result is 42, not NIL(42)

The reason that NIL(42) is replaced is that (42) comes from outside the replacement list of NIL(G_1), hence the occurrence of NIL within the replacement list for NIL(G_1) (via the replacement of G_1(42)) is not marked as nonreplaceable. —end example]
(Note: The resolution of this issue must be coordinated with J11/WG14.)
Notes (via Tom Plum) from April, 2004 WG14 Meeting:
Back in the 1980's it was understood by several WG14 people that there were tiny differences between the "non-replacement" verbiage and the attempts to produce pseudo-code. The committee's decision was that no realistic programs "in the wild" would venture into this area, and trying to reduce the uncertainties is not worth the risk of changing conformance status of implementations or programs.
C99 is very clear that a #error directive causes a translation to fail: Clause 4 paragraph 4 says,
The implementation shall not successfully translate a preprocessing translation unit containing a #error preprocessing directive unless it is part of a group skipped by conditional inclusion.
C++, on the other hand, simply says that a #error directive “renders the program ill-formed” (16.5 [cpp.error]), and the only requirement for an ill-formed program is that a diagnostic be issued; the translation may continue and succeed. (Noted in passing: if this difference between C99 and C++ is addressed, it would be helpful for synchronization purposes in other contexts as well to introduce the term “preprocessing translation unit.”)
According to 16.6 [cpp.pragma] paragraph 1, the effect of a #pragma is to cause
the implementation to behave in an implementation-defined manner. The behavior might cause translation to fail or cause the translator or the resulting program to behave in a non-conforming manner.
It should be clarified that the extent of the non-conformance is limited to the implementation-defined behavior.
The specification of how the string-literal in a _Pragma operator is handled does not deal with the new kinds of string literals. 16.9 [cpp.pragma.op] says,
The string literal is destringized by deleting the L prefix, if present, deleting the leading and trailing double-quotes, replacing each escape sequence...
The various other prefixes should either be handled or prohibited.
Additional note (October, 2013):
If raw string literals are supported, the question of how to handle line splicing is relevant. The wording says that “the characters are processed through translation phase 3,” which is a bit ambiguous as to whether that includes phases 1 and 2 or not. It would be better to be explicit and say that the processing of phase 3 or of phases 1 through 3 is applied.
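For illustration of the prefixes in question (the pragma text “once” is only a placeholder; these lines are not from the issue text):

    _Pragma("once")        // a plain string literal: destringization is specified
    _Pragma(L"once")       // the L prefix is the only one the wording mentions
    _Pragma(u8"once")      // u8/u/U prefixes: should these be handled or prohibited?
    _Pragma(R"(once)")     // a raw string literal: after destringization, does
                           // "processed through translation phase 3" include
                           // phases 1 and 2 (e.g., line splicing) as well?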
Some new features of C++ introduce incompatibilities not only with previous versions of C++ but also with C; however, the organization of Annex C [diff] makes it difficult to specify that a given feature is incompatible with both languages, and the practice has been only to document the C++ incompatibilities. Some means of specifying both sets of incompatibilities should be found, hopefully without excessive duplication between the C and C++ sections.
The description of incompatibilities with C in Annex C.1 [diff.iso] refers to C89, but there are a number of new features in C99 that should be covered.
According to 1.10 [intro.multithread] paragraph 24,
The implementation may assume that any thread will eventually do one of the following:
terminate,
make a call to a library I/O function,
access or modify a volatile object, or
perform a synchronization operation or an atomic operation.
[Note: This is intended to allow compiler transformations such as removal of empty loops, even when termination cannot be proven. —end note]
Some programmers find this liberty afforded to implementations to be disadvantageous; see this blog post for a discussion of the subject.
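The canonical example of the concern (a sketch):

    int main() {
      // This loop does not terminate, performs no I/O, accesses no volatile
      // object, and performs no synchronization or atomic operation.  Under the
      // wording quoted above, the implementation may assume that the thread will
      // eventually terminate, which in practice allows the loop to be removed.
      while (true) { }
    }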
According to 1.10 [intro.multithread] paragraph 9,
An evaluation A carries a dependency to an evaluation B if
the value of A is used as an operand of B, unless:
...
A is the left operand of a built-in logical AND (&&, see 5.14 [expr.log.and]) or logical OR (||, see 5.15 [expr.log.or]) operator, or
...
The intent is that this does not apply to the second operands of such operators if the first operand is such that they are not evaluated, but the wording is not clear to that effect. (A similar question applies to the non-selected operand of the conditional operator ?:.)
Regarding initialization of a block-scope static variable, 6.7 [stmt.dcl] paragraph 4 says,
If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.
This specification does not use the terminology of 1.10 [intro.multithread], so the meaning of “wait” is not clear. For example, will a concurrent thread that “waited” see (in the sense of happens-before) the result of the initialization (including side effects caused during the initialization)?
Perhaps the “synchronizes-with” terminology could be used here.
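A sketch of the question, where compute() is a hypothetical function with observable side effects:

    int compute();   // hypothetical: performs observable side effects

    int get() {
      // If a second thread calls get() while another is executing the
      // initialization, it must "wait" for the initialization to complete.
      // Does that waiting thread then see (in the happens-before sense) the
      // side effects performed by compute()?
      static int cache = compute();
      return cache;
    }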