Document number: | PL22.16/11-0006 = WG21 N3236 |
Date: | 2011-02-28 |
Project: | Programming Language C++ |
Reference: | ISO/IEC IS 14882:2003 |
Reply to: | William M. Miller |
| Edison Design Group, Inc. |
| wmm@edg.com |
This document contains the C++ core language issues on which the Committee (J16 + WG21) has not yet acted, that is, issues with status "Ready," "Tentatively Ready," "Review," "Drafting," and "Open."
This document is part of a group of related documents that together describe the issues that have been raised regarding the C++ Standard. The other documents in the group are:
Section references in this document reflect the section numbering of document PL22.16/10-0215 = WG21 N3225.
The purpose of these documents is to record the disposition of issues that have come before the Core Language Working Group of the ANSI (INCITS PL22.16) and ISO (WG21) C++ Standard Committee.
Some issues represent potential defects in the ISO/IEC IS 14882:2003 document and corrected defects in the earlier ISO/IEC 14882:1998 document; others refer to text in the working draft for the next revision of the C++ language, informally known as C++0x, and not to any Standard text. Issues are not necessarily formal ISO Defect Reports (DRs). While some issues will eventually be elevated to DR status, others will be disposed of in other ways. (See Issue Status below.)
The most current public version of this document can be found at http://www.open-std.org/jtc1/sc22/wg21. Requests for further information about these documents should include the document number, reference ISO/IEC 14882:2003, and be submitted to the InterNational Committee for Information Technology Standards (INCITS), 1250 Eye Street NW, Suite 200, Washington, DC 20005, USA.
Information regarding how to obtain a copy of the C++ Standard, join the Standard Committee, or submit an issue can be found in the C++ FAQ at http://www.comeaucomputing.com/csc/faq.html. Public discussion of the C++ Standard and related issues occurs on newsgroup comp.std.c++.
Issues progress through various statuses as the Core Language Working Group and, ultimately, the full PL22.16 and WG21 committees deliberate and act. For ease of reference, issues are grouped in these documents by their status. Issues have one of the following statuses:
Open: The issue is new or the working group has not yet formed an opinion on the issue. If a Suggested Resolution is given, it reflects the opinion of the issue's submitter, not necessarily that of the working group or the Committee as a whole.
Drafting: Informal consensus has been reached in the working group and is described in rough terms in a Tentative Resolution, although precise wording for the change is not yet available.
Review: Exact wording of a Proposed Resolution is now available for an issue on which the working group previously reached informal consensus.
Ready: The working group has reached consensus that the issue is a defect in the Standard, the Proposed Resolution is correct, and the issue is ready to forward to the full Committee for ratification as a proposed defect report.
Tentatively Ready: Like "ready" except that the resolution was produced and approved by a subset of the working group membership between meetings. Persons not participating in these between-meeting activities are encouraged to review such resolutions carefully and to alert the working group to any problems that may be found.
DR: The full Committee has approved the item as a proposed defect report. The Proposed Resolution in an issue with this status reflects the best judgment of the Committee at this time regarding the action that will be taken to remedy the defect; however, the current wording of the Standard remains in effect until such time as a Technical Corrigendum or a revision of the Standard is issued by ISO.
TC1: A DR issue included in Technical Corrigendum 1. TC1 is a revision of the Standard issued in 2003.
CD1: A DR issue not resolved in TC1 but included in Committee Draft 1. CD1 was advanced for balloting at the September, 2008 WG21 meeting.
CD2: A DR issue not resolved in CD1 but included in the Final Committee Draft advanced for balloting at the March, 2010 WG21 meeting.
WP: A DR issue whose resolution is reflected in the current Working Paper. The Working Paper is a draft for a future version of the Standard.
Dup: The issue is identical to or a subset of another issue, identified in a Rationale statement.
NAD: The working group has reached consensus that the issue is not a defect in the Standard. A Rationale statement describes the working group's reasoning.
Extension: The working group has reached consensus that the issue is not a defect in the Standard but is a request for an extension to the language. The working group expresses no opinion on the merits of an issue with this status; however, the issue will be maintained on the list for possible future consideration as an extension proposal.
Concepts: The issue relates to the “Concepts” proposal that was removed from the working paper at the Frankfurt (July, 2009) meeting and hence is no longer under consideration.
According to 3.4.5 [basic.lookup.classref] paragraph 1,
In a class member access expression (5.2.5 [expr.ref]), if the . or -> token is immediately followed by an identifier followed by a <, the identifier must be looked up to determine whether the < is the beginning of a template argument list (14.2 [temp.names]) or a less-than operator. The identifier is first looked up in the class of the object expression. If the identifier is not found, it is then looked up in the context of the entire postfix-expression and shall name a class template. If the lookup in the class of the object expression finds a template, the name is also looked up in the context of the entire postfix-expression and
if the name is not found, the name found in the class of the object expression is used, otherwise
if the name is found in the context of the entire postfix-expression and does not name a class template, the name found in the class of the object expression is used, otherwise
if the name found is a class template, it shall refer to the same entity as the one found in the class of the object expression, otherwise the program is ill-formed.
This makes the following ill-formed:
#include <set>
using std::set;

struct X {
  template <typename T> void set(const T& value);
};

void foo() {
  X x;
  x.set<double>(3.2);
}
That's confusing and unnecessary. The compiler has already done the lookup in X's scope, and the obviously-correct resolution is that one, not the identifier from the postfix-expression's scope. Issue 305 fixed a similar problem for destructor names but missed member functions.
Suggested resolution: Delete the end of paragraph 1, starting with “If the lookup in the class...” and including all three bullets.
Proposed resolution (November, 2010):
Change 3.4.3.1 [class.qual] paragraph 1 bullet 2 as follows:
a conversion-type-id of a conversion-function-id is looked up both in the scope of the class and in the context in which the entire postfix-expression occurs and shall refer to the same type in both contexts in the same manner as a conversion-type-id in a class member access (see 3.4.5 [basic.lookup.classref]);
Change 3.4.5 [basic.lookup.classref] paragraph 1 as follows:
In a class member access expression (5.2.5 [expr.ref]), if the . or -> token is immediately followed by an identifier followed by a <, the identifier must be looked up to determine whether the < is the beginning of a template argument list (14.2 [temp.names]) or a less-than operator. The identifier is first looked up in the class of the object expression. If the identifier is not found, it is then looked up in the context of the entire postfix-expression and shall name a class template. If the lookup in the class of the object expression finds a template, the name is also looked up in the context of the entire postfix-expression and
if the name is not found, the name found in the class of the object expression is used, otherwise
if the name is found in the context of the entire postfix-expression and does not name a class template, the name found in the class of the object expression is used, otherwise
if the name found is a class template, it shall refer to the same entity as the one found in the class of the object expression, otherwise the program is ill-formed.
Change 3.4.5 [basic.lookup.classref] paragraph 4 as follows:
If the id-expression in a class member access is a qualified-id of the form
class-name-or-namespace-name::...the class-name-or-namespace-name following the . or -> operator is looked up both in the context of the entire postfix-expression and in the scope of the class of the object expression. If the name is found only in the scope of the class of the object expression, the name shall refer to a class-name. If the name is found only in the context of the entire postfix-expression, the name shall refer to a class-name or namespace-name. If the name is found in both contexts, the class-name-or-namespace-name shall refer to the same entity. first looked up in the class of the object expression and the name, if found, is used. Otherwise it is looked up in the context of the entire postfix-expression. [Note: See 3.4.3 [basic.lookup.qual], which describes the lookup of a name before ::, which will only find a type or namespace name. —end note]
Change 3.4.5 [basic.lookup.classref] paragraph 7 as follows:
If the id-expression is a conversion-function-id, its conversion-type-id shall denote the same type in both the context in which the entire postfix-expression occurs and in the context of the class of the object expression (or the class pointed to by the pointer expression). is first looked up in the class of the object expression and the name, if found and denotes a type, is used. Otherwise it is looked up in the context of the entire postfix-expression and the name shall denote a type. [Example:
struct A { };
namespace N {
  struct A {
    void g() { }
    template <class T> operator T();
  };
}

int main() {
  N::A a;
  a.operator A();   // calls N::A::operator N::A
}
—end example]
The note in 3.6.2 [basic.start.init] paragraph 3 contains the following example:
inline double fd() { return 1.0; }
extern double d1;
double d2 = d1;    // unspecified:
                   // may be statically initialized to 0.0 or
                   // dynamically initialized to 1.0
double d1 = fd();  // may be initialized statically to 1.0
The comment for d2 overlooks the third possibility: if both d1 and d2 are dynamically initialized, d2 will be initialized to 0.
Proposed resolution (November, 2010):
Change the comments in the example in 3.6.2 [basic.start.init] paragraph 3 as follows:
inline double fd() { return 1.0; }
extern double d1;
double d2 = d1;    // unspecified:
                   // may be statically initialized to 0.0 or
                   // dynamically initialized to 1.0 to 0.0 if d1 is dynamically initialized,
                   // or 1.0 otherwise
double d1 = fd();  // may be initialized statically or dynamically to 1.0
(The note should also appear as running text following the bulleted list rather than as a bulleted item.)
3.9 [basic.types] paragraph 10 requires that a class have at least one constexpr constructor other than the copy constructor in order to be considered a literal type. However, a constexpr constructor template might be instantiated in such a way that the constexpr specifier is ignored (7.1.5 [dcl.constexpr] paragraph 5). It is therefore not known whether a class with a constexpr constructor template is a literal type or not until the constructor template is specialized, which could mean that an example like
struct IntValue {
  template<typename T> constexpr IntValue(T t) : val(t) { }
  constexpr intmax_t get_value() { return val; }
private:
  intmax_t val;
};
is ill-formed, because it is an error to declare a member function (like get_value()) of a non-literal class to be constexpr (7.1.5 [dcl.constexpr] paragraph 6).
3.9 [basic.types] paragraph 10 should be revised so that either a constexpr constructor or constexpr constructor template allows a class to be a literal type.
Proposed resolution (November, 2010):
Change 3.9 [basic.types] paragraph 10 as follows:
A type is a literal type if it is:
a scalar type; or
a class type (Clause 9 [class]) with that
a trivial copy constructor,
no non-trivial move constructor,
has a trivial destructor,
a trivial default constructor or is an aggregate type (8.5.1 [dcl.init.aggr]) or has at least one constexpr constructor other than the or constructor template that is not a copy or move constructor, and
has all non-static data members and base classes of literal types; or
an array of literal type.
This resolution also resolves issues 1071 and 1198.
According to 3.9 [basic.types] paragraph 10, one of the requirements for a literal class type is
a trivial default constructor or at least one constexpr constructor other than the copy or move constructor
This rule has unfortunate consequences. For example, in
struct A { int x; };
struct B: A { int y; };
B is a literal type, even though it is impossible to initialize a constant of that type. Conversely, in
struct C {
  int a, b;
  constexpr C(int x, int y): a(x), b(y) { }
};
struct D {
  int x;
  C c;
};
D is not a literal type, even though it could be initialized as an aggregate.
It would be an improvement to replace the requirement for a trivial default constructor with a requirement that the class be an aggregate.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 981.
According to 3.9 [basic.types] paragraph 10, a literal class type has
a trivial copy constructor,
no non-trivial move constructor,
...
Is this intended to mean that
struct A {
  A(const A&) = default;
  A(A&);
};
is a literal class because it does have a trivial copy constructor even though it also has a non-trivial one? That seems inconsistent with the prohibition of non-trivial move constructors.
My preference would be to resolve this inconsistency by dropping the restriction on non-trivial move constructors. It seems to me that having a trivial copy or move constructor is sufficient; we don't need to prohibit additional non-trivial ones. Actually, it's not clear to me that we need the first condition either; a literal type could be used for singleton variables even if it can't be copied.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 981.
According to 7.2 [dcl.enum] paragraph 10,
An expression of arithmetic or enumeration type can be converted to an enumeration type explicitly.
However, 5.2.9 [expr.static.cast] paragraph 10 says only,
A value of integral or enumeration type can be explicitly converted to an enumeration type.
This omits floating-point values. Presumably unscoped enumeration types are covered by paragraph 7,
The inverse of any standard conversion sequence (Clause 4 [conv]), other than the lvalue-to-rvalue (4.1 [conv.lval]), array-to- pointer (4.2 [conv.array]), function-to-pointer (4.3 [conv.func]), and boolean (4.12 [conv.bool]) conversions, can be performed explicitly using static_cast.
because 4.9 [conv.fpint] paragraph 2 allows an unscoped enumeration value to be implicitly converted to a floating point type. (Although that also covers the integral types, so it's not clear why they would be mentioned specifically in 5.2.9 [expr.static.cast] paragraph 10.) However, this should presumably say “arithmetic” instead of “integral” to match the statement in 7.2 [dcl.enum] paragraph 10.
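For illustration (this example is not part of the submitted issue or the proposed wording; the names are invented):

enum E { e0, e1, e2 };

E x = static_cast<E>(2);     // integral to enumeration: covered by 5.2.9 [expr.static.cast] paragraph 10
E y = static_cast<E>(1.9);   // floating-point to enumeration: permitted by 7.2 [dcl.enum] paragraph 10
                             // (“arithmetic or enumeration type”) but not mentioned in 5.2.9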
Proposed resolution (November, 2010):
Change 5.2.9 [expr.static.cast] paragraph 10 as follows:
A value of integral or enumeration type can be explicitly converted to an enumeration type. The value is unchanged if the original value is within the range of the enumeration values (7.2 [dcl.enum]). Otherwise, the resulting enumeration value is unspecified (and might not be in that range). A value of floating-point type can also be converted to an enumeration type. The resulting value is the same as converting the original value to the underlying type of the enumeration (4.9 [conv.fpint]), and subsequently to the enumeration type.
Add the following footnote to the end of 7.2 [dcl.enum] paragraph 7:
...If the enumerator-list is empty, the values of the enumeration are as if the enumeration had a single enumerator with value 0. [Footnote: This set of values is used to define promotion and conversion semantics for the enumeration type; it does not exclude an expression of enumeration type from having a value that falls outside this range. —end footnote]
Delete 7.2 [dcl.enum] paragraph 10:
An expression of arithmetic or enumeration type can be converted to an enumeration type explicitly. The value is unchanged if it is in the range of enumeration values of the enumeration type; otherwise the resulting enumeration value is unspecified.
The resolution to issue 195 makes “converting a pointer to a function into a pointer to an object type or vice versa” conditionally-supported behavior. In doing so, however, it overlooked the fact that void is not an “object type” (3.9 [basic.types] paragraph 9). The wording should be amended to allow conversion to and from void* types.
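For illustration, the conversions in question are of the following form (this example is not part of the proposed wording):

void f();
void* p = reinterpret_cast<void*>(&f);            // function pointer to void*
void (*q)() = reinterpret_cast<void (*)()>(p);    // and back

Under the current wording these are not covered by the conditionally-supported case, because void* is not a pointer to object type.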
Proposed resolution (November, 2010):
Change 3.7.4.3 [basic.stc.dynamic.safety] paragraphs 1-2 as follows:
A traceable pointer object is
an object of pointer-to-object an object pointer type (3.9.2 [basic.compound]), or
an object of an integral type that is at least as large as std::intptr_t, or
a sequence of elements in an array of character type, where the size and alignment of the sequence match that those of some pointer-to-object object pointer type.
A pointer value is a safely-derived pointer to a dynamic object only if it has pointer-to-object an object pointer type and it is...
Change 3.9.2 [basic.compound] paragraphs 3-4 as follows:
The type of a pointer to void or a pointer to an object type is called an object pointer type. [Note: A pointer to void does not have a pointer-to-object type, however, because void is not an object type. —end note] The type of a pointer that can designate a function is called a function pointer type. A pointer to objects of type T is referred to as a “pointer to T.” [Example:...
Objects of cv-qualified (3.9.3 [basic.type.qualifier]) or cv-unqualified type void* (pointer to void), A pointer to cv-qualified (3.9.3 [basic.type.qualifier]) or cv-unqualified void can be used to point to objects of unknown type. A void* Such a pointer shall be able to hold any object pointer. A cv-qualified or cv-unqualified (3.9.3 [basic.type.qualifier]) An object of type cv void* shall have the same representation and alignment requirements as a cv-qualified or cv-unqualified cv char*.
Change 4.10 [conv.ptr] paragraph 1 as follows:
...A null pointer constant can be converted to a pointer type; the result is the null pointer value of that type and is distinguishable from every other value of pointer to object or pointer to function object pointer or function pointer type...
Change 4.11 [conv.mem] paragraph 2 footnote 58 as follows:
...Note that a pointer to member is not a pointer to object or a pointer to function an object pointer or a function pointer and...
Change 5.2.10 [expr.reinterpret.cast] paragraphs 6-8 as follows:
A pointer to a function pointer can be explicitly converted to a pointer to a function pointer of a different type...
A pointer to an An object pointer can be explicitly converted to a pointer to a different object type an object pointer of a different type...
Converting a pointer to a function into a pointer to an object function pointer to an object pointer type or vice versa is conditionally-supported...
Change the note in 8.3.5 [dcl.fct] paragraph 6 as follows:
[Note: function types are checked during the assignments and initializations of pointer-to-functions, reference-to-functions, and pointer-to-member-functions pointers to functions, references to functions, and pointers to member functions. —end note]
In the “Index of Implementation-defined Behavior,” change the following item as indicated:
converting pointer to function into pointer to object function pointer to object pointer and vice versa
[Drafting note: 5.3.5 [expr.delete] paragraph 1 was not changed, so the operand of delete still cannot be a void*. 13.6 [over.built] paragraph 14 was not changed, so void* pointers still do not get overloads for operator-. 14.1 [temp.param] paragraph 4 was not changed and thus continues to allow only pointers to objects, not object pointers, as non-type template parameters.]
(See also issue 1120.)
It is not permitted to use reinterpret_cast to convert between pointers to object type and pointers to void.
See also issue 573.
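An illustrative example of the conversions in question (not part of the proposed wording):

void f() {
  int i = 0;
  void* pv = reinterpret_cast<void*>(&i);   // object pointer to pointer to void
  int* pi = reinterpret_cast<int*>(pv);     // pointer to void back to object pointer
}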
Proposed resolution (August, 2010):
Change 5.2.10 [expr.reinterpret.cast] paragraph 7 as follows:
A pointer to an object An object pointer can be explicitly converted to a pointer to a different object type an object pointer of a different type.70 When a prvalue v of type “pointer to T1” is converted to the type “pointer to cv T2”, the result is static_cast<cv T2*>(static_cast<cv void*>(v)) if both T1 and T2 are standard-layout types (3.9 [basic.types]) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type “pointer to T1” to the type “pointer to T2” (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified.
(Note: this resolution depends on that of issue 573.)
It's not clear whether the current rules for constant expressions allow indirect calls of constexpr functions and constexpr member functions; for example,
constexpr bool is_negative(int x) { return x < 0; }
constexpr bool check(int x, bool (*p)(int)) { return p(x); }
static_assert(check(-2, is_negative), "Error");
If this is to be permitted, there does not seem to be a reason to prohibit equality comparison of pointers to functions or pointers to objects of static storage duration -- these can be tracked as is already done for non-type template parameters.
Proposed resolution (November, 2010):
Change 5.19 [expr.const] paragraph 2 bullet 19 as follows:
a relational (5.9 [expr.rel]) or equality (5.10 [expr.eq]) operator where at least one of the operands is a pointer the result is unspecified;
According to 7.2 [dcl.enum] paragraph 10,
An expression of arithmetic or enumeration type can be converted to an enumeration type explicitly. The value is unchanged if it is in the range of enumeration values of the enumeration type; otherwise the resulting enumeration value is unspecified.
(There is similar wording in 5.2.9 [expr.static.cast].) Does the phrase “resulting enumeration value” mean that the result, although unspecified, must lie within the range of enumeration values of the enumeration type? Existing practice seems to allow out-of-range values to be preserved if the underlying type is large enough to represent the value. This freedom is important both for efficiency (to avoid having to mask values while storing and/or fetching) and to prevent optimizers from removing code that tests for out-of-range values.
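For illustration (this example is not part of the proposed wording; the names are invented):

enum E { e = 1 };            // the range of enumeration values of E is [0, 1]

void f() {
  E x = static_cast<E>(5);   // the “resulting enumeration value” is unspecified
  if (static_cast<int>(x) > 1) {
    // existing practice allows the out-of-range value 5 to be preserved,
    // in which case this test must not be optimized away
  }
}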
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1094.
It should be allowed to explicitly default a non-public special member function on its first declaration. It is very likely that users will want to default protected/private constructors and copy constructors without having to write such defaulting outside the class.
Proposed resolution (November, 2010):
Change 8.4.2 [dcl.fct.def.default] paragraphs 1-5 as follows:
A function definition of the form:
attribute-specifieropt decl-specifier-seqopt declarator = default ;
is called an explicitly-defaulted definition. A function that is explicitly defaulted shall
be a special member function,
have the same declared function type (except for possibly differing ref-qualifiers and except that in the case of a copy constructor or copy assignment operator, the parameter type may be “reference to non-const T”, where T is the name of the member function's class) as if it had been implicitly declared, and
not have default arguments, and.
not have an exception-specification.
[Note: This implies that parameter types, return type, and cv-qualifiers must match the hypothetical implicit declaration. —end note]
An explicitly-defaulted function may be declared constexpr only if it would have been implicitly declared as constexpr, and may have an explicit exception-specification only if it is compatible (15.4 [except.spec]) with the exception-specification on the implicit declaration. If it a function is explicitly defaulted on its first declaration,
it shall be public,
it shall not be explicit,
it shall not be virtual,
it is implicitly considered to be constexpr if the implicit declaration would be,
it is implicitly considered to have the same exception-specification as if it had been implicitly declared (15.4 [except.spec]), and
in the case of a copy constructor, move constructor, copy assignment operator, or move assignment operator, it shall have the same parameter type as if it had been implicitly declared.
[Note: Such a special member function may be trivial, and thus its accessibility and explicitness should match the hypothetical implicit definition; see below. —end note] [Example:
struct S {
  constexpr S() = default;            // ill-formed: implicit S() is not constexpr
  S(int a = 0) = default;             // ill-formed: default argument
  void operator=(const S&) = default; // ill-formed: non-matching return type
  ~S() throw(int) = default;          // ill-formed: exception specification doesn't match
private:
  int i;
  S(S&);                              // OK: private copy constructor
};
S::S(S&) = default;                   // OK: defines copy constructor

—end example]
Explicitly-defaulted functions and implicitly-declared functions are collectively called defaulted functions, and the implementation shall provide implicit definitions for them (12.1 [class.ctor] 12.4 [class.dtor], 12.8 [class.copy]), which might mean defining them as deleted. A special member function is user-provided if it is user-declared and not explicitly defaulted or deleted on its first declaration. A user-provided explicitly-defaulted function (i.e., explicitly defaulted after its first declaration) is defined at the point where it is explicitly defaulted; if such a function is implicitly defined as deleted, the program is ill-formed. [Note: while an implicitly-declared special member function is inline (Clause 12 [special]), an explicitly-defaulted definition may be non-inline. Non-inline definitions are user-provided, and hence non-trivial (12.1 [class.ctor], 12.4 [class.dtor], 12.8 [class.copy]). This rule enables Declaring a function as defaulted after its first declaration can provide efficient execution and concise definition while enabling a stable binary interface to an evolving code base. —end note]
[Example:
struct trivial {
  trivial() = default;
  trivial(const trivial&) = default;
  trivial(trivial&&) = default;
  trivial& operator=(const trivial&) = default;
  trivial& operator=(trivial&&) = default;
  ~trivial() = default;
};

struct nontrivial1 {
  nontrivial1();
};
nontrivial1::nontrivial1() = default;           // not inline first declaration

struct nontrivial2 {
  nontrivial2();
};
inline nontrivial2::nontrivial2() = default;    // not first declaration

struct nontrivial3 {
  virtual ~nontrivial3() = 0;                   // virtual
};
inline nontrivial3::~nontrivial3() = default;   // not first declaration

—end example]
Change 12.1 [class.ctor] paragraph 5 as follows:
Change 12.4 [class.dtor] paragraph 3 as follows:
...A destructor is trivial if it is neither not user-provided nor deleted and...
Change 12.8 [class.copy] paragraph 13 as follows:
A copy/move constructor for class X is trivial if it is neither not user-provided nor deleted and...
Change 12.8 [class.copy] paragraph 27 as follows:
A copy/move assignment operator for class X is trivial trivial if it is neither not user-provided nor deleted and...
This resolution also resolves issues 1136, 1137, 1140, 1145, 1149, and 1208.
It should be allowed to explicitly default an explicit special member function on its first declaration. It is very likely that users will want to default explicit copy constructors without having to write such defaulting outside of the class.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1135.
It should be allowed to explicitly default a virtual special member function on its first declaration. It is very likely that users will want to default virtual copy assignment operators and destructors without having to write such defaulting outside of the class.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1135.
The class
struct A { const int i; };
was a POD in C++98, but is not a POD under the FCD rules because it does not have a trivial default constructor. C++0x POD was intended to be a superset of C++98 POD.
Suggested resolution: Change POD to be standard layout and trivially copyable.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1135.
It is currently not permitted to specify an exception-specification in a defaulted definition. It would be nice to be able to do so (providing the explicit specification matches the one that would be implicitly supplied) for documentation purposes.
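For example, the request is that a declaration like the following be permitted (illustrative only):

struct S {
  S() noexcept = default;   // noexcept is compatible with the exception-specification
                            // of the implicitly declared default constructor
};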
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1135.
What effect does defaulting have on triviality? Related to issue 1135, non-public special members defaulted on their first declaration should retain triviality, because they shouldn't be considered user-provided. Related to issue 1137, defaulted member functions that are virtual should not be considered trivial, but there's no reason why non-virtuals could not be.
Furthermore, a class with a non-public explicitly-defaulted constructor isn't ever trivially constructible under the current rules. If such a class is used as a subobject, the constructor of the aggregating class should be trivial if it can access the non-public explicitly defaulted constructor of a subobject.
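For illustration, the situation described is of the following form (the names are invented; this is not proposed wording):

class B {
  B() = default;      // non-public, explicitly defaulted on its first declaration
  friend struct A;
};

struct A {
  B b;                // A's implicit default constructor can access B::B() via the friendship
};

Under the suggested resolution below, A could have a trivial default constructor even though B's default constructor is not public.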
Suggested resolution: Change the triviality rules so that a class can have a trivial default constructor if the class has access to the default constructors of its subobjects and the default constructors of the subobjects are explicitly defaulted on first declaration, even if said defaulted constructors are non-public.
See also issue 1149.
Rationale (August, 2010):
The consensus of the CWG was that this change should not be made at this point in the standardization process, but that it might be considered at a later date.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1135.
A defaulted destructor should be implicitly defined as deleted if operator delete is deleted or inaccessible.
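For illustration (not part of the proposed wording; the names are invented):

struct S {
  virtual ~S() = default;
  static void operator delete(void*) = delete;
};

Under the proposed wording below, the lookup of the non-array deallocation function for the virtual defaulted destructor finds a deleted operator delete, so S::~S() would be defined as deleted.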
Proposed resolution (November, 2010):
Change 12.4 [class.dtor] paragraph 3 as follows:
...A defaulted destructor for a class X is defined as deleted if:
X is a union-like class that has a variant member with a non-trivial destructor,
any of the non-static data members has class type M (or array thereof) and M has a deleted destructor or a destructor that is inaccessible from the defaulted destructor, or
any direct or virtual base class has a deleted destructor or a destructor that is inaccessible from the defaulted destructor.,
or, for a virtual destructor, lookup of the non-array deallocation function results in an ambiguity or in a function that is deleted or inaccessible from the defaulted destructor.
A destructor is trivial if...
12.8 [class.copy] paragraphs 6-7 currently read,
A declaration of a constructor for a class X is ill-formed if its first parameter is of type (optionally cv-qualified) X and either there are no other parameters or else all other parameters have default arguments.
A member function template is never instantiated to perform the copy of a class object to an object of its class type. [Example:
struct S {
  template<typename T> S(T);
  template<typename T> S(T&&);
  S();
};

S f();
const S g;

void h() {
  S a( f() );   // does not instantiate member template;
                // uses the implicitly generated move constructor
  S a(g);       // does not instantiate the member template;
                // uses the implicitly generated copy constructor
}
These paragraphs were previously a single paragraph, and the second sentence was intended to mean that
template <class T> A(T);
will never be instantiated to produce A(A). It should not have been split and the example should not have been amended to include move construction.
Lawrence Crowl: I suggest something along the lines of
A member function template is never instantiated to match the signature of an ill-formed constructor.
Proposed resolution (November, 2010):
Merge 12.8 [class.copy] paragraphs 6 and 7 and change the text as follows:
A declaration of a constructor for a class X is ill-formed if its first parameter is of type (optionally cv-qualified) X and either there are no other parameters or else all other parameters have default arguments. A member function template is never instantiated to perform the copy of a class object to an object of its class type produce such a constructor signature. [Example:
struct S {
  template<typename T> S(T);
  template<typename T> S(T&&);
  S();
};

S f();
const S g;

void h() {
  S a( f() );   // does not instantiate member template;
                // uses the implicitly generated move constructor
  S a(g);       // does not instantiate the member template to produce S::S<S>(S);
                // uses the implicitly generated declared copy constructor
}
A class with a non-public explicitly-defaulted copy constructor isn't ever trivially copyable under the current rules. If such a class is used as a subobject, the copy constructor of the aggregating class should be trivial if it can access the non-public explicitly defaulted copy constructor of a subobject.
See also issue 1145.
Rationale (August, 2010):
The consensus of the CWG was that this change should not be made at this point in the standardization process, but that it might be considered at a later date.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1135.
Overload resolution should first look for a viable list constructor, then look for a non-list constructor if no list constructor is viable.
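An example illustrating the intended two-phase behavior (illustrative only, not part of the proposed wording; the names are invented):

#include <initializer_list>
#include <string>

struct S {
  S(std::initializer_list<std::string>);   // #1: initializer-list constructor
  S(int, double);                          // #2
};

S s{1, 3.14};   // #1 is not viable (no conversion from int or double to std::string);
                // under the proposed rule, overload resolution is performed again with all
                // constructors as candidates and the elements as arguments, selecting #2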
Proposed resolution (August, 2010):
Change 8.5.4 [dcl.init.list] paragraph 3 bullet 5 as follows:
Otherwise, if T is a class type, constructors are considered. If T has an initializer-list constructor, the argument list consists of the initializer list as a single argument; otherwise, the argument list consists of the elements of the initializer list. The applicable constructors are enumerated (13.3.1.7 [over.match.list]) and the best one is chosen through overload resolution (13.3.1.7 [over.match.list], 13.3 [over.match]). If a narrowing conversion (see below) is required to convert any of the arguments, the program is ill-formed. [Example:...
Change 13.3.1.7 [over.match.list] as follows:
When objects of non-aggregate class type T are list-initialized (8.5.4 [dcl.init.list]), overload resolution selects the constructor in two phases as follows, where T is the cv-unqualified class type of the object being initialized:
If T has an initializer-list constructor (8.5.4 [dcl.init.list]), Initially, the candidate functions are the initializer-list constructors (8.5.4 [dcl.init.list]) of the class T and the argument list consists of the initializer list as a single argument.
; otherwise, If no viable initializer-list constructor is found, overload resolution is performed again, where the candidate functions are all the constructors of the class T and the argument list consists of the elements of the initializer list.
For direct-list-initialization, the candidate functions are all the constructors of the class T.
For In copy-list-initialization, the candidate functions are all the constructors of T. However, if an explicit constructor is chosen, the initialization is ill-formed. [Note: This differs from other situations (13.3.1.3 [over.match.ctor], 13.3.1.4 [over.match.copy]), where only converting constructors are considered for copy-initialization. This restriction only applies if this initialization is part of the final result of overload resolution. —end note]
It is not clear how to handle compatible dynamic-exception-specifications and noexcept-specifications. For example, given
void f() throw();
void f() noexcept { throw 1; }
should we call terminate() or unexpected()? And for
void g() throw (int);
void g() noexcept (false) { throw 1.0; }
should this call unexpected or propagate the exception? Does the order of the declarations (and which is the definition) matter?
Alisdair Meredith:
And what about something like
struct A {
  ~A() throw() { }
};
struct B {
  ~B() noexcept { }
};
struct C: A, B { };
What is the exception specification for C's destructor?
Proposed resolution (November, 2010):
Change 15.4 [except.spec] paragraph 3 as follows:
Two exception-specifications are compatible if:
both are non-throwing (see below), regardless of their form,
both have the form noexcept(constant-expression) and the constant-expressions are equivalent, or
one exception-specification is a noexcept-specification allowing all exceptions and the other is of the form throw(type-id-list), or
both are dynamic-exception-specifications that have the same set of adjusted types.
Add the following note to the end of 15.4 [except.spec] paragraph 9:
Whenever an exception is thrown and the search...
—end example]
[Note: A function can have multiple declarations with different non-throwing exception-specifications; for this purpose, the one on the function definition is used. —end note]
It is not entirely clear that a function-try-block on a destructor will catch exceptions from a base or member destructor; whether such exceptions might be swallowed with a simple return statement rather than being rethrown; and whether such a clause might be entered multiple times if multiple bases/members throw, or if that is an automatic terminate call.
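For illustration, the situation in question is of the following form (the names are invented; this is not proposed wording):

struct B {
  ~B() noexcept(false) { throw 1; }   // a base whose destructor exits with an exception
};

struct D : B {
  ~D() try {
    // ...
  } catch (...) {
    // Does the exception thrown during the destruction of the B subobject reach this
    // handler? May it be swallowed by a simple return, or must it be rethrown? What
    // happens if more than one subobject destructor throws?
  }
};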
Proposed resolution (August, 2010):
Change 15 [except] paragraph 4 as follows:
...An exception thrown during the execution of the initializer expressions in the ctor-initializer or during the execution of the compound-statement, or — in the case of a destructor — during the destruction of a subobject, transfers control to a handler in a function-try-block in the same way as an exception thrown during the execution of a try-block transfers control to other handlers. [Example:...
Additional note (October, 2010):
There is a related problem with this wording: it covers only “the execution of the initializer expressions in the ctor-initializer,” when it should also cover execution of base and member constructors, regardless of whether they have initializer expressions in the ctor-initializer or not.
The issue has been moved back to "review" status to allow consideration of amending the proposed resolution to something like
...during the execution of the compound-statement or, if the function is a constructor or destructor, during the initialization or destruction of the class's subobjects, transfers control...
Proposed resolution (November, 2010):
Change 15 [except] paragraph 4 as follows:
A function-try-block associates a handler-seq with the ctor-initializer, if present, and the compound-statement. An exception thrown during the execution of the initializer expressions in the ctor-initializer or during the execution of the compound-statement or, for constructors and destructors, during the initialization or destruction, respectively, of the class's subobjects, transfers control to a handler in a function-try-block in the same way as an exception thrown during the execution of a try-block transfers control to other handlers.
According to 3.2 [basic.def.odr] paragraph 2,
A declaration is a definition unless it declares a function without specifying the function's body (8.4 [dcl.fct.def]), it contains the extern specifier (7.1.1 [dcl.stc]) or a linkage-specification25 (7.5 [dcl.link]) and neither an initializer nor a function-body...
Because = delete and = default are not forms of function-body, this description does not cover defaulted and deleted functions, even though these declarations are elsewhere referred to as being definitions.
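For example (illustrative only):

struct S {
  S() = default;          // intended to be a definition
  S(const S&) = delete;   // intended to be a definition
};

Neither “= default ;” nor “= delete ;” is a function-body, so the wording quoted above does not actually classify these declarations as definitions.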
Proposed resolution (January, 2011):
Change the grammar in 8.4.1 [dcl.fct.def.general] paragraph 1 as follows:
The current wording of 3.3.2 [basic.scope.pdecl] does not specify the point of declaration for an alias-declaration (although it does do so in paragraph 3 for a template alias: “The point of declaration of a template alias immediately follows the identifier for the alias being declared”). One might assume that an alias-declaration would be the same, but it's not clear that that is the right resolution (for either declaration, but especially for the alias-declaration).
An alias-declaration is intended to be essentially a different syntactic form of a typedef declaration (7.1.3 [dcl.typedef] paragraph 2). Placing the point of declaration at the trailing semicolon instead of following the name of the alias would allow more compatibility with the capabilities of typedefs, for instance:
struct S { };
namespace N {
  using S = S;
}
Notes from the November, 2010 meeting:
The CWG agreed that the point of declaration for both template and non-template cases should be at the semicolon.
Proposed resolution (January, 2011):
Change 3.3.2 [basic.scope.pdecl] paragraph 3 as follows:
...The point of declaration of a template an alias or alias template immediately follows the identifier for the alias being declared the type-id to which the alias refers.
The current draft uses the term “built-in type” several times, but it is not defined anywhere. The Index appears to make it synonymous with “fundamental type,” but the implication of 4 [conv] paragraph 1 is that compound types like pointers should also be considered as “built-in.”
Proposed resolution (January, 2011):
Change 1.8 [intro.object] paragraph 7 as follows:
[Note: C++ provides a variety of built-in fundamental types and several ways of composing new types from existing types (3.9 [basic.types]). —end note]
Change 2.5 [lex.pptoken] as follows:
[Example: The program fragment x+++++y is parsed as x ++ ++ + y, which, if x and y are of built-in have integral types, violates a constraint on increment operators, even though the parse x ++ + ++ y might yield a correct expression. —end example]
Change 18.3.1.2 [numeric.limits.members] paragraph 58 as follows:
True if the set of values representable by the type is finite.220 [Note: All built-in fundamental types (3.9.1 [basic.fundamental]) are bounded. This member would be false for arbitrary precision types. —end note]
Change 24.2.1 [iterator.requirements.general] paragraph 1 as follows:
...All input iterators i support the expression *i, resulting in a value of some class, enumeration, or built-in object type T, called the value type of the iterator...
Change 24.2.3 [input.iterators] paragraph 1 as follows:
A class or a built-in pointer type X satisfies the requirements of an input iterator for the value type T if X satisfies the Iterator (24.2.2 [iterator.iterators]) and EqualityComparable (Table 33) requirements and the expressions in Table 107 are valid and have the indicated semantics.
Change 24.2.4 [output.iterators] paragraph 1 as follows:
A class or a built-in pointer type X satisfies the requirements of an output iterator if X if X satisfies the Iterator requirements (24.2.2 [iterator.iterators]) and the expressions in Table 108 are valid and have the indicated semantics.
Change 24.2.5 [forward.iterators] paragraph 1 as follows:
A class or a built-in pointer type X satisfies the requirements of a forward iterator if...
Change 24.2.6 [bidirectional.iterators] paragraph 1 as follows:
A class or a built-in pointer type X satisfies the requirements of a bidirectional iterator if, in addition to satisfying the requirements for forward iterators, the following expressions are valid as shown in Table 110.
Change 24.2.7 [random.access.iterators] paragraph 1 as follows:
A class or a built-in pointer type X satisfies the requirements of a random access iterator if, in addition to satisfying the requirements for bidirectional iterators, the following expressions are valid as shown in Table 111.
Change C.1.2 [diff.basic] section 3.1 as follows:
Rationale: This avoids having different initialization rules for built-in fundamental types and user-defined types.
Change C.1.7 [diff.class] section 9.1 as follows:
...This new name space definition provides important notational conveniences to C++ programmers and helps making the use of the user-defined types as similar as possible to the use of built-in fundamental types...
Delete the index entry, “built-in type; see fundamental type.”
(Note: This resolution assumes that the resolution for issue 572 has been applied, removing “built-in type” from 4 [conv] paragraph 1.)
4 [conv] paragraph 1 says,
Standard conversions are implicit conversions defined for built-in types.
However, enumeration types (which take part in the integral promotions) and class types (which take part in the lvalue-to-rvalue conversion) are not “built-in” types, so the definition of “standard conversions” is wrong.
Proposed resolution (October, 2006):
Change 4 [conv] paragraph 1 as follows:
Standard conversions are implicit conversions defined for built-in types with built-in meaning...
The description of class member access expressions in 5.2.5 [expr.ref] paragraph 2 defines the terms “object expression” and “pointer expression:”
For the first option (dot) the type of the first expression (the object expression) shall be “class object” (of a complete type). For the second option (arrow) the type of the first expression (the pointer expression) shall be “pointer to class object” (of a complete type).
(Note in passing that the phrase “class object” seems very odd when describing a type.) The rest of that section is based on the equivalence of the expression E1->E2 to (*(E1)).E2 and thus is phrased only in terms of “object expression.” This terminology appears to have been misapplied in other parts of the Standard, using the term “object expression” to refer both to class types and pointers to class types. The most egregious of these is 5.5 [expr.mptr.oper] paragraph 4, describing the operands of a pointer-to-member expression:
The first operand is called the object expression. If the dynamic type of the object expression does not contain the member to which the pointer refers, the behavior is undefined.
The dynamic type of the first operand in the ->* case is a pointer type, not a class type, so it cannot “contain the member to which the pointer refers.” Another example is 3.4.5 [basic.lookup.classref], describing the lookup in a class member access expression. The first paragraph uses the term consistently with its use in 5.2.5 [expr.ref], but paragraph 2 reads:
If the id-expression in a class member access (5.2.5 [expr.ref]) is an unqualified-id, and the type of the object expression is of a class type C, the unqualified-id is looked up in the scope of class C. If the type of the object expression is of pointer to scalar type, the unqualified-id is looked up in the context of the complete postfix-expression.
Paragraph 7 gets it right:
...in the context of the class of the object expression (or the class pointed to by the pointer expression).
Another misapplication of the term occurs in 5.2.2 [expr.call] paragraph 1:
...the call is as a member of the object pointed to or referred to by the object expression (5.2.5 [expr.ref], 5.5 [expr.mptr.oper])... its final overrider (10.3 [class.virtual]) in the dynamic type of the object expression is called. [Note: the dynamic type is the type of the object pointed or referred to by the current value of the object expression...
Here again we have the idea that an object expression can “point to” an object.
Another minor complication is that 5 [expr] paragraph 7 has a separate definition for the (hyphenated) term “object-expression:”
An expression designating an object is called an object-expression.
This term is used several times in the Standard, apparently interchangeably with the non-hyphenated version defined in 5.2.5 [expr.ref]; for example, 5.1.1 [expr.prim.general] paragraph 10 bullet 1 mentions
a class member access (5.2.5 [expr.ref]) in which the object-expression refers to the member's class
using the term defined in 5 [expr] paragraph 7 but linking it with 5.2.5 [expr.ref].
These uses of “object expression” and “object-expression” need to be made consistent, especially the reference in 5.5 [expr.mptr.oper] that implies that the dynamic type of a pointer is that of the complete object to which it points.
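An example illustrating the terminology problem (illustrative only; the names are invented):

struct S { int m; };

void f() {
  S s;
  S* p = &s;
  int S::*pm = &S::m;

  s.*pm = 0;    // the first operand s is an object expression of class type
  p->*pm = 0;   // the first operand p has pointer type, yet 5.5 [expr.mptr.oper] paragraph 4
                // calls it the “object expression” and speaks of its dynamic type
                // containing the member
}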
Proposed resolution (February, 2011):
Change 3.4.5 [basic.lookup.classref] paragraph 2 as follows:
...If the type of the object expression is of pointer to scalar type For a pseudo-destructor call (5.2.4 [expr.pseudo]), the unqualified-id is looked up in the context of the complete postfix-expression.
Delete 5 [expr] paragraph 7
An expression designating an object is called an object-expression.
Change 5.1.1 [expr.prim.general] paragraph 10 as follows:
An id-expression that denotes a non-static data member or non-static member function of a class can only be used:
as part of a class member access (5.2.5 [expr.ref]) in which the object-expression object expression refers to the member's class or a class derived from that class, or
...
Change 5.2.2 [expr.call] paragraph 1 as follows:
...For a member function call, the postfix expression shall be an implicit (9.3.1 [class.mfct.non-static], 9.4 [class.static]) or explicit class member access (5.2.5 [expr.ref]) whose id-expression is a function member name, or a pointer-to-member expression (5.5 [expr.mptr.oper]) selecting a function member; the call is as a member of the class object pointed to or referred to by the object expression (5.2.5 [expr.ref], 5.5 [expr.mptr.oper])... [Note: the dynamic type is the type of the object pointed or referred to by the current value of the object expression. 12.7 [class.cdtor] describes the behavior of virtual function calls when the object-expression object expression refers to an object under construction or destruction. —end note]
Change 5.2.5 [expr.ref] paragraph 2 as follows:
For the first option (dot) the type of the first expression (the object expression) shall be “class object” (of a complete type) have complete class type. For the second option (arrow) the type of the first expression (the pointer expression) shall be “pointer to class object” (of a complete type) have pointer to complete class type. The expression E1->E2 is converted to the equivalent form (*(E1)).E2; the remainder of 5.2.5 [expr.ref] will address only the first option (dot) [Footnote: Note that (*(E1)) is an lvalue. —end footnote]. In these cases either case, the id-expression shall name a member of the class or of one of its base classes...
Change 5.2.5 [expr.ref] paragraph 3 as follows:
If E1 has the type “pointer to class X,” then the expression E1->E2 is converted to the equivalent form (*(E1)).E2; the remainder of 5.2.5 [expr.ref] will address only the first option (dot)66. Abbreviating object-expression postfix-expression.id-expression as E1.E2, then the E1 is called the object expression. The type and value category of this expression E1.E2 are determined as follows...
Change 5.5 [expr.mptr.oper] paragraph 3 as follows:
...The result is an object or a function of the type specified by the second operand. The expression E1->*E2 is converted into the equivalent form (*(E1)).*E2.
Change 5.5 [expr.mptr.oper] paragraph 4 as follows:
The first operand Abbreviating pm-expression.*cast-expression as E1.*E2, E1 is called the object expression. If the dynamic type of the object expression E1 does not contain the member to which the pointer E2 refers, the behavior is undefined.
Change 5.5 [expr.mptr.oper] paragraph 6 as follows:
...In a .* expression whose object expression is an rvalue, the program is ill-formed if the second operand is a pointer to member function with ref-qualifier &. In a ->* expression or in a .* expression whose object expression is an lvalue, the program is ill-formed if the second operand is a pointer to member function with ref-qualifier &&. The result of a .* expression whose second operand is a pointer to a data member is of the same value category (3.10 [basic.lval]) as its first operand. The result of a .* expression whose second operand is a pointer to a member function is a prvalue. The result of an ->* expression is an lvalue if its second operand is a pointer to data member and a prvalue otherwise. If the second operand is the null pointer to member value (4.11 [conv.mem]), the behavior is undefined.
Change 9.4 [class.static] paragraph 2 as follows:
...A static member may be referred to using the class member access syntax, in which case the object-expression object expression is evaluated...
Change 12.7 [class.cdtor] paragraph 4 as follows:
...If the virtual function call uses an explicit class member access (5.2.5) and the object-expression object expression refers to the object under construction...
Change 14.2 [temp.names] paragraph 4 as follows:
When the name of a member template specialization appears after . or -> in a postfix-expression or after a nested-name-specifier in a qualified-id, and the object or pointer expression of the postfix-expression or...
[Note: although the current text of 3.4.5 [basic.lookup.classref] paragraph 7 mentions the phrase “pointer expression,” that wording will be replaced by issue 1111 or issue 1220 and is thus not addressed here.]
According to 5.19 [expr.const] paragraph 3,
A constant expression is an integral constant expression if it is of integral or enumeration type. [Note: such expressions may be used as array bounds (8.3.4 [dcl.array], 5.3.4 [expr.new]), as case expressions (6.4.2 [stmt.switch]), as bit-field lengths (9.6 [class.bit]), as enumerator initializers (7.2 [dcl.enum]), and as integral or enumeration non-type template arguments (14.3 [temp.arg]). —end note]
Although there is conceptually a conversion from enumeration type to integral type involved in using an enumerator as an array bound or bit-field length, the normative wording for those uses does not explicitly mention it and simply requires an integral constant expression. Consequently, the current wording permits uses like the following:
enum class E { e = 10 };
struct S {
  int arr[E::e];
  int i: E::e;
};
This seems surprising.
Proposed resolution (February, 2011):
This issue is resolved by the resolution of issue 1197.
It is not clear what happens when a program violates the limits on constexpr function recursion in a context that does not require a constant expression. For example,
constexpr int f(int i) { return f(i); }
const int i = f(1); // error, undefined behavior, or dynamic initialization?
(Presumably the “within its resource limits” caveat of 1.4 [intro.compliance] paragraph 2 would effectively result in undefined behavior in a context that required a constant expression.)
Notes from the November, 2010 meeting:
The CWG was of mixed opinion as to whether an infinite recursion in a constexpr function should be ill-formed or simply render an expression non-constant.
Proposed resolution (January, 2011):
Add the following bullet in 5.19 [expr.const] paragraph 2:
an invocation of a constexpr function or a constexpr constructor that would exceed the implementation-defined recursion limit (see annex B [implimits]);
According to 14.3.2 [temp.arg.nontype] paragraph 1, one of the possibilities for a template-argument for a non-type, non-template template-parameter is
an integral constant expression (including a constant expression of literal class type that can be used as an integral constant expression as described in 5.19 [expr.const])
However, the requirement for such a literal class type is (5.19 [expr.const] paragraph 5):
...that class type shall have a single non-explicit conversion function to an integral or enumeration type and that conversion function shall be constexpr.
Note that this normative requirement for a single conversion function is contradicted by the example in that paragraph, which reads in significant part,
struct A {
constexpr A(int i) : val(i) { }
constexpr operator int() { return val; }
constexpr operator long() { return 43; }
private:
int val;
};
template<int> struct X { };
constexpr A a = 42;
X<a> x; // OK: unique conversion to int
Proposed resolution (February, 2011):
This issue is resolved by the resolution of issue 1197.
The requirement in 5.19 [expr.const] that a constant expression cannot contain
an array-to-pointer conversion (4.2 [conv.array]) that is applied to a glvalue that does not designate an object with static storage duration
effectively eliminates the use of automatic constexpr arrays such as
void f() {
  constexpr int ar[] = { 1, 2 };
  constexpr int i = ar[1];
}
There does not seem to be a problem with this kind of usage.
Proposed resolution (February, 2011):
The proposed resolution will be submitted as a separate document.
C and C++ differ in the treatment of an expression statement, in particular with regard to whether a volatile lvalue is fetched. For example,
volatile int x;
void f() {
  x;   // Fetches x in C, not in C++
}
The reason C++ is different in this regard is principally due to the fact that an assignment expression is an lvalue in C++ but not in C. If the lvalue-to-rvalue conversion were applied to expression statements, a statement like
x = 5;
would write to x and then immediately read it.
It is not clear that the current approach to dealing with the difference in assignment expressions is the only or best approach; it might be possible to avoid the unwanted fetch on the result of an assignment statement without giving up the fetch for a variable appearing by itself in an expression statement.
Proposed resolution (January, 2011):
Add a new paragraph after 5 [expr] paragraph 10:
In some contexts, an expression only appears for its side-effects. Such an expression is called a discarded-value expression. The expression is evaluated and its value is discarded. The array-to-pointer (4.2 [conv.array]) and function-to-pointer (4.3 [conv.func]) standard conversions are not applied. The lvalue-to-rvalue conversion (4.1 [conv.lval]) is applied only if the expression is an lvalue of volatile-qualified type and it has one of the following forms:
id-expression (5.1.1 [expr.prim.general]),
subscripting (5.2.1 [expr.sub]),
class member access (5.2.5 [expr.ref]),
indirection (5.3.1 [expr.unary.op]),
pointer-to-member operation (5.5 [expr.mptr.oper]),
conditional expression (5.16 [expr.cond]) where both the second and the third operand are one of the above, or
comma expression (5.18 [expr.comma]) where the right operand is one of the above.
Change 5.2.9 [expr.static.cast] paragraph 6 as follows:
Any expression can be explicitly converted to type cv void, in which case it becomes a discarded-value expression (Clause 5 [expr]). The expression value is discarded. [Note: however, if the value is in a temporary object (12.2 [class.temporary]), the destructor for that object is not executed until the usual time, and the value of the object is preserved for the purpose of executing the destructor. —end note] The lvalue-to-rvalue (4.1 [conv.lval]), array-to-pointer (4.2 [conv.array]), and function-to-pointer (4.3 [conv.func]) standard conversions are not applied to the expression.
Change 5.18 [expr.comma] paragraph 1 as follows:
...A pair of expressions separated by a comma is evaluated left-to-right; and the value of the left expression is discarded a discarded-value expression (Clause 5 [expr]).83 The lvalue-to-rvalue (4.1 [conv.lval]), array-to-pointer (4.2 [conv.array]), and function-to-pointer (4.3 [conv.func]) standard conversions are not applied to the left expression. Every value computation...
Change 6.2 [stmt.expr] paragraph 1 as follows:
...The expression is evaluated and its value is discarded a discarded-value expression (clause 5 [expr]). The lvalue-to-rvalue (4.1 [conv.lval]), array-to-pointer (4.2 [conv.array]), and function-to-pointer (4.3 [conv.func]) standard conversions are not applied to the expression. All side effects...
Here's an example:
typedef struct S { ... } S; void fs(S *x) { ... }
The big question is, to what declaration does the reference to identifier S actually refer? Is it the S that's declared as a typedef name, or the S that's declared as a class name (or in C terms, as a struct tag)? (In either case, there's clearly only one type to which it could refer, since a typedef declaration does not introduce a new type. But the debugger apparently cares about more than just the identity of the type.)
Here's a classical, closely related example:
struct stat { ... }; int stat(); ... stat( ... ) ...
Does the identifier stat refer to the class or the function? Obviously, in C, you can't refer to the struct tag without using the struct keyword, because it is in a different name space, so the reference must be to the function. In C++, the reference is also to the function, but for a completely different reason.
Now in C, typedef names and function names are in the same name space, so the natural extrapolation would be that, in the first example, S refers to the typedef declaration, as it would in C. But C++ is not C. For the purposes of this discussion, there are two important differences between C and C++:
The first difference is that, in C++, typedef names and class names are not in separate name spaces. On the other hand, according to section 3.3.10 [basic.scope.hiding] (Name hiding), paragraph 2:
A class name (9.1) or enumeration name (7.2) can be hidden by the name of an object, function, or enumerator declared in the same scope. If a class or enumeration name and an object, function, or enumerator are declared in the same scope (in any order) with the same name, the class or enumeration name is hidden wherever the object, function, or enumerator name is visible.
Please consider carefully the phrase I have highlighted, and the fact that a typedef name is not the name of an object, function or enumerator. As a result, this example:
struct stat { ... }; typedef int stat;
Which would be perfectly legal in C, is disallowed in C++, both implicitly (see the above quote) and explicitly (see section 7.1.3 [dcl.typedef] (The typedef specifier), paragraph 3):
In a given scope, a typedef specifier shall not be used to redefine the name of any type declared in that scope to refer to a different type. Similarly, in a given scope, a class or enumeration shall not be declared with the same name as a typedef-name that is declared in that scope and refers to a type other than the class or enumeration itself.
From which we can conclude that in C++ typedef names do not hide class names declared in the same scope. If they did, the above example would be legal.
The second difference is that, in C++, a typedef name that refers to a class is a class-name; see 7.1.3 [dcl.typedef] paragraph 4:
A typedef-name that names a class is a class-name (9.1). If a typedef-name is used following the class-key in an elaborated-type-specifier (7.1.5.3) or in the class-head of a class declaration (9), or is used as the identifier in the declarator for a constructor or destructor declaration (12.1, 12.4), the program is ill-formed.
This implies, for instance, that a typedef-name referring to a class can be used in a nested-name-specifier (i.e. before :: in a qualified name) or following ~ to refer to a destructor. Note that using a typedef-name as a class-name in an elaborated-type-specifier is not allowed. For example:
struct X { };
typedef struct X X2;
X x;              // legal
X2 x2;            // legal
struct X sx;      // legal
struct X2 sx2;    // illegal
The final relevant piece of the standard is 7.1.3 [dcl.typedef] paragraph 2:
In a given scope, a typedef specifier can be used to redefine the name of any type declared in that scope to refer to the type to which it already refers.
This of course is what allows the original example, to which let us now return:
typedef struct S { ... } S; void fs(S *x) { ... }
The question, again, is: to which declaration of S does the reference actually refer? In C, it would clearly be to the second, since the first would be accessible only by using the struct keyword. In C++, if typedef names hid class names declared in the same scope, the answer would be the same. But we've already seen that typedef names do not hide class names declared in the same scope.
So to which declaration does the reference to S refer? The answer is that it doesn't matter. The second declaration of S, which appears to be a declaration of a typedef name, is actually a declaration of a class name (7.1.3 [dcl.typedef] paragraph 4), and as such is simply a redeclaration. Consider the following example:
typedef int I, I; extern int x, x; void f(), f();
To which declaration would a reference to I, x or f refer? It doesn't matter, because the second declaration of each is really just a redeclaration of the thing declared in the first declaration. So to save time, effort and complexity, the second declaration of each doesn't add any entry to the compiler's symbol table.
Note (March, 2005):
Matt Austern: Is this legal?
struct A { }; typedef struct A A; struct A* p;
Am I right in reading the standard [to say that this is ill-formed]? On the one hand it's a nice uniform rule. On the other hand, it seems likely to confuse users. Most people are probably used to thinking that 'typedef struct A A' is a null operation, and, if this code really is illegal, it would seem to be a gratuitous C/C++ incompatibility.
Mike Miller: I think you're right. 7.1.3 [dcl.typedef] paragraph 1:
A name declared with the typedef specifier becomes a typedef-name.
7.1.3 [dcl.typedef] paragraph 2:
In a given non-class scope, a typedef specifier can be used to redefine the name of any type declared in that scope to refer to the type to which it already refers.
After the typedef declaration in the example, the name A has been “redefined” — it is no longer just a class-name, it has been “redefined” to be a typedef-name (that, by virtue of the fact that it refers to a class type, is also a class-name).
John Spicer: In C, and originally in C++, an elaborated-type-specifier did not consider typedef names, so “struct X* x” would find the class and not the typedef.
When C++ was changed to make typedefs visible to elaborated-type-specifier lookups, I believe this issue was overlooked and inadvertently made ill-formed.
I suspect we need to add text saying that if a given scope contains both a class/enum and a typedef, an elaborated-type-specifier lookup finds the class/enum.
Mike Miller: I'm a little uncomfortable with this approach. The model we have for declaring a typedef in the same scope as a class/enum is redefinition, not hiding (like the “struct stat” hack). This approach seems to assume that the typedef hides the class/enum, which can then be found by an elaborated-type-specifier, just as if it were hidden by a variable, function, or enumerator.
Also, this approach reduces but doesn't eliminate the incompatibility with C. For example:
struct S { };
{
  typedef struct S S;
  struct S* p;    // still ill-formed
}
My preference would be for something following the basic principle that declaring a typedef-name T in a scope where T already names the type designated by the typedef should have no effect on whether an elaborated-type-specifier in that or a nested scope is well-formed or not. Another way of saying that is that a typedef-name that designates a same-named class or enumeration in the same or a containing scope is transparent with respect to elaborated-type-specifiers.
John Spicer: This strikes me as being a rather complicated solution. When we made the change to make typedefs visible to elaborated-type-specifiers we did so knowing it would make some C cases ill-formed, so this does not bother me. We've lived with the C incompatibility for many years now, so I don't personally feel a need to undo it. I also don't like the fact that you have to essentially do the old-style elaborated-type-specifier lookup to check the result of the lookup that found the typedef.
I continue to prefer the direction I described earlier: if a given scope contains both a class/enum and a typedef, an elaborated-type-specifier lookup finds the class/enum.
Notes from the April, 2005 meeting:
The CWG agreed with John Spicer's approach, i.e., permitting a typedef-name to be used in an elaborated-type-specifier only if it is declared in the same scope as the class or enumeration it names.
Proposed resolution (January, 2011):
Add the following new paragraph after 7.1.3 [dcl.typedef] paragraph 4:
If a typedef specifier is used to redefine in a given scope an entity that can be referenced using an elaborated-type-specifier, the entity can continue to be referenced by an elaborated-type-specifier or as an enumeration or class name in an enumeration or class definition respectively. [Example:
struct S;
typedef struct S S;
int main() {
  struct S* p;    // OK
}
struct S { };     // OK
—end example]
The current requirements for constexpr functions do not permit a deleted constexpr function because the definition does not consist of a compound-statement containing just a return statement. However, it could be useful to allow this form in a case where a single piece of code is used in multiple configurations, in some of which the function is constexpr and in others deleted; having to update all declarations of the function to remove the constexpr specifier is unnecessarily onerous.
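For illustration, a minimal sketch of the kind of configuration-dependent code in question (the macro and function names here are hypothetical, not taken from the issue):

#if CONFIG_HAS_TABLE                     // hypothetical configuration macro
constexpr int table_size() { return 64; }
#else
constexpr int table_size() = delete;     // currently ill-formed, since the
                                         // function-body is not a
                                         // compound-statement with a return;
                                         // allowed under this resolution
#endif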
Proposed resolution (January, 2011):
Change 7.1.5 [dcl.constexpr] paragraph 3 as follows:
...
its function-body shall be = delete or a compound-statement of the form
{ return expression ; }...
Change 7.1.5 [dcl.constexpr] paragraph 4 as follows:
The definition of a constexpr constructor In the definition of a constexpr constructor, each of the parameter types shall be a literal type or a reference to a literal type. In addition, either its function-body shall be = delete or it shall satisfy the following constraints:
each of its parameter types shall be a literal type or a reference to literal type;
...
Issue 1199 proposes to add the capability of defining a constexpr special function as deleted. It would be similarly useful to be able to mark a defaulted constructor as constexpr. (It should be noted that the existing text of 12.1 [class.ctor] and the proposed resolution of issue 1224 already allow for implicitly-defined constructors to be implicitly constexpr; this issue simply proposes allowing the explicit use of the constexpr specifier.)
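As a minimal sketch of what the change would permit (the class and member names are hypothetical):

struct point {
  int x = 0, y = 0;
  constexpr point() = default;   // explicitly defaulted and explicitly
                                 // constexpr: permitted under this change
};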
Proposed resolution (February, 2011):
Change 7.1.5 [dcl.constexpr] paragraph 3 as follows:
...
its function-body shall be = delete, = default, or a compound-statement of the form
{ return expression ; }...
Change 7.1.5 [dcl.constexpr] paragraph 4 as follows:
In the definition of a constexpr constructor, each of the parameter types shall be a literal type or a reference to a literal type. In addition, either its function-body shall be = delete or = default or it shall satisfy the following constraints:
...
A trivial copy/move constructor is also a constexpr constructor.
8.5.1 [dcl.init.aggr] paragraph 4 says,
An initializer-list is ill-formed if the number of initializer-clauses exceeds the number of members or elements to initialize.
However, in a new-expression, the number of elements to be initialized is potentially unknown at compile time. How should an overly-long initializer-list in a new-expression be treated?
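A minimal sketch of the case in question (the run-time bound here is hypothetical):

void f(int n) {                   // n is known only at run time
  int* p = new int[n]{1, 2, 3};   // three initializer-clauses; if n < 3,
                                  // the resolution below requires an
                                  // exception of type
                                  // std::bad_array_new_length at run time
                                  // rather than a compile-time diagnostic
}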
Notes from the August, 2010 meeting:
The consensus of the CWG was that this case should throw an exception at runtime.
Proposed resolution (January, 2011):
Change 5.3.4 [expr.new] paragraph 7 as follows:
When the value of the expression in a noptr-new-declarator is zero, the allocation function is called to allocate an array with no elements. If the value of that expression is less than zero or such that the size of the allocated object would exceed the implementation-defined limit, or if the new-initializer is a braced-init-list for which the number of initializer-clauses exceeds the number of elements to initialize, no storage is obtained and the new-expression terminates by throwing an exception of a type that would match a handler (15.3 [except.handle]) of type std::bad_array_new_length (18.6.2.2 [new.badlength]).
The ordering imposed by 8.5.1 [dcl.init.aggr] paragraph 17 applies only to “the full-expressions in an initializer-clause” (i.e., what follows an = in an aggregate initializer); this leaves unspecified the order in which the expressions in an initializer-list (the term used by the braced-init-list form of initializer, with no =) are evaluated.
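A minimal sketch of the distinction (the function and class names are hypothetical):

int counter = 0;
int next() { return ++counter; }

struct P { P(int, int); };

int a[2] = { next(), next() };   // aggregate initialization: paragraph 17
                                 // already orders these left-to-right
P p{ next(), next() };           // constructor call written with a
                                 // braced-init-list: the current wording
                                 // leaves the order unspecified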
Notes from the November, 2010 meeting:
The CWG favored guaranteeing the order of evaluation of initializer-clauses appearing in a braced-init-list, regardless of whether the braced-init-list is an aggregate initialization or constructor call.
Proposed resolution (January, 2011):
Delete 8.5.1 [dcl.init.aggr] paragraph 17:
The full-expressions in an initializer-clause are evaluated in the order in which they appear.
Insert the following as a new paragraph between paragraphs 3 and 4 of 8.5.4 [dcl.init.list]
Within the initializer-list of a braced-init-list, the initializer-clauses, including any that result from pack expansions (14.5.3 [temp.variadic]), are evaluated in the order in which they appear. That is, every value computation and side effect associated with a given initializer-clause is sequenced before every value computation and side effect associated with any initializer-clause that follows it in the comma-separated list of the initializer-list. [Note: This evaluation ordering holds regardless of the semantics of the initialization; for example, it applies when the elements of the initializer-list are interpreted as arguments of a constructor call, even though ordinarily there are no sequencing constraints on the arguments of a call. —end note]
In looking at a large handful of core issues related to elaborated-type-specifiers and the naming of classes in general, I discovered an odd fact. It turns out that there is exactly one place in the grammar where nested-name-specifier is not immediately preceded by "::opt": in class-head, which is used only for class definitions. So technically, this example is ill-formed, and should evoke a syntax error:
struct A; struct ::A { };
However, all of EDG, GCC and Microsoft's compiler accept it without a qualm. In fact, I couldn't get any of them to even warn about it.
Suggested resolution:
It would simplify the grammar, and apparently better reflect existing practice, to factor the global-scope operator into the rule for nested-name-specifier.
Proposed resolution (February, 2011):
The proposed resolution will be submitted as a separate document.
Consider the following example:
struct vector {
  struct iterator { };
  struct const_iterator { };
  iterator begin();
  const_iterator begin() const;
};
class block {
  vector v;
  auto end() const -> decltype(v.begin()) { return v.begin(); }
};
Because the transformation of a member name into a class member access expression (9.3.1 [class.mfct.non-static] paragraph 3) only occurs inside the body of a non-static member function, the type of v in the trailing-return-type is non-const but is const in the return expression, resulting in a type mismatch between the return expression and the return type of the function.
One possibility would be to include the trailing-return-type as being subject to the transformation in 9.3.1 [class.mfct.non-static]. Note, however, that this is currently not in scope at that point (see issue 945).
Notes from the November, 2010 meeting:
The CWG felt that, because this is effectively an implicit parameter, the best approach would be to model its usability on the visibility of parameters: it could be named wherever a parameter of the function is in scope.
Proposed resolution (February, 2011):
Change 5.1.1 [expr.prim.general] paragraph 2 as follows, adding three new paragraphs:
The keyword this names a pointer to the object for which a non-static member function (9.3.2 [class.this]) is invoked or a non-static data member's initializer (9.2 [class.mem]) is evaluated. The keyword this shall be used only inside the body of a non-static member function (9.3 [class.mfct]) of the nearest enclosing class or in a brace-or-equal-initializer for a non-static data member (9.2 [class.mem]). The type of the expression is a pointer to the class of the function or non-static data member, possibly with cv-qualifiers on the class type. The expression is a prvalue.
If a function-definition or member-declarator declares a member function of a class X, the expression this is a prvalue of type “pointer to cv-qualifier-seq X” between the optional cv-qualifier-seq and the end of the function-definition or member-declarator. It shall not appear before the optional cv-qualifier-seq and it shall not appear within the declaration of a static member function (although its type and value category is defined within a static member function as it is within a non-static member function). [Note: the type and value category is defined even for the case of a static member function because declaration matching does not occur until the complete declarator is known, and this may be used in the trailing-return-type of the declarator. —end note]
Otherwise, if a member-declarator declares a non-static data member (9.2 [class.mem]) of a class X, the expression this is a prvalue of type “pointer to X” within the optional brace-or-equal-initializer. It shall not appear elsewhere in the member-declarator.
The expression this shall not appear in any other context.
[Example:...
Change 5.1.1 [expr.prim.general] paragraph 10 as follows:
An id-expression that denotes a non-static data member or non-static member function of a class can only be used:
...
- in the body of beyond the optional cv-qualifier-seq in the member-declarator or function-definition that declares a non-static member function of that class or of a class derived from that class (9.3.1 [class.mfct.non-static]), or
...
Change 9.3.1 [class.mfct.non-static] paragraph 3 as follows:
When an id-expression (5.1 [expr.prim]) that is not part of a class member access syntax (5.2.5 [expr.ref]) and not used to form a pointer to member (5.3.1 [expr.unary.op]) is used in the body declaration of a non-static member function of class X, if name lookup (3.4 [basic.lookup]) resolves the name...
According to 9.8 [class.local] paragraph 1,
Declarations in a local class can use only type names, static variables, extern variables and functions, and enumerators from the enclosing scope.
This would presumably make both of the members of S2 below ill-formed:
void test() {
  const int local_const = 7;
  struct S2 {
    int member: local_const;
    void f() {
      int j = local_const;
    }
  };
}
Should there be an exception to this rule for constant values? Current implementations seem to accept the reference to local_const in the bit-field declaration but not in the member function definition. Should they be the same or different?
Notes from the September, 2008 meeting:
The CWG agreed that both uses of local_const in the example above should be accepted. The intent of the restriction was to avoid the need to pass a frame pointer into local class member functions, so uses of local const variables as values should be permitted.
Notes from the October, 2009 meeting:
There was interest in an approach that would allow explicitly-captured constants to appear in constant expressions but also to be “used.” Another suggestion was to have variables captured if they appear in either “use” or “non-use” contexts.
Proposed resolution (February, 2011):
Change 5.1.2 [expr.prim.lambda] paragraph 17 as follows:
Every id-expression that is an odr-use (3.2 [basic.def.odr]) of an entity captured by copy is transformed into an access to the corresponding unnamed data member of the closure type. [Note: an id-expression that is not an odr-use refers to the original entity, never to a member of the closure type. Furthermore, such an id-expression does not cause the implicit capture of the entity. —end note] If this is captured, each odr-use of this is transformed into an access to the corresponding unnamed data member of the closure type, cast (5.4 [expr.cast]) to the type of this. [Note: the cast ensures that the transformed expression is a prvalue. —end note] [Example:
void f(const int*);
void g() {
  const int N = 10;
  [=] {
    int arr[N];   // OK: not an odr-use, refers to automatic variable
    f(&N);        // OK: causes N to be captured; &N points to the
                  // corresponding member of the closure type
  };
}
—end example]
...Declarations in a local class can use only type names, static variables, extern variables and functions, and enumerators from the shall not odr-use (3.2 [basic.def.odr]) a variable with automatic storage duration from an enclosing scope. [Example:
int x;
void f() {
  static int s;
  int x;
  const int N = 5;
  extern int g q();
  struct local {
    int g() { return x; }      // error: odr-use of automatic variable x has automatic storage duration
    int h() { return s; }      // OK
    int k() { return ::x; }    // OK
    int l() { return g q(); }  // OK
    int m() { return N; }      // OK: not an odr-use
    int* n() { return &N; }    // error: odr-use of automatic variable N
  };
}
local* p = 0;                  // error: local not in scope
—end example]
Consider the following example:
struct A { A(); ~A() = delete; }; struct B: A { }; B* b = new B;
Under the current rules, B() is not deleted, but is ill-formed because it calls the deleted A::~A() if it exits via an exception after the completion of the construction of A. A deleted subobject destructor should be added to the list of reasons for implicit deletion in 12.1 [class.ctor] and 12.8 [class.copy].
Notes from the November, 2010 meeting:
The CWG agreed that a change was needed, but only if one or more base and/or member constructors are non-trivial.
Proposed resolution (January, 2011):
Add a new bullet to 12.1 [class.ctor] paragraph 5 as follows:
...A defaulted default constructor for class X is defined as deleted if:
...
X is a non-union class and all members of any anonymous union member are of const-qualified type (or array thereof), or
any direct or virtual base class, or non-static data member with no brace-or-equal-initializer, has class type M (or array thereof) and either M has no default constructor or overload resolution (13.3 [over.match]) as applied to M's default constructor results in an ambiguity or in a function that is deleted or inaccessible from the defaulted default constructor., or
any direct or virtual base class or non-static data member has a type with a destructor that is deleted or inaccessible from the defaulted default constructor.
Add a new bullet to 12.8 [class.copy] paragraph 12 as follows:
...A defaulted copy/move constructor for a class X is defined as deleted (8.4.3 [dcl.fct.def.delete]) if X has:
a variant member with a non-trivial corresponding constructor and X is a union-like class,
a non-static data member of class type M (or array thereof) that cannot be copied/moved because overload resolution (13.3 [over.match]), as applied to M's corresponding constructor, results in an ambiguity or a function that is deleted or inaccessible from the defaulted constructor, or
a direct or virtual base class B that cannot be copied/moved because overload resolution (13.3 [over.match]), as applied to B's corresponding constructor, results in an ambiguity or a function that is deleted or inaccessible from the defaulted constructor, or
any direct or virtual base class or non-static data member of a type with a destructor that is deleted or inaccessible from the defaulted constructor,
for the copy constructor, a non-static data member of rvalue reference type, or
for the move constructor, a non-static data member or direct or virtual base class with a type that does not have a move constructor and is not trivially copyable.
The removal of the export keyword inadvertently deleted the text (previously found in 14 [temp] paragraph 8 of the 2003 Standard),
A non-exported template must be defined in every translation unit in which it is implicitly instantiated (14.7.1 [temp.inst]), unless the corresponding specialization is explicitly instantiated (14.7.2 [temp.explicit]) in some translation unit; no diagnostic is required.
This requirement must be reinstated.
Proposed resolution (January, 2011):
Add the following as a new paragraph following 14 [temp] paragraph 5:
A function template, member function of a class template, or static data member of a class template shall be defined in every translation unit in which it is implicitly instantiated (14.7.1 [temp.inst]), unless the corresponding specialization is explicitly instantiated (14.7.2 [temp.explicit]) in some translation unit; no diagnostic is required.
Since there appear to be no restrictions against it, default arguments and template parameter packs can presumably be used with alias templates just as with other templates. If that is the case, the current wording in 14.1 [temp.param] paragraph 11 requires adjustment:
If a template-parameter of a class template has a default template-argument, each subsequent template-parameter shall either have a default template-argument supplied or be a template parameter pack. If a template-parameter of a class template is a template parameter pack, it shall be the last template-parameter.
Presumably these restrictions should also apply to alias templates, but as written they apply only to class templates.
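For example, a minimal sketch of declarations that the current wording arguably fails to cover (the template names are hypothetical):

template<typename T> struct wrap { };

template<typename T = int, typename U> using A = wrap<U>;   // default argument not
                                                            // followed by defaults
                                                            // or a parameter pack
template<typename... Ts, typename U> using B = wrap<U>;     // parameter pack not last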
Proposed resolution (January, 2011):
Change 14.1 [temp.param] paragraph 11 as follows:
If a template-parameter of a class template or alias template has a default template-argument, each subsequent template-parameter shall either have a default template-argument supplied or be a template parameter pack. If a template-parameter of a primary class template or alias template is a template parameter pack, it shall be the last template-parameter. [Note:...
The intent is that it is a permissible implementation technique to do template instantiation at the end of a translation unit rather than at an actual point of instantiation. This idea is not reflected in the current rules, however.
Proposed resolution (January, 2011):
Change 14.6.4.1 [temp.point] paragraph 7 as follows:
A specialization for a function template, a member function template, or of a member function or static data member of a class template may have multiple points of instantiations within a translation unit, and in addition to the points of instantiation described above, for any such specialization that has a point of instantiation within the translation unit, the end of the translation unit is also considered a point of instantiation. A specialization for a class template...
According to 14.8.2 [temp.deduct] paragraph 8,
Access checking is not done as part of the substitution process. Consequently, when deduction succeeds, an access error could still result when the function is instantiated.
This mimics the way access checking is done in overload resolution. However, experience has shown that this exemption of access errors from deduction failure significantly complicates the Standard library, so this rule should be changed.
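A minimal sketch of the kind of case affected (the names are hypothetical): under the current rule, substitution of C into the first overload succeeds and the access error surfaces only at instantiation; under the revised rule the access error makes deduction fail, so the second overload is chosen instead.

class C {
  typedef int type;                           // private
};

template <class T> typename T::type f(int);   // #1: uses C::type
template <class T> void f(...);               // #2

void g() {
  f<C>(0);    // with access checking done during substitution, #1 drops out
              // by deduction failure and #2 is selected
}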
Proposed resolution (January, 2011):
Change 14.8.2 [temp.deduct] paragraph 8 as follows:
If a substitution results in an invalid type or expression, type deduction fails. An invalid type or expression is one that would be ill-formed if written using the substituted arguments. [Note: Access checking is not done as part of the substitution process. —end note] Consequently, when deduction succeeds, an access error could still result when the function is instantiated. Only invalid types...
15.3 [except.handle] paragraph 8 defines the “currently handled exception” as
The exception with the most recently activated handler that is still active
This definition ignores the possibility that an exception might be thrown and caught in another thread during the execution of a handler. Since throw; rethrows the “currently handled exception,” one might conclude that it would be the other thread's exception that would be rethrown instead of the one that activated that handler.
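A minimal sketch of the scenario (the helper function is hypothetical):

#include <stdexcept>

void run_other_thread_work();   // hypothetical: may start a thread that
                                // throws and handles its own exception
                                // while this handler is still active

void f() {
  try {
    throw std::runtime_error("original");
  } catch (...) {
    run_other_thread_work();
    throw;    // intended: rethrows "original"; the quoted definition could
              // be read as naming the other thread's exception instead
  }
}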
Proposed resolution (January, 2011):
Change 15 [except] paragraph 1 as follows:
Exception handling provides a way of transferring control and information from a point in the execution of a program thread to an exception handler associated with a point previously passed by the execution...
Change 15.1 [except.throw] paragraph 4 as follows:
...The implementation may then deallocate the memory for the exception object; any such deallocation is done in an unspecified way. [Note: An exception thrown by a throw-expression does not propagate to other threads unless caught, stored, and rethrown using appropriate library functions; see 18.8.5 [propagation] and 30.6 [futures]. —end note]
Change 15.3 [except.handle] paragraph 6 as follows:
1.8 [intro.object] paragraph 6 says,
Two distinct objects that are neither bit-fields nor base class subobjects of zero size shall have distinct addresses.
This formulation leaves open the possibility that two base class subobjects of the same type could have the same address (because one or both might be zero-length base class subobjects).
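A minimal sketch of the case left open (the class names are hypothetical):

struct B { };             // empty class; as a base it may have zero size
struct D1 : B { };
struct D2 : B { };
struct X : D1, D2 { };    // X has two distinct B subobjects

// Under the quoted wording, nothing forbids the two B subobjects of an X
// from having the same address, since both may be zero-size base class
// subobjects.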
Proposed resolution (November, 2010):
Change 1.8 [intro.object] paragraph 6 as follows:
Unless an object is a bit-field or a base class subobject of zero size, the address of that object is the address of the first byte it occupies. Two distinct objects that are neither not bit-fields nor base class subobjects of zero size shall have distinct addresses, if both have the same type or if not both are base class subobjects of zero size...
The current wording of the standard suggests that release sequences are maximal with respect to sequence inclusion, i.e. that if there are two release operations in the modification order,
        mod           mod
rel1 ---------> rel2 ---------> w
then [rel1;rel2;w] is the only release sequence, as the other candidate [rel2;w] is included in it. This interpretation precludes synchronizing with releases which have other releases sequenced-before them. We believe that the intention is actually to define the maximal release sequence from a particular release operation, which would admit both [rel1;rel2;w] and [rel2;w].
Proposed resolution (August, 2010):
Change 1.10 [intro.multithread] paragraph 6 as follows:
A release sequence from a release operation A on an atomic object M is a maximal contiguous sub-sequence of side effects in the modification order of M, where the first operation is a release A, and every subsequent operation
is performed by the same thread that performed the release, or
is an atomic read-modify-write operation.
The current draft has release/acquire synchronize-with edges only between a release on one thread and an acquire on a different thread, whereas the definition of dependency-ordered-before permits the release and consume to be on the same thread; it seems odd to permit the latter. (At the moment function arguments can't race or sync with each other, but they can be dependency ordered before each other.)
We don't currently have an example in which this makes a real difference, but for symmetry we suggest changing the definition of dependency-ordered-before in 1.10 [intro.multithread].
Proposed resolution (August, 2010):
Change 1.10 [intro.multithread] paragraph 9 as follows:
An evaluation A is dependency-ordered before an evaluation B if
A performs a release operation on an atomic object M, and, on another thread, B performs a consume operation on M and reads a value written by any side effect in the release sequence headed by A, or
for some evaluation X, A is dependency-ordered before X and X carries a dependency to B.
[Note:...
A user-defined literal like 0x123DZ could be parsed either as a hexadecimal-literal of 0x123 and a ud-suffix of DZ or as a hexadecimal-literal of 0x123D and a ud-suffix of Z. There does not appear to be a rule that disambiguates the two possible parses.
Proposed resolution (November, 2010):
Change 2.14.8 [lex.ext] paragraph 1 as follows:
If a token matches both user-defined-literal and another literal kind, it is treated as the latter. [Example: 123_km, 1.2LL, "Hello"s are all user-defined-literals, but 12LL is an integer-literal. —end example] The syntactic nonterminal preceding the ud-suffix in a user-defined-literal is taken to be the longest sequence of characters that could match that nonterminal. [Example: The ud-suffix in 1.0e0X is X, not e0X; in 0x1DZ, the ud-suffix is Z, not DZ. —end example]
The list of declarations that are not definitions given in 3.1 [basic.def] paragraph 2 does not mention several plausible candidates: parameter declarations in non-defining function declarations, non-static data members, and template parameters. It might be argued that none of these are declarations (paragraph 1 does not use the syntactic non-terminal declaration but does explicitly refer to clause 7 [dcl.dcl], where that non-terminal is defined). However, the list in paragraph 2 does mention static member declarations, which also are not declarations, so the intent is not clear.
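For concreteness, a minimal sketch of the three candidates the list omits (all names are hypothetical):

int f(int param);       // param: parameter in a non-defining function declaration
struct S {
  int m;                // m: non-static data member
};
template<typename T>    // T: template parameter
struct X;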
Proposed resolution (November, 2010):
Change 3.1 [basic.def] paragraph 2 as follows:
A declaration is a definition unless it declares a function without specifying the function's body (8.4 [dcl.fct.def]), it contains the extern specifier (7.1.1 [dcl.stc]) or a linkage-specification25 (7.5 [dcl.link]) and neither an initializer nor a function-body, it declares a static data member in a class definition (9.2 [class.mem], 9.4 [class.static]), it is a class name declaration (9.1 [class.name]), it is an opaque-enum-declaration (7.2 [dcl.enum]), it is a template-parameter (14.1 [temp.param]), it is a parameter-declaration (8.3.5 [dcl.fct]) in a function declaration that is not a definition, or it is a typedef declaration (7.1.3 [dcl.typedef]), an alias-declaration (7.1.3 [dcl.typedef]), a using-declaration (7.3.3 [namespace.udecl]), a static_assert-declaration (Clause 7 [dcl.dcl]), an attribute-declaration (Clause 7 [dcl.dcl]), an empty-declaration (Clause 7 [dcl.dcl]), or a using-directive (7.3.4 [namespace.udir]).
In describing static data members initialized inside the class definition, 9.4.2 [class.static.data] paragraph 3 says,
The member shall still be defined in a namespace scope if it is used in the program...
The definition of “used” is in 3.2 [basic.def.odr] paragraph 1:
An object or non-overloaded function whose name appears as a potentially-evaluated expression is used unless it is an object that satisfies the requirements for appearing in a constant expression (5.19 [expr.const]) and the lvalue-to-rvalue conversion (4.1 [conv.lval]) is immediately applied.
Now consider the following example:
struct S {
  static const int a = 1;
  static const int b = 2;
};
int f(bool x) {
  return x ? S::a : S::b;
}
According to the current wording of the Standard, this example requires that S::a and S::b be defined in a namespace scope. The reason for this is that, according to 5.16 [expr.cond] paragraph 4, the result of this conditional-expression is an lvalue and the lvalue-to-rvalue conversion is applied to that, not directly to the object, so this fails the “immediately applied” requirement. This is surprising and unfortunate, since only the values and not the addresses of the static data members are used. (This problem also applies to the proposed resolution of issue 696.)
Proposed resolution (November, 2009):
Change 3.2 [basic.def.odr] paragraph 1 as follows:
...An object or non-overloaded function whose name appears as a potentially-evaluated expression x is used unless it is an object that satisfies the requirements for appearing in a constant expression (5.19 [expr.const]) and the lvalue-to-rvalue conversion (4.1 [conv.lval]) is immediately applied eventually applied to all lvalue expressions e that could possibly denote that object, where e is a subexpression of the full-expression containing x...
Additional notes (November, 2009):
The proposed wording (like the existing wording) requires that S::a be defined in the following example:
struct S {
static const int a = 1;
};
void g() {
S::a; // no lvalue-to-rvalue conversion
}
Although this particular example is obviously unimportant, there could be similar cases where a use is buried in a nested conditional and the result eventually discarded, perhaps as the result of a macro expansion. An alternative approach that addresses this point might be something along the lines of
There is no lvalue expression e of which x is a subexpression to which a reference is bound or to which the unary & operator is applied that could possibly denote that object.
One objection to this latter approach is that it would require defining S::a if the expression were something like *&S::a, which would not be the case with the wording in the proposed resolution.
According to 3.2 [basic.def.odr] paragraph 2,
A variable or non-overloaded function whose name appears as a potentially-evaluated expression is used... A virtual member function is used if it is not pure.
However, that does not adequately address when a pure virtual function is used or not used. For example,
struct S {
  virtual void pure1() = 0;
  virtual void pure2() = 0;
};
void f(S* p) {
  p->pure1();
  p->S::pure2();
}
Both pure1 and pure2 satisfy the criterion that their name appears in a potentially-evaluated expression, but pure1 should not be considered “used” (which would require that it be defined); pure2 is “used” and must be defined.
Proposed resolution (November, 2010):
Change 3.2 [basic.def.odr] paragraph 2 as follows:
...A variable or non-overloaded function whose name appears as a potentially-evaluated expression is odr-used unless it is an object that satisfies the requirements for appearing in a constant expression (5.19 [expr.const]) and the lvalue-to-rvalue conversion (4.1 [conv.lval]) is immediately applied... A virtual member function is odr-used if it is not pure. A non-overloaded function whose name appears as a potentially-evaluated expression or a member of a set of candidate functions is odr-used if it is selected by overload resolution when referred to from a potentially-evaluated expression, are odr-used, unless the function is pure and its name is not explicitly qualified. [Note:...
Issue 678 added a bullet to the list in 3.2 [basic.def.odr] paragraph 5, inadvertently removing the second bullet from the reach of the part of that paragraph that reads,
If D is a template and is defined in more than one translation unit, then the last four requirements from the list above shall apply to names from the template's enclosing scope used in the template definition (14.6.3 [temp.nondep]),
In fixing this error, the wording should be recast to be more robust in the face of possible further edits to the list (i.e., not just changing “four” to “five”).
Proposed resolution (November, 2010):
Change 3.2 [basic.def.odr] paragraph 5 as follows:
...If D is a template and is defined in more than one translation unit, then the last four preceding requirements from the list above shall apply both to names from the template's enclosing scope used in the template definition (14.6.3 [temp.nondep]), and also to dependent names at the point of instantiation (14.6.2 [temp.dep])...
The various uses of the term “declarative region” throughout the Standard indicate that the term is intended to refer to the entire block, class, or namespace that contains a given declaration. For example, 3.3 [basic.scope] paragraph 2 says, in part:
[Example: in
int j = 24;
int main() {
  int i = j, j;
  j = 42;
}
The declarative region of the first j includes the entire example... The declarative region of the second declaration of j (the j immediately before the semicolon) includes all the text between { and }...
However, the actual definition given for “declarative region” in 3.3 [basic.scope] paragraph 1 does not match this usage:
Every name is introduced in some portion of program text called a declarative region, which is the largest part of the program in which that name is valid, that is, in which that name may be used as an unqualified name to refer to the same entity.
Because (except in class scope) a name cannot be used before it is declared, this definition contradicts the statement in the example and many other uses of the term throughout the Standard. As it stands, this definition is identical to that of the scope of a name.
The term “scope” is also misused. The scope of a declaration is defined in 3.3 [basic.scope] paragraph 1 as the region in which the name being declared is valid. However, there is frequent use of the phrase “the scope of a class,” not referring to the region in which the class's name is valid but to the declarative region of the class body, and similarly for namespaces, functions, exception handlers, etc. There is even a mention of looking up a name “in the scope of the complete postfix-expression” (3.4.5 [basic.lookup.classref] paragraph 3), which is the exact inverse of the scope of a declaration.
This terminology needs a thorough review to make it logically consistent. (Perhaps a discussion of the scope of template parameters could also be added to section 3.3 [basic.scope] at the same time, as all other kinds of scopes are described there.)
Proposed resolution (November, 2006):
Change 3.3 [basic.scope] paragraph 1 as follows:
Every name is introduced in some portion of program text called a declarative region, which is the largest part of the program in which that name is valid, that is, in which that name may be used as an unqualified name to refer to the same entity a statement, block, function declarator, function-definition, class, handler, template-declaration, template-parameter-list of a template template-parameter, or namespace. In general, each particular name is valid may be used as an unqualified name to refer to the entity of its declaration or to the label only within some possibly discontiguous portion of program text called its scope. To determine the scope of a declaration...
Change 3.3 [basic.scope] paragraph 3 as follows:
The names declared by a declaration are introduced into the scope in which the declaration occurs declarative region that directly encloses the declaration, except that declaration-statements, function parameter names in the declarator of a function-definition, exception-declarations (3.3.3 [basic.scope.local]), the presence of a friend specifier (11.4 [class.friend]), certain uses of the elaborated-type-specifier (7.1.6.3 [dcl.type.elab]), and using-directives (7.3.4 [namespace.udir]) alter this general behavior.
Change 3.3.3 [basic.scope.local] paragraphs 1-3 and add a new paragraph 4 before the existing paragraph 4 as follows:
A name declared in a block (6.3 [stmt.block]) is local to that block. Its potential scope begins at its point of declaration (3.3.2 [basic.scope.pdecl]) and ends at the end of its declarative region. The declarative region of a name declared in a declaration-statement is the directly enclosing block (6.3 [stmt.block]). Such a name is local to the block.
The potential scope declarative region of a function parameter name (including one appearing in the declarator of a function-definition or in a lambda-parameter-declaration-clause) or of a function-local predefined variable in a function definition (8.4 [dcl.fct.def]) begins at its point of declaration. If the function has a function-try-block the potential scope of a parameter or of a function-local predefined variable ends at the end of the last associated handler, otherwise it ends at the end of the outermost block of the function definition. A parameter name is the entire function definition or lambda-expression. Such a name is local to the function definition and shall not be redeclared in the any outermost block of the function definition nor in the outermost block of any handler associated with a function-try-block function-body (including handlers of a function-try-block) or lambda-expression.
The name in a catch exception-declaration The declarative region of a name declared in an exception-declaration is its entire handler. Such a name is local to the handler and shall not be redeclared in the outermost block of the handler.
The potential scope of any local name begins at its point of declaration (3.3.2 [basic.scope.pdecl]) and ends at the end of its declarative region.
Change 3.3.5 [basic.funscope] as indicated:
Labels (6.1 [stmt.label]) have function scope and may be used anywhere in the function in which they are declared except in members of local classes (9.8 [class.local]) of that function. Only labels have function scope.
Change 6.7 [stmt.dcl] paragraph 1 as follows:
A declaration statement introduces one or more new identifiers names into a block; it has the form
declaration-statement:
block-declaration
[Note: If an identifier a name introduced by a declaration was previously declared in an outer block, the outer declaration is hidden for the remainder of the block, after which it resumes its force (3.3.10 [basic.scope.hiding]). —end note]
[Drafting notes: This resolution deals almost exclusively with the unclear definition of “declarative region.” I've left the ambiguous use of “scope” alone for now. However, sections 3.3.x all have headings reading “xxx scope,” but they don't mean the scope of a declaration but rather the different kinds of declarative regions and their effects on the scope of declarations contained therein. To me, it looks like most of 3.4 should refer to “declarative region” and not to “scope.”
The change to 6.7 fixes an “identifier” misuse (e.g., extern T operator+(T,T); at block scope introduces a name but not an identifier) and removes normative redundancy.]
The Standard does not completely specify how to look up the type-name(s) in a pseudo-destructor-name (5.2 [expr.post] paragraph 1, 5.2.4 [expr.pseudo]), and what information it does have is incorrect and/or in the wrong place. Consider, for instance, 3.4.5 [basic.lookup.classref] paragraphs 2-3:
If the id-expression in a class member access (5.2.5 [expr.ref]) is an unqualified-id, and the type of the object expression is of a class type C (or of pointer to a class type C), the unqualified-id is looked up in the scope of class C. If the type of the object expression is of pointer to scalar type, the unqualified-id is looked up in the context of the complete postfix-expression.
If the unqualified-id is ~type-name, and the type of the object expression is of a class type C (or of pointer to a class type C), the type-name is looked up in the context of the entire postfix-expression and in the scope of class C. The type-name shall refer to a class-name. If type-name is found in both contexts, the name shall refer to the same class type. If the type of the object expression is of scalar type, the type-name is looked up in the scope of the complete postfix-expression.
There are at least three things wrong with this passage with respect to pseudo-destructors:
A pseudo-destructor call (5.2.4 [expr.pseudo]) is not a “class member access”, so the statements about scalar types in the object expressions are vacuous: the object expression in a class member access is required to be a class type or pointer to class type (5.2.5 [expr.ref] paragraph 2).
On a related note, the lookup for the type-name(s) in a pseudo-destructor name should not be described in a section entitled “Class member access.”
Although the class member access object expressions are carefully allowed to be either a class type or a pointer to a class type, paragraph 2 mentions only a “pointer to scalar type” (disallowing references) and paragraph 3 deals only with a “scalar type,” presumably disallowing pointers (although it could possibly be a very subtle way of referring to both non-class pointers and references to scalar types at once).
The other point at which lookup of pseudo-destructors is mentioned is 3.4.3 [basic.lookup.qual] paragraph 5:
If a pseudo-destructor-name (5.2.4 [expr.pseudo]) contains a nested-name-specifier, the type-names are looked up as types in the scope designated by the nested-name-specifier.
Again, this specification is in the wrong location (a pseudo-destructor-name is not a qualified-id and thus should not be treated in the “Qualified name lookup” section).
Finally, there is no place in the Standard that describes the lookup for pseudo-destructor calls of the form p->T::~T() and r.T::~T(), where p and r are a pointer and reference to scalar, respectively. To the extent that it gives any guidance at all, 3.4.5 [basic.lookup.classref] deals only with the case where the ~ immediately follows the . or ->, and 3.4.3 [basic.lookup.qual] deals only with the case where the pseudo-destructor-name contains a nested-name-specifier that designates a scope in which names can be looked up.
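A minimal sketch of those unaddressed forms (the typedef is hypothetical):

typedef int T;

void f(T* p, T& r) {
  p->T::~T();   // pseudo-destructor call through a pointer to scalar
  r.T::~T();    // ... and through a reference to scalar; the lookup of the
                // two occurrences of T here is the case not covered by the
                // passages quoted above
}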
See document J16/06-0008 = WG21 N1938 for further discussion of this and related issues, including 244, 305, 399, and 414.
Proposed resolution (June, 2008):
Add a new paragraph following 5.2 [expr.post] paragraph 2 as follows:
When a postfix-expression is followed by a dot . or arrow -> operator, the interpretation depends on the type T of the expression preceding the operator. If the operator is ., T shall be a scalar type or a complete class type; otherwise, T shall be a pointer to a scalar type or a pointer to a complete class type. When T is a (pointer to) a scalar type, the postfix-expression to which the operator belongs shall be a pseudo-destructor call (5.2.4 [expr.pseudo]); otherwise, it shall be a class member access (5.2.5 [expr.ref]).
Change 5.2.4 [expr.pseudo] paragraph 2 as follows:
The left-hand side of the dot operator shall be of scalar type. The left-hand side of the arrow operator shall be of pointer to scalar type. This scalar type The type of the expression preceding the dot operator, or the type to which the expression preceding the arrow operator points, is the object type...
Change 5.2.5 [expr.ref] paragraph 2 as follows:
For the first option (dot) the type of the first expression (the object expression) shall be “class object” (of a complete type) is a class type. For the second option (arrow) the type of the first expression (the pointer expression) shall be “pointer to class object” (of a complete type) is a pointer to a class type. In these cases, the id-expression shall name a member of the class or of one of its base classes.
Add a new paragraph following 3.4 [basic.lookup] paragraph 2 as follows:
In a pseudo-destructor-name that does not include a nested-name-specifier, the type-names are looked up as types in the context of the complete expression.
Delete the last sentence of 3.4.5 [basic.lookup.classref] paragraph 2:
If the id-expression in a class member access (5.2.5 [expr.ref]) is an unqualified-id, and the type of the object expression is of a class type C, the unqualified-id is looked up in the scope of class C. If the type of the object expression is of pointer to scalar type, the unqualified-id is looked up in the context of the complete postfix-expression.
3.4.2 [basic.lookup.argdep] paragraph 2 excludes dependent parameter types and return types from consideration in determining the associated classes and namespaces of a function template. Presumably this means that an example like
namespace N {
  template<class T> struct A { };
  void f(void (*)());
}
template <class T> void g(T, N::A<T>);
void g();
int main() {
  f(g);
}
is ill-formed because the second parameter of the function template g does not add namespace N to the list of associated namespaces. This was probably unintentional.
See also issue 1015.
Notes from the November, 2010 meeting:
The CWG agreed that the rules should be changed to make this example well-formed.
Proposed resolution (November, 2010):
Change 3.4.2 [basic.lookup.argdep] paragraph 2 as follows:
...In addition, if the argument is the name or address of a set of overloaded functions and/or function templates, its associated classes and namespaces are the union of those associated with each of the members of the set, i.e., the classes and namespaces associated with its (non-dependent) parameter types and return type. Additionally, if the aforementioned set of overloaded functions is named with a template-id, its associated classes and namespaces are those of its type template-arguments and its template template-arguments.
This resolution also resolves issue 1015.
[Drafting note: It's not clear that we need the inserted text above, because for the example in issue 1015, the type N::S is already represented in the type of the function address, so there is no need to pull it from template arguments. For cases where template parameters are not represented in the function type, it's not clear that we want ADL to reach further.]
Currently, according to 3.4.2 [basic.lookup.argdep] paragraph 2, explicit template arguments in a function argument do not contribute to the associated namespaces in a function call, although they plausibly should in an example like the following:
namespace N {
struct S { };
void f(void (*)(S));
};
template<typename T> void g(T);
void h() {
f(g<N::S>); // Should find N::f
}
See also issue 997.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 997.
3.7.4.3 [basic.stc.dynamic.safety] paragraph 4 only prohibits the dereferencing and deallocation of non-safely-derived pointers. This is insufficient. Explicit deallocation of storage is described as rendering invalid all pointers to that storage, with the result that all operations on such a pointer value cause undefined behavior (3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 4). The same should be true if the storage pointed to by a non-safely-derived pointer is garbage collected. In particular, the promise of objects having distinct addresses (1.8 [intro.object] paragraph 6) should not apply if one of those objects is designated by a non-safely-derived pointer.
Proposed resolution (November, 2010):
Change 3.7.4.3 [basic.stc.dynamic.safety] paragraph 4 as follows:
...Alternatively, an implementation may have strict pointer safety, in which case, if a pointer value that is not a safely-derived pointer value is dereferenced or deallocated, and an invalid pointer value, unless the referenced complete object is of dynamic storage duration and has not previously been declared reachable (20.9.10 [util.smartptr]), the behavior is undefined. [Note: this The effect of using an invalid pointer value (including passing it to a deallocation function) is undefined, see 3.7.4.2 [basic.stc.dynamic.deallocation]. This is true even if the unsafely-derived pointer value might compare equal to some safely-derived pointer value. —end note] It is implementation defined...
An lvalue referring to an out-of-lifetime non-POD class object can be used only in limited ways, subject to the restrictions in 3.8 [basic.life] paragraph 6:
if the original object will be or was of a non-POD class type, the program has undefined behavior if:
the lvalue is used to access a non-static data member or call a non-static member function of the object, or
the lvalue is implicitly converted (4.10 [conv.ptr]) to a reference to a base class type, or
the lvalue is used as the operand of a static_cast (5.2.9 [expr.static.cast]) (except when the conversion is ultimately to cv char& or cv unsigned char&), or
the lvalue is used as the operand of a dynamic_cast (5.2.7 [expr.dynamic.cast]) or as the operand of typeid.
There are at least a couple of questionable things in this list. First, there is no “implicit conversion to a reference to a base class,” as assumed by the second bullet. Presumably this is intended to say that the lvalue is bound to a reference to a base class, and the cross-reference should be to 8.5.3 [dcl.init.ref], not to 4.10 [conv.ptr] (which deals with pointer conversions). However, even given that adjustment, it is not clear why it is forbidden to bind a reference to a non-virtual base class of an out-of-lifetime object, as that is just an address offset calculation. (Binding to a virtual base, of course, would require access to the value of the object and thus cannot be done outside the object's lifetime.)
The third bullet also appears questionable. It's not clear why static_cast is discussed at all here, as the only permissible static_cast conversions involving reference types and non-POD classes are to references to base or derived classes and to the same type, modulo cv-qualification; if implicit “conversion” to a base class reference is forbidden in the second bullet, why would an explicit conversion be permitted in the third? Was this intended to refer to reinterpret_cast? Also, is there a reason to allow char types but disallow array-of-char types (which are more likely to be useful than a single char)?
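For example, binding a reference to a non-virtual base of an out-of-lifetime object presumably involves only an address offset calculation:

  struct B { int b; };
  struct D : B { int d; };             // B is a non-virtual base of D

  void g() {
    void* storage = operator new(sizeof(D));
    D* p = static_cast<D*>(storage);   // no D object has been constructed yet
    B& rb = *p;                        // binds to the non-virtual base subobject:
                                       // only an address offset calculation
    operator delete(storage);
  }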
Proposed resolution (March, 2008):
Change 3.8 [basic.life] paragraph 5 as follows:
...If the object will be or was of a non-trivial class type, the program has undefined behavior if:
the pointer is used to access a non-static data member or call a non-static member function of the object, or
the pointer is implicitly converted (4.10 [conv.ptr]) to a pointer to a virtual base class type, or
the pointer is used as the operand of a static_cast (5.2.9 [expr.static.cast]) (except when the conversion is to void*, or to void* and subsequently to char*, or unsigned char*). pointer to void, or to pointer to void and subsequently to pointer to cv char or pointer to cv unsigned char, or
the pointer is used as the operand of a dynamic_cast (5.2.7 [expr.dynamic.cast])...
Change 3.8 [basic.life] paragraph 6 as follows:
...if the original object will be or was of a non-trivial class type, the program has undefined behavior if:
the lvalue is used to access a non-static data member or call a non-static member function of the object, or
the lvalue is implicitly converted (4.10 [conv.ptr]) bound to a reference to a virtual base class type (8.5.3 [dcl.init.ref]), or
the lvalue is used as the operand of a static_cast (5.2.9 [expr.static.cast]) except when the conversion is ultimately to cv char& or cv unsigned char&, or
the lvalue is used as the operand of a dynamic_cast (5.2.7 [expr.dynamic.cast]) or as the operand of typeid.
[Drafting notes: Paragraph 5 was changed to track the changes to paragraph 6. See also the resolution for issue 658.]
The current treatment of constexpr constructors and constant expressions does not deal with the initializers for non-static data members, which should also be required to be constant expressions.
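For example, in something like the following the full-expression in the initializer for the non-static data member is not a constant expression, which presumably should affect whether S is a literal type and whether initialization by S() can be a constant expression:

  int f();             // not constexpr

  struct S {
    int i = f();       // brace-or-equal-initializer that is not a constant expression
    constexpr S() { }  // the initialization of i uses that initializer
  };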
Proposed resolution (November, 2010):
Change 3.6.2 [basic.start.init] paragraph 2 as follows:
if an object with static or thread storage duration is initialized by a constructor call, if the constructor is a constexpr constructor, if all constructor arguments are constant expressions (including conversions), and if, after function invocation substitution (7.1.5 [dcl.constexpr]), every constructor call and full-expression in the mem-initializers and in the brace-or-equal-initializers for non-static data members is a constant expression
Change 3.9 [basic.types] paragraph 10 as follows (wording assumes the proposed resolution of issue 981)
A type is a literal type if it is:
a scalar type; or
a class type (clause 9 [class]) that has all of the following properties:
it has a trivial destructor,
every constructor call and full-expression in the brace-or-equal-initializers for non-static data members (if any) is a constant expression (5.19 [expr.const]),
it is an aggregate type (8.5.1 [dcl.init.aggr]) or has at least one constexpr constructor or constructor template that is not a copy or move constructor, and
it has all non-static data members and base classes of literal types; or
an array of literal type.
According to 3.9.1 [basic.fundamental] paragraph 9,
Any expression can be explicitly converted to type cv void (5.4 [expr.cast]). An expression of type void shall be used only as an expression statement (6.2 [stmt.expr]), as an operand of a comma expression (5.18 [expr.comma]), as a second or third operand of ?: (5.16 [expr.cond]), as the operand of typeid, or as the expression in a return statement (6.6.3 [stmt.return]) for a function with the return type void.
First, this is self-contradictory: if “any expression” can be converted to void, why is such a conversion not listed among the acceptable uses of an expression of type void?
Second, presumably an expression of type void can be used as an operand of decltype, but this use is not listed.
Finally, there are several places in the Standard that speak of expressions having a cv-qualified void type (5.16 [expr.cond] paragraph 2, 6.6.3 [stmt.return] paragraph 3). However, an expression of type void is a non-class prvalue, and there are no cv-qualified non-class prvalues (3.10 [basic.lval] paragraph 4).
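For example, the following would presumably be valid, although decltype is not among the listed uses:

  void f();
  typedef decltype(f()) V;   // V is void, but the operand of decltype is an
                             // expression of type void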
Proposed resolution (February, 2011):
Change 3.9.1 [basic.fundamental] paragraph 9 as follows:
...Any expression can be explicitly converted to type cv void (5.4 [expr.cast]). An expression of type void shall be used only as an expression statement (6.2 [stmt.expr]), as an operand of a comma expression (5.18 [expr.comma]), as a second or third operand of ?: (5.16 [expr.cond]), as the operand of typeid or decltype, or as the expression in a return statement (6.6.3 [stmt.return]) for a function with the return type void, or as the operand of an explicit conversion to type cv void.
Change 5.16 [expr.cond] paragraph 2 as follows:
If either the second or the third operand has type (possibly cv-qualified) void, then...
Change 6.6.3 [stmt.return] paragraph 3 as follows:
A return statement with an expression of type “cv void” can be used only in functions with a return type of cv void; the expression is evaluated just before the function returns to its caller.
Now that alignment can be applied directly to class types, the current wording of the note at the end of 3.11 [basic.align] paragraph 3 is no longer correct:
[Note: every over-aligned type is or contains a class type with a non-static data member to which an extended alignment has been applied. —end note]
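For example, assuming 16 is stricter than the alignment of every fundamental type on the implementation, X below is over-aligned even though no non-static data member has an extended alignment applied to it:

  struct alignas(16) X { char c; };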
Proposed resolution (November, 2010):
Change 3.11 [basic.align] paragraph 3 as follows:
[Note: every over-aligned type is or contains a class type with a non-static data member to which an extended alignment has been applied to which extended alignment applies (possibly through a non-static data member). —end note]
4.1 [conv.lval] paragraph 1 says,
If the object to which the lvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, a program that necessitates this conversion has undefined behavior.
I think there are at least three related issues around this specification:
Presumably assigning a valid value to an uninitialized object allows it to participate in the lvalue-to-rvalue conversion without undefined behavior (otherwise the number of programs with defined behavior would be vanishingly small :-). However, the wording here just says "uninitialized" and doesn't mention assignment.
There's no exception made for unsigned char types. The wording in 3.9.1 [basic.fundamental] was carefully crafted to allow use of unsigned char to access uninitialized data so that memcpy and such could be written in C++ without undefined behavior, but this statement undermines that intent.
It's possible to get an uninitialized rvalue without invoking the lvalue-to-rvalue conversion. For instance:
struct A {
  int i;
  A() { }          // no init of A::i
};
int j = A().i;     // uninitialized rvalue
There doesn't appear to be anything in the current IS wording that says that this is undefined behavior. My guess is that we thought that in placing the restriction on use of uninitialized objects in the lvalue-to-rvalue conversion we were catching all possible cases, but we missed this one.
In light of the above, I think the discussion of uninitialized objects ought to be removed from 4.1 [conv.lval] paragraph 1. Instead, something like the following ought to be added to 3.9 [basic.types] paragraph 4 (which is where the concept of "value" is introduced):
Any use of an indeterminate value (5.3.4 [expr.new], 8.5 [dcl.init], 12.6.2 [class.base.init]) of any type other than char or unsigned char results in undefined behavior.
John Max Skaller:
A().i had better be an lvalue; the rules are wrong. Accessing a member of a structure requires it to be converted to an lvalue; the above calculation is 'as if':
struct A {
  int i;
  A *get() { return this; }
};
int j = (*A().get()).i;
and you can see the bracketed expression is an lvalue.
A consequence is:
int &j= A().i; // OK, even if the temporary evaporates
j now refers to a 'destroyed' value. Any use of j is an error. But the binding at the time is valid.
Proposed Resolution (November, 2006):
Add the indicated words to 3.9 [basic.types] paragraph 4:
... For trivial types, the value representation is a set of bits in the object representation that determines a value, which is one discrete element of an implementation-defined set of values. Any use of an indeterminate value (5.3.4 [expr.new], 8.5 [dcl.init], 12.6.2 [class.base.init]) of a type other than unsigned char results in undefined behavior.
Change 4.1 [conv.lval] paragraph 1 as follows:
If the object to which the lvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, a program that necessitates this conversion has undefined behavior.
Additional note (May, 2008):
The C committee is dealing with a similar issue in their DR336. According to this analysis, they plan to take almost the opposite approach to the one described above by augmenting the description of their version of the lvalue-to-rvalue conversion. The CWG did not consider that access to an unsigned char might still trap if it is allocated in a register and needs to reevaluate the proposed resolution in that light. See also issue 129.
Split off from issue 315.
Incidentally, another thing that ought to be cleaned up is the inconsistent use of "indirection" and "dereference". We should pick one.
Proposed resolution (December, 2006):
Change 5.3.1 [expr.unary.op] paragraph 1 as follows:
The unary * operator performs indirection dereferences a pointer value: the expression to which it is applied shall be a pointer...
Change 8.3.4 [dcl.array] paragraph 8 as follows:
The results are added and indirection applied values are added and the result is dereferenced to yield an array (of five integers), which in turn is converted to a pointer to the first of the integers.
Change 8.3.5 [dcl.fct] paragraph 9 as follows:
The binding of *fpi(int) is *(fpi(int)), so the declaration suggests, and the same construction in an expression requires, the calling of a function fpi, and then using indirection through dereferencing the (pointer) result to yield an integer. In the declarator (*pif)(const char*, const char*), the extra parentheses are necessary to indicate that indirection through dereferencing a pointer to a function yields a function, which is then called.
Change the index for * and “dereferencing” no longer to refer to “indirection.”
[Drafting note: 26.6.9 [template.indirect.array] requires no change. Many more places in the current wording use “dereferencing” than “indirection.”]
According to the C++ Standard section 5.3.4 [expr.new] paragraph 21 it is unspecified whether the allocation function is called before evaluating the constructor arguments or after evaluating the constructor arguments but before entering the constructor.
On top of that, paragraph 17 of the same section insists that
If any part of the object initialization described above [Footnote: This may include evaluating a new-initializer and/or calling a constructor.] terminates by throwing an exception and a suitable deallocation function is found, the deallocation function is called to free the memory in which the object was being constructed... If no unambiguous matching deallocation function can be found, propagating the exception does not cause the object's memory to be freed...
Now suppose we have:
struct copy_throw {
  copy_throw(const copy_throw&) { throw std::logic_error("Cannot copy!"); }
  copy_throw(long, copy_throw) { }
  copy_throw() { }
};
int main()
try {
  copy_throw an_object,
    /* undefined behaviour */ *a_pointer = ::new copy_throw(0, an_object);
  return 0;
}
catch(const std::logic_error&) { }
Here the new-expression '::new copy_throw(0, an_object)' throws an exception when evaluating the constructor's arguments and before the allocation function is called. However, 5.3.4 [expr.new] paragraph 17 prescribes that in such a case the implementation shall call the deallocation function to free the memory in which the object was being constructed, given that a matching deallocation function can be found.
So a call to the Standard library deallocation function '::operator delete(void*)' shall be issued, but what argument is an implementation supposed to supply to the deallocation function? As per 5.3.4 [expr.new] paragraph 17 - the argument is the address of the memory in which the object was being constructed. Given that no memory has yet been allocated for the object, this will qualify as using an invalid pointer value, which is undefined behaviour by virtue of 3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 4.
Suggested resolution:
Change the first sentence of 5.3.4 [expr.new] paragraph 17 to read:
If the memory for the object being created has already been successfully allocated and any part of the object initialization described above...
Proposed resolution (March, 2008):
Change 5.3.4 [expr.new] paragraph 18 as follows:
If any part of the object initialization described above [Footnote: ...] terminates by throwing an exception, storage has been obtained for the object, and a suitable deallocation function can be found, the deallocation function is called...
Does an explicit temporary of an integral type qualify as an integral constant expression? For instance,
void* p = int(); // well-formed?
It would appear to be, since int() is an explicit type conversion according to 5.2.3 [expr.type.conv] (at least, it's described in a section entitled "Explicit type conversion") and type conversions to integral types are permitted in integral constant expressions (5.19 [expr.const]). However, this reasoning is somewhat tenuous, and some at least have argued otherwise.
Note (March, 2008):
This issue should be closed as NAD as a result of the rewrite of 5.19 [expr.const] in conjunction with the constexpr proposal.
One of the bullets in 5.19 [expr.const] paragraph 2 says,
a type conversion from a pointer or pointer-to-member type to a literal type
This appears to prohibit conversion from one pointer type to another; for example,
int x;
constexpr void* p = &x; // ill-formed
This seems excessive and probably unintentional.
Proposed resolution (November, 2010):
Change 5.19 [expr.const] paragraph 2 as follows:
a type conversion from a pointer or pointer-to-member type to a literal an integral type [Note: a user-defined conversion invokes a function —end note];
[Note: the proposed resolution of issue 1188 edits this bullet in an incompatible fashion.]
The status of the following example is not clear:
union U {
  float f;
  unsigned long u;
};
constexpr U u1 = { 1.0 };
constexpr unsigned long u2 = u1.u;
This might be ill-formed because the aliasing causes undefined behavior, which should make the expression not a constant expression. However, a similar example using a permitted aliasing would presumably be acceptable:
union U {
  unsigned char c[sizeof(double)];
  double d;
};
constexpr U c1u = { 0x12, 0x34 /* etc. */ };
constexpr double c1 = c1u.d;
One suggestion was that unions should not be considered literal types, but some in the ensuing discussion deemed that excessive. That also would not address similar examples using reinterpret_cast, which is currently also permitted in constant expressions.
Proposed resolution (November, 2010):
Change 5.19 [expr.const] paragraph 2 as follows:
an lvalue-to-rvalue conversion...
an lvalue-to-rvalue conversion (4.1 [conv.lval]) that is applied to a glvalue that refers to a non-active member of a union or a subobject thereof;
...
a type conversion from a pointer or pointer-to-member type to a literal type [Note: a user-defined conversion invokes a function —end note] a reinterpret_cast (5.2.10 [expr.reinterpret.cast]);
[Note: the proposed resolution of issue 1098 edits this bullet in an incompatible fashion.]
It seems unfortunate that the beginning of a C-style for loop is parsed using a decl-specifier-seq, whereas the beginning of a range-based for loop is parsed using a type-specifier-seq, so that we don't know what constraints we are trying to apply to the specifiers until we see, or don't see, a :. The inconsistency between decl-specifier-seq and type-specifier-seq seems gratuitous and inconvenient.
Proposed resolution (November, 2010):
Change the grammar of 6.5 [stmt.iter] paragraph 1 as follows:
Add the following as a new paragraph at the end of 6.5.4 [stmt.ranged]:
In the decl-specifier-seq of a for-range-declaration, each decl-specifier shall be either a type-specifier or constexpr.
The grammar for declarations includes both the simple-declaration and the attribute-declaration nonterminals.
An attribute-specifier followed by a semicolon could thus be parsed as either an attribute-declaration or as a simple-declaration that omits the optional decl-specifier-seq and init-declarator-list, and the current wording does not disambiguate the two. (There doesn't seem to be a compelling need for attribute-declaration as a separate nonterminal, given that simple-declaration can accommodate that form.)
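For example, a declaration like the following (using a hypothetical vendor attribute) could be parsed either way under the current grammar:

  [[vendor::attr]] ;   // attribute-declaration, or simple-declaration with the
                       // decl-specifier-seq and init-declarator-list omitted?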
Proposed resolution (February, 2011):
Change 7 [dcl.dcl] paragraph 1 as follows:
...The optional attribute-specifier-seq in a simple-declaration appertains to each of the entities declared by the declarators; it shall not appear if the optional of the init-declarator-list is omitted...
The grammar for an alias-declaration does not have a place for an attribute-specifier, although a typedef declaration does. Since an alias-declaration is essentially a different syntactic form of a typedef declaration (7.1.3 [dcl.typedef] paragraph 2), this could be surprising.
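For example, with a hypothetical vendor attribute, an attribute can appertain to the typedef-name declared by a typedef declaration but currently has no corresponding position in an alias-declaration:

  typedef int handle [[vendor::attr]];    // appertains to handle
  using handle2 [[vendor::attr]] = int;   // not covered by the current grammar;
                                          // allowed under the proposed resolution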
Proposed resolution (February, 2011):
Change the grammar in 7 [dcl.dcl] paragraph 1 as follows:
Change 7.1.3 [dcl.typedef] paragraph 2 as follows:
A typedef-name can also be introduced by an alias-declaration. The identifier following the using keyword becomes a typedef-name and the optional attribute-specifier-seq following the identifier appertains to that typedef-name. It has the same semantics...
7.1.5 [dcl.constexpr] paragraph 5 says,
If the instantiated template specialization of a constexpr function template or member function of a class template would fail to satisfy the requirements for a constexpr function or constexpr constructor, that specialization is not a constexpr function or constexpr constructor. [Note: if the function is a member function it will still be const as described below. Implementations are encouraged to issue a warning if a function is rendered not constexpr by a non-dependent construct. —end note]
A non-dependent error in a function template renders it ill-formed with no diagnostic required (14.6 [temp.res] paragraph 8). A similar approach is being taken in the proposed resolution of issue 1125 with respect to constexpr functions. It would be more consistent to treat constexpr function templates in the same way, along the lines of
If no specialization of the template would be constexpr, the program is ill-formed, no diagnostic required.
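For example, under such wording something like the following would presumably be ill-formed, no diagnostic required, because no specialization of f could satisfy the constexpr requirements:

  int g();                             // not constexpr

  template<typename T>
  constexpr int f(T) { return g(); }   // every specialization calls g()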
Proposed resolution (November, 2010):
Change 7.1.5 [dcl.constexpr] paragraph 6 as follows:
If the instantiated template specialization of a constexpr function template or member function of a class template would fail to satisfy the requirements for a constexpr function or constexpr constructor, that specialization is not a constexpr function or constexpr constructor. [Note: if the function is a member function it will still be const as described below. Implementations are encouraged to issue a warning if a function is rendered not constexpr by a non-dependent construct. —end note] If no specialization of the template would yield a constexpr function or constexpr constructor, the program is ill-formed; no diagnostic required.
7.1.5 [dcl.constexpr] restricts the constexpr specifier to object and function declarations. Especially given the support for reference types in constexpr functions, and considering that constexpr pointer declarations are permitted, there does not seem to be a good reason for excluding constexpr references.
(See also issue 1195.)
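For example, the following would presumably be permitted under the proposed change:

  int i;                  // an object with static storage duration
  constexpr int& r = i;   // currently prohibited: constexpr is restricted to
                          // objects and functions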
Proposed resolution (November, 2010):
Change 7.1.5 [dcl.constexpr] paragraph 1 as follows:
The constexpr specifier shall be applied only to the definition of an object a variable, the declaration of a function...
Change 7.1.5 [dcl.constexpr] paragraph 8 as follows:
A constexpr specifier used in an object declaration declares the object as const. Such an object shall be initialized. If it is initialized by a constructor call, the constructor shall be a constexpr constructor and every argument to the constructor shall be a constant expression. Otherwise, or if a constexpr specifier is used in a reference declaration, every full-expression that appears in its initializer shall be a constant expression...
7.1.5 [dcl.constexpr] paragraph 3 is overly restrictive in requiring that reference parameter and return types of a constexpr function or constructor must refer to a literal type. 5.19 [expr.const] paragraph 2 already prevents any problematic uses of lvalues of non-literal types, and it permits use of pointers to non-literal types as address constants. The same should be permitted via reference parameters and return types of constexpr functions.
(See also issue 1194.)
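For example, something like the following would presumably become permissible, since only references to the non-literal type are involved:

  struct NonLiteral {
    NonLiteral();              // non-trivial default constructor: not a literal type
  };
  NonLiteral nl;               // an object with static storage duration

  constexpr NonLiteral& self(NonLiteral& x) { return x; }
  constexpr NonLiteral* p = &self(nl);   // an address constant expression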
Proposed resolution (November, 2010):
Change 3.9 [basic.types] paragraph 10 as follows:
A type is a literal type if it is:
a scalar type; or
a reference type; or
...
Change 7.1.5 [dcl.constexpr] paragraph 3 as follows:
The definition of a constexpr function shall satisfy the following constraints:
it shall not be virtual (10.3 [class.virtual])
its return type shall be a literal type or a reference to literal type
each of its parameter types shall be a literal type or a reference to literal type
...
Change 7.1.5 [dcl.constexpr] paragraph 4 as follows:
The definition of a constexpr constructor shall satisfy the following constraints:
each of its parameter types shall be a literal type or a reference to literal type;
...
A class with a virtual base should not be allowed to have a constexpr constructor.
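For example, the following would presumably be ill-formed under such a restriction:

  struct B { int i; constexpr B() : i(0) { } };
  struct D : virtual B {
    constexpr D() { }   // ill-formed: the class has a virtual base class
  };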
Proposed resolution (November, 2010):
Add the following bullet to the list in 7.1.5 [dcl.constexpr] paragraph 4:
the class shall not have virtual base classes;
The constraints on type-specifiers given in 7.1.6 [dcl.type] paragraphs 2 and 3 (at most one type-specifier except as specified, at least one type-specifier, no redundant cv-qualifiers) are couched in terms of decl-specifier-seqs and declarations. However, they should also apply to constructs that are not syntactically declarations and that are defined to use type-specifier-seqs, including 5.3.4 [expr.new], 6.6 [stmt.jump], 8.1 [dcl.name], and 12.3.2 [class.conv.fct].
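For example, the redundant cv-qualifier in the following new-expression is presumably intended to be ill-formed, but the current constraints are stated only for declarations:

  void h() {
    new const const int();   // redundant cv-qualifier in a type-specifier-seq
  }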
Proposed resolution (March, 2008):
Change 7.1.6 [dcl.type] paragraph 3 as follows:
At In a complete type-specifier-seq or in a complete decl-specifier-seq of a declaration, at least one type-specifier that is not a cv-qualifier is required in a declaration shall appear unless it the declaration declares a constructor, destructor or conversion function.
(Note: paper N2546, voted into the Working Draft in February, 2008, addresses part of this issue.)
Given
int&& f();
int i;
it is surprising that decltype(f()) and decltype(static_cast<int&&>(i)) are not the same type.
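Under the proposed resolution both would presumably be int&&; for example:

  #include <type_traits>

  int&& f();
  int i;

  static_assert(std::is_same<decltype(f()), int&&>::value, "");
  static_assert(std::is_same<decltype(static_cast<int&&>(i)), int&&>::value, "");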
Proposed resolution (November, 2010):
Change 7.1.6.2 [dcl.type.simple] paragraph 4 as follows:
The type denoted by decltype(e) is defined as follows:
if e is an unparenthesized id-expression or a class member access (5.2.5 [expr.ref]), decltype(e) is the type of the entity named by e. If there is no such entity, or if e names a set of overloaded functions, the program is ill-formed;
otherwise, if e is a function call (5.2.2 [expr.call]) or an invocation of an overloaded operator (parentheses around e are ignored), decltype(e) is the return type of the statically chosen function an xvalue, decltype(e) is T&&, where T is the type of e;
otherwise, if e is an lvalue, decltype(e) is T&, where T is the type of e;
otherwise, decltype(e) is the type of e.
The current wording of 7.5 [dcl.link] paragraph 4 is:
...A C language linkage is ignored for the names of class members and the member function type of class member functions...
This has two problems. First, it sounds as if a class member function has a “member function type,” while in fact the type of a class member function is an ordinary function type (cf 9.2 [class.mem] paragraph 11).
Second, even if that problem is fixed, it is not accurate to say that a C language linkage is “ignored” for the type of a member function. It does not affect the language linkage of the type of the member function, but it does affect the language linkage of any function declarators appearing in the parameter and return types of the function and thus the type of the function.
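For example, in the following the member function mf does not itself acquire C language linkage, but its parameter pf has type “pointer to function with C language linkage,” which does affect the type of mf:

  extern "C" {
    struct S {
      void mf(void (*pf)());   // pf: pointer to function with C language linkage
    };
  }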
Proposed resolution (November, 2010):
Change 7.5 [dcl.link] paragraph 4 as follows:
...A C language linkage is ignored for in determining the language linkage of the names of class members and the member function type of class member functions...
An attribute-argument-clause should be allowed to consist solely of (), i.e., with no balanced-token-seq. Furthermore, the grammar for balanced-token should make the balanced-token-seq optional. Both of these goals could be accomplished by making the balanced-token optional in the first production of the rule for balanced-token-seq.
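For example, with a hypothetical vendor attribute, the following empty argument clause would become well-formed:

  [[vendor::attr()]] int x;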
Proposed resolution (February, 2011):
Change the grammar of 7.6.1 [dcl.attr.grammar] paragraph 1 as follows:
According to 7.6.2 [dcl.align] paragraph 5,
The combined effect of all alignment attributes in a declaration shall not specify an alignment that is less strict than the alignment that would otherwise be required for the entity being declared.
“...would otherwise be required” could be read as referring to the alignment set by another declaration of the entity. However, it was intended to prevent specifying an alignment smaller than the natural alignment the entity would have in the absence of an align attribute. The wording should be changed to make that clearer.
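For example, under the intended reading the following is ill-formed on a typical implementation, because the requested alignment is less strict than the natural alignment of double:

  alignas(1) double d;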
Proposed resolution (February, 2011):
Change 7.6.2 [dcl.align] paragraph 5 as follows:
The combined effect of all alignment-specifiers in a declaration shall not specify an alignment that is less strict than the alignment that would otherwise be required for the entity being declared if all alignment-specifiers were ignored (including those in other declarations).
According to 8.3 [dcl.meaning] paragraph 1,
A declarator-id shall not be qualified except for the definition of a member function (9.3 [class.mfct]) or static data member (9.4 [class.static]) outside of its class, the definition or explicit instantiation of a function or variable member of a namespace outside of its namespace, or the definition of a previously declared explicit specialization outside of its namespace, or the declaration of a friend function that is a member of another class or namespace (11.4 [class.friend]). When the declarator-id is qualified, the declaration shall refer to a previously declared member of the class or namespace to which the qualifier refers...
This restriction prohibits examples like the following:
void f();
void ::f();          // error: qualified declarator

namespace N {
  void f();
  void N::f() { }    // error: qualified declarator
}
There doesn't seem to be any good reason for disallowing such declarations, and a number of implementations accept them in spite of the Standard's prohibition. Should the Standard be changed to allow them?
Notes from the April, 2006 meeting:
In discussing issue 548, the CWG agreed that the prohibition of qualified declarators inside their namespace should be removed.
Proposed resolution (October, 2006):
Remove the indicated words from 8.3 [dcl.meaning] paragraph 1:
...An unqualified-id occurring in a declarator-id shall be a simple identifier except for the declaration of some special functions (12.3 [class.conv], 12.4 [class.dtor], 13.5 [over.oper]) and for the declaration of template specializations or partial specializations (). A declarator-id shall not be qualified except for the definition of a member function (9.3 [class.mfct]) or static data member (9.4 [class.static]) outside of its class, the definition or explicit instantiation of a function or variable member of a namespace outside of its namespace, or the definition of a previously declared explicit specialization outside of its namespace, or the declaration of a friend function that is a member of another class or namespace (11.4 [class.friend]). When the declarator-id is qualified, the declaration shall refer to a previously declared member of the class or namespace to which the qualifier refers, and the member shall not have been introduced by a using-declaration in the scope of the class or namespace nominated by the nested-name-specifier of the declarator-id...
[Drafting note: The omission of “outside of its class” here does not give permission for redeclaration of class members; that is still prohibited by 9.2 [class.mem] paragraph 1. The removal of the enumeration of the kinds of declarations in which a qualified-id can appear does allow a typedef declaration to use a qualified-id, which was not permitted before; if that is undesirable, the prohibition can be reinstated here.]
The following example appears to be well-formed, with the partial specialization matching the type of Y::f(), even though it is rejected by many compilers:
template<class T> struct X;
template<class R> struct X< R() > { };

template<class F, class T> void test(F T::* pmf) {
  X<F> x;
}

struct Y {
  void f() { }
};

int main() {
  test( &Y::f );
}
However, 8.3.5 [dcl.fct] paragraph 4 says,
A cv-qualifier-seq shall only be part of the function type for a non-static member function, the function type to which a pointer to member refers, or the top-level function type of a function typedef declaration. The effect of a cv-qualifier-seq in a function declarator is not the same as adding cv-qualification on top of the function type. In the latter case, the cv-qualifiers are ignored.
This specification makes it impossible to write a partial specialization for a const member function:
template<class R> struct X<R() const> { };
A template argument is not one of the permitted contexts for cv-qualification of a function type. This restriction should be removed.
Notes from the April, 2006 meeting:
During the meeting the CWG was of the opinion that the “R() const” specialization would not match the const member function even if it were allowed and so classified the issue as NAD. Questions have been raised since the meeting, however, suggesting that the template argument in the partial specialization would, in fact, match the type of a const member function (see, for example, the very similar usage via typedefs in 9.3 [class.mfct] paragraph 9). The issue is thus being left open for renewed discussion at the next meeting.
Proposed resolution (June, 2008):
Change 8.3.5 [dcl.fct] paragraph 7 as follows:
A cv-qualifier-seq shall only be part of the function type for a non-static member function, the function type to which a pointer to member refers, or the top-level function type of a function typedef declaration, or the top-level function type of a type-id that is a template-argument for a type template-parameter. The effect... A ref-qualifier shall only be part of the function type for a non-static member function, the function type to which a pointer to member refers, or the top-level function type of a function typedef declaration, or the top-level function type of a type-id that is a template-argument for a type template-parameter. The return type...
8.3.5 [dcl.fct] paragraph 13 says,
The type T of the declarator-id of the function parameter pack shall contain a template parameter pack; each template parameter pack in T is expanded by the function parameter pack.
I think that's incorrect. For example, I think
template<class... P> void f(int (* ...p)[sizeof...(P)]);
should be an error, and that the function parameter pack p does not expand the template parameter pack P in this case.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 778.
According to the logic in 8.5.3 [dcl.init.ref] paragraph 5, the following example should create a temporary array and bind the reference to that temporary:
const char (&p)[10] = "123";
That is presumably not intended (issue 450 calls a similar outcome for rvalue arrays “implausible”). Current implementations reject this example.
Rationale (August, 2010):
The Standard does not describe initialization of array temporaries, so a program that requires such is ill-formed.
Note (October, 2010):
Although in general an object of array type cannot be initialized from another object of array type, there is special provision in 8.5.2 [dcl.init.string] for doing so when the source object is a string literal, as in this example. The issue is thus being reopened for further consideration in this light.
Notes from the November, 2010 meeting:
The CWG agreed that the current wording appears to permit this example but still felt that array temporaries are undesirable. Wording should be added to disallow this usage.
Proposed resolution (November, 2010):
Change 8.5.3 [dcl.init.ref] paragraph 5 as follows:
...
If the initializer expression is a string literal (2.14.5 [lex.string]), the program is ill-formed.
Otherwise, a temporary of type...
(See also issue 1232, which argues in favor of allowing array temporaries.)
The current wording of the WP appears not to allow for list-initialization of a reference like the following:
int i;
int& ir{i};
First, 8.5 [dcl.init] paragraph 16 bullet 1 reads,
If the initializer is a braced-init-list, the object is list-initialized (8.5.4).
A reference is not an object, so this does not appear to apply; however, the second bullet sends reference initialization off to 8.5.3 [dcl.init.ref], which does not cover braced-init-lists: paragraph 5 of that section deals only with initializer expressions, and a braced-init-list is not an expression.
Assuming that the use of “object” in the first bullet is just an oversight, 8.5.4 [dcl.init.list] also does not cover the case of a reference to a scalar type whose initializer is a braced-init-list with a single element. Bullet 7 of paragraph 3 reads,
Otherwise, if the initializer list has a single element, the object is initialized from that element
and would cover this case except that, again, a reference is not an object. As a result, such an initialization would end up in the last bullet and consequently be ill-formed.
Presumably all that is needed is to add “or reference” to the appropriate bullets of 8.5 [dcl.init] paragraph 16 and 8.5.4 [dcl.init.list] paragraph 3.
Proposed resolution (November, 2010):
Change 8.5 [dcl.init] paragraph 16 bullet 1 as follows:
If the initializer is a braced-init-list, the object or reference is list-initialized (8.5.4 [dcl.init.list]).
Change 8.5.4 [dcl.init.list] paragraph 3 bullet 7 as follows:
Otherwise, if the initializer list has a single element, the object or reference is initialized from that element; if a narrowing conversion (see below) is required to convert the element to T, the program is ill-formed.
The definition of a POD struct in 9 [class] paragraph 9 is not (but should be) restricted to non-union class types.
Proposed resolution (November, 2010):
Change 9 [class] paragraph 9 as follows:
A POD struct109 is a non-union class that is both a trivial class and a standard-layout class...
According to 9.2 [class.mem] paragraph 6,
The decl-specifier-seq is omitted in constructor, destructor, and conversion function declarations only.
This is incorrect, as some decl-specifiers (explicit, virtual, inline, constexpr) are permitted in these declarations. Conversely, C.1.5 [diff.dcl], “Banning implicit int,” says,
In C++ a decl-specifier-seq must contain a type-specifier.
This is also incorrect for these declarations.
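For example, the following constructor and destructor declarations omit any type-specifier but do contain decl-specifiers:

  struct S {
    explicit S(int);   // explicit is a decl-specifier
    virtual ~S();      // virtual is a decl-specifier
    constexpr S();     // constexpr is a decl-specifier
  };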
Proposed resolution (February, 2011):
Change 9.2 [class.mem] paragraph 7 as follows:
The decl-specifier-seq is may be omitted in constructor, destructor, and conversion function declarations only; when declaring another kind of member the decl-specifier-seq shall contain a type-specifier that is not a cv-qualifier. The member-declarator-list can be omitted...
Change C.1.5 [diff.dcl], “Banning implicit int,” as follows:
In C++ a decl-specifier-seq must contain a type-specifier, unless it is followed by a declarator for a constructor, a destructor, or a conversion function. In the following example...
According to 9.4.2 [class.static.data] paragraph 3,
If a static data member is of const literal type, its declaration in the class definition can specify a brace-or-equal-initializer in which every initializer-clause that is an assignment-expression is a constant expression. A static data member of literal type can be declared in the class definition with the constexpr specifier; if so, its declaration shall specify a brace-or-equal-initializer in which every initializer-clause that is an assignment-expression is a constant expression. [Note: In both these cases, the member may appear in constant expressions. —end note]
However, 5.19 [expr.const] paragraph 2 bullet 7 allows only integral and enumeration types in constant expressions for the const case; the other types require constexpr to be considered constant expressions.
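For example, under the proposed wording:

  struct A {
    static const int i = 42;           // OK: const integral type
    static const double d = 1.0;       // error: in-class initializer requires
                                       // integral or enumeration type, or constexpr
    static constexpr double d2 = 1.0;  // OK
  };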
Proposed resolution (November, 2010):
Change 9.4.2 [class.static.data] paragraph 3 as follows:
If a non-volatile const static data member is of const literal integral or enumeration type, its declaration in the class definition can specify a brace-or-equal-initializer in which every initializer-clause that is an assignment-expression is a constant expression (5.19 [expr.const]). A static data member of literal type can be declared in the class definition with the constexpr specifier...
Can a member of a union be of a class that has a user-declared non-default constructor? The restrictions on union membership in 9.5 [class.union] paragraph 1 only mention default and copy constructors:
An object of a class with a non-trivial default constructor (12.1 [class.ctor]), a non-trivial copy constructor (12.8 [class.copy]), a non-trivial destructor (12.4 [class.dtor]), or a non-trivial copy assignment operator (13.5.3 [over.ass], 12.8 [class.copy]) cannot be a member of a union...
(12.1 [class.ctor] paragraph 11 does say, “a non-trivial constructor,” but it's not clear whether that was intended to refer only to default and copy constructors or to any user-declared constructor. For example, 12.2 [class.temporary] paragraph 3 also speaks of a “non-trivial constructor,” but the cross-references there make it clear that only default and copy constructors are in view.)
Note (March, 2008):
This issue was resolved by the adoption of paper J16/08-0054 = WG21 N2544 (“Unrestricted Unions”) at the Bellevue meeting.
Split off from issue 86.
Should binding a reference to the result of a "," operation whose second operand is a temporary extend the lifetime of the temporary?
const SFileName &C = ( f(), SFileName("abc") );
Notes from the March 2004 meeting:
We think the temporary should be extended.
Proposed resolution (October, 2004):
Change 12.2 [class.temporary] paragraph 2 as indicated:
... In all these cases, the temporaries created during the evaluation of the expression initializing the reference, except the temporary that is the overall result of the expression [Footnote: For example, if the expression is a comma expression (5.18 [expr.comma]) and the value of its second operand is a temporary, the reference is bound to that temporary.] and to which the reference is bound, are destroyed at the end of the full-expression in which they are created and in the reverse order of the completion of their construction...
[Note: this wording partially resolves issue 86. See also issue 446.]
Notes from the April, 2005 meeting:
The CWG suggested a different approach from the 10/2004 resolution, leaving 12.2 [class.temporary] unchanged and adding normative wording to 5.18 [expr.comma] specifying that, if the result of the second operand is a temporary, that temporary is the result of the comma expression as well.
Proposed Resolution (November, 2006):
Add the indicated wording to 5.18 [expr.comma] paragraph 1:
... The type and value of the result are the type and value of the right operand; the result is an lvalue if its right operand is an lvalue, and is a bit-field if its right operand is an lvalue and a bit-field. If the value of the right operand is a temporary (12.2 [class.temporary]), the result is that temporary.
A posting in comp.lang.c++.moderated prompted me to try the following code:
struct S {
  template<typename T, int N> (&operator T())[N];
};
The goal is to have a (deducible) conversion operator template to a reference-to-array type.
This is accepted by several front ends (g++, EDG), but I now believe that 12.3.2 [class.conv.fct] paragraph 1 actually prohibits this. The issue here is that we do in fact specify (part of) a return type.
OTOH, I think it is legitimate to expect that this is expressible in the language (preferably not using the syntax above ;-). Maybe we should extend the syntax to allow the following alternative?
struct S {
  template<typename T, int N> operator (T(&)[N])();
};
Eric Niebler: If the syntax is extended to support this, similar constructs should also be considered. For instance, I can't for the life of me figure out how to write a conversion member function template to return a member function pointer. It could be useful if you were defining a null_t type. This is probably due to my own ignorance, but getting the syntax right is tricky.
Eg.
struct null_t {
  // null object pointer. works.
  template<typename T> operator T*() const { return 0; }

  // null member pointer. works.
  template<typename T,typename U> operator T U::*() const { return 0; }

  // null member fn ptr. doesn't work (with Comeau online). my error?
  template<typename T,typename U> operator T (U::*)()() const { return 0; }
};
Martin Sebor: Intriguing question. I have no idea how to do it in a single declaration but splitting it up into two steps seems to work:
struct null_t {
  template <class T, class U> struct ptr_mem_fun_t {
    typedef T (U::*type)();
  };

  template <class T, class U>
  operator typename ptr_mem_fun_t<T, U>::type () const { return 0; }
};
Note: In the April 2003 meeting, the core working group noticed that the above doesn't actually work.
Note (June, 2010):
It has been suggested that template aliases effectively address this issue. In particular, an identity alias like
template<typename T> using id = T;
provides the necessary syntactic sugar to be able to specify types with trailing declarator elements as a conversion-type-id. For example, the two cases discussed above could be written as:
struct S {
  template<typename T, int N> operator id<T[N]>&();
  template<typename T, typename U> operator id<T (U::*)()>() const;
};
This issue should thus be closed as now NAD.
Jack Rouse: In 12.8 [class.copy] paragraph 8, the standard includes the following about the copying of class subobjects in such a constructor:
Mike Miller: I'm more concerned about 12.8 [class.copy] paragraph 7, which lists the situations in which an implicitly-defined copy constructor can render a program ill-formed. Inaccessible and ambiguous copy constructors are listed, but not a copy constructor with a cv-qualification mismatch. These two paragraphs taken together could be read as requiring the calling of a copy constructor with a non-const reference parameter for a const data member.
Proposed Resolution (November, 2006):
This issue is resolved by the proposed resolution for issue 535.
Footnote 112 (12.8 [class.copy] paragraph 2) says,
Because a template constructor is never a copy constructor, the presence of such a template does not suppress the implicit declaration of a copy constructor. Template constructors participate in overload resolution with other constructors, including copy constructors, and a template constructor may be used to copy an object if it provides a better match than other constructors.
However, many of the stipulations about copy construction are phrased to refer only to “copy constructors.” For example, 12.8 [class.copy] paragraph 14 says,
A program is ill-formed if the copy constructor... for an object is implicitly used and the special member function is not accessible (clause 11 [class.access]).
Does that mean that using an inaccessible template constructor to copy an object is permissible, because it is not a “copy constructor?” Obviously not, but each use of the term “copy constructor” in the Standard should be examined to determine if it applies strictly to copy constructors or to any constructor used for copying. (A similar issue applies to “copy assignment operators,” which have the same relationship to assignment operator function templates.)
Proposed Resolution (February, 2008):
Change 3.2 [basic.def.odr] paragraph 2 as follows:
... [Note: this covers calls to named functions (5.2.2 [expr.call]), operator overloading (clause 13 [over]), user-defined conversions (12.3.2 [class.conv.fct]), allocation function for placement new (5.3.4 [expr.new]), as well as non-default initialization (8.5 [dcl.init]). A copy constructor selected to copy class objects is used even if the call is actually elided by the implementation (12.8 [class.copy]). —end note] ... A copy-assignment function for a class An assignment operator function in a class is used by an implicitly-defined copy-assignment function for another class as specified in 12.8 [class.copy]...
Delete 12.1 [class.ctor] paragraphs 10 and 11:
A copy constructor (12.8 [class.copy]) is used to copy objects of class type.
A union member shall not be of a class type (or array thereof) that has a non-trivial constructor.
Replace the “example” in 12.2 [class.temporary] paragraph 1 with a note as follows:
[Example: even if the copy constructor is not called, all the semantic restrictions, such as accessibility (clause 11 [class.access]), shall be satisfied. —end example] [Note: This includes accessibility (clause 11 [class.access]) for the constructor selected. —end note]
Change 12.8 [class.copy] paragraph 7 as follows:
A non-user-provided copy constructor is implicitly defined if it is used to initialize an object of its class type from a copy of an object of its class type or of a class type derived from its class type (3.2 [basic.def.odr]). [Footnote: See 8.5 [dcl.init] for more details on direct and copy initialization. —end footnote] [Note: the copy constructor is implicitly defined even if the implementation elided its use (12.2 [class.temporary]) the copy operation (12.8 [class.copy]). —end note] A program is ill-formed if the class for which a copy constructor is implicitly defined or explicitly defaulted has:
a non-static data member of class type (or array thereof) with an inaccessible or ambiguous copy constructor, or
a base class with an inaccessible or ambiguous copy constructor.
Before the non-user-provided copy constructor for a class is implicitly defined...
Change 12.8 [class.copy] paragraph 8 as follows:
...Each subobject is copied in the manner appropriate to its type:
if the subobject is of class type, the copy constructor for the class is used direct-initialization (8.5 [dcl.init]) is performed [Note: If overload resolution fails or the constructor selected by overload resolution is inaccessible (11 [class.access]) in the context of X, the program is ill-formed. —end note];
if the subobject is an array...
[Drafting note: 8.5 [dcl.init] paragraph 15 requires “unambiguous” and 13.3 [over.match] paragraph 3 requires “accessible,” thus no need for normative text here.]
Change 12.8 [class.copy] paragraph 12 as follows:
A non-user-provided copy assignment operator is implicitly defined when an object of its class type is assigned a value of its class type or a value of a class type derived from its class type it is used (3.2 [basic.def.odr]). A program is ill-formed if the class for which a copy assignment operator is implicitly defined or explicitly defaulted has: a non-static data member of const or reference type.
a non-static data member of const type, or
a non-static data member of reference type, or
a non-static data member of class type (or array thereof) with an inaccessible copy assignment operator, or
a base class with an inaccessible copy assignment operator.
Change 12.8 [class.copy] paragraph 13 as follows:
... Each subobject is assigned in the manner appropriate to its type:
if the subobject is of class type, the copy assignment operator for the class the assignment operator function selected by overload resolution (13.3 [over.match]) for that class is used (as if by explicit qualification; that is, ignoring any possible virtual overriding functions in more derived classes) [Note: If overload resolution fails or the assignment operator function selected by overload resolution is inaccessible (11 [class.access]) in the context of X, the program is ill-formed. —end note];
if the subobject is an array...
Delete 12.8 [class.copy] paragraph 14:
A program is ill-formed if the copy constructor or the copy assignment operator for an object is implicitly used and the special member function is not accessible (clause 11 [class.access]). [Note: Copying one object into another using the copy constructor or the copy assignment operator does not change the layout or size of either object. —end note]
Change 12.8 [class.copy] paragraph 15 as follows:
When certain criteria are met, an implementation is allowed to omit the copy construction of a class object, even if the copy constructor selected for the copy operation and/or the destructor for the object have side effects. In such cases, the implementation treats the source and target of the omitted copy operation as simply two different ways of referring to the same object, and the destruction of that object occurs at the later of the times when the two objects would have been destroyed without the optimization. [Footnote: Because only one object is destroyed instead of two, and one copy constructor is not executed, there is still one object destroyed for each one constructed. —end footnote] This elision...
Change 13.3.3.1.2 [over.ics.user] paragraph 4 as follows:
A conversion of an expression of class type to the same class type is given Exact Match rank, and a conversion of an expression of class type to a base class of that type is given Conversion rank, in spite of the fact that a copy constructor (i.e., a user-defined conversion function) is called for those cases.
Change 15.1 [except.throw] paragraph 3 as follows:
A throw-expression initializes a temporary object, called the exception object, the type of which by copy-initialization (8.5 [dcl.init]). The type of that temporary object is determined...
Change 15.1 [except.throw] paragraph 5 as follows:
When the thrown object is a class object, the copy constructor selected for the copy-initialization and the destructor shall be accessible, even if the copy operation is elided (12.8 [class.copy]).
Change 15.3 [except.handle] paragraphs 16-17 as follows:
When the exception-declaration specifies a class type, a copy constructor copy-initialization (8.5 [dcl.init]) is used to initialize either the object declared in the exception-declaration or, if the exception-declaration does not specify a name, a temporary object of that type. The object shall not have an abstract class type. The object is destroyed when the handler exits, after the destruction of any automatic objects initialized within the handler. The copy constructor selected for the copy-initialization and the destructor shall be accessible in the context of the handler, even if the copy operation is elided (12.8 [class.copy]). If the copy constructor and destructor are implicitly declared (12.8 [class.copy]), such a use in the handler causes these functions to be implicitly defined; otherwise, the program shall provide a definition for these functions.
The copy constructor and destructor associated with the object shall be accessible even if the copy operation is elided (12.8 [class.copy]).
Change the footnote in 15.5.1 [except.terminate] paragraph 1 as follows:
[Footnote: For example, if the object being thrown is of a class with a copy constructor type, std::terminate() will be called if that copy constructor the constructor selected to copy the object exits with an exception during a throw. —end footnote]
(This resolution also resolves issue 111.)
[Drafting note: The following do not require changes: 5.17 [expr.ass] paragraph 4; 9 [class] paragraph 5; 9.5 [class.union] paragraph 1; 12.2 [class.temporary] paragraph 2; 12.8 [class.copy] paragraphs 1-2; 15.4 [except.spec] paragraph 14.]
Notes from February, 2008 meeting:
These changes overlap those that will be made when concepts are added. This issue will be maintained in “review” status until the concepts proposal is adopted and any conflicts will be resolved at that point.
12.1 [class.ctor] allows for a defaulted default constructor to be constexpr, but 12.8 [class.copy] does not do the same for a defaulted copy constructor. This seems wrong.
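For example, the following would presumably be well-formed once a defaulted copy constructor can be constexpr:

  struct S {
    int i;
    constexpr S(int i) : i(i) { }
    S(const S&) = default;   // implicitly constexpr under the proposed resolution
  };

  constexpr S a(1);
  constexpr S b = a;         // uses the defaulted copy constructor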
Proposed resolution (November, 2010):
Change 12.8 [class.copy] paragraph 14 as follows:
A copy/move constructor that is defaulted and not defined as deleted is implicitly defined if it is odr-used (3.2 [basic.def.odr]) to initialize an object of its class type from a copy of an object of its class type or of a class type derived from its class type123 or when it is explicitly defaulted after its first declaration. [Note: the copy/move constructor is implicitly defined even if the implementation elided its odr-use (3.2 [basic.def.odr], 12.2 [class.temporary]). —end note] If the implicitly-defined constructor would satisfy the requirements of a constexpr constructor (7.1.5 [dcl.constexpr]), the implicitly-defined constructor is constexpr.
The current wording makes some calls involving aggregate initialization ambiguous that should not be. For example, the calls below to f and g should each prefer the second overload:
struct A { int i; };
void f(const A &);
void f(A &&);
void g(A, double);
void g(A, int);

int main() {
  f( { 1 } );
  g( { 1 }, 1 );
}
Proposed resolution (August, 2010):
Change 13.3.3.2 [over.ics.rank] paragraph 3 bullet 2 as follows:
User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor or aggregate initialization and if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2.
Consider an example like:
template <typename T, T Value> struct bar { };
template <typename... T, T ...Value> void foo(bar<T, Value>);
The current wording in 14.1 [temp.param] is unclear as to whether this is permitted or not. For comparison, 8.3.5 [dcl.fct] paragraph 13 says,
A declarator-id or abstract-declarator containing an ellipsis shall only be used in a parameter-declaration. Such a parameter-declaration is a parameter pack (14.5.3 [temp.variadic]). When it is part of a parameter-declaration-clause, the parameter pack is a function parameter pack (14.5.3 [temp.variadic]). [Note: Otherwise, the parameter-declaration is part of a template-parameter-list and the parameter pack is a template parameter pack; see 14.1 [temp.param]. —end note] A function parameter pack, if present, shall occur at the end of the parameter-declaration-list. The type T of the declarator-id of the function parameter pack shall contain a template parameter pack; each template parameter pack in T is expanded by the function parameter pack.
The requirement here that the type of a function parameter pack must contain a template parameter pack is not repeated for template non-type parameters in 14.1 [temp.param], nor is the statement that it expands the template parameter pack.
A related issue is that neither function nor template parameter packs are listed in 14.5.3 [temp.variadic] paragraph 4 among the contexts in which a pack expansion can appear.
Proposed resolution (November, 2010):
Change 5.3.3 [expr.sizeof] paragraph 5 as follows:
The identifier in a sizeof... expression shall name a parameter pack. The sizeof... operator yields the number of arguments provided for the parameter pack identifier. The parameter pack is expanded (14.5.3 [temp.variadic]) by the sizeof... operator A sizeof... expression is a pack expansion (14.5.3 [temp.variadic]). [Example:...
Change 8.3.5 [dcl.fct] paragraph 13 as follows:
A declarator-id or abstract-declarator containing an ellipsis shall only be used in a parameter-declaration. Such a parameter-declaration is a parameter pack (14.5.3 [temp.variadic]). When it is part of a parameter-declaration-clause, the parameter pack is a function parameter pack (14.5.3 [temp.variadic]). [Note: Otherwise, the parameter-declaration is part of a template-parameter-list and the parameter pack is a template parameter pack; see 14.1 [temp.param]. —end note] The type T of the declarator-id of the function parameter pack shall contain a template parameter pack; each template parameter pack in T is expanded by the function parameter pack A function parameter pack is a pack expansion (14.5.3 [temp.variadic]). [Example:...
Change 14.1 [temp.param] paragraph 15 as follows:
If a template-parameter is a type-parameter with an ellipsis prior to its optional identifier or is a parameter-declaration that declares a parameter pack (8.3.5 [dcl.fct]), then the template-parameter is a template parameter pack (14.5.3 [temp.variadic]). A template parameter pack that is a parameter-declaration whose type contains one or more unexpanded parameter packs is a pack expansion. Similarly, a template parameter pack that is a type-parameter with a template-parameter-list containing one or more unexpanded parameter packs is a pack expansion. [Example:
template <class... Types> class Tuple;                 // Types is a template type parameter pack
                                                       // and a pack expansion
template <class T, int... Dims> struct multi_array;    // Dims is a non-type template parameter pack
                                                       // but not a pack expansion
template <class T, T... Values> struct static_array;   // Values is a non-type template parameter pack
                                                       // and a pack expansion
Change 14.5.3 [temp.variadic] paragraphs 4-6 and add a new paragraph 7 as follows:
A pack expansion is a sequence of tokens that names one or more parameter packs, followed by an ellipsis. The sequence of tokens is called the pattern of the expansion; its syntax consists of a pattern and an ellipsis, the instantiation of which produces zero or more instantiations of the pattern in a list (described below). The form of the pattern depends on the context in which the expansion occurs. Pack expansions can occur in the following contexts:
In a function parameter pack (8.3.5 [dcl.fct]); the pattern is the parameter-declaration without the ellipsis.
In a template parameter pack that is a pack expansion (14.1 [temp.param]):
if the template parameter pack is a parameter-declaration; the pattern is the parameter-declaration without the ellipsis,
if the template parameter pack is a type-parameter with a template-parameter-list; the pattern is the corresponding type-parameter without the ellipsis.
...
In a sizeof... expression (5.3.3 [expr.sizeof]), the pattern is an identifier.
[Example:...
A parameter pack whose name appears within the pattern of a pack expansion is expanded by that pack expansion. An appearance of the name of a parameter pack is only expanded by the innermost enclosing pack expansion. The pattern of a pack expansion shall name one or more parameter packs that are not expanded by a nested pack expansion; such parameter packs are called unexpanded parameter packs in the pattern. All of the parameter packs expanded...
... void g(Args ... args) { // OK: “Args” is expanded by the function parameter pack “args” ...
The instantiation of an a pack expansion that is not a sizeof... expression produces a list...
The instantiation of a sizeof... expression (5.3.3 [expr.sizeof]) produces an integral constant containing the number of elements in the parameter pack it expands.
This resolution also resolves issues 1182 and 1183.
Additional note (February, 2011):
A problematic case is a function like
template<typename... T, T... t> void f(T...) { }
where each element of the nontype pack actually has a different type. This causes problems for template argument deduction, since T and t are supposed to be deduced independently, but they're linked through their sizes. There doesn't appear to be any use case for this kind of example, so it should be ill-formed.
The rule should probably be to consider a non-type template parameter pack that expands any template parameter packs from the same template-parameter-list as ill-formed.
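A sketch of what such a rule would reject and allow (hypothetical declarations, not part of the issue text):

  template <typename... T, T... t> void f();   // would be ill-formed: t expands the pack T
                                                // declared in the same template-parameter-list
  template <typename T, T... t> void g();      // unaffected: T is not a parameter pack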
Presumably an out-of-class definition for an opaque enumeration member of a class template is intended to be allowed; however, the current wording of 14.5.1 [temp.class] provides only for out-of-class definitions of member functions, member classes, static data members, and member templates, not for opaque enumerations.
Proposed resolution (November, 2010):
Change 14 [temp] paragraph 1 as follows:
...The declaration in a template-declaration shall
declare or define a function or class, or
define a member function, a member class, a member enumeration, or a static data member of a class template or of a class nested within a class template, or
...
Change 14.5.1 [temp.class] paragraph 3 as follows:
When a member function, a member class, a member enumeration, a static data member or a member template of a class template is defined outside of the class template definition...
Add a new section following 14.5.1.3 [temp.static]:
14.5.1.4 Enumeration members of class templates [temp.mem.enum]
An enumeration member of a class template may be defined outside the class template definition. [Example:
template<class T> struct A {
  enum E: T;
};
A<int> a;
template<class T> enum A<T>::E: T { e1, e2 };
A<int>::E e = A<int>::e1;
—end example]
Change 14.7 [temp.spec] paragraph 2 as follows:
A function instantiated from a function template is called an instantiated function. A class instantiated from a class template is called an instantiated class. A member function, a member class, a member enumeration, or a static data member of a class template instantiated from the member definition of the class template is called, respectively, an instantiated member function, member class, member enumeration, or static data member. A member function...
Change 14.7.1 [temp.inst] paragraph 1 as follows:
...The implicit instantiation of a class template specialization causes the implicit instantiation of the declarations, but not of the definitions or default arguments, of the class member functions, member classes, scoped member enumerations, static data members and member templates; and it causes the implicit instantiation of the definitions of unscoped member enumerations and member anonymous unions. Unless a member...
Change 14.7.3 [temp.expl.spec] paragraph 1 as follows:
An explicit specialization of any of the following:
function template
class template
member function of a class template
static data member of a class template
member class of a class template
member enumeration of a class template
member class template of a class or class template
member function template of a class or class template
can be declared by a declaration introduced by template<>...
Change 14.7.3 [temp.expl.spec] paragraph 4 as follows:
A member function, a member class, a member enumeration, or a static data member of a class template may be explicitly specialized for a class specialization that is implicitly instantiated...
Add the indicated text to the example in 14.7 [temp.spec] paragraph 6:
template<> void sort<>(Array<char*>& v);          // OK: sort<char*> not yet used
template<class T> struct A {
  enum E: T;
  enum class S: T;
};
template<> enum A<int>::E: int { eint };          // OK
template<> enum class A<int>::S: int { sint };    // OK
template<class T> enum A<T>::E: T { eT };
template<class T> enum class A<T>::S: T { sT };
template<> enum A<char>::E: int { echar };        // ill-formed, A<char>::E was instantiated
                                                  // when A<char> was instantiated
template<> enum class A<char>::S: int { schar };  // OK
Change 14.7.3 [temp.expl.spec] paragraph 7 as follows:
The placement of explicit specialization declarations for function templates, class templates, member functions of class templates, static data members of class templates, member classes of class templates, member enumerations of class templates, member class templates of class templates, member function templates...
According to 14.5.3 [temp.variadic] paragraph 4,
A pack expansion is a sequence of tokens that names one or more parameter packs, followed by an ellipsis.
This is contradicted by 5.3.3 [expr.sizeof] paragraph 5, which describes sizeof...(Types) as an expansion, as well as the case where the expansion appears in a declarator like the example given in 8.3.5 [dcl.fct] paragraph 13:
template<typename... T> void f(T (* ...t)(int, int));
This is also described as a pack expansion, although it does not fit the syntactic summary.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 778.
According to 14.6.2.1 [temp.dep.type] paragraph 1, in a primary class template a name refers to the current instantiation if it is the injected-class-name or the name of the class template followed by the template argument list of the template. Although a template-id referring to a specialization of a template alias is described as “equivalent to” the associated type, a specialization of a template alias is neither of the things that qualifies as naming the current instantiation, so presumably the typename keyword in the following example is required:
template <class T> struct A;
template <class T> using B = A<T>;
template <class T> struct A {
  struct C {};
  typename B<T>::C bc;   // typename needed
};
(However, the list in 14.6.2.1 [temp.dep.type] may not be exactly what we want; it doesn't allow use of a typedef denoting the type of the current instantiation, either, but that presumably should be accepted.)
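For example, the typedef case alluded to above might look like this (hypothetical example, not from the issue text):

  template <class T> struct A {
    typedef A<T> self;    // typedef denoting the type of the current instantiation
    struct C { };
    typename self::C c;   // "self" is not among the forms listed in 14.6.2.1,
                          // so typename is presumably still required here
  };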
For analogous reasons, it should not be permitted to use a template alias as a nested-name-specifier when defining the members of a class template:
template <class T> struct A {
  void g();
};
template <class T> using B = A<T>;
template <class T> void B<T>::g() {}   // error
Notes from the November, 2010 meeting:
The CWG disagreed with the suggested direction, feeling that aliases should work like typedefs and that the examples should be accepted.
Proposed resolution (November, 2010):
Change 14.6.2.1 [temp.dep.type] paragraph 1 as follows:
In the definition of a class template, a nested class of a class template, a member of a class template, or a member of a nested class of a class template, a name refers to the current instantiation if it is
the injected-class-name (Clause 9 [class]) of the class template or nested class,
in the definition of a primary class template, the name of the class template followed by the template argument list of the primary template (as described below) enclosed in <> (or a template alias specialization equivalent to same),
in the definition of a nested class of a class template, the name of the nested class referenced as a member of the current instantiation, or
in the definition of a partial specialization, the name of the class template followed by the template argument list of the partial specialization enclosed in <> (or a template alias specialization equivalent to same). If the nth template parameter is a parameter pack, the nth template argument is a pack expansion (14.5.3 [temp.variadic]) whose pattern is the name of the parameter pack.
Change 14.6.2.1 [temp.dep.type] paragraph 3 as follows:
A template argument that is equivalent to a template parameter (i.e., has the same constant value or the same type as the template parameter) can be used in place of that template parameter in a reference to the current instantiation, except that a decltype-specifier that denotes a dependent type is always considered non-equivalent. In the case of a non-type template argument, the argument must have been given the value of the template parameter and not an expression in which the template parameter appears as a subexpression. [Example:...This resolution also resolves issue 1057.
Consider the following example:
template<class T> struct A {
  template<class U> friend struct A;   // Which A?
};
Presumably the lookup for A in the friend declaration finds the injected-class-name of the template. However, according to 14.6.1 [temp.local] paragraph 1,
The injected-class-name can be used with or without a template-argument-list. When it is used without a template-argument-list, it is equivalent to the injected-class-name followed by the template-parameters of the class template enclosed in <>. When it is used with a template-argument-list, it refers to the specified class template specialization, which could be the current specialization or another specialization.
If that rule applies, then this example is ill-formed (because you can't have a template-argument-list in a class template declaration that is not a partial specialization).
Mike Miller: The injected-class-name has a dual nature, as described in 14.6.1 [temp.local], acting as either a template name or a class name, depending on the context; a template argument list forces the name to be interpreted as a template. It seems reasonable that in this example the injected-class-name has to be understood as referring to the class template; a template header is at least as strong a contextual indicator as a template argument list. However, the current wording doesn't say that.
(See also issue 1004.)
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1004.
The injected-class-name of a class template can be used either by itself, in which case it is a type denoting the current instantiation, or followed by a template argument list, in which case it is a template-name. It would be helpful to extend this treatment so that the injected-class-name could be used as an argument for a template template parameter:
template <class T> struct A { };
template <template <class> class TTP> struct B { };
struct C: A<int> {
  B<A> b;
};
(This is accepted by g++.)
James Widman:
It would not be so helpful when used with overloaded function templates, for example:
template <template <class> class TTP> void f();   // #1
template <class T> void f();                      // #2
template <class T> struct A { };
struct C: A<int> {
  void h() {
    f<A>();   // #1? #2? Substitution failure?
  }
};
(See also issue 602.)
Proposed resolution (November, 2010):
Change 14.6.1 [temp.local] paragraphs 1-5 as follows:
Like normal (non-template) classes, class templates have an injected-class-name (Clause 9 [class]). The injected-class-name can be used with or without a template-argument-list as a template-name or a type-name. When it is used without a template-argument-list, it is equivalent to the injected-class-name followed by the template-parameters of the class template enclosed in <>. When it is used with a template-argument-list, as a template-argument for a template template-parameter, or as the final identifier in the elaborated-type-specifier of a friend class template declaration it refers to the specified class template specialization, which could be the current specialization or another specialization. class template itself. Otherwise, it is equivalent to the template-name followed by the template-parameters of the class template enclosed in <>.
Within the scope of a class template specialization or partial specialization, when the injected-class-name is not followed by a < used as a type-name, it is equivalent to the injected-class-name template-name followed by the template-arguments of the class template specialization or partial specialization enclosed in <>. [Example:
template<template<class> class T> class A { };
template<class T> class Y;
template<> class Y<int> {
  Y* p;          // meaning Y<int>
  Y<char>* q;    // meaning Y<char>
  A<Y>* a;       // meaning A<::Y>
  class B {
    template<class> friend class Y;   // meaning ::Y
  };
};

—end example]
The injected-class-name of a class template or class template specialization can be used either with or without a template-argument-list as a template-name or a type-name wherever it is in scope. [Example:
template <class T> struct Base {
  Base* p;
};
template <class T> struct Derived: public Base<T> {
  typename Derived::Base* p;   // meaning Derived::Base<T>
};
template<class T, template<class> class U = T::template Base> struct Third { };
Third<Base<int>> t;   // OK, default argument uses injected-class-name as a template

—end example]
A lookup that finds an injected-class-name (10.2 [class.member.lookup]) can result in an ambiguity in certain cases (for example, if it is found in more than one base class). If all of the injected-class-names that are found refer to specializations of the same class template, and if the name is followed by a template-argument-list used as a template-name, the reference refers to the class template itself and not a specialization thereof, and is not ambiguous. [Example:
template <class T> struct Base { };
template <class T> struct Derived: Base<int>, Base<char> {
  typename Derived::Base b;           // error: ambiguous
  typename Derived::Base<double> d;   // OK
};

—end example]
When the normal name of the template (i.e., the name from the enclosing scope, not the injected-class-name) is used without a template-argument-list, it always refers to the class template itself and not a specialization of the template. [Example:...
This resolution also resolves issue 602.
According to 14.6.2.1 [temp.dep.type] paragraph 3,
A template argument that is equivalent to a template parameter (i.e., has the same constant value or the same type as the template parameter) can be used in place of that template parameter in a reference to the current instantiation.
This would presumably include something like
template<typename T> struct A {
  struct B { };
  A<decltype(T())>::B b;   // no typename
};
However, this example is rejected by current implementations. Does this need to be clarified in the existing wording?
Notes from the November, 2010 meeting:
The example is not well-formed; if T is an rvalue reference type, for example, decltype(T()) is not equivalent to T.
Proposed resolution (November, 2010):
This issue is resolved by the resolution of issue 1056.
It is not clear why a noexcept-expression is value-dependent if its operand is value-dependent. It would seem that the value of the expression depends only on the type of the operand, not its value.
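A minimal illustration (hypothetical, not from the issue text) of why the value of the operand should not matter:

  template <int N> struct A {
    // N + 1 is value-dependent, but whether evaluating it can throw depends
    // only on its type (int), so the result of the noexcept-expression is
    // already known to be true in the template definition.
    static const bool b = noexcept(N + 1);
  };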
Proposed resolution (November, 2010):
Delete “noexcept( expression )” from the list in 14.6.2.3 [temp.dep.constexpr] paragraph 3.
Given
template <const char *N> struct A {
  static const char *p;
};
template <class T> struct B {
  static const char c[1];
  typedef A<B<T>::c> C;
};
template <class T> struct D {
  static const char c[1];
  typedef A<c> C;
};
14.6.2.4 [temp.dep.temp] says that B<T>::c is dependent because it is used as a non-integral non-type template argument and it is a qualified-id with a nested-name-specifier that names a dependent type.
There doesn't seem to be anything to say that c in the definition of D<T>::C is dependent, which suggests that D<T>::C is the same type for all T, which is clearly false.
Instead of this special rule in 14.6.2.4 [temp.dep.temp], 14.6.2.3 [temp.dep.constexpr] should say that the address of a member of a dependent type is value-dependent, regardless of whether the address is written with a qualified-id.
Proposed resolution (November, 2010):
Add a new paragraph at the end of 14.6.2.3 [temp.dep.constexpr]:
An id-expression is value-dependent if it names a member of an unknown specialization.
Change 14.6.2.4 [temp.dep.temp] paragraphs 2-3 as follows:
An integral A non-type template-argument is dependent if its type is dependent or the constant expression it specifies is value-dependent.
A non-integral Furthermore, a non-type template-argument is dependent if its type is dependent or it has either of the following forms
qualified-id
& qualified-id
and contains a nested-name-specifier which specifies a class-name that names a dependent type the corresponding non-type template-parameter is of reference or pointer type and the template-argument designates or points to a member of the current instantiation or a member of a dependent type.
According to 14.7.2 [temp.explicit] paragraph 4,
For a given set of template parameters, if an explicit instantiation of a template appears after a declaration of an explicit specialization for that template, the explicit instantiation has no effect.
However, that does not appear to negate the requirements of paragraph 3 that a definition of the entity being instantiated must be in scope. Consequently, the following would appear to be ill-formed, even though there is no real need for the definition of C:
template<typename T> class C;
template<> class C<int> { };
template class C<int>;
Proposed resolution (November, 2010):
Change 14.7.2 [temp.explicit] paragraphs 3-4 as follows:
A declaration of a function template shall be in scope at the point of the explicit instantiation of the function template. A definition of the class or class template containing a member function template shall be in scope at the point of the explicit instantiation of the member function template. A definition of a class template or class member template shall be in scope at the point of the explicit instantiation of the class template or class member template. A definition of a class template shall be in scope at the point of an explicit instantiation of a member function or a static data member of the class template. A definition of a member class of a class template shall be in scope at the point of an explicit instantiation of the member class. A declaration of a function template, a member function or static data member of a class template, or a member function template of a class or class template shall precede an explicit instantiation of that entity. A definition of a class template, a member class of a class template, or a member class template of a class or class template shall precede an explicit instantiation of that entity, unless the explicit instantiation is preceded by an explicit specialization of the entity with the same template arguments. If the declaration of the explicit instantiation names an implicitly-declared special member function (Clause 12 [special]), the program is ill-formed.
For a given set of template parameters arguments, if an explicit instantiation of a template appears after a declaration of an explicit specialization for that template, the explicit instantiation has no effect. Otherwise...
The Standard does not fully describe the syntax to be used when a member of an explicitly-specialized member class or member class template is defined in namespace scope. 14.7.3 [temp.expl.spec] paragraph 4 says that the “explicit specialization syntax” (presumably referring to “template<>”) is not used in defining a member of an explicit specialization when a class template is explicitly specialized as a class. However, nothing is said anywhere about how to define a member of a specialization when:
the entity being specialized is a class (member of a template class) rather than a class template.
the result of the specialization is a class template rather than a class (cf 14.7.3 [temp.expl.spec] paragraph 18, which describes this case as a “member template that... remain[s] unspecialized”).
(See paper J16/05-0148 = WG21 N1888 for further details, including a survey of existing implementation practice.)
Notes from the October, 2005 meeting:
The CWG felt that the best approach, balancing consistency with implementation issues and existing practice, would be to require that template<> be used when defining members of all explicit specializations, including those currently covered by 14.7.3 [temp.expl.spec] paragraph 4.
Proposed resolution (February, 2010):
Change 14.7.3 [temp.expl.spec] paragraph 5 as follows:
...The definition of an explicitly specialized class is unrelated to the definition of a generated specialization. That is, its members need not have the same names, types, etc. as the members of a generated specialization. Definitions of members of an explicitly specialized class are defined in the same manner as members of normal classes, and not using the syntax for explicit specialization using the same template<> prefix(es) as the explicitly specialized class. [Example:
template<class T> struct A {
  void f(T) { /* ... */ }
  struct B { /* ... */ };
  template<class U> struct C { /* ... */ };
};
template<> struct A<int> {
  void f(int);
  struct B;
  template<class U> struct C;
};
void h() {
  A<int> a;
  a.f(16);   // A<int>::f must be defined somewhere
}
// explicit specialization syntax not used for a member of
// explicitly specialized class template specialization
// members of explicitly specialized classes are defined using
// the same syntax as the explicitly specialized class:
template<> void A<int>::f(int) { /* ... */ }
template<> struct A<int>::B { /* ... */ };
template<> template<class T> struct A<int>::C { /* ... */ };

—end example]
Note (June, 2010):
Because the survey of implementations on which the CWG relied in reaching this resolution is quite old, a new survey of current practice is needed.
The new wording added by issue 873 says,
...This is also done to determine whether a function template specialization matches a placement operator new (3.7.4.2 [basic.stc.dynamic.deallocation], 5.3.4 [expr.new])... If, for the set of function templates so considered, there is either no match or more than one match after partial ordering has been considered (14.5.6.2 [temp.func.order]), deduction fails and the declaration is ill-formed.
The statement describing the consequence of deduction failure (“the declaration is ill-formed”) does not apply to the case when deduction is being performed for placement operator delete, as there is no declaration involved. It may not be necessary to describe what happens when deduction fails in that case, but at least the wording should be tweaked to limit the conclusion to declarative contexts.
Proposed resolution (November, 2010):
Change 14.8.2.6 [temp.deduct.decl] paragraphs 1-2 as follows:
In a declaration whose declarator-id refers to a specialization of a function template, template argument deduction is performed to identify the specialization to which the declaration refers. Specifically, this is done for explicit instantiations (14.7.2 [temp.explicit]), explicit specializations (14.7.3 [temp.expl.spec]), and certain friend declarations (14.5.4 [temp.friend]). This is also done to determine whether a deallocation function template specialization matches a placement operator new (3.7.4.2 [basic.stc.dynamic.deallocation], 5.3.4 [expr.new]). In all these cases, P is the type of the function template being considered as a potential match and A is either the function type from the declaration or the type of the deallocation function that would match the placement operator new as described in 5.3.4 [expr.new]. The deduction is done as described in 14.8.2.5 [temp.deduct.type].
If, for the set of function templates so considered, there is either no match or more than one match after partial ordering has been considered (14.5.6.2 [temp.func.order]), deduction fails and, in the declaration cases, the declaration program is ill-formed.
I have a question about exception handling with respect to derived to base conversions of pointers caught by reference.
What should the result of this program be?
struct S {};
struct SS : public S {};

int main() {
  SS ss;
  int result = 0;
  try {
    throw &ss;        // throw object has type SS*
                      // (pointer to derived class)
  }
  catch (S*& rs)      // (reference to pointer to base class)
  {
    result = 1;
  }
  catch (...)
  {
    result = 2;
  }
  return result;
}
The wording of 15.3 [except.handle] paragraph 3 would seem to say that the catch of S*& does not match and so the catch ... would be taken.
All of the compilers I tried (EDG, g++, Sun, and Microsoft) used the catch of S*& though.
What do we think is the desired behavior for such cases?
My initial reaction is that this is a bug in all of these compilers, but the fact that they all do the same thing gives me pause.
On a related front, if the handler changes the parameter using the reference, what is caught by a subsequent handler?
extern "C" int printf(const char *, ...); struct S {}; struct SS : public S {}; SS ss; int f() { try { throw &ss; } catch (S*& rs) // (reference to pointer to base class) { rs = 0; throw; } catch (...) { } return 0; } int main() { try { f(); } catch (S*& rs) { printf("rs=%p, &ss=%p\n", rs, &ss); } }
EDG, g++, and Sun all catch the original (unmodified) value. Microsoft catches the modified value. In some sense the EDG/g++/Sun behavior makes sense because the later catch could catch the derived class instead of the base class, which would be difficult to do if you let the catch clause update the value to be used by a subsequent catch.
But on this non-pointer case, all of the compilers later catch the modified value:
extern "C" int printf(const char *, ...); int f() { try { throw 1; } catch (int& i) { i = 0; throw; } catch (...) { } return 0; } int main() { try { f(); } catch (int& i) { printf("i=%p\n", i); } }
To summarize:
(See also issue 729.)
Notes from the October, 2009 meeting:
The consensus of the CWG was that it should not be possible to catch a pointer to a derived class using a reference to a base class pointer, and that a handler that takes a reference to non-const pointer should allow the pointer to be modified by the handler.
Proposed resolution (February, 2010):
Change 15.3 [except.handle] paragraph 3 as follows:
A handler is a match for an exception object of type E if
The handler is of type cv T or cv T& and E and T are the same type (ignoring the top-level cv-qualifiers), or
the handler is of type cv T or cv T& and T is an unambiguous public base class of E, or
the handler is of type cv1 T* cv2 or const T& where T is a pointer type and E is a pointer type that can be converted to the type of the handler T by either or both of
a standard pointer conversion (4.10 [conv.ptr]) not involving conversions to pointers to private or protected or ambiguous classes
a qualification conversion
the handler is of type cv T or const T& where T is a pointer or pointer to member type and E is std::nullptr_t.
(This resolution also resolves issue 729.)
Given the following example:
int f() {
  try {
    /* ... */
  }
  catch(const int*&) { return 1; }
  catch(int*&)       { return 2; }
  return 3;
}
can f() return 2? That is, does an int* exception object match a const int*& handler?
According to 15.3 [except.handle] paragraph 3, it does not:
A handler is a match for an exception object of type E if
The handler is of type cv T or cv T& and E and T are the same type (ignoring the top-level cv-qualifiers), or
the handler is of type cv T or cv T& and T is an unambiguous public base class of E, or
the handler is of type cv1 T* cv2 and E is a pointer type that can be converted to the type of the handler by either or both of
a standard pointer conversion (4.10 [conv.ptr]) not involving conversions to pointers to private or protected or ambiguous classes
a qualification conversion
the handler is a pointer or pointer to member type and E is std::nullptr_t.
Only the third bullet allows qualification conversions, but only the first bullet applies to a handler of reference-to-pointer type. This is consistent with how other reference bindings work; for example, the following is ill-formed:
int* p; const int*& r = p;
(The consistency is not complete; the reference binding would be permitted if r had type const int* const &, but a handler of that type would still not match an int* exception object.)
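That is (a small sketch of the parenthetical point):

  int* p;
  const int* const & r = p;   // well-formed as an ordinary reference binding
                              // (binds to a temporary of type const int*), yet a
                              // handler of type const int* const & still would not
                              // match an exception object of type int*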
However, implementation practice seems to be in the other direction; both EDG and g++ do match an int* with a const int*&, and the Microsoft compiler issues an error for the presumed hidden handler in the code above. Should the Standard be changed to reflect existing practice?
(See also issue 388.)
Notes from the October, 2009 meeting:
The CWG agreed that matching the exception object with a handler should, to the extent possible, mimic ordinary reference binding in cases like this.
Proposed resolution (February, 2010):
This issue is resolved by the resolution of issue 388.
According to 15.4 [except.spec] paragraphs 8 and 9,
A function is said to allow an exception of type E if its dynamic-exception-specification contains a type T for which a handler of type T would be a match (15.3 [except.handle]) for an exception of type E.
Whenever an exception is thrown and the search for a handler (15.3 [except.handle]) encounters the outermost block of a function with an exception-specification that does not allow the exception, then,
if the exception-specification is a dynamic-exception-specification, the function std::unexpected() is called (15.5.2 [except.unexpected]),
otherwise, the function std::terminate() is called (15.5.1 [except.terminate]).
This does not define what it means for a noexcept-specification to allow an exception.
Proposed resolution (November, 2010):
Change 15.4 [except.spec] paragraph 8 as follows:
A function is said to allow an exception of type E if the constant-expression in its noexcept-specification evaluates to false or its dynamic-exception-specification contains a type T for which a handler of type T would be a match (15.3 [except.handle]) for an exception of type E.
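For example, under the revised wording (illustrative only):

  void f() noexcept(false);   // the constant-expression is false, so f allows
                              // exceptions of every type
  void g() noexcept(true);    // g allows no exceptions; an exception that escapes g
                              // results in a call to std::terminate() (15.5.1)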
2.5 [lex.pptoken] paragraph 2 specifies that there are 5 categories of tokens in phases 3 to 6. With 2.13 [lex.operators] paragraph 1, it is unclear whether new is an identifier or a preprocessing-op-or-punc; likewise for delete. This is relevant to answer the question whether
#define delete foo
is a well-formed control-line, since that requires an identifier after the define token.
(See also issue 189.)
The nonterminals operator and punctuator in 2.7 [lex.token] are not defined. There is a definition of the nonterminal operator in 13.5 [over.oper] paragraph 1, but it is apparent that the two nonterminals are not the same: the latter includes keywords and multi-token operators and does not include the nonoverloadable operators mentioned in paragraph 3.
There is a definition of preprocessing-op-or-punc in 2.13 [lex.operators] , with the notation that
Each preprocessing-op-or-punc is converted to a single token in translation phase 7 (2.1).

However, this list doesn't distinguish between operators and punctuators; it includes digraphs and keywords (can a given token be both a keyword and an operator at the same time?), etc.
Suggested resolution:
Additional note (April, 2005):
The resolution for this problem should also address the fact that sizeof and typeid (and potentially others like decltype that may be added in the future) are described in some places as “operators” but are not listed in 13.5 [over.oper] paragraph 3 among the operators that cannot be overloaded.
(See also issue 369.)
According to 3.3.2 [basic.scope.pdecl] paragraph 6,
for an elaborated-type-specifier of the form
class-key identifier
if the elaborated-type-specifier is used in the decl-specifier-seq or parameter-declaration-clause of a function defined in namespace scope, the identifier is declared as a class-name in the namespace that contains the declaration; otherwise, except as a friend declaration, the identifier is declared in the smallest non-class, non-function-prototype scope that contains the declaration.
This should have been, but was not, updated when enumeration scope (3.3.8 [basic.scope.enum]) was added:
enum class E { e = sizeof((struct S*)0) };
Presumably the name S belongs to the same scope as E, not the enumeration scope of E.
In discussing issue 197, the question arose as to whether the handling of fundamental types in argument-dependent lookup is actually what is desired. This question needs further discussion.
Paragraph 7 of 3.4.5 [basic.lookup.classref] says,
If the id-expression is a conversion-function-id, its conversion-type-id shall denote the same type in both the context in which the entire postfix-expression occurs and in the context of the class of the object expression (or the class pointed to by the pointer expression).

Does this mean that the following example is ill-formed?
struct A { operator int(); } a;

void foo() {
  typedef int T;
  a.operator T();   // 1) error T is not found in the context
                    // of the class of the object expression?
}

The second bullet in paragraph 1 of 3.4.3.1 [class.qual] says,
a conversion-type-id of an operator-function-id is looked up both in the scope of the class and in the context in which the entire postfix-expression occurs and shall refer to the same type in both contextsHow about:
struct A { typedef int T; operator T(); };
struct B : A { operator T(); } b;

void foo() {
  b.A::operator T();   // 2) error T is not found in the context
                       // of the postfix-expression?
}

Is this interpretation correct? Or was the intent for this to be an error only if T was found in both scopes and referred to different entities?
If the intent was for these to be errors, how do these rules apply to template arguments?
template <class T1> struct A {
  operator T1();
};
template <class T2> struct B : A<T2> {
  operator T2();
  void foo() {
    T2 a = A<T2>::operator T2();            // 3) error? when instantiated T2 is not
                                            // found in the scope of the class
    T2 b = ((A<T2>*)this)->operator T2();   // 4) error when instantiated?
  }
};
(Note bullets 2 and 3 in paragraph 1 of 3.4.3.1 [class.qual] refer to postfix-expression. It would be better to use qualified-id in both cases.)
Erwin Unruh: The intent was that you look in both contexts. If you find it only once, that's the symbol. If you find it in both, both symbols must be "the same" in some respect. (If you don't find it, it's an error.)
Mike Miller: What's not clear to me in these examples is whether what is being looked up is T or int. Clearly the T has to be looked up somehow, but the "name" of a conversion function clearly involves the base (non-typedefed) type, not typedefs that might be used in a definition or reference (cf 3 [basic] paragraph 7 and 12.3 [class.conv] paragraph 5). (This is true even for types that must be written using typedefs because of the limited syntax in conversion-type-ids — e.g., the "name" of the conversion function in the following example
typedef void (*pf)();
struct S {
  operator pf();
};

is S::operator void(*)(), even though you can't write its name directly.)
My guess is that this means that in each scope you look up the type named in the reference and form the canonical operator name; if the name used in the reference isn't found in one or the other scope, the canonical name constructed from the other scope is used. These names must be identical, and the conversion-type-id in the canonical operator name must not denote different types in the two scopes (i.e., the type might not be found in one or the other scope, but if it's found in both, they must be the same type).
I think this is all very vague in the current wording.
3.4.5 [basic.lookup.classref] does not mention template aliases as the possible result of the lookup but should do so.
In an example like
template<typename T> void f(T p)->decltype(p.T::x);
The nested-name-specifier T:: looks like it refers to the template parameter. However, if this is instantiated with a type like
struct T { int x; }; struct S: T { };
the reference will be ambiguous, since it is looked up in both the context of the expression, finding the template parameter, and in the class, finding the base class injected-class-name, and this could be a deduction failure. As a result, the same declaration with a different parameter name
template<typename U> void f(U p)->decltype(p.U::x);
is, in fact, not a redeclaration because the two can be distinguished by SFINAE.
It would be better to add a new lookup rule that says that if a name in a template definition resolves to a template parameter, that name is not subject to further lookup at instantiation time.
The resolution of issue 1111 changes 3.4.5 [basic.lookup.classref] paragraph 7 to read,
[A] conversion-type-id is first looked up in the class of the object expression and the name, if found and denotes a type, is used. Otherwise it is looked up in the context of the entire postfix-expression and the name shall denote a type.
The result of this specification is that a non-type member declaration in the class scope of the object expression will not be found (although it will hide a base class type member of the same name), but a non-type declaration in the context of the expression will be found (and make the program ill-formed).
This is inconsistent with the way other lookups are handled when they occur in a context that requires a type. For example, the lookup for a nested-name-specifier “considers only namespaces, types, and templates whose specializations are types” (3.4.3 [basic.lookup.qual] paragraph 1); the lookup for a name appearing in an elaborated-type-specifier is done “ignoring any non-type names that have been declared” (3.4.4 [basic.lookup.elab] paragraph 2); and in the lookup for a name in a base-type-specifier, “non-type names are ignored” (10 [class.derived] paragraph 2). The lookup for a conversion-type-id should be similar, and the wording in 3.4.5 [basic.lookup.classref] paragraph 7 adjusted accordingly.
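A hypothetical example of the inconsistency (not from the issue text):

  struct B { typedef int T; };
  struct D : B {
    int T;              // non-type member: hides B::T
    operator int();
  };
  typedef double T;     // a different type in the enclosing context
  void h(D d) {
    d.operator T();     // under the issue 1111 wording, the lookup in D finds the
                        // non-type member T, which does not denote a type, so the
                        // fallback lookup in the context of the expression finds ::T
                        // (double) and the call names a nonexistent operator double();
                        // a lookup that ignored non-type names, as for
                        // elaborated-type-specifiers, would find B::T (int) instead
  }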
An example in 3.5 [basic.link] paragraph 6 creates two file-scope variables with the same name, one with internal linkage and one with external.
static void f();
static int i = 0;      //1
void g() {
  extern void f();     // internal linkage
  int i;               //2: i has no linkage
  {
    extern void f();   // internal linkage
    extern int i;      //3: external linkage
  }
}
Is this really what we want? C99 has 6.2.2.7/7, which gives undefined behavior for having an identifier appear with internal and external linkage in the same translation unit. C++ doesn't seem to have an equivalent.
Notes from October 2003 meeting:
We agree that this is an error. We propose to leave the example but change the comment to indicate that line //3 has undefined behavior, and elsewhere add a normative rule giving such a case undefined behavior.
Proposed resolution (October, 2005):
Change 3.5 [basic.link] paragraph 6 as indicated:
...Otherwise, if no matching entity is found, the block scope entity receives external linkage. If, within a translation unit, the same entity is declared with both internal and external linkage, the behavior is undefined.
[Example:
static void f();
static int i = 0;       // 1
void g () {
  extern void f ();     // internal linkage
  int i;                // 2: i has no linkage
  {
    extern void f ();   // internal linkage
    extern int i;       // 3: external linkage
  }
}

There are three objects named i in this program. The object with internal linkage introduced by the declaration in global scope (line //1), the object with automatic storage duration and no linkage introduced by the declaration on line //2, and the object with static storage duration and external linkage introduced by the declaration on line //3. Without the declaration at line //2, the declaration at line //3 would link with the declaration at line //1. But because the declaration with internal linkage is hidden, //3 is given external linkage, resulting in a linkage conflict. —end example]
Notes from the April 2006 meeting:
According to 3.5 [basic.link] paragraph 9, the two variables with linkage in the proposed example are not “the same entity” because they do not have the same linkage. Some other formulation will be needed to describe the relationship between those two variables.
Notes from the October 2006 meeting:
The CWG decided that it would be better to make a program with this kind of linkage mismatch ill-formed instead of having undefined behavior.
Is the following well-formed?
int f() {
  int i = 3;
  new (&i) float(1.2);
  return i;
}
The wording that is intended to prevent such shenanigans, 3.8 [basic.life] paragraphs 7-9, doesn't quite apply here. In particular, paragraph 7 reads,
If, after the lifetime of an object has ended and before the storage which the object occupied is reused or released, a new object is created at the storage location which the original object occupied, a pointer that pointed to the original object, a reference that referred to the original object, or the name of the original object will automatically refer to the new object and, once the lifetime of the new object has started, can be used to manipulate the new object, if:
the storage for the new object exactly overlays the storage location which the original object occupied, and
the new object is of the same type as the original object (ignoring the top-level cv-qualifiers), and...
The problem here is that this wording only applies “after the lifetime of an object has ended and before the storage which the object occupied is reused;” for an object of a scalar type, its lifetime only ends when the storage is reused or released (paragraph 1), so it appears that these restrictions cannot apply to such objects.
Proposed resolution (August, 2010):
This issue is resolved by the resolution of issue 1116.
Related to issue 1027, consider:
int f() {
  union U { double d; } u1, u2;
  (int&)u1.d = 1;
  u2 = u1;
  return (int&)u2.d;
}
Does this involve undefined behavior? 3.8 [basic.life] paragraph 4 seems to say that it's OK to clobber u1 with an int object. Then union assignment copies the object representation, possibly creating an int object in u2 and making the return statement well-defined. If this is well-defined, compilers are significantly limited in the assumptions they can make about type aliasing. On the other hand, the variant where U has an array of unsigned char member must be well-defined in order to support std::aligned_storage.
Suggested resolution: Clarify that this case is undefined, but that adding an array of unsigned char to union U would make it well-defined — if a storage location is allocated with a particular type, it should be undefined to create an object in that storage if it would be undefined to access the stored value of the object through the allocated type.
Proposed resolution (August, 2010):
Change 3.8 [basic.life] paragraph 1 as follows:
...The lifetime of an object of type T begins when storage with the proper alignment and size for type T is obtained, and either:
- storage with the proper alignment and size for type T is obtained, and
if the object has non-trivial initialization, its initialization is complete., or
if T is trivially copyable, the object representation of another T object is copied into the storage.
The lifetime of an object of type T ends...
Change 3.8 [basic.life] paragraph 4 as follows:
A program may end the lifetime of any object by reusing the storage which the object occupies or by explicitly calling the destructor for an object of a class type with a non-trivial destructor. For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released; however, if there is no explicit call to the destructor or if a delete-expression (5.3.5 [expr.delete]) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior. If a program obtains storage for an object of a particular type A (e.g. with a variable definition or new-expression) and later reuses that storage for an object of another type B such that accessing the stored value of the B object through a glvalue of type A would have undefined behavior (3.10 [basic.lval]), the behavior is undefined. [Example:
int i;
(double&)i = 1.0;   // undefined behavior

struct S {
  unsigned char alignas(double) ar[sizeof (double)];
} s;
(double&)s = 1.0;   // OK, can access stored double through s because it has
                    // an unsigned char subobject
—end example]
Change 3.10 [basic.lval] paragraph 10 as follows:
If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined52:
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type similar (as defined in 4.4 [conv.qual]) to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
a char or unsigned char type,
an aggregate or union type that includes one of the aforementioned types among its elements, bases, or non-static data members (including, recursively, an element, base, or non-static data member of a subaggregate, base, or contained union),.
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
This resolution also resolves issue 1027.
In spite of the resolution of issue 112, the exact relationship between cv-qualifiers and array types is not clear. There does not appear to be a definitive normative statement answering the question of whether an array with a const-qualified element type is itself const-qualified; the statement in 3.9.3 [basic.type.qualifier] paragraph 5,
Cv-qualifiers applied to an array type attach to the underlying element type, so the notation “cv T,” where T is an array type, refers to an array whose elements are so-qualified. Such array types can be said to be more (or less) cv-qualified than other types based on the cv-qualification of the underlying element types.
hints at an answer but is hardly decisive. For example, is the following example well-formed?
template <class T> struct is_const {
  static const bool value = false;
};
template <class T> struct is_const<const T> {
  static const bool value = true;
};
template <class T> void f(T &) {
  char checker[is_const<T>::value];
}
int const arr[1] = {};
int main() {
  f(arr);
}
Also, when 3.10 [basic.lval] paragraph 4 says,
Class prvalues can have cv-qualified types; non-class prvalues always have cv-unqualified types.
does this apply to array rvalues, as it appears? That is, given
struct S { const int arr[10]; };
is the array rvalue S().arr an array of int or an array of const int?
(The more general question is, when the Standard refers to non-class types, should it be considered to include array types? Or perhaps only arrays of non-class types?)
The aliasing rules given in 3.10 [basic.lval] paragraph 10 rely on the concept of “dynamic type.” The problem is that the dynamic type of an object often cannot be determined (or even sufficiently constrained) at the point at which an optimizer needs to be able to determine whether aliasing might occur or not. For example, consider the function
void foo(int* p, double* q) { *p = 42; *q = 3.14; }
An optimizer, on the basis of the existing aliasing rules, might decide that an int* and a double* cannot refer to the same object and reorder the assignments. This reordering, however, could result in undefined behavior if the function foo is called as follows:
void goo() {
  union {
    int i;
    double d;
  } t;
  t.i = 12;
  foo(&t.i, &t.d);
  cout << t.d << endl;
};
Here, the reference to t.d after the call to foo will be valid only if the assignments in foo are executed in the order in which they were written; otherwise, the union will contain an int object rather than a double.
One possibility would be to require that if such aliasing occurs, it be done only via member names and not via pointers.
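That is, aliasing expressed through the union's member names is visible to the implementation, whereas the pointer-based aliasing inside foo above is not (illustrative sketch, not part of any proposed wording):

  union U { int i; double d; };
  void member_access(U& t) {
    t.i = 12;     // access through member names: the change of active member
    t.d = 3.14;   // is visible here, so no reordering problem arises
  }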
Notes from the July, 2007 meeting:
This is the same issue as C's DR236. The CWG expressed a desire to address the issue the same way C99 does. The issue also occurs in C++ when placement new is used to end the lifetime of one object and start the lifetime of a different object occupying the same storage.
The current wording of the Standard does not recognize the fact that the alignment of a complete object of a given type may be different from its alignment as a subobject. This arises in particular with virtual base classes. For example,
struct B { long double d; }; struct D: virtual B { char c; };
When D is a complete object, it will have a subobject of type B, which must be aligned appropriately for a long double. On the other hand, if D appears as a suboject of another object, the B subobject might be part of a different subobject, reducing the alignment requirement on the D subobject.
The Standard should make clear that it is the complete-object alignment that is being described, in parallel with the distinction between the size of a complete object and a subobject of the same type.
3.11 [basic.align] speaks of “alignment requirements,” and 3.7.4.1 [basic.stc.dynamic.allocation] requires the result of an allocation function to point to “suitably aligned” storage, but there is no explicit statement of what happens when these requirements are violated (presumably undefined behavior).
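For instance, with a hypothetical over-aligned type (illustrative only):

  struct alignas(64) X { char c; };
  X* p = new X;   // 3.7.4.1 requires the result to point to suitably aligned storage;
                  // the Standard does not say explicitly what happens if the allocation
                  // function fails to honor this (presumably undefined behavior)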
According to 4.1 [conv.lval] paragraph 1, applying the lvalue-to-rvalue conversion to any uninitialized object results in undefined behavior. However, character types are intended to allow any data, including uninitialized objects and padding, to be copied (hence the statements in 3.9.1 [basic.fundamental] paragraph 1 that “For character types, all bits of the object representation participate in the value representation” and in 3.10 [basic.lval] paragraph 15 that char and unsigned char types can alias any object). The lvalue-to-rvalue conversion should be permitted on uninitialized objects of character type without evoking undefined behavior.
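For example (an illustrative sketch of the case intended to be valid):

  void copy_byte() {
    unsigned char buf[1];       // uninitialized automatic object
    unsigned char c = buf[0];   // lvalue-to-rvalue conversion on an uninitialized object
                                // of character type; intended to be usable for copying
                                // raw bytes, but 4.1 [conv.lval] currently makes this
                                // undefined behavior
  }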
The descriptions of explicit (5.2.9 [expr.static.cast] paragraph 9) and implicit (4.11 [conv.mem] paragraph 2) pointer-to-member conversions differ in two significant ways:
(This situation cannot arise in an implicit pointer-to-member conversion where the source value is something like &X::f, since you can only implicitly convert from pointer-to-base-member to pointer-to-derived-member. However, if the source value is the result of an explicit "up-cast," the target type of the conversion might still not contain the member referred to by the source value.)
The first difference seems like an oversight. It is not clear whether the latter difference is intentional or not.
(See also issue 794.)
There are at least a couple of problems in the description of the various id-expressions in 5.1.1 [expr.prim.general]:
Paragraph 4 embodies an incorrect assumption about the syntax of qualified-ids:
The operator :: followed by an identifier, a qualified-id, or an operator-function-id is a primary-expression.
The problem here is that the :: is actually part of the syntax of qualified-id; consequently, “:: followed by... a qualified-id” could be something like “:: ::i,” which is ill-formed. Presumably this should say something like, “A qualified-id with no nested-name-specifier is a primary-expression.”
More importantly, some kinds of id-expressions are not described by 5.1.1 [expr.prim.general]. The structure of this section is that the result, type, and lvalue-ness are specified for each of the cases it covers:
paragraph 4 deals with qualified-ids that have no nested-name-specifier
paragraph 7 deals with bare identifiers and with qualified-ids containing a nested-name-specifier that names a class
paragraph 8 deals with qualified-ids containing a nested-name-specifier that names a namespace
This treatment leaves unspecified all the non-identifier unqualified-ids (operator-function-id, conversion-function-id, and template-id), as well as (perhaps) “:: template-id” (it's not clear whether the “:: followed by a qualified-id” case is supposed to apply to template-ids or not). Note also that the proposed resolution of issue 301 slightly exacerbates this problem by removing the form of operator-function-id that contains a template-argument-list; as a result, references like “::operator+<X>” are no longer covered in 5.1.1 [expr.prim.general].
At least a couple of places in the IS state that indirection through a null pointer produces undefined behavior: 1.9 [intro.execution] paragraph 4 gives "dereferencing the null pointer" as an example of undefined behavior, and 8.3.2 [dcl.ref] paragraph 4 (in a note) uses this supposedly undefined behavior as justification for the nonexistence of "null references."
However, 5.3.1 [expr.unary.op] paragraph 1, which describes the unary "*" operator, does not say that the behavior is undefined if the operand is a null pointer, as one might expect. Furthermore, at least one passage gives dereferencing a null pointer well-defined behavior: 5.2.8 [expr.typeid] paragraph 2 says
If the lvalue expression is obtained by applying the unary * operator to a pointer and the pointer is a null pointer value (4.10 [conv.ptr]), the typeid expression throws the bad_typeid exception (18.7.3 [bad.typeid]).
This is inconsistent and should be cleaned up.
Bill Gibbons:
At one point we agreed that dereferencing a null pointer was not undefined; only using the resulting value had undefined behavior.
For example:
char *p = 0; char *q = &*p;
Similarly, dereferencing a pointer to the end of an array should be allowed as long as the value is not used:
char a[10]; char *b = &a[10]; // equivalent to "char *b = &*(a+10);"
Both cases come up often enough in real code that they should be allowed.
Mike Miller:
I can see the value in this, but it doesn't seem to be well reflected in the wording of the Standard. For instance, presumably *p above would have to be an lvalue in order to be the operand of "&", but the definition of "lvalue" in 3.10 [basic.lval] paragraph 2 says that "an lvalue refers to an object." What's the object in *p? If we were to allow this, we would need to augment the definition to include the result of dereferencing null and one-past-the-end-of-array.
Tom Plum:
Just to add one more recollection of the intent: I was very happy when (I thought) we decided that it was only the attempt to actually fetch a value that creates undefined behavior. The words which (I thought) were intended to clarify that are the first three sentences of the lvalue-to-rvalue conversion, 4.1 [conv.lval]:
An lvalue (3.10 [basic.lval]) of a non-function, non-array type T can be converted to an rvalue. If T is an incomplete type, a program that necessitates this conversion is ill-formed. If the object to which the lvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, a program that necessitates this conversion has undefined behavior.
In other words, it is only the act of "fetching", of lvalue-to-rvalue conversion, that triggers the ill-formed or undefined behavior. Simply forming the lvalue expression, and then for example taking its address, does not trigger either of those errors. I described this approach to WG14 and it may have been incorporated into C 1999.
Mike Miller:
If we admit the possibility of null lvalues, as Tom is suggesting here, that significantly undercuts the rationale for prohibiting "null references" -- what is a reference, after all, but a named lvalue? If it's okay to create a null lvalue, as long as I don't invoke the lvalue-to-rvalue conversion on it, why shouldn't I be able to capture that null lvalue as a reference, with the same restrictions on its use?
I am not arguing in favor of null references. I don't want them in the language. What I am saying is that we need to think carefully about adopting the permissive approach of saying that it's all right to create null lvalues, as long as you don't use them in certain ways. If we do that, it will be very natural for people to question why they can't pass such an lvalue to a function, as long as the function doesn't do anything that is not permitted on a null lvalue.
If we want to allow &*(p=0), maybe we should change the definition of "&" to handle dereferenced null specially, just as typeid has special handling, rather than changing the definition of lvalue to include dereferenced nulls, and similarly for the array_end+1 case. It's not as general, but I think it might cause us fewer problems in the long run.
Notes from the October 2003 meeting:
See also issue 315, which deals with the call of a static member function through a null pointer.
We agreed that the approach in the standard seems okay: p = 0; *p; is not inherently an error. An lvalue-to-rvalue conversion would give it undefined behavior.
Proposed resolution (October, 2004):
(Note: the resolution of issue 453 also resolves part of this issue.)
Add the indicated words to 3.10 [basic.lval] paragraph 2:
An lvalue refers to an object or function or is an empty lvalue (5.3.1 [expr.unary.op]).
Add the indicated words to 5.3.1 [expr.unary.op] paragraph 1:
The unary * operator performs indirection: the expression to which it is applied shall be a pointer to an object type, or a pointer to a function type and the result is an lvalue referring to the object or function to which the expression points, if any. If the pointer is a null pointer value (4.10 [conv.ptr]) or points one past the last element of an array object (5.7 [expr.add]), the result is an empty lvalue and does not refer to any object or function. An empty lvalue is not modifiable. If the type of the expression is “pointer to T,” the type of the result is “T.” [Note: a pointer to an incomplete type (other than cv void) can be dereferenced. The lvalue thus obtained can be used in limited ways (to initialize a reference, for example); this lvalue must not be converted to an rvalue, see 4.1 [conv.lval].—end note]
Add the indicated words to 4.1 [conv.lval] paragraph 1:
If the object to which the lvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, or if the lvalue is an empty lvalue (5.3.1 [expr.unary.op]), a program that necessitates this conversion has undefined behavior.
Change 1.9 [intro.execution] as indicated:
Certain other operations are described in this International Standard as undefined (for example, the effect of dereferencing the null pointer division by zero).
Note (March, 2005):
The 10/2004 resolution interacts with the resolution of issue 73. We added wording to 3.9.2 [basic.compound] paragraph 3 to the effect that a pointer containing the address one past the end of an array is considered to “point to” another object of the same type that might be located there. The 10/2004 resolution now says that it would be undefined behavior to use such a pointer to fetch the value of that object. There is at least the appearance of conflict here; it may be all right, but it at least needs to be discussed further.
Notes from the April, 2005 meeting:
The CWG agreed that there is no contradiction between this direction and the resolution of issue 73. However, “not modifiable” is a compile-time concept, while in fact this deals with runtime values and thus should produce undefined behavior instead. Also, there are other contexts in which lvalues can occur, such as the left operand of . or .*, which should also be restricted. Additional drafting is required.
(See also issue 1102.)
It is not clear from 5.3.4 [expr.new] whether a deleted operator delete is referenced by a new-expression in which there is no initialization or in which the initialization cannot throw an exception, rendering the program ill-formed. (The question also arises as to whether such a new-expression constitutes a “use” of the deallocation function in the sense of 3.2 [basic.def.odr].)
Notes from the July, 2009 meeting:
The rationale for defining a deallocation function as deleted would presumably be to prevent such objects from being freed. Treating the new-expression as a use of such a deallocation function would mean that such objects could not be created in the first place. There is already an exemption from freeing an object if “a suitable deallocation function [cannot] be found;” a deleted deallocation function should be treated similarly.
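A sketch of the situation in question (the class is invented for illustration); the cited rationale suggests the new-expression in g should be well-formed, with the deleted deallocation function treated like the case in which no suitable deallocation function can be found:

struct Pinned {
    static void operator delete(void*) = delete;   // objects are never to be freed
};

void g() {
    Pinned* p = new Pinned;   // no initialization here can throw; does this
                              // new-expression nevertheless reference (and
                              // thereby "use") the deleted operator delete,
                              // rendering the program ill-formed?
    (void)p;
}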
The body of a constexpr function is required by 7.1.5 [dcl.constexpr] paragraph 3 to be of the form

{ return expression; }
However, there does not seem to be any good reason for prohibiting the alternate return syntax involving a braced-init-list. The restriction should be removed.
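For example (a sketch; the aggregate type and the constexpr functions are invented for illustration):

struct point { int x, y; };

constexpr point origin() { return point{0, 0}; }   // allowed: { return expression ; }
constexpr point corner() { return {1, 2}; }        // rejected by the current wording,
                                                    // although arguably equivalent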
Proposed resolution (March, 2010):
Change 6.6.3 [stmt.return] paragraph 2 as follows:
A return statement without an expression with neither an expression nor a braced-init-list can be used only in functions that do not return a value...
Change 7.1.5 [dcl.constexpr] paragraph 3 bullets 4 and 5 as follows:
its function-body shall be a compound-statement of the form

{ return expression; }

where expression is a potential constant expression (5.19), or

{ return braced-init-list; }

where every assignment-expression that is an initializer-clause appearing directly or indirectly within the braced-init-list is a potential constant expression
every constructor call and implicit conversion used in converting expression to the function return type initializing the return value (6.6.3 [stmt.return], 8.5 [dcl.init]) shall be one of those allowed in a constant expression (5.19 [expr.const]).
Notes from the March, 2010 meeting:
The new wording added in 5.19 [expr.const] in support of reference parameters for constexpr functions should also be considered to see whether additional changes are needed.
7.3.1.2 [namespace.memdef] paragraph 3 says,
If a friend declaration in a non-local class first declares a class or function the friend class or function is a member of the innermost enclosing namespace... When looking for a prior declaration of a class or a function declared as a friend, scopes outside the innermost enclosing namespace scope are not considered.

It is not clear from this passage how to determine whether an entity is "first declared" in a friend declaration. One question is whether a using-declaration influences this determination. For instance:
void foo();
namespace A {
    using ::foo;
    class X {
        friend void foo();
    };
}

Is the friend declaration a reference to ::foo or a different foo?
Part of the question involves determining the meaning of the word "synonym" in 7.3.3 [namespace.udecl] paragraph 1:
A using-declaration introduces a name into the declarative region in which the using-declaration appears. That name is a synonym for the name of some entity declared elsewhere.

Is "using ::foo;" the declaration of a function or not?
More generally, the question is how to describe the lookup of the name in a friend declaration.
John Spicer: When a declaration specifies an unqualified name, that name is declared, not looked up. There is a mechanism in which that declaration is linked to a prior declaration, but that mechanism is not, in my opinion, via normal name lookup. So, the friend always declares a member of the nearest namespace scope regardless of how that name may or may not already be declared there.
Mike Miller: 3.4.1 [basic.lookup.unqual] paragraph 7 says:
A name used in the definition of a class X outside of a member function body or nested class definition shall be declared in one of the following ways:... [Note: when looking for a prior declaration of a class or function introduced by a friend declaration, scopes outside of the innermost enclosing namespace scope are not considered.]

The presence of this note certainly implies that this paragraph describes the lookup of names in friend declarations.
John Spicer: It most certainly does not. If that section described the friend lookup it would yield the incorrect results for the friend declarations of f and g below. I don't know why that note is there, but it can't be taken to mean that that is how the friend lookup is done.
void f(){}
void g(){}

class B {
    void g();
};
class A : public B {
    void f();
    friend void f();   // ::f not A::f
    friend void g();   // ::g not B::g
};
Mike Miller: If so, the lookups for friend functions and classes behave differently. Consider the example in 3.4.4 [basic.lookup.elab] paragraph 3:
struct Base {
    struct Data;         // OK: declares nested Data
    friend class Data;   // OK: nested Data is a friend
};
If the friend declaration is not a reference to ::foo, there is a related but separate question: does the friend declaration introduce a conflicting (albeit "invisible") declaration into namespace A, or is it simply a reference to an as-yet undeclared (and, in this instance, undeclarable) A::foo? Another part of the example in 3.4.4 [basic.lookup.elab] paragraph 3 is related:
struct Data {
    friend struct Glob;  // OK: Refers to (as yet) undeclared Glob
                         // at global scope.
};
John Spicer: You can't refer to something that has not yet been declared. The friend is a declaration of Glob, it just happens to declare it in a such a way that its name cannot be used until it is redeclared.
(A somewhat similar question has been raised in connection with issue 36. Consider:
namespace N { struct S { }; } using N::S; struct S; // legal?
According to 9.1 [class.name] paragraph 2,
A declaration consisting solely of class-key identifier ; is either a redeclaration of the name in the current scope or a forward declaration of the identifier as a class name.
Should the elaborated type declaration in this example be considered a redeclaration of N::S or an invalid forward declaration of a different class?)
(See also issues 95, 136, 139, 143, 165, and 166, as well as paper J16/00-0006 = WG21 N1229.)
8.3.2 [dcl.ref] paragraph 4 says:
A reference shall be initialized to refer to a valid object or function. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the "object" obtained by dereferencing a null pointer, which causes undefined behavior ...]
What is a "valid" object? In particular the expression "valid object" seems to exclude uninitialized objects, but the response to Core Issue 363 clearly says that's not the intent. This is an example (overloading construction on constness of *this) by John Potter, which I think is supposed to be legal C++ though it binds references to objects that are not initialized yet:
struct Fun { int x, y; Fun (int x, Fun const&) : x(x), y(42) { } Fun (int x, Fun&) : x(x), y(0) { } }; int main () { const Fun f1 (13, f1); Fun f2 (13, f2); cout << f1.y << " " << f2.y << "\n"; }
Suggested resolution: Changing the final part of 8.3.2 [dcl.ref] paragraph 4 to:
A reference shall be initialized to refer to an object or function. From its point of declaration on (see 3.3.2 [basic.scope.pdecl]) its name is an lvalue which refers to that object or function. The reference may be initialized to refer to an uninitialized object but, in that case, it is usable in limited ways (3.8 [basic.life], paragraph 6) [Note: On the other hand, a declaration like this:

int & ref = *(int*)0;

is ill-formed because ref will not refer to any object or function ]
I also think a "No diagnostic is required" provision should be added (what about something like int& r = r;?)
Proposed Resolution (October, 2004):
(Note: the following wording depends on the proposed resolution for issue 232.)
Change 8.3.2 [dcl.ref] paragraph 4 as follows:
A reference shall be initialized to refer to a valid object or function. If an lvalue to which a reference is directly bound designates neither an existing object or function of an appropriate type (8.5.3 [dcl.init.ref]), nor a region of memory of suitable size and alignment to contain an object of the reference's type (1.8 [intro.object], 3.8 [basic.life], 3.9 [basic.types]), the behavior is undefined. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” empty lvalue obtained by dereferencing a null pointer, which causes undefined behavior. As does not designate an object or function. Also, as described in 9.6 [class.bit], a reference cannot be bound directly to a bit-field. ]
The name of a reference shall not be used in its own initializer. Any other use of a reference before it is initialized results in undefined behavior. [Example:
int& f(int&);
int& g();
extern int& ir3;
int* ip = 0;
int& ir1 = *ip;      // undefined behavior: null pointer
int& ir2 = f(ir3);   // undefined behavior: ir3 not yet initialized
int& ir3 = g();
int& ir4 = f(ir4);   // ill-formed: ir4 used in its own initializer

—end example]
Rationale: The proposed wording goes beyond the specific concerns of the issue. It was noted that, while the current wording makes cases like int& r = r; ill-formed (because r in the initializer does not "refer to a valid object"), an inappropriate initialization can only be detected, if at all, at runtime and thus "undefined behavior" is a more appropriate treatment. Nevertheless, it was deemed desirable to continue to require a diagnostic for obvious compile-time cases.
It was also noted that the current Standard does not say anything about using a reference before it is initialized. It seemed reasonable to address both of these concerns in the same wording proposed to resolve this issue.
Notes from the April, 2005 meeting:
The CWG decided that whether to require an implementation to diagnose initialization of a reference to itself should be handled as a separate issue (504) and also suggested referring to “storage” instead of “memory” (because 1.8 [intro.object] defines an object as a “region of storage”).
Proposed Resolution (April, 2005):
(Note: the following wording depends on the proposed resolution for issue 232.)
Change 8.3.2 [dcl.ref] paragraph 4 as follows:
A reference shall be initialized to refer to a valid object or function. If an lvalue to which a reference is directly bound designates neither an existing object or function of an appropriate type (8.5.3 [dcl.init.ref]), nor a region of storage of suitable size and alignment to contain an object of the reference's type (1.8 [intro.object], 3.8 [basic.life], 3.9 [basic.types]), the behavior is undefined. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” empty lvalue obtained by dereferencing a null pointer, which causes undefined behavior. As does not designate an object or function. Also, as described in 9.6 [class.bit], a reference cannot be bound directly to a bit-field. ]
Any use of a reference before it is initialized results in undefined behavior. [Example:
int& f(int&);
int& g();
extern int& ir3;
int* ip = 0;
int& ir1 = *ip;      // undefined behavior: null pointer
int& ir2 = f(ir3);   // undefined behavior: ir3 not yet initialized
int& ir3 = g();
int& ir4 = f(ir4);   // undefined behavior: ir4 used in its own initializer

—end example]
Note (February, 2006):
The word “use” in the last paragraph of the proposed resolution was intended to refer to the description in 3.2 [basic.def.odr] paragraph 2. However, that section does not define what it means for a reference to be “used,” dealing only with objects and functions. Additional drafting is required to extend 3.2 [basic.def.odr] paragraph 2 to apply to references.
Additional note (May, 2008):
The proposed resolution for issue 570 adds wording to define “use” for references.
EDG rejects this code:
template <typename T> struct S {};
void f (S<int (*)[]>);

G++ accepts it.
This is another case where the standard isn't very clear:
The language from 8.3.5 [dcl.fct] is:
If the type of a parameter includes a type of the form "pointer to array of unknown bound of T" or "reference to array of unknown bound of T," the program is ill-formed.

Since "includes a type" is not a term defined in the standard, we're left to guess what this means. (It would be better if this were a recursive definition, the way a type theoretician would do it:
Notes from April 2003 meeting:
We agreed that the example should be allowed.
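A sketch of the distinction the CWG appears to be drawing (this is an interpretation of the intent, not the Standard's wording): the prohibition is aimed at the parameter's own type, not at types that merely appear as template arguments within it:

template <typename T> struct S {};

void f(S<int (*)[]>);   // agreed to be well-formed: the parameter's type is S<int (*)[]>
void g(int (*p)[]);     // the parameter's type itself is "pointer to array of
                        // unknown bound of int" -- the case 8.3.5 is aimed at
void h(int (&r)[]);     // likewise "reference to array of unknown bound of int"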
8.5 [dcl.init] paragraph 16 describes three kinds of initializers: a single expression, a braced-init-list, and a parenthesized list of expressions. It is not clear which, if any, of these categories is the appropriate description for an initializer like
T t( { 1, 2 } )
and thus not clear which of the bullets in the list applies.
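A sketch placing the problematic form next to the three categories (the class and its constructors are invented for illustration):

#include <initializer_list>

struct T {
    T(int, int);
    T(std::initializer_list<int>);
};

T a = { 1, 2 };     // braced-init-list
T b(1, 2);          // parenthesized list of expressions
T c = T(1, 2);      // single expression
T d( { 1, 2 } );    // a parenthesized list whose only element is a braced-init-list:
                    // which bullet of 8.5 paragraph 16 describes it?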
There is an inconsistency in the handling of references vs. pointers in user-defined conversions and overloading. The reason is that the combination of 8.5.3 [dcl.init.ref] and 4.4 [conv.qual] circumvents the standard way of ranking conversion functions, which was probably not the intention of the designers of the standard.
Let's start with some examples, to show what it is about:
struct Z { Z(){} };

struct A {
    Z x;
    operator Z *() { return &x; }
    operator const Z *() { return &x; }
};

struct B {
    Z x;
    operator Z &() { return x; }
    operator const Z &() { return x; }
};

int main() {
    A a;
    Z *a1=a;
    const Z *a2=a;   // not ambiguous

    B b;
    Z &b1=b;
    const Z &b2=b;   // ambiguous
}
So while both classes A and B are structurally equivalent, there is a difference in operator overloading. I want to start with the discussion of the pointer case (const Z *a2=a;): 13.3.3 [over.match.best] is used to select the best viable function. Rule 4 selects A::operator const Z*() as best viable function using 13.3.3.2 [over.ics.rank] since the implicit conversion sequence const Z* -> const Z* is a better conversion sequence than Z* -> const Z*.
So what is the difference in the reference case? Cv-qualification conversion is only applicable to pointers according to 4.4 [conv.qual]. According to 8.5.3 [dcl.init.ref] paragraphs 4-7, references are initialized by binding, using the concept of reference-compatibility. The problem with this is that in this context of binding there is no conversion, and therefore there is also no comparison of conversion sequences. More exactly, all such bindings can be considered identity conversions according to 13.3.3.1.4 [over.ics.ref] paragraph 1, and therefore compare equal. So binding a const Z& to a const Z& is as good as binding a const Z& to a Z& in terms of overloading. Therefore const Z &b2=b; is ambiguous. [13.3.3.1.4 [over.ics.ref] paragraph 5 and 13.3.3.2 [over.ics.rank] paragraph 3 rule 3 (S1 and S2 are reference bindings ...) do not seem to apply to this case.]
There are other ambiguities, that result in the special treatment of references: Example:
struct A {int a;};
struct B: public A {
    B() {};
    int b;
};

struct X {
    B x;
    operator A &() { return x; }
    operator B &() { return x; }
};

int main() {
    X x;
    A &g=x;   // ambiguous
}
Since both references of class A and B are reference compatible with references of class A and since from the point of ranking of implicit conversion sequences they are both identity conversions, the initialization is ambiguous.
So why should this be a defect?
So overall I think this was not the intention of the authors of the standard.
So how could this be fixed? For comparing conversion sequences (and only for comparing) reference binding should be treated as if it was a normal assignment/initialization and cv-qualification would have to be defined for references. This would affect 8.5.3 [dcl.init.ref] paragraph 6, 4.4 [conv.qual] and probably 13.3.3.2 [over.ics.rank] paragraph 3.
Another fix could be to add a special case in 13.3.3 [over.match.best] paragraph 1.
It's not clear how lookup of a non-dependent qualified name should be handled in a non-static member function of a class template. For example,
struct A { int f(int); static int f(double); }; struct B {}; template<typename T> struct C : T { void g() { A::f(0); } };
The call to A::f inside C::g() appears non-dependent, so one might expect that it would be bound at template definition time to A::f(double). However, the resolution for issue 515 changed 9.3.1 [class.mfct.non-static] paragraph 3 to transform an id-expression to a member access expression using (*this). if lookup resolves the name to a non-static member of any class, making the reference dependent. The result is that if C is instantiated with A, A::f(int) is called; if C is instantiated with B, the call is ill-formed (the call is transformed to (*this).A::f(0), and there is no A subobject in C<B>). Both these results seem unintuitive.
(See also issue 1017.)
Notes from the November, 2010 meeting:
The CWG agreed that the resolution of issue 515 was ill-advised and should be reversed.

The following innocuous-appearing code is currently ill-formed:
struct A { int a; };

struct B {
    void f() {
        decltype(A::a) i;   // ill-formed
    }
};
The reason is that, according to 9.3.1 [class.mfct.non-static] paragraph 3, the reference to A::a is transformed into (*this).A::a, and there is no A subobject of B. It would seem reasonable to suppress this transformation in unevaluated operands, where a reference to a non-static member is permitted without an object expression.
(See also issue 1005.)
Notes from the November, 2010 meeting:
The CWG agreed that the resolution of issue 515 was ill-advised and should be reversed.

The resolution of issue 372 leaves unclear whether the following are well-formed or not:
class C {
    typedef int I;   // private
    template <int> struct X;
    template <int> friend struct Y;
};

template <C::I> struct C::X { };   // C::I accessible to member?
template <C::I> struct Y { };      // C::I accessible to friend?
Presumably the answer to both questions is “yes,” but the new wording does not address template-parameters.
Proposed resolution (June, 2008):
Change 11 [class.access] paragraph 6 as follows:
...For purposes of access control, the base-specifiers of a class, the template-parameters of a template-declaration, and the definitions of class members that appear outside of the class definition are considered to be within the scope of that class...
Notes from the September, 2008 meeting:
The proposed resolution preserves the word “scope” as a holdover from the original specification prior to issue 372, which intended to change access determination from a scope-based model to an entity-based model. The resolution should eliminate all references to scope and simply use the entity-based model.
(See also issue 718.)
Proposed resolution (February, 2010):
Change 11 [class.access] paragraphs 6-7 as follows:
All access controls in Clause 11 [class.access] affect the ability to access a class member name from a declaration of a particular scope entity, including references appearing in those parts of the declaration that precede the name of the entity being declared and implicit references to constructors, conversion functions, and destructors involved in the creation and destruction of a static data member. For purposes of access control, the base-specifiers of a class and the definitions of class members that appear outside of the class definition are considered to be within the scope of that class. In particular, access controls apply as usual to member names accessed as part of a function return type, even though it is not possible to determine the access privileges of that use without first parsing the rest of the function declarator. Similarly, access control for implicit calls to the constructors, the conversion functions, or the destructor called to create and destroy a static data member is performed as if these calls appeared in the scope of the member's class. [Example:
class A {
    typedef int I;   // private member
    I f();
    friend I g(I);
    static I x;
    template<int> struct X;
    template<int> friend struct Y;
protected:
    struct B { };
};

A::I A::f() { return 0; }
A::I g(A::I p = A::x);
A::I g(A::I p) { return 0; }
A::I A::x = 0;
template<A::I> struct A::X { };
template<A::I> struct Y { };
struct D: A::B, A { };

Here, all the uses of A::I are well-formed because A::f and, A::x, and A::X are members of class A and g is a friend and Y are friends of class A. This implies, for example, that access checking on the first use of A::I must be deferred until it is determined that this use of A::I is as the return type of a member of class A. Similarly, the use of A::B as a base-specifier is well-formed because D is derived from A, so checking of base-specifiers must be deferred until the entire base-specifier-list has been seen. —end example]
The access rules in 11.2 [class.access.base] do not appear to handle references in nested classes and outside of nonstatic member functions correctly. For example,
struct A {
    typedef int I;   // public
};
struct B: private A { };
struct C: B {
    void f() {
        I i1;        // error: access violation
    }
    I i2;            // OK
    struct D {
        I i3;        // OK
        void g() {
            I i4;    // OK
        }
    };
};
The reason for this discrepancy is that the naming class in the reference to I is different in these cases. According to 11.2 [class.access.base] paragraph 5,
The access to a member is affected by the class in which the member is named. This naming class is the class in which the member name was looked up and found.
In the case of i1, the reference to I is subject to the transformation described in 9.3.1 [class.mfct.non-static] paragraph 3:
Similarly during name lookup, when an unqualified-id (5.1 [expr.prim]) used in the definition of a member function for class X resolves to a static member, an enumerator or a nested type of class X or of a base class of X, the unqualified-id is transformed into a qualified-id (5.1 [expr.prim]) in which the nested-name-specifier names the class of the member function.
As a result, the reference to I in the declaration of i1 is transformed to C::I, so that the naming class is C, and I is inaccessible in C. In the remaining cases, however, the transformation does not apply. Thus, the naming class of I in these references is A, and I is publicly accessible in A.
Presumably either the definition of “naming class” must be changed or the transformation of unqualified-ids must be broadened to include all uses within the scope of a class and not just within nonstatic member functions (and following the declarator-id in the definition of a static member, per 9.4 [class.static] paragraph 4).
Does the restriction in 11.5 [class.protected] apply to upcasts across protected inheritance, too? For instance,
struct B {
    int i;
};
struct I: protected B { };
struct D: I {
    void f(I* ip) {
        B* bp = ip;    // well-formed?
        bp->i = 5;     // aka "ip->i = 5;"
    }
};
I think the rationale for the 11.5 [class.protected] restriction applies equally well here — you don't know whether ip points to a D object or not, so D::f can't be trusted to treat the protected B subobject consistently with the policies of its actual complete object type.
The current treatment of “accessible base class” in 11.2 [class.access.base] paragraph 4 clearly makes the conversion from I* to B* well-formed. I think that's wrong and needs to be fixed. The rationale for the accessibility of a base class is whether “an invented public member” of the base would be accessible at the point of reference, although we obscured that a bit in the reformulation; it seems to me that the invented member ought to be considered a non-static member for this purpose and thus subject to 11.5 [class.protected].
(See also issues 385 and 471.)

Notes from October 2004 meeting:
The CWG tentatively agreed that casting across protected inheritance should be subject to the additional restriction in 11.5 [class.protected].
Proposed resolution (February, 2010):
Change 11.2 [class.access.base] paragraph 4 as follows:
A base class B of N is accessible at R, if
an invented public member of B would be a public member of N, or
R occurs in a member or friend of class N, and an invented public member of B would be a private or protected member of N, or
R occurs in a member or friend of a class P derived from N, and an invented public member of B would be a private or protected member of P, or
there exists a class S such that B is a base class of S accessible at R and S is a base class of N accessible at R.
[Example:
class B {
public:
    int m;
};

class S: private B {
    friend class N;
};

class N: private S {
    void f() {
        B* p = this;    // OK because class S satisfies the fourth condition
                        // above: B is a base class of N accessible in f() because
                        // B is an accessible base class of S and S is an accessible
                        // base class of N.
    }
};

class N2: protected B { };

class P2: public N2 {
    void f2(N2* n2p) {
        B* bp = n2p;    // error: invented member would be protected and naming
                        // class N2 not the same as or derived from the referencing
                        // class P2 (cf 11.5 [class.protected])
    }
};

—end example]
Mark Mitchell raised a number of issues related to the resolution of issue 244 and of destructor lookup in general.
Issue 244 says:
... in a qualified-id of the form:

::opt nested-name-specifieropt class-name :: ~ class-name
the second class-name is looked up in the same scope as the first.
But if the reference is "p->X::~X()", the first class-name is looked up in two places (normal lookup and a lookup in the class of p). Does the new wording mean:
This is a test case that illustrates the issue:
struct A {
    typedef A C;
};
typedef A B;
void f(B* bp) {
    bp->B::~B();   // okay B found by normal lookup
    bp->C::~C();   // okay C found by class lookup
    bp->B::~C();   // B found by normal lookup C by class -- okay?
    bp->C::~B();   // C found by class lookup B by normal -- okay?
}
A second issue concerns destructor references when the class involved is a template class.
namespace N { template <typename T> struct S { ~S(); }; } void f(N::S<int>* s) { s->N::S<int>::~S(); }
The issue here is that the grammar uses "~class-name" for destructor names, but in this case S is a template name when looked up in N.
Finally, what about cases like:
template <typename T> void f () { typename T::B x; x.template A<T>::template B<T>::~B(); }
When parsing the template definition, what checks can be done on "~B"?
Sandor Mathe adds :
The standard correction for issue 244 (now in DR status) is still incomplete.
Paragraph 5 of 3.4.3 [basic.lookup.qual] is not applicable for p->T::~T since there is no nested-name-specifier. Section 3.4.5 [basic.lookup.classref] describes the lookup of p->~T but p->T::~T is still not described. There are examples (which are non-normative) that illustrate this sort of lookup but they still leave questions unanswered. The examples imply that the name after ~ should be looked up in the same scope as the name before the :: but it is not stated. The problem is that the name to the left of the :: can be found in two different scopes. Consider the following:
struct S {
    struct C { ~C() { } };
};
typedef S::C D;
int main() {
    D* p;
    p->C::~D();   // valid?
}
Should the destructor call be valid? If there were a nested name specifier, then D should be looked for in the same scope as C. But here, C is looked for in 2 different ways. First, it is searched for in the type of the left hand side of -> and it is also looked for in the lexical context. It is found in one or if both, they must match. So, C is found in the scope of what p points at. Do you only look for D there? If so, this is invalid. If not, you would then look for D in the context of the expression and find it. They refer to the same underlying destructor so this is valid. The intended resolution of the original defect report of the standard was that the name before the :: did not imply a scope and you did not look for D inside of C. However, it was not made clear whether this was to be resolved by using the same lookup mechanism or by introducing a new form of lookup which is to look in the left hand side if that is where C was found, or in the context of the expression if that is where C was found. Of course, this begs the question of what should happen when it is found in both? Consider the modification to the above case when C is also found in the context of the expression. If you only look where you found C, is this now valid because it is in 1 of the two scopes or is it invalid because C was in both and D is only in 1?
struct S {
    struct C { ~C() { } };
};
typedef S::C D;
typedef S::C C;
int main() {
    D* p;
    p->C::~D();   // valid?
}
I agree that the intention of the committee is that the original test case in this defect is broken. The standard committee clearly thinks that the last name before the last :: does not induce a new scope which is our current interpretation. However, how this is supposed to work is not defined. This needs clarification of the standard.
Martin Sebor adds this example (September 2003), along with errors produced by the EDG front end:
namespace N {
    struct A { typedef A NA; };
    template <class T> struct B { typedef B NB; typedef T BT; };
    template <template <class> class T> struct C { typedef C NC; typedef T<A> CA; };
}

void foo (N::A *p) {
    p->~NA ();
    p->NA::~NA ();
}

template <class T> void foo (N::B<T> *p) {
    p->~NB ();
    p->NB::~NB ();
}

template <class T> void foo (typename N::B<T>::BT *p) {
    p->~BT ();
    p->BT::~BT ();
}

template <template <class> class T> void foo (N::C<T> *p) {
    p->~NC ();
    p->NC::~NC ();
}

template <template <class> class T> void foo (typename N::C<T>::CA *p) {
    p->~CA ();
    p->CA::~CA ();
}

Edison Design Group C/C++ Front End, version 3.3 (Sep 3 2003 11:54:55)
Copyright 1988-2003 Edison Design Group, Inc.

"t.cpp", line 16: error: invalid destructor name for type "N::B<T>"
    p->~NB ();
        ^
"t.cpp", line 17: error: qualifier of destructor name "N::B<T>::NB" does not match type "N::B<T>"
    p->NB::~NB ();
        ^
"t.cpp", line 30: error: invalid destructor name for type "N::C<T>"
    p->~NC ();
        ^
"t.cpp", line 31: error: qualifier of destructor name "N::C<T>::NC" does not match type "N::C<T>"
    p->NC::~NC ();
        ^

4 errors detected in the compilation of "t.cpp".
John Spicer: The issue here is that we're unhappy with the destructor names when doing semantic analysis of the template definitions (not during an instantiation).
My personal feeling is that this is reasonable. After all, why would you call p->~NB for a class that you just named as N::B<T> and you could just say p->~B?
Additional note (September, 2004)
The resolution for issue 244 removed the discussion of p->N::~S, where N is a namespace-name. However, the resolution did not make this construct ill-formed; it simply left the semantics undefined. The meaning should either be defined or the construct made ill-formed.
According to 12.7 [class.cdtor] paragraph 4,
Member functions, including virtual functions (10.3 [class.virtual]), can be called during construction or destruction (12.6.2 [class.base.init]). When a virtual function is called directly or indirectly from a constructor (including the mem-initializer or brace-or-equal-initializer for a non-static data member) or from a destructor, and the object to which the call applies is the object under construction or destruction, the function called is the one defined in the constructor or destructor's own class or in one of its bases, but not a function overriding it in a class derived from the constructor or destructor's class, or overriding it in one of the other base classes of the most derived object (1.8 [intro.object]).
This is clear regarding virtual functions called during the initialization of a class's members, but it does not specifically address the polymorphic behavior of the class during the destruction of the members. Presumably the behavior during destruction should be the exact inverse of that of the constructor, i.e., the class's virtual functions should still be called during member destruction.
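A sketch of the member-destruction case (the back-pointer arrangement is invented for illustration); the presumption stated above is that the call from ~Member, which runs after the body of ~Derived but before the Base subobject is destroyed, still dispatches to Derived::f rather than to an override in a more derived class:

struct Derived;

struct Member {
    Derived* owner;
    Member(Derived* o) : owner(o) {}
    ~Member();                      // calls a virtual function on *owner
};

struct Base {
    virtual ~Base() {}
    virtual void f() {}
};

struct Derived : Base {
    Member m;
    Derived() : m(this) {}
    virtual void f() {}
};

Member::~Member() { owner->f(); }   // runs during the destruction of Derived's members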
In addition, the wording
If the virtual function call uses an explicit class member access (5.2.5 [expr.ref]) and the object-expression refers to the object under construction or destruction but its type is neither the constructor or destructor's own class or one of its bases, the result of the call is undefined.
should be clarified that “refers to the object under construction” does not include referring to member subobjects but only to base or more-derived classes of the class under construction or destruction.
It seems odd to have an implicitly declared copy constructor (and the same for the copy assignment operator) if one of the subobjects does not have one. For example,
struct A { A(); A(A&&); }; struct B: A { }; B b; B b2(b); // error when implicitly defining B(B&), should not be declared
If we don't declare it in that case, we need to decide what happens if one base has only a move constructor and another has only a copy constructor.
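The unresolved combination might look like the following sketch (the classes are invented for illustration; the comments pose the open questions rather than state answers):

struct M { M(); M(M&&); };           // declares only a move constructor
struct K { K(); K(const K&); };      // declares only a copy constructor

struct C : M, K { };

void h(C& c) {
    C c2(c);                         // is a copy constructor declared for C at all,
                                     // and if so is it deleted because the M
                                     // subobject cannot be copied?
}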
Notes from the November, 2010 meeting:
The consensus of the CWG was to change the behavior so that all classes have a declaration of a copy constructor, but that it is defined as deleted in the cases where the declaration is omitted by the current rules.
According to the Standard (although not implemented this way in most implementations), the following code exhibits non-intuitive behavior:
struct T {
    operator short() const;
    operator int() const;
};

short s;
void f(const T& t) {
    s = t;   // surprisingly calls T::operator int() const
}
The reason for this choice is 13.6 [over.built] paragraph 18:
For every triple (L, VQ, R), where L is an arithmetic type, VQ is either volatile or empty, and R is a promoted arithmetic type, there exist candidate operator functions of the form
VQ L& operator=(VQ L&, R);
Because R is a "promoted arithmetic type," the second argument to the built-in assignment operator is int, causing the unexpected choice of conversion function.
Suggested resolution: Provide built-in assignment operators for the unpromoted arithmetic types.
Related to the preceding, but not resolved by the suggested resolution, is the following problem. Given:
struct T { operator int() const; operator double() const; };
I believe the standard requires the following assignment to be ambiguous (even though I expect that would surprise the user):
double x; void f(const T& t) { x = t; }
The problem is that both of these built-in operator=()s exist (13.6 [over.built] paragraph 18):
double& operator=(double&, int); double& operator=(double&, double);
Both are an exact match on the first argument and a user conversion on the second. There is no rule that says one is a better match than the other.
The compilers that I have tried (even in their strictest setting) do not give a peep. I think they are not following the standard. They pick double& operator=(double&, double) and use T::operator double() const.
I hesitate to suggest changes to overload resolution, but a possible resolution might be to introduce a rule that, for built-in operator= only, also considers the conversion sequence from the second to the first type. This would also resolve the earlier question.
It would still leave x += t etc. ambiguous -- which might be the desired behavior and is the current behavior of some compilers.
Notes from the 04/01 meeting:
The difference between initialization and assignment is disturbing. On the other hand, promotion is ubiquitous in the language, and this is the beginning of a very slippery slope (as the second report above demonstrates).
Additional note (August, 2010):
See issue 507 for a similar example involving comparison operators.
Static data members of template classes and of nested classes of template classes are not themselves templates but receive much the same treatment as templates. For instance, 14 [temp] paragraph 1 says that templates are only "classes or functions" but implies that "a static data member of a class template or of a class nested within a class template" is defined using the template-declaration syntax.
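For example, the definition of such a static data member is written with the template-declaration syntax even though the member itself is not a template (a minimal sketch):

template <class T> struct A {
    struct Nested {
        static int count;   // static data member of a non-template class
                            // nested within a class template
    };
    static T value;         // static data member of a class template
};

template <class T> T   A<T>::value = T();
template <class T> int A<T>::Nested::count = 0;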
There are many places in the clause, however, where static data members of one sort or another are overlooked. For instance, 14 [temp] paragraph 6 allows static data members of class templates to be declared with the export keyword. I would expect that static data members of (non-template) classes nested within class templates could also be exported, but they are not mentioned here.
Paragraph 8, however, overlooks static data members altogether and deals only with "templates" in defining the effect of the export keyword; there is no description of the semantics of defining a static data member of a template to be exported.
These are just two instances of a systematic problem. The entire clause needs to be examined to determine which statements about "templates" apply to static data members, and which statements about "static data members of class templates" also apply to static data members of non-template classes nested within class templates.
(The question also applies to member functions of template classes; see issue 217, where the phrase "non-template function" in 8.3.6 [dcl.fct.default] paragraph 4 is apparently intended not to include non-template member functions of template classes. See also issue 108, which would benefit from understanding nested classes of class templates as templates. Also, see issue 249, in which the usage of the phrase "member function template" is questioned.)
Notes from the 4/02 meeting:
Daveed Vandevoorde will propose appropriate terminology.
The EDG front-end accepts:
template <typename T> struct A { template <typename U> struct B {}; }; template <typename T> struct C : public A<T>::template B<T> { };
It rejects this code if the base-specifier is spelled A<T>::B<T>.
However, the grammar for a base-specifier does not allow the template keyword.
Suggested resolution:
It seems to me that a consistent approach to the solution that looks like it will be adopted for issue 180 (which deals with the typename keyword in similar contexts) would be to assume that B is a template if it is followed by a "<". After all, an expression cannot appear in this context.

Notes from the 4/02 meeting:
We agreed that template must be allowed in this context. The syntax needs to be changed. We also opened the related issue 343.
Additional note (August, 2010):
The same considerations apply to mem-initializer-ids, as noted in issue 1019.
It was intended for empty pack expansions to be useful in contexts like base-specifiers, e.g.,
template<class... T> struct A : T... {}; A<> x; // ok?
However, the current wording provides no description of how that might work. (More generally, the problem arises in any context where the pack expansion follows a token that should only be present when the pack expansion is non-empty: following another argument in a function call, etc.)
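A sketch of two such contexts (the functions and constructors are invented for illustration); in each, a token written before the expansion ought to disappear when the pack is empty:

template<class... T> struct A : T... {
    A(T... t) : T(t)... { }   // the colon introducing the mem-initializer-list
                              // has nothing to introduce when T is empty
};

void g(int);
void g(int, int);

template<class... T> void h(T... rest) {
    g(0, rest...);            // the comma precedes an expansion that may be empty
}

A<> a;                        // base-clause ": T..." must vanish entirely
int main() { h(); h(1); }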
Given an example like
template<typename T, typename U> struct Outer {
    template<typename X, typename Y> struct Inner;
    template<typename Y> struct Inner<T, Y> {};
    template<typename Y> struct Inner<U, Y> {};
};

Outer<int, int> outer;                       // #1
Outer<int, int>::Inner<int, float> inner;    // #2
Is #1 ill-formed because of the identical partial specializations? If not, presumably #2 is ill-formed because of the resulting ambiguity (14.5.5.1 [temp.class.spec.match] paragraph 1).
Notes from the November, 2010 meeting:
The instantiation of Outer<int,int> results in duplicate declarations of the partial specialization, which are ill-formed by 9.2 [class.mem] paragraph 1. No normative change is required, but it might be helpful to add an example like this somewhere.
In the following example, the template parameter in the partial specialization is non-deducible:
template <class T> struct A { typedef T U; }; template <class T> struct C { }; template <class T> struct C<typename A<T>::U> { };
Several compilers issue errors for this case, but there appears to be nothing in the Standard that would make this ill-formed; it simply seems that the partial specialization will never be matched, so the primary template will be used for all specializations. Should it be ill-formed?
Notes from the April, 2006 meeting:
It was noted that there are similar issues for constructors and conversion operators with non-deducible parameters, and that they should probably be dealt with similarly.
Consider the following example:
template <class T> struct Outer { struct Inner { Inner* self(); }; }; template <class T> Outer<T>::Inner* Outer<T>::Inner::self() { return this; }
According to 14.6 [temp.res] paragraph 3 (before the salient wording was inadvertently removed, see issue 559),
A qualified-id that refers to a type and in which the nested-name-specifier depends on a template-parameter (14.6.2 [temp.dep]) but does not refer to a member of the current instantiation (14.6.2.1 [temp.dep.type]) shall be prefixed by the keyword typename to indicate that the qualified-id denotes a type, forming a typename-specifier.
Because Outer<T>::Inner is a member of the current instantiation, the Standard does not currently require that it be prefixed with typename when it is used in the return type of the definition of the self() member function. However, it is difficult to parse this definition correctly without knowing that the return type is, in fact, a type, which is what the typename keyword is for. Should the Standard be changed to require typename in such contexts?
14.6.2.1 [temp.dep.type] paragraph 4 treats unqualified-ids and qualified-ids in which the nested-name-specifier refers to the current instantiation as equivalent. However, the lookups associated with these two id-expressions are different in the presence of dependent base classes (14.6.2 [temp.dep] paragraph 3): with an unqualified-id, a dependent base class scope is never examined, while with a qualified-id it is. The current wording does not specify how an example like the following is to be handled:
template<typename T> struct B {};
struct C { typedef int type; };

template<typename T> struct A : B<T>, C {
    template<typename U> type a();                  // #1
    template<typename U> typename A<T>::type a();   // #2: different from #1?
};

template<typename T> template<typename U>
typename A<T>::type A<T>::a() { ... }               // defines #1 or #2?
There seem to be two possible strategies for the handling of typename A<T>::type:
It is handled like the unqualified-id case, looking only in non-dependent base classes and not being a dependent type.
Since the current instantiation has dependent base classes, it is handled as a dependent type.
EDG seems to be doing the former, g++ the latter.
Notes from the November, 2010 meeting:
The CWG agreed that if a name is found in a non-dependent base, the type should be treated as non-dependent also.
Additional note (November, 2010):
The overall treatment of dependent base classes in handling a qualified-id in which the nested-name-specifier names the current instantiation or in a member access expression where the object expression is *this is not very clear. It would be helpful if the resolution of this issue could clarify the overall treatment while dealing with the mixed dependent/non-dependent case given in the example.
template <class T> class Foo {
public:
    typedef int Bar;
    Bar f();
};

template <class T> typename Foo<T>::Bar Foo<T>::f() { return 1; }

In the class template definition, the declaration of the member function is interpreted as:
int Foo<T>::f();

In the definition of the member function that appears outside of the class template, the return type is not known until the member function is instantiated. Must the return type of the member function be known when this out-of-line definition is seen (in which case the definition above is ill-formed)? Or is it OK to wait until the member function is instantiated to see if the type of the return type matches the return type in the class template definition (in which case the definition above is well-formed)?
Suggested resolution: (John Spicer)
My opinion (which I think matches several posted on the reflector recently) is that the out-of-class definition must match the declaration in the template. In your example they do match, so it is well formed.
I've added some additional cases that illustrate cases that I think either are allowed or should be allowed, and some cases that I don't think are allowed.
template <class T> class A { typedef int X; };

template <class T> class Foo {
public:
    typedef int Bar;
    typedef typename A<T>::X X;
    Bar f();
    Bar g1();
    int g2();
    X h();
    X i();
    int j();
};

// Declarations that are okay
template <class T> typename Foo<T>::Bar Foo<T>::f() { return 1; }
template <class T> typename Foo<T>::Bar Foo<T>::g1() { return 1; }
template <class T> int Foo<T>::g2() { return 1; }
template <class T> typename Foo<T>::X Foo<T>::h() { return 1; }

// Declarations that are not okay
template <class T> int Foo<T>::i() { return 1; }
template <class T> typename Foo<T>::X Foo<T>::j() { return 1; }

In general, if you can match the declarations up using only information from the template, then the declaration is valid.
Declarations like Foo::i and Foo::j are invalid because for a given instance of A<T>, A<T>::X may not actually be int if the class is specialized.
This is not a problem for Foo::g1 and Foo::g2 because for any instance of Foo<T> that is generated from the template you know that Bar will always be int. If an instance of Foo is specialized, the template member definitions are not used so it doesn't matter whether a specialization defines Bar as int or not.
Implementations differ in their treatment of the following code:
template <class T> struct A { typename T::X x; }; template <class T> struct B { typedef T* X; A<B> a; }; int main () { B<int> b; }
Some implementations accept it. At least one rejects it because the instantiation of A<B<int> > requires that B<int> be complete, and it is not at the point at which A<B<int> > is being instantiated.
Erwin Unruh:
In my view the program is ill-formed. My reasoning:
So each class needs the other to be complete.
The problem can be seen much easier if you replace the typedef with
typedef T (*X) [sizeof(B::a)];
Now you have a true recursion. The compiler cannot easily distinguish between a true recursion and a potential recursion.
John Spicer:
Using a class to form a qualified name does not require the class to be complete, it only requires that the named member already have been declared. In other words, this kind of usage is permitted:
class A { typedef int B; A::B ab; };
In the same way, once B has been declared in A, it is also visible to any template that uses A through a template parameter.
The standard could be more clear in this regard, but there are two notes that make this point. Both 3.4.3.1 [class.qual] and 5.1.1 [expr.prim.general] paragraph 7 contain a note that says "a class member can be referred to using a qualified-id at any point in its potential scope (3.3.7 [basic.scope.class])." A member's potential scope begins at its point of declaration.
In other words, a class has three states: incomplete, being completed, and complete. The standard permits a qualified name to be used once a name has been declared. The quotation of the notes about the potential scope was intended to support that.
So, in the original example, class A does not require the type of T to be complete, only that it have already declared a member X.
Bill Gibbons:
The template and non-template cases are different. In the non-template case the order in which the members become declared is clear. In the template case the members of the instantiation are conceptually all created at the same time. The standard does not say anything about trying to mimic the non-template case during the instantiation of a class template.
Mike Miller:
I think the relevant specification is 14.6.4.1 [temp.point] paragraph 3, dealing with the point of instantiation:
For a class template specialization... if the specialization is implicitly instantiated because it is referenced from within another template specialization, if the context from which the specialization is referenced depends on a template parameter, and if the specialization is not instantiated previous to the instantiation of the enclosing template, the point of instantiation is immediately before the point of instantiation of the enclosing template. Otherwise, the point of instantiation for such a specialization immediately precedes the namespace scope declaration or definition that refers to the specialization.
That means that the point of instantiation of A<B<int> > is before that of B<int>, not in the middle of B<int> after the declaration of B::X, and consequently a reference to B<int>::X from A<B<int> > is ill-formed.
To put it another way, I believe John's approach requires that there be an instantiation stack, with the results of partially-instantiated templates on the stack being available to instantiations above them. I don't think the Standard mandates that approach; as far as I can see, simply determining the implicit instantiations that need to be done, rewriting the definitions at their respective points of instantiation with parameters substituted (with appropriate "forward declarations" to allow for non-instantiating references), and compiling the result normally should be an acceptable implementation technique as well. That is, the implicit instantiation of the example (using, e.g., B_int to represent the generated name of the B<int> specialization) could be something like
struct B_int;

struct A_B_int {
  B_int::X x;       // error, incomplete type
};

struct B_int {
  typedef int* X;
  A_B_int a;
};
Notes from 10/01 meeting:
This was discussed at length. The consensus was that the template case should be treated the same as the non-template class case in terms of the order in which members get declared/defined and classes get completed.
Proposed resolution:
In 14.6.4.1 [temp.point] paragraph 3 change:
the point of instantiation is immediately before the point of instantiation of the enclosing template. Otherwise, the point of instantiation for such a specialization immediately precedes the namespace scope declaration or definition that refers to the specialization.
To:
the point of instantiation is the same as the point of instantiation of the enclosing template. Otherwise, the point of instantiation for such a specialization immediately precedes the nearest enclosing declaration. [Note: The point of instantiation is still at namespace scope but any declarations preceding the point of instantiation, even if not at namespace scope, are considered to have been seen.]
Add following paragraph 3:
If an implicitly instantiated class template specialization, class member specialization, or specialization of a class template references a class, class template specialization, class member specialization, or specialization of a class template containing a specialization reference that directly or indirectly caused the instantiation, the requirements of completeness and ordering of the class reference are applied in the context of the specialization reference.
and the following example
template <class T> struct A {
  typename T::X x;
};
struct B {
  typedef int X;
  A<B> a;
};
template <class T> struct C {
  typedef T* X;
  A<C> a;
};
int main () {
  C<int> c;
}
Notes from the October 2002 meeting:
This needs work. Moved back to drafting status.
Three points have been raised where the wording in 14.7.1 [temp.inst] may not be sufficiently clear.
A class template specialization is implicitly instantiated... if the completeness of the class type affects the semantics of the program...
It is not clear what it means for the "completeness... [to affect] the semantics." Consider the following example:
template<class T> struct A;
extern A<int> a;
void *foo() { return &a; }
template<class T> struct A {
#ifdef OPTION
  void *operator &() { return 0; }
#endif
};
The question here is whether it is necessary for template class A to declare an operator & for the semantics of the program to be affected. If it does not do so, the meaning of &a will be the same whether the class is complete or not and thus arguably the semantics of the program are not affected.
Presumably what was intended is whether the presence or absence of certain member declarations in the template class might be relevant in determining the meaning of the program. A clearer statement may be desirable.
If the overload resolution process can determine the correct function to call without instantiating a class template definition, it is unspecified whether that instantiation actually takes place.
The intent of this wording, as illustrated in the example in that paragraph, is to allow a "smart" implementation not to instantiate class templates if it can determine that such an instantiation will not affect the result of overload resolution, even though the algorithm described in clause 13 [over] requires that all the viable functions be enumerated, including functions that might be found as members of specializations.
Unfortunately, the looseness of the wording allowing this latitude for implementations makes it unclear what "the overload resolution process" is — is it the algorithm in 13 [over] or something else? — and what "the correct function" is.
If an implicit instantiation of a class template specialization is required and the template is declared but not defined, the program is ill-formed.
Here, it is not clear what conditions "require" an implicit instantiation. From the context, it would appear that the intent is to refer to the conditions in paragraph 4 that cause a specialization to be instantiated.
This interpretation, however, leads to different treatment of template and non-template incomplete classes. For example, by this interpretation,
class A;
template <class T> struct TA;
extern A a;
extern TA<int> ta;
void f(A*);
void f(TA<int>*);
int main() {
  f(&a);    // well-formed; undefined if A has operator &() member
  f(&ta);   // ill-formed: cannot instantiate
}
A different approach would be to understand "required" in paragraph 6 to mean that a complete type is required in the expression. In this interpretation, if an incomplete type is acceptable in the context and the class template definition is not visible, the instantiation is not attempted and the program is well-formed.
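A minimal sketch of the difference between the two readings (the names here are hypothetical):

class A;                        // incomplete non-template class
template <class T> struct TA;   // class template declared but not defined

A*       pa = 0;   // OK: an incomplete type is acceptable in this context
TA<int>* pt = 0;   // under the second reading, no instantiation is attempted
                   // and this is likewise well-formed
// sizeof(TA<int>) // would require a complete type; with no definition
                   // visible, the program would be ill-formed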
The meaning of "required" in paragraph 6 must be clarified.
Notes on 10/01 meeting:
It was felt that item 1 is solved by addition of the word "might" in the resolution for issue 63; item 2 is not much of a problem; and item 3 could be solved by changing "required" to "required to be complete".
Paragraph 17 of 14.7.3 [temp.expl.spec] says,
A member or a member template may be nested within many enclosing class templates. In an explicit specialization for such a member, the member declaration shall be preceded by a template<> for each enclosing class template that is explicitly specialized.
This is curious, because paragraph 3 only allows explicit specialization of members of implicitly-instantiated class specializations, not explicit specializations. Furthermore, paragraph 4 says,
Definitions of members of an explicitly specialized class are defined in the same manner as members of normal classes, and not using the explicit specialization syntax.
Paragraph 18 provides a clue for resolving the apparent contradiction:
In an explicit specialization declaration for a member of a class template or a member template that appears in namespace scope, the member template and some of its enclosing class templates may remain unspecialized, except that the declaration shall not explicitly specialize a class member template if its enclosing class templates are not explicitly specialized as well. In such explicit specialization declaration, the keyword template followed by a template-parameter-list shall be provided instead of the template<> preceding the explicit specialization declaration of the member.
It appears from this and the following example that the phrase “explicitly specialized” in paragraphs 17 and 18, when referring to enclosing class templates, does not mean that explicit specializations have been declared for them but that their names in the qualified-id are followed by template argument lists. This terminology is confusing and should be changed.
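For illustration only (a sketch in the spirit of the examples in 14.7.3, not quoted from them): in the final declaration below, both enclosing class templates are “explicitly specialized” in the paragraph 17 sense merely because their names are followed by template argument lists; no explicit specialization of A<int> itself need ever have been declared.

template <class T1> struct A {
  template <class T2> struct B {
    void mf();
  };
};

// One template<> for each enclosing class template named with
// template arguments:
template <> template <>
void A<int>::B<double>::mf() { /* ... */ }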
Proposed resolution (October, 2005):
Change 14.7.3 [temp.expl.spec] paragraph 17 as indicated:
A member or a member template may be nested within many enclosing class templates. In an explicit specialization for such a member, the member declaration shall be preceded by a template<> for each enclosing class template that is explicitly specialized specialization. [Example:...
Change 14.7.3 [temp.expl.spec] paragraph 18 as indicated:
In an explicit specialization declaration for a member of a class template or a member template that appears in namespace scope, the member template and some of its enclosing class templates may remain unspecialized, except that the declaration shall not explicitly specialize a class member template if its enclosing class templates are not explicitly specialized as well that is, the template-id naming the template may be composed of template parameter names rather than template-arguments. In For each unspecialized template in such an explicit specialization declaration, the keyword template followed by a template-parameter-list shall be provided instead of the template<> preceding the explicit specialization declaration of the member. The types of the template-parameters in the template-parameter-list shall be the same as those specified in the primary template definition. In such declarations, an unspecialized template-id shall not precede the name of a template specialization in the qualified-id naming the member. [Example:...
Notes from the April, 2006 meeting:
The revised wording describing “unspecialized” templates needs more work to ensure that the parameter names in the template-id are in the correct order; the distinction between template arguments and parameters is also probably not clear enough. It might be better to replace this paragraph completely and avoid the “unspecialized” wording altogether.
Proposed resolution (February, 2010):
Change 14.7.3 [temp.expl.spec] paragraph 17 as follows:
A member or a member template may be nested within many enclosing class templates. In an explicit specialization for such a member, the member declaration shall be preceded by a template<> for each enclosing class template that is explicitly specialized specialization. [Example:...
Change 14.7.3 [temp.expl.spec] paragraph 18 as follows:
In an explicit specialization declaration for a member of a class template or a member template that appears in namespace scope, the member template and some of its enclosing class templates may remain unspecialized, except that the declaration shall not explicitly specialize a class member template if its enclosing class templates are not explicitly specialized as well. In such explicit specialization declaration, the keyword template followed by a template-parameter-list shall be provided instead of the template<> preceding the explicit specialization declaration of the member. The types of the template-parameters in the template-parameter-list shall be the same as those specified in the primary template definition. that is, the corresponding template prefix may specify a template-parameter-list instead of template<> and the template-id naming the template be written using those template-parameters as template-arguments. In such a declaration, the number, kinds, and types of the template-parameters shall be the same as those specified in the primary template definition, and the template-parameters shall be named in the template-id in the same order that they appear in the template-parameter-list. An unspecialized template-id shall not precede the name of a template specialization in the qualified-id naming the member. [Example:...
There are certain constructs that are not covered by the existing categories of “type dependent” and “value dependent.” For example, the expression sizeof(sizeof(T())) is neither type-dependent nor value-dependent, but its validity depends on whether T can be value-constructed. We should be able to overload on such characteristics and select via deduction failure, but we need a term like “instantiation-dependent” to describe these cases in the Standard. The phrase “expression involving a template parameter” seems to come pretty close to capturing this idea.
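A sketch of the kind of overloading the issue envisions (the names are hypothetical, and whether deduction failure is actually guaranteed here is precisely what is in question):

template <class T> char probe(int (*)[sizeof(sizeof(T()))]);  // usable only if T() is valid?
template <class T> long probe(...);

struct NoDefault { NoDefault(int); };   // not default-constructible

// Intended outcome: probe<int>(0) selects the first overload, while
// probe<NoDefault>(0) falls back to the second via deduction failure.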
Notes from the November, 2010 meeting:
The CWG favored extending the concepts of “type-dependent” and “value-dependent” to cover these additional cases, rather than adding a new concept.
It is not clear whether the following example is well-formed or not:
template<class T> struct identity { typedef T type; };
template<class T, class C> void f(T C::*, typename identity<T>::type*) {}
struct X { void f() {} };
int main() {
  f(&X::f, 0);
}
The null pointer conversion required for the second parameter of f is not one of the ones permitted by 14.8.2.1 [temp.deduct.call] paragraph 4, but it's unclear whether that list should apply to parameters with nondeduced types or not. 14.8.1 [temp.arg.explicit] paragraph 6 is explicit that
Implicit conversions (Clause 4 [conv]) will be performed on a function argument to convert it to the type of the corresponding function parameter if the parameter type contains no template-parameters that participate in template argument deduction.
However, this statement appears in a section dealing with explicitly-specified template arguments, so its applicability to nondeduced contexts in general is not clear.
Implementations disagree on the handling of this example.
14.8.2.5 [temp.deduct.type] paragraph 22 describes how we cope with partial ordering between two function templates that differ because one has a function parameter pack while the other has a normal function parameter. However, this paragraph was meant to apply to template parameter packs as well, e.g., to help with partial ordering of class template partial specializations:
template <class T1, class ...Z> class S;                        // #1
template <class T1, class ...Z> class S<T1, const Z&...> {};    // #2
template <class T1, class T2>   class S<T1, const T2&> {};      // #3
S<int, const int&> s;   // both #2 and #3 match; #3 is more specialized
(See also issue 818.)
Proposed resolution (March, 2009):
Change 14.8.2.5 [temp.deduct.type] paragraphs 9-10 as follows (and add the example above to paragraph 9):
If P has a form that contains <T> or <i>, then each argument Pi of the respective template argument list of P is compared with the corresponding argument Ai of the corresponding template argument list of A. If the template argument list of P contains a pack expansion that is not the last template argument, the entire template argument list is a non-deduced context. If Pi is a pack expansion, then the pattern of Pi is compared with each remaining argument in the template argument list of A. Each comparison deduces template arguments for subsequent positions in the template parameter packs expanded by Pi. During partial ordering (14.8.2.4 [temp.deduct.partial]), if Ai was originally a pack expansion and Pi is not a pack expansion, or if P does not contain a template argument corresponding to Ai, argument deduction fails.
Similarly, if P has a form that contains (T), then each parameter type Pi of the respective parameter-type-list of P is compared with the corresponding parameter type Ai of the corresponding parameter-type-list of A. If the parameter-declaration corresponding to Pi is a function parameter pack, then the type of its declarator-id is compared with each remaining parameter type in the parameter-type-list of A. Each comparison deduces template arguments for subsequent positions in the template parameter packs expanded by the function parameter pack. During partial ordering (14.8.2.4 [temp.deduct.partial]), if Ai was originally a function parameter pack and Pi is not a function parameter pack, or if P does not contain a function parameter type corresponding to Ai, argument deduction fails. [Note: A function parameter pack can only occur at the end of a parameter-declaration-list (8.3.5 [dcl.fct]). —end note]
Is the following well-formed?
auto concept HasDestructor<typename T> {
  T::~T();
}
concept_map HasDestructor<int&> { }
According to _N2914_.14.10.2.1 [concept.map.fct] paragraph 4, the destructor requirement in the concept map results in an expression x.~X(), where X is the type int&. According to 5.2.4 [expr.pseudo], this expression is ill-formed because the object type and the type-name must be the same type, but the object type cannot be a reference type (references are dropped from types used in expressions, 5 [expr] paragraph 5).
It is not clear whether this should be addressed by changing 5.2.4 [expr.pseudo] or _N2914_.14.10.2.1 [concept.map.fct].
The C++ Standard uses the phrase “indeterminate value” without defining it. C99 defines it as “either an unspecified value or a trap representation.” Should C++ follow suit?
In addition, 4.1 [conv.lval] paragraph 1 says that applying the lvalue-to-rvalue conversion to an “object [that] is uninitialized” results in undefined behavior; this should be rephrased in terms of an object with an indeterminate value.
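A minimal illustration of the wording in question:

int x;        // x holds an indeterminate value
int y = x;    // lvalue-to-rvalue conversion applied to an uninitialized
              // object: undefined behavior per 4.1 [conv.lval] paragraph 1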
The definition of “argument” does not seem to cover many assumed use cases, and we believe that is not intentional. There should be answers to questions such as: Are lambda-captures arguments? Are the type names in a throw-spec arguments? Are the operands of casts, typeid, alignof, alignas, decltype, and sizeof arguments? Why is arg in x[arg] not an argument, while the value forwarded to operator[]() is? Why does the definition not apply to operators, whose call points are not bounded by parentheses, and does the same hold for copy initialization and conversions? What are deduced template “arguments” and “default arguments”? Can attributes have arguments? What about concepts, requires clauses, and concept_map instantiations? What about user-defined literals, where parentheses are not used?
According to 1.4 [intro.compliance] paragraph 7,
A freestanding implementation is one in which execution may take place without the benefit of an operating system, and has an implementation-defined set of libraries that includes certain language-support libraries (17.6.1.3 [compliance]).
This definition links two relatively separate topics: the lack of an operating system and the minimal set of libraries. Furthermore, 3.6.1 [basic.start.main] paragraph 1 says:
[Note: in a freestanding environment, start-up and termination is implementation-defined; start-up contains the execution of constructors for objects of namespace scope with static storage duration; termination contains the execution of destructors for objects with static storage duration. —end note]
It would be helpful if the two characteristics (lack of an operating system and restricted set of libraries) were named separately and if these statements were clarified to identify exactly what is implementation-defined.
Notes from the October, 2009 meeting:
The CWG felt that it needed a specific proposal in a paper before attempting to resolve this issue.
There should be a list of incompatibilities between the current and previous Standards, as in ISO/IEC TR 10176 4.1.1 paragraph 9.
(See document N2733 for an initial list of this information.)
Does the Standard require that an uninitialized auto variable have a stable (albeit indeterminate) value? That is, does the Standard require that the following function return true?
bool f() {
  unsigned char i;    // not initialized
  unsigned char j = i;
  unsigned char k = i;
  return j == k;      // true iff "i" is stable
}

3.9.1 [basic.fundamental] paragraph 1 requires that uninitialized unsigned char variables have a valid value, so the initializations of j and k are well-formed and required not to trap. The question here is whether the value of i is allowed to change between those initializations.
Mike Miller: 1.9 [intro.execution] paragraph 10 says,
An instance of each object with automatic storage duration (3.7.3 [basic.stc.auto]) is associated with each entry into its block. Such an object exists and retains its last-stored value during the execution of the block and while the block is suspended...

I think that the most reasonable way to read this is that the only thing that is allowed to change the value of an automatic (non-volatile?) value is a "store" operation in the abstract machine. There are no "store" operations to i between the initializations of j and k, so it must retain its original (indeterminate but valid) value, and the result of the program is well-defined.
The quibble, of course, is whether the wording "last-stored value" should be applied to a "never-stored" value. I think so, but others might differ.
Tom Plum: 7.1.6.1 [dcl.type.cv] paragraph 8 says,
[Note: volatile is a hint to the implementation to avoid aggressive optimization involving the object because the value of the object might be changed by means undetectable by an implementation. See 1.9 [intro.execution] for detailed semantics. In general, the semantics of volatile are intended to be the same in C++ as they are in C. ]

From this I would infer that non-volatile means "shall not be changed by means undetectable by an implementation"; that the compiler is entitled to safely cache accesses to non-volatile objects if it can prove that no "detectable" means can modify them; and that therefore i shall maintain the same value during the example above.
Nathan Myers: This also has practical code-generation consequences. If the uninitialized auto variable lives in a register, and its value is really unspecified, then until it is initialized that register can be used as a temporary. Each time it's "looked at" the variable has the value that last washed up in that register. After it's initialized it's "live" and cannot be used as a temporary any more, and your register pressure goes up a notch. Fixing the uninit'd value would make it "live" the first time it is (or might be) looked at, instead.
Mike Ball: I agree with this. I also believe that it was certainly never my intent that an uninitialized variable be stable, and I would have strongly argued against such a provision. Nathan has well stated the case. And I am quite certain that it would be disastrous for optimizers. To ensure it, the frontend would have to generate an initializer, because optimizers track not only the lifetimes of variables, but the lifetimes of values assigned to those variables. This would put C++ at a significant performance disadvantage compared to other languages. Not even Java went this route. Guaranteeing defined behavior for a very special case of a generally undefined operation seems unnecessary.
According to 1.9 [intro.execution] paragraph 14, “sequenced before” is a relation between “evaluations.” However, 3.6.3 [basic.start.term] paragraph 3 says,
If the completion of the initialization of a non-local object with static storage duration is sequenced before a call to std::atexit (see <cstdlib>, 18.5 [support.start.term]), the call to the function passed to std::atexit is sequenced before the call to the destructor for the object. If a call to std::atexit is sequenced before the completion of the initialization of a non-local object with static storage duration, the call to the destructor for the object is sequenced before the call to the function passed to std::atexit. If a call to std::atexit is sequenced before another call to std::atexit, the call to the function passed to the second std::atexit call is sequenced before the call to the function passed to the first std::atexit call.
Except for the calls to std::atexit, these events do not correspond to “evaluation” of expressions that appear in the program. If the “sequenced before” relation is to be applied to them, a more comprehensive definition is needed.
According to 2.2 [lex.phases] paragraph 1, in translation phase 1,
Any source file character not in the basic source character set (2.3 [lex.charset]) is replaced by the universal-character-name that designates that character.
If a character that is not in the basic character set is preceded by a backslash character, for example
"\á"
the result is equivalent to
"\\u00e1"
that is, a backslash character followed by the spelling of the universal-character-name. This is different from the result in C99, which accepts characters from the extended source character set without replacing them with universal-character-names.
2.12 [lex.key] paragraph 2 says,
Furthermore, the alternative representations shown in Table 4 for certain operators and punctuators (2.6 [lex.digraph]) are reserved and shall not be used otherwise:
Also, 2.6 [lex.digraph] paragraph 2 says,
In all respects of the language, each alternative token behaves the same, respectively, as its primary token, except for its spelling.
It is not clear whether the following example is well-formed:
#define STR2(x) #x
#define STR(x) STR2(x)
int main() {
  return sizeof STR('\0'bitor 0) - sizeof STR('\0'bitor 0);
}
In this example, bitor is not the | operator but the identifier in a user-defined-character-literal. Does this violate the restrictions of 2.12 [lex.key] and 2.6 [lex.digraph]?
According to 2.14.3 [lex.ccon] paragraph 1,
A character literal that does not begin with u, U, or L is an ordinary character literal, also referred to as a narrow-character literal. An ordinary character literal that contains a single c-char has type char, with value equal to the numerical value of the encoding of the c-char in the execution character set.
However, the definition of c-char includes as one possibility a universal-character-name. The value of a universal-character-name cannot, in general, be represented as a char, so this specification is impossible to satisfy.
(See also issue 411 for related questions.)
There is no limit placed on the number of c-chars in a multicharacter literal or a wide-character literal containing multiple c-chars, either in 2.14.3 [lex.ccon] paragraphs 1-2 or in Annex B [implimits]. Presumably this means that an implementation must accept arbitrarily long literals.
An alternative approach might be to state that these literals are conditionally supported with implementation-defined semantics, allowing an implementation to impose a documented limit that makes sense for the particular architecture.
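For reference, the kinds of literals in question (both have implementation-defined values under 2.14.3 [lex.ccon]):

int     mc = 'abcd';   // multicharacter literal: type int, implementation-defined value
wchar_t wc = L'ab';    // wide-character literal containing multiple c-chars:
                       // implementation-defined value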
2.14.5 [lex.string] paragraph 5 reads
Escape sequences and universal-character-names in string literals have the same meaning as in character literals, except that the single quote ' is representable either by itself or by the escape sequence \', and the double quote " shall be preceded by a \. In a narrow string literal, a universal-character-name may map to more than one char element due to multibyte encoding.
The first sentence refers us to 2.14.3 [lex.ccon], where we read in the first paragraph that "An ordinary character literal that contains a single c-char has type char [...]." Since the grammar shows that a universal-character-name is a c-char, something like '\u1234' must have type char (and thus be a single char element); in paragraph 5, we read that "A universal-character-name is translated to the encoding, in the execution character set, of the character named. If there is no such encoding, the universal-character-name is translated to an implementation-defined encoding."
This is in obvious contradiction with the second sentence. In addition, I'm not really clear what is supposed to happen in the case where the execution (narrow-)character set is UTF-8. Consider the character \u0153 (the oe in the French word oeuvre). Should '\u0153' be a char, with an "error" value, say '?' (in conformance with the requirement that it be a single char), or an int, with the two char values 0xC5, 0x93, in an implementation-defined order (in conformance with the requirement that a character representable in the execution character set be represented)? Supposing the former, should "\u0153" be the equivalent of "?" (in conformance with the first sentence), or "\xC5\x93" (in conformance with the second)?
Notes from October 2003 meeting:
We decided we should forward this to the C committee and let them resolve it. Sent via e-mail to John Benito on November 14, 2003.
Reply from John Benito:
I talked this over with the C project editor, we believe this was handled by the C committee before publication of the current standard.
WG14 decided there needed to be a more restrictive rule for one-to-one mappings: rather than saying "a single c-char" as C++ does, the C standard says "a single character that maps to a single-byte execution character"; WG14 fully expect some (if not many or even most) UCNs to map to multiple characters.
Because of the fundamental differences between C and C++ character types, I am not sure the C committee is qualified to answer this satisfactorily for WG21. WG14 is willing to review any decision reached for compatibility.
I hope this helps.
(See also issue 912 for a related question.)
User-defined literals should not be part of C++0x unless they have implementation experience.
2.11 [lex.name] paragraph 3 says,
In addition, some identifiers are reserved for use by C++ implementations and standard libraries (17.6.3.3.2 [global.names]) and shall not be used otherwise; no diagnostic is required.
There is no corresponding mention in 2.14.8 [lex.ext] of the restrictions on user-defined literal suffixes in 17.6.3.3.5 [usrlit.suffix]. Furthermore, considering the likelihood of adding hexadecimal floating-point literals, whose syntax overlaps that of user-defined literals except for that restriction, it would be a good idea to require a diagnostic for a violation of that rule.
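A sketch of the restriction in question (the suffix names are illustrative):

long double operator"" _mi(long double);   // OK: the suffix begins with an underscore
long double operator"" mi(long double);    // uses an identifier reserved by 17.6.3.3.5
                                           // [usrlit.suffix]; under the current wording
                                           // no diagnostic is required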
Consider the following complete program:
void f();
template<typename T> void g() {
  f();
}
int main() { }
Must f() be defined to make this program well-formed? The current wording of 3.2 [basic.def.odr] does not make any special provision for expressions that appear only in uninstantiated template definitions.
The current description of unqualified name lookup in 3.4.1 [basic.lookup.unqual] paragraph 8 does not correctly handle complex cases of nesting. The Standard currently reads,
A name used in the definition of a function that is a member function (9.3) of a class X shall be declared in one of the following ways:
- before its use in the block in which it is used or in an enclosing block (6.3), or
- shall be a member of class X or be a member of a base class of X (10.2), or
- if X is a nested class of class Y (9.7), shall be a member of Y, or shall be a member of a base class of Y (this lookup applies in turn to Y's enclosing classes, starting with the innermost enclosing class), or
- if X is a local class (9.8) or is a nested class of a local class, before the definition of class X in a block enclosing the definition of class X, or
- if X is a member of namespace N, or is a nested class of a class that is a member of N, or is a local class or nested class within a local class of a function that is a member of N, before the member function definition, in namespace N or in one of N's enclosing namespaces.

In particular, this formulation does not handle the following example:
struct outer {
  static int i;
  struct inner {
    void f() {
      struct local {
        void g() {
          i = 5;
        }
      };
    }
  };
};

Here the reference to i is from a member function of a local class of a member function of a nested class. Nothing in the rules allows outer::i to be found, although intuitively it should be found.
A more comprehensive formulation is needed that allows traversal of any combination of blocks, local classes, and nested classes. Similarly, the final bullet needs to be augmented so that a function need not be a (direct) member of a namespace to allow searching that namespace when the reference is from a member function of a class local to that function. That is, the current rules do not allow the following example:
int j;   // global namespace
struct S {
  void f() {
    struct local2 {
      void g() {
        j = 5;
      }
    };
  }
};
The description of name lookup in the parameter-declaration-clause of member functions in 3.4.1 [basic.lookup.unqual] paragraphs 7-8 is flawed in at least two regards.
First, both paragraphs 7 and 8 apply to the parameter-declaration-clause of a member function definition and give different rules for the lookup. Paragraph 7 applies to names "used in the definition of a class X outside of a member function body...," which includes the parameter-declaration-clause of a member function definition, while paragraph 8 applies to names following the function's declarator-id (see the proposed resolution of issue 41), including the parameter-declaration-clause.
Second, paragraph 8 appears to apply to the type names used in the parameter-declaration-clause of a member function defined inside the class definition. That is, it appears to allow the following code, which was not the intent of the Committee:
struct S {
  void f(I i) { }
  typedef int I;
};
There seems to be some confusion in the Standard regarding the relationship between 3.4.1 [basic.lookup.unqual] (Unqualified name lookup) and 3.4.2 [basic.lookup.argdep] (Argument-dependent lookup). For example, 3.4.1 [basic.lookup.unqual] paragraph 3 says,
The lookup for an unqualified name used as the postfix-expression of a function call is described in 3.4.2 [basic.lookup.argdep].
In other words, nothing in 3.4.1 [basic.lookup.unqual] applies to function names; the entire lookup is described in 3.4.2 [basic.lookup.argdep].
3.4.2 [basic.lookup.argdep] does not appear to share this view of its responsibility. The closest it comes is in 3.4.2 [basic.lookup.argdep] paragraph 2a:
...the set of declarations found by the lookup of the function name is the union of the set of declarations found using ordinary unqualified lookup and the set of declarations found in the namespaces and classes associated with the argument types.
Presumably, "ordinary unqualified lookup" is a reference to the processing described in 3.4.1 [basic.lookup.unqual], but, as noted above, 3.4.1 [basic.lookup.unqual] explicitly precludes applying that processing to function names. The details of "ordinary unqualified lookup" of function names are not described anywhere.
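A minimal example of the two components involved (hypothetical names):

namespace N {
  struct S {};
  void f(S);
}

void g() {
  N::S s;
  f(s);   // ordinary unqualified lookup finds no f here; argument-dependent
          // lookup (3.4.2) finds N::f through the associated namespace of
          // the argument type
}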
The other clauses that reference 3.4.2 [basic.lookup.argdep], clauses 13 [over] and 14 [temp], are split over the question of the relationship between 3.4.1 [basic.lookup.unqual] and 3.4.2 [basic.lookup.argdep]. 13.3.1.1.1 [over.call.func] paragraph 3, for instance, says
The name is looked up in the context of the function call following the normal rules for name lookup in function calls (3.4.2 [basic.lookup.argdep]).
I.e., this reference assumes that 3.4.2 [basic.lookup.argdep] is self-contained. The same is true of 13.3.1.2 [over.match.oper] paragraph 3, second bullet:
The set of non-member candidates is the result of the unqualified lookup of operator@ in the context of the expression according to the usual rules for name lookup in unqualified function calls (3.4.2 [basic.lookup.argdep]), except that all member functions are ignored.
On the other hand, however, 14.6.4.2 [temp.dep.candidate] paragraph 1 explicitly assumes that 3.4.1 [basic.lookup.unqual] and 3.4.2 [basic.lookup.argdep] are both involved in function name lookup and do different things:
For a function call that depends on a template parameter, if the function name is an unqualified-id but not a template-id, the candidate functions are found using the usual lookup rules (3.4.1 [basic.lookup.unqual], 3.4.2 [basic.lookup.argdep]) except that:
- For the part of the lookup using unqualified name lookup (3.4.1 [basic.lookup.unqual]), only function declarations with external linkage from the template definition context are found.
- For the part of the lookup using associated namespaces (3.4.2 [basic.lookup.argdep]), only function declarations with external linkage found in either the template definition context or the template instantiation context are found.
Suggested resolution:
Change 3.4.1 [basic.lookup.unqual] paragraph 1 from
...name lookup ends as soon as a declaration is found for the name.
to
...name lookup ends with the first scope containing one or more declarations of the name.
Change the first sentence of 3.4.1 [basic.lookup.unqual] paragraph 3 from
The lookup for an unqualified name used as the postfix-expression of a function call is described in 3.4.2 [basic.lookup.argdep].
to
An unqualified name used as the postfix-expression of a function call is looked up as described below. In addition, argument-dependent lookup (3.4.2 [basic.lookup.argdep]) is performed on this name to complete the resulting set of declarations.
Although 3.3.9 [basic.scope.temp] now describes the scope of a template parameter, the description of unqualified name lookup in 3.4.1 [basic.lookup.unqual] does not cover uses of template parameter names. The note in 3.4.1 [basic.lookup.unqual] paragraph 16 says,
the rules for name lookup in template definitions are described in 14.6 [temp.res].
but the rules there cover dependent and non-dependent names, not template parameters themselves.
The last bullet of the second paragraph of section 3.4.2 [basic.lookup.argdep] says that:
If T is a template-id, its associated namespaces and classes are the namespace in which the template is defined; for member templates, the member template's class; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which any template template arguments are defined; and the classes in which any member templates used as template template arguments are defined.
The first problem with this wording is that it is misleading, since a function argument cannot have a template-id as its type. The bullet should be speaking about template specializations instead.
The second problem is owing to the use of the word "defined" in the phrases "are the namespace in which the template is defined", "in which any template template arguments are defined", and "as template template arguments are defined". The bullet should use the word "declared" instead, since scenarios like the one below are possible:
namespace A {
  template<class T> struct test {
    template<class U> struct mem_templ { };
  };

  // declaration in namespace 'A'
  template<> template<> struct test<int>::mem_templ<int>;

  void foo(test<int>::mem_templ<int>&) { }
}

// definition in the global namespace
template<> template<> struct A::test<int>::mem_templ<int> { };

int main() {
  A::test<int>::mem_templ<int> inst;
  // According to the current definition of 3.4.2, foo is not found.
  foo(inst);
}
In addition, the bullet doesn't make it clear whether a T which is a class template specialization must also be treated as a class type, i.e., whether the contents of the second bullet of the second paragraph of section 3.4.2 [basic.lookup.argdep],

- If T is a class type (including unions), its associated classes are: the class itself; the class of which it is a member, if any; and its direct and indirect base classes. Its associated namespaces are the namespaces in which its associated classes are defined. [This wording is as updated by core issue 90.]

must apply to it or not. The same stands for a T which is a function template specialization. This detail can make a difference in an example such as the one below:
template<class T> struct slist_iterator {
  friend bool operator==(const slist_iterator& x, const slist_iterator& y) {
    return true;
  }
};

template<class T> struct slist {
  typedef slist_iterator<T> iterator;
  iterator begin() { return iterator(); }
  iterator end() { return iterator(); }
};

int main() {
  slist<int> my_list;
  slist<int>::iterator mi1 = my_list.begin(), mi2 = my_list.end();

  // Must the friend function declaration
  //   bool operator==(const slist_iterator<int>&, const slist_iterator<int>&);
  // be found through argument-dependent lookup?  I.e., is the specialization
  // 'slist<int>' an associated class of the arguments 'mi1' and 'mi2'?  If we
  // apply only the contents of the last bullet of 3.4.2/2, then the type
  // 'slist_iterator<int>' has no associated classes and the friend declaration
  // is not found.
  mi1 == mi2;
}
Suggested resolution:
Replace the last bullet of the second paragraph of section 3.4.2 [basic.lookup.argdep]
with
- If T is a template-id, its associated namespaces and classes are the namespace in which the template is defined; for member templates, the member template's class; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which any template template arguments are defined; and the classes in which any member templates used as template template arguments are defined.
- If T is a class template specialization, its associated namespaces and classes are those associated with T when T is regarded as a class type; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which the primary templates making template template arguments are declared; and the classes in which any primary member templates used as template template arguments are declared.
- If T is a function template specialization, its associated namespaces and classes are those associated with T when T is regarded as a function type; the namespaces and classes associated with the types of the template arguments provided for template type parameters (excluding template template parameters); the namespaces in which the primary templates making template template arguments are declared; and the classes in which any primary member templates used as template template arguments are declared.
Replace the second bullet of the second paragraph of section 3.4.2 [basic.lookup.argdep]
with
- If T is a class type (including unions), its associated classes are: the class itself; the class of which it is a member, if any; and its direct and indirect base classes. Its associated namespaces are the namespaces in which its associated classes are defined.
- If T is a class type (including unions), its associated classes are: the class itself; the class of which it is a member, if any; and its direct and indirect base classes. Its associated namespaces are the namespaces in which its associated classes are declared [Note: in case of any of the associated classes being a class template specialization, its associated namespace is actually the namespace containing the declaration of the primary class template of the class template specialization].
Both 3.4.3.1 [class.qual] and 3.4.3.2 [namespace.qual] specify that some lookups are to be performed “in the context of the entire postfix-expression,” ignoring the fact that qualified-ids can appear outside of expressions.
It was suggested in document J16/05-0156 = WG21 N1896 that these uses be changed to “the context in which the qualified-id occurs,” but it isn't clear that this formulation adequately covers all the places a qualified-id can occur.
It is unclear to what extent entities without names match across translation units. For example,
struct S {
  int :2;
  enum { a, b, c } x;
  static class {} *p;
};
If this declaration appears in multiple translation units, are all these members "the same" in each declaration?
A similar question can be asked about non-member declarations:
// Translation unit 1:
extern enum { d, e, f } y;

// Translation unit 2:
extern enum { d, e, f } y;

// Translation unit 3:
enum { d, e, f } y;
Is this valid C++? Is it valid C?
James Kanze: S::p cannot be defined, because to do so requires a type specifier and the type cannot be named. ::y is valid C because C only requires compatible, not identical, types. In C++, it appears that there is a new type in each declaration, so it would not be valid. This differs from S::x because the unnamed type is part of a named type — but I don't know where or if the Standard says that.
John Max Skaller: It's not valid C++, because the type is a synthesised, unique name for the enumeration type which differs across translation units, as if:
extern enum _synth1 { d, e, f } y;
...
extern enum _synth2 { d, e, f } y;
had been written.
However, within a class, the ODR implies the types are the same:
class X { enum { d } y; };
in two translation units ensures that the type of member y is the same: the two X's obey the ODR and so denote the same class, and it follows that there's only one member y and one type that it has.
(See also issues 132 and 216.)
The standard says that an unnamed class or enum definition can be given a "name for linkage purposes" through a typedef. E.g.,
typedef enum {} E; extern E *p;
can appear in multiple translation units.
How about the following combination?
// Translation unit 1:
struct S;
extern S *q;

// Translation unit 2:
typedef struct {} S;
extern S *q;
Is this valid C++?
Also, if the answer is "yes", consider the following slight variant:
// Translation unit 1:
struct S {};    // <<-- class has definition
extern S *q;

// Translation unit 2:
typedef struct {} S;
extern S *q;
Is this a violation of the ODR because two definitions of type S consist of differing token sequences?
The following declarations are allowed within a translation unit:
struct S; enum { S };
However, 3.5 [basic.link] paragraph 9 seems to say these two declarations cannot appear in two different translation units. That also would mean that the inclusion of a header containing the above in two different translation units is not valid C++.
I suspect this is an oversight and that users should be allowed to have the declarations above appear in different translation units. (It is a fairly common thing to do, I think.)
Mike Miller: I think you meant "enum E { S };" -- enumerators only have external linkage if the enumeration does (3.5 [basic.link] paragraph 4), and 3.5 [basic.link] paragraph 9 only applies to entities with external linkage.
I don't remember why enumerators were given linkage; I don't think it's necessary for mangling non-type template arguments. In any event, I can't think why cross-TU name collisions between enumerators and other entities would cause a problem, so I guess a change here would be okay. I can think of three changes that would have that effect:
Daveed Vandevoorde: I don't think any of these are sufficient in the sense that the problem isn't limited to enumerators. E.g.:
struct X;
extern void X();

shouldn't create cross-TU collisions either.
Mike Miller: So you're saying that cross-TU collisions should only be prohibited if both names denote entities of the same kind (both functions, both objects, both types, etc.), or if they are both references (regardless of what they refer to, presumably)?
Daveed Vandevoorde: Not exactly. Instead, I'm saying that if two entities (with external linkage) can coexist when they're both declared in the same translation unit (TU), then they should also be allowed to coexist when they're declared in two different translation units.
For example:
int i;
void i();   // Error

This is an error within a TU, so I don't see a reason to make it valid across TUs.
However, "tag names" (class/struct/union/enum) can sometimes coexist with identically named entities (variables, functions & enumerators, but not namespaces, templates or type names).
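For example, the kind of coexistence meant here (within a single translation unit; the names are hypothetical):

struct X { };    // tag name
int X;           // OK: the variable hides the class name, which remains
                 // accessible as 'struct X'
struct X* px;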
C99 is clear that, among the forms of the definition of main that an implementation is required to accept, the parameter names and the exact syntactic form of the types can vary. Although it is reasonable to assume that a C++ implementation would accept a definition like
int main(int foo, char** bar) { /* ... */ }
instead of the canonical
int main(int argc, char* argv[]) { /* ... */ }
it might be a good idea to clarify the intent using wording similar to C99's.
Is a compiler allowed to interleave constructor calls when performing dynamic initialization of nonlocal objects? What I mean by interleaving is: beginning to execute a particular constructor, then going off and doing something else, then going back to the original constructor. I can't find anything explicit about this in clause 3.6.2 [basic.start.init].
I'll present a few different examples, some of which get a bit wild. But a lot of what this comes down to is exactly what the standard means when it talks about the order of initialization. If it says that some object x must be initialized before a particular event takes place, does that mean that x's constructor must be entered before that event, or does it mean that it must be exited before that event? If object x must be initialized before object y, does that mean that x's constructor must exit before y's constructor is entered?
(The answer to that question might just be common sense, but I couldn't find an answer in clause 3.6.2 [basic.start.init]. Actually, when I read 3.6.2 [basic.start.init] carefully, I find there are a lot of things I took for granted that aren't there.)
OK, so a few specific scenarios.
<runtime gunk>
<Enter A's constructor>
<Enter f>
<runtime gunk>
<Enter B's constructor>
<Enter f>
<Leave f>
<Leave B's constructor>
<Leave f>
<Leave A's constructor>

The implication of a 'yes' answer for users is that any function called by a constructor, directly or indirectly, must be reentrant.
At this point, you might be thinking we could avoid all of this nonsense by removing compilers' freedom to defer initialization until after the beginning of main(). I'd resist that, for two reasons. First, it would be a huge change to make after the standard has been out. Second, that freedom is necessary if we want to have support for dynamic libraries. I realize we don't yet say anything about dynamic libraries, but I'd hate to make decisions that would make such support even harder.
3.6.3 [basic.start.term] paragraph 2 says,
If a function contains a local object of static storage duration that has been destroyed and the function is called during the destruction of an object with static storage duration, the program has undefined behavior if the flow of control passes through the definition of the previously destroyed local object.
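A minimal illustration of the situation described (the names are hypothetical):

struct A { ~A() { } };

void f() {
  static A a;    // local object of static storage duration
}

struct B {
  ~B() { f(); }  // runs during the destruction of objects with static storage duration
};

B b;

int main() {
  f();   // 'a' is constructed here, after 'b', so it is destroyed before 'b';
         // when ~B() later calls f(), control passes through the definition
         // of the already-destroyed local object: undefined behavior
}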
I would like to turn this behavior from undefined to well-defined behavior for the purpose of achieving a graceful shutdown, especially in a multi-threaded world.
Background: Alexandrescu describes the “phoenix singleton” in Modern C++ Design. This is a class used as a function local static, that will reconstruct itself, and reapply itself to the atexit chain, if the program attempts to use it after it is destructed in the atexit chain. It achieves this by setting a “destructed flag” in its own state in its destructor. If the object is later accessed (and a member function is called on it), the member function notes the state of the “destructed flag” and does the reconstruction dance. The phoenix singleton pattern was designed to address issues only in single-threaded code where accesses among static objects can have a non-scoped pattern. When we throw in multi-threading, and the possibility that threads can be running after main returns, the chances of accessing a destroyed static significantly increase.
The very least that I would like to see happen is to standardize what I believe is existing practice: When an object is destroyed in the atexit chain, the memory the object occupied is left in whatever state the destructor put it in. If this can only be reliably done for objects with standard layout, that would be an acceptable compromise. This would allow objects to set “I'm destructed” flags in their state and then do something well-defined if accessed, such as throw an exception.
A possible refinement of this idea is to have the compiler set up a 3-state flag around function-local statics instead of the current 2-state flag:
We have the first two states today. We might choose to add the third state, and if execution passes over a function-local static with “destroyed” state, an exception could be thrown. This would mean that we would not have to guarantee memory stability in destroyed objects of static duration.
This refinement would break phoenix singletons, and is not required for the ~mutex()/~condition() I've described and prototyped. But it might make it easier for Joe Coder to apply this kind of guarantee to his own types.
There are several problems with 3.7 [basic.stc]:
3.7 [basic.stc] paragraph 2 says that "Static and automatic storage durations are associated with objects introduced by declarations (3.1 [basic.def]) and implicitly created by the implementation (12.2 [class.temporary])."
In fact, objects "implicitly created by the implementation" are the temporaries described in (12.2 [class.temporary]), and have neither static nor automatic storage duration, but a totally different duration, described in 12.2 [class.temporary].
3.7 [basic.stc] uses the expression "local object" in several places, without ever defining it. Presumably, what is meant is "an object declared at block scope", but this should be said explicitly.
In a recent discussion in comp.lang.c++.moderated, one poster interpreted "local objects" as including temporaries. This would require them to live until the end of the block in which they are created, which contradicts 12.2 [class.temporary]. If temporaries are covered by this section, as the statement in 3.7 [basic.stc] seems to suggest, and they aren't local objects, then they must have static storage duration, which isn't right either.
I propose adding a fourth storage duration, temporary storage duration, to the list after 3.7 [basic.stc] paragraph 1:
And rewriting the second paragraph of this section as follows:
Temporary storage duration is associated with objects implicitly created by the implementation, and is described in 12.2 [class.temporary]. Static and automatic storage durations are associated with objects defined by declarations; in the following, an object defined by a declaration with block scope is a local object. The dynamic storage duration is associated with objects created by the operator new.
Steve Adamczyk: There may well be an issue here, but one should bear in mind the difference between storage duration and object lifetime. As far as I can see, there is no particular problem with temporaries having automatic or static storage duration, as appropriate. The point of 12.2 [class.temporary] is that they have an unusual object lifetime.
Notes from October 2002 meeting:
It might be desirable to shorten the storage duration of temporaries to allow reuse of them. The as-if rule allows some reuse, but such reuse requires analysis, including noting whether the addresses of such temporaries have been taken.
The global allocation functions are implicitly declared in every translation unit with exception-specifications (3.7.4 [basic.stc.dynamic] paragraph 2). It is not clear what should happen if a replacement allocation function is declared without an exception-specification. Is that a conflict with the implicitly-declared function (as it would be with explicitly-declared functions, and presumably is if the <new> header is included)? Or does the new declaration replace the implicit one, including the lack of an exception-specification? Or does the implicit declaration prevail? (Regardless of the exception-specification or lack thereof, it is presumably undefined behavior for an allocation function to exit with an exception that cannot be caught by a handler of type std::bad_alloc (3.7.4.1 [basic.stc.dynamic.allocation] paragraph 3).)
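The kind of replacement declaration in question looks like this (a sketch; whether the missing exception-specification conflicts with the implicit declaration is the open question):

#include <cstdlib>
#include <new>

// Replacement global allocation function declared without an exception-specification.
void* operator new(std::size_t n)
{
  if (void* p = std::malloc(n ? n : 1))
    return p;
  throw std::bad_alloc();
}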
3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 4 mentions that the effect of using an invalid pointer value is undefined. However, the standard never says what it means to 'use' a value.
There are a number of possible interpretations, but it appears that each of them leads to undesired conclusions:
For example, one reading would turn the code

int *x = new int(0);
delete x;
x = 0;

into undefined behaviour. As this is a common idiom, this is clearly undesirable.
A similar reading would turn the code

int *x = new int(0);
delete x;
x->~int();

into undefined behaviour; according to 5.2.4 [expr.pseudo], the variable x is 'evaluated' as part of evaluating the pseudo destructor call. This, in turn, would mean that all containers (23 [containers]) of pointers show undefined behaviour, e.g. 23.3.4.3 [list.modifiers] requires invoking the destructor as part of the clear() method of the container.
If any other meaning was intended for 'using an expression', that meaning should be stated explicitly.
(See also issue 623.)
When an object is deleted, 3.7.4.2 [basic.stc.dynamic.deallocation] says that the deallocation “[renders] invalid all pointers referring to any part of the deallocated storage.” According to 3.9.2 [basic.compound] paragraph 3, a pointer whose address is one past the end of an array is considered to point to an unrelated object that happens to reside at that address. Does this need to be clarified to specify that the one-past-the-end pointer of an array is not invalidated by deleting the following object? (See also 5.3.5 [expr.delete] paragraph 4, which also mentions that the system deallocation function renders a pointer invalid.)
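A sketch of the situation the issue asks about (the assumption that the second allocation lands at the one-past-the-end address is hypothetical):

int* a   = new int[4];
int* end = a + 4;       // one past the end of the array; 3.9.2 [basic.compound]
                        // paragraph 3 treats it as pointing to an unrelated object
                        // that happens to reside at that address
int* b   = new int(0);  // suppose this object happens to be allocated there
delete b;               // question: does this deallocation render 'end' invalid?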
Consider
extern "C" int printf (const char *,...);
struct Base { Base(); };
struct Derived: virtual public Base {
  Derived() {;}
};
Derived d;
extern Derived& obj = d;
int i;
Base::Base() {
  if ((Base *) &obj)
    i = 4;
  printf ("i=%d\n", i);
}
int main() { return 0; }
12.7 [class.cdtor] paragraph 2 makes this valid, but 3.8 [basic.life] paragraph 5 implies that it isn't valid.
Steve Adamczyk: A second issue:
extern "C" int printf(const char *,...); struct A { virtual ~A(); int x; }; struct B : public virtual A { }; struct C : public B { C(int); }; struct D : public C { D(); }; int main() { D t; printf("passed\n");return 0; } A::~A() {} C::C(int) {} D::D() : C(this->x) {}
Core issue 52 almost, but not quite, says that in evaluating "this->x" you do a cast to the virtual base class A, which would be an error according to 12.7 [class.cdtor] paragraph 2 because the base class B constructor hasn't started yet. 5.2.5 [expr.ref] should be clarified to say that the cast does need to get done.
James Kanze submitted the same issue via comp.std.c++ on 11 July 2003:
Richard Smith: Nonsense. You can use "this" perfectly happily in a constructor, just be careful that (a) you're not using any members that are not fully initialised, and (b) if you're calling virtual functions you know exactly what you're doing.
In practice, and I think in intent, you are right. However, the standard makes some pretty stringent restrictions in 3.8 [basic.life]. To start with, it says (in paragraph 1):
The lifetime of an object is a runtime property of the object. The lifetime of an object of type T begins when:

- storage with the proper alignment and size for type T is obtained, and
- if T is a class type with a non-trivial constructor, the constructor call has COMPLETED.

(Emphasis added.) Then when we get down to paragraph 5, it says:

The lifetime of an object of type T ends when:

- if T is a class type with a non-trivial destructor, the destructor call STARTS, or
- the storage which the object occupies is reused or released.
Before the lifetime of an object has started but after the storage which the object will occupy has been allocated [which sounds to me like it would include in the constructor, given the text above] or, after the lifetime of an object has ended and before the storage which the object occupied is reused or released, any pointer that refers to the storage location where the object will be or was located may be used but only in limited ways. [...] If the object will be or was of a non-POD class type, the program has undefined behavior if:
[...]
- the pointer is implicitly converted to a pointer to a base class type, or [...]
I can't find any exceptions for the this pointer.
Note that calling a non-static function in the base class, or even constructing the base class in initializer list, involves an implicit conversion of this to a pointer to the base class. Thus undefined behavior. I'm sure that this wasn't the intent, but it would seem to be what this paragraph is saying.
Sent in by David Abrahams:
Yes, and to add to this tangent, 3.9.1 [basic.fundamental] paragraph 1 states "Plain char, signed char, and unsigned char are three distinct types." Strangely, 3.9 [basic.types] paragraph 2 talks about how "... the underlying bytes making up the object can be copied into an array of char or unsigned char. If the content of the array of char or unsigned char is copied back into the object, the object shall subsequently hold its original value." I guess there's no requirement that this copying work properly with signed chars!
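The guarantee under discussion is the round trip described in 3.9 [basic.types] paragraph 2; a minimal sketch (the type Pod is invented for the example):

    #include <cstring>

    struct Pod { int i; double d; };

    void roundtrip(Pod& obj) {
        unsigned char buf[sizeof(Pod)];        // guaranteed for unsigned char (and char)
        std::memcpy(buf, &obj, sizeof(Pod));   // copy the underlying bytes out
        std::memcpy(&obj, buf, sizeof(Pod));   // copy them back: obj holds its original value
        // Whether an array of signed char enjoys the same guarantee is the question here.
    }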
Notes from October 2002 meeting:
We should do whatever C99 does. 6.5p6 of the C99 standard says "array of character type", and "character type" includes signed char (6.2.5p15), and 6.5p7 says "character type". But see also 6.2.6.1p4, which mentions (only) an array of unsigned char.
Proposed resolution (April 2003):
Change 3.8 [basic.life] paragraph 5 bullet 3 from
to
Change 3.8 [basic.life] paragraph 6 bullet 3 from
to
Change the beginning of 3.9 [basic.types] paragraph 2 from
For any object (other than a base-class subobject) of POD type T, whether or not the object holds a valid value of type T, the underlying bytes (1.7 [intro.memory]) making up the object can be copied into an array of char or unsigned char.
to
For any object (other than a base-class subobject) of POD type T, whether or not the object holds a valid value of type T, the underlying bytes (1.7 [intro.memory]) making up the object can be copied into an array of byte-character type.
Add the indicated text to 3.9.1 [basic.fundamental] paragraph 1:
Objects declared as characters (char) shall be large enough to store any member of the implementation's basic character set. If a character from this set is stored in a character object, the integral value of that character object is equal to the value of the single character literal form of that character. It is implementation-defined whether a char object can hold negative values. Characters can be explicitly declared unsigned or signed. Plain char, signed char, and unsigned char are three distinct types, called the byte-character types. A char, a signed char, and an unsigned char occupy the same amount of storage and have the same alignment requirements (3.9 [basic.types]); that is, they have the same object representation. For byte-character types, all bits of the object representation participate in the value representation. For unsigned byte-character types, all possible bit patterns of the value representation represent numbers. These requirements do not hold for other types. In any particular implementation, a plain char object can take on either the same values as a signed char or an unsigned char; which one is implementation-defined.
Change 3.10 [basic.lval] paragraph 15 last bullet from
to
Notes from October 2003 meeting:
It appears that in C99 signed char may have padding bits but no trap representation, whereas in C++ signed char has no padding bits but may have -0. A memcpy in C++ would have to copy the array preserving the actual representation and not just the value.
March 2004: The liaisons to the C committee have been asked to tell us whether this change would introduce any unnecessary incompatibilities with C.
Notes from October 2004 meeting:
The C99 Standard appears to be inconsistent in its requirements. For example, 6.2.6.1 paragraph 4 says:
The value may be copied into an object of type unsigned char [n] (e.g., by memcpy); the resulting set of bytes is called the object representation of the value.
On the other hand, 6.5 paragraph 6 says,
If a value is copied into an object having no declared type using memcpy or memmove, or is copied as an array of character type, then the effective type of the modified object for that access and for subsequent accesses that do not modify the value is the effective type of the object from which the value is copied, if it has one.
Mike Miller will investigate further.
Proposed resolution (February, 2010):
Change 3.8 [basic.life] paragraph 5 bullet 4 as follows:
...The program has undefined behavior if:
...
the pointer is used as the operand of a static_cast (5.2.9 [expr.static.cast]) (except when the conversion is to cv void*, or to cv void* and subsequently to char*, or unsigned char* a pointer to a cv-qualified or cv-unqualified byte-character type (3.9.1 [basic.fundamental])), or
...
Change 3.8 [basic.life] paragraph 6 bullet 4 as follows:
...The program has undefined behavior if:
...
the lvalue is used as the operand of a static_cast (5.2.9 [expr.static.cast]) except when the conversion is ultimately to cv char& or cv unsigned char& a reference to a cv-qualified or cv-unqualified byte-character type (3.9.1 [basic.fundamental]) or an array thereof, or
...
Change 3.9 [basic.types] paragraph 2 as follows:
For any object (other than a base-class subobject) of trivially copyable type T, whether or not the object holds a valid value of type T, the underlying bytes (1.7 [intro.memory]) making up the object can be copied into an array of char or unsigned char a byte-character type (3.9.1 [basic.fundamental]).39 If the content of the that array of char or unsigned char is copied back into the object, the object shall subsequently hold its original value. [Example:...
Change 3.9.1 [basic.fundamental] paragraph 1 as follows:
...Characters can be explicitly declared unsigned or signed. Plain char, signed char, and unsigned char are three distinct types, called the byte-character types. A char, a signed char, and an unsigned char occupy the same amount of storage and have the same alignment requirements (3.11 [basic.align]); that is, they have the same object representation. For byte-character types, all bits of the object representation participate in the value representation. For unsigned character types unsigned char, all possible bit patterns of the value representation represent numbers...
Change 3.10 [basic.lval] paragraph 15 final bullet as follows:
If a program attempts to access the stored value of an object through an lvalue of other than one of the following types the behavior is undefined 52
...
a char or unsigned char byte-character type (3.9.1 [basic.fundamental]).
Change 3.11 [basic.align] paragraph 6 as follows:
The alignment requirement of a complete type can be queried using an alignof expression (5.3.6 [expr.alignof]). Furthermore, the byte-character types (3.9.1 [basic.fundamental]) char, signed char, and unsigned char shall have the weakest alignment requirement. [Note: this enables the byte-character types to be used as the underlying type for an aligned memory area (7.6.2 [dcl.align]). —end note]
Change 5.3.4 [expr.new] paragraph 10 as follows:
...For arrays of char and unsigned char a byte-character type (3.9.1 [basic.fundamental]), the difference between the result of the new-expression and the address returned by the allocation function shall be an integral multiple of the strictest fundamental alignment requirement (3.11 [basic.align]) of any object type whose size is no greater than the size of the array being created. [Note: Because allocation functions are assumed to return pointers to storage that is appropriately aligned for objects of any type with fundamental alignment, this constraint on array allocation overhead permits the common idiom of allocating byte-character arrays into which objects of other types will later be placed. —end note]
Notes from the March, 2010 meeting:
The CWG was not convinced that there was a need to change the existing specification at this time. Some were concerned that there might be implementation difficulties with giving signed char the requisite semantics; implementations for which that is true can currently make char equivalent to unsigned char and avoid those problems, but the suggested change would undermine that strategy.
In 3.9 [basic.types] paragraph 10, the standard makes it quite clear that volatile qualified types are PODs:
Arithmetic types (3.9.1 [basic.fundamental]), enumeration types, pointer types, and pointer to member types (3.9.2 [basic.compound]), and cv-qualified versions of these types (3.9.3 [basic.type.qualifier]) are collectively called scalar types. Scalar types, POD-struct types, POD-union types (clause 9 [class]), arrays of such types and cv-qualified versions of these types (3.9.3 [basic.type.qualifier]) are collectively called POD types.
However in 3.9 [basic.types] paragraph 3, the standard makes it clear that PODs can be copied “as if” they were a collection of bytes by memcpy:
For any POD type T, if two pointers to T point to distinct T objects obj1 and obj2, where neither obj1 nor obj2 is a base-class subobject, if the value of obj1 is copied into obj2, using the std::memcpy library function, obj2 shall subsequently hold the same value as obj1.
The problem with this is that a volatile qualified type may need to be copied in a specific way (by copying using only atomic operations on multithreaded platforms, for example) in order to avoid the “memory tearing” that may occur with a byte-by-byte copy.
I realise that the standard says very little about volatile qualified types, and nothing at all (yet) about multithreaded platforms, but nonetheless this is a real issue, for the following reason:
The forthcoming TR1 will define a series of traits that provide information about the properties of a type, including whether a type is a POD and/or has trivial construct/copy/assign operations. Libraries can use this information to optimise their code as appropriate, for example an array of type T might be copied with a memcpy rather than an element-by-element copy if T is a POD. This was one of the main motivations behind the type traits chapter of the TR1. However it's not clear how volatile types (or POD's which have a volatile type as a member) should be handled in these cases.
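A sketch of the concern (the type Flags and the copy function are invented for the example):

    #include <cstring>

    struct Flags { volatile int status; int data; };   // a POD containing a volatile member

    void copy(Flags& dst, const Flags& src) {
        // Permitted by 3.9 [basic.types] paragraph 3 because Flags is a POD, but a
        // byte-by-byte copy of the volatile member may "tear" on a multithreaded platform.
        std::memcpy(&dst, &src, sizeof(Flags));
    }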
Notes from the April, 2005 meeting:
It is not clear whether the volatile qualifier actually guarantees atomicity in this way. Also, the work on the memory model for multithreading being done by the Evolution Working Group seems at this point likely to specify additional semantics for volatile data, and that work would need to be considered before resolving this issue.
3.9.1 [basic.fundamental] does not impose a requirement on the floating point types that there be an exact representation of the value zero. This omission is significant in 4.12 [conv.bool] paragraph 1, in which any non-zero value converts to the bool value true.
Suggested resolution: require that all floating point types have an exact representation of the value zero.
3.9.1 [basic.fundamental] paragraph 2 says that
There are four signed integer types: "signed char", "short int", "int", and "long int."
This would indicate that const int is not a signed integer type.
There is no normative requirement on the ranges of the integral types, although the footnote in 3.9.1 [basic.fundamental] paragraph 2 indicates the intent (for int, at least) that they match the values given in the <climits> header. Should there be an explicit requirement of some sort?
(See also paper N1693.)
The relationship between the values representable by corresponding signed and unsigned integer types is not completely described, but 3.9 [basic.types] paragraph 4 says,
The value representation of an object is the set of bits that hold the value of type T.
and 3.9.1 [basic.fundamental] paragraph 3 says,
The range of nonnegative values of a signed integer type is a subrange of the corresponding unsigned integer type, and the value representation of each corresponding signed/unsigned type shall be the same.
I.e., the maximum value of each unsigned type must be larger than the maximum value of the corresponding signed type.
C90 doesn't have this restriction, and C99 explicitly says (6.2.6.2, paragraph 2),
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M <= N).
Unlike C++, the sign bit is not part of the value, and on an architecture that does not have native support of unsigned types, an implementation can emulate unsigned integers by simply ignoring what would be the sign bit in the signed type and be conforming.
The question is whether we intend to make a conforming implementation on such an architecture impossible. More generally, what range of architectures do we intend to support? And to what degree do we want to follow C99 in its evolution since C89?
(See paper J16/08-0141 = WG21 N2631.)
The taxonomy of value categories in 3.10 [basic.lval] classifies temporaries as prvalues. However, some temporaries are explicitly referred to as lvalues (cf 15.1 [except.throw] paragraph 3).
The C and C++ approaches to alignment are incompatible. See document PL22.16 10-0083 = WG21 N3093 for details.
Notes from the August, 2010 meeting:
CWG agreed that the alignment specifier should be a keyword instead of an attribute.
According to 4.1 [conv.lval] paragraph 1, an lvalue-to-rvalue conversion on an uninitialized object produces undefined behavior. Since there is only one “value” of type std::nullptr_t, an lvalue-to-rvalue conversion on a std::nullptr_t glvalue does not need to fetch the value from storage. Is there any need for undefined behavior in this case?
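A minimal sketch of the case in question:

    #include <cstddef>

    void f() {
        std::nullptr_t n;       // uninitialized
        std::nullptr_t m = n;   // lvalue-to-rvalue conversion of an uninitialized object:
                                // nominally undefined behavior, although no value need
                                // actually be fetched from storage
    }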
Section 4.4 [conv.qual] covers the case of multi-level pointers, but does not appear to cover the case of pointers to arrays of pointers. The effect is that arrays are treated differently from simple scalar values.
Consider for example the following code: (from the thread "Pointer to array conversion question" begun in comp.lang.c++.moderated)
    int main()
    {
        double *array2D[2][3];

        double *             (*array2DPtr1)[3] = array2D;      // Legal
        double * const       (*array2DPtr2)[3] = array2DPtr1;  // Legal
        double const * const (*array2DPtr3)[3] = array2DPtr2;  // Illegal
    }

and compare this code with:

    int main()
    {
        double *array[2];

        double *             *ppd1 = array;  // legal
        double * const       *ppd2 = ppd1;   // legal
        double const * const *ppd3 = ppd2;   // certainly legal (4.4/4)
    }
The problem appears to be that the pointed to types in example 1 are unrelated since nothing in the relevant section of the standard covers it - 4.4 [conv.qual] does not mention conversions of the form "cv array of N pointer to T" into "cv array of N pointer to cv T"
It appears that reinterpret_cast is the only way to perform the conversion.
Suggested resolution:
Artem Livshits proposed a resolution :-
"I suppose if the definition of "similar" pointer types in 4.4 [conv.qual] paragraph 4 was rewritten like this:
T1 is cv1,0 P0 cv1,1 P1 ... cv1,n-1 Pn-1 cv1,n T

and

T2 is cv2,0 P0 cv2,1 P1 ... cv2,n-1 Pn-1 cv2,n T

where Pi is either a "pointer to" or a "pointer to an array of Ni"; besides, P0 may also be a "reference to" or a "reference to an array of N0" (in the case of P0 of T2 being a reference, P0 of T1 may be nothing),

it would address the problem.
In fact I guess Pi in this notation may be also a "pointer to member", so 4.4 [conv.qual]/{4,5,6,7} would be nicely wrapped in one paragraph."
It is not clear what constraints are placed on a floating point implementation by the wording of the Standard. For instance, is an implementation permitted to generate a "fused multiply-add" instruction if the result would be different from what would be obtained by performing the operations separately? To what extent does the "as-if" rule allow the kinds of optimizations (e.g., loop unrolling) performed by FORTRAN compilers?
(Moved from issue 760.)
Although it was considered and rejected as part of issue 643, more recent developments may argue in favor of allowing the use of this in a late-specified return type. In particular, declaring the return type for a forwarding function in a derived class template that invokes a member function of a dependent base class is difficult without this facility. For example:
    template <typename T> struct derived: base<T> {
        auto invoke() -> decltype(this->base_func()) {
            return this->base_func();
        }
    };
(See also issue 1207 for another potential motivation for a change to this rule.)
Additional note (October, 2010):
The question should also be considered for parameter types; for example,
    class comparable {
    public:
        bool is_equal(decltype(*this) other) {  // should be X&
            return /*...*/;
        }
    };
There does not appear to be any technical difficulty that would require the restriction in 5.1.2 [expr.prim.lambda] paragraph 5 against default arguments in lambda-expressions.
There does not appear to be any technical difficulty that would require the current restriction that the return type of a lambda can be deduced only if the body of the lambda consists of a single return statement. In particular, multiple return statements could be permitted if they all return the same type.
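For illustration, a sketch of the kind of lambda that the relaxed rule would permit:

    auto abs_val = [](int i) {   // currently requires an explicit -> int
        if (i < 0)
            return -i;           // both return statements yield the same type, int,
        return i;                // so the return type could be deduced unambiguously
    };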
5.2.1 [expr.sub] paragraph 2 deals with one particular aspect of the overloaded operator[], which seems out of place. Either 5.2.1 [expr.sub] should be augmented to discuss the overloaded operator[] in general or the information in paragraph 2 should be moved into 13.5.5 [over.sub].
Because the subscripting operation is defined as indirection through a pointer value, the result of a subscript operator applied to an xvalue array is an lvalue, not an xvalue. This could be surprising to some.
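A sketch of the consequence (make is an invented function returning an xvalue of class type):

    struct A { int arr[4]; };
    A&& make();

    void g() {
        // make().arr is an xvalue of array type, but because E1[E2] is defined as
        // *((E1)+(E2)), the expression make().arr[0] is an lvalue, not an xvalue:
        int&& r = make().arr[0];   // presumably ill-formed: an rvalue reference
                                   // cannot bind directly to an lvalue
    }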
According to 5.2.3 [expr.type.conv] paragraphs 1 and 3 (stated directly or by reference to another section of the Standard), all the following expressions create temporaries:
T(1) T(1, 2) T{1} T{}
However, paragraph 2 says,
The expression T(), where T is a simple-type-specifier or typename-specifier for a non-array complete effective object type or the (possibly cv-qualified) void type, creates an rvalue of the specified type, which is value-initialized (8.5 [dcl.init]; no initialization is done for the void() case).
This does not say that the result is a temporary, which means that the lifetime of the result is not specified by 12.2 [class.temporary]. Presumably this is just an oversight.
Notes from the October, 2009 meeting:
The specification in 5.2.3 [expr.type.conv] is in error, not because it fails to state that T() is a temporary but because it requires a temporary for the other cases with fewer than two operands. The case where T is a class type is covered by 12.2 [class.temporary] paragraph 1 (“a conversion that creates an rvalue”), and a temporary should not be created when T is not a class type.
Given the following declarations:
    struct S { signed long long sll: 3; };
    S s = { -1 };
the expressions s.sll-- < 0u and s.sll < 0u have different results. The reason for this is that s.sll-- is an rvalue of type signed long long (5.2.6 [expr.post.incr]), which means that the usual arithmetic conversions (5 [expr] paragraph 10) convert 0u to signed long long and the result is true. s.sll, on the other hand, is a bit-field lvalue, which is promoted (4.5 [conv.prom] paragraph 3) to int; both operands of < have the same rank, so s.sll is converted to unsigned int to match the type of 0u and the result is false. This disparity seems undesirable.
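Restating the two comparisons with the analysis above as comments:

    #include <cstdio>

    struct S { signed long long sll: 3; };

    int main() {
        S s = { -1 };
        std::printf("%d\n", s.sll-- < 0u);   // s.sll-- is an rvalue of type signed long long;
                                             // 0u converts to signed long long: prints 1
        s.sll = -1;
        std::printf("%d\n", s.sll < 0u);     // s.sll promotes to int, then converts to
                                             // unsigned int to match 0u: prints 0
    }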
The original proposed resolution for issue 160 included changing extended_type_info (5.2.8 [expr.typeid] paragraph 1, footnote 61) to std::extended_type_info. There was no consensus on whether this name ought to be part of namespace std or in a vendor-specific namespace, so the question was moved into a separate issue.
5.2.8 [expr.typeid] paragraph 4 says,
When typeid is applied to a type-id, the result refers to a std::type_info object representing the type of the type-id. If the type of the type-id is a reference type, the result of the typeid expression refers to a std::type_info object representing the referenced type. If the type of the type-id is a class type or a reference to a class type, the class shall be completely-defined.
I'm wondering whether this is not overly restrictive. I can't think of a reason to require that T be completely-defined in typeid(T) when T is a class type. In fact, several popular compilers enforce that restriction for typeid(T), but not for typeid(T&). Can anyone explain this?
Nathan Sidwell: I think this restriction is so that whenever the compiler has to emit a typeid object of a class type, it knows what the base classes are, and can therefore emit an array of pointers-to-base-class typeids. Such a tree is necessary to implement dynamic_cast and exception catching (in a commonly implemented and obvious manner). If the class could be incomplete, the compiler might have to emit a typeid for incomplete Foo in one object file and a typeid for complete Foo in another object file. The compilation system will then have to make sure that (a) those compare equal and (b) the complete Foo gets priority, if that is applicable.
Unfortunately, there is a problem with exceptions that means there still can be a need to emit typeids for an incomplete class. Namely, one can throw a pointer-to-pointer-to-incomplete. To implement the matching of pointer-to-derived being caught by pointer-to-base, it is necessary for the typeid of a pointer type to contain a pointer to the typeid of the pointed-to type. In order to do the qualification matching on a multi-level pointer type, one has a chain of pointer typeids that can terminate in the typeid of an incomplete type. You cannot simply NULL-terminate the chain, because one must distinguish between different incomplete types.
Dave Abrahams: So if implementations are still required to be able to do it, for all practical purposes, why aren't we letting the user have the benefits?
Notes from the April, 2006 meeting:
There was some concern expressed that this might be difficult under the IA64 ABI. It was also observed that while it is necessary to handle exceptions involving incomplete types, there is no requirement that the RTTI data structures be used for exception handling.
During the discussion of issue 799, which specified the result of using reinterpret_cast to convert an operand to its own type, it was observed that it is probably reasonable to allow reinterpret_cast between any two types that have the same size and alignment.
5.3.1 [expr.unary.op] paragraph 2 indicates that the type of an address-of-member expression reflects the class in which the member was declared rather than the class identified in the nested-name-specifier of the qualified-id. This treatment is unintuitive and can lead to strange code and unexpected results. For instance, in
    struct B { int i; };
    struct D1: B { };
    struct D2: B { };

    int (D1::* pmD1) = &D2::i;   // NOT an error

More seriously, template argument deduction can give surprising results:

    struct A { int i; virtual void f() = 0; };
    struct B : A { int j; B() : j(5) {} virtual void f(); };
    struct C : B { C() { j = 10; } };

    template <class T> int DefaultValue( int (T::*m) ) {
        return T().*m;
    }

    ... DefaultValue( &B::i )   // Error: A is abstract
    ... DefaultValue( &C::j )   // returns 5, not 10.
Suggested resolution: 5.3.1 [expr.unary.op] should be changed to read,
If the member is a nonstatic member (perhaps by inheritance) of the class nominated by the nested-name-specifier of the qualified-id having type T, the type of the result is "pointer to member of class nested-name-specifier of type T."

and the comment in the example should be changed to read,
// has type int B::*
Notes from 04/00 meeting:
The rationale for the current treatment is to permit the widest possible use to be made of a given address-of-member expression. Since a pointer-to-base-member can be implicitly converted to a pointer-to-derived-member, making the type of the expression a pointer-to-base-member allows the result to initialize or be assigned to either a pointer-to-base-member or a pointer-to-derived-member. Accepting this proposal would allow only the latter use.
Additional notes:
Another problematic example has been mentioned:
    class Base {
    public:
        int func() const;
    };

    class Derived : public Base { };

    template<class T> class Templ {
    public:
        template<class S> Templ(S (T::*ptmf)() const);
    };

    void foo() {
        Templ<Derived> x(&Derived::func);   // ill-formed
    }
In this example, even though the conversion of &Derived::func to int (Derived::*)() const is permitted, the initialization of x cannot be done because template argument deduction for the constructor fails.
If the suggested resolution were adopted, the amount of code broken by the change might be reduced by adding an implicit conversion from pointer-to-derived-member to pointer-to-base-member for appropriate address-of-member expressions (not for arbitrary pointers to members, of course).
(See also issues 247 and 1121.)
According to 5.3.1 [expr.unary.op] paragraph 10,
There is an ambiguity in the unary-expression ~X(), where X is a class-name or decltype-specifier. The ambiguity is resolved in favor of treating ~ as a unary complement rather than treating ~X as referring to a destructor.
It is not clear whether this is intended to apply to an expression like (~S)(). In large measure, that depends on whether a class-name is an id-expression or not. If it is, the ambiguity described in 5.3.1 [expr.unary.op] paragraph 10 does apply; if not, the expression is an unambiguous reference to the destructor for class S. There are several places in the Standard that indicate that the name of a type is an id-expression, but that might be more confusing than helpful.
Requirements for the alignment of pointers returned by new-expressions are given in 5.3.4 [expr.new] paragraph 10:
For arrays of char and unsigned char, the difference between the result of the new-expression and the address returned by the allocation function shall be an integral multiple of the most stringent alignment requirement (3.9 [basic.types]) of any object type whose size is no greater than the size of the array being created.
The intent of this wording is that the pointer returned by the new-expression will be suitably aligned for any data type that might be placed into the allocated storage (since the allocation function is constrained to return a pointer to maximally-aligned storage). However, there is an implicit assumption that each alignment requirement is an integral multiple of all smaller alignment requirements. While this is probably a valid assumption for all real architectures, there's no reason that the Standard should require it.
For example, assume that int has an alignment requirement of 3 bytes and double has an alignment requirement of 4 bytes. The current wording only requires that a buffer that is big enough for an int or a double be aligned on a 4-byte boundary (the more stringent requirement), but that would allow the buffer to be allocated on an 8-byte boundary — which might not be an acceptable location for an int.
Suggested resolution: Change "of any object type" to "of every object type."
A similar assumption can be found in 5.2.10 [expr.reinterpret.cast] paragraph 7:
...converting an rvalue of type "pointer to T1" to the type "pointer to T2" (where ... the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value...
Suggested resolution: Change the wording to
...converting an rvalue of type "pointer to T1" to the type "pointer to T2" (where ... the alignment requirements of T1 are an integer multiple of those of T2) and back to its original type yields the original pointer value...
The same change would also be needed in paragraph 9.
Looking up operator new in a new-expression uses a different mechanism from ordinary lookup. According to 5.3.4 [expr.new] paragraph 9,
If the new-expression begins with a unary :: operator, the allocation function's name is looked up in the global scope. Otherwise, if the allocated type is a class type T or array thereof, the allocation function's name is looked up in the scope of T. If this lookup fails to find the name, or if the allocated type is not a class type, the allocation function's name is looked up in the global scope.
Note in particular that the scope in which the new-expression occurs is not considered. For example,
    void f() {
        void* operator new(std::size_t, void*);
        int* i = new int;   // okay?
    }
In this example, the implicit reference to operator new(std::size_t) finds the global declaration, even though the block-scope declaration of operator new with a different signature would hide it from an ordinary reference.
This seems strange; either the block-scope declaration should be ill-formed or it should be found by the lookup.
Notes from October 2004 meeting:
The CWG agreed that the block-scope declaration should not be found by the lookup in a new-expression. It would, however, be found by ordinary lookup if the allocation function were invoked explicitly.
(See also issue 256.)
An implementation may have an unspecified amount of array allocation overhead (5.3.4 [expr.new] paragraph 10), so that evaluation of a new-expression in which the new-type-id is T[n] involves a request for more than n * sizeof(T) bytes of storage through the relevant operator new[] function.
The placement operator new[] function does not and cannot check whether the requested size is less than or equal to the size of the provided region of memory (18.6.1.3 [new.delete.placement] paragraphs 5-6). A program using placement array new must calculate what the requested size will be in advance.
Therefore any program using placement array new must take into account the implementation's array allocation overhead, which cannot be obtained or calculated by any portable means.
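A sketch of the difficulty (the buffer size shown is exactly what a program would like to be able to compute, but the overhead term cannot be obtained portably; alignment of the buffer is a separate concern, ignored here):

    #include <new>

    unsigned char buffer[10 * sizeof(int) /* + unknown array allocation overhead */];

    int* make_ints() {
        // The new-expression may pass more than 10 * sizeof(int) to
        // operator new[](std::size_t, void*), which cannot check the size.
        return new (buffer) int[10];
    }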
Notes from the April, 2005 meeting:
While the CWG agreed that there is no portable means to accomplish this task in the current language, they felt that a paper is needed to analyze the numerous mechanisms that might address the problem and advance a specific proposal. There is no volunteer to write such a paper at this time.
5.3.4 [expr.new] paragraph 10 says that the result of an array allocation function and the value of the array new-expression from which it was invoked may be different, allowing for space preceding the array to be used for implementation purposes such as saving the number of elements in the array. However, there is no corresponding description of the relationship between the operand of an array delete-expression and the argument passed to its deallocation function.
3.7.4.2 [basic.stc.dynamic.deallocation] paragraph 3 does state that
the value supplied to operator delete[](void*) in the standard library shall be one of the values returned by a previous invocation of either operator new[](std::size_t) or operator new[](std::size_t, const std::nothrow_t&) in the standard library.
This statement might be read as requiring an implementation, when processing an array delete-expression and calling the deallocation function, to perform the inverse of the calculation applied to the result of the allocation function to produce the value of the new-expression. (5.3.5 [expr.delete] paragraph 2 requires that the operand of an array delete-expression "be the pointer value which resulted from a previous array new-expression.") However, it is not completely clear whether the "shall" expresses an implementation requirement or a program requirement (or both). Furthermore, there is no direct statement about user-defined deallocation functions.
Suggested resolution: A note should be added to 5.3.5 [expr.delete] to clarify that any offset added in an array new-expression must be subtracted in the array delete-expression.
The meaning of an old-style cast is described in terms of const_cast, static_cast, and reinterpret_cast in 5.4 [expr.cast] paragraph 5. Ignoring const_cast for the moment, it basically says that if the conversion performed by a given old-style cast is one of those performed by static_cast, the conversion is interpreted as if it were a static_cast; otherwise, it's interpreted as if it were a reinterpret_cast, if possible. The following example is given in illustration:
    struct A {};
    struct I1 : A {};
    struct I2 : A {};
    struct D : I1, I2 {};

    A *foo( D *p ) {
        return (A*)( p );   // ill-formed static_cast interpretation
    }
The obvious intent here is that a derived-to-base pointer conversion is one of the conversions that can be performed using static_cast, so (A*)(p) is equivalent to static_cast<A*>(p), which is ill-formed because of the ambiguity.
Unfortunately, the description of static_cast in 5.2.9 [expr.static.cast] does NOT support this interpretation. The problem is in the way 5.2.9 [expr.static.cast] lists the kinds of casts that can be performed using static_cast. Rather than saying something like "All standard conversions can be performed using static_cast," it says
An expression e can be explicitly converted to a type T using a static_cast of the form static_cast<T>(e) if the declaration "T t(e);" is well-formed, for some invented temporary variable t.
Given the declarations above, the hypothetical declaration
A* t(p);
is NOT well-formed, because of the ambiguity. Therefore the old-style cast (A*)(p) is NOT one of the conversions that can be performed using static_cast, and (A*)(p) is equivalent to reinterpret_cast<A*>(p), which is well-formed under 5.2.10 [expr.reinterpret.cast] paragraph 7.
Other situations besides ambiguity which might raise similar questions include access violations, casting from virtual base to derived, and casting pointers-to-members when virtual inheritance is involved.
In C, this is ill-formed (cf C99 6.5.8):
void f(char* s) { if (s < 0) { } }
...but in C++, it's not. Why? Who would ever need to write (s > 0) when they could just as well write (s != 0)?
This has been in the language since the ARM (and possibly earlier); apparently it's because the pointer conversions (4.10 [conv.ptr]) need to be performed on both operands whenever one of the operands is of pointer type. So it looks like the "null-ptr-to-real-pointer-type" conversion is hitching a ride with the other pointer conversions.
6.4.1 [stmt.if] is silent about whether the else clause of an if statement is executed if the condition is not evaluated. (This could occur via a goto or a longjmp.) C99 covers the goto case with the following provision:
If the first substatement is reached via a label, the second substatement is not executed.
It should probably also be stated that the condition is not evaluated when the “then” clause is entered directly.
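A sketch of the situation (whether the else clause runs, and whether cond() is evaluated, is exactly what is left unstated; cond is an invented function):

    bool cond();

    void f() {
        goto enter_then;     // enters the "then" clause directly; cond() is not evaluated
        if (cond()) {
    enter_then:
            ;
        } else {
            // C99 says this substatement is not executed when the first substatement
            // is reached via a label; the C++ wording is silent.
        }
    }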
Because the restriction that a trailing-return-type can appear only in a declaration with “the single type-specifier auto” (8.3.5 [dcl.fct] paragraph 2) is a semantic, not a syntactic, restriction, it does not influence disambiguation, which is “purely syntactic” (6.8 [stmt.ambig] paragraph 3). Consequently, some previously unambiguous expressions are now ambiguous. For example:
    struct A {
        A(int *);
        A *operator()(void);
        int B;
    };

    int *p;
    typedef struct BB { int C[2]; } *B, C;

    void foo() {
        // The following line becomes invalid under C++0x:
        A (p)()->B;          // ill-formed function declaration

        // In the following,
        //  - B()->C is either type-id or class member access expression
        //  - B()->C[1] is either type-id or subscripting expression
        // N3126 subclause 8.2 [dcl.ambig.res] does not mention an ambiguity
        // with these forms of expression
        A a(B ()->C);        // function declaration or object declaration
        sizeof(B ()->C[1]);  // sizeof(type-id) or sizeof on an expression
    }
7 [dcl.dcl] paragraph 3 reads,
In a simple-declaration, the optional init-declarator-list can be omitted only when... the decl-specifier-seq contains either a class-specifier, an elaborated-type-specifier with a class-key (9.1 [class.name]), or an enum-specifier. In these cases and whenever a class-specifier or enum-specifier is present in the decl-specifier-seq, the identifiers in those specifiers are among the names being declared by the declaration... In such cases, and except for the declaration of an unnamed bit-field (9.6 [class.bit]), the decl-specifier-seq shall introduce one or more names into the program, or shall redeclare a name introduced by a previous declaration. [Example:

    enum { };            // ill-formed
    typedef class { };   // ill-formed

—end example]

In the absence of any explicit restrictions in 7.1.3 [dcl.typedef], this paragraph appears to allow declarations like the following:

    typedef struct S { };    // no declarator
    typedef enum { e1 };     // no declarator

In fact, the final example in 7 [dcl.dcl] paragraph 3 would seem to indicate that this is intentional: since it is illustrating the requirement that the decl-specifier-seq must introduce a name in declarations in which the init-declarator-list is omitted, presumably the addition of a class name would have made the example well-formed.
On the other hand, there is no good reason to allow such declarations; the only reasonable scenario in which they might occur is a mistake on the programmer's part, and it would be a service to the programmer to require that such errors be diagnosed.
Suppose we've got this class definition:
struct X { void f(); static int n; };
I think I can deduce from the existing standard that the following member definitions are ill-formed:
    static void X::f() { }
    static int X::n;
To come to that conclusion, however, I have to put together several things in different parts of the standard. I would have expected to find an explicit statement of this somewhere; in particular, I would have expected to find it in 7.1.1 [dcl.stc]. I don't see it there, or anywhere.
Gabriel Dos Reis: Or in 3.5 [basic.link] which is about linkage. I would have expected that paragraph to say that members of class types have external linkage when the enclosing class has external linkage. Otherwise 3.5 [basic.link] paragraph 8:
Names not covered by these rules have no linkage.
might imply that such members do not have linkage.
Notes from the April, 2005 meeting:
The question about the linkage of class members is already covered by 3.5 [basic.link] paragraph 5.
The phrase “top-level cv-qualifier” is used numerous times in the Standard, but it is not defined. The phrase could be misunderstood to indicate that the const in something like const T& is at the “top level,” because where it appears is the highest level at which it is permitted: T& const is ill-formed.
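A sketch of the distinction as it is usually understood:

    const int* p = 0;    // the const is not top-level: it qualifies the pointed-to type
    int* const q = 0;    // top-level const: it qualifies the pointer object itself
    const int& r = 42;   // not a top-level cv-qualifier either, even though this is the
                         // "highest" position at which it may appear ("int& const" is ill-formed)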
7.1.6.3 [dcl.type.elab] paragraph 1 seems to impose an ordering constraint on the elements of friend class declarations. However, the general rule is that declaration specifiers can appear in any order. Should
    class C friend;

be well-formed?
There is disagreement among implementations as to when an enumeration type is complete. For example,
enum E { e = E() };
is rejected by some implementations and accepted by others. The Standard does not appear to resolve this question definitively.
According to 7.3 [basic.namespace] paragraph 1,
The name of a namespace can be used to access entities declared in that namespace; that is, the members of the namespace.
implying that all declarations in a namespace, including definitions of members of nested namespaces, explicit instantiations, and explicit specializations, introduce members of the containing namespace. 7.3.1.2 [namespace.memdef] paragraph 3 clarifies the intent somewhat:
Every name first declared in a namespace is a member of that namespace.
However, current changes to clarify the behavior of deleted functions (which must be deleted on their “first declaration”) state that an explicit specialization of a function template is its first declaration.
7.3.1.2 [namespace.memdef] paragraphs 1 and 2 read,
Members (including explicit specializations of templates (14.7.3 [temp.expl.spec])) of a namespace can be defined within that namespace.
Members of a named namespace can also be defined outside that namespace by explicit qualification (3.4.3.2 [namespace.qual]) of the name being defined, provided that the entity being defined was already declared in the namespace and the definition appears after the point of declaration in a namespace that encloses the declaration's namespace.
It is not clear what these specifications mean for the following pair of examples:
    namespace N { struct A; }
    using N::A;
    struct A { };
Although this does not satisfy the “by explicit qualification” requirement, it is accepted by major implementations.
    struct S;
    namespace A {
        using ::S;
        struct S { };
    }
Is this a definition “within that namespace,” or should that wording be interpreted as “directly within” the namespace?
Section 7.3.3 [namespace.udecl] paragraph 8 says:
A using-declaration is a declaration and can therefore be used repeatedly where (and only where) multiple declarations are allowed.

It contains the following example:

    namespace A { int i; }
    namespace A1 {
        using A::i;
        using A::i;   // OK: double declaration
    }
    void f() {
        using A::i;
        using A::i;   // error: double declaration
    }

However, if "using A::i;" is really a declaration, and not a definition, it is far from clear that repeating it should be an error in either context. Consider:

    namespace A { int i; void g(); }
    void f() {
        using A::g;
        using A::g;
    }

Surely the definition of f should be analogous to

    void f() {
        void g();
        void g();
    }

which is well-formed because "void g();" is a declaration and not a definition.
Indeed, if the double using-declaration for A::i is prohibited in f, why should it be allowed in namespace A1?
Proposed Resolution (04/99): Change the comment "// error: double declaration" to "// OK: double declaration". (This should be reviewed against existing practice.)
Notes from 04/00 meeting:
The core language working group was unable to come to consensus over what kind of declaration a using-declaration should emulate. In a straw poll, 7 members favored allowing using-declarations wherever a non-definition declaration could appear, while 4 preferred to allow multiple using-declarations only in namespace scope (the rationale being that the permission for multiple using-declarations is primarily to support its use in multiple header files, which are seldom included anywhere other than namespace scope). John Spicer pointed out that friend declarations can appear multiple times in class scope and asked if using-declarations would have the same property under the "like a declaration" resolution.
As a result of the lack of agreement, the issue was returned to "open" status.
See also issues 56, 85, and 138.
Additional notes (January, 2005):
Some related issues have been raised concerning the following example (modified from a C++ validation suite test):
    struct A { int i; static int j; };
    struct B : A { };
    struct C : A { };
    struct D : virtual B, virtual C {
        using B::i;
        using C::i;
        using B::j;
        using C::j;
    };
Currently, it appears that the using-declarations of i are ill-formed, on the basis of 7.3.3 [namespace.udecl] paragraph 10:
Since a using-declaration is a declaration, the restrictions on declarations of the same name in the same declarative region (3.3 [basic.scope]) also apply to using-declarations.
Because the using-declarations of i refer to different objects, declaring them in the same scope is not permitted under 3.3 [basic.scope]. It might, however, be preferable to treat this case as many other ambiguities are: allow the declaration but make the program ill-formed if a name reference resolves to the ambiguous declarations.
The status of the using-declarations of j, however, is less clear. They both declare the same entity and thus do not violate the rules of 3.3 [basic.scope]. This might (or might not) violate the restrictions of 9.2 [class.mem] paragraph 1:
Except when used to declare friends (11.4 [class.friend]) or to introduce the name of a member of a base class into a derived class (7.3.3 [namespace.udecl], 11.3 [class.access.dcl]), member-declarations declare members of the class, and each such member-declaration shall declare at least one member name of the class. A member shall not be declared twice in the member-specification, except that a nested class or member class template can be declared and then later defined.
Do the using-declarations of j repeatedly declare the same member? Or is the preceding sentence an indication that a using-declaration is not a declaration of a member?
Daveed Vandevoorde : While reading Core issue 11 I thought it implied the following possibility:
    template<typename T> struct B {
        template<int> void f(int);
    };

    template<typename T> struct D: B<T> {
        using B<T>::template f;
        void g() { this->f<1>(0); }   // OK, f is a template
    };
However, the grammar for a using-declaration reads:
and nested-name-specifier never ends in "template".
Is that intentional?
Bill Gibbons :
It certainly appears to be, since we have:
Rationale (04/99): Any semantics associated with the template keyword in using-declarations should be considered an extension.
Notes from the April 2003 meeting:
We decided to make no change and to close this issue as not-a-defect. This is not needed functionality; the example above, for example, can be written with ->template. This issue has been on the issues list for years as an extension, and there has been no clamor for it.
It was also noted that knowing that something is a template is not enough; there's still the issue of knowing whether it is a class or function template.
Additional note (February, 2011):
This issue is being reopened for further consideration after additional discussion. Instead of

    using T::template X;   // ill-formed

for a class template member X of base class T, one could write

    template<U> using X = typename T::template X<U>;
The following came up recently on comp.lang.c++.moderated (edited for brevity):
    namespace N1 {
        template<typename T> void f( T* x ) {
            // ... other stuff ...
            delete x;
        }
    }

    namespace N2 {
        using N1::f;

        template<> void f<int>( int* );   // A: ill-formed

        class Test {
            ~Test() { }
            friend void f<>( Test* x );   // B: ill-formed?
        };
    }
I strongly suspect, but don't have standardese to prove, that the friend declaration in line B is ill-formed. Can someone show me the text that allows or disallows line B?
Here's my reasoning: Writing "using" to pull the name into namespace N2 merely allows code in N2 to use the name in a call without qualification (per 7.3.3 [namespace.udecl]). But just as declaring a specialization must be done in the namespace where the template really lives (hence line A is ill-formed), I suspect that declaring a specialization as a friend must likewise be done using the original namespace name, not obliquely through a "using". I see nothing in 7.3.3 [namespace.udecl] that would permit this use. Is there?
Andrey Tarasevich: 14.5.4 [temp.friend] paragraph 2 seems to get pretty close: "A friend declaration that is not a template declaration and in which the name of the friend is an unqualified 'template-id' shall refer to a specialization of a function template declared in the nearest enclosing namespace scope".
Herb Sutter: OK, thanks. Then the question in this is the word "declared" -- in particular, we already know we cannot declare a specialization of a template in any other namespace but the original one.
John Spicer: This seems like a simple question, but it isn't.
First of all, I don't think the standard comments on this usage one way or the other.
A similar example using a namespace qualified name is ill-formed based on 8.3 [dcl.meaning] paragraph 1:
    namespace N1 {
        void f();
    }

    namespace N2 {
        using N1::f;
        class A {
            friend void N2::f();
        };
    }
Core issue 138 deals with this example:
    void foo();
    namespace A {
        using ::foo;
        class X {
            friend void foo();
        };
    }
The proposed resolution (not yet approved) for issue 138 is that the friend declares a new foo that conflicts with the using-declaration and results in an error.
Your example is different than this though because the presence of the explicit argument list means that this is not declaring a new f but is instead using a previously declared f.
One reservation I have about allowing the example is the desire to have consistent rules for all of the "declaration like" uses of template functions. Issue 275 (in DR status) addresses the issue of unqualified names in explicit instantiation and explicit specialization declarations. It requires that such declarations refer to templates from the namespace containing the explicit instantiation or explicit specialization. I believe this rule is necessary for those directives but is not really required for friend declarations -- but there is the consistency issue.
Notes from April 2003 meeting:
This is related to issue 138. John Spicer is supposed to update his paper on this topic. This is a new case not covered in that paper. We agreed that the B line should be allowed.
The Standard does not appear to specify what happens for code like the following:
    namespace one {
        template<typename T> void fun(T);
    }
    using one::fun;
    template<typename T> void fun(T);
7.3.3 [namespace.udecl] paragraph 13 does not appear to apply because it deals only with functions, not function templates:
If a function declaration in namespace scope or block scope has the same name and the same parameter types as a function introduced by a using-declaration, and the declarations do not declare the same function, the program is ill-formed.
John Spicer: For function templates I believe the rule should be that if they have the same function type (parameter types and return type) and have identical template parameter lists, the program is ill-formed.
7.3.3 [namespace.udecl] paragraph 20 says,
If a using-declaration uses the keyword typename and specifies a dependent name (14.6.2 [temp.dep]), the name introduced by the using-declaration is treated as a typedef-name (7.1.3 [dcl.typedef]).
This wording does not address use of typename in a using-declaration with a non-dependent name; the primary specification of the typename keyword in 14.6 [temp.res] does not appear to describe this case, either.
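A sketch of the two cases (B, D1, and D2 are invented for the example):

    struct B { typedef int type; };

    template<typename T> struct D1 : T {
        using typename T::type;   // dependent name: covered by 7.3.3 paragraph 20
    };

    struct D2 : B {
        using typename B::type;   // non-dependent name: not addressed by the current wording
    };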
The status of an example like the following is unclear in the current Standard:
    struct B {
        void f();
    };

    template<typename T> struct S: T {
        using B::f;
    };
7.3.3 [namespace.udecl] does not deal explicitly with dependent base classes, but does say in paragraph 3,
In a using-declaration used as a member-declaration, the nested-name-specifier shall name a base class of the class being defined. If such a using-declaration names a constructor, the nested-name-specifier shall name a direct base class of the class being defined; otherwise it introduces the set of declarations found by member name lookup (10.2 [class.member.lookup], 3.4.3.1 [class.qual]).
In the definition of S, B::f is not a dependent name but resolves to an apparently unrelated class. However, because S could be instantiated as S<B>, presumably 14.6 [temp.res] paragraph 8 would apply:
No diagnostic shall be issued for a template definition for which a valid specialization can be generated.
Note also the resolution of issue 515, which permitted a similar use of a dependent base class named with a non-dependent name.
It is not clear whether some of the wording in 7.5 [dcl.link] that applies only to function types and names ought also to apply to object names. In particular, paragraph 3 says,
Every implementation shall provide for linkage to functions written in the C programming language, "C", and linkage to C++ functions, "C++".
Nothing is said about variable names, apparently meaning that implementations need not provide C (or even C++!) linkage for variable names. Also, paragraph 5 says,
Except for functions with C++ linkage, a function declaration without a linkage specification shall not precede the first linkage specification for that function. A function can be declared without a linkage specification after an explicit linkage specification has been seen; the linkage explicitly specified in the earlier declaration is not affected by such a function declaration.
There doesn't seem to be a good reason for these provisions not to apply to variable names, as well.
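For illustration (the declarations are invented; whether either of them is required to be supported is the question):

    extern "C" int c_count;   // paragraph 3 only requires "C" linkage for functions,
                              // so an implementation is arguably not required to accept this

    extern int obj;           // an object name declared without a linkage-specification...
    extern "C" int obj;       // ...before its first linkage-specification: paragraph 5
                              // forbids this ordering for functions but says nothing
                              // about object names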
There are some kinds of declarations that can appear in a derived class and hide names from a base class, but for which the syntax does not permit a [[hiding]] attribute. For example:
    struct B1 { int N; int M; };
    struct B2 { int M; };

    struct [[base_check]] D: B1, B2 {
        enum { N };    // hides B1::N but cannot take an attribute
        using B1::M;   // hides B2::M but cannot take an attribute
    };
Additional note (October, 2010):
alias-declarations should also be considered in this regard.
Notes from the November, 2010 meeting:
Paper N3206 did not address these cases; in fact, it introduced additional member declarations that cannot be annotated as hiding a base class member (function-definitions and template-declarations), because the new virt-specifier applies to a member-declarator and none of these member-declarations uses a member-declarator.
Additional note (November, 2010):
The injected-class-name can also hide a name from a base class but cannot be annotated with new.
The facility for checking hiding and overriding of base class members should not use the attribute syntax but should use keywords instead. Concerns about breaking code by changing current identifiers into keywords can be addressed by using contextual keywords, i.e., by putting the keywords into syntactic locations where identifiers cannot appear and thus continuing to allow their use as ordinary identifiers in other contexts.
Notes from the August, 2010 meeting:
CWG expressed a preference for non-contextual keywords for these features.
The footnote for 8 [dcl.decl] paragraph 3 reads,
A declaration with several declarators is usually equivalent to the corresponding sequence of declarations each with a single declarator... The exception occurs when a name introduced by one of the declarators hides a type name used by the decl-specifiers, so that when the same decl-specifiers are used in a subsequent declaration, they do not have the same meaning...
A more important exception to the rule has been added in C++0x, specifically with the auto specifier when the deduced type is not the same for all declarators, which renders the declaration ill-formed. The footnote should be updated accordingly.
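For example (a sketch of the exception mentioned above):

    auto x = 5, *p = &x;   // OK: auto is deduced as int for both declarators
    auto i = 1, d = 2.0;   // ill-formed: auto would be int for i but double for d,
                           // unlike the corresponding sequence of single declarations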
The ellipsis for a parameter pack enters the normal declarator grammar as part of the declarator-id nonterminal. In contrast, however, the abstract-declarator grammar has no counterpart to declarator-id; instead, the ellipsis is one of the productions for the abstract-declarator nonterminal itself. It is thus impossible to declare a parameter pack for a pointer or reference using an abstract declarator, e.g.,
template<typename... T> void f(T& ...t);   // t is a parameter pack
template<typename... T> void f(T& ...);    // equivalent to void f(T&, ...)
Split off from issue 453.
It is in general not possible to determine at compile time whether a reference is used before it is initialized. Nevertheless, there is some sentiment to require a diagnostic in the obvious cases that can be detected at compile time, such as the name of a reference appearing in its own initializer. The resolution of issue 453 originally made such uses ill-formed, but the CWG decided that this question should be a separate issue.
Rationale (October, 2005):
The CWG felt that this error was not likely to arise very often in practice. Implementations can warn about such constructs, and the resolution for issue 453 makes executing such code undefined behavior; that seemed to address the situation adequately.
Note (February, 2006):
Recent discussions have suggested that undefined behavior be reduced. One possibility (broadening the scope of this issue to include object declarations as well as references) was to require a diagnostic if the initializer uses the value, but not just the address, of the object or reference being declared:
int i = i;       // Ill-formed, diagnostic required
void* p = &p;    // Okay
According to 8.3.4 [dcl.array] paragraph 1,
In a declaration T D where D has the form
D1 [ constant-expression_opt ] attribute-specifier_opt
and the type of the identifier in the declaration T D1 is “derived-declarator-type-list T”, then the type of the identifier of D is an array type; if the type of the identifier of D contains the auto type-specifier, the program is ill-formed.
This has the effect of prohibiting a declaration like
int v[1];
auto (*p)[1] = &v;
This restriction is unnecessary and presumably unintentional.
Note also that the statement that “the type of the identifier of D is an array type” is incorrect when the nested declarator is not simply a declarator-id. A similar problem exists in the wording of 8.5.3 [dcl.init.ref] paragraph 3 for function types.
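For instance (an invented illustration of the wording problem):

int (*q)[1];   // here D is (*q)[1]; the type of the identifier q is
               // "pointer to array of 1 int", not an array type as the wording states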
8.3.5 [dcl.fct]/2 restricts the use of void as a parameter type, but does not mention cv-qualified versions. Since void f(volatile void) isn't a callable function anyway, 8.3.5 [dcl.fct] should also ban cv-qualified versions. (This follows C, by the way.)
Suggested resolution:
A possible resolution would be to add (cv-qualified) before void in
The parameter list (void) is equivalent to the empty parameter list. Except for this special case, (cv-qualified) void shall not be a parameter type (though types derived from void, such as void*, can).
The current wording of 8.3.5 [dcl.fct] paragraph 6 encompasses more than it should:
If the type of a parameter includes a type of the form “pointer to array of unknown bound of T” or “reference to array of unknown bound of T,” the program is ill-formed. [Footnote: This excludes parameters of type “ptr-arr-seq T2” where T2 is “pointer to array of unknown bound of T” and where ptr-arr-seq means any sequence of “pointer to” and “array of” derived declarator types. This exclusion applies to the parameters of the function, and if a parameter is a pointer to function or pointer to member function then to its parameters also, etc. —end footnote]
The normative wording (contrary to the intention expressed in the footnote) excludes declarations like
template<class T> struct S {};
void f(S<int (*)[]>);
and
struct S {};
void f(int(*S::*)[]);
but not
struct S {};
void f(int(S::*)[]);
8.3.5 [dcl.fct] paragraph 2 says,
The parameter list (void) is equivalent to the empty parameter list.
This special case is intended for C compatibility, but C99 describes it differently (6.7.5.3 paragraph 10):
The special case of an unnamed parameter of type void as the only item in the list specifies that the function has no parameters.
The C99 formulation allows typedefs for void, while C++ (and C90) accept only the keyword itself in this role. Should the C99 approach be adopted?
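A sketch of the difference (the typedef name is invented for illustration):

typedef void VOID;
void f(VOID);   // C99: declares a function taking no parameters;
                // C++ (and C90): ill-formed, only the keyword void is accepted here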
Notes from the October, 2006 meeting:
The CWG did not take a formal position on this issue; however, there was some concern expressed over the treatment of function templates and member functions of class templates if the C++ rule were changed: for a template parameter T, would a function taking a single parameter of type T become a no-parameter function if it were instantiated with T = void?
The standard is not precise enough about when the default arguments of member functions are parsed. This leads to confusion over whether certain constructs are legal or not, and the validity of certain compiler implementation algorithms.
8.3.6 [dcl.fct.default] paragraph 5 says "names in the expression are bound, and the semantic constraints are checked, at the point where the default argument expression appears"
However, further on at paragraph 9 in the same section there is an example, where the salient parts are
int b;
class X {
  int mem2 (int i = b);   // OK use X::b
  static int b;
};

which appears to contradict the former constraint. At the point the default argument expression appears in the definition of X, X::b has not been declared, so one would expect ::b to be bound. This of course appears to violate 3.3.7 [basic.scope.class] paragraph 1(2) "A name N used in a class S shall refer to the same declaration in its context and when reevaluated in the complete scope of S. No diagnostic is required."
Furthermore, 3.3.7 [basic.scope.class] paragraph 1(1) specifies that the scope of names declared in a class "consist[s] not only of the declarative region following the name's declarator, but also of ... default arguments ...", thus implying that X::b is in scope in the default argument of X::mem2 above.
That previous paragraph hints at an implementation technique of saving the token stream of a default argument expression and parsing it at the end of the class definition (much like the bodies of functions defined in the class). This technique is employed by GCC and, judging from its behaviour, by the EDG front end as well. The standard leaves two things unspecified. First, is a default argument expression permitted to call a static member function declared later in the class in such a way as to require evaluation of that function's default arguments? That is, is the following well-formed?
class A {
  static int Foo (int i = Baz ());
  static int Baz (int i = Bar ());
  static int Bar (int i = 5);
};

If that is well-formed, at what point is the nonsensical nature of

class B {
  static int Foo (int i = Baz ());
  static int Baz (int i = Foo ());
};

detected? Is it when B is complete? Is it when B::Foo or B::Baz is called in such a way as to require default argument expansion? Or is no diagnostic required?
The other problem is with collecting the tokens that form the default argument expression. Default arguments which contain template-ids with more than one parameter present a difficulty in determining when the default argument finishes. Consider,
template <int A, typename B> struct T { static int i; };
class C {
  int Foo (int i = T<1, int>::i);
};

The default argument contains a non-parenthesized comma. Is it required that this comma be taken as part of the default argument expression and not as the beginning of another parameter declaration? To accept this as part of the default argument would require name lookup of T (to determine that the '<' was part of a template argument list and not a less-than operator) before C is complete. Furthermore, the more pathological
class D {
  int Foo (int i = T<1, int>::i);
  template <int A, typename B> struct T { static int i; };
};

would be very hard to accept. Even though T is declared after Foo, T is in scope within Foo's default argument expression.
Suggested resolution:
Append the following text to 8.3.6 [dcl.fct.default] paragraph 8.
The default argument expression of a member function declared in the class definition consists of the sequence of tokens up until the next non-parenthesized, non-bracketed comma or close parenthesis. Furthermore, such default argument expressions shall not require evaluation of a default argument of a function declared later in the class.
This would make the above classes A, B, C, and D ill-formed and is in line with the existing compiler practice that I am aware of.
Notes from the October, 2005 meeting:
The CWG agreed that the first example (A) is currently well-formed and that it is not unreasonable to expect implementations to handle it by processing default arguments recursively.
Additional notes, May, 2009:
Presumably the following is ill-formed:
int f(int = f());
However, it is not clear what in the Standard makes it so. Perhaps there needs to be a statement to the effect that a default argument only becomes usable after the complete declarator of which it is a part.
Is this program well-formed?
struct S {
  static int f2(int = f1());   // OK?
  static int f1(int = 2);
};

int main() {
  return S::f2();
}
A class member function can in general refer to class members that are declared lexically later. But what about referring to default arguments of member functions that haven't yet been declared?
It seems to me that if f2 can refer to f1, it can also refer to the default argument of f1, but at least one compiler disagrees.
According to the new wording of 8.3.6 [dcl.fct.default] paragraph 1,
A default argument is implicitly converted (Clause 4 [conv]) to the parameter type.
This is incorrect when the default argument is a braced-init-list. That sentence doesn't seem to be necessary, but if it is kept, it should be recast in terms of initialization rather than conversion.
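For example, in something like the following (an invented illustration), the default argument is a braced-init-list and describing it as "converted" to the parameter type does not fit:

#include <initializer_list>

struct A { A(std::initializer_list<int>); };
void g(A a = {1, 2, 3});   // the default argument list-initializes the parameter;
                           // no implicit conversion in the Clause 4 sense is involved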
Paragraph 9 of 8.5 [dcl.init] says:
If no initializer is specified for an object, and the object is of (possibly cv-qualified) non-POD class type (or array thereof), the object shall be default-initialized; if the object is of const-qualified type, the underlying class type shall have a user-declared default constructor. Otherwise, if no initializer is specified for an object, the object and its subobjects, if any, have an indeterminate initial value; if the object or any of its subobjects are of const-qualified type, the program is ill-formed.
What if a const POD object has no non-static data members? This wording requires an empty initializer for such cases:
struct Z {
  // no data members
  operator int() const { return 0; }
};

void f() {
  const Z z1;          // ill-formed: no initializer
  const Z z2 = { };    // well-formed
}
Similar comments apply to a non-POD const object, all of whose non-static data members and base class subobjects have default constructors. Why should the class of such an object be required to have a user-declared default constructor?
(See also issue 78.)
Additional note (February, 2011):
This issue should be brought up again in light of constexpr constructors and non-static data member initializers.
According to 8.5 [dcl.init] paragraph 5,
To zero-initialize an object of type T means:
...
if T is a reference type, no initialization is performed.
However, a reference is not an object, so this makes no sense.
In this example:
struct A {};
struct B: A {
  B(int);
  B(B&);
  B(A);
};
void foo(B);
void bar() { foo(0); }
we are copy-initializing a B from 0. So by 13.3.1.4 [over.match.copy] we consider all the converting constructors of B, and choose B(int) to create a B. Then, by 8.5 [dcl.init] paragraph 15, we direct-initialize the parameter from that temporary B. By 13.3.1.3 [over.match.ctor] we consider all constructors. The copy constructor cannot be called with a temporary, but B(A) is callable.
As far as I can tell, the Standard says that this example is well-formed, and calls B(A). EDG and G++ have rejected this example with a message about the copy constructor not being callable, but I have been unsuccessful in finding anything in the Standard that says that we only consider the copy constructor in the second step of copy-initialization. I wouldn't mind such a rule, but it doesn't seem to be there. And implementing issue 391 causes G++ to start accepting the example.
This question came up before in a GCC bug report; in the discussion of that bug Nathan Sidwell said that some EDG folks explained to him why the testcase is ill-formed, but unfortunately didn't provide that explanation in the bug report.
I think the resolution of issue 391 makes this example well-formed; it was previously ill-formed because in order to bind the temporary B(0) to the argument of A(const A&) we needed to make another temporary B, and that's what made the example ill-formed. If we want this example to stay ill-formed, we need to change something else.
Steve Adamczyk:
I tracked down my response to Nathan at the time, and it related to my paper N1232 (on the auto_ptr problem). The change that came out of that paper is in 13.3.3.1 [over.best.ics] paragraph 4:
However, when considering the argument of a user-defined conversion function that is a candidate by 13.3.1.3 [over.match.ctor] when invoked for the copying of the temporary in the second step of a class copy-initialization, or by 13.3.1.4 [over.match.copy], 13.3.1.5 [over.match.conv], or 13.3.1.6 [over.match.ref] in all cases, only standard conversion sequences and ellipsis conversion sequences are allowed.
This is intended to prevent use of more than one implicit user- defined conversion in an initialization.
I told Nathan B(A) can't be called because its argument would require yet another user-defined conversion, but I was wrong. I saw the conversion from B to A and immediately thought “user-defined,” but in fact because B is a derived class of A the conversion according to 13.3.3.1 [over.best.ics] paragraph 6 is a derived-to-base Conversion (even though it will be implemented by calling a copy constructor).
So I agree with you: with the analysis above and the change for issue 391 this example is well-formed. We should discuss whether we want to make a change to keep it ill-formed.
8.5 [dcl.init] paragraph 7 only describes how to initialize objects:
To value-initialize an object of type T means:
However, 5.2.3 [expr.type.conv] paragraph 2 calls for value-initializing prvalues, which in the case of scalar types are not objects:
The expression T(), where T is a simple-type-specifier or typename-specifier for a non-array complete object type or the (possibly cv-qualified) void type, creates a prvalue of the specified type, which is value-initialized (8.5 [dcl.init]; no initialization is done for the void() case).
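For instance:

int i = int();         // int() is a value-initialized prvalue of scalar type, not an object
double d = double();   // likewise; both i and d end up zero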
The examples in 8.5.3 [dcl.init.ref] paragraph 5 are not consistent as to whether earlier code fragments are assumed to be part of the example or not. For instance, the variables i and d are used as initializers without declaration in later fragments, presumably intended to refer to the declarations introduced in the first couple. However, the third fragment declares rca, which is an incompatible redeclaration of a name declared in the first fragment.
By analogy with the variable definition
int arr[3] = {1, 2, 3};
it should be possible to write something like
void f(const int(&)[3]);
void g() {
  f({1, 2, 3});
}
There are currently at least two problems with the latter usage. First, the variable initializer relies on brace elision, which appears to be defined only for variable declarations (8.5.1 [dcl.init.aggr] paragraph 11), and possibly only for certain forms of variable declarations.
Second, the call would require creation of an array temporary to which the parameter reference would be bound, and the current Standard does not describe array temporaries (although rvalue arrays can occur as members of class rvalues). This is also contrary to the direction established by CWG in considering 1058.
A POD-struct is not permitted to have a user-declared copy assignment operator (9 [class] paragraph 4). However, a template assignment operator is not considered a copy assignment operator, even though its specializations can be selected by overload resolution for performing copy operations (12.8 [class.copy] paragraph 9 and especially footnote 114). Consequently, X in the following code is a POD, notwithstanding the fact that copy assignment (for a non-const operand) is a member function call rather than a bitwise copy:
struct X {
template<typename T> const X& operator=(T&);
};
void f() {
X x1, x2;
x1 = x2; // calls X::operator=<X>(X&)
}
Is this intentional?
Non-static data member initializers should not be part of C++0x unless they have implementation experience.
Notes from the August, 2010 meeting:
The C++/CLI dialect has a very similar feature that has been implemented.
Move semantics for *this should not be part of C++0x unless they have implementation experience.
There doesn't seem to be a prohibition in 9.5 [class.union] against a declaration like
union { int : 0; } x;

Should that be valid? If so, 8.5 [dcl.init] paragraph 5 third bullet, which deals with default-initialization of unions, should say that no initialization is done if there are no data members.
What about:
union { } x;
static union { };

If the first example is well-formed, should either or both of these cases be well-formed as well?
(See also the resolution for issue 151.)
Notes from 10/00 meeting: The resolution to issue 178, which was accepted as a DR, addresses the first point above (default initialization). The other questions have not yet been decided, however.
Is the signedness of x in the following example implementation-defined?
template <typename T> struct A {
  T x : 7;
};
template struct A<long>;
A similar example could be created with a typedef.
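For instance, something along these lines (an invented variant of the example above):

typedef long L;
struct A2 { L x : 7; };   // is the signedness of x implementation-defined,
                          // or does the typedef preserve the "plain" long status?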
Lawrence Crowl: According to 9.6 [class.bit] paragraph 3,
It is implementation-defined whether a plain (neither explicitly signed nor unsigned) char, short, int or long bit-field is signed or unsigned.
This clause is conspicuously silent on typedefs and template parameters.
Clark Nelson: At least in C, the intention is that the presence or absence of this redundant keyword is supposed to be remembered through typedef declarations. I don't remember discussing it in C++, but I would certainly hope that we don't want to do something different. And presumably, we would want template type parameters to work the same way.
So going back to the original example, in an instantiation of A<long>, the signedness of the bit-field is implementation-defined, but in an instantiation of A<signed long>, the bit-field is definitely signed.
Peter Dimov: How can this work? Aren't A<long> and A<signed long> the same type?
(See also issue 739.)

9.6 [class.bit] paragraph 3 says,
It is implementation-defined whether a plain (neither explicitly signed nor unsigned) char, short, int or long bit-field is signed or unsigned.
The implications of this permission for an implementation that chooses to treat plain bit-fields as unsigned are not clear. Does this mean that the type of such a bit-field is adjusted to the unsigned variant or simply that sign-extension is not performed when the value is fetched? C99 is explicit in specifying the former (6.7.2 paragraph 5: “for bit-fields, it is implementation-defined whether the specifier int designates the same type as signed int or the same type as unsigned int”), while C90 takes the latter approach (6.5.2.1: “Whether the high-order bit position of a (possibly qualified) 'plain' int bit-field is treated as a sign bit is implementation-defined”).
(See also issue 675 and issue 741.)

Additional note, May, 2009:
As an example of the implications of this question, consider the following declaration:
struct S { int i: 2; signed int si: 2; unsigned int ui: 2; } s;
Is it implementation-defined which expression, cond?s.i:s.si or cond?s.i:s.ui, is an lvalue (the lvalueness of the result depends on the second and third operands having the same type, per 5.16 [expr.cond] paragraph 4)?
The term "ambiguous base class" doesn't seem to be actually defined anywhere. 10.2 [class.member.lookup] paragraph 7 seems like the place to do it.
According to 10.4 [class.abstract] paragraph 6,
Member functions can be called from a constructor (or destructor) of an abstract class; the effect of making a virtual call (10.3 [class.virtual]) to a pure virtual function directly or indirectly for the object being created (or destroyed) from such a constructor (or destructor) is undefined.
This prohibition is unnecessarily restrictive. It should not apply to cases in which the pure virtual function has been defined.
Currently the "pure" specifier for a virtual member function has two meanings that need not be related:
The prohibition of virtual calls to pure virtual functions arises from the first meaning and unnecessarily penalizes those who only need the second.
For example, consider a scenario such as the following. A class B is defined containing a (non-pure) virtual function f that provides some initialization and is thus called from the base class constructor. As time passes, a number of classes are derived from B and it is noticed that each needs to override f, so it is decided to make B::f pure to enforce this convention while still leaving the original definition of B::f to perform its needed initialization. However, the act of making B::f pure means that every reference to f that might occur during the execution of one of B's constructors must be tracked down and edited to be a qualified reference to B::f. This process is tedious and error-prone: needed edits might be overlooked, and calls that actually should be virtual when the containing function is called other than during construction/destruction might be incorrectly changed.
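A sketch of the scenario described above (the names are invented for illustration):

struct B {
  B() { init(); }    // virtual call to a pure virtual function during construction:
                     // undefined behavior under the current wording, even though
                     // B::init has a definition
  virtual void init() = 0;
};
void B::init() { /* common initialization */ }

struct D : B {
  void init() { B::init(); /* plus derived-specific setup */ }
};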
Suggested resolution: Allow virtual calls to pure virtual functions if the function has been defined.
Referring to a private member of a class, 11 [class.access] paragraph 1 says,
its name can be used only by members and friends of the class in which it is declared.
That wording does not appear to reflect the intent of access control, however. Consider the following:
struct S {
void f(int);
private:
void f(double);
};
void g(S* sp) {
sp->f(2); // Ill-formed?
}
The statement from 11 [class.access] paragraph 1 says that the name f can be used only by members and friends of S. Function g is neither, and it clearly contains a use of the name f. That appears to make it ill-formed, in spite of the fact that overload resolution will select the public member.
A related question is whether the use of the term “name” in the description of the effect of access control means that it does not apply to constructors and destructors, which do not have names.
Mike Miller: The phrase “its name can be used” should be understood as “it can be referred to by name.” Paragraph 4, among other places, makes it clear that access control is applied after overload resolution. The “name” phrasing is there to indicate that access control does not apply where the name is not used (in a call via a pointer, for example).
I have heard a claim that the following code is valid, but I don't see why.
struct A {
  int foo ();
};
struct B: A {
private:
  using A::foo;
};
int main () {
  return B ().foo ();
}
It seems to me that the using declaration in B should hide the public foo in A. Then the call to B::foo should fail because B::foo is not accessible in main.
Am I missing something?
Steve Adamczyk: This is similar to the last example in 11.2 [class.access.base]. In prose, the rule is that if you have access to cast to a base class and you have access to the member in the base class, you are given access in the derived class. In this case, A is a public base class of B and foo is public in A, so you can access foo through a B object. The actual permission for this is in the fourth bullet in 11.2 [class.access.base] paragraph 4.
The wording changes for issue 9 make this clearer, but I believe even without them this example could be discerned to be valid.
See my paper J16/96-0034, WG21/N0852 on this topic.
Steve Clamage: But a using-declaration is a declaration (7.3.3 [namespace.udecl]). Compare with
struct B : A {
private:
  int foo();
};
In this case, the call would certainly be invalid, even though your argument about casting B to an A would make it OK. Your argument basically says that an access adjustment to make something less accessible has no effect. That also doesn't sound right.
Steve Adamczyk: I agree that is strange. I do think that's what 11.2 [class.access.base] says, but perhaps that's not what we want it to say.
Consider the following example:
struct B {
  void f(){}
};
class N : protected B { };
struct P: N {
  friend int main();
};
int main() {
  N n;
  B& b = n;   // R
  b.f();
}
This code is rendered well-formed by bullet 3 of 11.2 [class.access.base] paragraph 4, which says that a base class B of N is accessible at R if
R occurs in a member or friend of a class P derived from N, and an invented public member of B would be a private or protected member of P
This provision circumvents the additional restrictions on access to protected members found in 11.5 [class.protected] — main() could not call B::f() directly because the reference is not via an object of the class through which access is obtained. What is the purpose of this rule?
With the change from a scope-based to an entity-based definition of friendship (see issues 372 and 580), it could well make sense to grant friendship to enumerations and variables, for example:
enum E: int;
class C {
  static const int i = 5;   // Private
  friend E;
  friend int x;
};
enum E { e = C::i };   // OK: E is a friend
int x = C::i;          // OK: x is a friend
According to the current wording of 11.4 [class.friend] paragraph 3, the friend declaration of E is well-formed but ignored, while the friend declaration of x is ill-formed.
Although it is not possible to specify a constructor's template arguments in a constructor invocation (because the constructor has no name but is invoked by use of the constructor's class's name), it is possible to “name” the constructor in declarative contexts: per 3.4.3.1 [class.qual] paragraph 2,
In a lookup in which the constructor is an acceptable lookup result, if the nested-name-specifier nominates a class C, and the name specified after the nested-name-specifier, when looked up in C, is the injected-class-name of C (clause 9 [class]), the name is instead considered to name the constructor of class C... Such a constructor name shall be used only in the declarator-id of a declaration that names a constructor.
Should it therefore be possible to specify template-arguments for a templated constructor in an explicit instantiation or specialization? For example,
template <int dim> struct T {};

struct X {
  template <int dim> X (T<dim> &) {};
};

template X::X<> (T<2> &);
If so, that should be clarified in the text. In particular, 12.1 [class.ctor] paragraph 1 says,
Constructors do not have names. A special declarator syntax using an optional sequence of function-specifiers (7.1.2 [dcl.fct.spec]) followed by the constructor’s class name followed by a parameter list is used to declare or define the constructor.
This certainly sounds as if the parameter list must immediately follow the class name, with no allowance for a template argument list.
It would be worthwhile in any event to revise this wording to utilize the “considered to name” approach of 3.4.3.1 [class.qual]; as it stands, this wording sounds as if the following would be acceptable:
struct S {
S();
};
S() { } // qualified-id not required?
Notes from the October, 2006 meeting:
It was observed that explicitly specifying the template arguments in a constructor declaration is never actually necessary because the arguments are, by definition, all deducible and can thus be omitted.
An implicit declaration of a copy assignment operator is deprecated if the class has a user-declared copy constructor or a user-declared destructor. However, the example in 12.2 [class.temporary] relies on such an implicit declaration; an explicit declaration for the copy assignment operator for class X should be provided:
class X {
public:
X(int);
X(const X&);
~X();
};
class Y {
public:
Y(int);
Y(Y&&);
~Y();
};
X f(X);
Y g(Y);
void h() {
X a(1);
X b = f(X(2));
Y c = g(Y(3));
a = f(a); // relies on implicitly-declared X::operator=(const X&)
}
Note that destructors suffer from problems similar to those of constructors dealt with in issues 194 and 263 (constructors as friends). Also, the wording in 12.4 [class.dtor] paragraph 1 does not permit a destructor to be defined outside of the member-list.
Change 12.4 [class.dtor], paragraph 1 from
...A special declarator syntax using an optional function-specifier (7.1.2 [dcl.fct.spec]) followed by ~ followed by the destructor's class name followed by an empty parameter list is used to declare the destructor in a class definition. In such a declaration, the ~ followed by the destructor's class name can be enclosed in optional parentheses; such parentheses are ignored....
to
...A special declarator syntax using an optional sequence of function-specifiers (7.1.2 [dcl.fct.spec]), an optional friend keyword, an optional sequence of function-specifiers (7.1.2 [dcl.fct.spec]) followed by an optional :: scope-resolution-operator followed by an optional nested-name-specifier followed by ~ followed by the destructor's class name followed by an empty parameter list is used to declare the destructor. The optional nested-name-specifier shall not be specified in the declaration of a destructor within the member-list of the class of which the destructor is a member. In such a declaration, the optional :: scope-resolution-operator followed by an optional nested-name-specifier followed by ~ followed by the destructor's class name can be enclosed in optional parentheses; such parentheses are ignored....
The current wording of 12.4 [class.dtor] paragraph 7 says,
After executing the body of the destructor and destroying any automatic objects allocated within the body, a destructor for class X calls the destructors for X's direct non-variant members...
This is incorrect; it is only the non-static members that are destroyed.
Paragraph 4 of 12.5 [class.free] speaks of looking up a deallocation function. While it is an error if a placement deallocation function alone is found by this lookup, there seems to be an assumption that a placement deallocation function and a usual deallocation function can both be declared in a given class scope without creating an ambiguity. The normal mechanism by which ambiguity is avoided when functions of the same name are declared in the same scope is overload resolution; however, there is no mention of overload resolution in the description of the lookup. In fact, there appears to be nothing in the current wording that handles this case. That is, the following example appears to be ill-formed, according to the current wording:
struct S {
  void operator delete(void*);
  void operator delete(void*, int);
};
void f(S* p) {
  delete p;   // ill-formed: ambiguous operator delete
}
Suggested resolution (Mike Miller, March 2002):
I think you might get the right effect by replacing the last sentence of 12.5 [class.free] paragraph 4 with something like:
After removing all placement deallocation functions, the result of the lookup shall contain an unambiguous and accessible deallocation function.
In an example like,
struct Y {};
template <typename T> struct X : public virtual Y { };
template <typename T> class A : public X<T> {
  template <typename S> A (S) : S () { }
};
template A<int>::A (Y);
Should S be found? (S is a dependent name, so if it resolves to a base class type in the instantiated template, it should satisfy the requirements.) All the compilers I tried allowed this example, but 12.6.2 [class.base.init] paragraph 2 says,
Names in a mem-initializer-id are looked up in the scope of the constructor’s class and, if not found in that scope, are looked up in the scope containing the constructor’s definition.
The name S is not declared in those scopes.
Mike Miller: Here's another example that is accepted by most/all compilers but not by the current wording:
namespace N {
  struct B { B(int); };
  typedef B typedef_B;
  struct D: B { D(); };
}
N::D::D(): typedef_B(0) { }
Except for the fact that the constructor function parameter names are ignored (see paragraph 7), what the compilers seem to be doing is essentially ordinary unqualified name lookup.
Notes from the October, 2009 meeting:
The eventual resolution of this issue should take into account the template parameter scope introduced by the resolution of issue 481.
References to non-static data members inside the body of a non-static member function (which includes the mem-initializers of a constructor definition) are implicitly transformed to member access expressions using (*this) (9.3.1 [class.mfct.non-static] paragraph 3). Although 5.1.1 [expr.prim.general] paragraph 3 permits use of this in a brace-or-equal-initializer for a non-static data member, 12.6.2 [class.base.init] does not give details about the value of this in that context, and there is no parallel to the transformation of member references into class member access expressions. This leaves use of non-static data members in this context underspecified.
The current wording of 12.6.2 [class.base.init] paragraph 5 says,
A ctor-initializer may initialize the member of an anonymous union that is a member of the constructor's class.
The wording “the member” is strange; furthermore, this should be restricted to non-static data members. That could be accomplished by using the existing term “variant members,” which is defined in 9.5 [class.union] paragraph 8 to be “the non-static data members of all anonymous unions that are members of” the class (which by definition must be non-static data members, since a storage class specifier is not allowed on an anonymous union in class scope).
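For example (an invented illustration):

struct S {
  union { int i; double d; };   // i and d are variant members of S
  S() : i(0) { }                // mem-initializer for a member of an anonymous union
};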
[Picked up by evolution group at October 2002 meeting.]
(See also paper J16/99-0005 = WG21 N1182.)

At the London meeting, 12.8 [class.copy] paragraph 15 was changed to limit the optimization described to only the following cases:
Can we find an appropriate description for the desired cases?
Rationale (04/99): The absence of this optimization does not constitute a defect in the Standard, although the proposed resolution in the paper should be considered when the Standard is revised.
Note (March, 2008):
The Evolution Working Group has accepted the intent of this issue and referred it to CWG for action (not for C++0x). See paper J16/07-0033 = WG21 N2173.
Notes from the June, 2008 meeting:
The CWG decided to take no action on this issue until an interested party produces a paper with analysis and a proposal.
Consider the following example:
int c;
struct A {
  A() { ++c; }
  A(const A&) { ++c; }
};
struct B {
  A a;
  B(const A& a): a(a) { }
};
int main() {
  (B(A()));
  return c - 1;
}
Here we would like to be able to avoid the copy and just construct the A() directly into the A subobject of B. But we can't, because it isn't allowed by 12.8 [class.copy] paragraph 34 bullet 3:
when a temporary class object that has not been bound to a reference (12.2 [class.temporary]) would be copied/moved to a class object with the same cv-unqualified type, the copy/move operation can be omitted by constructing the temporary object directly into the target of the omitted copy/move
The part about not being bound to a reference was added for an unrelated reason by issue 185. If that resolution were recast to require that the temporary object is not accessed after the copy, rather than banning the reference binding, this optimization could be applied.
The similar example using pass by value is also not one of the allowed cases, which could be considered part of issue 6.
Moving to always doing overload resolution for determining exception specifications and implicit deletion creates some unfortunate cycles:
template<typename T> struct A { T t; };
template <typename T> struct B { typename T::U u; };
template <typename T> struct C { C(const T&); };
template <typename T> struct D { C<B<T> > v; };
struct E { typedef A<D<E> > U; };
extern A<D<E> > a;
A<D<E> > a2(a);
If declaring the copy constructor for A<D<E>> is part of instantiating the class, then we need to do overload resolution on D<E>, and thus C<B<E>>. We consider C(const B<E>&), and therefore look to see if there's a conversion from C<B<E>> to B<E>, which instantiates B<E>, which fails because it has a field of type A<D<E>> which is already being instantiated.
Even if we wait until A<D<E>> is considered complete before finalizing the copy constructor declaration, declaring the copy constructor for B<E> will want to look at the copy constructor for A<D<E>>, so we still have the cycle.
I think that to avoid this cycle we need to short-circuit consideration of C(const T&) somehow. But I don't see how we can do that without breaking
struct F { F(F&); };
struct G;
struct G2 { G2(const G&); };
struct G {
  G(G&&);
  G(const G2&);
};
struct H: F, G { };
extern H h;
H h2(h);
Here, since G's move constructor suppresses the implicit copy constructor, the defaulted H copy constructor calls G(const G2&) instead. If the move constructor did not suppress the implicit copy constructor, I believe the implicit copy constructor would always be viable, and therefore a better match than a constructor taking a reference to another type.
So perhaps the answer is to reconsider that suppression and then disqualify any constructor taking (a reference to) a type other than the constructor's class from consideration when looking up a subobject constructor in an implicitly defined constructor. (Or assignment operator, presumably.)
Another possibility would be that when we're looking for a conversion from C<B<E>> to B<E> we could somehow avoid considering, or even declaring, the B<E> copy constructor. But that seems a bit dodgy.
Additional note (October, 2010):
An explicitly declared move constructor/op= should not suppress the implicitly declared copy constructor/op=; it should cause it to be deleted instead. This should prevent a member function taking a (reference to) an un-reference-related type from being chosen by overload resolution in a defaulted member function.
And we should clarify that member functions taking un-reference-related types are not even considered during overload resolution in a defaulted member function, to avoid requiring their parameter types to be complete.
Inheriting constructors should not be part of C++0x unless they have implementation experience.
Consider the following example:
class B1 {};
typedef void (B1::*PB1) ();   // memptr to B1

class B2 {};
typedef void (B2::*PB2) ();   // memptr to B2

class D1 : public B1, public B2 {};
typedef void (D1::*PD) ();    // memptr to D1

struct S {
  operator PB1();   // can be converted to PD
} s;
struct T {
  operator PB2();   // can be converted to PD
} t;

void foo() {
  s == t;   // Is this an error?
}
According to 13.6 [over.built] paragraph 16, there is an operator== for PD (“For every pointer to member type...”), so why wouldn't it be used for this comparison?
Mike Miller: The problem, as I understand it, is that 13.3.1.2 [over.match.oper] paragraph 3, bullet 3, sub-bullet 3 is broader than it was intended to be. It says that candidate built-in operators must “accept operand types to which the given operand or operands can be converted according to 13.3.3.1 [over.best.ics].” 13.3.3.1.2 [over.ics.user] describes user-defined conversions as having a second standard conversion sequence, and there is nothing to restrict that second standard conversion sequence.
My initial thought on addressing this would be to say that user-defined conversion sequences whose second standard conversion sequence contains a pointer conversion or a pointer-to-member conversion are not considered when selecting built-in candidate operator functions. They would still be applicable after the hand-off to Clause 5 (e.g., in bringing the operands to their common type, 5.10 [expr.eq], or composite pointer type, 5.9 [expr.rel]), just not in constructing the list of built-in candidate operator functions.
I started to suggest restricting the second standard conversion sequence to conversions having Promotion or Exact Match rank, but that would exclude the Boolean conversions, which are needed for !, &&, and ||. (It would have also restricted the floating-integral conversions, though, which might be a good idea. They can't be used implicitly, I think, because there would be an ambiguity among all the promoted integral types; however, none of the compilers I tested even tried those conversions because the errors I got were not ambiguities but things like “floating point operands not allowed for %”.)
Bill Gibbons: I recall seeing this problem before, though possibly not in committee discussions. As written this rule makes the set of candidate functions dependent on what classes have been defined, including classes not otherwise required to have been defined in order for "==" to be meaningful. For templates this implies that the set is dependent on what templates have been instantiated, e.g.
template<class T> class U : public T { };
U<B1> u;   // changes the set of candidate functions to include
           // operator==(U<B1>,U<B1>)?
There may be other places where the existence of a class definition, or worse, a template instantiation, changes the semantics of an otherwise valid program (e.g. pointer conversions?) but it seems like something to be avoided.
(See also issue 954.)
The rules for selecting candidate functions in copy-list-initialization (13.3.1.7 [over.match.list]) differ from those of regular copy-initialization (13.3.1.4 [over.match.copy]): the latter specify that only the converting (non-explicit) constructors are considered, while the former include all constructors but state that the program is ill-formed if an explicit constructor is selected by overload resolution. This is counterintuitive and can lead to surprising results. For example, the call to the function object p in the following example is ambiguous because the explicit constructor is a candidate for the initialization of the operator's parameter:
struct MyStore {
  explicit MyStore(int initialCapacity);
};
struct MyInt {
  MyInt(int i);
};
struct Printer {
  void operator()(MyStore const& s);
  void operator()(MyInt const& i);
};
void f() {
  Printer p;
  p({23});
}
According to 13.3.3 [over.match.best] paragraph 4, the following program appears to be ill-formed:
void f(int, int=0);
void f(int=0, int);
void g() {
  f();
}
However, I do not believe that this is the intent of this paragraph in the standard.
13.3.3 [over.match.best] paragraph 4:
If the best viable function resolves to a function for which multiple declarations were found, and if at least two of these declarations or the declarations they refer to in the case of using-declarations specify a default argument that made the function viable, the program is ill-formed. [Example:

namespace A {
  extern "C" void f(int = 5);
}
namespace B {
  extern "C" void f(int = 5);
}
using A::f;
using B::f;

void use() {
  f(3);   // OK, default argument was not used for viability
  f();    // Error: found default argument twice
}

end example]
It's not clear how overloading and partial ordering handle non-deduced pairs of corresponding arguments. For example:
template<typename T> struct A { typedef char* type; };

template<typename T> char* f1(T, typename A<T>::type);    // #1
template<typename T> long* f1(T*, typename A<T>::type*);  // #2

long* p1 = f1(p1, 0);   // #3
I thought that #3 is ambiguous but different compilers disagree on that. Comeau C/C++ 4.3.3 (EDG 3.0.3) accepted the code, GCC 3.2 and BCC 5.5 selected #1 while VC7.1+ yields ambiguity.
I intuitively thought that the second pair should prevent overloading from triggering partial ordering, since both arguments are non-deduced and have different types, (char*, char**), just as in the following:
template<typename T> char* f2(T, char*);    // #3
template<typename T> long* f2(T*, char**);  // #4

long* p2 = f2(p2, 0);   // #5
In this case all the compilers I checked found #5 to be ambiguous. The standard and DR 214 are not clear about how partial ordering handles such cases.
I think that overloading should not trigger partial ordering (in step 13.3.3 [over.match.best]/1/5) if some candidates have non-deduced pairs with different (specialized) types. At this stage the arguments have already been adjusted (e.g., array to pointer), so there is no need to mention that. In case one of the arguments is non-deduced, partial ordering should consider only the type from the specialization:
template<typename T> struct B { typedef T type; };

template<typename T> char* f3(T, T);                    // #7
template<typename T> long* f3(T, typename B<T>::type);  // #8

char* p3 = f3(p3, p3);   // #9
According to my reasoning, #9 should yield an ambiguity since the second pair is (T, long*). The second type (i.e., long*) was taken from the specialization candidate of #8. EDG and GCC accepted the code. VC and BCC found an ambiguity.
John Spicer: There may (or may not) be an issue concerning whether nondeduced contexts are handled properly in the partial ordering rules. In general, I think nondeduced contexts work, but we should walk through some examples to make sure we think they work properly.
Rani's description of the problem suggests that he believes that partial ordering is done on the specialized types. This is not correct. Partial ordering is done on the templates themselves, independent of type information from the specialization.
Notes from October 2004 meeting:
John Spicer will investigate further to see if any action is required.
(See also issue 885.)
The changes for issue 990 did not address the description of overload resolution when an argument is an empty braced-init-list. For example:
struct A {
  A();
  A(std::initializer_list<int>);
  A(std::initializer_list<double>);
};
A a{};   // OK

void f(A);
void g() {
  f({});   // ambiguous
}
Currently overload resolution does not distinguish between binding an lvalue reference to a function lvalue and an rvalue reference to a function lvalue. The former should be preferred.
In a related point, the current wording of 13.3.3.1.4 [over.ics.ref] paragraph 3 forbids binding an rvalue reference to an lvalue; this should be changed to allow binding an rvalue reference to a function lvalue.
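A sketch of the kind of case in question (invented for illustration):

void f();
void g(void (&)());    // #1: lvalue reference to function
void g(void (&&)());   // #2: rvalue reference to function

void h() {
  g(f);   // under the proposed change both candidates could bind to the function lvalue f,
          // but #1 should be preferred
}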
The Standard is not clear whether the following example is well-formed or not:
struct S {
  static void f(int);
  static void f(double);
};

S s;
void (*pf)(int) = &s.f;
According to 5.2.5 [expr.ref] paragraph 4 bullet 3, you do function overload resolution to determine whether x.f is a static or non-static member function. 5.3.1 [expr.unary.op] paragraph 6 says that you can only take the address of an overloaded function in a context that determines the overload to be chosen, and the initialization of a function pointer is such a context (13.4 [over.over] paragraph 1). The problem is that 13.4 [over.over] is phrased in terms of “an overloaded function name,” and this is a member access expression, not a name.
There is variability among implementations as to whether this example is accepted; some accept it as written, some only if the & is omitted, and some reject it in both forms.
Additional note (October, 2010):
A related question concerns an example like
struct S {
  static void g(int*) {}
  static void g(long) {}
} s;

void foo() {
  (&s.g)(0L);
}
Because the address occurs in a call context and not in one of the contexts mentioned in 13.4 [over.over] paragraph 1, the call expression in foo is presumably ill-formed. Contrast this with the similar example
void g1(int*) {}
void g1(long) {}

void foo1() {
  (&g1)(0L);
}
This call presumably is well-formed because 13.3.1.1 [over.match.call] applies to “the address of a set of overloaded functions.” (This was clearer in the wording prior to the resolution of issue 704: “...in this context using &F behaves the same as using the name F by itself.”) It's not clear that there's any reason to treat these two cases differently.
This question also bears on the original question of this issue, since the original wording of 13.3.1.1 [over.match.call] also described the case of an ordinary member function call like s.g(0L) as involving the “name” of the function, even though the postfix-expression is a member access expression and not a “name.” Perhaps the reference to “name” in 13.4 [over.over] should be similarly understood as applying to member access expressions?
Consider the following example:
struct NullClass {
  template<typename T> operator T () { return 0; }
};
int main() {
  NullClass n;
  n==5;   // #1
  return 0;
}
The comparison at #1 is, according to the current Standard, ambiguous. According to 13.6 [over.built] paragraph 12, the candidates for operator==(L, R) include functions “for every pair of promoted arithmetic types,” so L could be either int or long, and the conversion operator template will provide an exact match for either.
Some implementations unambiguously choose the int candidate. Perhaps the overload resolution rules could be tweaked to prefer candidates in which L and R are the same type?
(See also issue 545.)
According to 14 [temp] paragraph 5,
Except that a function template can be overloaded either by (non-template) functions with the same name or by other function templates with the same name (14.8.3 [temp.over]), a template name declared in namespace scope or in class scope shall be unique in that scope.

3.3.10 [basic.scope.hiding] paragraph 2 agrees that only functions, not function templates, can hide a class name declared in the same scope:
A class name (9.1 [class.name]) or enumeration name (7.2 [dcl.enum]) can be hidden by the name of an object, function, or enumerator declared in the same scope.

However, 3.3 [basic.scope] paragraph 4 treats functions and function templates together in this regard:
Given a set of declarations in a single declarative region, each of which specifies the same unqualified name,
- they shall all refer to the same entity, or all refer to functions and function templates; or
- exactly one declaration shall declare a class name or enumeration name that is not a typedef name and the other declarations shall all refer to the same object or enumerator, or all refer to functions and function templates; in this case the class name or enumeration name is hidden
John Spicer: You should be able to take an existing program and replace an existing function with a function template without breaking unrelated parts of the program. In addition, all of the compilers I tried allow this usage (EDG, Sun, egcs, Watcom, Microsoft, Borland). I would recommend that function templates be handled exactly like functions for purposes of name hiding.
Martin O'Riordan: I don't see any justification for extending the purview of what is decidedly a hack, just for the sake of consistency. In fact, I think we should go further and in the interest of consistency, we should deprecate the hack, scheduling its eventual removal from the C++ language standard.
The hack is there to allow old C programs and especially the 'stat.h' file to compile with minimum effort (also several other Posix and X headers). People changing such older programs have ample opportunity to "do it right". Indeed, if you are adding templates to an existing program, you should probably be placing your templates in a 'namespace', so the issue disappears anyway. The lookup rules should be able to provide the behaviour you need without further hacking.
According to 14.1 [temp.param] paragraph 11,
If a template-parameter of a class template has a default template-argument, each subsequent template-parameter shall either have a default template-argument supplied or be a template parameter pack. If a template-parameter of a primary class template is a template parameter pack, it shall be the last template-parameter. [Note: These are not requirements for function templates or class template partial specializations because template arguments can be deduced (14.8.2 [temp.deduct])...
Should the Standard forbid non-final parameter packs in cases where the declaration does not allow the template arguments to be deduced? For example,
template<typename... T, typename... U> void f() { }
template<typename... T, typename U> void g() { }
By analogy with typename, the keyword template used to indicate that a dependent name will be a template name should be optional in contexts where a type is required, e.g., base class lists. We could also consider member and parameter declarations.
This was suggested by issue 314.
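For example (invented for illustration), the template keyword is currently required in these positions even though only a type can appear there:

template<typename T>
struct D : T::template B<int> {          // base class list: a type is required here
  typename T::template B<int>::type m;   // likewise in this member declaration
};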
The Standard does not normatively define which > and >> tokens are to be taken as closing a template-argument-list; instead, 14.2 [temp.names] paragraph 3 uses the undefined and imprecise term “non-nested:”
When parsing a template-id, the first non-nested > is taken as the end of the template-argument-list rather than a greater-than operator. Similarly, the first non-nested >> is treated as two consecutive but distinct > tokens, the first of which is taken as the end of the template-argument-list and completes the template-id.
The (non-normative) footnote clarifies that
A > that encloses the type-id of a dynamic_cast, static_cast, reinterpret_cast or const_cast, or which encloses the template-arguments of a subsequent template-id, is considered nested for the purpose of this description.
Aside from the questionable wording of this footnote (e.g., in what sense does a single terminating character “enclose” anything, and is a nested template-id “subsequent?”) and the fact that it is non-normative, it does not provide a complete definition of what “nesting” is intended to mean. For example, is the first > in this putative template-id “nested” or not?
X<a ? b > c : d>
None of my compilers accept this, which surprised me a little. Is the base-to-derived member function conversion considered to be a runtime-only thing?
template <class D> struct B {
  template <class X> void f(X) {}
  template <class X, void (D::*)(X) = &B<D>::f<X> > struct row {};
};

struct D : B<D> {
  void g(int);
  row<int,&D::g> r1;
  row<char*> r2;
};
John Spicer: This is not among the permitted conversions listed in 14.3.
I'm not sure there is a terribly good reason for that. Some of the template argument rules for external entities were made conservatively because of concerns about issues of mangling template argument names.
David Abrahams: I'd really like to see that restriction loosened. It is a serious inconvenience because there appears to be no way to supply a usable default in this case. Zero would be an OK default if I could use the function pointer's equality to zero as a compile-time switch to choose an empty function implementation:
template <bool x> struct tag {};

template <class D> struct B {
  template <class X> void f(X) {}
  template <class X, void (D::*pmf)(X) = 0 > struct row {
    void h() { h(tag<(pmf == 0)>(), pmf); }
    void h(tag<1>, ...) {}
    void h(tag<0>, void (D::*q)(X)) { /*something*/ }
  };
};

struct D : B<D> {
  void g(int);
  row<int,&D::g> r1;
  row<char*> r2;
};
But there appears to be no way to get that effect either. The result is that you end up doing something like:
template <class X, void (D::*pmf)(X) = 0 > struct row {
  void h() { if (pmf) /*something*/ }
};
which invariably makes compilers warn that you're switching on a constant expression.
[Picked up by evolution group at October 2002 meeting.]
How are default template arguments handled with respect to template template parameters? Two separate questions have been raised:
template <class T, class U = int> class ARG { };
template <class X, template <class Y> class PARM>
void f(PARM<X>) { }   // specialization permitted?

void g() {
  ARG<int> x;   // actually ARG<int, int>
  f(x);         // does ARG (2 parms, 1 with default)
                // match PARM (1 parm)?
}

Template template parameters are deducible (14.8.2.5 [temp.deduct.type] paragraph 9), but 14.3.3 [temp.arg.template] does not specify how matching is done.
Jack Rouse: I implemented template template parameters assuming template signature matching is analogous to function type matching. This seems like the minimum reasonable implementation. The code in the example would not be accepted by this compiler. However, template default arguments are compile time entities so it seems reasonable to relax the matching rules to allow cases like the one in the example. But I would consider this to be an extension to the language.
Herb Sutter: An open issue in the LWG is that the standard doesn't explicitly permit or forbid implementations' adding additional template-parameters to those specified by the standard, and the LWG may be leaning toward explicitly permitting this. [Under this interpretation,] if the standard is ever modified to allow additional template-parameters, then writing "a template that takes a standard library template as a template template parameter" won't be just ugly because you have to mention the defaulted parameters; it would not be (portably) possible at all except possibly by defining entire families of overloaded templates to account for all the possible numbers of parameters vector<> (or anything else) might actually have. That seems unfortunate.
    template <template <class T, class U = int> class PARM>
    class C {
        PARM<int> pi;
    };
Jack Rouse: I decided they could not in the compiler I support. This continues the analogy with function type matching. Also, I did not see a strong need to allow default arguments in this context.
A class template used as a template template argument can have default template arguments from its declarations. How are the two sources of default arguments to be reconciled? The default arguments from the template template formal could override. But it could be confusing if a template-id using the argument template, ARG<int>, behaves differently from a template-id using the template formal name, FORMAL<int>.
Rationale (10/99): Template template parameters are intended to be handled analogously to function parameters. Thus the number of parameters in a template template argument must match the number of parameters in a template template parameter, regardless of whether any of those parameters have default arguments or not. Default arguments are allowed for the parameters of a template template parameter, and those default arguments alone will be considered in a specialization of the template template parameter within a template definition; any default arguments for the parameters of a template template argument are ignored.
Note (Mark Mitchell, February, 2006):
Perhaps it is already obvious to all, but it seems worth noting that this extension would change the meaning of conforming programs:
    struct Dense { static const unsigned int dim = 1; };

    template <template <typename> class View, typename Block>
    void operator+(float, View<Block> const&);

    template <typename Block, unsigned int Dim = Block::dim>
    struct Lvalue_proxy { operator float() const; };

    void test_1d(void) {
        Lvalue_proxy<Dense> p;
        float b;
        b + p;
    }
If Lvalue_proxy is allowed to bind to View, then the template operator+ will be used to perform addition; otherwise, Lvalue_proxy's implicit conversion to float, followed by the built-in addition on floats will be used.
Note (March, 2008):
The Evolution Working Group has accepted the intent of this issue and referred it to CWG for action (not for C++0x). See paper J16/07-0033 = WG21 N2173.
Notes from the June, 2008 meeting:
The CWG decided to take no action on this issue until an interested party produces a paper with analysis and a proposal.
The example in 14.4 [temp.type] paragraph 1 reads in significant part,
    template<template<class> class TT> struct X { };
    template<class> struct Y { };
    template<class T> using Z = Y<T>;
    X<Y> y;
    X<Z> z;
and says that y and z have the same type.
This would only be true if alias template Z were considered to be equivalent to class template Y. However, 14.5.7 [temp.alias] describes equivalence only for specializations of alias templates, not for the alias templates themselves. Either such rules should be specified, which could be tricky, or the example should be deleted.
Type matching rules aren't well-specified in the current Standard, but it seems reasonable to say that if a declaration uses decltype, its definition must do so as well. For example, the following should be ill-formed:
    template<class T, T* u> struct S {
        decltype(u) foo(T);
    };
    template<class T, T *u> T* S<T, u>::foo(T) { return nullptr; }
Should the Standard allow declarations of variadic templates or member functions of class templates where only an empty expansion would be well-formed? For example,
    template<typename ... T> struct A {
        void operator++(int, T... t);
    };
    template<typename ... T> union X : T... { };
    template<typename ... T> struct A : T..., T... { };
The Standard does not appear to specify clearly the effect of a partial specialization of a member template of a class template. For example:
    template<class T> struct B {
        template<class U> struct A {        // #1
            void h() {}
        };
        template<class U> struct A<U*> {    // #2
            void f() {}
        };
    };

    template<> template<class U> struct B<int>::A {    // #3
        void g() {}
    };

    void q(B<int>::A<char*>& p) {
        p.f();    // #4
    }
The explicit specialization at #3 replaces the primary member template #1 of B<int>; however, it is not clear whether the partial specialization #2 should be considered to apply to the explicitly-specialized member template of B<int> (thus allowing the call to p.f() at #4) or whether the partial specialization will be used only for specializations of B that are implicitly instantiated (meaning that #4 could call p.g() but not p.f()).
I get the following error diagnostic [from the EDG front end]:
    line 8: error: function template "example<T>::foo<R,A>(A)" has already been declared
          R foo(const A);
          ^

when compiling this piece of code:
    template<class T> struct example {
        template<class R, class A>    // 1st member template
        R foo(A);
        template<class R, class A>    // 2nd member template
        const R foo(A&);
        template<class R, class A>    // 3rd member template
        R foo(const A);
    };

    /* template<> template<> int example<char>::foo(int&); */

    int main() {
        int (example<char>::* pf)(int&) = &example<char>::foo;
    }
The implementation complains that
    template<class R, class A>    // 1st member template
    R foo(A);
    template<class R, class A>    // 3rd member template
    R foo(const A);

cannot be overloaded, and I don't see any reason for that, since it is function template specializations that are treated like ordinary non-template functions; that is, the transformation of a parameter-declaration-clause into the corresponding parameter-type-list is applied to specializations (when determining their type) and not to function templates.
What makes me think so is the contents of 14.5.6.1 [temp.over.link] and the following sentence from 14.8.2.1 [temp.deduct.call] "If P is a cv-qualified type, the top level cv-qualifiers of P are ignored for type deduction". If the transformation was to be applied to function templates, then there would be no reason for having that sentence in 14.8.2.1 [temp.deduct.call].
14.8.2.2 [temp.deduct.funcaddr], which my example is based upon, says nothing about ignoring the top level cv-qualifiers of the function parameters of the function template whose address is being taken.
As a result, I expect that template argument deduction will fail for the 2nd and 3rd member templates and that the 1st one will be used for the instantiation of the specialization.
Issue 1:
14.5.6.2 [temp.func.order] paragraph 2 says:
Given two overloaded function templates, whether one is more specialized than another can be determined by transforming each template in turn and using argument deduction (14.8.2 [temp.deduct]) to compare it to the other.

14.8.2 [temp.deduct] now has 4 subsections describing argument deduction in different situations. I think this paragraph should point to a subsection of 14.8.2 [temp.deduct].
Rationale:
This is not a defect; it is not necessary to pinpoint cross-references to this level of detail.
Issue 2:
14.5.6.2 [temp.func.order] paragraph 4 says:
Using the transformed function parameter list, perform argument deduction against the other function template. The transformed template is at least as specialized as the other if, and only if, the deduction succeeds and the deduced parameter types are an exact match (so the deduction does not rely on implicit conversions).

In "the deduced parameter types are an exact match", the term "exact match" does not make clear what happens when a type T is compared to the reference type T&. Is that an exact match?
Issue 3:
14.5.6.2 [temp.func.order] paragraph 5 says:
A template is more specialized than another if, and only if, it is at least as specialized as the other template and that template is not at least as specialized as the first.

What happens in this case:
    template<class T> void f(T, int);
    template<class T> void f(T, T);

    f(1, 1);    // presumably intended as a call

For the first function template, there is no type deduction for the second parameter. So the rules in this clause seem to imply that the second function template will be chosen.
Rationale:
This is not a defect; the standard unambiguously makes the above example ill-formed due to ambiguity.
This was split off from issue 214 at the April 2003 meeting.
Nathan Sidwell: John Spicer's proposed resolution does not make the following well-formed.
    template <typename T> int Foo (T const *) { return 1; }           // #1
    template <unsigned I> int Foo (char const (&)[I]) { return 2; }   // #2

    int main () {
        return Foo ("a") != 2;
    }
Both #1 and #2 can deduce the "a" argument, #1 deduces T as char and #2 deduces I as 2. However, neither is more specialized because the proposed rules do not have any array to pointer decay.
#1 is only deducible because of the rules in 14.8.2.1 [temp.deduct.call] paragraph 2 that decay array and function type arguments when the template parameter is not a reference. Given that such behaviour happens in deduction, I believe there should be equivalent behaviour during partial ordering. #2 should be resolved as more specialized than #1. The following alteration to the proposed resolution of DR214 will do that.
Insert before,
the following
For the example above, this change results in deducing 'T const *' against 'char const *' in one direction (which succeeds), and 'char [I]' against 'T const *' in the other (which fails).
John Spicer: I don't consider this a shortcoming of my proposed wording, as I don't think this is part of the current rules. In other words, the resolution of 214 might make it clearer how this case is handled (i.e., clearer that it is not allowed), but I don't believe it represents a change in the language.
I'm not necessarily opposed to such a change, but I think it should be reviewed by the core group as a related change and not a defect in the proposed resolution to 214.
Notes from the October 2003 meeting:
There was some sentiment that it would be desirable to have this case ordered, but we don't think it's worth spending the time to work on it now. If we look at some larger partial ordering changes at some point, we will consider this again.
14.5.6.2 [temp.func.order] paragraph 3 says,
To produce the transformed template, for each type, non-type, or template template parameter (including template parameter packs (14.5.3 [temp.variadic]) thereof) synthesize a unique type, value, or class template respectively and substitute it for each occurrence of that parameter in the function type of the template.
The characteristics of the synthesized entities and how they are determined are not specified. For example, members of a dependent type referred to in non-deduced contexts are not specified to exist, even though the transformed function type would be invalid in their absence.
Example 1:
    template<typename T, typename U> struct A;

    template<typename T>
    void foo(A<T, typename T::u> *) { }         // #1
    // synthetic T1 has member T1::u

    template <typename T>
    void foo(A<T, typename T::u::v> *) { }      // #2
    // synthetic T2 has member T2::u and member T2::u::v

    // T in #1 deduces to synthetic T2 in partial ordering; the deduced A for
    // the parameter is A<T2, T2::u> * -- this is not necessarily compatible
    // with A<T2, T2::u::v> * and it does not need to be. See Note 1. The
    // effect is that (in the call below) the compatibility of B::u and
    // B::u::v is respected.
    // T in #2 cannot be successfully deduced in partial ordering from
    // A<T1, T1::u> *; the invalid type T1::u::v will be formed when T1 is
    // substituted into non-deduced contexts.

    struct B {
        struct u { typedef u v; };
    };

    int main() {
        foo((A<B, B::u> *)0);    // calls #2
    }
Note 1: Template argument deduction is an attempt to match a P and a deduced A; however, template argument deduction is not specified to fail if the P and the deduced A are incompatible. This may occur in the presence of non-deduced contexts. Notwithstanding the parenthetical statement in 14.8.2.4 [temp.deduct.partial] paragraph 9, template argument deduction may succeed in determining a template argument for every template parameter while producing a deduced A that is not compatible with the corresponding P.
Example 2:
    template <typename T, typename U, typename V> struct A;

    template <typename T>
    void foo(A<T, struct T::u, struct T::u::u> *);    // #2.1
    // synthetic T1 has a member non-union class T1::u

    template <typename T, typename U>
    void foo(A<T, U, U> *);                           // #2.2
    // synthetic T2 and U2 have no required properties

    // T in #2.1 cannot be deduced in partial ordering from A<T2, U2, U2> *;
    // the invalid types T2::u and T2::u::u will be formed when T2 is
    // substituted in non-deduced contexts.
    // T and U in #2.2 deduce to, respectively, T1 and T1::u from
    // A<T1, T1::u, struct T1::u::u> *, unless struct T1::u::u does not refer
    // to the injected-class-name of the class T1::u (if that is possible).

    struct B {
        struct u { };
    };

    int main() {
        foo((A<B, B::u, struct B::u::u> *)0);    // calls #2.1
    }
It is, however, unclear to what extent an implementation will have to go to determine these minimal properties.
The specification for how to handle default arguments and ellipsis in partial ordering of function templates is confusing. 14.5.6.2 [temp.func.order] paragraph 5 currently reads,
The presence of unused ellipsis and default arguments has no effect on the partial ordering of function templates.
It is not clear what “unused” means in this context. According to the original issue resolution that resulted in this wording (N1053, item 6.55), the intent was that “When partial ordering of function templates containing a different number of parameters is done, only the common parameters are considered.” Presumably this would include parameters with default arguments if each function had such parameters in corresponding positions.
The wording needs to be revised to make this intent clear.
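The following is a minimal sketch of the intended reading, using hypothetical overloads: the defaulted second parameter of #1 is "unused" in the call below and should not affect partial ordering, so the comparison is made on the common first parameter only.

    template<class T> void h(T*, int = 0);   // #1
    template<class T> void h(T);             // #2

    void call(int* p) {
        h(p);    // both templates are viable; under the intended rule the
                 // defaulted parameter is ignored and #1, whose first
                 // parameter (T*) is more specialized, is selected
    }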
The standard prohibits a class template from having the same name as one of its template parameters (14.6.1 [temp.local] paragraph 4). This prohibits
    template <class X> class X;

for the reason that the template name would hide the parameter, and such hiding is in general prohibited.
Presumably, we should also prohibit
    template <template <class T> class T> struct A;

for the same reason.
Currently, members of nondependent base classes hide references to template parameters in the definition of a derived class template.
Consider the following example:
    class B {
        typedef void *It;    // (1)
        // ...
    };

    class M : B {};

    template<typename> struct X {};

    template<typename It> struct S    // (2)
      : M, X<It> {                    // (3)
        S(It, It);                    // (4)
        // ...
    };
As the C++ language currently stands, the name "It" in line (3) refers to the template parameter declared in line (2), but the name "It" in line (4) refers to the typedef in the private base class (declared in line (1)).
This situation is both unintuitive and a hindrance to sound software engineering. (See also the Usenet discussion at http://tinyurl.com/32q8d .) Among other things, it implies that the private section of a base class may change the meaning of the derived class, and (unlike other cases where such things happen) there is no way for the writer of the derived class to defend the code against such intrusion (e.g., by using a qualified name).
Changing this can break code that is valid today. However, such code would have to:
It has been suggested to make situations like these ill-formed. That solution is unattractive, however, because it still leaves the writer of a derived class template without defense against accidental name conflicts with base members. (Although at least the problem would be guaranteed to be caught at compile time.) Instead, since just about everyone's intuition agrees, I would like to see the rules changed to make class template parameters hide members of the same name in a base class.
See also issue 458.
Notes from the March 2004 meeting:
We have some sympathy for a change, but the current rules fall straightforwardly out of the lookup rules, so they're not “wrong.” Making private members invisible also would solve this problem. We'd be willing to look at a paper proposing that.
Additional discussion (April, 2005):
John Spicer: Base class members are more-or-less treated as members of the class, [so] it is only natural that the base [member] would hide the template parameter.
Daveed Vandevoorde: Are base class members really “more or less” members of the class from a lookup perspective? After all, derived class members can hide base class members of the same name. So there is some pretty definite boundary between those two sets of names. IMO, the template parameters should either sit between those two sets, or they should (for lookup purposes) be treated as members of the class they parameterize (I cannot think of a practical difference between those two formulations).
John Spicer: How is [hiding template parameters] different from the fact that namespace members can be hidden by private parts of a base class? The addition of int C to N::A breaks the code in namespace M in this example:
    namespace N {
        class A {
        private:
            int C;
        };
    }

    namespace M {
        typedef int C;
        class B : public N::A {
            void f() { C c; }
        };
    }
Daveed Vandevoorde: C++ has a mechanism in place to handle such situations: qualified names. There is no such mechanism in place for template parameters.
Nathan Myers: What I see as obviously incorrect ... is simply that a name defined right where I can see it, and directly attached to the textual scope of B's class body, is ignored in favor of something found in some other file. I don't care that C1 is defined in A, I have a C1 right here that I have chosen to use. If I want A::C1, I can say so.
I doubt you'll find any regular C++ coder who doesn't find the standard behavior bizarre. If the meaning of any code is changed by fixing this behavior, the overwhelming majority of cases will be mysterious bugs magically fixed.
John Spicer: I have not heard complaints that this is actually a cause of problems in real user code. Where is the evidence that the status quo is actually causing problems?
In this example, the T2 that is found is the one from the base class. I would argue that this is natural because base class members are found as part of the lookup in class B:
    struct A {
        typedef int T2;
    };

    template <class T2> struct B : public A {
        typedef int T1;
        T1 t1;
        T2 t2;
    };
This rule that base class members hide template parameters was formalized about a dozen years ago because it fell out of the principle that base class members should be found at the same stage of lookup as derived class members, and that to do otherwise would be surprising.
Gabriel Dos Reis: The bottom line is that:
Unless presented with real major programming problems the current rules exhibit, I do not think the simple rule “scopes nest” needs a change that silently mutates program meaning.
Mike Miller: The rationale for the current specification is really very simple:
That's it. Because template parameters are not members, they are hidden by member names (whether inherited or not). I don't find that “bizarre,” or even particularly surprising.
I believe these rules are straightforward and consistent, so I would be opposed to changing them. However, I am not unsympathetic toward Daveed's concern about name hijacking from base classes. How about a rule that would make a program ill-formed if a direct or inherited member hides a template parameter?
Unless this problem is a lot more prevalent than I've heard so far, I would not want to change the lookup rules; making this kind of collision a diagnosable error, however, would prevent hijacking without changing the lookup rules.
Erwin Unruh: I have a different approach that is consistent and changes the interpretation of the questionable code. At present lookup is done in this sequence:
If we change this order to
it is still consistent in that no lookup is placed between the base class and the derived class. However, it introduces another inconsistency: now scopes do not nest the same way as curly braces nest — but base classes are already inconsistent this way.
Nathan Myers: This looks entirely satisfactory. If even this seems like too big a change, it would suffice to say that finding a different name by this search order makes the program ill-formed. Of course, a compiler might issue only a portability warning in that case and use the name found Erwin's way, anyhow.
Gabriel Dos Reis: It is a simple fact, even without templates, that a writer of a derived class cannot protect himself against declaration changes in the base class.
Richard Corden: If a change is to be made, then making it ill-formed is better than just changing the lookup rules.
    struct B {
        typedef int T;
        virtual void bar (T const &);
    };

    template <typename T> struct D : public B {
        virtual void bar (T const &);
    };

    template class D<float>;
I think changing the semantics of the above code silently would result in very difficult-to-find problems.
Mike Miller: Another case that may need to be considered in deciding on Erwin's suggestion or the “ill-formed” alternative is the treatment of friend declarations described in 3.4.1 [basic.lookup.unqual] paragraph 10:
    struct A {
        typedef int T;
        void f(T);
    };

    template<typename T> struct B {
        friend void A::f(T);    // Currently T is A::T
    };
Notes from the October, 2005 meeting:
The CWG decided not to consider a change to the existing rules at this time without a paper exploring the issue in more detail.
Is the following example well-formed?
    template<class T> struct A {
        typedef int M;
        struct B {
            typedef void M;
            struct C;
        };
    };

    template<class T> struct A<T>::B::C : A<T> {
        M p[2];    // A<T>::M or A<T>::B::M?
    };
14.6.2 [temp.dep] paragraph 3 says the use of M should refer to A<T>::B::M because the base class A<T> is not searched because it's dependent. But in this case A<T> is also the current instantiation (14.6.2.1 [temp.dep.type]) so it seems like it should be searched.
In an example like
    void f(int, int, int);

    template<int ...N> void g() {
        f((N+N)...);
    }

    void h() {
        g<1, 2, 3>();
    }
the call to f needs to be dependent; however, the arguments are not type-dependent, so the criteria of 14.6.2 [temp.dep] paragraph 1 are not met. Presumably the specification needs to be updated so that an argument list containing a type-level pack expansion is dependent.
In 14.6.2.1 [temp.dep.type] paragraph 5 we have:
A name is a member of an unknown specialization if the name is a qualified-id in which the nested-name-specifier names a dependent type that is not the current instantiation.
So given:
    template<class T> struct A {
        struct B {
            struct C {
                A<T>::B::C f();
            };
        };
    };
it appears that the name A<T>::B::C should be taken as a member of an unknown specialization, because the WP refers to “the” current instantiation, implying that there can be at most one at any given time. At the declaration of f(), the current instantiation is C, so A<T>::B is not the current instantiation.
Would it be better to refer to “a known instantiation” instead of “the current instantiation?”
Mike Miller:
I agree that there is a problem here, but I don't think the “current instantiation” terminology needs to be replaced. By way of background, paragraph 1 makes it clear that A<T>::B “refers to” the current instantiation:
In the definition of a class template, a nested class of a class template, a member of a class template, or a member of a nested class of a class template, a name refers to the current instantiation if it is
the injected-class-name (9 [class]) of the class template or nested class,
in the definition of a primary class template, the name of the class template followed by the template argument list of the primary template (as described below) enclosed in <>,
in the definition of a nested class of a class template, the name of the nested class referenced as a member of the current instantiation...
A<T>::B satisfies bullet 3. Paragraph 4 says,
A name is a member of the current instantiation if it is
An unqualified name that, when looked up, refers to a member of a class template. [Note: this can only occur when looking up a name in a scope enclosed by the definition of a class template. —end note]
A qualified-id in which the nested-name-specifier refers to the current instantiation.
So clearly by paragraphs 1 and 4, A<T>::B::C is a member of the current instantiation. The problem is in the phrasing of paragraph 5, which incorrectly requires that the nested-name-specifier “be” the current instantiation rather than simply “referring to” the current instantiation, which would be the correct complement to paragraph 4. Perhaps paragraph 5 could simply be rephrased as, “...a dependent type and it is not a member of the current instantiation.”
(Paragraph 1 may require a bit more wordsmithing to make it truly recursive across multiple levels of nested classes; as it stands, it's not clear whether the name of a nested class of a nested class of a class template is covered or not.)
Consider the following example:
    void f(int*);
    void f(...);

    template <int N> void g() {
        f(N);
    }

    int main() {
        g<0>();
        g<1>();
    }
The call to f in g is not type-dependent, so the overload resolution must be done at definition time rather than at instantiation time. As a result, both of the calls to g will result in calls to f(...), i.e., N will not be a null pointer constant, even if the value of N is 0.
It would be most consistent to adopt a rule that a value-dependent expression can never be a null pointer constant, even in cases like
template <int N> void g() { int* p = N; }
This would always be ill-formed, even when N is 0.
John Spicer: It's clear that this treatment is required for overload resolution, but it seems too expansive given that there are other cases in which the value of a template parameter can affect the validity of the program, and an implementation is forbidden to issue a diagnostic on a template definition unless there are no possible valid specializations.
Notes from the July, 2009 meeting:
There was a strong consensus among the CWG that only the literal 0 should be considered a null pointer constant, not any arbitrary zero-valued constant expression as is currently specified.
The current wording of 14.6.4 [temp.dep.res] seems to assume that dependent names can only appear in the definition of a template:
In resolving dependent names, names from the following sources are considered:
Declarations that are visible at the point of definition of the template.
Declarations from namespaces associated with the types of the function arguments both from the instantiation context (14.6.4.1 [temp.point]) and from the definition context.
However, dependent names can occur in non-defining declarations of the template as well; for instance,
template<typename T> T foo(T, decltype(bar(T())));
bar needs to be looked up, even though there is no definition of foo in the translation unit.
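The following is a minimal sketch (with hypothetical names) of why the dependent name bar must be looked up even though foo is never defined in this translation unit: forming the declared type of foo for a use requires resolving bar via the lookup rules above, including argument-dependent lookup.

    namespace N {
        struct X {};
        int bar(X);          // intended to be found by argument-dependent lookup
    }

    template<typename T> T foo(T, decltype(bar(T())));

    void use(N::X x) {
        foo(x, 0);           // forming foo's type requires resolving bar(N::X())
    }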
Additional note (February, 2011):
The resolution of this issue can't simply replace the word “definition” with the word “declaration,” mutatis mutandis, because there can be multiple declarations in a translation unit (which isn't true of “the definition”). As a result, the issue was moved back to "open" status for further consideration.
14.7.2 [temp.explicit] defines an explicit instantiation as
Syntactically, that allows things like:
template int S<int>::i = 5, S<int>::j = 7;
which isn't what anyone actually expects. As far as I can tell, nothing in the standard explicitly forbids this, as written. Syntactically, this also allows:
template namespace N { void f(); }
although perhaps the surrounding context is enough to suggest that this is invalid.
Suggested resolution:
I think we should say:
[Steve Adamczyk: presumably, this should have template at the beginning.]
and then say that:
There are similar problems in 14.7.3 [temp.expl.spec]:
Here, I think we want:
with similar restrictions as above.
[Steve Adamczyk: This also needs to have template <> at the beginning, possibly repeated.]
According to 14.7.2 [temp.explicit] paragraph 10,
An entity that is the subject of an explicit instantiation declaration and that is also used in the translation unit shall be the subject of an explicit instantiation definition somewhere in the program; otherwise the program is ill-formed, no diagnostic required.
The term “used” is too vague and needs to be defined. In particular, “use” of a class template specialization as an incomplete type — to form a pointer, for instance — should not require the presence of an explicit instantiation definition elsewhere in the program.
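A hedged sketch (hypothetical names) of the kind of "use" that arguably should not require an explicit instantiation definition elsewhere in the program:

    template<class T> struct Widget { T value; };

    extern template struct Widget<int>;    // explicit instantiation declaration

    Widget<int>* make_handle();            // "uses" Widget<int> only to form a
                                           // pointer, i.e., as an incomplete type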
The note in paragraph 5 of 14.8.1 [temp.arg.explicit] makes clear that explicit template arguments cannot be supplied in invocations of constructors and conversion functions because they are called without using a name. However, there is nothing in the current wording of the Standard that makes declaring a constructor or conversion operator that is unusable because of nondeduced parameters (i.e., that would need to be specified explicitly) ill-formed. It would be a service to the programmer to diagnose this useless construct as early as possible.
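A minimal sketch of the useless construct described above: the constructor's template parameter cannot be deduced from the argument list, and explicit template arguments cannot be supplied to a constructor, so the constructor template can never be called.

    struct S {
        template<class T> S(int);    // T appears in no parameter and is not
                                     // deducible; no way to specify it explicitly
    };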
Nicolai Josuttis sent me an example like the following:
    template <typename RET, typename T1, typename T2>
    const RET& min (const T1& a, const T2& b) {
        return (a < b ? a : b);
    }

    template const int& min<int>(const int&, const int&);    // #1
    template const int& min(const int&, const int&);         // #2
Among the questions was whether explicit instantiation #2 is valid, where deduction is required to determine the type of RET.
The first thing I realized when researching this is that the standard does not really spell out the rules for deduction in declarative contexts (friend declarations, explicit specializations, and explicit instantiations). For explicit instantiations, 14.7.2 [temp.explicit] paragraph 2 does mention deduction, but it doesn't say which set of deduction rules from 14.8.2 [temp.deduct] should be applied.
Second, Nicolai pointed out that 14.7.2 [temp.explicit] paragraph 6 says
A trailing template-argument can be left unspecified in an explicit instantiation provided it can be deduced from the type of a function parameter (14.8.2 [temp.deduct]).
This prohibits cases like #2, but I believe this was not considered in the wording as there is no reason not to include the return type in the deduction process.
I think there may have been some confusion because the return type is excluded when doing deduction on a function call. But there are contexts where the return type is included in deduction, for example, when taking the address of a function template specialization.
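A short illustration of such a context: when taking the address of a function template specialization (14.8.2.2 [temp.deduct.funcaddr]), deduction is done against the target function type, which includes the return type.

    template<class R, class A> R convert(A);

    int (*p)(double) = &convert;    // deduces R = int and A = double from the
                                    // target type int (*)(double)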
Suggested resolution:
Andrei Iltchenko points out that the standard has no wording that defines how to determine which template is specialized by an explicit specialization of a function template. He suggests "template argument deduction in such cases proceeds in the same way as when taking the address of a function template, which is described in 14.8.2.2 [temp.deduct.funcaddr]."
John Spicer points out that the same problem exists for all similar declarations, i.e., friend declarations and explicit instantiation directives. Finding a corresponding placement operator delete may have a similar problem.
John Spicer: There are two aspects of "determining which template" is referred to by a declaration: determining the function template associated with the named specialization, and determining the values of the template arguments of the specialization.
    template <class T> void f(T);     // #1
    template <class T> void f(T*);    // #2
    template <> void f(int*);
In other words, which f is being specialized (#1 or #2)? And then, what are the deduced template arguments?
14.5.6.2 [temp.func.order] does say that partial ordering is done in contexts such as this. Is this sufficient, or do we need to say more about how the function template to be specialized is selected?
14.8.2 [temp.deduct] probably needs a new section to cover argument deduction for cases like this.
14.8.2 [temp.deduct] is all about function types, but these rules also apply, e.g., when matching a class template partial specialization. We should add a note stating that we could be doing substitution into the template-id for a class template partial specialization.
Additional note (August 2008):
According to 14.5.5.1 [temp.class.spec.match] paragraph 2, argument deduction is used to determine whether a given partial specialization matches a given argument list. However, there is nothing in 14.5.5.1 [temp.class.spec.match] nor in 14.8.2 [temp.deduct] and its subsections that describes exactly how argument deduction is to be performed in this case. It would seem that more than just a note is required to clarify this processing.
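A brief illustration of the deduction that partial specialization matching relies on: the arguments of the specialization's template-id play the role of P and the actual template argument list plays the role of A, even though no function types are involved.

    template<class T> struct A { };        // primary template
    template<class T> struct A<T*> { };    // partial specialization

    A<int*> a;    // deducing T = int from the argument list (int*) against the
                  // template-id A<T*> shows that the partial specialization matches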
Consider the following example:
    template <int> struct X {
        typedef int type;
    };
    template <class T> struct Y { };
    template <class T> struct Z {
        static int const value = Y<T>::value;
    };

    template <class T>
    typename X<Y<T>::value + Z<T>::value>::type f(T);
    int f(...);

    int main() {
        sizeof f(0);
    }
The problem here is that there is a combination of an invalid expression in the immediate context (Y<T>::value) and in the non-immediate context (within Z<T> when evaluating Z<T>::value). The Standard does not appear to state clearly whether this program is well-formed (because the error in the immediate context causes deduction failure) or ill-formed (because of the error in the non-immediate context).
Consider the following program:
    template <typename T> int ref (T&)                { return 0; }
    template <typename T> int ref (const T&)          { return 1; }
    template <typename T> int ref (const volatile T&) { return 2; }
    template <typename T> int ref (volatile T&)       { return 4; }

    template <typename T> int ptr (T*)                { return 0; }
    template <typename T> int ptr (const T*)          { return 8; }
    template <typename T> int ptr (const volatile T*) { return 16; }
    template <typename T> int ptr (volatile T*)       { return 32; }

    void foo() {}

    int main() {
        return ref(foo) + ptr(&foo);
    }
The Standard appears to specify that the value returned from main is 2. The reason for this result is that references and pointers are handled differently in template argument deduction.
For the reference case, 14.8.2.1 [temp.deduct.call] paragraph 3 says that “If P is a reference type, the type referred to by P is used for type deduction.” Because of issue 295, all four of the types for the ref function parameters are the same, with no cv-qualification; overload resolution does not find a best match among the parameters and thus the most-specialized function is selected.
For the pointer type, argument deduction does not get as far as forming a cv-qualified function type; instead, argument deduction fails in the cv-qualified cases because of the cv-qualification mismatch, and only the cv-unqualified version of ptr survives as a viable function.
I think the choice of ignoring cv-qualifiers in the reference case but not the pointer case is very troublesome. The reason is that when one considers function objects as function parameters, it introduces a semantic difference whether the function parameter is declared a reference or a pointer. In all other contexts, it does not matter: a function name decays to a pointer and the resulting semantics are the same.
The current partial ordering rules produce surprising results in the presence of reference collapsing.
Because partial ordering is currently based solely on the signatures of the function templates, it does not take into account that, in the following example, there is no difference between the two templates after substitution of the template type parameter (owing to reference collapsing).
Especially unsettling is that the allegedly "more specialized" template (#2) is not even a candidate for the first call: template argument deduction fails for it, despite the absence of non-deduced contexts.
    template <typename T> void foo(T&&);             // #1
    template <typename T> void foo(volatile T&&);    // #2

    int main(void) {
        const int x = 0;
        foo(x);                  // calls #1 with T = 'const int &'
        foo<const int &>(x);     // calls #2
    }
The current wording of 15.3 [except.handle] paragraph 16 is:
The object declared in an exception-declaration or, if the exception-declaration does not specify a name, a temporary (12.2 [class.temporary]) is copy-initialized (8.5 [dcl.init]) from the exception object. The object shall not have an abstract class type. The object is destroyed when the handler exits, after the destruction of any automatic objects initialized within the handler.
There are two problems with this. First, it's not clear what it means for the handler's “parameter” to be a temporary. This possibility is briefly mentioned in 12.2 [class.temporary], but the lifetime of such a temporary is not defined there; the discussion of lifetime is restricted to those temporaries that arise during the evaluation of an expression, and this is not such a case.
Second, this wording assumes that there will be an object to be destroyed and thus ignores the possibility that the exception-declaration declares a reference.
According to 15.3 [except.handle] paragraph 16,
The object declared in an exception-declaration or, if the exception-declaration does not specify a name, a temporary (12.2 [class.temporary]) is copy-initialized (8.5 [dcl.init]) from the exception object. The object shall not have an abstract class type. The object is destroyed when the handler exits, after the destruction of any automatic objects initialized within the handler.
This wording leaves unspecified how an exception-declaration that is a reference should be treated. For example, presumably a reference to an abstract class type should be permitted, but that is not specified. The treatment of ellipsis is also not clearly addressed.
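A sketch of the cases the wording leaves unaddressed, using a hypothetical class:

    struct Abstract {
        virtual void f() = 0;
        virtual ~Abstract();
    };

    void g() {
        try {
            // ...
        } catch (Abstract& a) {    // reference to an abstract class type:
            // ...                 // presumably permitted, but not covered by
        } catch (...) {            // the object-oriented wording above; the
            // ...                 // ellipsis declares neither an object nor a
        }                          // reference
    }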
It was tentatively agreed at the Santa Cruz meeting that exception specifications should fully participate in the type system. This change would address gaps in the current static checking of exception specifications such as
    void (*p)() throw(int);
    void (**pp)() throw() = &p;    // not currently an error
This is such a major change that it deserves to be a separate issue.
See also issues 25, 87, and 133.
A type used in an exception specification must be complete (15.4 [except.spec] paragraph 2). The resolution of issue 437 stated that a class type appearing in an exception specification inside its own member-specification is considered to be complete. Should this also apply to exception specifications in class templates instantiated because of a reference inside the member-specification of a class? For example,
    template<class T> struct X {
        void f() throw(T) {}
    };
    struct S {
        X<S> xs;
    };
Destructors that throw can easily cause programs to terminate, with no possible defense. Example: Given
struct XY { X x; Y y; };
Assume that X::~X() is the only destructor in the entire program that can throw. Assume further that Y construction is the only other operation in the whole program that can throw. Then XY cannot be used safely, in any context whatsoever, period — even simply declaring an XY object can crash the program:
    XY xy;    // construction attempt might terminate program:
              // 1. construct x -- succeeds
              // 2. construct y -- fails, throws exception
              // 3. clean up by destroying x -- fails, throws exception,
              //    but an exception is already active, so call
              //    std::terminate() (oops)
              // there is no defense

So it is highly dangerous to have even one destructor that could throw.
Suggested Resolution:
Fix the above problem in one of the following two ways. I prefer the first.
Fergus Henderson: I disagree. Code using XY may well be safe, if X::~X() only throws if std::uncaught_exception() is false.
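A minimal sketch of the defensive destructor Fergus describes, using the pre-C++17 std::uncaught_exception() interface:

    #include <exception>

    struct X {
        ~X() {
            if (!std::uncaught_exception()) {
                // no other exception is currently active, so reporting a
                // failure by throwing from here would not reach std::terminate()
            }
            // otherwise swallow the error rather than throw during unwinding
        }
    };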
I think the current exception handling scheme in C++ is certainly flawed, but the flaws are IMHO design flaws, not minor technical defects, and I don't think they can be solved by minor tweaks to the existing design. I think that at this point it is probably better to keep the standard stable, and learn to live with the existing flaws, rather than trying to solve them via TC.
Bjarne Stroustrup: I strongly prefer to have the call to std::terminate() be conforming. I see std::terminate() as a proper way to blow away "the current mess" and get to the next level of error handling. I do not want that escape to be non-conforming — that would imply that programs relying on error handling in which serious errors are handled by terminating a process (which happens to be a C++ program) in std::terminate() become non-conforming. In many systems, there are — and/or should be — error-handling and recovery mechanisms beyond what is offered by a single C++ program.
Andy Koenig: If we were to prohibit writing a destructor that can throw, how would I solve the following problem?
I want to write a class that does buffered output. Among the other properties of that class is that destroying an object of that class writes the last buffer on the output device before freeing memory.
What should my class do if writing that last buffer indicates a hardware output error? My user had the option to flush the last buffer explicitly before destroying the object, but didn't do so, and therefore did not anticipate such a problem. Unfortunately, the problem happened anyway. Should I be required to suppress this error indication anyway? In all cases?
Herb Sutter (June, 2007): IMO, it's fine to suppress it. The user had the option of flushing the buffer and thus being notified of the problem and chose not to use it. If the caller didn't flush, then likely the caller isn't ready for an exception from the destructor, either. You could also put an assert into the destructor that would trigger if flush() had not been called, to force callers to use the interface that would report the error.
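A hedged sketch (hypothetical class and member names) of the assert-based interface suggested above: failures are reported from flush(), never from the destructor.

    #include <cassert>

    class BufferedOutput {
        bool flushed_;
    public:
        BufferedOutput() : flushed_(false) {}
        void flush() {
            // write the last buffer; report hardware failures by throwing here
            flushed_ = true;
        }
        ~BufferedOutput() {
            assert(flushed_ && "flush() was not called before destruction");
            // the destructor itself does not throw
        }
    };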
In practice, I would rather throw an exception, even at the risk of crashing the program if we happen to be in the middle of stack unwinding. The reason is that the program would crash only if a hardware error occurred in the middle of cleaning up from some other error that was in the process of being handled. I would rather have such a bizarre coincidence cause a crash, which stands a chance of being diagnosed later, than be ignored entirely and leave the system in a state where the ignored error could cause other trouble later that is even harder to diagnose.
If I'm not allowed to throw an exception when I detect this problem, what are my options?
Herb Sutter: I understand that some people might feel that "a failed dtor during stack unwinding is preferable in certain cases" (e.g., when recovery can be done beyond the scope of the program), but the problem is "says who?" It is the application program that should be able to decide whether or not such semantics are correct for it, and the problem here is that with the status quo a program cannot defend itself against a std::terminate() — period. The lower-level code makes the decision for everyone. In the original example, the mere existence of an XY object puts at risk every program that uses it, whether std::terminate() makes sense for that program or not, and there is no way for a program to protect itself.
That the "it's okay if the process goes south should a rare combination of things happen" decision should be made by lower-level code (e.g., X dtor) for all apps that use it, and which doesn't even understand the context of any of the hundreds of apps that use it, just cannot be correct.
When a function throws an exception that is not in its exception-specification, std::unexpected() is called. According to 15.5.2 [except.unexpected] paragraph 2,
If [std::unexpected()] throws or rethrows an exception that the exception-specification does not allow then the following happens: If the exception-specification does not include the class std::bad_exception (18.8.2 [bad.exception]) then the function std::terminate() is called, otherwise the thrown exception is replaced by an implementation-defined object of the type std::bad_exception, and the search for another handler will continue at the call of the function whose exception-specification was violated.
The “replaced by” wording is imprecise and undefined. For example, does this mean that the destructor is called for the existing exception object, or is it simply abandoned? Is the replacement in situ, so that a pointer to the existing exception object will now point to the std::bad_exception object?
Mike Miller: The call to std::unexpected() is not described as analogous to invoking a handler, but if it were, that would resolve this question; it is clearly specified what happens to the previous exception object when a new exception is thrown from a handler (15.1 [except.throw] paragraph 4).
This approach would also clarify other questions that have been raised regarding the requirements for stack unwinding. For example, 15.5.1 [except.terminate] paragraph 2 says that
In the situation where no matching handler is found, it is implementation-defined whether or not the stack is unwound before std::terminate() is called.
This requirement could be viewed as in conflict with the statement in 15.5.2 [except.unexpected] paragraph 1 that
If a function with an exception-specification throws an exception that is not listed in the exception-specification, the function std::unexpected() is called (D.13 [exception.unexpected]) immediately after completing the stack unwinding for the former function.
If it is implementation-defined whether stack unwinding occurs before calling std::terminate() and std::unexpected() is called only after doing stack unwinding, does that mean that it is implementation-defined whether std::unexpected() is called if there is ultimately no handler found?
Again, if invoking std::unexpected() were viewed as essentially invoking a handler, the answer to this would be clear, because unwinding occurs before invoking a handler.
According to 16.1 [cpp.cond] paragraph 4,
The resulting tokens comprise the controlling constant expression which is evaluated according to the rules of 5.19 [expr.const] using arithmetic that has at least the ranges specified in 18.3 [support.limits], except that all signed and unsigned integer types act as if they have the same representation as, respectively, intmax_t or uintmax_t (_N3035_.18.4.2 [stdinth]). This includes interpreting character literals, which may involve converting escape sequences into execution character set members.
Ordinary character literals with a single c-char have the type char, which is neither a signed nor an unsigned integer type. Although 4.5 [conv.prom] paragraph 1 is clear that char values promote to int, regardless of whether the implementation treats char as having the values of signed char or unsigned char, 16.1 [cpp.cond] paragraph 4 isn't clear on whether character literals should be treated as signed or unsigned values. In C99, such literals have type int, so the question does not arise. If an implementation in which plain char has the values of unsigned char were to treat character literals as unsigned, an expression like '0'-'1' would thus have different values in C and C++, namely -1 in C and some large unsigned value in C++.
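An illustration of the possible divergence, on an implementation where plain char has the values of unsigned char and character literals are treated as unsigned in #if arithmetic:

    #if '0' - '1' < 0
        // C (where the literals have type int) evaluates '0' - '1' as -1 and
        // takes this branch
    #else
        // a C++ implementation treating the literals as unsigned would wrap the
        // subtraction to a large value and take this branch instead
    #endif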
It is not clear from the Standard what the result of the following example should be:
    #define NIL(xxx) xxx
    #define G_0(arg) NIL(G_1)(arg)
    #define G_1(arg) NIL(arg)

    G_0(42)
The relevant text from the Standard is found in 16.3.4 [cpp.rescan] paragraph 2:
If the name of the macro being replaced is found during this scan of the replacement list (not including the rest of the source file's preprocessing tokens), it is not replaced. Further, if any nested replacements encounter the name of the macro being replaced, it is not replaced. These nonreplaced macro name preprocessing tokens are no longer available for further replacement even if they are later (re)examined in contexts in which that macro name preprocessing token would otherwise have been replaced.
The sequence of expansion of G_0(42) is as follows:

    G_0(42)
    NIL(G_1)(42)
    G_1(42)
    NIL(42)
The question is whether the use of NIL in the last line of this sequence qualifies for non-replacement under the cited text. If it does, the result will be NIL(42). If it does not, the result will be simply 42.
The original intent of the J11 committee in this text was that the result should be 42, as demonstrated by the original pseudo-code description of the replacement algorithm provided by Dave Prosser, its author. The English description, however, omits some of the subtleties of the pseudo-code and thus arguably gives an incorrect answer for this case.
Suggested resolution (Mike Miller): Replace the cited paragraph with the following:
As long as the scan involves only preprocessing tokens from a given macro's replacement list, or tokens resulting from a replacement of those tokens, an occurrence of the macro's name will not result in further replacement, even if it is later (re)examined in contexts in which that macro name preprocessing token would otherwise have been replaced.
Once the scan reaches the preprocessing token following a macro's replacement list — including as part of the argument list for that or another macro — the macro's name is once again available for replacement. [Example:
    #define NIL(xxx) xxx
    #define G_0(arg) NIL(G_1)(arg)
    #define G_1(arg) NIL(arg)

    G_0(42)    // result is 42, not NIL(42)

The reason that NIL(42) is replaced is that (42) comes from outside the replacement list of NIL(G_1); hence the occurrence of NIL within the replacement list for NIL(G_1) (via the replacement of G_1(42)) is not marked as nonreplaceable. —end example]
(Note: The resolution of this issue must be coordinated with J11/WG14.)
Notes (via Tom Plum) from April, 2004 WG14 Meeting:
Back in the 1980's it was understood by several WG14 people that there were tiny differences between the "non-replacement" verbiage and the attempts to produce pseudo-code. The committee's decision was that no realistic programs "in the wild" would venture into this area, and trying to reduce the uncertainties is not worth the risk of changing conformance status of implementations or programs.
C99 is very clear that a #error directive causes a translation to fail: Clause 4 paragraph 4 says,
The implementation shall not successfully translate a preprocessing translation unit containing a #error preprocessing directive unless it is part of a group skipped by conditional inclusion.
C++, on the other hand, simply says that a #error directive “renders the program ill-formed” (16.5 [cpp.error]), and the only requirement for an ill-formed program is that a diagnostic be issued; the translation may continue and succeed. (Noted in passing: if this difference between C99 and C++ is addressed, it would be helpful for synchronization purposes in other contexts as well to introduce the term “preprocessing translation unit.”)
The specification of how the string-literal in a _Pragma operator is handled does not deal with the new kinds of string literals. 16.9 [cpp.pragma.op] says,
The string literal is destringized by deleting the L prefix, if present, deleting the leading and trailing double-quotes, replacing each escape sequence...
The various other prefixes should either be handled or prohibited.
During the discussion of issues 167 and 174, it became apparent that there was no consensus on the meaning of deprecation. Some thought that deprecating a feature reflected an intent to remove it from the language. Others viewed it more as an encouragement to programmers not to use certain constructs, even though they might be supported in perpetuity.
There is a formal-sounding definition of deprecation in Annex D [depr] paragraph 2:
deprecated is defined as: Normative for the current edition of the Standard, but not guaranteed to be part of the Standard in future revisions.

However, this definition would appear to say that any non-deprecated feature is "guaranteed to be part of the Standard in future revisions." It's not clear that that implication was intended, so this definition may need to be amended.
This issue is intended to provide an avenue for discussing and resolving those questions, after which the original issues may be reopened if that is deemed desirable.