Doc. no. N2684=08-0194
Date: 2008-06-29
Project: Programming Language C++
Reply to: Howard Hinnant <howard.hinnant@gmail.com>

C++ Standard Library Active Issues List (Revision R57)

Reference ISO/IEC IS 14882:1998(E)

The purpose of this document is to record the status of issues which have come before the Library Working Group (LWG) of the ANSI (J16) and ISO (WG21) C++ Standards Committee. Issues represent potential defects in the ISO/IEC IS 14882:1998(E) document. Issues are not to be used to request new features.

This document contains only library issues which are actively being considered by the Library Working Group. That is, issues which have a status of New, Open, Ready, and Review. See Library Defect Reports List for issues considered defects and Library Closed Issues List for issues considered closed.

The issues in these lists are not necessarily formal ISO Defect Reports (DR's). While some issues will eventually be elevated to official Defect Report status, other issues will be disposed of in other ways. See Issue Status.

This document is in an experimental format designed for both viewing via a world-wide web browser and hard-copy printing. It is available as an HTML file for browsing or PDF file for printing.

Prior to Revision 14, library issues lists existed in two slightly different versions; a Committee Version and a Public Version. Beginning with Revision 14 the two versions were combined into a single version.

This document includes [bracketed italicized notes] as a reminder to the LWG of current progress on issues. Such notes are strictly unofficial and should be read with caution as they may be incomplete or incorrect. Be aware that LWG support for a particular resolution can quickly change if new viewpoints or killer examples are presented in subsequent discussions.

For the most current official version of this document see http://www.open-std.org/jtc1/sc22/wg21/. Requests for further information about this document should include the document number above, reference ISO/IEC 14882:1998(E), and be submitted to Information Technology Industry Council (ITI), 1250 Eye Street NW, Washington, DC 20005.

Public information as to how to obtain a copy of the C++ Standard, join the standards committee, submit an issue, or comment on an issue can be found in the comp.std.c++ FAQ. Public discussion of C++ Standard related issues occurs on news:comp.std.c++.

For committee members, files available on the committee's private web site include the HTML version of the Standard itself. HTML hyperlinks from this issues list to those files will only work for committee members who have downloaded them into the same disk directory as the issues list files.

Revision History

Issue Status

New - The issue has not yet been reviewed by the LWG. Any Proposed Resolution is purely a suggestion from the issue submitter, and should not be construed as the view of LWG.

Open - The LWG has discussed the issue but is not yet ready to move the issue forward. There are several possible reasons for open status:

A Proposed Resolution for an open issue is still not to be construed as the view of the LWG. Comments on the current state of discussions are often given at the end of open issues in an italic font. Such comments are for information only and should not be given undue importance.

Dup - The LWG has reached consensus that the issue is a duplicate of another issue, and will not be further dealt with. A Rationale identifies the duplicated issue's issue number.

NAD - The LWG has reached consensus that the issue is not a defect in the Standard.

NAD Editorial - The LWG has reached consensus that the issue can be handled either editorially or by a paper (usually linked to in the rationale).

NAD Future - In addition to the regular status, the LWG believes that this issue should be revisited at the next revision of the standard.

Review - Exact wording of a Proposed Resolution is now available for review on an issue for which the LWG previously reached informal consensus.

Tentatively Ready - The issue has been reviewed online, but not in a meeting, and some support has been formed for the proposed resolution. Tentatively Ready issues may be moved to Ready and forwarded to the full committee within the same meeting. Unlike Ready issues, they will be reviewed in subcommittee prior to forwarding to the full committee.

Ready - The LWG has reached consensus that the issue is a defect in the Standard, the Proposed Resolution is correct, and the issue is ready to forward to the full committee for further action as a Defect Report (DR).

DR - (Defect Report) - The full J16 committee has voted to forward the issue to the Project Editor to be processed as a Potential Defect Report. The Project Editor reviews the issue, and then forwards it to the WG21 Convenor, who returns it to the full committee for final disposition. This issues list accords the status of DR to all these Defect Reports regardless of where they are in that process.

TC - (Technical Corrigenda) - The full WG21 committee has voted to accept the Defect Report's Proposed Resolution as a Technical Corrigendum. Action on this issue is thus complete and no further action is possible under ISO rules.

TRDec - (Decimal TR defect) - The LWG has voted to accept the Defect Report's Proposed Resolution into the Decimal TR. Action on this issue is thus complete and no further action is expected.

WP - (Working Paper) - The proposed resolution has not been accepted as a Technical Corrigendum, but the full WG21 committee has voted to apply the Defect Report's Proposed Resolution to the working paper.

Pending - This is a status qualifier. When prepended to a status, it indicates that the issue has been processed by the committee and a decision has been made to move the issue to the associated unqualified status. However, for logistical reasons the indicated outcome of the issue has not yet appeared in the latest working paper.

Issues are always given the status of New when they first appear on the issues list. They may progress to Open or Review while the LWG is actively working on them. When the LWG has reached consensus on the disposition of an issue, the status will then change to Dup, NAD, or Ready as appropriate. Once the full J16 committee votes to forward Ready issues to the Project Editor, they are given the status of Defect Report (DR). These in turn may become the basis for Technical Corrigenda (TC), or are closed without action other than a Record of Response (RR). The intent of this LWG process is that only issues which are truly defects in the Standard move to the formal ISO DR status.

Active Issues


23. Num_get overflow result

Section: 22.2.2.1.2 [facet.num.get.virtuals] Status: Review Submitter: Nathan Myers Date: 1998-08-06

View other active issues in [facet.num.get.virtuals].

View all other issues in [facet.num.get.virtuals].

View all issues with Review status.

Discussion:

The current description of numeric input does not account for the possibility of overflow. This is an implicit result of changing the description to rely on the definition of scanf() (which fails to report overflow), and conflicts with the documented behavior of traditional and current implementations.

Users expect, when reading a character sequence that results in a value unrepresentable in the specified type, to have an error reported. The standard as written does not permit this.

Further comments from Dietmar:

I don't feel comfortable with the proposed resolution to issue 23: It kind of simplifies the issue too much. Here is what is going on:

Currently, the behavior of numeric overflow is rather counterintuitive and hard to trace, so I will describe it briefly:

Further discussion from Redmond:

The basic problem is that we've defined our behavior, including our error-reporting behavior, in terms of C90. However, C90's method of reporting overflow in scanf is not technically an "input error". The strto_* functions are more precise.

There was general consensus that failbit should be set upon overflow. We considered three options based on this:

  1. Set failbit upon conversion error (including overflow), and don't store any value.
  2. Set failbit upon conversion error, and also set errno to indicate the precise nature of the error.
  3. Set failbit upon conversion error. If the error was due to overflow, store +-numeric_limits<T>::max() as an overflow indication.

Straw poll: (1) 5; (2) 0; (3) 8.

Discussed at Lillehammer. General outline of what we want the solution to look like: we want to say that overflow is an error, and provide a way to distinguish overflow from other kinds of errors. Choose candidate field the same way scanf does, but don't describe the rest of the process in terms of format. If a finite input field is too large (positive or negative) to be represented as a finite value, then set failbit and assign the nearest representable value. Bill will provide wording.
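For illustration only (this is a sketch of the Lillehammer direction described above, not proposed wording; implementations differ until a resolution is applied), the intended behavior would look roughly like this:

#include <iostream>
#include <limits>
#include <sstream>

// Sketch: a field that overflows the target type sets failbit and stores
// the nearest representable value. A current implementation may instead
// store 0 or leave val untouched.
int main() {
    std::istringstream in("99999999999999999999");   // exceeds even a 64-bit int
    int value = 0;
    in >> value;

    if (in.fail())
        std::cout << "overflow: value == " << value
                  << ", numeric_limits<int>::max() == "
                  << std::numeric_limits<int>::max() << '\n';
    return 0;
}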

Discussed at Toronto: N2327 is in alignment with the direction we wanted to go with in Lillehammer. Bill to work on.

Proposed resolution:

Change 22.2.2.1.2 [facet.num.get.virtuals], end of p3:

Stage 3: The result of stage 2 processing can be one of The sequence of chars accumulated in stage 2 (the field) is converted to a numeric value by the rules of one of the functions declared in the header <cstdlib>:

The numeric value to be stored can be one of:

The resultant numeric value is stored in val.

Change 22.2.2.1.2 [facet.num.get.virtuals], p6-p7:

iter_type do_get(iter_type in, iter_type end, ios_base& str, 
                 ios_base::iostate& err, bool& val) const;

-6- Effects: If (str.flags()&ios_base::boolalpha)==0 then input proceeds as it would for a long except that if a value is being stored into val, the value is determined according to the following: If the value to be stored is 0 then false is stored. If the value is 1 then true is stored. Otherwise err|=ios_base::failbit is performed and no value true is stored. and ios_base::failbit is assigned to err.

-7- Otherwise target sequences are determined "as if" by calling the members falsename() and truename() of the facet obtained by use_facet<numpunct<charT> >(str.getloc()). Successive characters in the range [in,end) (see 23.1.1) are obtained and matched against corresponding positions in the target sequences only as necessary to identify a unique match. The input iterator in is compared to end only when necessary to obtain a character. If and only if a target sequence is uniquely matched, val is set to the corresponding value. Otherwise false is stored and ios_base::failbit is assigned to err.


96. Vector<bool> is not a container

Section: 23.2.6 [vector] Status: Open Submitter: AFNOR Date: 1998-10-07

View all other issues in [vector].

View all issues with Open status.

Discussion:

vector<bool> is not a container as its reference and pointer types are not references and pointers.

Also it forces everyone to have a space optimization instead of a speed one.
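A minimal illustration of the first point (not part of the issue as submitted): because vector<bool>::reference is a proxy type rather than bool&, code that is valid for every other vector<T> is ill-formed for vector<bool>.

#include <vector>

int main() {
    std::vector<int>  vi(10);
    std::vector<bool> vb(10);

    int& ri = vi[0];      // fine: operator[] really returns int&
    // bool& rb = vb[0];  // ill-formed: vb[0] is vector<bool>::reference, a proxy object
    // bool* pb = &vb[0]; // ill-formed as well: there is no bool object to point to
    (void)ri;
    return 0;
}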

See also: 99-0008 == N1185 Vector<bool> is Nonconforming, Forces Optimization Choice.

[In Santa Cruz the LWG felt that this was Not A Defect.]

[In Dublin many present felt that failure to meet Container requirements was a defect. There was disagreement as to whether or not the optimization requirements constituted a defect.]

[The LWG looked at the following resolutions in some detail:
     * Not A Defect.
     * Add a note explaining that vector<bool> does not meet Container requirements.
     * Remove vector<bool>.
     * Add a new category of container requirements which vector<bool> would meet.
     * Rename vector<bool>.

No alternative had strong, widespread support and every alternative had at least one "over my dead body" response.

There was also mention of a transition scheme something like (1) add vector_bool and deprecate vector<bool> in the next standard. (2) Remove vector<bool> in the following standard.]

[Modifying container requirements to permit returning proxies (thus allowing container requirements conforming vector<bool>) was also discussed.]

[It was also noted that there is a partial but ugly workaround in that vector<bool> may be further specialized with a custom allocator.]

[Kona: Herb Sutter presented his paper J16/99-0035==WG21/N1211, vector<bool>: More Problems, Better Solutions. Much discussion of a two step approach: a) deprecate, b) provide replacement under a new name. LWG straw vote on that: 1-favor, 11-could live with, 2-over my dead body. This resolution was mentioned in the LWG report to the full committee, where several additional committee members indicated over-my-dead-body positions.]

Discussed at Lillehammer. General agreement that we should deprecate vector<bool> and introduce this functionality under a different name, e.g. bit_vector. This might make it possible to remove the vector<bool> specialization in the standard that comes after C++0x. There was also a suggestion that in C++0x we could additionally say that it is implementation-defined whether vector<bool> refers to the specialization or to the primary template, but there wasn't general agreement that this was a good idea.

We need a paper for the new bit_vector class.

Proposed resolution:

We now have: N2050 and N2160.

[ Batavia: The LWG feels we need something closer to SGI's bitvector to ease migration from vector<bool>, although some of the functionality from N2050 could well be used in such a template. The concern is easing the API migration for those users who want to continue using a bit-packed container. Alan and Beman to work. ]


128. Need open_mode() function for file stream, string streams, file buffers, and string buffers

Section: 27.7 [string.streams], 27.8 [file.streams] Status: Open Submitter: Angelika Langer Date: 1999-02-22

View all other issues in [string.streams].

View all issues with Open status.

Discussion:

The following question came from Thorsten Herlemann:

You can set a mode when constructing or opening a file-stream or filebuf, e.g. ios::in, ios::out, ios::binary, ... But how can I get that mode later on, e.g. in my own operator << or operator >> or when I want to check whether a file-stream or file-buffer object passed as a parameter is opened for input or output or binary? Is there no possibility? Is this a design error in the standard C++ library?

It is indeed impossible to find out what a stream's or stream buffer's open mode is, and without that knowledge you don't know how certain operations behave. Just think of the append mode.

Both streams and stream buffers should have a mode() function that returns the current open mode setting.

[ post Bellevue: Alisdair requested to re-Open. ]

Proposed resolution:

For stream buffers, add a function to the base class as a non-virtual function qualified as const to 27.5.2 [streambuf]:

    openmode mode() const;

    Returns the current open mode.

With streams, I'm not sure what to suggest. In principle, the mode could already be returned by ios_base, but the mode is only initialized for file and string stream objects, unless I'm overlooking anything. For this reason it should be added to the most derived stream classes. Alternatively, it could be added to basic_ios and would be default initialized in basic_ios<>::init().
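A usage sketch of the proposed accessor (mode() does not exist in the current standard; it is the hypothetical member this issue asks for, and the file name is illustrative):

#include <fstream>
#include <ios>

// Hypothetical: relies on the streambuf mode() accessor proposed above.
bool opened_for_append(const std::filebuf& fb) {
    return (fb.mode() & std::ios_base::app) != 0;
}

int main() {
    std::ofstream out("log.txt", std::ios_base::out | std::ios_base::app);
    bool app = opened_for_append(*out.rdbuf());   // ofstream::rdbuf() returns a filebuf*
    (void)app;
    return 0;
}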

Rationale:

This might be an interesting extension for some future revision of the standard, but it is not a defect in the current standard. The Proposed Resolution is retained for future reference.


180. Container member iterator arguments constness has unintended consequences

Section: 21.3 [basic.string] Status: Ready Submitter: Dave Abrahams Date: 1999-07-01

View other active issues in [basic.string].

View all other issues in [basic.string].

View all issues with Ready status.

Discussion:

It is the constness of the container which should control whether it can be modified through a member function such as erase(), not the constness of the iterators. The iterators only serve to give positioning information.

Here's a simple and typical example problem which is currently very difficult or impossible to solve without the change proposed below.

Wrap a standard container C in a class W which allows clients to find and read (but not modify) a subrange of (C.begin(), C.end()]. The only modification clients are allowed to make to elements in this subrange is to erase them from C through the use of a member function of W.
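A sketch of such a wrapper, using basic_string as the wrapped container C (the class and member names are illustrative):

#include <string>

class W {
public:
    typedef std::string::const_iterator position;

    // Clients can only see the subrange through const_iterators.
    position begin_range() const { return s_.begin(); }
    position end_range()   const { return s_.end(); }

    // With the current signature erase(iterator) this call is ill-formed;
    // the erase(const_iterator) overload proposed below makes it valid.
    void remove(position p) { s_.erase(p); }

private:
    std::string s_;
};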

[ post Bellevue, Alisdair adds: ]

This issue was implemented by N2350 for everything but basic_string.

Note that the specific example in this issue (basic_string) is the one place we forgot to amend in N2350, so we might open this issue for that single container?

[ Sophia Antipolis: ]

This was a fix that was intended for all standard library containers, and has been done for other containers, but string was missed.

The wording was updated.

We did not make the change in replace, because that change would affect the implementation, since the string may be written into. This is an issue that should be taken up by concepts.

We note that the supplied wording addresses the initializer list provided in N2679.

Proposed resolution:

Update the following signature in the basic_string class template definition in 21.3 [basic.string], p5:

namespace std {
  template<class charT, class traits = char_traits<charT>,
    class Allocator = allocator<charT> >
  class basic_string {

    ...

    iterator insert(const_iterator p, charT c);
    void insert(const_iterator p, size_type n, charT c);
    template<class InputIterator>
      void insert(const_iterator p, InputIterator first, InputIterator last);
    void insert(const_iterator p, initializer_list<charT>);

    ...

    iterator erase(const_iterator const_position);
    iterator erase(const_iterator first, const_iterator last);

    ...

  };
}

Update the following signatures in 21.3.6.4 [string::insert]:

iterator insert(const_iterator p, charT c);
void insert(const_iterator p, size_type n, charT c);
template<class InputIterator>
  void insert(const_iterator p, InputIterator first, InputIterator last);
void insert(const_iterator p, initializer_list<charT>);

Update the following signatures in 21.3.6.5 [string::erase]:

iterator erase(const_iterator const_position);
iterator erase(const_iterator first, const_iterator last);

Rationale:

The issue was discussed at length. It was generally agreed that 1) There is no major technical argument against the change (although there is a minor argument that some obscure programs may break), and 2) Such a change would not break const correctness. The concerns about making the change were 1) it is user detectable (although only in boundary cases), 2) it changes a large number of signatures, and 3) it seems more of a design issue than an out-and-out defect.

The LWG believes that this issue should be considered as part of a general review of const issues for the next revision of the standard. Also see issue 200.


190. min() and max() functions should be std::binary_functions

Section: 25.3.7 [alg.min.max] Status: Open Submitter: Mark Rintoul Date: 1999-08-26

View all other issues in [alg.min.max].

View all issues with Open status.

Discussion:

Both std::min and std::max are defined as template functions. This is very different than the definition of std::plus (and similar structs) which are defined as function objects which inherit std::binary_function.

This lack of inheritance leaves std::min and std::max somewhat useless in standard library algorithms which require a function object that inherits std::binary_function.
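A sketch of the workaround this currently forces on users (the wrapper name is illustrative): std::min has to be wrapped in a hand-written function object before it can be used where an adaptable binary function is required.

#include <algorithm>
#include <functional>
#include <vector>

// Illustrative wrapper: std::min is a function template, so it lacks the
// first_argument_type/second_argument_type/result_type typedefs that
// std::binary_function supplies and that adapters such as std::bind1st need.
template <class T>
struct min_fn : std::binary_function<T, T, T> {
    T operator()(const T& a, const T& b) const { return std::min(a, b); }
};

int main() {
    std::vector<int> v;
    v.push_back(3); v.push_back(7); v.push_back(2);

    // Clamp every element to at most 5. Passing std::min<int> directly to
    // bind1st would not compile because the required typedefs are missing.
    std::transform(v.begin(), v.end(), v.begin(),
                   std::bind1st(min_fn<int>(), 5));
    return 0;
}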

[ post Bellevue: Alisdair requested to re-Open. ]

Rationale:

Although perhaps an unfortunate design decision, the omission is not a defect in the current standard.  A future standard may wish to consider additional function objects.


255. Why do basic_streambuf<>::pbump() and gbump() take an int?

Section: 27.5.2 [streambuf] Status: Open Submitter: Martin Sebor Date: 2000-08-12

View all other issues in [streambuf].

View all issues with Open status.

Discussion:

The basic_streambuf members gbump() and pbump() are specified to take an int argument. This requirement prevents the functions from effectively manipulating buffers larger than std::numeric_limits<int>::max() characters. It also makes the common use case for these functions somewhat difficult as many compilers will issue a warning when an argument of type larger than int (such as ptrdiff_t on LLP64 architectures) is passed to either of the functions. Since it's often the result of the subtraction of two pointers that is passed to the functions, a cast is necessary to silence such warnings. Finally, the usage of a native type in the functions' signatures is inconsistent with other member functions (such as sgetn() and sputn()) that manipulate the underlying character buffer. Those functions take a streamsize argument.
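A sketch of the usage pattern described above (the derived buffer is illustrative): pointer subtraction yields a ptrdiff_t, so passing the result to pbump() draws a warning on platforms where ptrdiff_t is wider than int unless it is cast.

#include <cstddef>
#include <streambuf>

class membuf : public std::streambuf {
public:
    membuf(char* begin, char* end) { setp(begin, end); }

    void advance_put(char* new_pos) {
        std::ptrdiff_t n = new_pos - pptr();   // pointer subtraction: ptrdiff_t
        pbump(static_cast<int>(n));            // cast needed today; a streamsize
                                               // parameter would avoid it
    }
};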

Proposed resolution:

Change the signatures of these functions in the synopsis of template class basic_streambuf (27.5.2) and in their descriptions (27.5.2.3.1, p4 and 27.5.2.3.2, p4) to take a streamsize argument.

Although this change has the potential of changing the ABI of the library, the change will affect only platforms where int is different than the definition of streamsize. However, since both functions are typically inline (they are on all known implementations), even on such platforms the change will not affect any user code unless it explicitly relies on the existing type of the functions (e.g., by taking their address). Such a possibility is IMO quite remote.

Alternate Suggestion from Howard Hinnant, c++std-lib-7780:

This is something of a nit, but I'm wondering if streamoff wouldn't be a better choice than streamsize. The argument to pbump and gbump MUST be signed. But the standard has this to say about streamsize (27.4.1/2/Footnote):

[Footnote: streamsize is used in most places where ISO C would use size_t. Most of the uses of streamsize could use size_t, except for the strstreambuf constructors, which require negative values. It should probably be the signed type corresponding to size_t (which is what Posix.2 calls ssize_t). --- end footnote]

This seems a little weak for the argument to pbump and gbump. Should we ever really get rid of strstream, this footnote might go with it, along with the reason to make streamsize signed.

Rationale:

The LWG believes this change is too big for now. We may wish to reconsider this for a future revision of the standard. One possibility is overloading pbump, rather than changing the signature.

[ 2006-05-04: Reopened at the request of Chris (Krzysztof Żelechowski). ]


290. Requirements to for_each and its function object

Section: 25.1.1 [alg.foreach] Status: Open Submitter: Angelika Langer Date: 2001-01-03

View all other issues in [alg.foreach].

View all issues with Open status.

Discussion:

The specification of the for_each algorithm does not have a "Requires" section, which means that there are no restrictions imposed on the function object whatsoever. In essence it means that I can provide any function object with arbitrary side effects and I can still expect a predictable result. In particular I can expect that the function object is applied exactly last - first times, which is promised in the "Complexity" section.

I don't see how any implementation can give such a guarantee without imposing requirements on the function object.

Just as an example: consider a function object that removes elements from the input sequence. In that case, what does the complexity guarantee (applies f exactly last - first times) mean?
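A sketch of exactly that foot-gun (the names are illustrative); the function object invalidates the iterators for_each is traversing, so the stated complexity guarantee cannot possibly be honored:

#include <algorithm>
#include <list>

struct eraser {
    std::list<int>* seq;
    explicit eraser(std::list<int>& s) : seq(&s) {}
    void operator()(int x) {
        if (x % 2 == 0)
            seq->remove(x);   // modifies the input sequence mid-traversal
    }
};

int main() {
    std::list<int> data;
    for (int i = 0; i < 10; ++i) data.push_back(i);
    // Undefined behavior in practice: removing the element currently being
    // visited invalidates the iterator for_each is about to increment.
    std::for_each(data.begin(), data.end(), eraser(data));
    return 0;
}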

One can argue that this is obviously a nonsensical application and a theoretical case, which unfortunately it isn't. I have seen programmers shooting themselves in the foot this way, and they did not understand that there are restrictions even if the description of the algorithm does not say so.

[Lillehammer: This is more general than for_each. We don't want the function object in transform invalidating iterators either. There should be a note somewhere in clause 17 (17, not 25) saying that user code operating on a range may not invalidate iterators unless otherwise specified. Bill will provide wording.]

Proposed resolution:


299. Incorrect return types for iterator dereference

Section: 24.1.4 [bidirectional.iterators], 24.1.5 [random.access.iterators] Status: Open Submitter: John Potter Date: 2001-01-22

View all other issues in [bidirectional.iterators].

View all issues with Open status.

Discussion:

In section 24.1.4 [bidirectional.iterators], Table 75 gives the return type of *r-- as convertible to T. This is not consistent with Table 74 which gives the return type of *r++ as T&. *r++ = t is valid while *r-- = t is invalid.

In section 24.1.5 [random.access.iterators], Table 76 gives the return type of a[n] as convertible to T. This is not consistent with the semantics of *(a + n) which returns T& by Table 74. *(a + n) = t is valid while a[n] = t is invalid.

Discussion from the Copenhagen meeting: the first part is uncontroversial. The second part, operator[] for Random Access Iterators, requires more thought. There are reasonable arguments on both sides. Return by value from operator[] enables some potentially useful iterators, e.g. a random access "iota iterator" (a.k.a "counting iterator" or "int iterator"). There isn't any obvious way to do this with return-by-reference, since the reference would be to a temporary. On the other hand, reverse_iterator takes an arbitrary Random Access Iterator as template argument, and its operator[] returns by reference. If we decided that the return type in Table 76 was correct, we would have to change reverse_iterator. This change would probably affect user code.

History: the contradiction between reverse_iterator and the Random Access Iterator requirements has been present from an early stage. In both the STL proposal adopted by the committee (N0527==94-0140) and the STL technical report (HPL-95-11 (R.1), by Stepanov and Lee), the Random Access Iterator requirements say that operator[]'s return value is "convertible to T". In N0527 reverse_iterator's operator[] returns by value, but in HPL-95-11 (R.1), and in the STL implementation that HP released to the public, reverse_iterator's operator[] returns by reference. In 1995, the standard was amended to reflect the contents of HPL-95-11 (R.1). The original intent for operator[] is unclear.

In the long term it may be desirable to add more fine-grained iterator requirements, so that access method and traversal strategy can be decoupled. (See "Improved Iterator Categories and Requirements", N1297 = 01-0011, by Jeremy Siek.) Any decisions about issue 299 should keep this possibility in mind.

Further discussion: I propose a compromise between John Potter's resolution, which requires T& as the return type of a[n], and the current wording, which requires convertible to T. The compromise is to keep the convertible to T for the return type of the expression a[n], but to also add a[n] = t as a valid expression. This compromise "saves" the common case uses of random access iterators, while at the same time allowing iterators such as counting iterator and caching file iterators to remain random access iterators (iterators where the lifetime of the object returned by operator*() is tied to the lifetime of the iterator).

Note that the compromise resolution necessitates a change to reverse_iterator. It would need to use a proxy to support a[n] = t.

Note also there is one kind of mutable random access iterator that will no longer meet the new requirements. Currently, iterators that return an r-value from operator[] meet the requirements for a mutable random access iterator, even though the expression a[n] = t will only modify a temporary that goes away. With this proposed resolution, a[n] = t will be required to have the same operational semantics as *(a + n) = t.
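For concreteness, here is a minimal sketch of the kind of "counting iterator" mentioned above (names are illustrative; several required operations are omitted). Its operator* and operator[] can only return by value, since there is no underlying object to bind a reference to, which is why "convertible to T" matters for such iterators.

#include <cstddef>
#include <iterator>

class counting_iterator {
public:
    typedef std::random_access_iterator_tag iterator_category;
    typedef int                             value_type;
    typedef std::ptrdiff_t                  difference_type;
    typedef const int*                      pointer;
    typedef int                             reference;    // by value, not int&

    explicit counting_iterator(int n = 0) : n_(n) {}

    int operator*() const { return n_; }
    int operator[](difference_type i) const { return n_ + static_cast<int>(i); }

    counting_iterator& operator++() { ++n_; return *this; }
    counting_iterator& operator--() { --n_; return *this; }
    counting_iterator& operator+=(difference_type i) { n_ += static_cast<int>(i); return *this; }

    friend counting_iterator operator+(counting_iterator it, difference_type i) { it += i; return it; }
    friend difference_type operator-(counting_iterator a, counting_iterator b) { return a.n_ - b.n_; }
    friend bool operator==(counting_iterator a, counting_iterator b) { return a.n_ == b.n_; }
    friend bool operator!=(counting_iterator a, counting_iterator b) { return !(a == b); }

private:
    int n_;
};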

Proposed resolution:

In section 24.1.4 [lib.bidirectional.iterators], change the return type in table 75 from "convertible to T" to T&.

In section 24.1.5 [lib.random.access.iterators], change the operational semantics for a[n] to " the r-value of a[n] is equivalent to the r-value of *(a + n)". Add a new row in the table for the expression a[n] = t with a return type of convertible to T and operational semantics of *(a + n) = t.

[Lillehammer: Real problem, but should be addressed as part of iterator redesign]


309. Does sentry catch exceptions?

Section: 27.6 [iostream.format] Status: Open Submitter: Martin Sebor Date: 2001-03-19

View all other issues in [iostream.format].

View all issues with Open status.

Discussion:

The descriptions of the constructors of basic_istream<>::sentry (27.6.1.1.3 [istream::sentry]) and basic_ostream<>::sentry (27.6.2.4 [ostream::sentry]) do not explain what the functions do in case an exception is thrown while they execute. Some current implementations allow all exceptions to propagate, others catch them and set ios_base::badbit instead, still others catch some but let others propagate.

The text also mentions that the functions may call setstate(failbit) (without actually saying on what object, but presumably the stream argument is meant). That may have been fine for basic_istream<>::sentry prior to issue 195, since the function performs an input operation which may fail. However, issue 195 amends 27.6.1.1.3 [istream::sentry], p2 to clarify that the function should actually call setstate(failbit | eofbit), so the sentence in p3 is redundant or even somewhat contradictory.

The same sentence that appears in 27.6.2.4 [ostream::sentry], p3 doesn't seem to be very meaningful for basic_ostream<>::sentry which performs no input. It is actually rather misleading since it would appear to guide library implementers to calling setstate(failbit) when os.tie()->flush(), the only called function, throws an exception (typically, it's badbit that's set in response to such an event).

Additional comments from Martin, who isn't comfortable with the current proposed resolution (see c++std-lib-11530)

The istream::sentry ctor says nothing about how the function deals with exceptions (27.6.1.1.2, p1 says that the class is responsible for doing "exception safe"(*) prefix and suffix operations but it doesn't explain what level of exception safety the class promises to provide). The mockup example of a "typical implementation of the sentry ctor" given in 27.6.1.1.2, p6, removed in ISO/IEC 14882:2003, doesn't show exception handling, either. Since the ctor is not classified as a formatted or unformatted input function, the text in 27.6.1.1, p1 through p4 does not apply. All this would seem to suggest that the sentry ctor should not catch or in any way handle exceptions thrown from any functions it may call. Thus, the typical implementation of an istream extractor may look something like [1].

The problem with [1] is that while it correctly sets ios::badbit if an exception is thrown from one of the functions called from the sentry ctor, if the sentry ctor reaches EOF while extracting whitespace from a stream that has eofbit or failbit set in exceptions(), it will cause an ios::failure to be thrown, which will in turn cause the extractor to set ios::badbit.

The only straightforward way to prevent this behavior is to move the definition of the sentry object in the extractor above the try block (as suggested by the example in 22.2.8, p9 and also indirectly supported by 27.6.1.3, p1). See [2]. But such an implementation will allow exceptions thrown from functions called from the ctor to freely propagate to the caller regardless of the setting of ios::badbit in the stream object's exceptions().

So since neither [1] nor [2] behaves as expected, the only possible solution is to have the sentry ctor catch exceptions thrown from called functions, set badbit, and propagate those exceptions if badbit is also set in exceptions(). (Another solution exists that deals with both kinds of sentries, but the code is non-obvious and cumbersome -- see [3].)

Please note that, as the issue points out, current libraries do not behave consistently, suggesting that implementors are not quite clear on the exception handling in istream::sentry, despite the fact that some LWG members might feel otherwise. (As documented by the parenthetical comment here: http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1480.html#309)

Also please note that those LWG members who in Copenhagen felt that "a sentry's constructor should not catch exceptions, because sentries should only be used within (un)formatted input functions and that exception handling is the responsibility of those functions, not of the sentries," as noted here http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/papers/2001/n1310.html#309 would in effect be either arguing for the behavior described in [1] or for extractors implemented along the lines of [3].

The original proposed resolution (Revision 25 of the issues list) clarifies the role of the sentry ctor WRT exception handling by making it clear that extractors (both library or user-defined) should be implemented along the lines of [2] (as opposed to [1]) and that no exception thrown from the callees should propagate out of either function unless badbit is also set in exceptions().

[1] Extractor that catches exceptions thrown from sentry:

struct S { long i; };

istream& operator>> (istream &strm, S &s)
{
    ios::iostate err = ios::goodbit;
    try {
        const istream::sentry guard (strm, false);
        if (guard) {
            use_facet<num_get<char> >(strm.getloc ())
                .get (istreambuf_iterator<char>(strm),
                      istreambuf_iterator<char>(),
                      strm, err, s.i);
        }
    }
    catch (...) {
        bool rethrow;
        try {
            strm.setstate (ios::badbit);
            rethrow = false;
        }
        catch (...) {
            rethrow = true;
        }
        if (rethrow)
            throw;
    }
    if (err)
        strm.setstate (err);
    return strm;
}

[2] Extractor that propagates exceptions thrown from sentry:

istream& operator>> (istream &strm, S &s)
{
    istream::sentry guard (strm, false);
    if (guard) {
        ios::iostate err = ios::goodbit;
        try {
            use_facet<num_get<char> >(strm.getloc ())
                .get (istreambuf_iterator<char>(strm),
                      istreambuf_iterator<char>(),
                      strm, err, s.i);
        }
        catch (...) {
            bool rethrow;
            try {
                strm.setstate (ios::badbit);
                rethrow = false;
            }
            catch (...) {
                rethrow = true;
            }
            if (rethrow)
                throw;
        }
        if (err)
            strm.setstate (err);
    }
    return strm;
}

[3] Extractor that catches exceptions thrown from sentry but doesn't set badbit if the exception was thrown as a result of a call to strm.clear().

istream& operator>> (istream &strm, S &s)
{
    const ios::iostate state = strm.rdstate ();
    const ios::iostate except = strm.exceptions ();
    ios::iostate err = std::ios::goodbit;
    bool thrown = true;
    try {
        const istream::sentry guard (strm, false);
        thrown = false;
        if (guard) {
            use_facet<num_get<char> >(strm.getloc ())
                .get (istreambuf_iterator<char>(strm),
                      istreambuf_iterator<char>(),
                      strm, err, s.i);
        }
    }
    catch (...) {
        if (thrown && state & except)
            throw;
        try {
            strm.setstate (ios::badbit);
            thrown = false;
        }
        catch (...) {
            thrown = true;
        }
        if (thrown)
            throw;
    }
    if (err)
        strm.setstate (err);

    return strm;
}

[Pre-Berlin] Reopened at the request of Paolo Carlini and Steve Clamage.

[Pre-Portland] A relevant newsgroup post:

The current proposed resolution of issue #309 (http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-active.html#309) is unacceptable. I write commercial software and coding around this makes my code ugly, non-intuitive, and requires comments referring people to this very issue. Following is the full explanation of my experience.

In the course of writing software for commercial use, I constructed std::ifstream's based on user-supplied pathnames on typical POSIX systems.

It was expected that some files that opened successfully might not read successfully -- such as a pathname which actually referred to a directory. Intuitively, I expected the streambuffer underflow() code to throw an exception in this situation, and recent implementations of libstdc++'s basic_filebuf do just that (as well as many of my own custom streambufs).

I also intuitively expected that the istream code would convert these exceptions to the "badbit" set on the stream object, because I had not requested exceptions. I refer to 27.6.1.1, p4.

However, this was not the case on at least two implementations -- if the first thing I did with an istream was call operator>>( T& ) for T among the basic arithmetic types and std::string. Looking further I found that the sentry's constructor was invoking the exception when it pre-scanned for whitespace, and the extractor function (operator>>()) was not catching exceptions in this situation.

So, I was in a situation where setting 'noskipws' would change the istream's behavior even though no characters (whitespace or not) could ever be successfully read.

Also, calling .peek() on the istream before calling the extractor() changed the behavior (.peek() had the effect of setting the badbit ahead of time).

I found this all to be so inconsistent and inconvenient for me and my code design, that I filed a bugzilla entry for libstdc++. I was then told that the bug cannot be fixed until issue #309 is resolved by the committee.

Proposed resolution:

Rationale:

The LWG agrees there is minor variation between implementations, but believes that it doesn't matter. This is a rarely used corner case. There is no evidence that this has any commercial importance or that it causes actual portability problems for customers trying to write code that runs on multiple implementations.


342. seek and eofbit

Section: 27.6.1.3 [istream.unformatted] Status: Open Submitter: Howard Hinnant Date: 2001-10-09

View all other issues in [istream.unformatted].

View all issues with Open status.

Discussion:

I think we have a defect.

According to lwg issue 60 which is now a dr, the description of seekg in 27.6.1.3 [istream.unformatted] paragraph 38 now looks like:

Behaves as an unformatted input function (as described in 27.6.1.3, paragraph 1), except that it does not count the number of characters extracted and does not affect the value returned by subsequent calls to gcount(). After constructing a sentry object, if fail() != true, executes rdbuf()->pubseekpos( pos).

And according to lwg issue 243 which is also now a dr, 27.6.1.3, paragraph 1 looks like:

Each unformatted input function begins execution by constructing an object of class sentry with the default argument noskipws (second) argument true. If the sentry object returns true, when converted to a value of type bool, the function endeavors to obtain the requested input. Otherwise, if the sentry constructor exits by throwing an exception or if the sentry object returns false, when converted to a value of type bool, the function returns without attempting to obtain any input. In either case the number of extracted characters is set to 0; unformatted input functions taking a character array of non-zero size as an argument shall also store a null character (using charT()) in the first location of the array. If an exception is thrown during input then ios::badbit is turned on in *this's error state. If (exceptions()&badbit) != 0 then the exception is rethrown. It also counts the number of characters extracted. If no exception has been thrown it ends by storing the count in a member object and returning the value specified. In any event the sentry object is destroyed before leaving the unformatted input function.

And finally 27.6.1.1.2/5 says this about sentry:

If, after any preparation is completed, is.good() is true, ok_ != false otherwise, ok_ == false.

So although the seekg paragraph says that the operation proceeds if !fail(), the behavior of unformatted functions says the operation proceeds only if good(). The two statements are contradictory when only eofbit is set. I don't think the current text is clear which condition should be respected.

Further discussion from Redmond:

PJP: It doesn't seem quite right to say that seekg is "unformatted". That makes specific claims about sentry that aren't quite appropriate for seeking, which has less fragile failure modes than actual input. If we do really mean that it's unformatted input, it should behave the same way as other unformatted input. On the other hand, "principle of least surprise" is that seeking from EOF ought to be OK.

Pre-Berlin: Paolo points out several problems with the proposed resolution in Ready state:

Proposed resolution:

Change 27.6.1.3 [istream.unformatted] to:

Behaves as an unformatted input function (as described in 27.6.1.3, paragraph 1), except that it does not count the number of characters extracted, does not affect the value returned by subsequent calls to gcount(), and does not examine the value returned by the sentry object. After constructing a sentry object, if fail() != true, executes rdbuf()->pubseekpos(pos). In case of success, the function calls clear(). In case of failure, the function calls setstate(failbit) (which may throw ios_base::failure).

[Lillehammer: Matt provided wording.]

Rationale:

In C, fseek does clear EOF. This is probably what most users would expect. We agree that having eofbit set should not deter a seek, and that a successful seek should clear eofbit. Note that fail() is true only if failbit or badbit is set, so using !fail(), rather than good(), satisfies this goal.
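A sketch of the intended behavior (the file name and its single-line, no-trailing-newline content are illustrative assumptions): the getline sets eofbit but not failbit, so fail() is false, the seek proceeds, and on success the state is cleared, just as fseek would do in C.

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream in("one_line_no_newline.txt");
    std::string line;

    std::getline(in, line);   // reaches EOF: eofbit set, failbit not set
    in.seekg(0);              // proceeds because fail() != true; clears state on success

    std::getline(in, line);   // readable again after the seek
    std::cout << line << '\n';
    return 0;
}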


343. Unspecified library header dependencies

Section: 21 [strings], 23 [containers], 27 [input.output] Status: Open Submitter: Martin Sebor Date: 2001-10-09

View all other issues in [strings].

View all issues with Open status.

Discussion:

The synopses of the C++ library headers clearly show which names are required to be defined in each header. Since in order to implement the classes and templates defined in these headers declarations of other templates (but not necessarily their definitions) are typically necessary the standard in 17.4.4, p1 permits library implementers to include any headers needed to implement the definitions in each header.

For instance, although it is not explicitly specified in the synopsis of <string>, at the point of definition of the std::basic_string template the declaration of the std::allocator template must be in scope. All current implementations simply include <memory> from within <string>, either directly or indirectly, to bring the declaration of std::allocator into scope.

Additionally, however, some implementation also include <istream> and <ostream> at the top of <string> to bring the declarations of std::basic_istream and std::basic_ostream into scope (which are needed in order to implement the string inserter and extractor operators (21.3.7.9 [lib.string.io])). Other implementations only include <iosfwd>, since strictly speaking, only the declarations and not the full definitions are necessary.

Obviously, it is possible to implement <string> without actually providing the full definitions of all the templates std::basic_string uses (std::allocator, std::basic_istream, and std::basic_ostream). Furthermore, not only is it possible, doing so is likely to have a positive effect on compile-time efficiency.

But while it may seem perfectly reasonable to expect a program that uses the std::basic_string insertion and extraction operators to also explicitly include <istream> or <ostream>, respectively, it doesn't seem reasonable to also expect it to explicitly include <memory>. Since what's reasonable and what isn't is highly subjective one would expect the standard to specify what can and what cannot be assumed. Unfortunately, that isn't the case.

The examples below demonstrate the issue.

Example 1:

It is not clear whether the following program is complete:

#include <string>

extern std::basic_ostream<char> &strm;

int main () {
    strm << std::string ("Hello, World!\n");
}

or whether one must explicitly include <memory> or <ostream> (or both) in addition to <string> in order for the program to compile.

Example 2:

Similarly, it is unclear whether the following program is complete:

#include <istream>

extern std::basic_iostream<char> &strm;

int main () {
    strm << "Hello, World!\n";
}

or whether one needs to explicitly include <ostream>, and perhaps even other headers containing the definitions of other required templates:

#include <ios>
#include <istream>
#include <ostream>
#include <streambuf>

extern std::basic_iostream<char> &strm;

int main () {
    strm << "Hello, World!\n";
}

Example 3:

Likewise, it seems unclear whether the program below is complete:

#include <iterator>

bool foo (std::istream_iterator<int> a, std::istream_iterator<int> b)
{
    return a == b;
}

int main () { }

or whether one should be required to include <istream>.

There are many more examples that demonstrate this lack of a requirement. I believe that in a good number of cases it would be unreasonable to require that a program explicitly include all the headers necessary for a particular template to be specialized, but I think that there are cases such as some of those above where it would be desirable to allow implementations to include only as much as necessary and not more.

[ post Bellevue: ]

Position taken in prior reviews is that the idea of a table of header dependencies is a good one. Our view is that a full paper is needed to do justice to this, and we've made that recommendation to the issue author.

Proposed resolution:

For every C++ library header, supply a minimum set of other C++ library headers that are required to be included by that header. The proposed list is below (C++ headers for C Library Facilities, table 12 in 17.4.1.2, p3, are omitted):

+------------+--------------------+
| C++ header |required to include |
+============+====================+
|<algorithm> |                    |
+------------+--------------------+
|<bitset>    |                    |
+------------+--------------------+
|<complex>   |                    |
+------------+--------------------+
|<deque>     |<memory>            |
+------------+--------------------+
|<exception> |                    |
+------------+--------------------+
|<fstream>   |<ios>               |
+------------+--------------------+
|<functional>|                    |
+------------+--------------------+
|<iomanip>   |<ios>               |
+------------+--------------------+
|<ios>       |<streambuf>         |
+------------+--------------------+
|<iosfwd>    |                    |
+------------+--------------------+
|<iostream>  |<istream>, <ostream>|
+------------+--------------------+
|<istream>   |<ios>               |
+------------+--------------------+
|<iterator>  |                    |
+------------+--------------------+
|<limits>    |                    |
+------------+--------------------+
|<list>      |<memory>            |
+------------+--------------------+
|<locale>    |                    |
+------------+--------------------+
|<map>       |<memory>            |
+------------+--------------------+
|<memory>    |                    |
+------------+--------------------+
|<new>       |<exception>         |
+------------+--------------------+
|<numeric>   |                    |
+------------+--------------------+
|<ostream>   |<ios>               |
+------------+--------------------+
|<queue>     |<deque>             |
+------------+--------------------+
|<set>       |<memory>            |
+------------+--------------------+
|<sstream>   |<ios>, <string>     |
+------------+--------------------+
|<stack>     |<deque>             |
+------------+--------------------+
|<stdexcept> |                    |
+------------+--------------------+
|<streambuf> |<ios>               |
+------------+--------------------+
|<string>    |<memory>            |
+------------+--------------------+
|<strstream> |                    |
+------------+--------------------+
|<typeinfo>  |<exception>         |
+------------+--------------------+
|<utility>   |                    |
+------------+--------------------+
|<valarray>  |                    |
+------------+--------------------+
|<vector>    |<memory>            |
+------------+--------------------+

Rationale:

The portability problem is real. A program that works correctly on one implementation might fail on another, because of different header dependencies. This problem was understood before the standard was completed, and it was a conscious design choice.

One possible way to deal with this, as a library extension, would be an <all> header.

Hinnant: It's time we dealt with this issue for C++0X. Reopened.


382. codecvt do_in/out result

Section: 22.2.1.4 [locale.codecvt] Status: Open Submitter: Martin Sebor Date: 2002-08-30

View all other issues in [locale.codecvt].

View all issues with Open status.

Discussion:

It seems that the descriptions of codecvt do_in() and do_out() leave sufficient room for interpretation so that two implementations of codecvt may not work correctly with the same filebuf. Specifically, the following seems less than adequately specified:

  1. the conditions under which the functions terminate
  2. precisely when the functions return ok
  3. precisely when the functions return partial
  4. the full set of conditions when the functions return error

  1. 22.2.1.4.2 [locale.codecvt.virtuals], p2 says this about the effects of the function: ...Stops if it encounters a character it cannot convert... This assumes that there *is* a character to convert. What happens when there is a sequence that doesn't form a valid source character, such as an unassigned or invalid UNICODE character, or a sequence that cannot possibly form a character (e.g., the sequence "\xc0\xff" in UTF-8)?
  2. Table 53 says that the function returns codecvt_base::ok to indicate that the function(s) "completed the conversion." Suppose that the source sequence is "\xc0\x80" in UTF-8, with from pointing to '\xc0' and (from_end==from + 1). It is not clear whether the return value should be ok or partial (see below).
  3. Table 53 says that the function returns codecvt_base::partial if "not all source characters converted." With the from pointers set up the same way as above, it is not clear whether the return value should be partial or ok (see above).
  4. Table 53, in the row describing the meaning of error mistakenly refers to a "from_type" character, without the symbol from_type having been defined. Most likely, the word "source" character is intended, although that is not sufficient. The functions may also fail when they encounter an invalid source sequence that cannot possibly form a valid source character (e.g., as explained in bullet 1 above).

Finally, the conditions described at the end of 22.2.1.4.2 [locale.codecvt.virtuals], p4 don't seem to be possible:

"A return value of partial, if (from_next == from_end), indicates that either the destination sequence has not absorbed all the available destination elements, or that additional source elements are needed before another destination element can be produced."

If the value is partial, it's not clear to me that (from_next ==from_end) could ever hold if there isn't enough room in the destination buffer. In order for (from_next==from_end) to hold, all characters in that range must have been successfully converted (according to 22.2.1.4.2 [locale.codecvt.virtuals], p2) and since there are no further source characters to convert, no more room in the destination buffer can be needed.

It's also not clear to me that (from_next==from_end) could ever hold if additional source elements are needed to produce another destination character (not element as incorrectly stated in the text). partial is returned if "not all source characters have been converted" according to Table 53, which also implies that (from_next==from) does NOT hold.

Could it be that the intended qualifying condition was actually (from_next != from_end), i.e., that the sentence was supposed to read

"A return value of partial, if (from_next != from_end),..."

which would make perfect sense, since, as far as I understand it, partial can only occur if (from_next != from_end)?
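For reference, a sketch of the call whose return values are at issue (the input bytes are illustrative; nothing here is normative). It merely shows where the ok/partial/error ambiguity bites:

#include <cwchar>
#include <locale>

int main() {
    std::locale loc;
    typedef std::codecvt<wchar_t, char, std::mbstate_t> cvt_type;
    const cvt_type& cvt = std::use_facet<cvt_type>(loc);

    const char from[] = "example";
    wchar_t to[16];
    std::mbstate_t state = std::mbstate_t();
    const char* from_next;
    wchar_t* to_next;

    cvt_type::result r = cvt.in(state,
                                from, from + sizeof(from) - 1, from_next,
                                to, to + 16, to_next);

    // The issue's point: whether a trailing incomplete sequence yields ok or
    // partial, and exactly when error applies, is open to interpretation, so
    // two conforming codecvt facets may disagree on the value of r here.
    switch (r) {
    case cvt_type::ok:      /* whole field converted */           break;
    case cvt_type::partial: /* more input or more room needed */  break;
    case cvt_type::error:   /* invalid sequence encountered */    break;
    case cvt_type::noconv:  /* no conversion was needed */        break;
    }
    return 0;
}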

[Lillehammer: Defer for the moment, but this really needs to be fixed. Right now, the description of codecvt is too vague for it to be a useful contract between providers and clients of codecvt facets. (Note that both vendors and users can be both providers and clients of codecvt facets.) The major philosophical issue is whether the standard should only describe mappings that take a single wide character to multiple narrow characters (and vice versa), or whether it should describe fully general N-to-M conversions. When the original standard was written only the former was contemplated, but today, in light of the popularity of utf8 and utf16, that doesn't seem sufficient for C++0x. Bill supports general N-to-M conversions; we need to make sure Martin and Howard agree.]

Proposed resolution:


394. behavior of formatted output on failure

Section: 27.6.2.6.1 [ostream.formatted.reqmts] Status: Open Submitter: Martin Sebor Date: 2002-12-27

View all issues with Open status.

Discussion:

There is a contradiction in Formatted output about what bit is supposed to be set if the formatting fails. One sentence says it's badbit and another that it's failbit.

27.6.2.5.1, p1 says in the Common Requirements on Formatted output functions:

     ... If the generation fails, then the formatted output function
     does setstate(ios::failbit), which might throw an exception.

27.6.2.5.2, p1 goes on to say this about Arithmetic Inserters:

... The formatting conversion occurs as if it performed the following code fragment:

     bool failed = use_facet<
         num_put<charT, ostreambuf_iterator<charT, traits> >
         >(getloc()).put(*this, *this, fill(), val).failed();

     ... If failed is true then does setstate(badbit) ...

The original intent of the text, according to Jerry Schwarz (see c++std-lib-10500), is captured in the following paragraph:

In general "badbit" should mean that the stream is unusable because of some underlying failure, such as disk full or socket closure; "failbit" should mean that the requested formatting wasn't possible because of some inconsistency such as negative widths. So typically if you clear badbit and try to output something else you'll fail again, but if you clear failbit and try to output something else you'll succeed.

In the case of the arithmetic inserters, since num_put cannot report failure by any means other than exceptions (in response to which the stream must set badbit, which prevents the kind of recoverable error reporting mentioned above), the only other detectable failure is if the iterator returned from num_put returns true from failed().

Since that can only happen (at least with the required iostream specializations) under such conditions as the underlying failure referred to above (e.g., disk full), setting badbit would seem to be the appropriate response (indeed, it is required in 27.6.2.5.2, p1). It follows that failbit can never be directly set by the arithmetic inserters (it can only be set by the sentry object under some unspecified conditions).

The situation is different for other formatted output functions which can fail as a result of the streambuf functions failing (they may do so by means other than exceptions), and which are then required to set failbit.

The contradiction, then, is that ostream::operator<<(int) will set badbit if the disk is full, while operator<<(ostream&, char) will set failbit under the same conditions. To make the behavior consistent, the Common requirements sections for the Formatted output functions should be changed as proposed below.

[Kona: There's agreement that this is a real issue. What we decided at Kona: 1. An error from the buffer (which can be detected either directly from streambuf's member functions or by examining a streambuf_iterator) should always result in badbit getting set. 2. There should never be a circumstance where failbit gets set. That represents a formatting error, and there are no circumstances under which the output facets are specified as signaling a formatting error. (Even more so for string output than for numeric because there's nothing to format.) If we ever decide to make it possible for formatting errors to exist then the facets can signal the error directly, and that should go in clause 22, not clause 27. 3. The phrase "if generation fails" is unclear and should be eliminated. It's not clear whether it's intended to mean a buffer error (e.g. a full disk), a formatting error, or something else. Most people thought it was supposed to refer to buffer errors; if so, we should say so. Martin will provide wording.]

Proposed resolution:

Rationale:


396. what are characters zero and one

Section: 23.3.5.1 [bitset.cons] Status: Ready Submitter: Martin Sebor Date: 2003-01-05

View all other issues in [bitset.cons].

View all issues with Ready status.

Discussion:

23.3.5.1, p6 [lib.bitset.cons] talks about a generic character having the value of 0 or 1 but there is no definition of what that means for charT other than char and wchar_t. And even for those two types, the values 0 and 1 are not actually what is intended -- the values '0' and '1' are. This, along with the converse problem in the description of to_string() in 23.3.5.2, p33, looks like a defect remotely related to DR 303.

http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/lwg-defects.html#303

23.3.5.1:
  -6-  An element of the constructed string has value zero if the
       corresponding character in str, beginning at position pos,
       is 0. Otherwise, the element has the value one.
    
23.3.5.2:
  -33-  Effects: Constructs a string object of the appropriate
        type and initializes it to a string of length N characters.
        Each character is determined by the value of its
        corresponding bit position in *this. Character position N - 1
        corresponds to bit position zero. Subsequent decreasing
        character positions correspond to increasing bit positions.
        Bit value zero becomes the character 0, bit value one becomes
        the character 1.
    

Also note the typo in 23.3.5.1, p6: the object under construction is a bitset, not a string.

[ Sophia Antipolis: ]

We note that bitset has been moved from section 23 to section 20, by another issue (842) previously resolved at this meeting.

Disposition: move to ready.

We request that Howard submit a separate issue regarding the three to_string overloads.

Proposed resolution:

Change the constructor's function declaration immediately before 23.3.5.1 [bitset.cons] p3 to:

    template <class charT, class traits, class Allocator>
    explicit
    bitset(const basic_string<charT, traits, Allocator>& str,
           typename basic_string<charT, traits, Allocator>::size_type pos = 0,
           typename basic_string<charT, traits, Allocator>::size_type n =
             basic_string<charT, traits, Allocator>::npos,
           charT zero = charT('0'), charT one = charT('1'))

Change the first two sentences of 23.3.5.1 [bitset.cons] p6 to: "An element of the constructed string has value 0 if the corresponding character in str, beginning at position pos, is zero. Otherwise, the element has the value 1."

Change the text of the second sentence in 23.3.5.1, p5 to read: "The function then throws invalid_argument if any of the rlen characters in str beginning at position pos is other than zero or one. The function uses traits::eq() to compare the character values."

Change the declaration of the to_string member function immediately before 23.3.5.2 [bitset.members] p33 to:

    template <class charT, class traits, class Allocator>
    basic_string<charT, traits, Allocator> 
    to_string(charT zero = charT('0'), charT one = charT('1')) const;

Change the last sentence of 23.3.5.2 [bitset.members] p33 to: "Bit value 0 becomes the character zero, bit value 1 becomes the character one."

Change 23.3.5.3 [bitset.operators] p8 to:

Returns:

  os << x.template to_string<charT,traits,allocator<charT> >(
      use_facet<ctype<charT> >(os.getloc()).widen('0'),
      use_facet<ctype<charT> >(os.getloc()).widen('1'));

Rationale:

There is a real problem here: we need the character values of '0' and '1', and we have no way to get them since strings don't have imbued locales. In principle the "right" solution would be to provide an extra object, either a ctype facet or a full locale, which would be used to widen '0' and '1'. However, there was some discomfort about using such a heavyweight mechanism. The proposed resolution allows those users who care about this issue to get it right.

We fix the inserter to use the new arguments. Note that we already fixed the analogous problem with the extractor in issue 303.
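To illustrate the intent, here is how the new defaulted zero/one parameters would be used. This is a sketch of the proposed interface, not current standard behavior, and the fully qualified to_string call matches the single overload given in the proposed wording above.

    #include <bitset>
    #include <cassert>
    #include <memory>
    #include <string>

    int main()
    {
        // 'O' plays the role of zero and 'X' the role of one.
        std::bitset<4> b(std::string("XOOX"), 0, 4, 'O', 'X');
        assert(b.to_ulong() == 9);   // binary 1001

        std::string s = b.to_string<char, std::char_traits<char>,
                                    std::allocator<char> >('O', 'X');
        assert(s == "XOOX");
    }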

[ post Bellevue: ]

We are happy with the resolution as proposed, and we move this to Ready.

[ Howard adds: ]

The proposed wording neglects the 3 newer to_string overloads.

397. ostream::sentry dtor throws exceptions

Section: 27.6.2.4 [ostream::sentry] Status: Open Submitter: Martin Sebor Date: 2003-01-05

View other active issues in [ostream::sentry].

View all other issues in [ostream::sentry].

View all issues with Open status.

Discussion:

17.4.4.8, p3 prohibits library dtors from throwing exceptions.

27.6.2.3, p4 says this about the ostream::sentry dtor:

    -4- If ((os.flags() & ios_base::unitbuf) && !uncaught_exception())
        is true, calls os.flush().
    

27.6.2.6, p7 that describes ostream::flush() says:

    -7- If rdbuf() is not a null pointer, calls rdbuf()->pubsync().
        If that function returns -1 calls setstate(badbit) (which
        may throw ios_base::failure (27.4.4.3)).
    

That seems like a defect, since both pubsync() and setstate() can throw an exception.

[ The contradiction is real. Clause 17 says destructors may never throw exceptions, and clause 27 specifies a destructor that does throw. In principle we might change either one. We're leaning toward changing clause 17: putting in an "unless otherwise specified" clause, and then putting in a footnote saying the sentry destructor is the only one that can throw. PJP suggests specifying that sentry::~sentry() should internally catch any exceptions it might cause. ]
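A rough sketch of the direction PJP suggests above, with the flush guarded by a catch-all so the destructor itself never throws; sentry_like is purely illustrative and not proposed wording.

    #include <exception>
    #include <iostream>
    #include <ostream>

    class sentry_like {
        std::ostream& os_;
    public:
        explicit sentry_like(std::ostream& os) : os_(os) {}
        ~sentry_like() {
            if ((os_.flags() & std::ios_base::unitbuf) && !std::uncaught_exception()) {
                try { os_.flush(); }   // pubsync()/setstate() may throw
                catch (...) {}         // swallowed: 17.4.4.8, p3 forbids throwing dtors
            }
        }
    };

    int main()
    {
        std::cout.setf(std::ios_base::unitbuf);
        {
            sentry_like guard(std::cout);
            std::cout << "hello\n";
        }   // guard's destructor performs the flush here
    }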

[ See 418 and 622 for related issues. ]

Proposed resolution:


398. effects of end-of-file on unformatted input functions

Section: 27.6.2.4 [ostream::sentry] Status: Open Submitter: Martin Sebor Date: 2003-01-05

View other active issues in [ostream::sentry].

View all other issues in [ostream::sentry].

View all issues with Open status.

Discussion:

While reviewing unformatted input member functions of istream for their behavior when they encounter end-of-file during input I found that the requirements vary, sometimes unexpectedly, and in more than one case even contradict established practice (GNU libstdc++ 3.2, IBM VAC++ 6.0, STLPort 4.5, SunPro 5.3, HP aCC 5.38, Rogue Wave libstd 3.1, and Classic Iostreams).

The following unformatted input member functions set eofbit if they encounter an end-of-file (this is the expected behavior, and also the behavior of all major implementations):

    basic_istream<charT, traits>&
    get (char_type*, streamsize, char_type);
    

Also sets failbit if it fails to extract any characters.

    basic_istream<charT, traits>&
    get (char_type*, streamsize);
    

Also sets failbit if it fails to extract any characters.

    basic_istream<charT, traits>&
    getline (char_type*, streamsize, char_type);
    

Also sets failbit if it fails to extract any characters.

    basic_istream<charT, traits>&
    getline (char_type*, streamsize);
    

Also sets failbit if it fails to extract any characters.

    basic_istream<charT, traits>&
    ignore (int, int_type);
    
    basic_istream<charT, traits>&
    read (char_type*, streamsize);
    

Also sets failbit if it encounters end-of-file.

    streamsize readsome (char_type*, streamsize);
    

The following unformatted input member functions set failbit but not eofbit if they encounter an end-of-file (I find this odd since the functions make it impossible to distinguish a general failure from a failure due to end-of-file; the requirement is also in conflict with all major implementations, which set both eofbit and failbit):

    int_type get();
    
    basic_istream<charT, traits>&
    get (char_type&);
    

These functions only set failbit if they extract no characters; otherwise they don't set any bits, even on failure (I find this inconsistency quite unexpected; the requirement is also in conflict with all major implementations, which set eofbit whenever they encounter end-of-file):

    basic_istream<charT, traits>&
    get (basic_streambuf<charT, traits>&, char_type);
    
    basic_istream<charT, traits>&
    get (basic_streambuf<charT, traits>&);
    

This function sets no bits (all implementations except for STLport and Classic Iostreams set eofbit when they encounter end-of-file):

    int_type peek ();
    

Informally, what we want is a global statement of intent saying that eofbit gets set if we trip across EOF, and then we can take away the specific wording for individual functions. A full review is necessary. The wording currently in the standard is a mishmash, and changing it on an individual basis wouldn't make things better. Dietmar will do this work.
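For example, the divergence for get(char_type&) can be seen with a short probe; the comment reflects the requirements as currently worded, while the major implementations listed above set eofbit as well.

    #include <iostream>
    #include <sstream>

    int main()
    {
        std::istringstream in("a");
        char c;
        in.get(c);   // extracts 'a'
        in.get(c);   // encounters end-of-file
        std::cout << "eof=" << in.eof() << " fail=" << in.fail() << '\n';
        // As worded, only failbit is required after the second call.
    }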

Proposed resolution:


408. Is vector<reverse_iterator<char*> > forbidden?

Section: 24.1 [iterator.requirements] Status: Open Submitter: Nathan Myers Date: 2003-06-03

View other active issues in [iterator.requirements].

View all other issues in [iterator.requirements].

View all issues with Open status.

Discussion:

I've been discussing iterator semantics with Dave Abrahams, and a surprise has popped up. I don't think this has been discussed before.

24.1 [iterator.requirements] says that the only operation that can be performed on "singular" iterator values is to assign a non-singular value to them. (It doesn't say they can be destroyed, and that's probably a defect.) Some implementations have taken this to imply that there is no need to initialize the data member of a reverse_iterator<> in the default constructor. As a result, code like

  std::vector<std::reverse_iterator<char*> > v(7);
  v.reserve(1000);

invokes undefined behavior, because it must default-initialize the vector elements, and then copy them to other storage. Of course many other vector operations on these adapters are also left undefined, and which those are is not reliably deducible from the standard.

I don't think that 24.1 was meant to make standard-library iterator types unsafe. Rather, it was meant to restrict what operations may be performed by functions which take general user- and standard iterators as arguments, so that raw pointers would qualify as iterators. However, this is not clear in the text, and others have come to the opposite conclusion.

One question is whether the standard iterator adaptors have defined copy semantics. Another is whether they have defined destructor semantics: is

  { std::vector<std::reverse_iterator<char*> >  v(7); }

undefined too?

Note this is not a question of whether algorithms are allowed to rely on copy semantics for arbitrary iterators, just whether the types we actually supply support those operations. I believe the resolution must be expressed in terms of the semantics of the adapter's argument type. It should make clear that, e.g., the reverse_iterator<T> constructor is actually required to execute T(), and so copying is defined if the result of T() is copyable.

Issue 235, which defines reverse_iterator's default constructor more precisely, has some relevance to this issue. However, it is not the whole story.

The issue was whether

  reverse_iterator() { }

is allowed, vs.

  reverse_iterator() : current() { }

The difference is when T is char*, where the first leaves the member uninitialized, and possibly equal to an existing pointer value, or (on some targets) may result in a hardware trap when copied.

8.5 paragraph 5 seems to make clear that the second is required to satisfy DR 235, at least for non-class Iterator argument types.

But that only takes care of reverse_iterator, and doesn't establish a policy for all iterators. (The reverse iterator adapter was just an example.) In particular, does my function

  template <typename Iterator>
    void f() { std::vector<Iterator>  v(7); } 

evoke undefined behavior for some conforming iterator definitions? I think it does, now, because vector<> will destroy those singular iterator values, and that's explicitly disallowed.

24.1 shouldn't give blanket permission to copy all singular iterators, because then pointers wouldn't qualify as iterators. However, it should allow copying of that subset of singular iterator values that are default-initialized, and it should explicitly allow destroying any iterator value, singular or not, default-initialized or not.

Related issue: 407

[ We don't want to require all singular iterators to be copyable, because that is not the case for pointers. However, default construction may be a special case. Issue: is it really default construction we want to talk about, or is it something like value initialization? We need to check with core to see whether default constructed pointers are required to be copyable; if not, it would be wrong to impose so strict a requirement for iterators. ]

Proposed resolution:


417. what does ctype::do_widen() return on failure

Section: 22.2.1.1.2 [locale.ctype.virtuals] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all other issues in [locale.ctype.virtuals].

View all issues with Open status.

Discussion:

The Effects and Returns clauses of the do_widen() member function of the ctype facet fail to specify the behavior of the function on failure. Note that the function may not be able to simply cast the narrow character argument to the type of the result, since doing so may yield the wrong value for some wchar_t encodings. Popular implementations of ctype<wchar_t> that use mbtowc() and UTF-8 as the native encoding (e.g., GNU glibc) will fail when the argument's MSB is set. There is no way for the rest of locale and iostream to reliably detect this failure.
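A small illustration of why the failure is undetectable; the locale name is only an example and the program assumes a UTF-8 locale is installed, so treat it as a sketch rather than portable code.

    #include <iostream>
    #include <locale>

    int main()
    {
        std::locale loc("en_US.UTF-8");   // assumed to exist on the system
        const std::ctype<wchar_t>& ct = std::use_facet<std::ctype<wchar_t> >(loc);

        // A byte with the MSB set: the widening can fail, but there is no
        // designated failure value the caller could test against.
        wchar_t wc = ct.widen('\xe9');
        std::wcout << static_cast<unsigned long>(wc) << L'\n';
    }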

[Kona: This is a real problem. Widening can fail. It's unclear what the solution should be. Returning WEOF works for the wchar_t specialization, but not in general. One option might be to add a default, like narrow. But that's an incompatible change. Using traits::eof might seem like a good idea, but facets don't have access to traits (a recurring problem). We could have widen throw an exception, but that's a scary option; existing library components aren't written with the assumption that widen can throw.]

Proposed resolution:


418. exceptions thrown during iostream cleanup

Section: 27.4.2.1.6 [ios::Init] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all issues with Open status.

Discussion:

The dtor of the ios_base::Init object is supposed to call flush() on the 6 standard iostream objects cout, cerr, clog, wcout, wcerr, and wclog. This call may cause an exception to be thrown.

17.4.4.8, p3 prohibits all library destructors from throwing exceptions.

The question is: What should this dtor do if one or more of these calls to flush() ends up throwing an exception? This can happen quite easily if one of the facets installed in the locale imbued in the iostream object throws.

[Kona: We probably can't do much better than what we've got, so the LWG is leaning toward NAD. At the point where the standard stream objects are being cleaned up, the usual error reporting mechanisms are all unavailable. An exception from flush at this point will definitely cause problems. A quality implementation might reasonably swallow the exception, or call abort, or do something even more drastic.]

[ See 397 and 622 for related issues. ]

Proposed resolution:


419. istream extractors not setting failbit if eofbit is already set

Section: 27.6.1.1.3 [istream::sentry] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all other issues in [istream::sentry].

View all issues with Open status.

Discussion:

27.6.1.1.3 [istream::sentry], p2 says that istream::sentry ctor prepares for input if is.good() is true. p4 then goes on to say that the ctor sets the sentry::ok_ member to true if the stream state is good after any preparation. 27.6.1.2.1 [istream.formatted.reqmts], p1 then says that a formatted input function endeavors to obtain the requested input if the sentry's operator bool() returns true. Given these requirements, no formatted extractor should ever set failbit if the initial stream rdstate() == eofbit. That is contrary to the behavior of all implementations I tested. The program below prints out

    eof = 1, fail = 0
    eof = 1, fail = 1

on all of them.


#include <sstream>
#include <cstdio>

int main()
{
    std::istringstream strm ("1");

    int i = 0;

    strm >> i;

    std::printf ("eof = %d, fail = %d\n",
                 !!strm.eof (), !!strm.fail ());

    strm >> i;

    std::printf ("eof = %d, fail = %d\n",
                 !!strm.eof (), !!strm.fail ());
}


Comments from Jerry Schwarz (c++std-lib-11373):

I don't know where (if anywhere) it says it in the standard, but the formatted extractors are supposed to set failbit if they don't extract any characters. If they didn't then simple loops like

    while (cin >> x);

would loop forever.

Further comments from Martin Sebor:

The question is which part of the extraction should prevent this from happening by setting failbit when eofbit is already set. It could either be the sentry object or the extractor. It seems that most implementations have chosen to set failbit in the sentry [...] so that's the text that will need to be corrected.

Pre Berlin: This issue is related to 342. If the sentry sets failbit when it finds eofbit already set, then you can never seek away from the end of stream.

Kona: Possibly NAD. If eofbit is set then good() will return false. We then set ok to false. We believe that the sentry's constructor should always set failbit when ok is false, and we also think the standard already says that. Possibly it could be clearer.

Proposed resolution:

Change 27.6.1.1.3 [istream::sentry], p2 to:

explicit sentry(basic_istream<charT,traits>& is , bool noskipws = false);

-2- Effects: If is.good() is false, calls is.setstate(failbit). Otherwise prepares for formatted or unformatted input. ...


421. is basic_streambuf copy-constructible?

Section: 27.5.2.1 [streambuf.cons] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all other issues in [streambuf.cons].

View all issues with Open status.

Discussion:

The reflector thread starting with c++std-lib-11346 notes that the class template basic_streambuf, along with basic_stringbuf and basic_filebuf, is copy-constructible but that the semantics of the copy constructors are not defined anywhere. Further, different implementations behave differently in this respect: some prevent copy construction of objects of these types by declaring their copy ctors and assignment operators private, others exhibit undefined behavior, while others still give these operations well-defined semantics.

Note that this problem doesn't seem to be isolated to just the three types mentioned above. A number of other types in the library section of the standard provide a compiler-generated copy ctor and assignment operator yet fail to specify their semantics. It's believed that the only types for which this is actually a problem (i.e. types where the compiler-generated default may be inappropriate and may not have been intended) are locale facets. See issue 439.

Proposed resolution:

27.5.2 [lib.streambuf]: Add into the synopsis, public section, just above the destructor declaration:

basic_streambuf(const basic_streambuf& sb);
basic_streambuf& operator=(const basic_streambuf& sb);

Insert after 27.5.2.1, paragraph 2:

basic_streambuf(const basic_streambuf& sb);

Constructs a copy of sb.

Postconditions:

                eback() == sb.eback()
                gptr()  == sb.gptr()
                egptr() == sb.egptr()
                pbase() == sb.pbase()
                pptr()  == sb.pptr()
                epptr() == sb.epptr()
                getloc() == sb.getloc()
basic_streambuf& operator=(const basic_streambuf& sb);

Assigns the data members of sb to this.

Postconditions:

                eback() == sb.eback()
                gptr()  == sb.gptr()
                egptr() == sb.egptr()
                pbase() == sb.pbase()
                pptr()  == sb.pptr()
                epptr() == sb.epptr()
                getloc() == sb.getloc()

Returns: *this.

27.7.1 [lib.stringbuf]:

Option A:

Insert into the basic_stringbuf synopsis in the private section:

basic_stringbuf(const basic_stringbuf&);             // not defined
basic_stringbuf& operator=(const basic_stringbuf&);  // not defined

Option B:

Insert into the basic_stringbuf synopsis in the public section:

basic_stringbuf(const basic_stringbuf& sb);
basic_stringbuf& operator=(const basic_stringbuf& sb);

27.7.1.1, insert after paragraph 4:

basic_stringbuf(const basic_stringbuf& sb);

Constructs an independent copy of sb as if with sb.str(), and with the openmode that sb was constructed with.

Postconditions:

               str() == sb.str()
               gptr()  - eback() == sb.gptr()  - sb.eback()
               egptr() - eback() == sb.egptr() - sb.eback()
               pptr()  - pbase() == sb.pptr()  - sb.pbase()
               getloc() == sb.getloc()

Note: The only requirement on epptr() is that it point beyond the initialized range if an output sequence exists. There is no requirement that epptr() - pbase() == sb.epptr() - sb.pbase().

basic_stringbuf& operator=(const basic_stringbuf& sb);

After assignment the basic_stringbuf has the same state as if it were initially copy constructed from sb, except that the basic_stringbuf is allowed to retain any excess capacity it might have, which may in turn affect the value of epptr().

27.8.1.1 [lib.filebuf]

Insert at the bottom of the basic_filebuf synopsis:

private:
  basic_filebuf(const basic_filebuf&);             // not defined
  basic_filebuf& operator=(const basic_filebuf&);  // not defined

[Kona: this is an issue for basic_streambuf itself and for its derived classes. We are leaning toward allowing basic_streambuf to be copyable, and specifying its precise semantics. (Probably the obvious: copying the buffer pointers.) We are less sure whether the streambuf derived classes should be copyable. Howard will write up a proposal.]

[Sydney: Dietmar presented a new argument against basic_streambuf being copyable: it can lead to an encapsulation violation. Filebuf inherits from streambuf. Now suppose you inherit a my_hijacking_buf from streambuf. You can copy the streambuf portion of a filebuf to a my_hijacking_buf, giving you access to the pointers into the filebuf's internal buffer. Perhaps not a very strong argument, but it was strong enough to make people nervous. There was weak preference for having streambuf not be copyable. There was weak preference for having stringbuf not be copyable even if streambuf is. Move this issue to open for now. ]

[ 2007-01-12, Howard: Rvalue Reference Recommendations for Chapter 27 recommends protected copy constructor and assignment for basic_streambuf with the same semantics as would be generated by the compiler. These members aid in derived classes implementing move semantics. A protected copy constructor and copy assignment operator do not expose encapsulation more so than it is today as each data member of a basic_streambuf is already both readable and writable by derived classes via various get/set protected member functions (eback(), setp(), etc.). Rather a protected copy constructor and copy assignment operator simply make the job of derived classes implementing move semantics less tedious and error prone. ]
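The point of the note above can be shown with a short sketch; my_buf and its extra constructor are hypothetical and only demonstrate how a base-class copy constructor spares a derived class from re-copying the six buffer pointers and the locale by hand when implementing move-like semantics.

    #include <streambuf>
    #include <string>

    class my_buf : public std::streambuf {
        std::string storage_;
    public:
        my_buf() {}
        // "Move-from" construction in the sense discussed above.
        my_buf(my_buf& other, int /* move tag */)
            : std::streambuf(other)   // copies eback/gptr/egptr, pbase/pptr/epptr, getloc()
        {
            storage_.swap(other.storage_);
            other.setg(0, 0, 0);      // leave the source empty but valid
            other.setp(0, 0);
        }
    };

    int main()
    {
        my_buf a;
        my_buf b(a, 0);
    }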

Rationale:

27.5.2 [lib.streambuf]: The proposed basic_streambuf copy constructor and assignment operator are the same as currently implied by the lack of declarations: public and simply copies the data members. This resolution is not a change but a clarification of the current standard.

27.7.1 [lib.stringbuf]: There are two reasonable options: A) Make basic_stringbuf not copyable. This is likely the status-quo of current implementations. B) Reasonable copy semantics of basic_stringbuf can be defined and implemented. A copyable basic_stringbuf is arguably more useful than a non-copyable one. This should be considered as new functionality and not the fixing of a defect. If option B is chosen, ramifications from issue 432 are taken into account.

27.8.1.1 [lib.filebuf]: There are no reasonable copy semantics for basic_filebuf.


423. effects of negative streamsize in iostreams

Section: 27 [input.output] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all other issues in [input.output].

View all issues with Open status.

Discussion:

A third party test suite tries to exercise istream::ignore(N) with a negative value of N and expects that the implementation will treat N as if it were 0. Our implementation asserts that (N >= 0) holds and aborts the test.

I can't find anything in section 27 that prohibits such values but I don't see what the effects of such calls should be, either (this applies to a number of unformatted input functions as well as some member functions of the basic_streambuf template).

Proposed resolution:

I propose that we add to each function in clause 27 that takes an argument, say N, of type streamsize a Requires clause saying that "N >= 0." The intent is to allow negative streamsize values in calls to precision() and width() but disallow it in calls to streambuf::sgetn(), istream::ignore(), or ostream::write().
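Concretely, the distinction such Requires clauses would draw looks like this (a sketch; the effect of the second call is exactly what is currently left unspecified):

    #include <sstream>

    int main()
    {
        std::istringstream in("abc");
        in.width(-4);    // negative streamsize would remain permitted here
        in.ignore(-1);   // ...but would violate the proposed "n >= 0" Requires clause
    }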

[Kona: The LWG agreed that this is probably what we want. However, we need a review to find all places where functions in clause 27 take arguments of type streamsize that shouldn't be allowed to go negative. Martin will do that review.]


427. stage 2 and rationale of DR 221

Section: 22.2.2.1.2 [facet.num.get.virtuals] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View other active issues in [facet.num.get.virtuals].

View all other issues in [facet.num.get.virtuals].

View all issues with Open status.

Discussion:

The requirements specified in Stage 2 and reiterated in the rationale of DR 221 (and echoed again in DR 303) specify that num_get<charT>:: do_get() compares characters on the stream against the widened elements of "012...abc...ABCX+-"

An implementation is required to allow programs to instantiate the num_get template on any charT that satisfies the requirements on a user-defined character type. These requirements do not include the ability of the character type to be equality comparable (the char_traits template must be used to perform tests for equality). Hence, the num_get template cannot be implemented to support any arbitrary character type. The num_get template must either make the assumption that the character type is equality-comparable (as some popular implementations do), or it may use char_traits<charT> to do the comparisons (some other popular implementations do that). This diversity of approaches makes it difficult to write portable programs that attempt to instantiate the num_get template on user-defined types.

[Kona: the heart of the problem is that we're theoretically supposed to use traits classes for all fundamental character operations like assignment and comparison, but facets don't have traits parameters. This is a fundamental design flaw and it appears all over the place, not just in this one place. It's not clear what the correct solution is, but a thorough review of facets and traits is in order. The LWG considered and rejected the possibility of changing numeric facets to use narrowing instead of widening. This may be a good idea for other reasons (see issue 459), but it doesn't solve the problem raised by this issue. Whether we use widen or narrow the num_get facet still has no idea which traits class the user wants to use for the comparison, because only streams, not facets, are passed traits classes. The standard does not require that two different traits classes with the same char_type must necessarily have the same behavior.]

Informally, one possibility: require that some of the basic character operations, such as eq, lt, and assign, must behave the same way for all traits classes with the same char_type. If we accept that limitation on traits classes, then the facet could reasonably be required to use char_traits<charT>.

Proposed resolution:


430. valarray subset operations

Section: 26.5.2.4 [valarray.sub] Status: Open Submitter: Martin Sebor Date: 2003-09-18

View all issues with Open status.

Discussion:

The standard fails to specify the behavior of valarray::operator[](slice) and other valarray subset operations when they are passed an "invalid" slice object, i.e., either a slice that doesn't make sense at all (e.g., slice (0, 1, 0)) or one that doesn't specify a valid subset of the valarray object (e.g., slice (2, 1, 1) for a valarray of size 1).

[Kona: the LWG believes that invalid slices should invoke undefined behavior. Valarrays are supposed to be designed for high performance, so we don't want to require specific checking. We need wording to express this decision.]

[ Bellevue: ]

Please note that the standard also fails to specify the behavior of slice_array and gslice_array in the valid case. Bill Plauger will endeavor to provide revised wording for slice_array and gslice_array.

[ post-Bellevue: Bill provided wording. ]

Proposed resolution:

Insert after 26.5.2.4 [valarray.sub], paragraph 1:

The member operator is overloaded to provide several ways to select sequences of elements from among those controlled by *this. The first group of five member operators work in conjunction with various overloads of operator= (and other assigning operators) to allow selective replacement (slicing) of the controlled sequence. The selected elements must exist.

The first member operator selects element off. For example:

valarray<char> v0("abcdefghijklmnop", 16);
v0[3] = 'A';
// v0 == valarray<char>("abcAefghijklmnop", 16)

The second member operator selects those elements of the controlled sequence designated by slicearr. For example:

valarray<char> v0("abcdefghijklmnop", 16);
valarray<char> v1("ABCDE", 5);
v0[slice(2, 5, 3)] = v1;
// v0 == valarray<char>("abAdeBghCjkDmnEp", 16)

The third member operator selects those elements of the controlled sequence designated by gslicearr. For example:

valarray<char> v0("abcdefghijklmnop", 16);
valarray<char> v1("ABCDEF", 6);
const size_t lv[] = {2, 3};
const size_t dv[] = {7, 2};
const valarray<size_t> len(lv, 2), str(dv, 2);
v0[gslice(3, len, str)] = v1;
// v0 == valarray<char>("abcAeBgCijDlEnFp", 16)

The fourth member operator selects those elements of the controlled sequence designated by boolarr. For example:

valarray<char> v0("abcdefghijklmnop", 16);
valarray<char> v1("ABC", 3);
const bool vb[] = {false, false, true, true, false, true};
v0[valarray<bool>(vb, 6)] = v1;
// v0 == valarray<char>("abABeCghijklmnop", 16)

The fifth member operator selects those elements of the controlled sequence designated by indarr. For example:

valarray<char> v0("abcdefghijklmnop", 16);
valarray<char> v1("ABCDE", 5);
const size_t vi[] = {7, 5, 2, 3, 8};
v0[valarray<size_t>(vi, 5)] = v1;
// v0 == valarray<char>("abCDeBgAEjklmnop", 16)

The second group of five member operators each construct an object that represents the value(s) selected. The selected elements must exist.

The sixth member operator returns the value of element off. For example:

valarray<char> v0("abcdefghijklmnop", 16);
// v0[3] returns 'd'

The seventh member operator returns an object of class valarray<Ty> containing those elements of the controlled sequence designated by slicearr. For example:

valarray<char> v0("abcdefghijklmnop", 16);
// v0[slice(2, 5, 3)] returns valarray<char>("cfilo", 5)

The eighth member operator selects those elements of the controlled sequence designated by gslicearr. For example:

valarray<char> v0("abcdefghijklmnop", 16);
const size_t lv[] = {2, 3};
const size_t dv[] = {7, 2};
const valarray<size_t> len(lv, 2), str(dv, 2);
// v0[gslice(3, len, str)] returns
//    valarray<char>("dfhkmo", 6)

The ninth member operator selects those elements of the controlled sequence designated by boolarr. For example:

valarray<char> v0("abcdefghijklmnop", 16);
const bool vb[] = {false, false, true, true, false, true};
// v0[valarray<bool>(vb, 6)] returns
//    valarray<char>("cdf", 3)

The last member operator selects those elements of the controlled sequence designated by indarr. For example:

valarray<char> v0("abcdefghijklmnop", 16);
const size_t vi[] = {7, 5, 2, 3, 8};
// v0[valarray<size_t>(vi, 5)] returns
//    valarray<char>("hfcdi", 5)

431. Swapping containers with unequal allocators

Section: 20.1.2 [allocator.requirements], 25 [algorithms] Status: Open Submitter: Matt Austern Date: 2003-09-20

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with Open status.

Discussion:

Clause 20.1.2 [allocator.requirements] paragraph 4 says that implementations are permitted to supply containers that are unable to cope with allocator instances and that container implementations may assume that all instances of an allocator type compare equal. We gave implementers this latitude as a temporary hack, and eventually we want to get rid of it. What happens when we're dealing with allocators that don't compare equal?

In particular: suppose that v1 and v2 are both objects of type vector<int, my_alloc> and that v1.get_allocator() != v2.get_allocator(). What happens if we write v1.swap(v2)? Informally, three possibilities:

1. This operation is illegal. Perhaps we could say that an implementation is required to check and to throw an exception, or perhaps we could say it's undefined behavior.

2. The operation performs a slow swap (i.e. using three invocations of operator=), leaving each allocator with its original container. This would be an O(N) operation.

3. The operation swaps both the vectors' contents and their allocators. This would be an O(1) operation. That is:

    my_alloc a1(...);
    my_alloc a2(...);
    assert(a1 != a2);

    vector<int, my_alloc> v1(a1);
    vector<int, my_alloc> v2(a2);
    assert(a1 == v1.get_allocator());
    assert(a2 == v2.get_allocator());

    v1.swap(v2);
    assert(a1 == v2.get_allocator());
    assert(a2 == v1.get_allocator());
  

[Kona: This is part of a general problem. We need a paper saying how to deal with unequal allocators in general.]

[pre-Sydney: Howard argues for option 3 in N1599. ]

[ 2007-01-12, Howard: This issue will now tend to come up more often with move constructors and move assignment operators. For containers, these members transfer resources (i.e. the allocated memory) just like swap. ]

[ Batavia: There is agreement to overload the container swap on the allocator's Swappable requirement using concepts. If the allocator supports Swappable, then container's swap will swap allocators, else it will perform a "slow swap" using copy construction and copy assignment. ]

Proposed resolution:


446. Iterator equality between different containers

Section: 24.1 [iterator.requirements], 23.1 [container.requirements] Status: Open Submitter: Andy Koenig Date: 2003-12-16

View other active issues in [iterator.requirements].

View all other issues in [iterator.requirements].

View all issues with Open status.

Discussion:

What requirements does the standard place on equality comparisons between iterators that refer to elements of different containers. For example, if v1 and v2 are empty vectors, is v1.end() == v2.end() allowed to yield true? Is it allowed to throw an exception?

The standard appears to be silent on both questions.
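The question in code form (the comparison below is exactly the case whose meaning is in doubt):

    #include <vector>

    int main()
    {
        std::vector<int> v1, v2;
        bool b = (v1.end() == v2.end());   // true? false? undefined? an exception?
        (void)b;
    }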

[Sydney: The intention is that comparing two iterators from different containers is undefined, but it's not clear if we say that, or even whether it's something we should be saying in clause 23 or in clause 24. Intuitively we might want to say that equality is defined only if one iterator is reachable from another, but figuring out how to say it in any sensible way is a bit tricky: reachability is defined in terms of equality, so we can't also define equality in terms of reachability. ]

Proposed resolution:


454. basic_filebuf::open should accept wchar_t names

Section: 27.8.1.4 [filebuf.members] Status: Open Submitter: Bill Plauger Date: 2004-01-30

View all other issues in [filebuf.members].

View all issues with Open status.

Duplicate of: 105

Discussion:

    basic_filebuf *basic_filebuf::open(const char *, ios_base::open_mode);

should be supplemented with the overload:

    basic_filebuf *basic_filebuf::open(const wchar_t *, ios_base::open_mode);

Depending on the operating system, one of these forms is fundamental and the other requires an implementation-defined mapping to determine the actual filename.

[Sydney: Yes, we want to allow wchar_t filenames. Bill will provide wording.]

[ In Toronto we noted that this is issue 5 from N1569. ]

How does this interact with the newly-defined character types, and how do we avoid interface explosion considering std::string overloads that were added? Propose another solution that is different than the suggestion proposed by PJP.

Suggestion is to make a member template function for basic_string (for char, wchar_t, u16char, u32char instantiations), and then just keep a const char* member.

Goal is to do implicit conversion between character string literals to appropriate basic_string type. Not quite sure if this is possible.

Implementors are free to add specific overloads for non-char character types.

[ Martin adds pre-Sophia Antipolis: ]

Please see issue 454: problems and solutions.

[ Sophia Antipolis: ]

Beman is concerned that basic_filebuf is not usefully changed unless fstream is also changed; this also only handles wchar_t and not other character types.

The TR2 filesystem library is a more complete solution, but is not available soon.

[ Martin adds: please reference N2683 for problems and solutions. ]

Proposed resolution:

Change from:

basic_filebuf<charT,traits>* open(
	const char* s,
	ios_base::openmode mode );

Effects: If is_open() != false, returns a null pointer. Otherwise, initializes the filebuf as required. It then opens a file, if possible, whose name is the NTBS s ("as if" by calling std::fopen(s,modstr)).

to:

basic_filebuf<charT,traits>* open(
	const char* s,
	ios_base::openmode mode );

basic_filebuf<charT,traits>* open(
	const wchar_t* ws,
	ios_base::openmode mode );

Effects: If is_open() != false, returns a null pointer. Otherwise, initializes the filebuf as required. It then opens a file, if possible, whose name is the NTBS s ("as if" by calling std::fopen(s,modstr)). For the second signature, the NTBS s is determined from the WCBS ws in an implementation-defined manner.

(NOTE: For a system that "naturally" represents a filename as a WCBS, the NTBS s in the first signature may instead be mapped to a WCBS; if so, it follows the same mapping rules as the first argument to open.)
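For reference, the usage the added overload is meant to enable would look like the following; this compiles only if the proposed signature (or an equivalent vendor extension) is available, so it is a sketch of intent rather than portable code.

    #include <fstream>

    int main()
    {
        std::filebuf fb;
        fb.open(L"r\u00e9sum\u00e9.txt", std::ios_base::in);   // wide-character file name
    }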

Rationale:

Slightly controversial, but by a 7-1 straw poll the LWG agreed to move this to Ready. The controversy was because the mapping between wide names and files in a filesystem is implementation defined. The counterargument, which most but not all LWG members accepted, is that the mapping between narrow file names and files is also implementation defined.

[Lillehammer: Moved back to "open" status, at Beman's urging. (1) Why just basic_filebuf, instead of also basic_fstream (and possibly other things too). (2) Why not also constructors that take std::basic_string? (3) We might want to wait until we see Beman's filesystem library; we might decide that it obviates this.]

[ post Bellevue: ]

Move again to Ready.

There is a timing issue here. Since the filesystem library will not be in C++0x, this should be brought forward. This solution would remain valid in the context of the proposed filesystem.

This issue has been kicking around for a while, and the wchar_t addition alone would help many users. Thus, we suggest putting this on the reflector list with an invitation for someone to produce proposed wording that covers basic_fstream. In the meantime, we suggest that the proposed wording be adopted as-is.

If more of the Lillehammer questions come back, they should be introduced as separate issues.


458. 24.1.5 contains unintended limitation for operator-

Section: 24.1.5 [random.access.iterators] Status: Open Submitter: Daniel Frey Date: 2004-02-27

View all other issues in [random.access.iterators].

View all issues with Open status.

Discussion:

In 24.1.5 [lib.random.access.iterators], table 76 the operational semantics for the expression "r -= n" are defined as "return r += -n". This means, that the expression -n must be valid, which is not the case for unsigned types.

[ Sydney: Possibly not a real problem, since difference type is required to be a signed integer type. However, the wording in the standard may be less clear than we would like. ]

Proposed resolution:

To remove this limitation, I suggest to change the operational semantics for this column to:

    { Distance m = n; 
      if (m >= 0) 
        while (m--) --r; 
      else 
        while (m++) ++r;
      return r; }

459. Requirement for widening in stage 2 is overspecification

Section: 22.2.2.1.2 [facet.num.get.virtuals] Status: Open Submitter: Martin Sebor Date: 2004-03-16

View other active issues in [facet.num.get.virtuals].

View all other issues in [facet.num.get.virtuals].

View all issues with Open status.

Discussion:

When parsing strings of wide-character digits, the standard requires the library to widen narrow-character "atoms" and compare the widened atoms against the characters that are being parsed. Simply narrowing the wide characters would be far simpler, and probably more efficient. The two choices are equivalent except in convoluted test cases, and many implementations already ignore the standard and use narrow instead of widen.
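For concreteness, the two comparison strategies under discussion look roughly like this; matches_atom is a hypothetical helper that computes both results side by side, whereas a real num_get implementation would use exactly one of them.

    #include <iostream>
    #include <locale>
    #include <string>   // char_traits

    template <class charT>
    bool matches_atom(charT c, char atom, const std::ctype<charT>& ct)
    {
        // (1) widen the atom and compare in the wide domain
        bool by_widen  = std::char_traits<charT>::eq(ct.widen(atom), c);
        // (2) narrow the input and compare in the narrow domain
        bool by_narrow = (ct.narrow(c, '\0') == atom);
        return by_widen && by_narrow;   // equal except in convoluted cases
    }

    int main()
    {
        const std::ctype<wchar_t>& ct =
            std::use_facet<std::ctype<wchar_t> >(std::locale::classic());
        std::cout << matches_atom(L'7', '7', ct) << '\n';   // prints 1
    }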

First, I disagree that using narrow() instead of widen() would necessarily have unfortunate performance implications. A possible implementation of narrow() that allows num_get to be implemented in a much simpler and arguably comparably efficient way as calling widen() allows, i.e. without making a virtual call to do_narrow every time, is as follows:

  inline char ctype<wchar_t>::narrow (wchar_t wc, char dflt) const
  {
      const unsigned wi = unsigned (wc);

      if (wi > UCHAR_MAX)
          return typeid (*this) == typeid (ctype<wchar_t>) ?
                 dflt : do_narrow (wc, dflt);

      if (narrow_ [wi] < 0) {
         const char nc = do_narrow (wc, dflt);
         if (nc == dflt)
             return dflt;
         narrow_ [wi] = nc;
      }

      return char (narrow_ [wi]);
  }

Second, I don't think the change proposed in the issue (i.e., to use narrow() instead of widen() during Stage 2) would be at all drastic. Existing implementations with the exception of libstdc++ currently already use narrow() so the impact of the change on programs would presumably be isolated to just a single implementation. Further, since narrow() is not required to translate alternate wide digit representations such as those mentioned in issue 303 to their narrow equivalents (i.e., the portable source characters '0' through '9'), the change does not necessarily imply that these alternate digits would be treated as ordinary digits and accepted as part of numbers during parsing. In fact, the requirement in 22.2.1.1.2 [locale.ctype.virtuals], p13 forbids narrow() to translate an alternate digit character, wc, to an ordinary digit in the basic source character set unless the expression (ctype<charT>::is(ctype_base::digit, wc) == true) holds. This in turn is prohibited by the C standard (7.25.2.1.5, 7.25.2.1.5, and 5.2.1, respectively) for charT of either char or wchar_t.

[Sydney: To a large extent this is a nonproblem. As long as you're only trafficking in char and wchar_t we're only dealing with a stable character set, so you don't really need either 'widen' or 'narrow': can just use literals. Finally, it's not even clear whether widen-vs-narrow is the right question; arguably we should be using codecvt instead.]

Proposed resolution:

Change stage 2 so that implementations are permitted to use either technique to perform the comparison:

  1. call widen on the atoms and compare (either by using operator== or char_traits<charT>::eq) the input with the widened atoms, or
  2. call narrow on the input and compare the narrow input with the atoms
  3. do (1) or (2) only if charT is not char or wchar_t, respectively; i.e., avoid calling widen or narrow if the source and destination types are the same

463. auto_ptr usability issues

Section: D.9.1 [auto.ptr] Status: Open Submitter: Rani Sharoni Date: 2003-12-07

View all other issues in [auto.ptr].

View all issues with Open status.

Discussion:

TC1 CWG DR #84 effectively made the template<class Y> operator auto_ptr<Y>() member of auto_ptr (20.4.5.3/4) obsolete.

The sole purpose of this obsolete conversion member is to enable the copy initialization of a base from an r-value of derived type (or any convertible type, such as a cv-qualified type), as in the following case:

#include <memory>
using std::auto_ptr;

struct B {};
struct D : B {};

auto_ptr<D> source();
int sink(auto_ptr<B>);
int x1 = sink( source() ); // #1 EDG - no suitable copy constructor

The excellent analysis of conversion operations that was given in the final auto_ptr proposal (http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/papers/1997/N1128.pdf) explicitly specifies this case analysis (case 4). DR #84 makes the analysis wrong and actually comes to forbid the loophole that was exploited by the auto_ptr designers.

I didn't encounter any compliant compiler (e.g. EDG, GCC, BCC and VC) that ever allowed this case. This is probably because it requires 3 user defined conversions and in fact current compilers conform to DR #84.

I was surprised to discover that the obsolete conversion member actually has negative impact of the copy initialization base from l-value derived case:

auto_ptr<D> dp;
int x2 = sink(dp); // #2 EDG - more than one user-defined conversion applies

I'm sure that the original intention was allowing this initialization using the template<class Y> auto_ptr(auto_ptr<Y>& a) constructor (20.4.5.1/4) but since in this copy initialization it's merely user defined conversion (UDC) and the obsolete conversion member is UDC with the same rank (for the early overloading stage) there is an ambiguity between them.

Removing the obsolete member will have impact on code that explicitly invokes it:

int y = sink(source().operator auto_ptr<B>());

IMHO no one ever wrote such awkward code and the reasonable workaround for #1 is:

int y = sink( auto_ptr<B>(source()) );

I was even more surprised to find out that after removing the obsolete conversion member the initialization was still ill-formed:

    int x3 = sink(dp); // #3 EDG - no suitable copy constructor

This copy initialization semantically requires copy constructor which means that both template conversion constructor and the auto_ptr_ref conversion member (20.4.5.3/3) are required which is what was explicitly forbidden in DR #84. This is a bit amusing case in which removing ambiguity results with no candidates.

I also found exception safety issue with auto_ptr related to auto_ptr_ref:

int f(auto_ptr<B>, std::string);
auto_ptr<B> source2();

// string constructor throws while auto_ptr_ref
// "holds" the pointer
int x4 = f(source2(), "xyz"); // #4

The theoretic execution sequence that will cause a leak:

  1. call auto_ptr<B>::operator auto_ptr_ref<B>()
  2. call string::string(char const*) and throw

According to 20.4.5.3/3 and 20.4.5/2 the auto_ptr_ref conversion member returns auto_ptr_ref<Y> that holds *this and this is another defect since the type of *this is auto_ptr<X> where X might be different from Y. Several library vendors (e.g. SGI) implement auto_ptr_ref<Y> with Y* as member, which is much more reasonable. Other vendors implemented auto_ptr_ref as defectively required, and it results in awkward and catastrophic code:

    int oops = sink(auto_ptr<B>(source())); // warning recursive on all control paths

Dave Abrahams noticed that there is no specification saying that auto_ptr_ref copy constructor can't throw.

My proposal solves all the above issues and significantly simplifies the auto_ptr implementation. One of the fundamental requirements of auto_ptr is that it can be constructed in an intuitive manner (i.e. like ordinary pointers) but with strict ownership semantics, which implies that the source auto_ptr in an initialization must be non-const. My idea is to add an additional constructor template whose sole purpose is to generate an ill-formed instance (diagnostic required) for const auto_ptr arguments during instantiation of the declaration. This special constructor will not be instantiated for other types, which is achievable using 14.8.2/2 (SFINAE). Having this constructor in hand makes the constructor template<class Y> auto_ptr(auto_ptr<Y> const&) legitimate, since the actual argument can't be const yet non-const r-values are acceptable.

This implementation technique makes the "private auxiliary class" auto_ptr_ref obsolete and I found out that modern C++ compilers (e.g. EDG, GCC and VC) consume the new implementation as expected and allow all intuitive initialization and assignment cases while rejecting illegal cases that involve const auto_ptr arguments.

The proposed auto_ptr interface:

namespace std {
    template<class X> class auto_ptr {
    public:
        typedef X element_type;

        // 20.4.5.1 construct/copy/destroy:
        explicit auto_ptr(X* p=0) throw();
        auto_ptr(auto_ptr&) throw();
        template<class Y> auto_ptr(auto_ptr<Y> const&) throw();
        auto_ptr& operator=(auto_ptr&) throw();
        template<class Y> auto_ptr& operator=(auto_ptr<Y>) throw();
        ~auto_ptr() throw();

        // 20.4.5.2 members:
        X& operator*() const throw();
        X* operator->() const throw();
        X* get() const throw();
        X* release() throw();
        void reset(X* p=0) throw();

    private:
        template<class U>
        auto_ptr(U& rhs, typename
unspecified_error_on_const_auto_ptr<U>::type = 0);
    };
}

One compliant technique to implement the unspecified_error_on_const_auto_ptr helper class is using additional private auto_ptr member class template like the following:

template<typename T> struct unspecified_error_on_const_auto_ptr;

template<typename T>
struct unspecified_error_on_const_auto_ptr<auto_ptr<T> const>
{ typedef typename auto_ptr<T>::const_auto_ptr_is_not_allowed type; };

There are other techniques to implement this helper class that might work better for different compilers (i.e. give better diagnostics), and therefore I suggest defining its semantic behavior without mandating any specific implementation. IMO, and I didn't find any compiler that thinks otherwise, 14.7.1/5 doesn't theoretically defeat the suggested technique, but I suggest verifying this with core language experts.

Further changes in standard text:

Remove section 20.4.5.3

Change 20.4.5/2 to read something like: Initializing auto_ptr<X> from const auto_ptr<Y> will result with unspecified ill-formed declaration that will require unspecified diagnostic.

Change 20.4.5.1/4,5,6 to read:

template<class Y> auto_ptr(auto_ptr<Y> const& a) throw();

4 Requires: Y* can be implicitly converted to X*.

5 Effects: Calls const_cast<auto_ptr<Y>&>(a).release().

6 Postconditions: *this holds the pointer returned from a.release().

Change 20.4.5.1/10

template<class Y> auto_ptr& operator=(auto_ptr<Y> a) throw();

10 Requires: Y* can be implicitly converted to X*. The expression delete get() is well formed.

LWG TC DR #127 is obsolete.

Notice that the copy constructor and copy assignment operator should remain as before and accept non-const auto_ptr& since they have effect on the form of the implicitly declared copy constructor and copy assignment operator of class that contains auto_ptr as member per 12.8/5,10:

struct X {
    // implicit X(X&)
    // implicit X& operator=(X&)
    auto_ptr<D> aptr_;
};

In most cases this indicates sloppy programming, but it preserves the current auto_ptr behavior.

Dave Abrahams encouraged me to suggest a fallback implementation in case my suggestion that involves removing auto_ptr_ref is not accepted. In this case removing the obsolete conversion member to auto_ptr<Y> and 20.4.5.3/4,5 is still required in order to eliminate ambiguity in legal cases. The two constructors that I suggested will coexist with the current members but will make auto_ptr_ref obsolete in initialization contexts. auto_ptr_ref will be effective in assignment contexts as suggested in DR #127 and I can't see any serious exception safety issues in those cases (although it's possible to synthesize such). auto_ptr_ref<X> semantics will have to be revised to say that it strictly holds a pointer of type X, and not a reference to an auto_ptr, in order to handle cases in which auto_ptr_ref<Y> is constructed from auto_ptr<X> where X is different from Y (i.e. assignment from r-value derived to base).

[Redmond: punt for the moment. We haven't decided yet whether we want to fix auto_ptr for C++-0x, or remove it and replace it with move_ptr and unique_ptr.]

[ Oxford 2007: Recommend NAD. We're just going to deprecate it. It still works for simple use cases and people know how to deal with it. Going forward unique_ptr is the recommended tool. ]

[ 2007-11-09: Reopened at the request of David Abrahams, Alisdair Meredith and Gabriel Dos Reis. ]

Proposed resolution:

Change the synopsis in D.9.1 [auto.ptr]:

namespace std { 
  template <class Y> struct auto_ptr_ref {};

  // exposition only
  template <class T> struct constant_object;

  // exposition only
  template <class T>
  struct cannot_transfer_ownership_from
    : constant_object<T> {};

  template <class X> class auto_ptr { 
  public: 
    typedef X element_type; 

    // D.9.1.1 construct/copy/destroy: 
    explicit auto_ptr(X* p =0) throw(); 
    auto_ptr(auto_ptr&) throw(); 
    template<class Y> auto_ptr(auto_ptr<Y> const&) throw(); 
    auto_ptr& operator=(auto_ptr&) throw(); 
    template<class Y> auto_ptr& operator=(auto_ptr<Y>&) throw();
    auto_ptr& operator=(auto_ptr_ref<X> r) throw();
    ~auto_ptr() throw(); 

    // D.9.1.2 members: 
    X& operator*() const throw();
    X* operator->() const throw();
    X* get() const throw();
    X* release() throw();
    void reset(X* p =0) throw();

    // D.9.1.3 conversions:
    auto_ptr(auto_ptr_ref<X>) throw();
    template<class Y> operator auto_ptr_ref<Y>() throw();
    template<class Y> operator auto_ptr<Y>() throw();

    // exposition only
    template<class U>
    auto_ptr(U& rhs, typename cannot_transfer_ownership_from<U>::error = 0);
  }; 

  template <> class auto_ptr<void> 
  { 
  public: 
    typedef void element_type; 
  }; 

}

Remove D.9.1.3 [auto.ptr.conv].

Change D.9.1 [auto.ptr], p3:

The auto_ptr provides a semantics of strict ownership. An auto_ptr owns the object it holds a pointer to. Copying an auto_ptr copies the pointer and transfers ownership to the destination. If more than one auto_ptr owns the same object at the same time the behavior of the program is undefined. Templates constant_object and cannot_transfer_ownership_from, and the final constructor of auto_ptr are for exposition only. For any types X and Y, initializing auto_ptr<X> from const auto_ptr<Y> is ill-formed, diagnostic required. [Note: The uses of auto_ptr include providing temporary exception-safety for dynamically allocated memory, passing ownership of dynamically allocated memory to a function, and returning dynamically allocated memory from a function. auto_ptr does not meet the CopyConstructible and Assignable requirements for Standard Library container elements and thus instantiating a Standard Library container with an auto_ptr results in undefined behavior. -- end note]

Change D.9.1.1 [auto.ptr.cons], p5:

template<class Y> auto_ptr(auto_ptr<Y> const& a) throw();

Requires: Y* can be implicitly converted to X*.

Effects: Calls const_cast<auto_ptr<Y>&>(a).release().

Postconditions: *this holds the pointer returned from a.release().

Change D.9.1.1 [auto.ptr.cons], p10:

template<class Y> auto_ptr& operator=(auto_ptr<Y>& a) throw();

Requires: Y* can be implicitly converted to X*. The expression delete get() is well formed.

Effects: Calls reset(a.release()).

Returns: *this.


471. result of what() implementation-defined

Section: 18.6.1 [type.info] Status: Open Submitter: Martin Sebor Date: 2004-06-28

View all other issues in [type.info].

View all issues with Open status.

Discussion:

[lib.exception] specifies the following:

    exception (const exception&) throw();
    exception& operator= (const exception&) throw();

    -4- Effects: Copies an exception object.
    -5- Notes: The effects of calling what() after assignment
        are implementation-defined.

First, does the Note only apply to the assignment operator? If so, what are the effects of calling what() on a copy of an object? Is the returned pointer supposed to point to an identical copy of the NTBS returned by what() called on the original object or not?

Second, is this Note intended to extend to all the derived classes in section 19? I.e., does the standard provide any guarantee for the effects of what() called on a copy of any of the derived classes described in section 19?

Finally, if the answer to the first question is no, I believe it constitutes a defect since throwing an exception object typically implies invoking the copy ctor on the object. If the answer is yes, then I believe the standard ought to be clarified to spell out exactly what the effects are on the copy (i.e., after the copy ctor was called).

[Redmond: Yes, this is fuzzy. The issue of derived classes is fuzzy too.]

[ Batavia: Howard provided wording. ]

[ Bellevue: ]

Eric is concerned this is unimplementable, due to the nothrow guarantees. A suggested implementation would involve reference counting.

Is the implied reference counting subtle enough to call out a note on implementation? Probably not.

If reference counting is required, could we tighten the specification further to require the same pointer value? Probably an overspecification, especially if exception classes defer evaluation of the final string to calls to what().

Remember the issue was moved to open and not resolved at Batavia, but cannot remember who objected. To canvass a dissenting opinion: please speak up if you disagree while reading these minutes!

Move to Ready as we are accepting words unmodified.

[ Sophia Antipolis: ]

The issue was pulled from Ready. It needs to make clear that only homogeneous copying is intended to be supported, not copying from a derived class to a base.

Proposed resolution:

Change 18.7.1 [exception] to:

exception(const exception& e) throw();
exception& operator=(const exception& e) throw();

-4- Effects: Copies an exception object.

-5- Remarks: The effects of calling what() after assignment are implementation-defined.

-5- Throws: Nothing. This also applies to all standard library-defined classes that derive from exception.

-7- Postcondition: strcmp(what(), e.what()) == 0. This also applies to all standard library-defined classes that derive from exception.
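For illustration only (not proposed wording), the postcondition amounts to the following property holding for homogeneous copies, shown here with runtime_error as one of the derived classes covered by the "also applies" sentence:

#include <cassert>
#include <cstring>
#include <stdexcept>

int main()
{
    std::runtime_error e("parse failure");
    std::runtime_error copy(e);                       // homogeneous copy
    assert(std::strcmp(copy.what(), e.what()) == 0);  // guaranteed by the postcondition
}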


473. underspecified ctype calls

Section: 22.2.1.1 [locale.ctype] Status: Open Submitter: Martin Sebor Date: 2004-07-01

View all issues with Open status.

Discussion:

Most ctype member functions come in two forms: one that operates on a single character at a time and another form that operates on a range of characters. Both forms are typically described by a single Effects and/or Returns clause.

The Returns clause of each of the single-character non-virtual forms suggests that the function calls the corresponding single character virtual function, and that the array form calls the corresponding virtual array form. Neither of the two forms of each virtual member function is required to be implemented in terms of the other.

There are three problems:

1. One is that while the standard does suggest that each non-virtual member function calls the corresponding form of the virtual function, it doesn't actually explicitly require it.

Implementations that cache results from some of the virtual member functions for some or all values of their arguments might want to call the array form from the non-array form the first time to fill the cache and avoid any or most subsequent virtual calls. Programs that rely on each form of the virtual function being called from the corresponding non-virtual function will see unexpected behavior when using such implementations.

2. The second problem is that either form of each of the virtual functions can be overridden by a user-defined function in a derived class to return a value that is different from the one produced by the virtual function of the alternate form that has not been overridden.

Thus, it might be possible for, say, ctype::widen(c) to return one value, while ctype::widen(&c, &c + 1, &wc) sets wc to another value. This is almost certainly not intended. Both forms of every function should be required to return the same result for the same character; otherwise the same program using an implementation that calls one form of the functions will behave differently than when using another implementation that calls the other form of the function "under the hood."

3. The last problem is that the standard text fails to specify whether one form of any of the virtual functions is permitted to be implemented in terms of the other form or not, and if so, whether it is required or permitted to call the overridden virtual function or not.

Thus, a program that overrides one of the virtual functions so that it calls the other form which then calls the base member might end up in an infinite loop if the called form of the base implementation of the function in turn calls the other form.
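For illustration only, a minimal sketch (hypothetical facet) of the mutual-call hazard described above: the program overrides only the single-character do_widen() and forwards it to the base array form; if a library's base implementation of the array form in turn makes virtual calls to the single-character form, the two functions call each other forever.

#include <locale>

struct my_ctype : std::ctype<char> {
protected:
    virtual char do_widen(char c) const
    {
        char w;
        // Non-virtual call to the base array form; whether that base
        // implementation calls the virtual do_widen(char) per character
        // is exactly what the standard currently leaves unsaid.
        std::ctype<char>::do_widen(&c, &c + 1, &w);
        return w;
    }
};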

Lillehammer: Part of this isn't a real problem; we already talk about caching (22.1.1/6). But part is a real problem: ctype virtuals may call each other, so users don't know which ones to override to avoid infinite loops.

This is a problem for all facet virtuals, not just ctype virtuals, so we probably want a blanket statement in clause 22 for all facets. The LWG is leaning toward a blanket prohibition, that a facet's virtuals may never call each other. We might want to do that in clause 27 too, for that matter. A review is necessary. Bill will provide wording.

Proposed resolution:


484. Convertible to T

Section: 24.1.1 [input.iterators] Status: Open Submitter: Chris Jefferson Date: 2004-09-16

View all other issues in [input.iterators].

View all issues with Open status.

Discussion:

From comp.std.c++:

I note that given an input iterator a for type T, then *a only has to be "convertible to T", not actually of type T.

Firstly, I can't seem to find an exact definition of "convertible to T". While I assume it is the obvious definition (an implicit conversion), I can't find an exact definition. Is there one?

Slightly more worryingly, there doesn't seem to be any restriction on this type, other than that it is "convertible to T". Consider two input iterators a and b. I would personally assume that most people would expect *a==*b to perform T(*a)==T(*b); however, it doesn't seem that the standard requires that, and whatever type *a is (call it U) could have == defined on it with totally different semantics and still be a valid input iterator.

Is this a correct reading? When using input iterators should I write T(*a) all over the place to be sure that the object I'm using is the class I expect?

This is especially a nuisance for operations that are defined to be "convertible to bool". (This is probably allowed so that implementations could return, say, an int and avoid an unnecessary conversion. However, all implementations I have seen simply return a bool anyway.) Typical implementations of STL algorithms just write things like while(a!=b && *a!=0). But strictly speaking, there are lots of types that are convertible to T but that also overload the appropriate operators so this doesn't behave as expected.

If we want to make code like this legal (which most people seem to expect), then we'll need to tighten up what we mean by "convertible to T".
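For illustration only, a minimal sketch (hypothetical type U) of the kind of pathological overload described above: U is convertible to int, yet its own comparison operators have nothing to do with int's, so *a != 0 need not mean int(*a) != 0.

struct U {
    int v;
    operator int() const { return v; }                // "convertible to int"
    bool operator==(const U&) const { return true; }  // unrelated semantics
    bool operator!=(int) const      { return true; }  // never compares "equal to zero"
};

With an input iterator whose operator* returns U, a loop written as while (a != b && *a != 0) never sees a terminating zero, whereas while (a != b && int(*a) != 0) behaves as the algorithm author expected.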

[Lillehammer: The first part is NAD, since "convertible" is well-defined in core. The second part is basically about pathological overloads. It's a minor problem but a real one. So leave open for now, hope we solve it as part of iterator redesign.]

Proposed resolution:


485. output iterator insufficiently constrained

Section: 24.1.2 [output.iterators] Status: Open Submitter: Chris Jefferson Date: 2004-10-13

View all other issues in [output.iterators].

View all issues with Open status.

Discussion:

The note on 24.1.2 output iterators insufficiently limits what can be performed on output iterators. While it requires that each iterator is progressed through only once and that each iterator is written to only once, it does not require the following things:

Note: Here it is assumed that x is an output iterator of type X which has not yet been assigned to.

a) That each value of the output iterator is written to: The standard allows: ++x; ++x; ++x;

b) That assignments to the output iterator are made in order X a(x); ++a; *a=1; *x=2; is allowed

c) Chains of output iterators cannot be constructed: X a(x); ++a; X b(a); ++b; X c(b); ++c; is allowed, and under the current wording (I believe) x,a,b,c could be written to in any order.

I do not believe this was the intention of the standard.
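For illustration only, the three cases above collected into small sketches; nothing in the current wording rules out any of these uses of an output iterator x of type X:

template <class X> void case_a(X x)   // (a) positions skipped, never written
{ ++x; ++x; ++x; }

template <class X> void case_b(X x)   // (b) writes made out of order
{ X a(x); ++a; *a = 1; *x = 2; }

template <class X> void case_c(X x)   // (c) a chain of copies, each advanced
{                                     //     and written in arbitrary order
    X a(x); ++a;
    X b(a); ++b;
    X c(b); ++c;
    *c = 1; *x = 2; *b = 3; *a = 4;
}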

[Lillehammer: Real issue. There are lots of constraints we intended but didn't specify. Should be solved as part of iterator redesign.]

Proposed resolution:


492. Invalid iterator arithmetic expressions

Section: 23 [containers], 24 [iterators], 25 [algorithms] Status: Open Submitter: Thomas Mang Date: 2004-12-12

View other active issues in [containers].

View all other issues in [containers].

View all issues with Open status.

Discussion:

Various clauses other than clause 25 make use of iterator arithmetic not supported by the iterator category in question. Algorithms in clause 25 are exceptional because of 25 [lib.algorithms], paragraph 9, but this paragraph does not provide semantics to the expression "iterator - n", where n denotes a value of a distance type between iterators.

1) Examples of current wording:

Current wording outside clause 25:

23.2.2.4 [lib.list.ops], paragraphs 19-21: "first + 1", "(i - 1)", "(last - first)"
23.3.1.1 [lib.map.cons], paragraph 4: "last - first"
23.3.2.1 [lib.multimap.cons], paragraph 4: "last - first"
23.3.3.1 [lib.set.cons], paragraph 4: "last - first"
23.3.4.1 [lib.multiset.cons], paragraph 4: "last - first"
24.4.1 [lib.reverse.iterators], paragraph 1: "(i - 1)"

[Important note: The list is not complete, just an illustration. The same issue might well apply to other paragraphs not listed here.]

None of these expressions is valid for the corresponding iterator category.

Current wording in clause 25:

25.1.1 [lib.alg.foreach], paragraph 1: "last - 1"
25.1.3 [lib.alg.find.end], paragraph 2: "[first1, last1 - (last2-first2))"
25.2.8 [lib.alg.unique], paragraph 1: "(i - 1)"
25.2.8 [lib.alg.unique], paragraph 5: "(i - 1)"

However, the current wording of 25 [lib.algorithms], paragraph 9, covers none of these four cases:

Current wording of 25 [lib.algorithms], paragraph 9:

"In the description of the algorithms operator + and - are used for some of the iterator categories for which they do not have to be defined. In these cases the semantics of a+n is the same as that of

{X tmp = a;
advance(tmp, n);
return tmp;
}

and that of b-a is the same as of return distance(a, b)"

This paragraph does not take the expression "iterator - n" into account, where n denotes a value of a distance type between two iterators [Note: According to current wording, the expression "iterator - n" would be resolved as equivalent to "return distance(n, iterator)"]. Even if the expression "iterator - n" were to be reinterpreted as equivalent to "iterator + -n" [Note: This would imply that "a" and "b" were interpreted implicitly as values of iterator types, and "n" as a value of a distance type], then 24.3.4/2 interferes because it says: "Requires: n may be negative only for random access and bidirectional iterators.", and none of the paragraphs quoted above requires the iterators on which the algorithms operate to be of random access or bidirectional category.

2) Description of intended behavior:

For the rest of this Defect Report, it is assumed that the expressions "iterator1 + n" and "iterator1 - iterator2" have the semantics described in the current 25 [lib.algorithms], paragraph 9, but applying to all clauses. The expression "iterator1 - n" is equivalent to a result-iterator for which the expression "result-iterator + n" yields an iterator denoting the same position as iterator1 does. The terms "iterator1", "iterator2" and "result-iterator" shall denote values of an iterator type, and the term "n" shall denote a value of a distance type between two iterators.

All implementations known to the author of this Defect Report comply with these assumptions. No impact on current code is expected.

3) Proposed fixes:

Change 25 [lib.algorithms], paragraph 9 to:

"In the description of the algorithms operator + and - are used for some of the iterator categories for which they do not have to be defined. In this paragraph, a and b denote values of an iterator type, and n denotes a value of a distance type between two iterators. In these cases the semantics of a+n is the same as that of

{X tmp = a;
advance(tmp, n);
return tmp;
}

, the semantics of a-n denotes the value of an iterator i for which the following condition holds: advance(i, n) == a, and that of b-a is the same as of return distance(a, b)".

Comments to the new wording:

a) The wording "In this paragraph, a and b denote values of an iterator type, and n denotes a value of a distance type between two iterators." was added so the expressions "b-a" and "a-n" are distinguished regarding the types of the values on which they operate.

b) The wording ", the semantics of a-n denotes the value of an iterator i for which the following condition holds: advance(i, n) == a" was added to cover the expression 'iterator - n'. The wording "advance(i, n) == a" was used to avoid a dependency on the semantics of a+n, as the wording "i + n == a" would have implied. However, such a dependency might well be deserved.

c) DR 225 is not considered in the new wording.
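For illustration only, the relation the added sentence establishes between "a - n" and advance() can be written as a small check (names are illustrative, not proposed wording):

#include <iterator>

// true iff i denotes the value the wording calls "a - n", i.e. the iterator
// which, after advance(i, n), compares equal to a
template <class It, class D>
bool denotes_a_minus_n(It i, It a, D n)
{
    std::advance(i, n);
    return i == a;
}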

Proposed fixes regarding invalid iterator arithmetic expressions outside clause 25:

Either:

a) Move the modified 25 [lib.algorithms], paragraph 9 (as proposed above) before any current invalid iterator arithmetic expression. In that case, the first sentence of 25 [lib.algorithms], paragraph 9, also needs to be modified and could read: "For the rest of this International Standard, ..." / "In the description of the following clauses including this ..." / "In the description of the text below ..." etc. - in any case substituting the wording "algorithms", which is a straight reference to clause 25. In that case, 25 [lib.algorithms] paragraph 9 will certainly become obsolete.

Alternatively, b) Add an appropriate paragraph similar to the resolved 25 [lib.algorithms], paragraph 9, to the beginning of each clause containing invalid iterator arithmetic expressions.

Alternatively, c) Fix each paragraph (both current wording and possible resolutions of DRs) containing invalid iterator arithmetic expressions separately.

5) References to other DRs:

See DR 225. See DR 237. The resolution could then also read "Linear in last - first".

[ Bellevue: ]

Keep open and ask Bill to provide wording.

Proposed resolution:

[Lillehammer: Minor issue, but real. We have a blanket statement about this in 25/11. But (a) it should be in 17, not 25; and (b) it's not quite broad enough, because there are some arithmetic expressions it doesn't cover. Bill will provide wording.]


498. Requirements for partition() and stable_partition() too strong

Section: 25.2.13 [alg.partitions] Status: Open Submitter: Sean Parent, Joe Gottman Date: 2005-05-04

View all issues with Open status.

Discussion:

Problem: The iterator requirements for partition() and stable_partition() [25.2.12] are listed as BidirectionalIterator; however, there are efficient algorithms for these functions that only require ForwardIterator and that have been known since before the standard existed. The SGI implementation includes these (see http://www.sgi.com/tech/stl/partition.html and http://www.sgi.com/tech/stl/stable_partition.html).
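For reference, a minimal sketch (not the SGI code itself) of the classic forward-iterator algorithm the discussion refers to; it performs exactly last - first applications of the predicate and at most last - first swaps:

#include <algorithm>  // std::iter_swap

template <class ForwardIterator, class Predicate>
ForwardIterator forward_partition(ForwardIterator first, ForwardIterator last,
                                  Predicate pred)
{
    // skip the leading run of elements that already satisfy the predicate
    while (first != last && pred(*first))
        ++first;
    if (first == last)
        return first;

    // move each remaining satisfying element in front of the boundary
    ForwardIterator next = first;
    while (++next != last)
        if (pred(*next)) {
            std::iter_swap(first, next);
            ++first;
        }
    return first;
}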

Proposed resolution:

Change 25.2.12 from

template<class BidirectionalIterator, class Predicate> 
BidirectionalIterator partition(BidirectionalIterator first, 
                                BidirectionalIterator last, 
                                Predicate pred); 

to

template<class ForwardIterator, class Predicate> 
ForwardIterator partition(ForwardIterator first, 
                          ForwardIterator last, 
                          Predicate pred); 

Change the complexity from

At most (last - first)/2 swaps are done. Exactly (last - first) applications of the predicate are done.

to

If ForwardIterator is a bidirectional_iterator, at most (last - first)/2 swaps are done; otherwise at most (last - first) swaps are done. Exactly (last - first) applications of the predicate are done.

Rationale:

Partition is a "foundation" algorithm useful in many contexts (like sorting as just one example) - my motivation for extending it to include forward iterators is slist - without this extension you can't partition an slist (without writing your own partition). Holes like this in the standard library weaken the argument for generic programming (ideally I'd be able to provide a library that would refine std::partition() to other concepts without fear of conflicting with other libraries doing the same - but that is a digression). I consider the fact that partition isn't defined to work for ForwardIterator a minor embarrassment.

[Mont Tremblant: Moved to Open, request motivation and use cases by next meeting. Sean provided further rationale by post-meeting mailing.]


502. Proposition: Clarification of the interaction between a facet and an iterator

Section: 22.1.1.1.1 [locale.category] Status: Open Submitter: Christopher Conrade Zseleghovski Date: 2005-06-07

View all other issues in [locale.category].

View all issues with Open status.

Discussion:

Motivation:

This requirement seems obvious to me, it is the essence of code modularity. I have complained to Mr. Plauger that the Dinkumware library does not observe this principle but he objected that this behaviour is not covered in the standard.

Proposed resolution:

Append the following point to 22.1.1.1.1:

6. The implementation of a facet of Table 52 parametrized with an InputIterator/OutputIterator should use that iterator only as a character source/sink, respectively. For a *_get facet, it means that the value received depends only on the sequence of input characters and not on how they are accessed. For a *_put facet, it means that the sequence of characters output depends only on the value to be formatted and not on how the characters are stored.

[ Berlin: Moved to Open, Need to clean up this area to make it clear locales don't have to contain open ended sets of facets. Jack, Howard, Bill. ]


503. more on locales

Section: 22.2 [locale.categories] Status: Open Submitter: P.J. Plauger Date: 2005-06-20

View other active issues in [locale.categories].

View all other issues in [locale.categories].

View all issues with Open status.

Discussion:

a) In 22.2.1.1 para. 2 we use "the instantiations required in Table 51" to refer to the facet *objects* associated with a locale. And we almost certainly mean just those associated with the default or "C" locale. Otherwise, you can't switch to a locale that enforces a different mapping between narrow and wide characters, or that defines additional uppercase characters.

b) 22.2.1.5 para. 3 (codecvt) has the same issues.

c) 22.2.1.5.2 (do_unshift) is even worse. It *forbids* the generation of a homing sequence for the basic character set, which might very well need one.

d) 22.2.1.5.2 (do_length) likewise dictates that the default mapping between wide and narrow characters be taken as one-for-one.

e) 22.2.2 para. 2 (num_get/put) is both muddled and vacuous, as far as I can tell. The muddle is, as before, calling Table 51 a list of instantiations. But the constraint it applies seems to me to cover *all* defined uses of num_get/put, so why bother to say so?

f) 22.2.3.1.2 para. 1 (do_decimal_point) says "The required instantiations return '.' or L'.'." Presumably this means "as appropriate for the character type". But given the vague definition of "required" earlier, this overrules *any* change of decimal point for non "C" locales. Surely we don't want to do that.

g) 22.2.3.1.2 para. 2 (do_thousands_sep) says "The required instantiations return ',' or L','." As above, this probably means "as appropriate for the character type". But this overrules the "C" locale, which requires *no* character ('\0') for the thousands separator. Even if we agree that we don't mean to block changes in decimal point or thousands separator, we should also eliminate this clear incompatibility with C.

h) 22.2.3.1.2 para. 2 (do_grouping) says "The required instantiations return the empty string, indicating no grouping." Same considerations as for do_decimal_point.

i) 22.2.4.1 para. 1 (collate) refers to "instantiations required in Table 51". Same bad jargon.

j) 22.2.4.1.2 para. 1 (do_compare) refers to "instantiations required in Table 51". Same bad jargon.

k) 22.2.5 para. 1 (time_get/put) uses the same muddled and vacuous wording as num_get/put.

l) 22.2.6 para. 2 (money_get/put) uses the same muddled and vacuous wording as num_get/put.

m) 22.2.6.3.2 (do_pos/neg_format) says "The instantiations required in Table 51 ... return an object of type pattern initialized to {symbol, sign, none, value}." This once again *overrides* the "C" locale, as well as any other locale.

3) We constrain the use_facet calls that can be made by num_get/put, so why don't we do the same for money_get/put? Or for any of the other facets, for that matter?

4) As an almost aside, we spell out when a facet needs to use the ctype facet, but several also need to use a codecvt facet and we don't say so.

[ Berlin: Bill to provide wording. ]

Proposed resolution:


522. Tuple doesn't define swap

Section: 20.3 [tuple], TR1 6.1 [tr.tuple] Status: Ready Submitter: Andy Koenig Date: 2005-07-03

View other active issues in [tuple].

View all other issues in [tuple].

View all issues with Ready status.

Discussion:

Tuple doesn't define swap(). It should.

[ Berlin: Doug to provide wording. ]

[ Batavia: Howard to provide wording. ]

[ Toronto: Howard to provide wording (really this time). ]

[ Bellevue: Alisdair provided wording. ]

Proposed resolution:

Add these signatures to 20.3 [tuple]

template <class... Types>
  void swap(tuple<Types...>& x, tuple<Types...>& y);
template <class... Types>
  void swap(tuple<Types...>&& x, tuple<Types...>& y);
template <class... Types>
  void swap(tuple<Types...>& x, tuple<Types...>&& y); 

Add this signature to 20.3.1 [tuple.tuple]

void swap(tuple&&);

Add the following two sections to the end of the tuple clauses

20.3.1.7 tuple swap [tuple.swap]

void swap(tuple&& rhs); 

Requires: Each type in Types shall be Swappable.

Effects: Calls swap for each element in *this and its corresponding element in rhs.

Throws: Nothing, unless one of the element-wise swap calls throws an exception.

20.3.1.8 tuple specialized algorithms [tuple.special]

template <class... Types>
  void swap(tuple<Types...>& x, tuple<Types...>& y);
template <class... Types>
  void swap(tuple<Types...>&& x, tuple<Types...>& y);
template <class... Types>
  void swap(tuple<Types...>& x, tuple<Types...>&& y); 

Effects: x.swap(y)
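For illustration only, a short usage sketch of the non-member overloads added above (element-wise swap of the ints and the strings):

#include <string>
#include <tuple>

int main()
{
    std::tuple<int, std::string> a(1, "one");
    std::tuple<int, std::string> b(2, "two");
    std::swap(a, b);   // Effects: a.swap(b); each element swapped with its counterpart
}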


523. regex case-insensitive character ranges are unimplementable as specified

Section: 28 [re] Status: Open Submitter: Eric Niebler Date: 2005-07-01

View all other issues in [re].

View all issues with Open status.

Discussion:

A problem with TR1 regex is currently being discussed on the Boost developers list. It involves the handling of case-insensitive matching of character ranges such as [Z-a]. The proper behavior (according to the ECMAScript standard) is unimplementable given the current specification of the TR1 regex_traits<> class template. John Maddock, the author of the TR1 regex proposal, agrees there is a problem. The full discussion can be found at http://lists.boost.org/boost/2005/06/28850.php (first message copied below). We don't have any recommendations as yet.

-- Begin original message --

The situation of interest is described in the ECMAScript specification (ECMA-262), section 15.10.2.15:

"Even if the pattern ignores case, the case of the two ends of a range is significant in determining which characters belong to the range. Thus, for example, the pattern /[E-F]/i matches only the letters E, F, e, and f, while the pattern /[E-f]/i matches all upper and lower-case ASCII letters as well as the symbols [, \, ], ^, _, and `."

A more interesting case is what should happen when doing a case-insensitive match on a range such as [Z-a]. It should match z, Z, a, A and the symbols [, \, ], ^, _, and `. This is not what happens with Boost.Regex (it throws an exception from the regex constructor).

The tough pill to swallow is that, given the specification in TR1, I don't think there is any effective way to handle this situation. According to the spec, case-insensitivity is handled with regex_traits<>::translate_nocase(CharT) -- two characters are equivalent if they compare equal after both are sent through the translate_nocase function. But I don't see any way of using this translation function to make character ranges case-insensitive. Consider the difficulty of detecting whether "z" is in the range [Z-a]. Applying the transformation to "z" has no effect (it is essentially std::tolower). And we're not allowed to apply the transformation to the ends of the range, because as ECMA-262 says, "the case of the two ends of a range is significant."

So AFAICT, TR1 regex is just broken, as is Boost.Regex. One possible fix is to redefine translate_nocase to return a string_type containing all the characters that should compare equal to the specified character. But this function is hard to implement for Unicode, and it doesn't play nice with the existing ctype facet. What a mess!

-- End original message --
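For illustration only, a minimal sketch (assumed, simplified names; not the TR1 interface) of why a tolower-style translation cannot express the rule quoted above: the ends of the range must stay as written, so translating only the candidate character can never put 'z' inside [Z-a]:

#include <cctype>

bool in_range_nocase(char lo, char hi, char c)
{
    char t = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    return lo <= t && t <= hi;   // tolower('z') == 'z', and 'z' > 'a', so no match,
                                 // although ECMA-262 requires /[Z-a]/i to match 'z'
}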

[ John Maddock adds: ]

One small correction, I have since found that ICU's regex package does implement this correctly, using a similar mechanism to the current TR1.Regex.

Given an expression [c1-c2] that is compiled as case insensitive it:

Enumerates every character in the range c1 to c2 and converts it to its case-folded equivalent. That case-folded character is then used as a key to a table of equivalence classes, and each member of the class is added to the list of possible matches supported by the character class. This second step isn't possible with our current traits class design, but isn't necessary if the input text is also converted to a case-folded equivalent on the fly.

ICU applies similar brute force mechanisms to character classes such as [[:lower:]] and [[:word:]], however these are at least cached, so the impact is less noticeable in this case.

Quick and dirty performance comparisons show that expressions such as "[X-\\x{fff0}]+" are indeed very slow to compile with ICU (about 200 times slower than a "normal" expression). For an application that uses a lot of regexes this could have a noticeable performance impact. ICU also has an advantage in that it knows the range of valid characters codes: code points outside that range are assumed not to require enumeration, as they can not be part of any equivalence class. I presume that if we want the TR1.Regex to work with arbitrarily large character sets enumeration really does become impractical.

Finally note that Unicode has:

Three cases (upper, lower and title).

One-to-many, and many-to-one case transformations.

Characters that have context-sensitive case translations - for example, an uppercase sigma has two different lowercase forms; the form chosen depends on context (is it the end of a word or not). A caseless match for an uppercase sigma should match either of the lowercase forms, which is why case folding is often approximated by tolower(toupper(c)).

Probably we need some way to enumerate character equivalence classes, including digraphs (either as a result or an input), and some way to tell whether the next character pair is a valid digraph in the current locale.

Hoping this doesn't make this even more complex than it was already,

[ Portland: Alisdair: Detect as invalid, throw an exception. Pete: Possible general problem with case insensitive ranges. ]

Proposed resolution:


539. partial_sum and adjacent_difference should mention requirements

Section: 26.6.3 [partial.sum] Status: Open Submitter: Marc Schoolderman Date: 2006-02-06

View all issues with Open status.

Discussion:

There are some problems in the definition of partial_sum and adjacent_difference in 26.4 [lib.numeric.ops]

Unlike accumulate and inner_product, these functions are not parametrized on a "type T"; instead, 26.4.3 [lib.partial.sum] simply specifies the effects clause as:

Assigns to every element referred to by iterator i in the range [result,result + (last - first)) a value correspondingly equal to

((...(* first + *( first + 1)) + ...) + *( first + ( i - result )))

And similarly for BinaryOperation. Using just this definition, it seems logical to expect that:

char i_array[4] = { 100, 100, 100, 100 };
int  o_array[4];

std::partial_sum(i_array, i_array+4, o_array);

Is equivalent to

int o_array[4] = { 100, 100+100, 100+100+100, 100+100+100+100 };

i.e. 100, 200, 300, 400, with addition happening in the result type, int.

Yet all implementations I have tested produce 100, -56, 44, -112, because they are using an accumulator of the InputIterator's value_type, which in this case is char, not int.
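For illustration only, a sketch of the typical implementation the submitter describes: the accumulator has the InputIterator's value_type, so the additions above wrap in char instead of being performed in int:

#include <iterator>

template <class InputIterator, class OutputIterator>
OutputIterator partial_sum_sketch(InputIterator first, InputIterator last,
                                  OutputIterator result)
{
    if (first == last)
        return result;
    typename std::iterator_traits<InputIterator>::value_type acc = *first;
    *result = acc;
    while (++first != last) {
        acc = acc + *first;   // arithmetic happens in the input value_type
        *++result = acc;
    }
    return ++result;
}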

The issue becomes more noticeable when the result of the expression *i + *(i+1) or binary_op(*i, *i-1) can't be converted to the value_type. In a contrived example:

enum not_int { x = 1, y = 2 };
...
not_int e_array[4] = { x, x, y, y };
std::partial_sum(e_array, e_array+4, o_array);

Is it the intent that the operations happen in the input type, or in the result type?

If the intent is that operations happen in the result type, something like this should be added to the "Requires" clause of 26.4.3/4 [lib.partial.sum]:

The type of *i + *(i+1) or binary_op(*i, *(i+1)) shall meet the requirements of CopyConstructible (20.1.3) and Assignable (23.1) types.

(As also required for T in 26.4.1 [lib.accumulate] and 26.4.2 [lib.inner.product].)

The "auto initializer" feature proposed in N1894 is not required to implement partial_sum this way. The 'narrowing' behaviour can still be obtained by using the std::plus<> function object.

If the intent is that operations happen in the input type, then something like this should be added instead;

The type of *first shall meet the requirements of CopyConstructible (20.1.3) and Assignable (23.1) types. The result of *i + *(i+1) or binary_op(*i, *(i+1)) shall be convertible to this type.

The 'widening' behaviour can then be obtained by writing a custom proxy iterator, which is somewhat involved.

In both cases, the semantics should probably be clarified.

26.4.4 [lib.adjacent.difference] is similarly underspecified, although all implementations seem to perform operations in the 'result' type:

unsigned char i_array[4] = { 4, 3, 2, 1 };
int o_array[4];

std::adjacent_difference(i_array, i_array+4, o_array);

o_array is 4, -1, -1, -1 as expected, not 4, 255, 255, 255.

In any case, adjacent_difference doesn't mention the requirements on the value_type; it can be brought in line with the rest of 26.4 [lib.numeric.ops] by adding the following to 26.4.4/2 [lib.adjacent.difference]:

The type of *first shall meet the requirements of CopyConstructible (20.1.3) and Assignable (23.1) types.

[ Berlin: Giving output iterator's value_types very controversial. Suggestion of adding signatures to allow user to specify "accumulator". ]

[ Bellevue: ]

The intent of the algorithms is to perform their calculations using the type of the input iterator. Proposed wording provided.

[ Sophia Antipolis: ]

We did not agree that the proposed resolution was correct. For example, when the arguments are types (float*, float*, double*), the highest-quality solution would use double as the type of the accumulator. If the intent of the wording is to require that the type of the accumulator must be the input_iterator's value_type, the wording should specify it.

Proposed resolution:

Add to section 26.6.3 [partial.sum] paragraph 4 the following two sentences:

The type of *first shall meet the requirements of CopyConstructible (20.1.3) and Assignable (23.1) types. The result of *i + *(i+1) or binary_op(*i, *(i+1)) shall be convertible to this type.

Add to section 26.6.4 [adjacent.difference] paragraph 2 the following sentence:

The type of *first shall meet the requirements of CopyConstructible (20.1.3) and Assignable (23.1) types.

546. _Longlong and _ULonglong are integer types

Section: TR1 5.1.1 [tr.rand.req] Status: Open Submitter: Matt Austern Date: 2006-01-10

View all issues with Open status.

Discussion:

The TR sneaks in two new integer types, _Longlong and _ULonglong, in [tr.c99]. The rest of the TR should use those types. I believe this affects two places. First, the random number requirements, 5.1.1/10-11, lists all of the types with which template parameters named IntType and UIntType may be instantiated. _Longlong (or "long long", assuming it is added to C++0x) should be added to the IntType list, and _ULonglong (again, or "unsigned long long") should be added to the UIntType list. Second, 6.3.2 lists the types for which hash<> is required to be instantiable. _Longlong and _ULonglong should be added to that list, so that people may use long long as a hash key.

Proposed resolution:


556. is Compare a BinaryPredicate?

Section: 25.3 [alg.sorting] Status: Open Submitter: Martin Sebor Date: 2006-02-05

View all other issues in [alg.sorting].

View all issues with Open status.

Discussion:

In 25, p8 we allow BinaryPredicates to return a type that's convertible to bool but need not actually be bool. That allows predicates to return things like proxies and requires that implementations be careful about what kinds of expressions they use the result of the predicate in (e.g., the expression in if (!pred(a, b)) need not be well-formed since the negation operator may be inaccessible or return a type that's not convertible to bool).
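For illustration only, a minimal sketch (hypothetical types) of the kind of predicate 25, p8 allows: the result is convertible to bool, yet !pred(a, b) is unusable because the proxy overloads operator! with an unrelated type.

struct NotBool { };

struct Proxy {
    bool value;
    operator bool() const { return value; }           // convertible to bool
    NotBool operator!() const { return NotBool(); }   // negation yields something else
};

struct Less {
    Proxy operator()(int a, int b) const { Proxy p = { a < b }; return p; }
};

void f(int a, int b)
{
    Less pred;
    if (pred(a, b)) { }      // fine: Proxy converts to bool
    // if (!pred(a, b)) { }  // ill-formed: operator! yields NotBool
}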

Here's the text for reference:

...if an algorithm takes BinaryPredicate binary_pred as its argument and first1 and first2 as its iterator arguments, it should work correctly in the construct if (binary_pred(*first1, *first2)){...}.

In 25.3, p2 we require that the Compare function object return true of false, which would seem to preclude such proxies. The relevant text is here:

Compare is used as a function object which returns true if the first argument is less than the second, and false otherwise...

Proposed resolution:

I think we could fix this by rewording 25.3, p2 to read something like:

-2- Compare is a BinaryPredicate. The return value of the function call operator applied to an object of type Compare, when converted to type bool, yields true if the first argument of the call is less than the second, and false otherwise. Compare comp is used throughout for algorithms assuming an ordering relation. It is assumed that comp will not apply any non-constant function through the dereferenced iterator.

[ Portland: Jack to define "convertible to bool" such that short circuiting isn't destroyed. ]


564. stringbuf seekpos underspecified

Section: 27.7.1.4 [stringbuf.virtuals] Status: Open Submitter: Martin Sebor Date: 2006-02-23

View all other issues in [stringbuf.virtuals].

View all issues with Open status.

Discussion:

The effects of the seekpos() member function of basic_stringbuf simply say that the function positions the input and/or output sequences but fail to spell out exactly how. This is in contrast to the detail in which seekoff() is described.

Proposed resolution:

Change 27.7.1.3, p13 to read:

-13- Effects: Same as seekoff(off_type(sp), ios_base::beg, which). Alters the stream position within the controlled sequences, if possible, to correspond to the stream position stored in sp (as described below).

[ Kona (2007): A pos_type is a position in a stream by definition, so there is no ambiguity as to what it means. Proposed Disposition: NAD ]

[ Post-Kona Martin adds: I'm afraid I disagree with the Kona '07 rationale for marking it NAD. The only text that describes precisely what it means to position the input or output sequence is in seekoff(). The seekpos() Effects clause is inadequate in comparison and the proposed resolution plugs the hole by specifying seekpos() in terms of seekoff(). ]


565. xsputn inefficient

Section: 27.5.2.4.5 [streambuf.virt.put] Status: Open Submitter: Martin Sebor Date: 2006-02-23

View all issues with Open status.

Discussion:

streambuf::xsputn() is specified to have the effect of "writing up to n characters to the output sequence as if by repeated calls to sputc(c)."

Since sputc() is required to call overflow() when (pptr() == epptr()) is true, strictly speaking xsputn() should do the same. However, doing so would be suboptimal in some interesting cases, such as in unbuffered mode or when the buffer is basic_stringbuf.

Assuming calling overflow() is not really intended to be required and the wording is simply meant to describe the general effect of appending to the end of the sequence it would be worthwhile to mention in xsputn() that the function is not actually required to cause a call to overflow().

Proposed resolution:

Add the following sentence to the xsputn() Effects clause in 27.5.2.4.5, p1 (N1804):

-1- Effects: Writes up to n characters to the output sequence as if by repeated calls to sputc(c). The characters written are obtained from successive elements of the array whose first element is designated by s. Writing stops when either n characters have been written or a call to sputc(c) would return traits::eof(). It is unspecified whether the function calls overflow() when (pptr() == epptr()) becomes true or whether it achieves the same effects by other means.

In addition, I suggest adding a footnote to this function with the same text as Footnote 292 to make it extra clear that derived classes are permitted to override xsputn() for efficiency.
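For illustration only, a sketch (hypothetical class) of the kind of derived stream buffer such a footnote would sanction: an unbuffered buffer that overrides xsputn() to write a whole block at once instead of going through sputc()/overflow() per character.

#include <cstddef>
#include <cstdio>
#include <streambuf>

class stdio_buf : public std::streambuf {
protected:
    virtual std::streamsize xsputn(const char* s, std::streamsize n)
    {
        // one bulk write; no calls to sputc() or overflow()
        return static_cast<std::streamsize>(
            std::fwrite(s, 1, static_cast<std::size_t>(n), stdout));
    }
    virtual int_type overflow(int_type c)
    {
        if (traits_type::eq_int_type(c, traits_type::eof()))
            return traits_type::not_eof(c);
        char ch = traits_type::to_char_type(c);
        return std::fwrite(&ch, 1, 1, stdout) == 1 ? c : traits_type::eof();
    }
};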

[ Kona (2007): We want to permit a streambuf that streams output directly to a device without making calls to sputc or overflow. We believe that has always been the intention of the committee. We believe that the proposed wording doesn't accomplish that. Proposed Disposition: Open ]


568. log2 overloads missing

Section: TR1 8.16.4 [tr.c99.cmath.over] Status: New Submitter: Paolo Carlini Date: 2006-03-07

View all issues with New status.

Discussion:

log2 is missing from the list of "additional overloads" in TR1 8.16.4 [tr.c99.cmath.over] p1.

Hinnant: This is a TR1 issue only. It is fixed in the current (N2135) WD.

Proposed resolution:

Add log2 to the list of functions in TR1 8.16.4 [tr.c99.cmath.over] p1.
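For illustration only, the shape of the "additional overloads" that adding log2 to that list implies (a sketch of the intended signatures, not text quoted from the TR; IntegerType stands for each of the integer types):

float       log2(float x);
long double log2(long double x);
double      log2(IntegerType x);   // result is double for integer arguments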


573. C++0x file positioning should handle modern file sizes

Section: 27.4.3 [fpos] Status: Open Submitter: Beman Dawes Date: 2006-04-12

View all other issues in [fpos].

View all issues with Open status.

Discussion:

There are two deficiencies related to file sizes:

  1. It doesn't appear that the Standard Library is specified in a way that handles modern file sizes, which are often too large to be represented by an unsigned long.
  2. The std::fpos class does not currently have the ability to set/get file positions.

The Dinkumware implementation of the Standard Library as shipped with the Microsoft compiler copes with these issues by:

  1. Defining fpos_t to be long long, which is large enough to represent any file position likely in the foreseeable future.
  2. Adding member functions to class fpos. For example,
    fpos_t seekpos() const;
    

Because there are so many types relating to file positions and offsets (fpos_t, fpos, pos_type, off_type, streamoff, streamsize, streampos, wstreampos, and perhaps more), it is difficult to know if the Dinkumware extensions are sufficient. But they seem a useful starting place for discussions, and they do represent existing practice.

[ Kona (2007): We need a paper. It would be nice if someone proposed clarifications to the definitions of pos_type and off_type. Currently these definitions are horrible. Proposed Disposition: Open ]

Proposed resolution:


580. unused allocator members

Section: 20.1.2 [allocator.requirements] Status: Open Submitter: Martin Sebor Date: 2006-06-14

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with Open status.

Duplicate of: 479

Discussion:

C++ Standard Library templates that take an allocator as an argument are required to call the allocate() and deallocate() members of the allocator object to obtain storage. However, they do not appear to be required to call any other allocator members such as construct(), destroy(), address(), and max_size(). This makes these allocator members less than useful in portable programs.

It's unclear to me whether the absence of the requirement to use these allocator members is an unintentional omission or a deliberate choice. However, since the functions exist in the standard allocator and since they are required to be provided by any user-defined allocator, I believe the standard ought to be clarified to explicitly specify whether programs should or should not be able to rely on standard containers calling the functions.

I propose that all containers be required to make use of these functions.
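For illustration only, a sketch (hypothetical type) of why the requirement matters to portable code: an allocator that counts its construct() calls. Under the current wording a conforming container is not obliged to route element construction through this member, so the counter may legitimately remain zero.

#include <memory>
#include <new>

template <class T>
struct counting_allocator : std::allocator<T> {
    template <class U> struct rebind { typedef counting_allocator<U> other; };
    counting_allocator() {}
    template <class U> counting_allocator(const counting_allocator<U>&) {}

    static int constructed;
    void construct(T* p, const T& v)
    {
        ++constructed;                       // record that the container used us
        new (static_cast<void*>(p)) T(v);
    }
};
template <class T> int counting_allocator<T>::constructed = 0;

// std::vector<int, counting_allocator<int> > v(3, 42);
// whether counting_allocator<int>::constructed is now 3 is currently unspecified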

[ Batavia: We support this resolution. Martin to provide wording. ]

[ pre-Oxford: Martin provided wording. ]

Proposed resolution:

Specifically, I propose to change 23.1 [container.requirements], p9 as follows:

-9- Copy constructors for all container types defined in this clause that are parametrized on Allocator copy the allocator argument from their respective first parameters. All other constructors for these container types take a const Allocator& argument (20.1.6), an allocator whose value_type is the same as the container's value_type. A copy of this argument shall be used for any memory allocation and deallocation performed, by these constructors and by all member functions, during the lifetime of each container object. Allocation shall be performed "as if" by calling the allocate() member function on a copy of the allocator object of the appropriate type (New Footnote), and deallocation "as if" by calling deallocate() on a copy of the same allocator object of the corresponding type. A copy of this argument shall also be used to construct and destroy objects whose lifetime is managed by the container, including but not limited to those of the container's value_type, and to obtain their address. All objects residing in storage allocated by a container's allocator shall be constructed "as if" by calling the construct() member function on a copy of the allocator object of the appropriate type. The same objects shall be destroyed "as if" by calling destroy() on a copy of the same allocator object of the same type. The address of such objects shall be obtained "as if" by calling the address() member function on a copy of the allocator object of the appropriate type. Finally, a copy of this argument shall be used by its container object to determine the maximum number of objects of the container's value_type the container may store at the same time. The container member function max_size() obtains this number from the value returned by a call to get_allocator().max_size(). In all container types defined in this clause that are parametrized on Allocator, the member get_allocator() returns a copy of the Allocator object used to construct the container.258)

New Footnote: This type may be different from Allocator: it may be derived from Allocator via Allocator::rebind<U>::other for the appropriate type U.

The proposed wording seems cumbersome but I couldn't think of a better way to describe the requirement that containers use their Allocator to manage only objects (regardless of their type) that persist over their lifetimes and not, for example, temporaries created on the stack. That is, containers shouldn't be required to call Allocator::construct(Allocator::allocate(1), elem) just to construct a temporary copy of an element, or Allocator::destroy(Allocator::address(temp), 1) to destroy temporaries.

[ Howard: This same paragraph will need some work to accommodate 431. ]

[ post Oxford: This would be rendered NAD Editorial by acceptance of N2257. ]


582. specialized algorithms and volatile storage

Section: 20.6.10.1 [uninitialized.copy] Status: Open Submitter: Martin Sebor Date: 2006-06-14

View all other issues in [uninitialized.copy].

View all issues with Open status.

Discussion:

The specialized algorithms [lib.specialized.algorithms] are specified as having the general effect of invoking the following expression:


new (static_cast<void*>(&*i))
    typename iterator_traits<ForwardIterator>::value_type (x)

            

This expression is ill-formed when the type of the subexpression &*i is a pointer to some volatile-qualified T.

[ Batavia: Lack of support for proposed resolution but agree there is a defect. Howard to look at wording. Concern that move semantics properly expressed if iterator returns rvalue. ]

Proposed resolution:

In order to allow these algorithms to operate on volatile storage I propose to change the expression so as to make it well-formed even for pointers to volatile types. Specifically, I propose the following changes to clauses 20 and 24. Change 20.6.4.1, p1 to read:


Effects:

typedef typename iterator_traits<ForwardIterator>::pointer    pointer;
typedef typename iterator_traits<ForwardIterator>::value_type value_type;

for (; first != last; ++result, ++first)
    new (static_cast<void*>(const_cast<pointer>(&*result)))
        value_type (*first);

            

change 20.6.4.2, p1 to read


Effects:

typedef typename iterator_traits<ForwardIterator>::pointer    pointer;
typedef typename iterator_traits<ForwardIterator>::value_type value_type;

for (; first != last; ++first)
    new (static_cast<void*>(const_cast<pointer>(&*first)))
        value_type (x);

            

and change 20.6.4.3, p1 to read


Effects:

typedef typename iterator_traits<ForwardIterator>::pointer    pointer;
typedef typename iterator_traits<ForwardIterator>::value_type value_type;

for (; n--; ++first)
    new (static_cast<void*>(const_cast<pointer>(&*first)))
        value_type (x);

            

In addition, since there is no partial specialization for iterator_traits<volatile T*> I propose to add one to parallel such specialization for <const T*>. Specifically, I propose to add the following text to the end of 24.3.1, p3:

and for pointers to volatile as


namespace std {
  template<class T> struct iterator_traits<volatile T*> {
    typedef ptrdiff_t difference_type;
    typedef T value_type;
    typedef volatile T* pointer;
    typedef volatile T& reference;
    typedef random_access_iterator_tag iterator_category;
  };
}

            

Note that the change to iterator_traits isn't necessary in order to implement the specialized algorithms in a way that allows them to operate on volatile storage. It is only necessary in order to specify their effects in terms of iterator_traits as is done here. Implementations can (and some do) achieve the same effect by means of function template overloading.


585. facet error reporting

Section: 22.2 [locale.categories] Status: Open Submitter: Martin Sebor, Paolo Carlini Date: 2006-06-22

View other active issues in [locale.categories].

View all other issues in [locale.categories].

View all issues with Open status.

Discussion:

Section 22.2, paragraph 2 requires facet get() members that take an ios_base::iostate& argument, err, to ignore the (initial) value of the argument, but to set it to ios_base::failbit in case of a parse error.

We believe there are a few minor problems with this blanket requirement in conjunction with the wording specific to each get() member function.

First, besides get() there are other member functions with a slightly different name (for example, get_date()). It's not completely clear that the intent of the paragraph is to include those as well, and at least one implementation has interpreted the requirement literally.

Second, the requirement to "set the argument to ios_base::failbit" suggests that the functions are not permitted to set it to any other value (such as ios_base::eofbit, or even ios_base::eofbit | ios_base::failbit).

However, 22.2.2.1.2, p5 (Stage 3 of num_get parsing) and p6 (bool parsing) specify that the do_get functions perform err |= ios_base::eofbit, which contradicts the earlier requirement to ignore err's initial value.

22.2.6.1.2, p1 (the Effects clause of the money_get facet's do_get member functions) also specifies that err's initial value be used to compute the final value by ORing it with either ios_base::failbit or with ios_base::eofbit | ios_base::failbit.

Proposed resolution:

We believe the intent is for all facet member functions that take an ios_base::iostate& argument to:

To that effect we propose to change 22.2, p2 as follows:

The put() members make no provision for error reporting. (Any failures of the OutputIterator argument must be extracted from the returned iterator.) Unless otherwise specified, the get() members that take an ios_base::iostate& argument, err, start by evaluating err = ios_base::goodbit, and may subsequently set err to either ios_base::eofbit, or ios_base::failbit, or ios_base::eofbit | ios_base::failbit in response to reaching the end-of-file or in case of a parse error, or both, respectively.

[ Kona (2007): We need to change the proposed wording to clarify that the phrase "the get members" actually denotes get(), get_date(), etc. Proposed Disposition: Open ]


588. requirements on zero sized tr1::arrays and other details

Section: 23.2.1 [array] Status: Open Submitter: Gennaro Prota Date: 2006-07-18

View other active issues in [array].

View all other issues in [array].

View all issues with Open status.

Discussion:

The wording used for section 23.2.1 [lib.array] seems to be subtly ambiguous about zero sized arrays (N==0). Specifically:

* "An instance of array<T, N> stores N elements of type T, so that [...]"

Does this imply that a zero sized array object stores 0 elements, i.e. that it cannot store any element of type T? The next point clarifies the rationale behind this question, basically how to implement begin() and end():

* 23.2.1.5 [lib.array.zero], p2: "In the case that N == 0, begin() == end() == unique value."

What does "unique" mean in this context? Let's consider the following possible implementations, all relying on a partial specialization:

a)
    template< typename T >
    class array< T, 0 > {
    
        ....

        iterator begin()
        { return iterator( reinterpret_cast< T * >( this ) ); }
        ....

    };

This has been used in boost, probably intending that the return value had to be unique to the specific array object and that array couldn't store any T. Note that, besides relying on a reinterpret_cast, it has (more than potential) alignment problems.

b)
    template< typename T >
    class array< T, 0 > {
    
        T t;

        iterator begin()
        { return iterator( &t ); }
        ....

    };

This provides a value which is unique to the object and to the type of the array, but requires storing a T. Also, it would allow the user to mistakenly provide an initializer list with one element.

A slight variant could be returning *the* null pointer of type T

    return static_cast<T*>(0);

In this case the value would be unique to the type array<T, 0> but not to the objects (all objects of type array<T, 0> with the same value for T would yield the same pointer value).

Furthermore this is inconsistent with what the standard requires from allocation functions (see library issue 9).

c) same as above but with t being a static data member; again, the value would be unique to the type, not to the object.

d) to avoid storing a T *directly* while disallowing the possibility to use a one-element initializer list a non-aggregate nested class could be defined

    struct holder { holder() {} T t; } h;

and then begin be defined as

 iterator begin() { return &h.t; }

But then, it's arguable whether the array stores a T or not. Indirectly it does.

-----------------------------------------------------

Now, on different issues:

* what's the effect of calling assign(T&) on a zero-sized array? There seems to be mention only of front() and back(), in 23.2.1 [lib.array] p4 (I would also suggest moving that bullet to section 23.2.1.5 [lib.array.zero], for locality of reference)

* (minor) the opening paragraph of 23.2.1 [lib.array] wording is a bit inconsistent with that of other sequences: that's not a problem in itself, but compare it for instance with "A vector is a kind of sequence that supports random access iterators"; though the intent is obvious one might argue that the wording used for arrays doesn't tell what an array is, and relies on the reader to infer that it is what the <array> header defines.

* it would be desirable to have a static const data member of type std::size_t, with value N, for use as an integral constant expression

* section 23.1 [lib.container.requirements] seem not to consider fixed-size containers at all, as it says: "[containers] control allocation and deallocation of these objects [the contained objects] through constructors, destructors, *insert and erase* operations"

* max_size() isn't specified: the result is obvious but, technically, it relies on table 80: "size() of the largest possible container" which, again, doesn't seem to consider fixed size containers

Proposed resolution:

[ Kona (2007): requirements on zero sized tr1::arrays and other details Issue 617: std::array is a sequence that doesn't satisfy the sequence requirements? Alisdair will prepare a paper. Proposed Disposition: Open ]


597. Decimal: The notion of 'promotion' cannot be emulated by user-defined types.

Section: TRDecimal 3.2 [trdec.types.types] Status: Open Submitter: Daveed Vandevoorde Date: 2006-04-05

View other active issues in [trdec.types.types].

View all other issues in [trdec.types.types].

View all issues with Open status.

Discussion:

In a private email, Daveed writes:

I am not familiar with the C TR, but my guess is that the class type approach still won't match a built-in type approach because the notion of "promotion" cannot be emulated by user-defined types.

Here is an example:


struct S {
  S(_Decimal32 const&);  // Converting constructor
};
void f(S);

void f(_Decimal64);

void g(_Decimal32 d) {
  f(d);
}

If _Decimal32 is a built-in type, the call f(d) will likely resolve to f(_Decimal64) because that requires only a promotion, whereas f(S) requires a user-defined conversion.

If _Decimal32 is a class type, I think the call f(d) will be ambiguous because both the conversion to _Decimal64 and the conversion to S will be user-defined conversions with neither better than the other.

Robert comments:

In general, a library of arithmetic types cannot exactly emulate the behavior of the intrinsic numeric types. There are several ways to tell whether an implementation of the decimal types uses compiler intrinsics or a library. For example:

                 _Decimal32 d1;
                 d1.operator+=(5);  // If d1 is a builtin type, this won't compile.

In preparing the decimal TR, we have three options:

  1. require that the decimal types be class types
  2. require that the decimal types be builtin types, like float and double
  3. specify a library of class types, but allow enough implementor latitude that a conforming implementation could instead provide builtin types

We decided as a group to pursue option #3, but that approach implies that implementations may not agree on the semantics of certain use cases (first example, above), or on whether certain other cases are well-formed (second example). Another potentially important problem is that, under the present definition of POD, the decimal classes are not POD types, but builtins will be.

Note that neither example above implies any problems with respect to C-to-C++ compatibility, since neither example can be expressed in C.

Proposed resolution:


606. Decimal: allow narrowing conversions

Section: TRDecimal 3.2 [trdec.types.types] Status: Open Submitter: Martin Sebor Date: 2006-06-15

View other active issues in [trdec.types.types].

View all other issues in [trdec.types.types].

View all issues with Open status.

Discussion:

In c++std-lib-17205, Martin writes:

...was it a deliberate design choice to make narrowing assignments ill-formed while permitting narrowing compound assignments? For instance:

      decimal32 d32;
      decimal64 d64;

      d32 = d64;    // error
      d32 += d64;   // okay

In c++std-lib-17229, Robert responds:

It is a vestige of an old idea that I forgot to remove from the paper. Narrowing assignments should be permitted. The bug is that the converting constructors that cause narrowing should not be explicit. Thanks for pointing this out.
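For illustration only (assuming the decimal TR's <decimal> header and std::decimal namespace; not part of the proposed wording), the effect of the change would be:

    #include <decimal>   // decimal TR header (assumed available)

    void demo() {
        std::decimal::decimal64 d64(1);
        std::decimal::decimal32 d32(0);

        d32 += d64;   // already well-formed: compound assignment may narrow
        d32 = d64;    // becomes well-formed once the decimal32(decimal64)
                      // constructor is no longer explicit, as proposed below
    }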

Proposed resolution:

1. In "3.2.2 Class decimal32" synopsis, remove the explicit specifier from the narrowing conversions:

                // 3.2.2.2 conversion from floating-point type:
                explicit decimal32(decimal64 d64);
                explicit decimal32(decimal128 d128);

2. Do the same thing in "3.2.2.2. Conversion from floating-point type."

3. In "3.2.3 Class decimal64" synopsis, remove the explicit specifier from the narrowing conversion:

                // 3.2.3.2 conversion from floating-point type:
                explicit decimal64(decimal128 d128);

4. Do the same thing in "3.2.3.2. Conversion from floating-point type."

[ Redmond: We prefer explicit conversions for narrowing and implicit for widening. ]


614. std::string allocator requirements still inconsistent

Section: 21.3 [basic.string] Status: Open Submitter: Bo Persson Date: 2006-12-05

View other active issues in [basic.string].

View all other issues in [basic.string].

View all issues with Open status.

Discussion:

This is based on N2134, where 21.3.1/2 states: "... The Allocator object used shall be a copy of the Allocator object passed to the basic_string object's constructor or, if the constructor does not take an Allocator argument, a copy of a default-constructed Allocator object."

Section 21.3.2/1 lists two constructors:

basic_string(const basic_string<charT,traits,Allocator>& str );

basic_string(const basic_string<charT,traits,Allocator>& str ,
             size_type pos , size_type n = npos,
             const Allocator& a = Allocator());

and then says "In the first form, the Allocator value used is copied from str.get_allocator().", which isn't an option according to 21.3.1.

[ Batavia: We need a blanket statement to the effect of: ]

  1. If an allocator is passed in, use it, or,
  2. If a string is passed in, use its allocator.

[ Review constructors and functions that return a string; make sure we follow these rules (substr, operator+, etc.). Howard to supply wording. ]
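An illustrative reading of the two constructors under the guidance above (a sketch only; with std::allocator the difference is unobservable, but it matters for stateful allocators):

    #include <string>

    void demo(const std::string& s) {
        std::string a(s);                       // no allocator argument: 21.3.1 says a
                                                // default-constructed allocator is copied,
                                                // while 21.3.2 says it is copied from
                                                // s.get_allocator()
        std::string b(s, 0, 3, s.get_allocator()); // allocator passed explicitly: rule 1, use it
    }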

[ Bo adds: The new container constructor which takes only a size_type is not consistent with 23.1 [container.requirements], p9 which says in part:

All other constructors for these container types take an Allocator& argument (20.1.2), an allocator whose value type is the same as the container's value type. A copy of this argument is used for any memory allocation performed, by these constructors and by all member functions, during the lifetime of each container object.
]

[ post Bellevue: We re-confirm that the issue is real. Pablo will provide wording. ]

Proposed resolution:


617. std::array is a sequence that doesn't satisfy the sequence requirements?

Section: 23.2.1 [array] Status: Open Submitter: Bo Persson Date: 2006-12-30

View other active issues in [array].

View all other issues in [array].

View all issues with Open status.

Discussion:

The <array> header is given under 23.2 [sequences]. 23.2.1 [array]/paragraph 3 says:

"Unless otherwise specified, all array operations are as described in 23.1 [container.requirements]".

However, array isn't mentioned at all in section 23.1 [container.requirements]. In particular, Table 82 "Sequence requirements" lists several operations (insert, erase, clear) that std::array does not have in 23.2.1 [array].

Also, Table 83 "Optional sequence operations" lists several operations that std::array does have, but array isn't mentioned.

Proposed resolution:


630. arrays of valarray

Section: 26.5.2.1 [valarray.cons] Status: Open Submitter: Martin Sebor Date: 2007-01-28

View all other issues in [valarray.cons].

View all issues with Open status.

Discussion:

Section 26.1 [numeric.requirements], p1 suggests that a valarray specialization on a type T that satisfies the requirements enumerated in the paragraph is itself a valid type on which valarray may be instantiated (Footnote 269 makes this clear). I.e., valarray<valarray<T> > is valid as long as T is valid. However, since implementations of valarray are permitted to initialize storage allocated by the class by invoking the default ctor of T followed by the copy assignment operator, such implementations of valarray wouldn't work with (perhaps user-defined) specializations of valarray whose assignment operator had undefined behavior when the size of its argument didn't match the size of *this. By "wouldn't work" I mean that it would be impossible to resize such an array of arrays by calling the resize() member function on it if the function used the copy assignment operator after constructing all elements using the default ctor (e.g., by invoking new value_type[N] to obtain default-initialized storage), as it's permitted to do.

Stated more generally, the problem is that valarray<valarray<T> >::resize(size_t) isn't required or guaranteed to have well-defined semantics for every type T that satisfies all requirements in 26.1 [numeric.requirements].

I believe this problem was introduced by the adoption of the resolution outlined in N0857, Assignment of valarrays, from 1996. The copy assignment operator of the original numerical array classes proposed in N0280, as well as the one proposed in N0308 (both from 1993), had well-defined semantics for arrays of unequal size (the latter explicitly only when *this was empty; assignment of non empty arrays of unequal size was a runtime error).

The justification for the change given in N0857 was the "loss of performance [deemed] only significant for very simple operations on small arrays or for architectures with very few registers."

Since tiny arrays on a limited subset of hardware architectures are likely to be an exceedingly rare case (despite the continued popularity of x86) I propose to revert the resolution and make the behavior of all valarray assignment operators well-defined even for non-conformal arrays (i.e., arrays of unequal size). I have implemented this change and measured no significant degradation in performance in the common case (non-empty arrays of equal size). I have measured a 50% (and in some cases even greater) speedup in the case of assignments to empty arrays versus calling resize() first followed by an invocation of the copy assignment operator.
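An illustrative sketch of the two usage patterns being compared (the resize-first pattern required by the current wording versus the direct assignment the proposed resolution would make well-defined):

    #include <valarray>

    void demo(const std::valarray<double>& src) {
        // Pattern required today: the lengths must match before assignment,
        // so an empty target is resized first.
        std::valarray<double> a;
        a.resize(src.size());
        a = src;

        // Pattern the proposed resolution would allow: assigning to a
        // non-conformal (here, empty) array resizes it as if by
        // resize(src.size()) before the assignment. Under the current
        // wording this is undefined behavior whenever the lengths differ.
        std::valarray<double> b;
        b = src;
    }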

[ Bellevue: ]

If no proposed wording by June meeting, this issue should be closed NAD.

Proposed resolution:

Change 26.5.2.2 [valarray.assign], p1 as follows:

valarray<T>& operator=(const valarray<T>& x);

-1- Each element of the *this array is assigned the value of the corresponding element of the argument array. Replace "The resulting behavior is undefined if the length of the argument array is not equal to the length of the *this array." with "When the length of the argument array is not equal to the length of the *this array, the operator resizes *this to make the two arrays the same length, as if by calling resize(x.size()), before performing the assignment."

And add a new paragraph just below paragraph 1 with the following text:

-2- Postcondition: size() == x.size().

Also add the following paragraph to 26.5.2.2 [valarray.assign], immediately after p4:

-?- When the length, N of the array referred to by the argument is not equal to the length of *this, the operator resizes *this to make the two arrays the same length, as if by calling resize(N), before performing the assignment.

[ pre-Sophia Antipolis, Martin adds the following compromise wording, but prefers the original proposed resolution: ]

Change 26.5.2.2 [valarray.assign], p1 as follows:

-1- Requires: size() == 0 || size() == x.size().

-2- Effects: If size() == 0, calls resize(x.size()) first. Each element of the *this array is assigned the value of the corresponding element of the argument array.

-3- Postcondition: size() == x.size().

Add the following paragraph to 26.5.2.2 [valarray.assign], immediately after p4:

-?- When size() == 0 and the length, N of the array referred to by the argument is not equal to the length of *this, the operator resizes *this to make the two arrays the same length, as if by calling resize(N), before performing the assignment. Otherwise, when size() > 0 and size() != N, the behavior is undefined.

[ Kona (2007): Gaby to propose wording for an alternative resolution in which you can assign to a valarray of size 0, but not to any other valarray whose size is unequal to the right hand side of the assignment. ]


631. conflicting requirements for BinaryPredicate

Section: 25 [algorithms] Status: Open Submitter: James Kanze Date: 2007-01-31

View all other issues in [algorithms].

View all issues with Open status.

Discussion:

The general requirements for BinaryPredicate (in 25 [algorithms]/8) contradict the implied specific requirements for some functions. In particular, it says that:

[...] if an algorithm takes BinaryPredicate binary_pred as its argument and first1 and first2 as its iterator arguments, it should work correctly in the construct if (binary_pred (*first1 , *first2 )){...}. BinaryPredicate always takes the first iterator type as its first argument, that is, in those cases when T value is part of the signature, it should work correctly in the context of if (binary_pred (*first1 , value)){...}.

In the description of upper_bound (25.3.3.2 [upper.bound]/2), however, the use is described as "!comp(value, e)", where e is an element of the sequence (a result of dereferencing *first).

In the description of lexicographical_compare, we have both "*first1 < *first2" and "*first2 < *first1" (which presumably implies "comp( *first1, *first2 )" and "comp( *first2, *first1 )").

[ Toronto: Moved to Open. ConceptGCC seems to get lower_bound and upper_bound to work without these changes. ]

Proposed resolution:

Logically, the BinaryPredicate is used as an ordering relationship, with the semantics of "less than". Depending on the function, it may be used to determine equality, or any of the inequality relationships; doing this requires being able to use it with either parameter first. I would thus suggest that the requirement be:

[...] BinaryPredicate always takes the first iterator value_type as one of its arguments; it is unspecified which. If an algorithm takes BinaryPredicate binary_pred as its argument and first1 and first2 as its iterator arguments, it should work correctly both in the construct if (binary_pred (*first1 , *first2 )){...} and if (binary_pred (*first2, *first1)){...}. In those cases when T value is part of the signature, it should work correctly in the context of if (binary_pred (*first1 , value)){...} and of if (binary_pred (value, *first1)){...}. [Note: if the two types are not identical, and neither is convertible to the other, this may require that the BinaryPredicate be a functional object with two overloaded operator()() functions. --end note]

Alternatively, one could specify an order for each function. IMHO, this would be more work for the committee, more work for the implementors, and of no real advantage for the user: some functions, such as lexicographical_compare or equal_range, will still require both functions, and it seems like a much easier rule to teach that both functions are always required, rather than to have a complicated list of when you only need one, and which one.
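A sketch of such a predicate for two unrelated value types (hypothetical types, for illustration only):

    #include <cstdlib>
    #include <string>

    struct Id { int value; };   // hypothetical record type, not convertible to string

    // Both argument orders are provided, so an algorithm may apply the
    // predicate either way, as the suggested requirement would permit.
    struct IdLess {
        bool operator()(const Id& lhs, const std::string& rhs) const {
            return lhs.value < std::atoi(rhs.c_str());
        }
        bool operator()(const std::string& lhs, const Id& rhs) const {
            return std::atoi(lhs.c_str()) < rhs.value;
        }
    };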


632. Time complexity of size() for std::set

Section: 23.1 [container.requirements] Status: Open Submitter: Lionel B Date: 2007-02-01

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with Open status.

Discussion:

A recent news group discussion:

Anyone know if the Standard has anything to say about the time complexity of size() for std::set? I need to access a set's size (/not/ to know if it is empty!) heavily during an algorithm and was thus wondering whether I'd be better off tracking the size "manually" or whether that'd be pointless.

That would be pointless. size() is O(1).

Nit: the standard says "should" have constant time. Implementations may take license to do worse. I know that some do this for std::list<> as part of some trade-off with other operations.

I was aware of that, hence my reluctance to use size() for std::set.

However, this reason would not apply to std::set<> as far as I can see.

Ok, I guess the only option is to try it and see...

If I have any recommendation to the C++ Standards Committee it is that implementations must (not "should"!) document clearly[1], where known, the time complexity of *all* container access operations.

[1] In my case (gcc 4.1.1) I can't swear that the time complexity of size() for std::set is not documented... but if it is it's certainly well hidden away.

Proposed resolution:

[ Kona (2007): This issue affects all the containers. We'd love to see a paper dealing with the broad issue. We think that the complexity of the size() member of every container -- except possibly list -- should be O(1). Alan has volunteered to provide wording. ]

[ Bellevue: ]

Mandating O(1) size will not fly; too many implementations would be invalidated. Alan to provide wording that tightens the requirement but does not absolutely mandate O(1).

635. domain of allocator::address

Section: 20.1.2 [allocator.requirements] Status: Open Submitter: Howard Hinnant Date: 2007-02-08

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with Open status.

Discussion:

The table of allocator requirements in 20.1.2 [allocator.requirements] describes allocator::address as:

a.address(r)
a.address(s)

where r and s are described as:

a value of type X::reference obtained by the expression *p.

and p is

a value of type X::pointer, obtained by calling a1.allocate, where a1 == a

This all implies that to get the address of some value of type T that value must have been allocated by this allocator or a copy of it.

However sometimes container code needs to compare the address of an external value of type T with an internal value. For example list::remove(const T& t) may want to compare the address of the external value t with that of a value stored within the list. Similarly vector or deque insert may want to make similar comparisons (to check for self-referencing calls).

Mandating that allocator::address can only be called for values which the allocator allocated seems overly restrictive.
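A minimal sketch (hypothetical helper, not from the working paper) of the kind of comparison such container code wants to perform:

    #include <list>

    // Detect whether the argument of a call such as lst.remove(lst.front())
    // aliases a stored element, using the allocator's address() even though
    // the argument was not allocated by that allocator.
    template <class T, class Allocator>
    bool aliases_element(std::list<T, Allocator>& lst, const T& value) {
        typename Allocator::const_pointer pv = lst.get_allocator().address(value);
        for (typename std::list<T, Allocator>::const_iterator it = lst.begin();
             it != lst.end(); ++it) {
            if (lst.get_allocator().address(*it) == pv)
                return true;
        }
        return false;
    }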

Proposed resolution:

Change 20.1.2 [allocator.requirements]:

r : a value of type X::reference obtained by the expression *p.

s : a value of type X::const_reference obtained by the expression *q or by conversion from a value r.

[ post Oxford: This would be rendered NAD Editorial by acceptance of N2257. ]

[ Kona (2007): This issue is section 8 of N2387. There was some discussion of it but no resolution to this issue was recorded. Moved to Open. ]


659. istreambuf_iterator should have an operator->()

Section: 24.5.3 [istreambuf.iterator] Status: Open Submitter: Niels Dekker Date: 2007-03-25

View all other issues in [istreambuf.iterator].

View all issues with Open status.

Discussion:

Greg Herlihy has clearly demonstrated that a user-defined input iterator should have an operator->(), even if its value type is a built-in type (comp.std.c++, "Re: Should any iterator have an operator->() in C++0x?", March 2007). And since, as Howard Hinnant remarked in the same thread, the input iterator istreambuf_iterator doesn't have one, this must be a defect!

Based on Greg's example, the following code demonstrates the issue:

 #include <iostream> 
 #include <fstream>
 #include <streambuf> 

 typedef char C;
 int main ()
 {
   std::ifstream s("filename", std::ios::in);
   std::istreambuf_iterator<char> i(s);

   (*i).~C();  // This is well-formed...
   i->~C();  // ... so this should be supported!
 }

Of course, operator-> is also needed when the value_type of istreambuf_iterator is a class.

The operator-> could be implemented in various ways. For instance, by storing the current value inside the iterator, and returning its address. Or by returning a proxy, like operator_arrow_proxy, from http://www.boost.org/boost/iterator/iterator_facade.hpp
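A minimal sketch of the proxy approach (names are made up; this is not the proposed wording):

    // A proxy holding a copy of the current character; its operator-> yields
    // a pointer to that copy, so i->m and (*i).m have the same effect.
    template <class CharT>
    class istreambuf_arrow_proxy {
    public:
        explicit istreambuf_arrow_proxy(CharT c) : value_(c) {}
        const CharT* operator->() const { return &value_; }
    private:
        CharT value_;
    };

    // istreambuf_iterator<CharT, Traits>::operator-> could then be written as
    //
    //     istreambuf_arrow_proxy<CharT> operator->() const
    //     { return istreambuf_arrow_proxy<CharT>(**this); }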

I hope that the resolution of this issue will contribute to getting a clear and consistent definition of iterator concepts.

Proposed resolution:

Add to the synopsis in 24.5.3 [istreambuf.iterator]:

charT operator*() const;
pointer operator->() const;
istreambuf_iterator<charT,traits>& operator++();

Change 24.5.3 [istreambuf.iterator], p1:

The class template istreambuf_iterator reads successive characters from the streambuf for which it was constructed. operator* provides access to the current input character, if any. operator-> may return a proxy. Each time operator++ is evaluated, the iterator advances to the next input character. If the end of stream is reached (streambuf_type::sgetc() returns traits::eof()), the iterator becomes equal to the end of stream iterator value. The default constructor istreambuf_iterator() and the constructor istreambuf_iterator(0) both construct an end of stream iterator object suitable for use as an end-of-range.

[ Kona (2007): The proposed resolution is inconsistent because the return type of istreambuf_iterator::operator->() is specified to be pointer, but the proposed text also states that "operator-> may return a proxy." ]

[ Niels Dekker (mailed to Howard Hinnant): ]

The proposed resolution does not seem inconsistent to me. istreambuf_iterator::operator->() should have istreambuf_iterator::pointer as return type, and this return type may in fact be a proxy.

AFAIK, the resolution of 445 ("iterator_traits::reference unspecified for some iterator categories") implies that for any iterator class Iter, the return type of operator->() is Iter::pointer, by definition. I don't think Iter::pointer needs to be a raw pointer.

Still I wouldn't mind if the text "operator-> may return a proxy" were removed from the resolution. I think it's up to the library implementation how to implement istreambuf_iterator::operator->(). As long as it behaves as expected: i->m should have the same effect as (*i).m. Even for an explicit destructor call, i->~C(). The main issue is just: istreambuf_iterator should have an operator->()!


667. money_get's widened minus sign

Section: 22.2.6.1.2 [locale.money.get.virtuals] Status: Open Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [locale.money.get.virtuals].

View all other issues in [locale.money.get.virtuals].

View all issues with Open status.

Discussion:

22.2.6.1.2 [locale.money.get.virtuals], para 1 says:

The result is returned as an integral value stored in units or as a sequence of digits possibly preceded by a minus sign (as produced by ct.widen(c) where c is '-' or in the range from '0' through '9', inclusive) stored in digits.

The following objection has been raised:

Some implementations interpret this to mean that a facet derived from ctype<wchar_t> can provide its own member do_widen(char) which produces e.g. L'@' for the "widened" minus sign, and that the '@' symbol will appear in the resulting sequence of digits. Other implementations have assumed that one or more places in the standard permit the implementation to "hard-wire" L'-' as the "widened" minus sign. Are both interpretations permissible, or only one?

[Plum ref _222612Y14]

Furthermore: if ct.widen('9') produces L'X' (a non-digit), does a parse fail if a '9' appears in the subject string? [Plum ref _22263Y33]

[ Kona (2007): Bill and Dietmar to provide proposed wording. ]

[ post Bellevue: Bill adds: ]

The Standard is clear that the minus sign stored in digits is ct.widen('-'). The subject string must contain characters c in the set [-0123456789] which are translated by ct.widen(c) calls before being stored in digits; the widened characters are not relevant to the parsing of the subject string.

Proposed resolution:


668. money_get's empty minus sign

Section: 22.2.6.1.2 [locale.money.get.virtuals] Status: Open Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [locale.money.get.virtuals].

View all other issues in [locale.money.get.virtuals].

View all issues with Open status.

Discussion:

22.2.6.1.2 [locale.money.get.virtuals], para 3 says:

If pos or neg is empty, the sign component is optional, and if no sign is detected, the result is given the sign that corresponds to the source of the empty string.

The following objection has been raised:

A negative_sign of "" means "there is no way to write a negative sign" not "any null sequence is a negative sign, so it's always there when you look for it".

[Plum ref _222612Y32]

[ Kona (2007): Bill to provide proposed wording and interpretation of existing wording. ]

Proposed resolution:


669. Equivalent positive and negative signs in money_get

Section: 22.2.6.1.2 [locale.money.get.virtuals] Status: Open Submitter: Thomas Plum Date: 2007-04-16

View other active issues in [locale.money.get.virtuals].

View all other issues in [locale.money.get.virtuals].

View all issues with Open status.

Discussion:

22.2.6.1.2 [locale.money.get.virtuals], para 3 sentence 4 says:

If the first character of pos is equal to the first character of neg, or if both strings are empty, the result is given a positive sign.

One interpretation is that an input sequence must match either the positive pattern or the negative pattern, and then in either event it is interpreted as positive. The following objection has been raised:

The input can successfully match only a positive sign, so the negative pattern is an unsuccessful match.

[Plum ref _222612Y34, 222612Y51b]

[ Bill to provide proposed wording and interpretation of existing wording. ]

Proposed resolution:


670. money_base::pattern and space

Section: 22.2.6.3 [locale.moneypunct] Status: Open Submitter: Thomas Plum Date: 2007-04-16

View all issues with Open status.

Duplicate of: 836

Discussion:

22.2.6.3 [locale.moneypunct], para 2 says:

The value space indicates that at least one space is required at that position.

The following objection has been raised:

Whitespace is optional when matching space. (See 22.2.6.1.2 [locale.money.get.virtuals], para 2.)

[Plum ref _22263Y22]

[ Kona (2007): Bill to provide proposed wording. We agree that C++03 is ambiguous, and that we want C++0X to say "space" means 0 or more whitespace characters on input. ]

Proposed resolution:


671. precision of hexfloat

Section: 22.2.2.2.2 [facet.num.put.virtuals] Status: Open Submitter: John Salmon Date: 2007-04-20

View all other issues in [facet.num.put.virtuals].

View all issues with Open status.

Discussion:

I am trying to understand how TR1 supports hex float (%a) output.

As far as I can tell, it does so via the following:

8.15 Additions to header <locale> [tr.c99.locale]

In subclause 22.2.2.2.2 [facet.num.put.virtuals], Table 58 Floating-point conversions, after the line: floatfield == ios_base::scientific %E

add the two lines:

floatfield == ios_base::fixed | ios_base::scientific && !uppercase %a
floatfield == ios_base::fixed | ios_base::scientific %A 2

[Note: The additional requirements on print and scan functions, later in this clause, ensure that the print functions generate hexadecimal floating-point fields with a %a or %A conversion specifier, and that the scan functions match hexadecimal floating-point fields with a %g conversion specifier.  end note]

Following the thread, in 22.2.2.2.2 [facet.num.put.virtuals], we find:

For conversion from a floating-point type, if (flags & fixed) != 0 or if str.precision() > 0, then str.precision() is specified in the conversion specification.

This would seem to imply that when floatfield == fixed|scientific, the precision of the conversion specifier is to be taken from str.precision().  Is this really what's intended?  I sincerely hope that I'm either missing something or this is an oversight.  Please tell me that the committee did not intend to mandate that hex floats (and doubles) should by default be printed as if by %.6a.

[ Howard: I think the fundamental issue we overlooked was that with %f, %e, %g, the default precision was always 6.  With %a the default precision is not 6, it is infinity.  So for the first time, we need to distinguish between the default value of precision, and the precision value 6. ]
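For illustration, the difference is easy to see with the corresponding C conversion specifiers (the exact digits shown in the comments depend on the value and the implementation):

    #include <cstdio>

    int main() {
        double d = 1.0 / 3.0;
        std::printf("%a\n", d);    // default precision: exact, e.g. 0x1.5555555555555p-2
        std::printf("%.6a\n", d);  // precision 6, e.g. 0x1.555555p-2
        return 0;
    }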

Proposed resolution:

[ Kona (2007): Robert volunteers to propose wording. ]


675. Move assignment of containers

Section: 23.1 [container.requirements] Status: Review Submitter: Howard Hinnant Date: 2007-05-05

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with Review status.

Discussion:

James Hopkin pointed out to me that if vector<T> move assignment is O(1) (just a swap) then containers such as vector<shared_ptr<ostream>> might have the wrong semantics under move assignment when the source is not truly an rvalue, but a moved-from lvalue (destructors could run late).

vector<shared_ptr<ostream>> v1;
vector<shared_ptr<ostream>> v2;
...
v1 = v2;               // #1
v1 = std::move(v2);    // #2

Move semantics means not caring what happens to the source (v2 in this example). It doesn't mean not caring what happens to the target (v1). In the above example both assignments should have the same effect on v1. Any non-shared ostreams v1 owns before the assignment should be closed, whether v1 is undergoing copy assignment or move assignment.

This implies that the semantics of move assignment of a generic container should be "clear, then swap" instead of just swap. An alternative which could achieve the same effect would be to move-assign each element. In either case, the complexity of move assignment needs to be relaxed to O(v1.size()).

The performance hit of this change is not nearly as drastic as it sounds. In practice, the target of a move assignment has always just been move constructed or move assigned from. Therefore under "clear, then swap" semantics (in this common use case) we are still achieving O(1) complexity.
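A minimal sketch of the intended "clear, then swap" semantics (hypothetical helper, for illustration only):

    // Destroy the target's existing elements now, then take ownership of the
    // source's elements; complexity is O(target.size()) for the clear().
    template <class Container>
    Container& move_assign(Container& target, Container& source) {
        target.clear();
        target.swap(source);
        return target;
    }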

Proposed resolution:

Change 23.1 [container.requirements]:

Table 89: Container requirements
expression: a = rv;
return type: X&
operational semantics: All existing elements of a are either move assigned or destructed
assertion/note pre/post-condition: a shall be equal to the value that rv had before this construction (Note C)
complexity: linear

Notes: the algorithms swap(), equal() and lexicographical_compare() are defined in clause 25. Those entries marked "(Note A)" should have constant complexity. Those entries marked "(Note B)" have constant complexity unless allocator_propagate_never<X::allocator_type>::value is true, in which case they have linear complexity. Those entries marked "(Note C)" have constant complexity if a.get_allocator() == rv.get_allocator() or if either allocator_propagate_on_move_assignment<X::allocator_type>::value is true or allocator_propagate_on_copy_assignment<X::allocator_type>::value is true and linear complexity otherwise.

[ post Bellevue Howard adds: ]

This issue was voted to WP in Bellevue, but accidentally got stepped on by N2525, which was voted to WP simultaneously. Moving back to Open for the purpose of getting the wording right. The intent of this issue and N2525 are not in conflict.

[ post Sophia Antipolis Howard updated proposed wording: ]


676. Moving the unordered containers

Section: 23.4 [unord] Status: Open Submitter: Howard Hinnant Date: 2007-05-05

View other active issues in [unord].

View all other issues in [unord].

View all issues with Open status.

Discussion:

Move semantics are missing from the unordered containers. The proposed resolution below adds move-support consistent with N1858 and the current working draft.

The current proposed resolution simply lists the requirements for each function. These might better be hoisted into the requirements table for unordered associative containers. Furthermore, a mild reorganization of the container requirements could well be in order. This defect report is purposefully ignoring these larger issues and just focusing on getting the unordered containers "moved".

Proposed resolution:

Add to 23.4 [unord]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y); 

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

...

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Value, Hash, Pred, Alloc>& x, 
            unordered_set<Value, Hash, Pred, Alloc>& y); 

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Value, Hash, Pred, Alloc>& x, 
            unordered_set<Value, Hash, Pred, Alloc>&& y);

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Value, Hash, Pred, Alloc>&& x, 
            unordered_set<Value, Hash, Pred, Alloc>& y);

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Value, Hash, Pred, Alloc>& x, 
            unordered_multiset<Value, Hash, Pred, Alloc>& y);

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Value, Hash, Pred, Alloc>& x, 
            unordered_multiset<Value, Hash, Pred, Alloc>&& y);

template <class Value, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Value, Hash, Pred, Alloc>&& x, 
            unordered_multiset<Value, Hash, Pred, Alloc>& y);

unordered_map

Change 23.4.1 [unord.map]:

class unordered_map
{
    ...
    unordered_map(const unordered_map&);
    unordered_map(unordered_map&&);
    ~unordered_map();
    unordered_map& operator=(const unordered_map&);
    unordered_map& operator=(unordered_map&&);
    ...
    // modifiers 
    std::pair<iterator, bool> insert(const value_type& obj); 
    template <class P> pair<iterator, bool> insert(P&& obj);
    iterator       insert(iterator hint, const value_type& obj);
    template <class P> iterator       insert(iterator hint, P&& obj);
    const_iterator insert(const_iterator hint, const value_type& obj);
    template <class P> const_iterator insert(const_iterator hint, P&& obj);
    ...
    void swap(unordered_map&&);
    ...
    mapped_type& operator[](const key_type& k);
    mapped_type& operator[](key_type&& k);
    ...
};

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);

Add to 23.4.1.1 [unord.map.cnstr]:

template <class InputIterator>
  unordered_map(InputIterator f, InputIterator l, 
                size_type n = implementation-defined, 
                const hasher& hf = hasher(), 
                const key_equal& eql = key_equal(), 
                const allocator_type& a = allocator_type());

Requires: If the iterator's dereference operator returns an lvalue or a const rvalue pair<key_type, mapped_type>, then both key_type and mapped_type shall be CopyConstructible.

Add to 23.4.1.2 [unord.map.elem]:

mapped_type& operator[](const key_type& k);

...

Requires: key_type shall be CopyConstructible and mapped_type shall be DefaultConstructible.

mapped_type& operator[](key_type&& k);

Effects: If the unordered_map does not already contain an element whose key is equivalent to k , inserts the value std::pair<const key_type, mapped_type>(std::move(k), mapped_type()).

Requires: mapped_type shall be DefaultConstructible.

Returns: A reference to x.second, where x is the (unique) element whose key is equivalent to k.

Add new section [unord.map.modifiers]:

pair<iterator, bool> insert(const value_type& x);
template <class P> pair<iterator, bool> insert(P&& x);
iterator       insert(iterator hint, const value_type& x);
template <class P> iterator       insert(iterator hint, P&& x);
const_iterator insert(const_iterator hint, const value_type& x);
template <class P> const_iterator insert(const_iterator hint, P&& x);
template <class InputIterator>
  void insert(InputIterator first, InputIterator last);

Requires: Those signatures taking a const value_type& parameter require both the key_type and the mapped_type to be CopyConstructible.

P shall be convertible to value_type. If P is instantiated as a reference type, then the argument x is copied from. Otherwise x is considered to be an rvalue as it is converted to value_type and inserted into the unordered_map. Specifically, in such cases CopyConstructible is not required of key_type or mapped_type unless the conversion from P specifically requires it (e.g. if P is a tuple<const key_type, mapped_type>, then key_type must be CopyConstructible).

The signature taking InputIterator parameters requires CopyConstructible of both key_type and mapped_type if the dereferenced InputIterator returns an lvalue or const rvalue value_type.

Add to 23.4.1.3 [unord.map.swap]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>&& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_map<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_map<Key, T, Hash, Pred, Alloc>& y);

unordered_multimap

Change 23.4.2 [unord.multimap]:

class unordered_multimap
{
    ...
    unordered_multimap(const unordered_multimap&);
    unordered_multimap(unordered_multimap&&);
    ~unordered_multimap();
    unordered_multimap& operator=(const unordered_multimap&);
    unordered_multimap& operator=(unordered_multimap&&);
    ...
    // modifiers 
    iterator insert(const value_type& obj); 
    template <class P> iterator insert(P&& obj);
    iterator       insert(iterator hint, const value_type& obj);
    template <class P> iterator       insert(iterator hint, P&& obj);
    const_iterator insert(const_iterator hint, const value_type& obj);
    template <class P> const_iterator insert(const_iterator hint, P&& obj);
    ...
    void swap(unordered_multimap&&);
    ...
};

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

Add to 23.4.2.1 [unord.multimap.cnstr]:

template <class InputIterator>
  unordered_multimap(InputIterator f, InputIterator l, 
                size_type n = implementation-defined, 
                const hasher& hf = hasher(), 
                const key_equal& eql = key_equal(), 
                const allocator_type& a = allocator_type());

Requires: If the iterator's dereference operator returns an lvalue or a const rvalue pair<key_type, mapped_type>, then both key_type and mapped_type shall be CopyConstructible.

Add new section [unord.multimap.modifiers]:

iterator insert(const value_type& x);
template <class P> iterator       insert(P&& x);
iterator       insert(iterator hint, const value_type& x);
template <class P> iterator       insert(iterator hint, P&& x);
const_iterator insert(const_iterator hint, const value_type& x);
template <class P> const_iterator insert(const_iterator hint, P&& x);
template <class InputIterator>
  void insert(InputIterator first, InputIterator last);

Requires: Those signatures taking a const value_type& parameter require both the key_type and the mapped_type to be CopyConstructible.

P shall be convertible to value_type. If P is instantiated as a reference type, then the argument x is copied from. Otherwise x is considered to be an rvalue as it is converted to value_type and inserted into the unordered_multimap. Specifically, in such cases CopyConstructible is not required of key_type or mapped_type unless the conversion from P specifically requires it (e.g. if P is a tuple<const key_type, mapped_type>, then key_type must be CopyConstructible).

The signature taking InputIterator parameters requires CopyConstructible of both key_type and mapped_type if the dereferenced InputIterator returns an lvalue or const rvalue value_type.

Add to 23.4.2.2 [unord.multimap.swap]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>&& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multimap<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multimap<Key, T, Hash, Pred, Alloc>& y);

unordered_set

Change 23.4.3 [unord.set]:

class unordered_set
{
    ...
    unordered_set(const unordered_set&);
    unordered_set(unordered_set&&);
    ~unordered_set();
    unordered_set& operator=(const unordered_set&);
    unordered_set& operator=(unordered_set&&);
    ...
    // modifiers 
    std::pair<iterator, bool> insert(const value_type& obj); 
    pair<iterator, bool> insert(value_type&& obj);
    iterator       insert(iterator hint, const value_type& obj);
    iterator       insert(iterator hint, value_type&& obj);
    const_iterator insert(const_iterator hint, const value_type& obj);
    const_iterator insert(const_iterator hint, value_type&& obj);
    ...
    void swap(unordered_set&&);
    ...
};

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>& y);

Add to 23.4.3.1 [unord.set.cnstr]:

template <class InputIterator>
  unordered_set(InputIterator f, InputIterator l, 
                size_type n = implementation-defined, 
                const hasher& hf = hasher(), 
                const key_equal& eql = key_equal(), 
                const allocator_type& a = allocator_type());

Requires: If the iterator's dereference operator returns an lvalue or a const rvalue value_type, then the value_type shall be CopyConstructible.

Add new section [unord.set.modifiers]:

pair<iterator, bool> insert(const value_type& x);
pair<iterator, bool> insert(value_type&& x);
iterator       insert(iterator hint, const value_type& x);
iterator       insert(iterator hint, value_type&& x);
const_iterator insert(const_iterator hint, const value_type& x);
const_iterator insert(const_iterator hint, value_type&& x);
template <class InputIterator>
  void insert(InputIterator first, InputIterator last);

Requires: Those signatures taking a const value_type& parameter require the value_type to be CopyConstructible.

The signature taking InputIterator parameters requires CopyConstructible of value_type if the dereferenced InputIterator returns an lvalue or const rvalue value_type.

Add to 23.4.3.2 [unord.set.swap]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>&& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_set<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_set<Key, T, Hash, Pred, Alloc>& y);

unordered_multiset

Change 23.4.4 [unord.multiset]:

class unordered_multiset
{
    ...
    unordered_multiset(const unordered_multiset&);
    unordered_multiset(unordered_multiset&&);
    ~unordered_multiset();
    unordered_multiset& operator=(const unordered_multiset&);
    unordered_multiset& operator=(unordered_multiset&&);
    ...
    // modifiers 
    iterator insert(const value_type& obj); 
    iterator insert(value_type&& obj);
    iterator       insert(iterator hint, const value_type& obj);
    iterator       insert(iterator hint, value_type&& obj);
    const_iterator insert(const_iterator hint, const value_type& obj);
    const_iterator insert(const_iterator hint, value_type&& obj);
    ...
    void swap(unordered_multiset&&);
    ...
};

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>&& y);

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>& y);

Add to 23.4.4.1 [unord.multiset.cnstr]:

template <class InputIterator>
  unordered_multiset(InputIterator f, InputIterator l, 
                size_type n = implementation-defined, 
                const hasher& hf = hasher(), 
                const key_equal& eql = key_equal(), 
                const allocator_type& a = allocator_type());

Requires: If the iterator's dereference operator returns an lvalue or a const rvalue value_type, then the value_type shall be CopyConstructible.

Add new section [unord.multiset.modifiers]:

iterator insert(const value_type& x);
iterator insert(value_type&& x);
iterator       insert(iterator hint, const value_type& x);
iterator       insert(iterator hint, value_type&& x);
const_iterator insert(const_iterator hint, const value_type& x);
const_iterator insert(const_iterator hint, value_type&& x);
template <class InputIterator>
  void insert(InputIterator first, InputIterator last);

Requires: Those signatures taking a const value_type& parameter require the value_type to be CopyConstructible.

The signature taking InputIterator parameters requires CopyConstructible of value_type if the dereferenced InputIterator returns an lvalue or const rvalue value_type.

Add to 23.4.4.2 [unord.multiset.swap]:

template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>&& y);
template <class Key, class T, class Hash, class Pred, class Alloc> 
  void swap(unordered_multiset<Key, T, Hash, Pred, Alloc>&& x, 
            unordered_multiset<Key, T, Hash, Pred, Alloc>& y);

[ Voted to WP in Bellevue. ]

[ post Bellevue, Pete notes: ]

Please remind people who are reviewing issues to check that the text modifications match the current draft. Issue 676, for example, adds two overloads for unordered_map::insert taking a hint. One takes a const_iterator and returns a const_iterator, and the other takes an iterator and returns an iterator. This was correct at the time the issue was written, but was changed in Toronto so there is only one hint overload, taking a const_iterator and returning an iterator.

This issue is not ready. In addition to the relatively minor signature problem I mentioned earlier, it puts requirements in the wrong places. Instead of duplicating requirements throughout the template specifications, it should put them in the front matter that talks about requirements for unordered containers in general. This presentation problem is editorial, but I'm not willing to do the extensive rewrite that it requires. Please put it back into Open status.


688. reference_wrapper, cref unsafe, allow binding to rvalues

Section: 20.5.5.1 [refwrap.const] Status: Open Submitter: Peter Dimov Date: 2007-05-10

View all other issues in [refwrap.const].

View all issues with Open status.

Discussion:

A reference_wrapper can be constructed from an rvalue, either by using the constructor, or via cref (and ref in some corner cases). This leads to a dangling reference being stored into the reference_wrapper object. Now that we have a mechanism to detect an rvalue, we can fix them to disallow this source of undefined behavior.

Also please see the thread starting at c++std-lib-17398 for some good discussion on this subject.
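A small example of the problem (illustrative; with the deleted overloads proposed below, the marked line would become ill-formed instead of compiling into undefined behavior):

    #include <functional>

    int make_int() { return 42; }

    void demo() {
        // cref binds the reference_wrapper to the temporary returned by
        // make_int(); the temporary dies at the end of the full-expression,
        // so r is left dangling.
        std::reference_wrapper<const int> r = std::cref(make_int());
        int i = r;   // undefined behavior: reads through a dangling reference
        (void)i;
    }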

Proposed resolution:

In 20.5 [function.objects], add the following two signatures to the synopsis:

template <class T> void ref(const T&& t) = delete;
template <class T> void cref(const T&& t) = delete;

[ N2292 addresses the first part of the resolution but not the second. ]

[ Bellevue: Doug noticed problems with the current wording. ]

[ post Bellevue: Howard and Peter provided revised wording. ]

[ This resolution depends on a "favorable" resolution of CWG 606: that is, the "special deduction rule" is disabled with the const T&& pattern. ]


691. const_local_iterator cbegin, cend missing from TR1

Section: 23.4 [unord], TR1 6.3 [tr.hash] Status: Ready Submitter: Joaquín M López Muñoz Date: 2007-06-14

View other active issues in [unord].

View all other issues in [unord].

View all issues with Ready status.

Discussion:

The last version of TR1 does not include the following member functions for unordered containers:

const_local_iterator cbegin(size_type n) const;
const_local_iterator cend(size_type n) const;

which looks like an oversight to me. I've checked the TR1 issues lists and the latest working draft of the C++0x std (N2284) and haven't found any mention of these member functions or of their absence.

Is this really an oversight, or am I missing something?

Proposed resolution:

Add the following two rows to table 93 (unordered associative container requirements) in section 23.1.3 [unord.req]:

Unordered associative container requirements (in addition to container)
expression: b.cbegin(n)
return type: const_local_iterator
assertion/note pre/post-condition: n shall be in the range [0, bucket_count()). Note: [b.cbegin(n), b.cend(n)) is a valid range containing all of the elements in the nth bucket.
complexity: Constant

expression: b.cend(n)
return type: const_local_iterator
assertion/note pre/post-condition: n shall be in the range [0, bucket_count()).
complexity: Constant

Add to the synopsis in 23.4.1 [unord.map]:

const_local_iterator cbegin(size_type n) const;
const_local_iterator cend(size_type n) const;

Add to the synopsis in 23.4.2 [unord.multimap]:

const_local_iterator cbegin(size_type n) const;
const_local_iterator cend(size_type n) const;

Add to the synopsis in 23.4.3 [unord.set]:

const_local_iterator cbegin(size_type n) const;
const_local_iterator cend(size_type n) const;

Add to the synopsis in 23.4.4 [unord.multiset]:

const_local_iterator cbegin(size_type n) const;
const_local_iterator cend(size_type n) const;
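An illustrative use of the proposed members (assuming they are added as shown above):

    #include <cstddef>
    #include <unordered_set>

    std::size_t count_in_bucket(const std::unordered_set<int>& s,
                                std::unordered_set<int>::size_type n) {
        std::size_t count = 0;
        // [s.cbegin(n), s.cend(n)) is a valid range containing all of the
        // elements in the nth bucket; n shall be in [0, bucket_count()).
        for (std::unordered_set<int>::const_local_iterator it = s.cbegin(n);
             it != s.cend(n); ++it)
            ++count;
        return count;
    }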

692. get_money and put_money should be formatted I/O functions

Section: 27.6.4 [ext.manip] Status: Review Submitter: Martin Sebor Date: 2007-06-22

View other active issues in [ext.manip].

View all other issues in [ext.manip].

View all issues with Review status.

Discussion:

In a private email Bill Plauger notes:

I believe that the function that implements get_money [from N2072] should behave as a formatted input function, and the function that implements put_money should behave as a formatted output function. This has implications regarding the skipping of whitespace and the handling of errors, among other things.

The words don't say that right now and I'm far from convinced that such a change is editorial.

Martin's response:

I agree that the manipulators should handle exceptions the same way as formatted I/O functions do. The text in N2072 assumes so but the Returns clause explicitly omits exception handling for the sake of brevity. The spec should be clarified to that effect.

As for dealing with whitespace, I also agree it would make sense for the extractors and inserters involving the new manipulators to treat it the same way as formatted I/O.

Proposed resolution:

Add a new paragraph immediately above p4 of 27.6.4 [ext.manip] with the following text:

Effects: The expression in >> get_money(mon, intl) described below behaves as a formatted input function (as described in 27.6.1.2.1 [istream.formatted.reqmts]).

Also change p4 of 27.6.4 [ext.manip] as follows:

Returns: An object s of unspecified type such that if in is an object of type basic_istream<charT, traits> then the expression in >> get_money(mon, intl) behaves as a formatted input function that calls f(in, mon, intl). The function f can be defined as...
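Illustrative usage affected by the change (whether leading whitespace is skipped and how errors are reported depends on the manipulator behaving as a formatted input function):

    #include <iomanip>
    #include <sstream>

    void demo() {
        std::istringstream in("   1234");
        long double units = 0;
        in >> std::get_money(units);   // as a formatted input function, this
                                       // skips leading whitespace and reports
                                       // errors via the stream state
    }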

[ post Bellevue: ]

We recommend moving immediately to Review. We've looked at the issue and have a consensus that the proposed resolution is correct, but want an iostream expert to sign off. Alisdair has taken the action item to put this up on the reflector for possible movement by Howard to Tentatively Ready.

696. istream::operator>>(int&) broken

Section: 27.6.1.2.2 [istream.formatted.arithmetic] Status: New Submitter: Martin Sebor Date: 2007-06-23

View all other issues in [istream.formatted.arithmetic].

View all issues with New status.

Discussion:

From message c++std-lib-17897:

The code shown in 27.6.1.2.2 [istream.formatted.arithmetic] as the "as if" implementation of the two arithmetic extractors that don't have a corresponding num_get interface (i.e., the short and int overloads) is subtly buggy in how it deals with EOF, overflow, and other similar conditions (in addition to containing a few typos).

One problem is that if num_get::get() reaches EOF after reading in an otherwise valid value that exceeds the limits of the narrower type (but not LONG_MIN or LONG_MAX), it will set err to eofbit. Because of the if condition testing for (err == 0), the extractor won't set failbit (and will presumably return a bogus value to the caller).

Another problem with the code is that it never actually sets the argument to the extracted value. It can't happen after the call to setstate() since the function may throw, so we need to show when and how it's done (we can't just punt and say: "it happens afterwards"). However, it turns out that showing how it's done isn't quite so easy since the argument is normally left unchanged by the facet on error except when the error is due to a misplaced thousands separator, which causes failbit to be set but doesn't prevent the facet from storing the value.
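A sketch of an extractor that avoids both problems (illustrative only, not proposed wording): range-check the long value regardless of whether eofbit was set, and store the result before calling setstate(), which may throw.

    #include <istream>
    #include <iterator>
    #include <limits>
    #include <locale>

    std::istream& extract_int(std::istream& is, int& n) {
        std::istream::sentry ok(is);            // skips whitespace, checks state
        if (ok) {
            std::ios_base::iostate err = std::ios_base::goodbit;
            long lval = 0;
            std::use_facet<std::num_get<char> >(is.getloc())
                .get(is, std::istreambuf_iterator<char>(), is, err, lval);
            if (lval < std::numeric_limits<int>::min()
                    || lval > std::numeric_limits<int>::max())
                err |= std::ios_base::failbit;  // out of range for int: fail
                                                // even if eofbit is set in err
            else
                n = static_cast<int>(lval);     // store before setstate()
            if (err != std::ios_base::goodbit)
                is.setstate(err);               // may throw ios_base::failure
        }
        return is;
    }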

Proposed resolution:


698. system_error needs const char* constructors

Section: 19.4.5.1 [syserr.syserr.overview] Status: Review Submitter: Daniel Krügler Date: 2007-06-24

View all issues with Review status.

Discussion:

In 19.4.5.1 [syserr.syserr.overview] we have the class definition of std::system_error. In contrast to all exception classes, which are constructible with a what_arg string (see 19.1 [std.exceptions], or ios_base::failure in 27.4.2.1.1 [ios::failure]), only overloads with const string& are possible. For consistency with the re-designed remaining exception classes this class should also provide c'tors which accept a const char* what_arg string.

Please note that this proposed addition makes sense even considering the given implementation hint for what(), because what_arg is required to be set as what_arg of the base class runtime_error, which now has the additional c'tor overload accepting a const char*.

Proposed resolution:

This proposed wording assumes issue 832 has been accepted and applied to the working paper.

Change 19.4.5.1 [syserr.syserr.overview] Class system_error overview, as indicated:

public:
  system_error(error_code ec, const string& what_arg);
  system_error(error_code ec, const char* what_arg);
  system_error(error_code ec);
  system_error(int ev, const error_category* ecat,
      const string& what_arg);
  system_error(int ev, const error_category* ecat,
      const char* what_arg);
  system_error(int ev, const error_category* ecat);

To 19.4.5.2 [syserr.syserr.members] Class system_error members add:

system_error(error_code ec, const char* what_arg);

Effects: Constructs an object of class system_error.

Postconditions: code() == ec and strcmp(runtime_error::what(), what_arg) == 0.

system_error(int ev, const error_category* ecat, const char* what_arg);

Effects: Constructs an object of class system_error.

Postconditions: code() == error_code(ev, ecat) and strcmp(runtime_error::what(), what_arg) == 0.


701. assoc laguerre poly's

Section: TR1 5.2.1.1 [tr.num.sf.Lnm] Status: New Submitter: Christopher Crawford Date: 2007-06-30

View all issues with New status.

Discussion:

I see that the definition of the associated Laguerre polynomials in TR1 5.2.1.1 [tr.num.sf.Lnm] has been corrected since N1687. However, the draft standard only specifies ranks of integer value m, while the associated Laguerre polynomials are actually valid for real values of m > -1. In the case of non-integer values of m, the definition \(L_n^{(m)}(x) = \frac{1}{n!}\, e^{x} x^{-m} \frac{d^{n}}{dx^{n}}\left(e^{-x} x^{m+n}\right)\) must be used, which also holds for integer values of m. See Abramowitz & Stegun, 22.11.6 for the general case, and 22.5.16-17 for the integer case. In fact, fractional values are most commonly used in physics, for example m = +/- 1/2 to describe the harmonic oscillator in 1 dimension, and 1/2, 3/2, 5/2, ... in 3 dimensions.

If I am correct, the calculation of the more general case is no more difficult, and is in fact the function implemented in the GNU Scientific Library. I would urge you to consider upgrading the standard, either adding extra functions for real m or switching the current ones to double.
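For reference (a standard identity, with the binomial coefficient read via the gamma function for non-integer m), the same finite sum evaluates the polynomial for integer and real m alike, which is one way to see that the general case is no harder to compute:

\[
  L_n^{(m)}(x) = \sum_{k=0}^{n} (-1)^k \binom{n+m}{n-k} \frac{x^k}{k!}
\]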

Proposed resolution:


702. Restriction in associated Legendre functions

Section: TR1 5.2.1.2 [tr.num.sf.Plm] Status: New Submitter: Christopher Crawford Date: 2007-06-30

View all issues with New status.

Discussion:

One other small thing, in TR1 5.2.1.2 [tr.num.sf.Plm], the restriction should be |x| <= 1, not x >= 0.

Proposed resolution:


704. MoveAssignable requirement for container value type overly strict

Section: 23.1 [container.requirements] Status: Open Submitter: Howard Hinnant Date: 2007-05-20

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with Open status.

Discussion:

The move-related changes inadvertently overwrote the intent of 276. Issue 276 removed the requirement of CopyAssignable from most of the member functions of node-based containers. But the move-related changes unnecessarily introduced the MoveAssignable requirement for those members which used to require CopyAssignable.

We also discussed (c++std-lib-18722) the possibility of dropping MoveAssignable from some of the sequence requirements. Additionally the in-place construction work may further reduce requirements. For purposes of an easy reference, here are the minimum sequence requirements as I currently understand them. Those items in requirements table in the working draft which do not appear below have been purposefully omitted for brevity as they do not have any requirements of this nature. Some items which do not have any requirements of this nature are included below just to confirm that they were not omitted by mistake.

Container Requirements
X u(a): value_type must be CopyConstructible
X u(rv): array and containers with a propagate_never allocator require value_type to be MoveConstructible
a = u: Sequences require value_type to be CopyConstructible and CopyAssignable. Associative containers require value_type to be CopyConstructible.
a = rv: array requires value_type to be MoveAssignable. Sequences and Associative containers with propagate_never and propagate_on_copy_construction allocators require value_type to be MoveConstructible.
swap(a,u): array and containers with propagate_never and propagate_on_copy_construction allocators require value_type to be Swappable.

Sequence Requirements
X(n): value_type must be DefaultConstructible
X(n, t): value_type must be CopyConstructible
X(i, j): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the value_type must be MoveConstructible.
a.insert(p, t): The value_type must be CopyConstructible. The sequences vector and deque also require the value_type to be CopyAssignable.
a.insert(p, rv): The value_type must be MoveConstructible. The sequences vector and deque also require the value_type to be MoveAssignable.
a.insert(p, n, t): The value_type must be CopyConstructible. The sequences vector and deque also require the value_type to be CopyAssignable.
a.insert(p, i, j): If the iterators return an lvalue the value_type must be CopyConstructible. The sequences vector and deque also require the value_type to be CopyAssignable when the iterators return an lvalue. If the iterators return an rvalue the value_type must be MoveConstructible. The sequences vector and deque also require the value_type to be MoveAssignable when the iterators return an rvalue.
a.erase(p): The sequences vector and deque require the value_type to be MoveAssignable.
a.erase(q1, q2): The sequences vector and deque require the value_type to be MoveAssignable.
a.clear()
a.assign(i, j): If the iterators return an lvalue the value_type must be CopyConstructible and CopyAssignable. If the iterators return an rvalue the value_type must be MoveConstructible and MoveAssignable.
a.assign(n, t): The value_type must be CopyConstructible and CopyAssignable.
a.resize(n): The value_type must be DefaultConstructible. The sequence vector also requires the value_type to be MoveConstructible.
a.resize(n, t): The value_type must be CopyConstructible.

Optional Sequence Requirements
a.front()
a.back()
a.push_front(t): The value_type must be CopyConstructible.
a.push_front(rv): The value_type must be MoveConstructible.
a.push_back(t): The value_type must be CopyConstructible.
a.push_back(rv): The value_type must be MoveConstructible.
a.pop_front()
a.pop_back()
a[n]
a.at(n)

Associative Container Requirements
X(i, j): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the value_type must be MoveConstructible.
a_uniq.insert(t): The value_type must be CopyConstructible.
a_uniq.insert(rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a_eq.insert(t): The value_type must be CopyConstructible.
a_eq.insert(rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a.insert(p, t): The value_type must be CopyConstructible.
a.insert(p, rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a.insert(i, j): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the key_type and the mapped_type (if it exists) must be MoveConstructible.

Unordered Associative Container Requirements
X(i, j, n, hf, eq): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the value_type must be MoveConstructible.
a_uniq.insert(t): The value_type must be CopyConstructible.
a_uniq.insert(rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a_eq.insert(t): The value_type must be CopyConstructible.
a_eq.insert(rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a.insert(p, t): The value_type must be CopyConstructible.
a.insert(p, rv): The key_type and the mapped_type (if it exists) must be MoveConstructible.
a.insert(i, j): If the iterators return an lvalue the value_type must be CopyConstructible. If the iterators return an rvalue the key_type and the mapped_type (if it exists) must be MoveConstructible.

Miscellaneous Requirements
map[lvalue-key]: The key_type must be CopyConstructible. The mapped_type must be DefaultConstructible and MoveConstructible.
map[rvalue-key]: The key_type must be MoveConstructible. The mapped_type must be DefaultConstructible and MoveConstructible.
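
To illustrate the direction of the tables above, here is a hypothetical move-only, non-assignable element type stored in a node-based container; under the reduced requirements its insertion needs only MoveConstructible (the type and names are purely illustrative):

#include <list>

struct heavy {                       // MoveConstructible only
    heavy() {}
    heavy(heavy&&) {}
    heavy(const heavy&) = delete;
    heavy& operator=(const heavy&) = delete;
    heavy& operator=(heavy&&) = delete;
};

int main() {
    std::list<heavy> l;
    l.push_back(heavy());            // a.push_back(rv): MoveConstructible suffices
    l.emplace_back();                // in-place construction needs even less
}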

[ Kona (2007): Howard and Alan to update requirements table in issue with emplace signatures. ]

[ Bellevue: This should be handled as part of the concepts work. ]

Proposed resolution:


708. Locales need to be per thread and updated for POSIX changes

Section: 22 [localization] Status: Open Submitter: Peter Dimov Date: 2007-07-28

View all other issues in [localization].

View all issues with Open status.

Discussion:

The POSIX "Extended API Set Part 4,"

http://www.opengroup.org/sib/details.tpl?id=C065

introduces extensions to the C locale mechanism that allow multiple concurrent locales to be used in the same application by introducing a type locale_t that is very similar to std::locale, and a number of _l functions that make use of it.

The global locale (set by setlocale) is now specified to be per-process. If a thread does not call uselocale, the global locale is in effect for that thread. A thread can install a per-thread locale by using uselocale.

There is also a nice querylocale mechanism by which one can obtain the name (such as "de_DE") for a specific facet, even for combined locales, with no std::locale equivalent.

std::locale should be harmonized with the new POSIX locale_t mechanism and provide equivalents for uselocale and querylocale.
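
For reference, a rough sketch of the POSIX-side usage being described (assuming a platform that provides the Extended API Set Part 4 functions; this is C-level code, not std::locale, and querylocale is not shown):

#include <locale.h>

int main() {
    // Build a named locale object and install it for the calling thread only.
    locale_t de = newlocale(LC_ALL_MASK, "de_DE.UTF-8", (locale_t)0);
    if (de != (locale_t)0) {
        locale_t prev = uselocale(de);   // per-thread; the global locale is untouched
        // ... thread-local, locale-sensitive work here ...
        uselocale(prev);                 // restore the previous locale
        freelocale(de);
    }
}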

[ Kona (2007): Bill and Nick to provide wording. ]

Proposed resolution:


711. Contradiction in empty shared_ptr

Section: 20.6.12.2.5 [util.smartptr.shared.obs] Status: Open Submitter: Peter Dimov Date: 2007-08-24

View all other issues in [util.smartptr.shared.obs].

View all issues with Open status.

Discussion:

A discussion on comp.std.c++ has identified a contradiction in the shared_ptr specification. The note:

[ Note: this constructor allows creation of an empty shared_ptr instance with a non-NULL stored pointer. -end note ]

after the aliasing constructor

template<class Y> shared_ptr(shared_ptr<Y> const& r, T *p);

reflects the intent of N2351 to, well, allow the creation of an empty shared_ptr with a non-NULL stored pointer.

This is contradicted by the second sentence in the Returns clause of 20.6.12.2.5 [util.smartptr.shared.obs]:

T* get() const;

Returns: the stored pointer. Returns a null pointer if *this is empty.
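
A short example of the combination in question (purely illustrative): the aliasing constructor applied to an empty shared_ptr yields an object for which the two quoted sentences cannot both hold.

#include <memory>
#include <cassert>

int main() {
    std::shared_ptr<int> empty;              // empty, owns nothing
    int x = 42;
    std::shared_ptr<int> p(empty, &x);       // aliasing constructor: empty but non-null
    assert(p.use_count() == 0);              // shares ownership with 'empty', i.e. none
    assert(p.get() == &x);                   // stored pointer is non-null, contradicting
                                             // the second sentence of the Returns clause
}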

[ Bellevue: ]

Adopt option 1 and move to review, not ready.

There was a lot of confusion about what an empty shared_ptr is (the term isn't defined anywhere), and whether we have a good mental model for how one behaves. We think it might be possible to deduce what the definition should be, but the words just aren't there. We need to open an issue on the use of this undefined term. (The resolution of that issue might affect the resolution of issue 711.)

The LWG is getting more uncomfortable with the aliasing proposal (N2351) now that we realize some of its implications, and we need to keep an eye on it, but there isn't support for removing this feature at this time.

[ Sophia Antipolis: ]

We heard from Peter Dimov, who explained his reason for preferring solution 1.

Because it doesn't seem to add anything. It simply makes the behavior for p = 0 undefined. For programmers who don't create empty pointers with p = 0, there is no difference. Those who do insist on creating them presumably have a good reason, and it costs nothing for us to define the behavior in this case.

The aliasing constructor is sharp enough as it is, so "protecting" users doesn't make much sense in this particular case.

> Do you have a use case for r being empty and p being non-null?

I have received a few requests for it from "performance-conscious" people (you should be familiar with this mindset) who don't like the overhead of allocating and maintaining a control block when a null deleter is used to approximate a raw pointer. It is obviously an "at your own risk", low-level feature; essentially a raw pointer behind a shared_ptr facade.

We could not agree upon a resolution to the issue; some of us thought that Peter's description above is supporting an undesirable behavior.

Proposed resolution:

In keeping with the N2351 spirit, and obviously my preference, change 20.6.12.2.5 [util.smartptr.shared.obs]:

T* get() const;

Returns: the stored pointer. Returns a null pointer if *this is empty.

Alternative proposed resolution: (I won't be happy if we do this, but it's possible):

Change 20.6.12.2.1 [util.smartptr.shared.const]:

template<class Y> shared_ptr(shared_ptr<Y> const& r, T *p);

Requires: If r is empty, p shall be 0.

[ Note: this constructor allows creation of an empty shared_ptr instance with a non-NULL stored pointer. -- end note ]


713. sort() complexity is too lax

Section: 25.3.1.1 [sort] Status: Ready Submitter: Matt Austern Date: 2007-08-30

View all issues with Ready status.

Discussion:

The complexity of sort() is specified as "Approximately N log(N) (where N == last - first) comparisons on the average", with no worst-case complexity specified. The intention was to allow a median-of-three quicksort implementation, which is usually O(N log N) but can be quadratic for pathological inputs. However, there is no longer any reason to allow implementers the freedom to have a worst-case-quadratic sort algorithm. Implementers who want to use quicksort can use a variant like David Musser's "Introsort" (Software Practice and Experience 27:983-993, 1997), which is guaranteed to be O(N log N) in the worst case without incurring additional overhead in the average case. Most C++ library implementers already do this, and there is no reason not to guarantee it in the standard.
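
A rough sketch of the introsort idea referenced above (not standard wording, and not any particular library's implementation): quicksort with a recursion-depth budget, falling back to heapsort so the worst case stays O(N log N).

#include <algorithm>
#include <cmath>
#include <iterator>

template <class RandomIt>
void intro_sort_impl(RandomIt first, RandomIt last, int depth_budget) {
    typedef typename std::iterator_traits<RandomIt>::value_type T;
    if (last - first < 2)
        return;
    if (depth_budget == 0) {                        // pathological input: heapsort fallback
        std::make_heap(first, last);
        std::sort_heap(first, last);
        return;
    }
    T pivot = *(first + (last - first) / 2);
    RandomIt mid1 = std::partition(first, last,
                                   [&](const T& x) { return x < pivot; });
    RandomIt mid2 = std::partition(mid1, last,
                                   [&](const T& x) { return !(pivot < x); });
    intro_sort_impl(first, mid1, depth_budget - 1); // elements < pivot
    intro_sort_impl(mid2, last, depth_budget - 1);  // elements > pivot
}

template <class RandomIt>
void intro_sort(RandomIt first, RandomIt last) {
    int budget = first == last ? 0
               : 2 * static_cast<int>(std::log2(static_cast<double>(last - first)));
    intro_sort_impl(first, last, budget);
}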

Proposed resolution:

In 25.3.1.1 [sort], change the complexity to "O(N log N)", and remove footnote 266:

Complexity: Approximately O(N log(N)) (where N == last - first ) comparisons on the average.266)

266) If the worst case behavior is important stable_sort() (25.3.1.2) or partial_sort() (25.3.1.3) should be used.


714. search_n complexity is too lax

Section: 25.1.9 [alg.search] Status: Ready Submitter: Matt Austern Date: 2007-08-30

View all other issues in [alg.search].

View all issues with Ready status.

Discussion:

The complexity for search_n (25.1.9 [alg.search] par 7) is specified as "At most (last - first ) * count applications of the corresponding predicate if count is positive, or 0 otherwise." This is unnecessarily pessimistic. Regardless of the value of count, there is no reason to examine any element in the range more than once.
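
A minimal sketch showing why the tighter bound is achievable: a single forward scan that applies the predicate at most once per element (illustrative only, not proposed wording).

template <class ForwardIt, class Size, class T>
ForwardIt search_n_linear(ForwardIt first, ForwardIt last, Size count, const T& value) {
    if (count <= 0)
        return first;
    ForwardIt run_start = first;
    Size matched = 0;
    for (; first != last; ++first) {
        if (*first == value) {            // at most one comparison per element
            if (matched == 0)
                run_start = first;
            if (++matched == count)
                return run_start;
        } else {
            matched = 0;
        }
    }
    return last;
}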

Proposed resolution:

Change the complexity to "At most (last - first) applications of the corresponding predicate".

template<class ForwardIterator, class Size, class T> 
  ForwardIterator 
    search_n(ForwardIterator first , ForwardIterator last , Size count , 
             const T& value ); 

template<class ForwardIterator, class Size, class T, 
         class BinaryPredicate> 
  ForwardIterator 
    search_n(ForwardIterator first , ForwardIterator last , Size count , 
             const T& value , BinaryPredicate pred );

Complexity: At most (last - first ) * count applications of the corresponding predicate if count is positive, or 0 otherwise.


716. Production in [re.grammar] not actually modified

Section: 28.13 [re.grammar] Status: New Submitter: Stephan T. Lavavej Date: 2007-08-31

View all issues with New status.

Discussion:

TR1 7.13 [tr.re.grammar]/3 and C++0x WP 28.13 [re.grammar]/3 say:

The following productions within the ECMAScript grammar are modified as follows:

CharacterClass ::
[ [lookahead ∉ {^}] ClassRanges ]
[ ^ ClassRanges ]

This definition for CharacterClass appears to be exactly identical to that in ECMA-262.

Was an actual modification intended here and accidentally omitted, or was this production accidentally included?

Proposed resolution:

Remove this mention of the CharacterClass production.

CharacterClass ::
[ [lookahead ∉ {^}] ClassRanges ]
[ ^ ClassRanges ]

718. basic_string is not a sequence

Section: 21.3 [basic.string] Status: Open Submitter: Bo Persson Date: 2007-08-18

View other active issues in [basic.string].

View all other issues in [basic.string].

View all issues with Open status.

Discussion:

Paragraph 21.3 [basic.string]/3 states:

The class template basic_string conforms to the requirements for a Sequence (23.1.1) and for a Reversible Container (23.1).

First of all, 23.1.1 [sequence.reqmts] is no longer "Sequence" but "Sequence container". Secondly, after the recent changes to containers (emplace, push_back, const_iterator parameters to insert and erase), basic_string is not even close to conforming to the current requirements.

[ Bellevue: ]

General consensus is to suggest option 2.

Proposed resolution:

Remove this sentence, in recognition of the fact that basic_string is not just a vector-light for literal types, but something quite different, a string abstraction in its own right.


719. std::is_literal type traits should be provided

Section: 20.4 [meta] Status: Open Submitter: Daniel Krügler Date: 2007-08-25

View all other issues in [meta].

View all issues with Open status.

Discussion:

Since the inclusion of constexpr in the standard draft N2369 we have a new type category "literal", which is defined in 3.9 [basic.types]/p.11:

-11- A type is a literal type if it is:

I strongly suggest that the standard provide a type trait for literal types in 20.4.4.3 [meta.unary.prop], for several reasons:

  1. To keep the traits in sync with existing types.
  2. I see many reasons for programmers to use this trait in template code to provide optimized template definitions for these types, see below.
  3. A user-provided definition of this trait is practically impossible to write portably.

The special problem with reason (3) is that I don't currently see a way to portably test the condition for literal class types:

Here follows a simple example to demonstrate its usefulness:

template <typename T>
constexpr typename std::enable_if<std::is_literal<T>::value, T>::type
abs(T x) {
  return x < T() ? -x : x;
}

template <typename T>
typename std::enable_if<!std::is_literal<T>::value, T>::type
abs(const T& x) {
  return x < T() ? -x : x;
}

Here we have the possibility to provide a general abs function template that can be used in ICEs (integral constant expressions) if its argument is a literal type whose value is a constant expression; otherwise we have an optimized version for arguments which are expensive to copy and therefore need arguments of reference type (instead of const T& we could decide to use T&&, but that is another issue).

[ Alisdair is considering preparing a paper listing a number of missing type traits, and feels that it might be useful to handle them all together rather than piecemeal. This would affect issue 719 and 750. These two issues should move to OPEN pending AM paper on type traits. ]

Proposed resolution:

In 20.4.2 [meta.type.synop] in the group "type properties", just below the line

template <class T> struct is_pod;

add a new one:

template <class T> struct is_literal;

In 20.4.4.3 [meta.unary.prop], table Type Property Predicates, just below the line for the is_pod property add a new line:

Template: template <class T> struct is_literal;
Condition: T is a literal type (3.9)
Preconditions: T shall be a complete type, an array of unknown bound, or (possibly cv-qualified) void.

720. Omissions in constexpr usages

Section: 23.2.1 [array], 23.3.5 [template.bitset] Status: Ready Submitter: Daniel Krügler Date: 2007-08-25

View other active issues in [array].

View all other issues in [array].

View all issues with Ready status.

Discussion:

  1. The member function bool array<T,N>::empty() const should be a constexpr because this is easy to prove and to implement following its operational semantics defined by Table 87 (Container requirements), which says: a.size() == 0 (see the sketch after this list).
  2. The member function bool bitset<N>::test() const must be a constexpr (otherwise it would violate the specification of constexpr bitset<N>::operator[](size_t) const, because its return clause delegates to test()).
  3. I wonder how the constructor bitset<N>::bitset(unsigned long) can be declared as a constexpr. Current implementations usually have no such bitset c'tor which would fulfill the requirements of a constexpr c'tor, because they have a non-empty c'tor body that typically contains for-loops or memcpy to compute the initialisation. What have I overlooked here?
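
A minimal sketch (toy classes with assumed shapes, not the actual library classes) of why points 1 and 2 are implementable as constexpr, and why point 3 is harder:

#include <cstddef>

template <class T, std::size_t N>
struct toy_array {
    T elems[N ? N : 1];
    constexpr bool empty() const { return N == 0; }       // trivially a constant expression
};

template <std::size_t N>
struct toy_bitset {
    unsigned long bits;                                    // toy representation: a single word
    constexpr bool test(std::size_t pos) const { return (bits >> pos) & 1ul; }
    constexpr bool operator[](std::size_t pos) const { return test(pos); }  // needs constexpr test()
    // A constexpr toy_bitset(unsigned long) is easy for this single-word toy, but a real
    // multi-word bitset needs loops/memcpy in its constructor, which is point 3 above.
};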

[ Sophia Antipolis: ]

We handle this as two parts

  1. The proposed resolution is correct; move to ready.
  2. The issue points out a real problem, but the issue is larger than just this solution. We believe a paper is needed, applying the full new features of C++ (including extensible literals) to update std::bitset. We note that we do not consider this new work, and that it should be handled by the Library Working Group.

In order to have a consistent working paper, Alisdair and Daniel produced a new wording for the resolution.

Proposed resolution:

  1. In the class template definition of 23.2.1 [array]/p. 3 change

    constexpr bool empty() const;
    
  2. In the class template definition of 23.3.5 [template.bitset]/p. 1 change

    constexpr bool test(size_t pos ) const;
    

    and in 23.3.5.2 [bitset.members] change

    constexpr bool test(size_t pos ) const;
    

721. wstring_convert inconsistencies

Section: 22.1.3.2.2 [conversions.string] Status: New Submitter: Bo Persson Date: 2007-08-27

View all issues with New status.

Discussion:

Paragraph 3 says that the Codecvt template parameter shall meet the requirements of std::codecvt, even though std::codecvt itself cannot be used (because of a protected destructor).

How are we going to explain this code to beginning programmers?

#include <locale>     // std::codecvt, std::wstring_convert
#include <cwchar>     // std::mbstate_t

template<class I, class E, class S>
struct codecvt : std::codecvt<I, E, S>
{
    ~codecvt()
    { }
};

int main()
{
    std::wstring_convert<codecvt<wchar_t, char, std::mbstate_t> > compiles_ok;

    std::wstring_convert<std::codecvt<wchar_t, char, std::mbstate_t> >   not_ok; // ill-formed: protected destructor
}

Proposed resolution:


723. basic_regex should be moveable

Section: 28.8 [re.regex] Status: Open Submitter: Daniel Krügler Date: 2007-08-29

View all other issues in [re.regex].

View all issues with Open status.

Discussion:

According to the current state of the standard draft, the class template basic_regex, as described in 28.8 [re.regex]/3, is neither MoveConstructible nor MoveAssignable. IMO it should be, because typical regex state machines tend to have a rather large data quantum, and I have seen several use cases where a factory function returns regex values, which would take advantage of movability.
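
A small illustration of the factory use case mentioned above (hypothetical function, assuming the move members proposed below exist):

#include <regex>
#include <string>

// Returning by value: with a move constructor the potentially large compiled
// state machine is transferred rather than copied.
std::regex make_keyword_regex(const std::string& keyword) {
    return std::regex("\\b" + keyword + "\\b");
}

int main() {
    std::regex kw = make_keyword_regex("while");   // move (or elision), not a deep copy
}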

[ Sophia Antipolis: ]

Needs wording for the semantics, the idea is agreed upon.

Proposed resolution:

  1. In the header <regex> synopsis 28.4 [re.syn], just below the function template swap add two further overloads:

    template <class charT, class traits> 
      void swap(basic_regex<charT, traits>& e1,  basic_regex<charT, traits>& e2);
    template <class charT, class traits>
      void swap(basic_regex<charT, traits>&& e1, basic_regex<charT, traits>& e2);
    template <class charT, class traits>
      void swap(basic_regex<charT, traits>& e1,  basic_regex<charT, traits>&& e2);
    

    In the class definition of basic_regex, just below 28.8 [re.regex]/3, perform the following changes:

  2. Just after the copy c'tor:

    basic_regex(basic_regex&&);
    
  3. Just after the copy-assignment op.:

    basic_regex& operator=(basic_regex&&);
    
  4. Just after the first assign overload insert:

    basic_regex& assign(basic_regex&& that);
    
  5. Change the current swap function to read:

    void swap(basic_regex&&);
    
  6. In 28.8.2 [re.regex.construct], just below the copy c'tor add a corresponding member definition of:

    basic_regex(basic_regex&&);
    
  7. Also in 28.8.2 [re.regex.construct], just below the copy assignment c'tor add a corresponding member definition of:

    basic_regex& operator=(basic_regex&&);
    
  8. In 28.8.3 [re.regex.assign], just below the first assign overload add a corresponding member definition of:

    basic_regex& assign(basic_regex&& that);
    
  9. In 28.8.6 [re.regex.swap], change the signature of swap to say:

    void swap(basic_regex&& e);
    
  10. In 28.8.7.1 [re.regex.nmswap], just below the single binary swap function, add the two missing overloads:

    template <class charT, class traits>
      void swap(basic_regex<charT, traits>&& e1, basic_regex<charT, traits>& e2);
    template <class charT, class traits>
      void swap(basic_regex<charT, traits>& e1, basic_regex<charT, traits>&& e2);
    

Of course, corresponding proper standardese would be needed to describe these additions.


724. DefaultConstructible is not defined

Section: 20.1.1 [utility.arg.requirements] Status: Open Submitter: Pablo Halpern Date: 2007-09-12

View other active issues in [utility.arg.requirements].

View all other issues in [utility.arg.requirements].

View all issues with Open status.

Discussion:

The DefaultConstructible requirement is referenced in several places in the August 2007 working draft N2369, but is not defined anywhere.

[ Bellevue: ]

Walking into the default/value-initialization mess...

Why two lines? Because we need both expressions to be valid.

AJM not sure what the phrase "default constructed" means. This is unfortunate, as the phrase is already used 24 times in the library!

Example: const int would not accept first line, but will accept the second.
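
In code, the example mentioned above (illustrative only):

typedef const int T;

int main() {
    // T t;      // first expression in the proposed table: ill-formed for const int (uninitialized const)
    T u = T();   // second expression: ok, value-initialized temporary
    (void)u;
}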

This is an issue that must be solved by concepts, but we might need to solve it independently first.

It seems that the requirement is that the syntax in the proposed first column is valid, but it is not clear what semantics we need.

A table where there is no post-condition seems odd, but appears to sum up our position best.

At a minimum an object is declared and is destructible.

Move to open, as no-one happy to produce wording on the fly.

Proposed resolution:

In section 20.1.1 [utility.arg.requirements], before table 33, add the following table:

Table 33: DefaultConstructible requirements

expression

post-condition

T t;
T()

T is default constructed.


726. Missing regex_replace() overloads

Section: 28.11.4 [re.alg.replace] Status: Open Submitter: Stephan T. Lavavej Date: 2007-09-22

View other active issues in [re.alg.replace].

View all other issues in [re.alg.replace].

View all issues with Open status.

Discussion:

Two overloads of regex_replace() are currently provided:

template <class OutputIterator, class BidirectionalIterator, 
    class traits, class charT> 
  OutputIterator 
  regex_replace(OutputIterator out, 
                BidirectionalIterator first, BidirectionalIterator last, 
                const basic_regex<charT, traits>& e, 
                const basic_string<charT>& fmt, 
                regex_constants::match_flag_type flags = 
                  regex_constants::match_default);
 
template <class traits, class charT> 
  basic_string<charT> 
  regex_replace(const basic_string<charT>& s, 
                const basic_regex<charT, traits>& e, 
                const basic_string<charT>& fmt, 
                regex_constants::match_flag_type flags = 
                  regex_constants::match_default);
  1. Overloads taking const charT * are provided for regex_match() and regex_search(), but not regex_replace(). This is inconsistent.
  2. The absence of const charT * overloads prevents ordinary-looking code from compiling, such as:

    const string s("kitten");
    const regex r("en");
    cout << regex_replace(s, r, "y") << endl;
    

    The compiler error message will be something like "could not deduce template argument for 'const std::basic_string<_Elem> &' from 'const char[1]'".

    Users expect that anything taking a basic_string<charT> can also take a const charT *. In their own code, when they write a function taking std::string (or std::wstring), they can pass a const char * (or const wchar_t *), thanks to basic_string's implicit constructor. Because the regex algorithms are templated on charT, they can't rely on basic_string's implicit constructor (as the compiler error message indicates, template argument deduction fails first).

    If a user figures out what the compiler error message means, workarounds are available - but they are all verbose. Explicit template arguments could be given to regex_replace(), allowing basic_string's implicit constructor to be invoked - but charT is the last template argument, not the first, so this would be extremely verbose. Therefore, constructing a basic_string from each C string is the simplest workaround.

  3. There is an efficiency consideration: constructing basic_strings can impose performance costs that could be avoided by a library implementation taking C strings and dealing with them directly. (Currently, for replacement sources, C strings can be converted into iterator pairs at the cost of verbosity, but for format strings, there is no way to avoid constructing a basic_string.)

[ Sophia Antipolis: ]

We note that Boost already has these overloads. However, the proposed wording is provided only for 28.11.4 [re.alg.replace]; wording is needed for the synopsis as well. We also note that this has impact on match_results::format, which may require further overloads.

Proposed resolution:

Provide additional overloads for regex_replace(): one additional overload of the iterator-based form (taking const charT* fmt), and three additional overloads of the convenience form (one taking const charT* str, another taking const charT* fmt, and the third taking both const charT* str and const charT* fmt). 28.11.4 [re.alg.replace]:

template <class OutputIterator, class BidirectionalIterator, 
    class traits, class charT> 
  OutputIterator 
  regex_replace(OutputIterator out, 
                BidirectionalIterator first, BidirectionalIterator last, 
                const basic_regex<charT, traits>& e, 
                const basic_string<charT>& fmt, 
                regex_constants::match_flag_type flags = 
                  regex_constants::match_default);

template <class OutputIterator, class BidirectionalIterator, 
    class traits, class charT> 
  OutputIterator 
  regex_replace(OutputIterator out, 
                BidirectionalIterator first, BidirectionalIterator last, 
                const basic_regex<charT, traits>& e, 
                const charT* fmt, 
                regex_constants::match_flag_type flags = 
                  regex_constants::match_default);

...

template <class traits, class charT> 
  basic_string<charT> 
  regex_replace(const basic_string<charT>& s, 
                const basic_regex<charT, traits>& e, 
                const basic_string<charT>& fmt, 
                regex_constants::match_flag_type flags = 
                  regex_constants::match_default);

template <class traits, class charT> 
  basic_string<charT> 
  regex_replace(const basic_string<charT>& s, 
                const basic_regex<charT, traits>& e, 
                const charT* fmt, 
                regex_constants::match_flag_type flags = 
                  regex_constants::match_default);

template <class traits, class charT> 
  basic_string<charT> 
  regex_replace(const charT* s, 
                const basic_regex<charT, traits>& e, 
                const basic_string<charT>& fmt, 
                regex_constants::match_flag_type flags = 
                  regex_constants::match_default);

template <class traits, class charT> 
  basic_string<charT> 
  regex_replace(const charT* s, 
                const basic_regex<charT, traits>& e, 
                const charT* fmt, 
                regex_constants::match_flag_type flags = 
                  regex_constants::match_default);

727. regex_replace() doesn't accept basic_strings with custom traits and allocators

Section: 28.11.4 [re.alg.replace] Status: New Submitter: Stephan T. Lavavej Date: 2007-09-22

View other active issues in [re.alg.replace].

View all other issues in [re.alg.replace].

View all issues with New status.

Discussion:

regex_match() and regex_search() take const basic_string<charT, ST, SA>&. regex_replace() takes const basic_string<charT>&. This prevents regex_replace() from accepting basic_strings with custom traits and allocators.

Proposed resolution:

Overloads of regex_replace() taking basic_string should be additionally templated on class ST, class SA and take const basic_string<charT, ST, SA>&. Consistency with regex_match() and regex_search() would place class ST, class SA as the first template arguments; compatibility with existing code using TR1 and giving explicit template arguments to regex_replace() would place class ST, class SA as the last template arguments.


728. Problem in [rand.eng.mers]/6

Section: 26.4.3.2 [rand.eng.mers] Status: Ready Submitter: Stephan Tolksdorf Date: 2007-09-21

View all other issues in [rand.eng.mers].

View all issues with Ready status.

Discussion:

The mersenne_twister_engine is required to use a seeding method that is given as an algorithm parameterized over the number of bits W. I doubt whether the given generalization of an algorithm that was originally developed only for unsigned 32-bit integers is appropriate for other bit widths. For instance, W could theoretically be 16 and UIntType a 16-bit integer, in which case the given multiplier would not fit into the UIntType. Moreover, T. Nishimura and M. Matsumoto have chosen a different multiplier for their 64-bit Mersenne Twister [reference].
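
For concreteness, a sketch of the seeding recurrence under discussion, instantiated for W = 64 with the multiplier used in the reference 64-bit implementation (the constant, shift, and state size below are stated as assumptions, not quoted from the draft):

#include <cstddef>
#include <cstdint>
#include <vector>

// x_i = f * (x_{i-1} xor (x_{i-1} >> (W - 2))) + i, with W = 64 and
// f = 6364136223846793005 (the reference 32-bit scheme uses f = 1812433253 instead).
std::vector<std::uint64_t> seed_state_64(std::uint64_t seed, std::size_t n = 312) {
    std::vector<std::uint64_t> x(n);
    x[0] = seed;
    for (std::size_t i = 1; i < n; ++i)
        x[i] = 6364136223846793005ULL * (x[i - 1] ^ (x[i - 1] >> 62)) + i;
    return x;
}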

I see two possible resolutions:

  1. Restrict the parameter W of the mersenne_twister_engine template to values of 32 or 64 and use the multiplier from [the above reference] for the 64-bit case (my preference)
  2. Interpret the state array for any W as a 32-bit array of appropriate length (and a specified byte order) and always employ the 32-bit algorithm for seeding

See N2424 for further discussion.

[ Bellevue: ]

Stephan Tolksdorf has additional comments on N2424. He comments: "there is a typo in the required behaviour for mt19937_64: It should be the 10000th (not 100000th) invocation whose value is given, and the value should be 9981545732273789042 (not 14002232017267485025)." These values need checking.

Take the proposed recommendation in N2424 and move to REVIEW.

Proposed resolution:

See N2424 for the proposed resolution.

[ Stephan Tolksdorf adds pre-Bellevue: ]

I support the proposed resolution in N2424, but there is a typo in the required behaviour for mt19937_64: It should be the 10000th (not 100000th) invocation whose value is given, and the value should be 9981545732273789042 (not 14002232017267485025). The change to para. 8 proposed by Charles Karney should also be included in the proposed wording.

[ Sophia Antipolis: ]

Note the main part of the issue is resolved by N2424.

732. Defect in [rand.dist.samp.genpdf]

Section: 26.4.8.5.3 [rand.dist.samp.genpdf] Status: Open Submitter: Stephan Tolksdorf Date: 2007-09-21

View all other issues in [rand.dist.samp.genpdf].

View all issues with Open status.

Duplicate of: 795

Discussion:

26.4.8.5.3 [rand.dist.samp.genpdf] describes the interface for a distribution template that is meant to simulate random numbers from any general distribution given only the density and the support of the distribution. I'm not aware of any general purpose algorithm that would be capable of correctly and efficiently implementing the described functionality. From what I know, this is essentially an unsolved research problem. Existing algorithms either require more knowledge about the distribution and the problem domain or work only under very limited circumstances. Even the state of the art special purpose library UNU.RAN does not solve the problem in full generality, and in any case, testing and customer support for such a library feature would be a nightmare.

Possible resolution: For these reasons, I propose to delete section 26.4.8.5.3 [rand.dist.samp.genpdf].

[ Bellevue: ]

Disagreement persists.

Objection to this issue is that this function takes a general functor. The general approach would be to normalize this function, integrate it, and take the inverse of the integral, which is not possible in general. An example function is sin(1+n*x) -- for any spatial frequency that the implementor chooses, there is a value of n that renders that choice arbitrarily erroneous.

Correction: The formula above should instead read 1+sin(n*x).

Objector proposes the following possible compromise positions:

Proposed resolution:

See N2424 for the proposed resolution.


734. Unnecessary restriction in [rand.dist.norm.chisq]

Section: 26.4.8.4.3 [rand.dist.norm.chisq] Status: Review Submitter: Stephan Tolksdorf Date: 2007-09-21

View all issues with Review status.

Discussion:

chi_squared_distribution, fisher_f_distribution and student_t_distribution have parameters for the "degrees of freedom" n and m that are specified as integers. For the following two reasons this is an unnecessary restriction: First, in many applications such as Bayesian inference or Monte Carlo simulations it is more convenient to treat the respective parameters as continuous variables. Second, the standard non-naive algorithms (i.e. O(1) algorithms) for simulating from these distributions work with floating-point parameters anyway (all three distributions could be easily implemented using the Gamma distribution, for instance).
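
As an illustration of the second point, a sketch using the <random> gamma distribution (written against the eventual std::gamma_distribution shape/scale interface; treat the exact parameterization as an assumption): a chi-squared variate with real-valued n degrees of freedom is simply a Gamma(n/2, 2) variate.

#include <random>

double chi_squared_variate(std::mt19937& gen, double n) {
    // shape = n/2, scale = 2; n does not need to be an integer
    std::gamma_distribution<double> gamma(n / 2.0, 2.0);
    return gamma(gen);
}

int main() {
    std::mt19937 gen(42);
    double x = chi_squared_variate(gen, 2.5);   // "2.5 degrees of freedom"
    (void)x;
}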

Similar arguments could in principle be made for the parameters t and k of the discrete binomial_distribution and negative_binomial_distribution, though in both cases continuous parameters are less frequently used in practice and in case of the binomial_distribution the implementation would be significantly complicated by a non-discrete parameter (in most implementations one would need an approximation of the log-gamma function instead of just the log-factorial function).

Possible resolution: For these reasons, I propose to change the type of the respective parameters to double.

[ Bellevue: ]

In N2424. Not wildly enthusiastic, not really felt necessary. Less frequently used in practice. Not terribly bad either. Move to OPEN.

[ Sophia Antipolis: ]

Marc Paterno: The generalizations were explicitly left out when designing the facility. It's harder to test.

Marc Paterno: Ask implementers whether floating-point is a significant burden.

Alisdair: It's neater to do it now, do ask Bill Plauger.

Disposition: move to review with the option for "NAD" if it's not straightforward to implement; unanimous consent.

Proposed resolution:

See N2424 for the proposed resolution.

[ Stephan Tolksdorf adds pre-Bellevue: ]

In 26.4.8.4.3 [rand.dist.norm.chisq]:

Delete ", where n is a positive integer" in the first paragraph.

Replace both occurrences of "explicit chi_squared_distribution(int n = 1);" with "explicit chi_squared_distribution(RealType n = 1);".

Replace both occurrences of "int n() const;" with "RealType n() const;".

In 26.4.8.4.5 [rand.dist.norm.f]:

Delete ", where m and n are positive integers" in the first paragraph.

Replace both occurrences of

explicit fisher_f_distribution(int m = 1, int n = 1);

with

explicit fisher_f_distribution(RealType m = 1, RealType n = 1);

Replace both occurrences of "int m() const;" with "RealType m() const;".

Replace both occurrences of "int n() const;" with "RealType n() const;".

In 26.4.8.4.6 [rand.dist.norm.t]:

Delete ", where n is a positive integer" in the first paragraph.

Replace both occurrences of "explicit student_t_distribution(int n = 1);" with "explicit student_t_distribution(RealType n = 1);".

Replace both occurrences of "int n() const;" with "RealType n() const;".


742. Enabling swap for proxy iterators

Section: 20.1.1 [utility.arg.requirements] Status: Open Submitter: Howard Hinnant Date: 2007-10-10

View other active issues in [utility.arg.requirements].

View all other issues in [utility.arg.requirements].

View all issues with Open status.

Discussion:

This issue was split from 672. 672 now just deals with changing the requirements of T in the Swappable requirement from CopyConstructible and CopyAssignable to MoveConstructible and MoveAssignable.

This issue seeks to widen the Swappable requirement to support proxy iterators. Here is example code:

namespace Mine {

template <class T>
struct proxy {...};

template <class T>
struct proxied_iterator
{
   typedef T value_type;
   typedef proxy<T> reference;
   reference operator*() const;
   ...
};

struct A
{
   // heavy type, has an optimized swap, maybe isn't even copyable or movable, just swappable
   void swap(A&);
   ...
};

void swap(A&, A&);
void swap(proxy<A>, A&);
void swap(A&, proxy<A>);
void swap(proxy<A>, proxy<A>);

}  // Mine

...

Mine::proxied_iterator<Mine::A> i1(...);
Mine::A a;
swap(*i1, a);

The key point to note in the above code is that in the call to swap, *i1 and a are different types (currently types can only be Swappable with the same type). A secondary point is that to support proxies, one must be able to pass rvalues to swap. But note that I am not stating that the general purpose std::swap should accept rvalues! Only that overloaded swaps, as in the example above, be allowed to take rvalues.

That is, no standard library code needs to change. We simply need to have a more flexible definition of Swappable.

[ Bellevue: ]

While we believe Concepts work will define a swappable concept, we should still resolve this issue if possible to give guidance to the Concepts work.

Would an ambiguous swap function in two namespaces found by ADL break this wording? Suggest that the phrase "valid expression" means such a pair of types would still not be swappable.

Motivation is proxy-iterators, but facility is considerably more general. Are we happy going so far?

We think this wording is probably correct and probably an improvement on what's there in the WP. On the other hand, what's already there in the WP is awfully complicated. Why do we need the two bullet points? They're too implementation-centric. They don't add anything to the semantics of what swap() means, which is there in the post-condition. What's wrong with saying that types are swappable if you can call swap() and it satisfies the semantics of swapping?

Proposed resolution:

Change 20.1.1 [utility.arg.requirements]:

-1- The template definitions in the C++ Standard Library refer to various named requirements whose details are set out in tables 31-38. In these tables, T and V are is a types to be supplied by a C++ program instantiating a template; a, b, and c are values of type const T; s and t are modifiable lvalues of type T; u is a value of type (possibly const) T; and rv is a non-const rvalue of type T; w is a value of type T; and v is a value of type V.

Table 37: Swappable requirements [swappable]
expression: swap(sw,tv)
return type: void
post-condition: tw has the value originally held by uv, and uv has the value originally held by tw

The Swappable requirement is met by satisfying one or more of the following conditions:

  • T is Swappable if T and V are the same type and T satisfies the CopyConstructible MoveConstructible requirements (Table 34 33) and the CopyAssignable MoveAssignable requirements (Table 36 35);
  • T is Swappable with V if a namespace scope function named swap exists in the same namespace as the definition of T or V, such that the expression swap(tw,u v) is valid and has the semantics described in this table.

747. We have 3 separate type traits to identify classes supporting no-throw operations

Section: 20.4.4.3 [meta.unary.prop] Status: Open Submitter: Alisdair Meredith Date: 2007-10-10

View all other issues in [meta.unary.prop].

View all issues with Open status.

Discussion:

We have 3 separate type traits to identify classes supporting no-throw operations, which are very useful when trying to provide exception safety guarantees. However, I'm not entirely clear on what the current wording requires of a conforming implementation. To quote from has_nothrow_default_constructor:

or T is a class type with a default constructor that is known not to throw any exceptions

What level of magic do we expect to deduce if this is known?

E.g.

struct test{
 int x;
 test() : x() {}
};

Should I expect a conforming compiler to assert( has_nothrow_constructor<test>::value )

Is this a QoI issue?

Should I expect to 'know' only if-and-only-if there is an inline definition available?

Should I never expect that to be true, and insist that the user supplies an empty throw spec if they want to assert the no-throw guarantee?

It would be helpful to maybe have a footnote explaining what is required, but right now I don't know what to suggest putting in the footnote.

(Agreement since then is that trivial operations and explicit no-throws are required; it remains open whether QoI should be allowed to detect further cases.)
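
Restating the question in code (a sketch only: the trait name follows the working draft quoted above, and a given implementation may spell it differently; whether the commented-out line is required to hold is exactly the open question):

#include <type_traits>

struct trivial { int x; };                          // trivial default constructor
struct explicit_nothrow { explicit_nothrow() throw() {} };
struct test { int x; test() : x() {} };             // inline, obviously can't throw, but no throw-spec

// Agreed: these two are required to be true (trait name as in the quoted draft).
static_assert(std::has_nothrow_default_constructor<trivial>::value, "trivial");
static_assert(std::has_nothrow_default_constructor<explicit_nothrow>::value, "explicit no-throw");

// Open (QoI?): is an implementation required, or merely allowed, to report true here?
// static_assert(std::has_nothrow_default_constructor<test>::value, "deduced no-throw");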

[ Bellevue: ]

This looks like a QoI issue. In the case of trivial and nothrow it is known. Static analysis of the program is definitely into QoI. Move to OPEN. Need to talk to Core about this.

Proposed resolution:


750. The current definition for is_convertible requires that the type be implicitly convertible, so explicit constructors are ignored.

Section: 20.4.5 [meta.rel] Status: Open Submitter: Alisdair Meredith Date: 2007-10-10

View all issues with Open status.

Discussion:

With the pending arrival of explicit conversion functions, though, I'm wondering if we want an additional trait, is_explicitly_convertible?

[ Bellevue: ]

Alisdair is considering preparing a paper listing a number of missing type traits, and feels that it might be useful to handle them all together rather than piecemeal. This would affect issue 719 and 750. These two issues should move to OPEN pending AM paper on type traits.

Proposed resolution:


751. change pass-by-reference members of vector<bool> to pass-by-value?

Section: 23.2.7 [vector.bool] Status: New Submitter: Alisdair Meredith Date: 2007-10-10

View other active issues in [vector.bool].

View all other issues in [vector.bool].

View all issues with New status.

Discussion:

A number of vector<bool> members take const bool& as arguments. Is there any chance we could change them to pass-by-value, or would I be wasting everyone's time if I wrote up an issue?

[ post Bellevue: ]

As we understand it, the original requester (Martin Sebor) would like for implementations to be permitted to pass-by-value. Alisdair suggests that if this is to be resolved, it should be resolved more generally, e.g. in other containers as well.

We note that this would break ABI. However, we also suspect that this might be covered under the "as-if" rule in section 1.9.

Many in the group feel that for vector<bool>, this is a "don't care", and that at this point in the process it's not worth the bandwidth.

Issue 679 -- which was in ready status pre-Bellevue and is now in the working paper -- is related to this, though not a duplicate.

Moving to Open with a task for Alisdair to craft an informative note to be put wherever appropriate in the WP. This note would clarify places where pass-by-const-ref can be transformed to pass-by-value under the as-if rule.

Proposed resolution:


752. Allocator complexity requirement

Section: 20.1.2 [allocator.requirements] Status: Review Submitter: Hans Boehm Date: 2007-10-11

View other active issues in [allocator.requirements].

View all other issues in [allocator.requirements].

View all issues with Review status.

Discussion:

Did LWG recently discuss 20.1.2 [allocator.requirements]-2, which states that "All the operations on the allocators are expected to be amortized constant time."?

As I think I pointed out earlier, this is currently fiction for allocate() if it has to obtain memory from the OS, and it's unclear to me how to interpret this for construct() and destroy() if they deal with large objects. Would it be controversial to officially let these take time linear in the size of the object, as they already do in real life?

Allocate() more blatantly takes time proportional to the size of the object if you mix in GC. But it's not really a new problem, and I think we'd be confusing things by leaving the bogus requirements there. The current requirement on allocate() is generally not important anyway, since it takes O(size) to construct objects in the resulting space. There are real performance issues here, but they're all concerned with the constants, not the asymptotic complexity.

Proposed resolution:

Change 20.1.2 [allocator.requirements]/2:

-2- Table 39 describes the requirements on types manipulated through allocators. All the operations on the allocators are expected to be amortized constant time. Table 40 describes the requirements on allocator types.


753. Move constructor in draft

Section: 20.1.1 [utility.arg.requirements] Status: Open Submitter: Yechezkel Mett Date: 2007-10-14

View other active issues in [utility.arg.requirements].

View all other issues in [utility.arg.requirements].

View all issues with Open status.

Discussion:

The draft standard n2369 uses the term move constructor in a few places, but doesn't seem to define it.

MoveConstructible requirements are defined in Table 33 in 20.1.1 [utility.arg.requirements] as follows:

MoveConstructible requirements
expression: T t = rv
post-condition: t is equivalent to the value of rv before the construction
[Note: There is no requirement on the value of rv after the construction. -- end note]

(where rv is a non-const rvalue of type T).

So I assume the move constructor is the constructor that would be used in filling the above requirement.

For vector::reserve, vector::resize and the vector modifiers given in 23.2.6.4 [vector.modifiers] we have

Requires: If value_type has a move constructor, that constructor shall not throw any exceptions.

Firstly "If value_type has a move constructor" is superfluous; every type which can be put into a vector has a move constructor (a copy constructor is also a move constructor). Secondly it means that for any value_type which has a throwing copy constructor and no other move constructor these functions cannot be used -- which I think will come as a shock to people who have been using such types in vector until now!

I can see two ways to correct this. The simpler, which is presumably what was intended, is to say "If value_type has a move constructor and no copy constructor, the move constructor shall not throw any exceptions" or "If value_type has a move constructor which changes the value of its parameter,".

The other alternative is add to MoveConstructible the requirement that the expression does not throw. This would mean that not every type that satisfies the CopyConstructible requirements also satisfies the MoveConstructible requirements. It would mean changing requirements in various places in the draft to allow either MoveConstructible or CopyConstructible, but I think the result would be clearer and possibly more concise too.

Proposed resolution:

Add new definitions to 17.1 [definitions]:

move constructor

a constructor which accepts only rvalue arguments of that type, and modifies the rvalue as a side effect during the construction.

move assignment operator

an assignment operator which accepts only rvalue arguments of that type, and modifies the rvalue as a side effect during the assignment.

move assignment

use of the move assignment operator.

[ Howard adds post-Bellevue: ]

Unfortunately I believe the wording recommended by the LWG in Bellevue is incorrect. reserve et. al. will use a move constructor if one is available, else it will use a copy constructor. A type may have both. If the move constructor is used, it must not throw. If the copy constructor is used, it can throw. The sentence in the proposed wording is correct without the recommended insertion. The Bellevue LWG recommended moving this issue to Ready. I am unfortunately pulling it back to Open. But I'm drafting wording to atone for this egregious action. :-)


758. shared_ptr and nullptr

Section: 20.6.12.2 [util.smartptr.shared] Status: Review Submitter: Joe Gottman Date: 2007-10-31

View other active issues in [util.smartptr.shared].

View all other issues in [util.smartptr.shared].

View all issues with Review status.

Discussion:

Consider the following program:

int main() {
   shared_ptr<int> p(nullptr); 
   return 0;
}

This program will fail to compile because shared_ptr uses the following template constructor to construct itself from pointers:

template <class Y> shared_ptr(Y *);

According to N2431, the conversion from nullptr_t to Y * is not deducible, so the above constructor will not be found. There are similar problems with the constructors that take a pointer and a deleter or a pointer, a deleter and an allocator, as well as the corresponding forms of reset(). Note that N2435 will solve this problem for constructing from just nullptr, but not for constructors that use deleters or allocators or for reset().

In the case of the functions that take deleters, there is the additional question of what argument should be passed to the deleter when it is eventually called. There are two reasonable possibilities: nullptr or static_cast<T *>(0), where T is the template argument of the shared_ptr. It is not immediately clear which of these is better. If D::operator() is a template function similar to shared_ptr's constructor, then d(static_cast<T*>(0)) will compile and d(nullptr) will not. On the other hand, if D::operator() takes a parameter that is a pointer to some type other than T (for instance U*, where U derives from T), then d(nullptr) will compile and d(static_cast<T *>(0)) may not.
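
A hypothetical minimal smart pointer (not std::shared_ptr) showing the deduction failure described above: the template parameter Y cannot be deduced from nullptr.

struct toy_ptr {
    template <class Y> toy_ptr(Y* p) { (void)p; }   // Y is not deducible from nullptr_t
};

int main() {
    // toy_ptr a(nullptr);                  // error: no matching constructor
    toy_ptr b(static_cast<int*>(nullptr));  // ok once a pointer type is supplied
    (void)b;
}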

[ Bellevue: ]

The general idea is right, we need to be able to pass a nullptr to a shared_ptr, but there are a few borderline editorial issues here. (For example, the single-argument nullptr_t constructor in the class synopsis isn't marked explicit, but it is marked explicit in the proposed wording for 20.6.6.2.1. There is a missing empty parenthesis in the form that takes a nullptr_t, a deleter, and an allocator.)

More seriously: this issue says that a shared_ptr constructed from a nullptr is empty. Since "empty" is undefined, it's hard to know whether that's right. This issue is pending on handling that term better.

Peter suggests definition of empty should be "does not own anything"

Is there an editorial issue that post-conditions should refer to get() = nullptr, rather than get() = 0?

No strong feeling towards accept or NAD, but prefer to make a decision than leave it open.

Seems there are no technical merits between NAD and Ready; it comes down to "Do we intentionally want to allow/disallow null pointers with these functions". Straw Poll - support null pointers 5 - no null pointers 0

Move to Ready, modulo editorial comments

[ post Bellevue Peter adds: ]

The following wording changes are less intrusive:

In 20.6.12.2.1 [util.smartptr.shared.const], add:

shared_ptr(nullptr_t);

after:

shared_ptr();

(Absence of explicit intentional.)

px.reset( nullptr ) seems a somewhat contrived way to write px.reset(), so I'm not convinced of its utility.

It's similarly not clear to me whether the deleter constructors need to be extended to take nullptr, but if they need to:

Add

template<class D> shared_ptr(nullptr_t p, D d);
template<class D, class A> shared_ptr(nullptr_t p, D d, A a);

after

template<class Y, class D> shared_ptr(Y* p, D d);
template<class Y, class D, class A> shared_ptr(Y* p, D d, A a);

Note that this changes the semantics of the new constructors such that they consistently call d(p) instead of d((T*)0) when p is nullptr.

The ability to be able to pass 0/NULL to a function that takes a shared_ptr has repeatedly been requested by users, but the other additions that the proposed resolution makes are not supported by real world demand or motivating examples.

It might be useful to split the obvious and non-controversial nullptr_t constructor into a separate issue. Waiting for "empty" to be clarified is unnecessary; this is effectively an alias for the default constructor.

[ Sophia Antipolis: ]

We want to remove the reset functions from the proposed resolution.

The remaining proposed resolution text (addressing the constructors) are wanted.

Disposition: move to review. The review should check the wording in the then-current working draft.

Proposed resolution:

Add the following constructors to 20.6.12.2 [util.smartptr.shared]:

shared_ptr(nullptr_t);
template <class D> shared_ptr(nullptr_t, D d);
template <class D, class A> shared_ptr(nullptr_t, D d, A a);

Add the following constructor definitions to 20.6.12.2.1 [util.smartptr.shared.const]:

 explicit shared_ptr(nullptr_t);

Effects: Constructs an empty shared_ptr object.

Postconditions: use_count() == 0 && get() == 0.

Throws: nothing.

template <class D> shared_ptr(nullptr_t, D d);
template <class D, class A> shared_ptr(nullptr_t, D d, A a);

Requires: D shall be CopyConstructible. The copy constructor and destructor of D shall not throw exceptions. The expression d(static_cast<T *>(0)) shall be well-formed, shall have well defined behavior, and shall not throw exceptions. A shall be an allocator (20.1.2 [allocator.requirements]). The copy constructor and destructor of A shall not throw exceptions.

Effects: Constructs a shared_ptr object that owns a null pointer of type T * and deleter d. The second constructor shall use a copy of a to allocate memory for internal use.

Postconditions: use_count() == 1 and get() == 0.

Throws: bad_alloc, or an implementation-defined exception when a resource other than memory could not be obtained.

Exception safety: If an exception is thrown, d(static_cast<T *>(nullptr)) is called.


760. The emplace issue

Section: 23.1 [container.requirements] Status: Open Submitter: Paolo Carlini Date: 2007-11-11

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with Open status.

Discussion:

In an emplace member function the function parameter pack may be bound to an a priori unlimited number of objects: some or all of them can be elements of the container itself. Apparently, in order to conform to the blanket statement 23.1 [container.requirements]/11, the implementation must check all of them for that possibility. A possible solution can involve extending the exception in 23.1 [container.requirements]/12 also to the emplace member. As a side note, the push_back and push_front member functions are luckily not affected by this problem and can be efficiently implemented anyway.
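
For concreteness, the kind of call at issue (illustrative only; under the wording proposed below such calls would be excluded, with no diagnostic required):

#include <string>
#include <vector>

int main() {
    std::vector<std::string> v(3, std::string(40, 'x'));
    // The pack arguments below alias an element, respectively a sub-object of an
    // element, of v itself -- exactly the case paragraph 13 below would exclude.
    v.emplace(v.begin(), v[2]);
    v.emplace(v.begin(), v[0].c_str());
}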

[ Related to 767 ]

[ Bellevue: ]

The proposed addition (13) is partially redundant with the existing paragraph 12. Why was the qualifier "rvalues" added to paragraph 12? Why does it not cover subelements and pointers?

Resolution: Alan Talbot to rework language, then set state to Review.

Proposed resolution:

Add after 23.1 [container.requirements]/12:

-12- Objects passed to member functions of a container as rvalue references shall not be elements of that container. No diagnostic required.

-13- Objects bound to the function parameter pack of the emplace member function shall not be elements or sub-objects of elements of the container. No diagnostic required.


762. std::unique_ptr requires complete type?

Section: 20.6.11 [unique.ptr] Status: Ready Submitter: Daniel Krügler Date: 2007-11-30

View all other issues in [unique.ptr].

View all issues with Ready status.

Discussion:

In contrast to the proposed std::shared_ptr, std::unique_ptr currently does not support incomplete types, because it gives no explicit grant - thus instantiating unique_ptr with an incomplete pointee type T automatically results in undefined behaviour according to 17.4.3.7 [res.on.functions]/2, last bullet. This is an unnecessary restriction and prevents many well-established patterns - like the bridge pattern - for std::unique_ptr.
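
For illustration, the bridge/pimpl pattern mentioned above, which relies on unique_ptr accepting an incomplete element type at the point of the member declaration (hypothetical class names):

#include <memory>

// widget.h -- impl is incomplete here, so unique_ptr<impl> must tolerate it.
class widget {
public:
    widget();
    ~widget();                        // defined below, where impl is complete
private:
    struct impl;
    std::unique_ptr<impl> pimpl_;
};

// widget.cpp
struct widget::impl { int state; };
widget::widget() : pimpl_(new impl()) {}
widget::~widget() {}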

[ Bellevue: ]

Move to open. The LWG is comfortable with the intent of allowing incomplete types and making unique_ptr more like shared_ptr, but we are not comfortable with the wording. The specification for unique_ptr should be more like that of shared_ptr. We need to know, for individual member functions, which ones require their types to be complete. The shared_ptr specification is careful to say that for each function, and we need the same level of care here. We also aren't comfortable with the "part of the operational semantic" language; it's not used elsewhere in the standard, and it's not clear what it means. We need a volunteer to produce new wording.

Proposed resolution:

The proposed changes in the following revision refer to the current state of N2521, including the assumption that 20.6.11.4 [unique.ptr.compiletime] will be removed according to the current state of 740.

The specialization unique_ptr<T[]> has some more restrictive constraints on the type-completeness of T than unique_ptr<T>. The following proposed wording tries to cope with that. If the committee sees less usefulness in relaxed constraints on unique_ptr<T[]>, the alternative would be to stop this relaxation, e.g. by adding one further bullet to 20.6.11.3 [unique.ptr.runtime]/1: "T shall be a complete type, if used as template argument of unique_ptr<T[], D>".

This issue has some overlap with 673, but it seems not to cause any problems with this one, because 673 adds only optional requirements on D that do not conflict with the ones discussed here, provided that D::pointer's operations (including default construction, copy construction/assignment, and pointer conversion) are specified not to throw; otherwise this would have an impact on the current specification of unique_ptr.

  1. In 20.6.11 [unique.ptr]/2 add as the last sentence to the existing para:

    The unique_ptr provides a semantics of strict ownership. A unique_ptr owns the object it holds a pointer to. A unique_ptr is not CopyConstructible, nor CopyAssignable, however it is MoveConstructible and MoveAssignable. The template parameter T of unique_ptr may be an incomplete type. [ Note: The uses of unique_ptr include providing exception safety for dynamically allocated memory, passing ownership of dynamically allocated memory to a function, and returning dynamically allocated memory from a function. -- end note ]
  2. 20.6.11.2.1 [unique.ptr.single.ctor]/1: No changes necessary.

    [ N.B.: We only need the requirement that D is DefaultConstructible. The current wording says just this. ]

  3. In 20.6.11.2.1 [unique.ptr.single.ctor]/5 change the requires clause to say:

    Requires: The expression D()(p) shall be well formed. The default constructor of D shall not throw an exception. D must not be a reference type. D shall be default constructible, and that construction shall not throw an exception.

    [ N.B.: There is no need that the expression D()(p) is well-formed at this point. I assume that the current wording is based on the corresponding shared_ptr wording. In case of shared_ptr this requirement is necessary, because the corresponding c'tor *can* fail and must invoke delete p/d(p) in this case. Unique_ptr is simpler in this regard. The *only* functions that must insist on well-formedness and well-definedness of the expression get_deleter()(get()) are (1) the destructor and (2) reset. The reasoning for the wording change to explicitly require DefaultConstructible of D is to guarantee that invocation of D's default c'tor is both well-formed and well-defined. Note also that we do *not* need the requirement that T must be complete, again in contrast to shared_ptr. Shared_ptr needs this, because its c'tor is a template c'tor which potentially requires Convertible<Y*, X*>, which again requires completeness of Y, if !SameType<X, Y> ]

  4. Merge 20.6.11.2.1 [unique.ptr.single.ctor]/12+13 thereby removing the sentence of 12, but transferring the "requires" to 13:

    Requires: If D is not an lvalue-reference type then[..]

    [ N.B.: For the same reasons as for (3), there is no need that d(p) is well-formed/well-defined at this point. The current wording guarantees all what we need, namely that the initialization of both the T* pointer and the D deleter are well-formed and well-defined. ]

  5. 20.6.11.2.1 [unique.ptr.single.ctor]/17: No changes necessary.
  6. 20.6.11.2.1 [unique.ptr.single.ctor]/21:

    Requires: If D is not a reference type, construction of the deleter D from an rvalue of type E shall be well formed and shall not throw an exception. If D is a reference type, then E shall be the same type as D (diagnostic required). U* shall be implicitly convertible to T*. [Note: These requirements imply that T and U be complete types. -- end note]

    [ N.B.: The current wording of 21 already implicitly guarantees that U is completely defined, because it requires that Convertible<U*, T*> is true. If the committee wishes this explicit requirement can be added, e.g. "U shall be a complete type." ]

  7. 20.6.11.2.2 [unique.ptr.single.dtor]: Just before p1 add a new paragraph:

    Requires: The expression get_deleter()(get()) shall be well-formed, shall have well-defined behavior, and shall not throw exceptions. [Note: The use of default_delete requires T to be a complete type. -- end note]

    [ N.B.: This requirement ensures that the whole responsibility on type-completeness of T is delegated to this expression. ]

  8. 20.6.11.2.3 [unique.ptr.single.asgn]/1: No changes necessary, except the current editorial issue, that "must shall" has to be changed to "shall", but this change is not a special part of this resolution.

    [ N.B. The current wording is sufficient, because we can delegate all further requirements on the requirements of the effects clause ]

  9. 20.6.11.2.3 [unique.ptr.single.asgn]/6:

    Requires: Assignment of the deleter D from an rvalue D shall not throw an exception. U* shall be implicitly convertible to T*. [Note: These requirements imply that T and U be complete types. -- end note]

    [ N.B.: The current wording of p. 6 already implicitly guarantees that U is completely defined, because it requires that Convertible<U*, T*> is true, see (6)+(8). ]

  10. 20.6.11.2.3 [unique.ptr.single.asgn]/11: No changes necessary.

    [ N.B.: Delegation to requirements of effects clause is sufficient. ]

  11. 20.6.11.2.4 [unique.ptr.single.observers]/1+4+7+9+11:
  12. T* operator->() const;
    [ Note: Use typically requires that T be complete. -- end note ]
  13. 20.6.11.2.5 [unique.ptr.single.modifiers]/1: No changes necessary.
  14. 20.6.11.2.5 [unique.ptr.single.modifiers]/4: Just before p. 4 add a new paragraph:

    Requires: The expression get_deleter()(get()) shall be well-formed, shall have well-defined behavior, and shall not throw exceptions.
  15. 20.6.11.2.5 [unique.ptr.single.modifiers]/7: No changes necessary.
  16. 20.6.11.3 [unique.ptr.runtime]: Add one additional bullet on paragraph 1:

    A specialization for array types is provided with a slightly altered interface.

    • ...
    • T shall be a complete type.

[ post Bellevue: Daniel provided revised wording. ]


765. more on iterator validity

Section: 24.1 [iterator.requirements] Status: New Submitter: Martin Sebor Date: 2007-12-14

View other active issues in [iterator.requirements].

View all other issues in [iterator.requirements].

View all issues with New status.

Discussion:

Issue 278 defines the meaning of the term "invalid iterator" as one that may be singular.

Consider the following code:

   std::deque<int> x, y;
   std::deque<int>::iterator i = x.end(), j = y.end();
   x.swap(y);
       

Given that swap() is required not to invalidate iterators and using the definition above, what should be the expected result of comparing i and j to x.end() and y.end(), respectively, after the swap()?

I.e., is the expression below required to evaluate to true?

   i == y.end() && j == x.end()
       

(There are at least two implementations where the expression returns false.)

More generally, is the definition introduced in issue 278 meant to make any guarantees about whether iterators actually point to the same elements and are associated with the same containers after a non-invalidating operation as they did before?

Here's a motivating example intended to demonstrate the importance of the question:

   Container x, y ({ 1, 2});   // pseudocode to initialize y with { 1, 2 }
   Container::iterator i = y.begin() + 1;
   Container::iterator j = y.end();
   std::swap(x, y);
   std::find(i, j, 3);
       

swap() guarantees that i and j continue to be valid. Unless the spec says that, even though they are valid, they may no longer denote a valid range, the code above must be well-defined. Expert opinions on this differ, as does the behavior of popular implementations for some standard Containers.

Proposed resolution:


769. std::function should use nullptr_t instead of "unspecified-null-pointer-type"

Section: 20.5.15.2 [func.wrap.func] Status: Ready Submitter: Daniel Krügler Date: 2008-01-10

View all issues with Ready status.

Discussion:

N2461 already replaced in 20.5.15.2 [func.wrap.func] its originally proposed (implicit) conversion operator to "unspecified-bool-type" by the new explicit bool conversion, but the inverse conversion should also use the new std::nullptr_t type instead of "unspecified-null-pointer-type".

Proposed resolution:

In 20.5 [function.objects], header <functional> synopsis replace:

template<class R, class... ArgTypes>
  bool operator==(const function<R(ArgTypes...)>&, unspecified-null-pointer-type nullptr_t);
template<class R, class... ArgTypes>
  bool operator==(unspecified-null-pointer-type nullptr_t , const function<R(ArgTypes...)>&);
template<class R, class... ArgTypes>
  bool operator!=(const function<R(ArgTypes...)>&, unspecified-null-pointer-type nullptr_t);
template<class R, class... ArgTypes>
  bool operator!=(unspecified-null-pointer-type nullptr_t , const function<R(ArgTypes...)>&);

In the class function synopsis of 20.5.15.2 [func.wrap.func] replace

function(unspecified-null-pointer-type nullptr_t);
...
function& operator=(unspecified-null-pointer-type nullptr_t);

In 20.5.15.2 [func.wrap.func], "Null pointer comparisons" replace:

template <class R, class... ArgTypes>
  bool operator==(const function<R(ArgTypes...)>&, unspecified-null-pointer-type nullptr_t);
template <class R, class... ArgTypes>
  bool operator==(unspecified-null-pointer-type nullptr_t , const function<R(ArgTypes...)>&);
template <class R, class... ArgTypes>
  bool operator!=(const function<R(ArgTypes...)>&, unspecified-null-pointer-type nullptr_t);
template <class R, class... ArgTypes>
  bool operator!=(unspecified-null-pointer-type nullptr_t , const function<R(ArgTypes...)>&);

In 20.5.15.2.1 [func.wrap.func.con], replace

function(unspecified-null-pointer-type nullptr_t);
...
function& operator=(unspecified-null-pointer-type nullptr_t);

In 20.5.15.2.6 [func.wrap.func.nullptr], replace

template <class R, class... ArgTypes>
  bool operator==(const function<R(ArgTypes...)>& f, unspecified-null-pointer-type nullptr_t);
template <class R, class... ArgTypes>
  bool operator==(unspecified-null-pointer-type nullptr_t , const function<R(ArgTypes...)>& f);

and replace

template <class R, class... ArgTypes>
  bool operator!=(const function<R(ArgTypes...)>& f, unspecified-null-pointer-type nullptr_t);
template <class R, class... ArgTypes>
  bool operator!=(unspecified-null-pointer-type nullptr_t , const function<R(ArgTypes...)>& f);

771. Impossible throws clause in [string.conversions]

Section: 21.4 [string.conversions] Status: Ready Submitter: Daniel Krügler Date: 2008-01-13

View other active issues in [string.conversions].

View all other issues in [string.conversions].

View all issues with Ready status.

Discussion:

The new to_string and to_wstring functions described in 21.4 [string.conversions] have throws clauses (paragraphs 8 and 16) which say:

Throws: nothing

Since all overloads return either a std::string or a std::wstring by value, this throws clause is impossible to realize in general, since the basic_string constructors can fail due to out-of-memory conditions. Either these throws clauses should be removed or they should be made more detailed, e.g.:

Throws: Nothing if the string construction throws nothing

Further there is an editorial issue in p. 14: All three to_wstring overloads return a string, which should be wstring instead (The header <string> synopsis of 21.2 [string.classes] is correct in this regard).

Proposed resolution:

In 21.4 [string.conversions], remove the paragraphs 8 and 16.

string to_string(long long val); 
string to_string(unsigned long long val); 
string to_string(long double val); 
Throws: nothing
wstring to_wstring(long long val); 
wstring to_wstring(unsigned long long val); 
wstring to_wstring(long double val); 
Throws: nothing

772. Impossible return clause in [string.conversions]

Section: 21.4 [string.conversions] Status: Ready Submitter: Daniel Krügler Date: 2008-01-13

View other active issues in [string.conversions].

View all other issues in [string.conversions].

View all issues with Ready status.

Discussion:

The return clause 21.4 [string.conversions]/paragraph 15 of the new to_wstring overloads says:

Returns: each function returns a wstring object holding the character representation of the value of its argument that would be generated by calling wsprintf(buf, fmt, val) with a format specifier of L"%lld", L"%ulld", or L"%f", respectively.

The problem is that there does not exist any wsprintf function in C99 (I checked the 2nd edition of ISO 9899, and the first and the second corrigenda from 2001-09-01 and 2004-11-15). What is probably meant here is the function swprintf from <wchar.h>/<cwchar>, but this has the non-equivalent declaration:

int swprintf(wchar_t * restrict s, size_t n,
const wchar_t * restrict format, ...);

therefore the paragraph needs to mention the size_t parameter n.
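
A minimal sketch of the intended call, where buf, bufsz, and val are hypothetical names for the internal buffer, its size, and the value being converted:

wchar_t buf[64];
const std::size_t bufsz = sizeof(buf) / sizeof(buf[0]);
long long val = 42;
std::swprintf(buf, bufsz, L"%lld", val);   // note the mandatory size argument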

Proposed resolution:

Change the current wording of 21.4 [string.conversions]/p. 15 to:

Returns: eEach function returns a wstring object holding the character representation of the value of its argument that would be generated by calling wsswprintf(buf, bufsz, fmt, val) with a format specifier fmt of L"%lld", L"%ulld", or L"%f", respectively, where buf designates an internal character buffer of sufficient size bufsz.

[Hint to the editor: The resolution also adds the name "fmt" for the format specifier]

I would also like to remark that the current wording of its equivalent paragraph 7 should also mention the meaning of buf and fmt.

Change the current wording of 21.4 [string.conversions]/p. 7 to:

Returns: eEach function returns a string object holding the character representation of the value of its argument that would be generated by calling sprintf(buf, fmt, val) with a format specifier fmt of "%lld", "%ulld", or "%f", respectively, where buf designates an internal character buffer of sufficient size.

774. Member swap undefined for most containers

Section: 23 [containers] Status: Open Submitter: Alisdair Meredith Date: 2008-01-14

View other active issues in [containers].

View all other issues in [containers].

View all issues with Open status.

Discussion:

It appears most containers declare but do not define a member-swap function.

This is unfortunate, as all of them overload the swap algorithm to call the member-swap function! (required for the Swappable guarantees [Table 37] and the Container Requirements [Table 87])

Note in particular that Table 87 gives semantics of a.swap(b) as swap(a,b), yet for all containers we define swap(a,b) to call a.swap(b) - a circular definition.

A quick survey of clause 23 shows that the following containers provide a definition for member-swap:

array
queue
stack
vector

Whereas the following declare it, but do not define the semantics:

deque
list
map
multimap
multiset
priority_queue
set
unordered_map
unordered_multi_map
unordered_multi_set
unordered_set

Suggested resolution:

Provide a definition for each of the affected containers...

[ Bellevue: ]

Move to Open and ask Alisdair to provide wording.

Proposed resolution:

Wording provided in N2590.


776. Undescribed assign function of std::array

Section: 23.2.1 [array] Status: Ready Submitter: Daniel Krügler Date: 2008-01-20

View other active issues in [array].

View all other issues in [array].

View all issues with Ready status.

Discussion:

The class template array synopsis in 23.2.1 [array]/3 declares a member function

void assign(const T& u);

whose semantics are nowhere described. Since this signature is not part of the container requirements, such semantics cannot be derived from them.

I found only one reference to this function in the issue list, 588 where the question is raised:

what's the effect of calling assign(T&) on a zero-sized array?

which does not answer the basic question of this issue.

If this function shall be part of std::array, its semantics should probably correspond to those of boost::array, but of course such wording must be added.
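
A usage sketch of the intended semantics, using the name fill adopted in the proposed resolution below:

std::array<int, 3> a = { { 1, 2, 3 } };
a.fill(7);   // equivalent to fill_n(a.begin(), 3, 7); a now holds {7, 7, 7}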

Proposed resolution:

Just after the section 23.2.1.4 [array.data] add the following new section:

23.2.1.5 array::fill [array.fill]

void fill(const T& u);

1: Effects: fill_n(begin(), N, u)

[N.B.: I wonder why class array does not have a "modifiers" section. If it had one, then assign would naturally belong to it]

Change the synopsis in 23.2.1 [array]/3:

template <class T, size_t N>
struct array { 
  ...
  void assign fill(const T& u);
  ...

[ Bellevue: ]

Suggest substituting "fill" instead of "assign".

Set state to Review given substitution of "fill" for "assign".


779. Resolution of #283 incomplete

Section: 25.2.8 [alg.remove] Status: Ready Submitter: Daniel Krügler Date: 2008-01-25

View all other issues in [alg.remove].

View all issues with Ready status.

Discussion:

The resolution of 283 did not resolve similar necessary changes for algorithm remove_copy[_if], which seems to be an oversight.

Proposed resolution:

In 25.2.8 [alg.remove]/p.6, replace the N2461 requires clause with:

Requires: Type T is EqualityComparable (31). The ranges [first,last) and [result,result + (last - first)) shall not overlap. The expression *result = *first shall be valid.

780. std::merge() specification incorrect/insufficient

Section: 25.3.4 [alg.merge] Status: New Submitter: Daniel Krügler Date: 2008-01-25

View all issues with New status.

Discussion:

Though issue 283 has fixed many open issues, it seems that some are still open:

Both 25.3.4 [lib.alg.merge] in 14882:2003 and 25.3.4 [alg.merge] in N2461 have no Requires element and the Effects element contains some requirements, which is probably editorial. Worse is that:

Proposed resolution:

In 25.3.4 [alg.merge] replace p.1+ 2:

Effects: Merges Copies all the elements of the two sorted ranges [first1,last1) and [first2,last2) into the range [result,result + (last1 - first1) + (last2 - first2)) [result, last) (where last is equal to result + (last1 - first1) + (last2 - first2)), such that resulting range will be sorted in non-decreasing order; that is, for every iterator i in [result,last) other than result, the condition *i < *(i - 1) or, respectively, comp(*i, *(i - 1)) will be false.

Requires: The resulting range shall not overlap with either of the original ranges. The list will be sorted in non-decreasing order according to the ordering defined by comp; that is, for every iterator i in [first,last) other than first, the condition *i < *(i - 1) or comp(*i, *(i - 1)) will be false. The results of the expressions *first1 and *first2 shall be writable to the output iterator.

[N.B.: I attempted to reuse the wording style of inplace_merge, therefore proposing to insert ", respectively," between both predicate tests. This is not strictly necessary, as other parts of <algorithm> show; it is just a matter of consistency]


785. Random Number Requirements in TR1

Section: TR1 5.1.4.5 [tr.rand.eng.disc], TR1 5.1.4.6 [tr.rand.eng.xor] Status: New Submitter: John Maddock Date: 2008-01-15

View all issues with New status.

Discussion:

Table 16 of TR1 requires that all Pseudo Random Number generators have a

seed(integer-type s)

member function that is equivalent to:

mygen = Generator(s)

But the generators xor_combine and discard_block have no such seed member, only the

template <class Gen>
seed(Gen&);

member, which will not accept an integer literal as an argument: something that appears to violate the intent of Table 16.

So... is this a bug in TR1?

This is a real issue BTW, since the Boost implementation does adhere to the requirements of Table 16, while at least one commercial implementation does not and follows a strict adherence to sections 5.1.4.5 and 5.1.4.6 instead.

[ Jens adds: ]

Both engines do have the necessary constructor, therefore the omission of the seed() member functions appears to be an oversight.

Proposed resolution:


787. complexity of binary_search

Section: 25.3.3.4 [binary.search] Status: Ready Submitter: Daniel Krügler Date: 2007-09-08

View all issues with Ready status.

Discussion:

In 25.3.3.4 [binary.search]/3 the complexity of binary_search is described as

At most log(last - first) + 2 comparisons.

This should be made more precise and brought in line with the nomenclature used for lower_bound, upper_bound, and equal_range.

All existing libraries I'm aware of delegate to lower_bound (+ one further comparison). Since issue 384 now has WP status, the resolution of #787 should be brought in line with 384 by changing the + 2 to + O(1).

[ Sophia Antipolis: ]

Alisdair prefers to apply an upper bound instead of O(1), but that would require fixing for lower_bound, upper_bound etc. as well. If he really cares about it, he'll send an issue to Howard.

Proposed resolution:

Change 25.3.3.4 [binary.search]/3

Complexity: At most log2(last - first) + 2 O(1) comparisons.

788. ambiguity in [istream.iterator]

Section: 24.5.1 [istream.iterator] Status: New Submitter: Martin Sebor Date: 2008-02-06

View other active issues in [istream.iterator].

View all other issues in [istream.iterator].

View all issues with New status.

Discussion:

The description of how an istream_iterator object becomes an end-of-stream iterator is a) ambiguous and b) out of date WRT issue 468:

istream_iterator reads (using operator>>) successive elements from the input stream for which it was constructed. After it is constructed, and every time ++ is used, the iterator reads and stores a value of T. If the end of stream is reached (operator void*() on the stream returns false), the iterator becomes equal to the end-of-stream iterator value. The constructor with no arguments istream_iterator() always constructs an end of stream input iterator object, which is the only legitimate iterator to be used for the end condition. The result of operator* on an end of stream is not defined. For any other iterator value a const T& is returned. The result of operator-> on an end of stream is not defined. For any other iterator value a const T* is returned. It is impossible to store things into istream iterators. The main peculiarity of the istream iterators is the fact that ++ operators are not equality preserving, that is, i == j does not guarantee at all that ++i == ++j. Every time ++ is used a new value is read.

istream::operator void*() returns null if istream::fail() is true, otherwise non-null. istream::fail() returns true if failbit or badbit is set in rdstate(). Reaching the end of stream doesn't necessarily imply that failbit or badbit is set (e.g., after extracting an int from stringstream("123") the stream object will have reached the end of stream but fail() is false and operator void*() will return a non-null value).
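
A small illustration of this point (assumes <sstream> and <cassert>):

std::istringstream in("123");
int i;
in >> i;              // extraction succeeds and consumes the whole buffer
assert(!in.fail());   // neither failbit nor badbit is set
assert(in.eof());     // yet the end of the stream has been reached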

Also I would prefer to be explicit about calling fail() here (there is no operator void*() anymore.)

Proposed resolution:

Change 24.5.1 [istream.iterator]/1:

istream_iterator reads (using operator>>) successive elements from the input stream for which it was constructed. After it is constructed, and every time ++ is used, the iterator reads and stores a value of T. If the end of stream is reached the iterator fails to read and store a value of T (operator void*() fail() on the stream returns false true), the iterator becomes equal to the end-of-stream iterator value. The constructor with no arguments istream_iterator() always constructs an end of stream input iterator object, which is the only legitimate iterator to be used for the end condition. The result of operator* on an end of stream is not defined. For any other iterator value a const T& is returned. The result of operator-> on an end of stream is not defined. For any other iterator value a const T* is returned. It is impossible to store things into istream iterators. The main peculiarity of the istream iterators is the fact that ++ operators are not equality preserving, that is, i == j does not guarantee at all that ++i == ++j. Every time ++ is used a new value is read.

793. discrete_distribution missing constructor

Section: 26.4.8.5.1 [rand.dist.samp.discrete] Status: Open Submitter: P.J. Plauger Date: 2008-02-09

View all other issues in [rand.dist.samp.discrete].

View all issues with Open status.

Discussion:

discrete_distribution should have a constructor like:

template<class _Fn>
  discrete_distribution(result_type _Count, double _Low, double _High,
                        _Fn& _Func);

(Makes it easier to fill a histogram with function values over a range.)
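
A hypothetical use of the requested constructor (the signature is only the one suggested above, not existing wording):

double weight(double x) { return x * x; }             // sampled at the middle of each bin
// ...
discrete_distribution<int> d(10, 0.0, 1.0, weight);   // 10 bins over [0, 1)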

[ Bellevue: ]

How do you specify the function so that it does not return negative values? If you do, it is a bad construction. This requirement is already there. Where in each bin does one evaluate the function? In the middle. Need to revisit tomorrow.

[ Sophia Antipolis: ]

Bill is not requesting this.

Marc Paterno: _Fn cannot return negative values at the points where the function is sampled. It is sampled in the middle of each bin. _Fn cannot return 0 everywhere it is sampled.

Jens: lambda expressions are rvalues

Add a library issue to provide an initializer_list<double> constructor for discrete_distribution.

Marc Paterno: dislikes reference for _Fn parameter. Make it pass-by-value (to use lambda), use std::ref to wrap giant-state function objects.

Daniel: See random_shuffle, pass-by-rvalue-reference.

Daniel to draft wording.

Proposed resolution:


794. piecewise_constant_distribution missing constructor

Section: 26.4.8.5.2 [rand.dist.samp.pconst] Status: Open Submitter: P.J. Plauger Date: 2008-02-09

View all other issues in [rand.dist.samp.pconst].

View all issues with Open status.

Discussion:

piecewise_constant_distribution should have a constructor like:

template<class _Fn>
   piecewise_constant_distribution(size_t _Count,
            _Ty _Low, _Ty _High, _Fn& _Func);

(Makes it easier to fill a histogram with function values over a range. The two (reference 793) make a sensible replacement for general_pdf_distribution.)

[ Sophia Antipolis: ]

Marc: uses variable width of bins and weight for each bin. This is not giving enough flexibility to control both variables.

Add a library issue to provide a constructor taking an initializer_list<double> and _Fn for piecewise_constant_distribution.

Daniel to draft wording.

Proposed resolution:


800. Issues in 26.4.7.1 [rand.util.seedseq](6)

Section: 26.4.7.1 [rand.util.seedseq] Status: Open Submitter: Stephan Tolksdorf Date: 2008-02-18

View other active issues in [rand.util.seedseq].

View all other issues in [rand.util.seedseq].

View all issues with Open status.

Discussion:

The for-loop in the algorithm specification has n iterations, where n is defined to be end - begin, i.e. the number of supplied w-bit quantities. Previous versions of this algorithm and the general logic behind it suggest that this is an oversight and that in the context of the for-loop n should be the number of full 32-bit quantities in b (rounded upwards). If w is 64, the current algorithm throws away half of all bits in b. If w is 16, the current algorithm sets half of all elements in v to 0.

There are two more minor issues:

[ Bellevue: ]

Move to OPEN Bill will try to propose a resolution by the next meeting.

[ post Bellevue: Bill provided wording. ]

This issue is made moot if 803 is accepted.

Proposed resolution:

Replace 26.4.7.1 [rand.util.seedseq] paragraph 6 with:

Effects: Constructs a seed_seq object by effectively concatenating the low-order u bits of each of the elements of the supplied sequence [begin, end) in ascending order of significance to make a (possibly very large) unsigned binary number b having a total of n bits, and then carrying out the following algorithm:

for( v.clear(); n > 0; n -= 32 )
   v.push_back(b mod 2^32), b /= 2^32;

801. tuple and pair trivial members

Section: 20.3 [tuple] Status: Open Submitter: Lawrence Crowl Date: 2008-02-18

View other active issues in [tuple].

View all other issues in [tuple].

View all issues with Open status.

Discussion:

Classes with trivial special member functions are inherently more efficient than classes without such functions. This efficiency is particularly pronounced on modern ABIs that can pass small classes in registers. Examples include value classes such as complex numbers and floating-point intervals. Perhaps more important, though, are classes that are simple collections, like pair and tuple. When the parameter types of these classes are trivial, the pairs and tuples themselves can be trivial, leading to substantial performance wins.

The current working draft makes specification of trivial functions (where possible) much easier through defaulted and deleted functions. As long as the semantics of defaulted and deleted functions match the intended semantics, specification of defaulted and deleted functions will yield more efficient programs.

There are at least two cases where specification of an explicitly defaulted function may be desirable.

First, the std::pair template has a non-trivial default constructor, which prevents static initialization of the pair even when the types are statically initializable. Changing the definition to

pair() = default;

would enable such initialization. Unfortunately, the change is not semantically neutral in that the current definition effectively forces value initialization whereas the change would not value initialize in some contexts.
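
An illustrative sketch of the difference, assuming pair() = default; and trivial element types:

std::pair<int, int> p;        // today: first and second are value-initialized to 0;
                              // with the defaulted constructor they would be left indeterminate
std::pair<int, int> q = std::pair<int, int>();  // value-initialized (zero) in either case
static std::pair<int, int> s; // with the defaulted constructor this needs no dynamic
                              // initialization and can be statically zero-initialized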

** Does the committee confirm that forced value initialization was the intent? If not, does the committee wish to change the behavior of std::pair in C++0x?

Second, the same default constructor issue applies to std::tuple. Furthermore, the tuple copy constructor is currently non-trivial, which effectively prevents passing it in registers. To enable passing tuples in registers, the copy constructor should be made explicitly defaulted. The new declarations are:

tuple() = default;
tuple(const tuple&) = default;

This change is not implementation-neutral. In particular, it prevents implementations based on pointers to the parameter types. It does, however, permit implementations using the parameter types as bases.

** How does the committee wish to trade implementation efficiency versus implementation flexibility?

[ Bellevue: ]

General agreement; the first half of the issue is NAD.

Before voting on the second half, it was agreed that a "Strongly Favor" vote meant support for trivial tuples (assuming usual requirements met), even at the expense of other desired qualities. A "Weakly Favor" vote meant support only if not at the expense of other desired qualities.

Consensus: Go forward, but not at the expense of other desired qualities.

It was agreed that Alisdair should fold this work in with his other pair/tuple action items, above, and that issue 801 should be "open", but tabled until Alisdair's proposals are disposed of.

Proposed resolution:


803. Simplification of seed_seq::seed_seq

Section: 26.4.7.1 [rand.util.seedseq] Status: Review Submitter: Charles Karney Date: 2008-02-22

View other active issues in [rand.util.seedseq].

View all other issues in [rand.util.seedseq].

View all issues with Review status.

Discussion:

seed_seq(InputIterator begin, InputIterator end); constructs a seed_seq object repacking the bits of supplied sequence [begin, end) into a 32-bit vector.

This repacking triggers several problems:

  1. Distinctness of the output of seed_seq::generate required the introduction of the initial "if (w < 32) v.push_back(n);" (Otherwise the unsigned short vectors [1, 0] and [1] generate the same sequence.)
  2. Portability demanded the introduction of the template parameter u. (Otherwise some sequences could not be obtained on computers where no integer types are exactly 32-bits wide.)
  3. The description and algorithm have become unduly complicated.

I propose simplifying this seed_seq constructor to be "32-bit only". Despite being simpler, there is NO loss of functionality (see below).

Here's how the description would read

26.4.7.1 [rand.util.seedseq] Class seed_seq

template<class InputIterator>
  seed_seq(InputIterator begin, InputIterator end);

5 Requires: NO CHANGE

6 Effects: Constructs a seed_seq object by

for (InputIterator s = begin; s != end; ++s)
   v.push_back((*s) mod 2^32);

Discussion:

The chief virtues here are simplicity, portability, and generality.

Arguments (and counter-arguments) against making this change (and retaining the n2461 behavior) are:

Note: this proposal renders moot issues 782 and 800.

[ Bellevue: ]

Walter needs to ask Fermilab for guidance. Defer till tomorrow. Bill likes the proposed resolution.

[ Sophia Antipolis: ]

Marc Paterno wants portable behavior between 32bit and 64bit machines; we've gone to significant trouble to support portability of engines and their values.

Jens: the new algorithm looks perfectly portable

Marc Paterno to review off-line.

Modify the proposed resolution to read "Constructs a seed_seq object by the following algorithm ..."

Disposition: move to review; unanimous consent.

(moots 782 and 800)

Proposed resolution:

Change 26.4.7.1 [rand.util.seedseq]:

template<class InputIterator, 
  size_t u = numeric_limits<iterator_traits<InputIterator>::value_type>::digits>
  seed_seq(InputIterator begin, InputIterator end);

-5- Requires: InputIterator shall satisfy the requirements of an input iterator (24.1.1) such that iterator_traits<InputIterator>::value_type shall denote an integral type.

-6- Constructs a seed_seq object by the following algorithm rearranging some or all of the bits of the supplied sequence [begin,end) of w-bit quantities into 32-bit units, as if by the following:

First extract the rightmost u bits from each of the n = end - begin elements of the supplied sequence and concatenate all the extracted bits to initialize a single (possibly very large) unsigned binary number, b = Σ_{i=0}^{n-1} (begin[i] mod 2^u) · 2^(w·i) (in which the bits of each begin[i] are treated as denoting an unsigned quantity). Then carry out the following algorithm:


v.clear();
if (w < 32)
  v.push_back(n);
for( ; n > 0; --n)
  v.push_back(b mod 2^32), b /= 2^32;

for (InputIterator s = begin; s != end; ++s)
   v.push_back((*s) mod 2^32);

804. Some problems with classes error_code/error_condition

Section: 19.4 [syserr] Status: Review Submitter: Daniel Krügler Date: 2008-02-24

View other active issues in [syserr].

View all other issues in [syserr].

View all issues with Review status.

Discussion:

  1. 19.4.2.1 [syserr.errcode.overview]/1, class error_code and 19.4.3.1 [syserr.errcondition.overview]/, class error_condition synopses declare an expository data member cat_:

    const error_category& cat_; // exposition only
    

    which is used to define the semantics of several members. The decision to use a member of reference type led to several problems:

    1. The classes are not (Copy)Assignable, which is probably not the intent.
    2. The post conditions of all modifiers from 19.4.2.3 [syserr.errcode.modifiers] and 19.4.3.3 [syserr.errcondition.modifiers], resp., cannot be fulfilled.

    The simple fix would be to replace the reference by a pointer member; see the sketch after this list.

  2. I would like to give the editorial remark that in both classes the constrained operator= overload (template with ErrorCodeEnum argument) makes an invalid use of std::enable_if: by using the default value for the second enable_if parameter, the return type would be defined to be void& even in otherwise valid circumstances - this return type must be explicitly provided (in error_condition the first declaration uses an explicit value, but of the wrong type).
  3. The member function message throws clauses ( 19.4.1.2 [syserr.errcat.virtuals]/10, 19.4.2.4 [syserr.errcode.observers]/8, and 19.4.3.4 [syserr.errcondition.observers]/6) guarantee "throws nothing", although they return a std::string by value, which might throw in out-of-memory conditions (see related issue 771).
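
Regarding the reference member in point 1 above, here is a minimal sketch of the problem with an exposition-only reference data member (an illustrative class, not the library specification):

class error_code_like {
    int val_;
    const std::error_category& cat_;   // reference data member, as in the synopsis
public:
    error_code_like(int v, const std::error_category& c) : val_(v), cat_(c) {}
    // The implicitly declared copy assignment operator cannot be generated for a
    // class with a reference member, so the type is not CopyAssignable, and the
    // modifiers' postconditions (which rebind the category) cannot be met.
};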

[ Sophia Antipolis: ]

Part A: NAD (editorial), cleared by the resolution of issue 832.

Part B: Technically correct, save for typo. Rendered moot by the concept proposal (N2620) NAD (editorial).

Part C: We agree; this is consistent with the resolution of issue 721.

Howard: please ping Beman, asking him to clear away parts A and B from the wording in the proposed resolution, so it is clear to the editor what needs to be applied to the working paper.

Beman provided updated wording.

Proposed resolution:

In 19.4.1.2 [syserr.errcat.virtuals], remove the throws clause p. 10.

virtual string message(int ev) const = 0;

Returns: A string that describes the error condition denoted by ev.

Throws: Nothing.

In 19.4.2.4 [syserr.errcode.observers], remove the throws clause p. 8.

string message() const;

Returns: category().message(value()).

Throws: Nothing.

In 19.4.3.4 [syserr.errcondition.observers], remove the throws clause p. 6.

string message() const;

Returns: category().message(value()).

Throws: Nothing.


805. posix_error::posix_errno concerns

Section: 19.4 [syserr] Status: Ready Submitter: Jens Maurer Date: 2008-02-24

View other active issues in [syserr].

View all other issues in [syserr].

View all issues with Ready status.

Discussion:

19.4 [syserr]

namespace posix_error {
  enum posix_errno {
    address_family_not_supported, // EAFNOSUPPORT
    ...

should rather use the new scoped-enum facility (7.2 [dcl.enum]), which would avoid the necessity for a new posix_error namespace, if I understand correctly.
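
An illustrative use, with the enum named errc as in the proposed resolution below (assuming the conversions specified there):

error_condition ec = make_error_condition(errc::address_family_not_supported);
// A scoped enum needs no surrounding namespace, and its enumerators always
// require explicit qualification (errc::...).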

[ Further discussion: ]

See N2347, Strongly Typed Enums, since renamed Scoped Enums.

Alberto Ganesh Barbati also raised this issue in private email, and also proposed the scoped-enum solution.

Nick Stoughton asked in Bellevue that posix_error and posix_errno not be used as names. The LWG agreed.

The wording for the Proposed resolution was provided by Beman Dawes.

Proposed resolution:

Change System error support 19.4 [syserr] as indicated:

namespace posix_error {
  enum posix_errno class errc {
    address_family_not_supported, // EAFNOSUPPORT
    ...
    wrong_protocol_type, // EPROTOTYPE
  };
} // namespace posix_error

template <> struct is_error_condition_enum<posix_error::posix_errno errc>
  : public true_type {}

namespace posix_error {
  error_code make_error_code(posix_errno errc e);
  error_condition make_error_condition(posix_errno errc e);
} // namespace posix_error

Change System error support 19.4 [syserr] :

The is_error_code_enum and is_error_condition_enum templates may be specialized for user-defined types to indicate that such a type is eligible for class error_code and class error_condition automatic conversions, respectively.

Change System error support 19.4 [syserr] and its subsections:

Change Error category objects 19.4.1.5 [syserr.errcat.objects], paragraph 2:

Remarks: The object's default_error_condition and equivalent virtual functions shall behave as specified for the class error_category. The object's name virtual function shall return a pointer to the string "POSIX" "GENERIC".

Change 19.4.2.5 [syserr.errcode.nonmembers] Class error_code non-member functions as indicated:

error_code make_error_code(posix_errno errc e);
Returns: error_code(static_cast<int>(e), posixgeneric_category).

Change 19.4.3.5 [syserr.errcondition.nonmembers] Class error_condition non-member functions as indicated:

error_condition make_error_condition(posix_errno errc e);
Returns: error_condition(static_cast<int>(e), posixgeneric_category).

Rationale:

Names Considered
portable: Too non-specific. Did not wish to reserve such a common word in namespace std. Not quite the right meaning, either.
portable_error: Too long. Explicit qualification is always required for scoped enums, so a short name is desirable. Not quite the right meaning, either. May be misleading because *_error in the std lib is usually an exception class name.
std_error: Fairly short, yet explicit. But in fully qualified names like std::std_error::not_enough_memory, the std_ would be unfortunate. Not quite the right meaning, either. May be misleading because *_error in the std lib is usually an exception class name.
generic: Short enough. The category could be generic_category. Fully qualified names like std::generic::not_enough_memory read well. Reserving in namespace std seems dicey.
generic_error: Longish. The category could be generic_category. Fully qualified names like std::generic_error::not_enough_memory read well. Misleading because *_error in the std lib is usually an exception class name.
generic_err: A bit less longish. The category could be generic_category. Fully qualified names like std::generic_err::not_enough_memory read well.
gen_err: Shorter still. The category could be generic_category. Fully qualified names like std::gen_err::not_enough_memory read well.
generr: Shorter still. The category could be generic_category. Fully qualified names like std::generr::not_enough_memory read well.
error: Shorter still. The category could be generic_category. Fully qualified names like std::error::not_enough_memory read well. Do we want to use this general a name?
err: Shorter still. The category could be generic_category. Fully qualified names like std::err::not_enough_memory read well. Although alone it looks odd as a name, given the existing errno and namespace std names, it seems fairly intuitive. Problem: err is used throughout the standard library as an argument name and in examples as a variable name; it seems too confusing to add yet another use of the name.
errc: Short enough. The "c" stands for "constant". The category could be generic_category. Fully qualified names like std::errc::not_enough_memory read well. Although alone it looks odd as a name, given the existing errno and namespace std names, it seems fairly intuitive. There are no uses of errc in the current C++ standard.

806. unique_ptr::reset effects incorrect, too permissive

Section: 20.6.11.2.5 [unique.ptr.single.modifiers] Status: Ready Submitter: Peter Dimov Date: 2008-03-13

View all issues with Ready status.

Discussion:

void unique_ptr::reset(T* p = 0) is currently specified as:

Effects: If p == get() there are no effects. Otherwise get_deleter()(get()).

There are two problems with this. One, if get() == 0 and p != 0, the deleter is called with a NULL pointer, and this is probably not what's intended (the destructor avoids calling the deleter with 0.)

Two, the special check for get() == p is generally not needed and such a situation usually indicates an error in the client code, which is being masked. As a data point, boost::shared_ptr was changed to assert on such self-resets in 2001 and there were no complaints.

One might think that self-resets are necessary for operator= to work; it's specified to perform

reset( u.release() );

and the self-assignment

p = move(p);

might appear to result in a self-reset. But it doesn't; the release() is performed first, zeroing the stored pointer. In other words, p.reset( q.release() ) works even when p and q are the same unique_ptr, and there is no need to special-case p.reset( q.get() ) to work in a similar scenario, as it definitely doesn't when p and q are separate.
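
A small illustration of the reasoning above (restating the discussion, not adding normative guarantees):

std::unique_ptr<int> p(new int(1));
p = std::move(p);        // operator= performs reset(p.release()); release() zeroes the
                         // stored pointer first, so no self-reset actually occurs
p.reset(p.release());    // likewise well-defined, for the same reason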

Proposed resolution:

Change 20.6.11.2.5 [unique.ptr.single.modifiers]:

void reset(T* p = 0);
-4- Effects: If p == get() == 0 there are no effects. Otherwise get_deleter()(get()).

Change 20.6.11.3.3 [unique.ptr.runtime.modifiers]:

void reset(T* p = 0);

...

-2- Effects: If p == get() == 0 there are no effects. Otherwise get_deleter()(get()).


807. tuple construction should not fail unless its element's construction fails

Section: 20.3.1.2 [tuple.cnstr] Status: Ready Submitter: Howard Hinnant Date: 2008-03-13

View all issues with Ready status.

Discussion:

527 added a throws clause to the bind constructors. I believe the same throws clause should be added to tuple, except that it ought to take into account move constructors as well.

Proposed resolution:

Add to 20.3.1.2 [tuple.cnstr]:

For each tuple constructor and assignment operator, an exception is thrown only if the construction or assignment of one of the types in Types throws an exception.


808. [forward] incorrect redundant specification

Section: 20.2.2 [forward] Status: Ready Submitter: Jens Maurer Date: 2008-03-13

View other active issues in [forward].

View all other issues in [forward].

View all issues with Ready status.

Discussion:

p4 (forward) says:

Return type: If T is an lvalue-reference type, an lvalue; otherwise, an rvalue.

First of all, lvalue-ness and rvalue-ness are properties of an expression, not of a type (see 3.10 [basic.lval]). Thus, the phrasing "Return type" is wrong. Second, the phrase says exactly what the core language wording says for folding references in 14.3.1 [temp.arg.type]/p4 and for function return values in 5.2.2 [expr.call]/p10. (If we feel the wording should be retained, it should at most be a note with cross-references to those sections.)

The prose after the example talks about "forwarding as an int& (an lvalue)" etc. In my opinion, this is a category error: "int&" is a type, "lvalue" is a property of an expression, orthogonal to its type. (Btw, expressions cannot have reference type, ever.)

Similar with move:

Return type: an rvalue.

is just wrong and also redundant.

Proposed resolution:

Change 20.2.2 [forward] as indicated:

template <class T> T&& forward(typename identity<T>::type&& t);

...

Return type: If T is an lvalue-reference type, an lvalue; otherwise, an rvalue.

...

-7- In the first call to factory, A1 is deduced as int, so 2 is forwarded to A's constructor as an int&& (an rvalue). In the second call to factory, A1 is deduced as int&, so i is forwarded to A's constructor as an int& (an lvalue). In both cases, A2 is deduced as double, so 1.414 is forwarded to A's constructor as double&& (an rvalue).

template <class T> typename remove_reference<T>::type&& move(T&& t);

...

Return type: an rvalue.


809. std::swap should be overloaded for array types

Section: 25.2.3 [alg.swap] Status: Ready Submitter: Niels Dekker Date: 2008-02-28

View all other issues in [alg.swap].

View all issues with Ready status.

Discussion:

For the sake of generic programming, the header <algorithm> should provide an overload of std::swap for array types:

template<class T, size_t N> void swap(T (&a)[N], T (&b)[N]);

It became apparent to me that this overload is missing, when I considered how to write a swap function for a generic wrapper class template. (Actually I was thinking of Boost's value_initialized.) Please look at the following template, W, and suppose that is intended to be a very generic wrapper:

template<class T> class W {
public:
   T data;
};

Clearly W<T> is CopyConstructible and CopyAssignable, and therefore Swappable, whenever T is CopyConstructible and CopyAssignable. Moreover, W<T> is also Swappable when T is an array type whose element type is CopyConstructible and CopyAssignable. Still it is recommended to add a custom swap function template to such a class template, for the sake of efficiency and exception safety. (E.g., Scott Meyers, Effective C++, Third Edition, item 25: Consider support for a non-throwing swap.) This function template is typically written as follows:

template<class T> void swap(W<T>& x, W<T>& y) {
  using std::swap;
  swap(x.data, y.data);
}

Unfortunately, this will introduce an undesirable inconsistency, when T is an array. For instance, W<std::string[8]> is Swappable, but the current Standard does not allow calling the custom swap function that was especially written for W!

W<std::string[8]> w1, w2;  // Two objects of a Swappable type.
std::swap(w1, w2);  // Well-defined, but inefficient.
using std::swap;
swap(w1, w2);  // Ill-formed, just because ADL finds W's swap function!!!

W's swap function would try to call std::swap for an array, std::string[8], which is not supported by the Standard Library. This issue is easily solved by providing an overload of std::swap for array types. This swap function should be implemented in terms of swapping the elements of the arrays, so that it would be non-throwing for arrays whose element types have a non-throwing swap.

Note that such an overload of std::swap should also support multi-dimensional arrays. Fortunately that isn't really an issue, because it would do so automatically, by means of recursion.
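
A minimal sketch of how the proposed overload could be implemented, as if it were added to <algorithm> (illustrative only; the proposed wording follows below):

template<class T, size_t N> void swap(T (&a)[N], T (&b)[N])
{
  swap_ranges(a, a + N, b);  // element-wise; if T is itself an array type,
                             // this overload is selected again for each element
}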

For your information, there was a discussion on this issue at comp.lang.c++.moderated: [Standard Library] Shouldn't std::swap be overloaded for C-style arrays?

Proposed resolution:

Add an extra condition to the definition of Swappable requirements [swappable] in 20.1.1 [utility.arg.requirements]:

- T is Swappable if T is an array type whose element type is Swappable.

Add the following to 25.2.3 [alg.swap]:

template<class T, size_t N> void swap(T (&a)[N], T (&b)[N]);
Requires: Type T shall be Swappable.
Effects: swap_ranges(a, a + N, b);

810. Missing traits dependencies in operational semantics of extended manipulators

Section: 27.6.4 [ext.manip] Status: New Submitter: Daniel Krügler Date: 2008-03-01

View other active issues in [ext.manip].

View all other issues in [ext.manip].

View all issues with New status.

Discussion:

The recent draft (as well as the original proposal n2072) uses an operational semantic for get_money ([ext.manip]/3) and put_money ([ext.manip]/5), which uses

istreambuf_iterator<charT>

and

ostreambuf_iterator<charT>

respectively, instead of the iterator instances with an explicitly provided traits argument (the operational semantics defined by f are also traits-dependent). This is an obvious oversight, because both *streambuf_iterator c'tors expect a basic_streambuf<charT,traits> as argument.

The same problem occurs within the get_time and put_time semantics (p. 7 and p. 9) of N2071 as incorporated in N2521, where in addition to this problem we have an editorial issue in get_time (streambuf_iterator instead of istreambuf_iterator).

Proposed resolution:

In 27.6.4 [ext.manip]/3 within function f replace the first line

template <class charT, class traits, class moneyT> 
void f(basic_ios<charT, traits>& str, moneyT& mon, bool intl) { 
   typedef istreambuf_iterator<charT, traits> Iter;
   ...

In 27.6.4 [ext.manip]/4 remove the first template charT parameter:

template <class charT, class moneyT> unspecified put_money(const moneyT& mon, bool intl = false);

In 27.6.4 [ext.manip]/5 within function f replace the first line

template <class charT, class traits, class moneyT> 
void f(basic_ios<charT, traits>& str, const moneyT& mon, bool intl) { 
  typedef ostreambuf_iterator<charT, traits> Iter;
  ...

In 27.6.4 [ext.manip]/7 within function f replace the first line

template <class charT, class traits> 
void f(basic_ios<charT, traits>& str, struct tm *tmb, const charT *fmt) { 
  typedef istreambuf_iterator<charT, traits> Iter;
  ...

In 27.6.4 [ext.manip]/8 add const:

template <class charT> unspecified put_time(const struct tm *tmb, const charT *fmt);

In 27.6.4 [ext.manip]/9 within function f replace the first line

template <class charT, class traits> 
void f(basic_ios<charT, traits>& str, const struct tm *tmb, const charT *fmt) { 
  typedef ostreambuf_iterator<charT, traits> Iter;
  ...

Add to the <iomanip> synopsis in 27.6 [iostream.format]

template <class moneyT> unspecified get_money(moneyT& mon, bool intl = false);
template <class moneyT> unspecified put_money(const moneyT& mon, bool intl = false);
template <class charT> unspecified get_time(struct tm *tmb, const charT *fmt);
template <class charT> unspecified put_time(const struct tm *tmb, const charT *fmt);

811. pair of pointers no longer works with literal 0

Section: 20.2.3 [pairs] Status: New Submitter: Doug Gregor Date: 2008-03-14

View all other issues in [pairs].

View all issues with New status.

Discussion:

#include <utility>

int main()
{
   std::pair<char *, char *> p (0,0);
}

I just got a bug report about that, because it's valid C++03, but not C++0x. The important realization, for me, is that the emplace proposal---which made push_back variadic, causing the push_back(0) issue---didn't cause this break in backward compatibility. The break actually happened when we added this pair constructor as part of adding rvalue references into the language, long before variadic templates or emplace came along:

template<class U, class V> pair(U&& x, V&& y);

Now, concepts will address this issue by constraining that pair constructor to only U's and V's that can properly construct "first" and "second", e.g. (from N2322):

template<class U , class V >
requires Constructible<T1, U&&> && Constructible<T2, V&&>
pair(U&& x , V&& y );

Proposed resolution:


812. unsolicited multithreading considered harmful?

Section: 25.3.1 [alg.sort] Status: New Submitter: Paul McKenney Date: 2008-02-27

View all issues with New status.

Discussion:

Multi-threading is a good thing, but unsolicited multi-threading can potentially be harmful. For example, sort() performance might be greatly increased via a multithreaded implementation. However, such a multithreaded implementation could result in concurrent invocations of the user-supplied comparator. This would in turn result in problems given a caching comparator that might be written for complex sort keys. Please note that this is not a theoretical issue, as multithreaded implementations of sort() already exist.
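
A hypothetical caching comparator of the kind described above; it memoizes expensive sort keys in an unsynchronized map, which is fine for a single-threaded sort() but becomes a data race if an implementation invokes the comparator concurrently:

struct by_cached_key {
  mutable std::map<std::string, std::size_t> cache;   // shared, unsynchronized state
  std::size_t key(const std::string& s) const {
    std::map<std::string, std::size_t>::iterator it = cache.find(s);
    if (it == cache.end())
      it = cache.insert(std::make_pair(s, s.size())).first;  // stand-in for costly work
    return it->second;
  }
  bool operator()(const std::string& a, const std::string& b) const {
    return key(a) < key(b);   // racy under an unsolicited parallel sort()
  }
};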

Having a multithreaded sort() available is good, but it should not be the default for programs that are not explicitly multithreaded. Users should not be forced to deal with concurrency unless they have asked for it.

[ This may be covered by N2410 Thread-Safety in the Standard Library (Rev 1). ]

Proposed resolution:


813. "empty" undefined for shared_ptr

Section: 20.6.12.2 [util.smartptr.shared] Status: Ready Submitter: Matt Austern Date: 2008-02-26

View other active issues in [util.smartptr.shared].

View all other issues in [util.smartptr.shared].

View all issues with Ready status.

Discussion:

Several places in 20.6.12.2 [util.smartptr.shared] refer to an "empty" shared_ptr. However, that term is nowhere defined. The closest thing we have to a definition is that the default constructor creates an empty shared_ptr and that a copy of a default-constructed shared_ptr is empty. Are any other shared_ptrs empty? For example, is shared_ptr((T*) 0) empty? What are the properties of an empty shared_ptr? We should either clarify this term or stop using it.

One reason it's not good enough to leave this term up to the reader's intuition is that, in light of N2351 and issue 711, most readers' intuitive understanding is likely to be wrong. Intuitively one might expect that an empty shared_ptr is one that doesn't store a pointer, but, whatever the definition is, that isn't it.
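
Illustrative examples, following the proposed definition below ("empty" means owning nothing) and the N2351 constructors:

shared_ptr<int> a;                 // empty, stores a null pointer
shared_ptr<int> b((int*)0);        // not empty (use_count() == 1), stores a null pointer
shared_ptr<int> c(new int(42));
shared_ptr<int> d(c, (int*)0);     // aliasing constructor: shares ownership with c
                                   // (not empty), yet stores a null pointer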

[ Peter adds: ]

Or, what is an "empty" shared_ptr?

Alisdair's wording is fine.

Proposed resolution:

Append the following sentence to 20.6.12.2 [util.smartptr.shared]

The shared_ptr class template stores a pointer, usually obtained via new. shared_ptr implements semantics of shared ownership; the last remaining owner of the pointer is responsible for destroying the object, or otherwise releasing the resources associated with the stored pointer. A shared_ptr object that does not own a pointer is said to be empty.

814. vector<bool>::swap(reference, reference) not defined

Section: 23.2.7 [vector.bool] Status: New Submitter: Alisdair Meredith Date: 2008-03-17

View other active issues in [vector.bool].

View all other issues in [vector.bool].

View all issues with New status.

Discussion:

vector<bool>::swap(reference, reference) has no definition.

Proposed resolution:


815. std::function and reference_closure do not use perfect forwarding

Section: 20.5.15.2.4 [func.wrap.func.inv] Status: Open Submitter: Alisdair Meredith Date: 2008-03-16

View all issues with Open status.

Discussion:

std::function and reference_closure should use "perfect forwarding" as described in the rvalue core proposal.

[ Sophia Antipolis: ]

According to Doug Gregor, as far as std::function is concerned, perfect forwarding can not be obtained because of type erasure. Not everyone agreed with this diagnosis of forwarding.

Proposed resolution:


816. Should bind()'s returned functor have a nofail copy ctor when bind() is nofail?

Section: 20.5.11.1.3 [func.bind.bind] Status: New Submitter: Stephan T. Lavavej Date: 2008-02-08

View other active issues in [func.bind.bind].

View all other issues in [func.bind.bind].

View all issues with New status.

Discussion:

Library Issue 527 notes that bind(f, t1, ..., tN) should be nofail when f, t1, ..., tN have nofail copy ctors.

However, no guarantees are provided for the copy ctor of the functor returned by bind(). (It's guaranteed to have a copy ctor, which can throw implementation-defined exceptions: bind() returns a forwarding call wrapper, TR1 3.6.3/2. A forwarding call wrapper is a call wrapper, TR1 3.3/4. Every call wrapper shall be CopyConstructible, TR1 3.3/4. Everything without an exception-specification may throw implementation-defined exceptions unless otherwise specified, C++03 17.4.4.8/3.)

Should the nofail guarantee requested by Library Issue 527 be extended to cover both calling bind() and copying the returned functor?

[ Howard adds: ]

tuple construction should probably have a similar guarantee.

Proposed resolution:


817. bind needs to be moved

Section: 20.5.11.1.3 [func.bind.bind] Status: New Submitter: Howard Hinnant Date: 2008-03-17

View other active issues in [func.bind.bind].

View all other issues in [func.bind.bind].

View all issues with New status.

Discussion:

The functor returned by bind() should have a move constructor that requires only move construction of its contained functor and bound arguments. That way, move-only functors can be passed to objects such as thread.

This issue is related to issue 816.
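A sketch of the motivating use case (MoveOnly is a hypothetical move-only callable; the example assumes the requested move constructor on bind's return type):

   struct MoveOnly
   {
       MoveOnly() {}
       MoveOnly(MoveOnly&&) {}             // movable
       MoveOnly(const MoveOnly&) = delete; // but not copyable
       void operator()() const {}
   };

   std::thread t(std::bind(MoveOnly()));   // would work if bind's returned functor were movable
   t.join();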

Proposed resolution:


818. wording for memory ordering

Section: 29.1 [atomics.order] Status: New Submitter: Jens Maurer Date: 2008-03-22

View all issues with New status.

Discussion:

29.1 [atomics.order] p1 says in the table that

Element | Meaning
memory_order_acq_rel | the operation has both acquire and release semantics

To my naked eye, that seems to imply that even an atomic read has both acquire and release semantics.

Then, p1 says in the table:

Element | Meaning
memory_order_seq_cst | the operation has both acquire and release semantics, and, in addition, has sequentially-consistent operation ordering

So that seems to be "the same thing" as memory_order_acq_rel, with additional constraints.

I'm then reading p2, where it says:

The memory_order_seq_cst operations that load a value are acquire operations on the affected locations. The memory_order_seq_cst operations that store a value are release operations on the affected locations.

That seems to imply that atomic reads only have acquire semantics. If that is intended, does this also apply to memory_order_acq_rel and the individual load/store operations as well?

Also, the table in p1 contains phrases with "thus" that seem to indicate consequences of normative wording in 1.10 [intro.multithread]. That shouldn't be in normative text, for fear of redundant or inconsistent specification with the other normative text.

Double-check 29.4 [atomics.types.operations] that each operation clearly says whether it's a load or a store operation, or both. (It could be clearer, IMO. Solution not in current proposed resolution.)

29.1 [atomics.order] p2: What's a "consistent execution"? It's not defined in 1.10 [intro.multithread], it's just used in notes there.

And why does 29.4 [atomics.types.operations] p9 for "load" say:

Requires: The order argument shall not be memory_order_acquire nor memory_order_acq_rel.

(Since this is exactly the same restriction as for "store", it seems to be a typo.)

And then: 29.4 [atomics.types.operations] p12:

These operations are read-modify-write operations in the sense of the "synchronizes with" definition (1.10 [intro.multithread]), so both such an operation and the evaluation that produced the input value synchronize with any evaluation that reads the updated value.

This is redundant with 1.10 [intro.multithread], see above for the reasoning.

Proposed resolution:

Replace the cross-reference in p1 to refer to 1.1 [intro.scope] instead of 1.7 [intro.memory]. Rephrase the table in p1 as follows (maybe don't use a table):

For memory_order_relaxed, no operation orders memory.

For memory_order_release, memory_order_acq_rel, and memory_order_seq_cst, a store operation performs a release operation on the affected memory location.

For memory_order_acquire, memory_order_acq_rel, and memory_order_seq_cst, a load operation performs an acquire operation on the affected memory location.

Rephrase 29.1 [atomics.order] p2:

The memory_order_seq_cst operations that load a value are acquire operations on the affected locations. The memory_order_seq_cst operations that store a value are release operations on the affected locations. In addition, in a consistent execution, tThere must be is a single total order S on all memory_order_seq_cst operations, consistent with the happens before order and modification orders for all affected locations, such that each memory_order_seq_cst operation observes either the last preceding modification according to this order S, or the result of an operation that is not memory_order_seq_cst. [Note: Although it is not explicitly required that S include locks, it can always be extended to an order that does include lock and unlock operations, since the ordering between those is already included in the happens before ordering. -- end note]

Rephrase 29.4 [atomics.types.operations] p12 as:

Effects: Atomically replaces the value pointed to by object or by this with desired. Memory is affected according to the value of order. These operations are read-modify-write operations in the sense of the "synchronizes with" definition (1.10 [intro.multithread]), so both such an operation and the evaluation that produced the input value synchronize with any evaluation that reads the updated value.

Same in p15, p20, p22.
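A minimal illustration of the classification described by the rephrased table (assuming the working paper's atomic interface):

   std::atomic<int> x(0);
   x.store(1, std::memory_order_seq_cst);       // a store: performs a release operation on x
   int v = x.load(std::memory_order_seq_cst);   // a load: performs an acquire operation on x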


819. rethrow_if_nested

Section: 18.7.6 [except.nested] Status: New Submitter: Alisdair Meredith Date: 2008-03-25

View all issues with New status.

Discussion:

Looking at the wording I submitted for rethrow_if_nested, I don't think I got it quite right.

The current wording says:

template <class E> void rethrow_if_nested(const E& e);

Effects: Calls e.rethrow_nested() only if e is publicly derived from nested_exception.

This is trying to be a bit subtle, by requiring e (not E) to be publicly derived from nested_exception the idea is that a dynamic_cast would be required to be sure. Unfortunately, if e is dynamically but not statically derived from nested_exception, e.rethrow_nested() is ill-formed.
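One possible formulation that avoids the ill-formed call (a sketch only, not proposed wording) checks the dynamic type with dynamic_cast rather than calling a member on e directly:

   template <class E> void rethrow_if_nested(const E& e)
   {
       if (const nested_exception* p = dynamic_cast<const nested_exception*>(&e))
           p->rethrow_nested();   // well-formed regardless of E's static type
   }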

Proposed resolution:


820. current_exception()'s interaction with throwing copy ctors

Section: 18.7.5 [propagation] Status: New Submitter: Stephan T. Lavavej Date: 2008-03-26

View other active issues in [propagation].

View all other issues in [propagation].

View all issues with New status.

Discussion:

As of N2521, the Working Paper appears to be silent about what current_exception() should do if it tries to copy the currently handled exception and its copy constructor throws. 18.7.5 [propagation]/7 says "If the function needs to allocate memory and the attempt fails, it returns an exception_ptr object that refers to an instance of bad_alloc.", but doesn't say anything about what should happen if memory allocation succeeds but the actual copying fails.

I see three alternatives: (1) return an exception_ptr object that refers to an instance of some fixed exception type, (2) return an exception_ptr object that refers to an instance of the copy ctor's thrown exception (but if that has a throwing copy ctor, an infinite loop can occur), or (3) call terminate().

I believe that terminate() is the most reasonable course of action, but before we go implement that, I wanted to raise this issue.
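A sketch of the scenario in question (Evil is a hypothetical exception type with a throwing copy constructor):

   struct Evil : std::runtime_error
   {
       Evil() : std::runtime_error("evil") {}
       Evil(const Evil& other) : std::runtime_error(other)
           { throw std::bad_alloc(); }   // copying the exception object throws
   };

   try { throw Evil(); }
   catch (...)
   {
       // If copying the Evil object throws, what should this return?
       std::exception_ptr p = std::current_exception();
   }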

[ Peter's summary: ]

The current practice is to not have throwing copy constructors in exception classes, because this can lead to terminate() as described in 15.5.1 [except.terminate]. Thus calling terminate() in this situation seems consistent and does not introduce any new problems.

However, the resolution of core issue 475 may relax this requirement:

The CWG agreed with the position that std::uncaught_exception() should return false during the copy to the exception object and that std::terminate() should not be called if that constructor exits with an exception.

Since throwing copy constructors will no longer call terminate(), option (3) doesn't seem reasonable as it is deemed too drastic a response in a recoverable situation.

Option (2) cannot be adopted by itself, because a potential infinite recursion will need to be terminated by one of the other options.

Proposed resolution:

Add the following paragraph after 18.7.5 [propagation]/7:

Returns (continued): If the attempt to copy the current exception object throws an exception, the function returns an exception_ptr that refers to the thrown exception or, if this is not possible, to an instance of bad_exception.

[Note: The copy constructor of the thrown exception may also fail, so the implementation is allowed to substitute a bad_exception to avoid infinite recursion. -- end note.]


821. Minor cleanup : unique_ptr

Section: 20.6.11.3.3 [unique.ptr.runtime.modifiers] Status: New Submitter: Alisdair Meredith Date: 2008-03-30

View all issues with New status.

Discussion:

Reading resolution of LWG issue 673 I noticed the following:

void reset(T* pointer p = 0 pointer());

-1- Requires: Does not accept pointer types which are convertible to T* pointer (diagnostic required). [Note: One implementation technique is to create a private templated overload. -- end note]

This could be cleaned up by mandating the overload as a public deleted function. In addition, we should probably overload reset on nullptr_t to be a stronger match than the deleted overload. Words...
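A sketch of the intended behavior after the cleanup (not proposed wording):

   std::unique_ptr<int[]> up(new int[10]);
   up.reset(new int[5]);    // ok: exact match on reset(pointer)
   up.reset(nullptr);       // ok: matches the reset(nullptr_t) overload rather than
                            // the deleted template overload
   // up.reset(static_cast<void*>(0));  // rejected: selects the deleted template overload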

Proposed resolution:

Add to class template definition in 20.6.11.3 [unique.ptr.runtime]

// modifiers 
T* release(); 
void reset(T* p = 0); 
void reset( nullptr_t );
template< typename T > void reset( T ) = delete;
void swap(unique_ptr&& u);

Update 20.6.11.3.3 [unique.ptr.runtime.modifiers]

void reset(pointer p = pointer());
void reset(nullptr_t);

-1- Requires: Does not accept pointer types which are convertible to pointer (diagnostic required). [Note: One implementation technique is to create a private templated overload. -- end note]

Effects: If get() == nullptr there are no effects. Otherwise get_deleter()(get()).

...

[ Note this wording incorporates resolutions for 806 (New) and 673 (Ready). ]


822. Object with explicit copy constructor no longer CopyConstructible

Section: 20.1.1 [utility.arg.requirements] Status: New Submitter: James Kanze Date: 2008-04-01

View other active issues in [utility.arg.requirements].

View all other issues in [utility.arg.requirements].

View all issues with New status.

Discussion:

I just noticed that the following program is legal in C++03, but is forbidden in the current draft:

#include <vector>
#include <iostream>

class Toto
{
public:
    Toto() {}
    explicit Toto( Toto const& ) {}
} ;

int
main()
{
    std::vector< Toto > v( 10 ) ;
    return 0 ;
}

Is this change intentional? (And if so, what is the justification? I wouldn't call such code good, but I don't see any reason to break it unless we get something else in return.)

Proposed resolution:

In 20.1.1 [utility.arg.requirements] change Table 33: MoveConstructible requirements [moveconstructible]:

expression | post-condition
T t(rv) = rv | t is equivalent to the value of rv before the construction
...

In 20.1.1 [utility.arg.requirements] change Table 34: CopyConstructible requirements [copyconstructible]:

expression | post-condition
T t(u) = u | the value of u is unchanged and is equivalent to t
...

823. identity<void> seems broken

Section: 20.2.2 [forward] Status: Review Submitter: Walter Brown Date: 2008-04-09

View other active issues in [forward].

View all other issues in [forward].

View all issues with Review status.

Discussion:

N2588 seems to have added an operator() member function to the identity<> helper in 20.2.2 [forward]. I believe this change makes it no longer possible to instantiate identity<void>, as it would require forming a reference-to-void type as this operator()'s parameter type.

Suggested resolution: Specialize identity<void> so as not to require the member function's presence.
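A one-line illustration of the problem: with the N2588 definition, instantiating the class template declares an operator() whose parameter type is the ill-formed const void&.

   std::identity<void> id;   // ill-formed without a specialization or a requires clause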

[ Sophia Antipolis: ]

Jens: suggests to add a requires clause to avoid specializing on void.

Alisdair: also consider cv-qualified void.

Alberto provided proposed wording.

Proposed resolution:

Change definition of identity in 20.2.2 [forward], paragraph 2, to:

template <class T>  struct identity {
    typedef T type;

    requires ReferentType<T>
      const T& operator()(const T& x) const;
  };

...

  requires ReferentType<T>
    const T& operator()(const T& x) const;

Rationale:

The point here is to able to write T& given T and ReferentType is precisely the concept that guarantees so, according to N2677 (Foundational concepts). Because of this, it seems preferable than an explicit check for cv void using SameType/remove_cv as it was suggested in Sophia. In particular, Daniel remarked that there may be types other than cv void which aren't referent types (int[], perhaps?).


824. rvalue ref issue with basic_string inserter

Section: 21.3.8.9 [string.io] Status: Ready Submitter: Alisdair Meredith Date: 2008-04-10

View all other issues in [string.io].

View all issues with Ready status.

Discussion:

In the current working paper, the <string> header synopsis at the end of 21.2 [string.classes] lists a single operator<< overload for basic_string.

template<class charT, class traits, class Allocator>
 basic_ostream<charT, traits>&
   operator<<(basic_ostream<charT, traits>&& os,
              const basic_string<charT,traits,Allocator>& str);

The definition in 21.3.8.9 [string.io] lists two:

template<class charT, class traits, class Allocator>
 basic_ostream<charT, traits>&
   operator<<(basic_ostream<charT, traits>& os,
              const basic_string<charT,traits,Allocator>& str);

template<class charT, class traits, class Allocator>
 basic_ostream<charT, traits>&
   operator<<(basic_ostream<charT, traits>&& os,
              const basic_string<charT,traits,Allocator>& str);

I believe the synopsis in 21.2 [string.classes] is correct, and the first of the two signatures in 21.3.8.9 [string.io] should be deleted.

Proposed resolution:

Delete the first of the two signatures in 21.3.8.9 [string.io]:

template<class charT, class traits, class Allocator>
 basic_ostream<charT, traits>&
   operator<<(basic_ostream<charT, traits>& os,
              const basic_string<charT,traits,Allocator>& str);

template<class charT, class traits, class Allocator>
 basic_ostream<charT, traits>&
   operator<<(basic_ostream<charT, traits>&& os,
              const basic_string<charT,traits,Allocator>& str);

825. Missing rvalue reference stream insert/extract operators?

Section: 19.4.2.1 [syserr.errcode.overview], 20.6.12.2.8 [util.smartptr.shared.io], 22.2.8 [facets.examples], 23.3.5.3 [bitset.operators], 26.3.6 [complex.ops], 27.5 [stream.buffers], 28.9 [re.submatch] Status: Open Submitter: Alisdair Meredith Date: 2008-04-10

View all issues with Open status.

Discussion:

Should the insert/extract operators in the sections listed above take rvalue references to streams?

[ Sophia Antipolis ]

Agree with the idea in the issue, Alisdair to provide wording.

Proposed resolution:


827. constexpr shared_ptr::shared_ptr()?

Section: 20.6.12.2.1 [util.smartptr.shared.const] Status: New Submitter: Peter Dimov Date: 2008-04-11

View all other issues in [util.smartptr.shared.const].

View all issues with New status.

Discussion:

Would anyone object to making the default constructor of shared_ptr (and weak_ptr and enable_shared_from_this) constexpr? This would enable static initialization for shared_ptr variables, eliminating another unfair advantage of raw pointers.
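A sketch of what a constexpr default constructor would enable (namespace-scope variables get static rather than dynamic initialization):

   std::shared_ptr<int> gp;   // with constexpr shared_ptr(), this is statically initialized;
                              // no code needs to run for it at program start-up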

Proposed resolution:


828. Static initialization for std::mutex?

Section: 30.3.1.1 [thread.mutex.class] Status: Review Submitter: Peter Dimov Date: 2008-04-18

View all issues with Review status.

Discussion:

[Note: I'm assuming here that 3.6.2 [basic.start.init]/1 will be fixed.]

Currently std::mutex doesn't support static initialization. This is a regression with respect to pthread_mutex_t, which does. I believe that we should strive to eliminate such regressions in expressive power where possible, both to ease migration and to not provide incentives to (or force) people to forego the C++ primitives in favor of pthreads.
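A sketch of the parity with pthreads being asked for:

   // C with pthreads: static initialization, no start-up code required
   // pthread_mutex_t pm = PTHREAD_MUTEX_INITIALIZER;

   // C++ with a constexpr mutex constructor: this namespace-scope mutex would
   // likewise be statically initialized and usable before dynamic initialization runs
   std::mutex m;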

[ Sophia Antipolis: ]

We believe this is implementable on POSIX, because the initializer-list feature and the constexpr feature make this work. Double-check core language about static initialization for this case. Ask core for a core issue about order of destruction of statically-initialized objects wrt. dynamically-initialized objects (should come afterwards). Check non-POSIX systems for implementability.

If ubiquitous implementability cannot be assured, plan B is to introduce another constructor, make it constexpr, and have it be conditionally supported. To avoid ambiguities, this new constructor needs to have an additional parameter.

Proposed resolution:

Change 30.3.1.1 [thread.mutex.class]:

class mutex { 
public: 
  constexpr mutex(); 
  ...

829. current_exception wording unclear about exception type

Section: 18.7.5 [propagation] Status: Ready Submitter: Beman Dawes Date: 2008-04-20

View other active issues in [propagation].

View all other issues in [propagation].

View all issues with Ready status.

Discussion:

Consider this code:

exception_ptr xp;
try {do_something(); }

catch (const runtime_error& ) {xp = current_exception();}

...

rethrow_exception(xp);

Say do_something() throws an exception object of type range_error. What is the type of the exception object thrown by rethrow_exception(xp) above? It must be range_error; if it were of type runtime_error, it still wouldn't be possible to propagate the original exception, and the exception_ptr/current_exception/rethrow_exception machinery would serve no useful purpose.

Unfortunately, the current wording does not explicitly say that. Different people read the current wording and come to different conclusions. While it may be possible to deduce the correct type from the current wording, it would be much clearer to come right out and explicitly say what the type of the referred to exception is.
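Under the intended reading, and continuing the example above, the rethrown object keeps its dynamic type:

   try { rethrow_exception(xp); }
   catch (const range_error&)   { /* reached: the original range_error arrives here */ }
   catch (const runtime_error&) { /* not reached */ }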

[ Peter adds: ]

I don't like the proposed resolution of 829. The normative text is unambiguous that the exception_ptr refers to the currently handled exception. This term has a standard meaning, see 15.3 [except.handle]/8; this is the exception that throw; would rethrow, see 15.1 [except.throw]/7.

A better way to address this is to simply add the non-normative example in question as a clarification. The term currently handled exception should be italicized and cross-referenced. A [Note: the currently handled exception is the exception that a throw expression without an operand (15.1 [except.throw]/7) would rethrow. --end note] is also an option.

Proposed resolution:

After 18.7.5 [propagation] , paragraph 7, add the indicated text:

exception_ptr current_exception();

Returns: exception_ptr object that refers to the currently handled exception (15.3 [except.handle]) or a copy of the currently handled exception, or a null exception_ptr object if no exception is being handled. If the function needs to allocate memory and the attempt fails, it returns an exception_ptr object that refers to an instance of bad_alloc. It is unspecified whether the return values of two successive calls to current_exception refer to the same exception object. [Note: that is, it is unspecified whether current_exception creates a new copy each time it is called. -- end note]

Throws: nothing.


830. Incomplete list of char_traits specializations

Section: 21.1 [char.traits] Status: Open Submitter: Dietmar Kühl Date: 2008-04-23

View all other issues in [char.traits].

View all issues with Open status.

Discussion:

Paragraph 4 of 21.1 [char.traits] mentions that this section specifies two specializations (char_traits<char> and char_traits<wchar_t>). However, there are actually four specializations provided, i.e. in addition to the two above also char_traits<char16_t> and char_traits<char32_t>. I guess this was just an oversight and there is nothing wrong with just fixing this.

[ Alisdair adds: ]

char_traits< char16/32_t > should also be added to <ios_fwd> in 27.2 [iostream.forward], and all the specializations taking a char_traits parameter in that header.

[ Sophia Antipolis: ]

Idea of the issue is ok.

Alisdair to provide wording, once that wording arrives, move to review.

Proposed resolution:

Replace paragraph 4 of 21.1 [char.traits] by:

This subclause specifies a struct template, char_traits<charT>, and four explicit specializations of it, char_traits<char>, char_traits<char16_t>, char_traits<char32_t>, and char_traits<wchar_t>, all of which appear in the header <string> and satisfy the requirements below.


832. Applying constexpr to System error support

Section: 19.4 [syserr] Status: Review Submitter: Beman Dawes Date: 2008-05-14

View other active issues in [syserr].

View all other issues in [syserr].

View all issues with Review status.

Discussion:

Initialization of objects of class error_code (19.4.2 [syserr.errcode]) and class error_condition (19.4.3 [syserr.errcondition]) can be made simpler and more reliable by use of the new constexpr feature [N2349] of C++0x. Less code will need to be generated for both library implementations and user programs when manipulating constant objects of these types.

This was not proposed originally because the constant expressions proposal was moving into the standard at about the same time as the Diagnostics Enhancements proposal [N2241], and it wasn't desirable to make the latter depend on the former. There were also technical concerns as to how constexpr would apply to references. Those concerns are now resolved; constexpr can't be used for references, and that fact is reflected in the proposed resolution.

Thanks to Jens Maurer, Gabriel Dos Reis, and Bjarne Stroustrup for clarification of constexpr requirements.

LWG issue 804 is related in that it raises the question of whether the exposition only member cat_ of class error_code (19.4.2 [syserr.errcode]) and class error_condition (19.4.3 [syserr.errcondition]) should be presented as a reference or pointer. While in the context of 804 that is arguably an editorial question, presenting it as a pointer becomes more or less required with this proposal, given constexpr does not play well with references. The proposed resolution thus changes the private member to a pointer, which also brings it in sync with real implementations.

[ Sophia Antipolis: ]

On going question of extern pointer vs. inline functions for interface.

Proposed resolution:

The proposed wording assumes the LWG 805 proposed wording has been applied to the WP, resulting in the former posix_category being renamed generic_category. If 805 has not been applied, the names in this proposal must be adjusted accordingly.

Change 19.4.1.1 [syserr.errcat.overview] Class error_category overview error_category synopsis as indicated:

const error_category& get_generic_category();
const error_category& get_system_category();

static extern const error_category&* const generic_category = get_generic_category();
static extern const error_category&* const native_category system_category = get_system_category();

Change 19.4.1.5 [syserr.errcat.objects] Error category objects as indicated:

extern const error_category&* const get_generic_category();

Returns: A reference generic_category shall point to an a statically initialized object of a type derived from class error_category.

Remarks: The object's default_error_condition and equivalent virtual functions shall behave as specified for the class error_category. The object's name virtual function shall return a pointer to the string "GENERIC".

extern const error_category&* const get_system_category();

Returns: A reference system_category shall point to an a statically initialized object of a type derived from class error_category.

Remarks: The object's equivalent virtual functions shall behave as specified for class error_category. The object's name virtual function shall return a pointer to the string "system". The object's default_error_condition virtual function shall behave as follows:

If the argument ev corresponds to a POSIX errno value posv, the function shall return error_condition(posv, generic_category). Otherwise, the function shall return error_condition(ev, system_category). What constitutes correspondence for any given operating system is unspecified. [Note: The number of potential system error codes is large and unbounded, and some may not correspond to any POSIX errno value. Thus implementations are given latitude in determining correspondence. -- end note]

Change 19.4.2.1 [syserr.errcode.overview] Class error_code overview as indicated:

class error_code {
public:
  ...;
  constexpr error_code(int val, const error_category&* cat);
  ...
  void assign(int val, const error_category&* cat);
  ...
  const error_category&* category() const;
  ...
private:
  int val_;                    // exposition only
  const error_category&* cat_; // exposition only

Change 19.4.2.2 [syserr.errcode.constructors] Class error_code constructors as indicated:

constexpr error_code(int val, const error_category&* cat);

Effects: Constructs an object of type error_code.

Postconditions: val_ == val and cat_ == cat.

Throws: Nothing.

Change 19.4.2.3 [syserr.errcode.modifiers] Class error_code modifiers as indicated:

void assign(int val, const error_category&* cat);

Postconditions: val_ == val and cat_ == cat.

Throws: Nothing.

Change 19.4.2.4 [syserr.errcode.observers] Class error_code observers as indicated:

const error_category&* category() const;

Returns: cat_.

Throws: Nothing.

Change 19.4.3.1 [syserr.errcondition.overview] Class error_condition overview as indicated:

class error_condition {
public:
  ...;
  constexpr error_condition(int val, const error_category&* cat);
  ...
  void assign(int val, const error_category&* cat);
  ...
  const error_category&* category() const;
  ...
private:
  int val_;                    // exposition only
  const error_category&* cat_; // exposition only

Change 19.4.3.2 [syserr.errcondition.constructors] Class error_condition constructors as indicated:

constexpr error_condition(int val, const error_category&* cat);

Effects: Constructs an object of type error_condition.

Postconditions: val_ == val and cat_ == cat.

Throws: Nothing.

Change 19.4.3.3 [syserr.errcondition.modifiers] Class error_condition modifiers as indicated:

void assign(int val, const error_category&* cat);

Postconditions: val_ == val and cat_ == cat.

Throws: Nothing.

Change 19.4.3.4 [syserr.errcondition.observers] Class error_condition observers as indicated:

const error_category&* category() const;

Returns: cat_.

Throws: Nothing.

Throughout 19.4 [syserr] System error support, change "category()." to "category()->". Appears approximately six times.

[Partially Editorial] In 19.4.4 [syserr.compare] Comparison operators, paragraphs 2 and 4, change "category.equivalent(" to "category()->equivalent(".

Change 19.4.5.1 [syserr.syserr.overview] Class system_error overview as indicated:

public:
  system_error(error_code ec, const string& what_arg);
  system_error(error_code ec);
  system_error(int ev, const error_category&* ecat,
      const string& what_arg);
  system_error(int ev, const error_category&* ecat);

Change 19.4.5.2 [syserr.syserr.members] Class system_error members as indicated:

system_error(int ev, const error_category&* ecat, const string& what_arg);

Effects: Constructs an object of class system_error.

Postconditions: code() == error_code(ev, ecat) and strcmp(runtime_error::what(), what_arg.c_str()) == 0.

system_error(int ev, const error_category&* ecat);

Effects: Constructs an object of class system_error.

Postconditions: code() == error_code(ev, ecat) and strcmp(runtime_error::what(), "") == 0.


833. Freestanding implementations header list needs review for C++0x

Section: 17.4.1.3 [compliance] Status: Open Submitter: Beman Dawes Date: 2008-05-14

View all issues with Open status.

Discussion:

Once the C++0x standard library is feature complete, the LWG needs to review 17.4.1.3 [compliance] Freestanding implementations header list to ensure it reflects LWG consensus.

Proposed resolution:


834. Unique_ptr::pointer requirements underspecified

Section: 20.6.11.2 [unique.ptr.single] Status: Open Submitter: Daniel Krügler Date: 2008-05-14

View all issues with Open status.

Discussion:

Issue 673 (including recent updates by 821) proposes a useful extension point for unique_ptr by granting support for an optional deleter_type::pointer to act as pointer-like replacement for element_type* (In the following: pointer).

Unfortunately no requirements are specified for the type pointer which has impact on at least two key features of unique_ptr:

  1. Operational fail-safety.
  2. (Well-)Definedness of expressions.

Unique_ptr specification makes great efforts to require that essentially *all* operations cannot throw and therefore adds proper wording to the affected operations of the deleter as well. If user-provided pointer-emulating types ("smart pointers") will be allowed, either *all* throw-nothing clauses have to be replaced by weaker "An exception is thrown only if pointer's {op} throws an exception"-clauses or it has to be said explicitly that all used operations of pointer are required *not* to throw. I understand the main focus of unique_ptr to be as near as possible to the advantages of native pointers which cannot fail and thus strongly favor the second choice. Also, the alternative position would make it much harder to write safe and simple template code for unique_ptr. Additionally, I assume that a general statement need to be given that all of the expressions of pointer used to define semantics are required to be well-formed and well-defined (also as back-end for 762).
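A sketch of the extension point in question (my_ptr and my_deleter are hypothetical names):

   template <class T> class my_ptr { /* pointer-emulating operations */ };

   struct my_deleter
   {
       typedef my_ptr<int> pointer;        // becomes unique_ptr<int, my_deleter>::pointer
       void operator()(pointer p) const;   // for unique_ptr's nothrow guarantees to hold,
   };                                      // pointer's own operations must not throw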

[ Sophia Antipolis: ]

Howard: We maybe need a core concept PointerLike, but we don't need the arithmetic (see shared_ptr vs. vector<T>::iterator).

Howard will go through and enumerate the individual requirements wrt. pointer for each member function.

Proposed resolution:

Add the following sentence just at the end of the newly proposed 20.6.11.2 [unique.ptr.single]/p. 3:

unique_ptr<T, D>::pointer's operations shall be well-formed, shall have well defined behavior, and shall not throw exceptions.

835. tying two streams together (correction to DR 581)

Section: 27.4.4.2 [basic.ios.members] Status: New Submitter: Martin Sebor Date: 2008-05-17

View other active issues in [basic.ios.members].

View all other issues in [basic.ios.members].

View all issues with New status.

Discussion:

The fix for issue 581, now integrated into the working paper, overlooks a couple of minor problems.

First, being an unformatted function once again, flush() is required to create a sentry object whose constructor must, among other things, flush the tied stream. When two streams are tied together, either directly or through another intermediate stream object, flushing one will also cause a call to flush() on the other tied stream(s) and vice versa, ad infinitum. The program below demonstrates the problem.

Second, as Bo Persson notes in his comp.lang.c++.moderated post, for streams with the unitbuf flag set such as std::cerr, the destructor of the sentry object will again call flush(). This seems to create an infinite recursion for std::cerr << std::flush;

#include <iostream>

int main ()
{
   std::cout.tie (&std::cerr);
   std::cerr.tie (&std::cout);
   std::cout << "cout\n";
   std::cerr << "cerr\n";
} 
           

Proposed resolution:

I think an easy way to plug the first hole is to add a requires clause to ostream::tie(ostream *tiestr) requiring the this pointer not be equal to any pointer on the list starting with tiestr->tie() through tiestr->tie()->tie() and so on. I am not proposing that we require implementations to traverse this list, although I think we could since the list is unlikely to be very long.

Add a Requires clause to 27.4.4.2 [basic.ios.members] with the following text:

Requires: If (tiestr != 0) is true, tiestr must not be reachable by traversing the linked list of tied stream objects starting from tiestr->tie().

In addition, to prevent the infinite recursion that Bo writes about in his comp.lang.c++.moderated post, I propose to change 27.6.2.4 [ostream::sentry], p2 like so:

If ((os.flags() & ios_base::unitbuf) && !uncaught_exception()) is true, calls os.flush() os.rdbuf()->pubsync().

836. effects of money_base::space and money_base::none on money_get

Section: 22.2.6.1.2 [locale.money.get.virtuals] Status: New Submitter: Martin Sebor Date: 2008-05-17

View other active issues in [locale.money.get.virtuals].

View all other issues in [locale.money.get.virtuals].

View all issues with New status.

Duplicate of: 670

Discussion:

In paragraph 2, 22.2.6.1.2 [locale.money.get.virtuals] specifies the following:

Where space or none appears in the format pattern, except at the end, optional white space (as recognized by ct.is) is consumed after any required space.

This requirement can be (and has been) interpreted two mutually exclusive ways by different readers. One possible interpretation is that:

  1. where money_base::space appears in the format, at least one space is required, and
  2. where money_base::none appears in the format, space is allowed but not required.

The other is that:

where either money_base::space or money_base::none appears in the format, white space is optional.

Proposed resolution:

I propose to change the text to make it clear that the first interpretation is intended, that is, to make the following change to 22.2.6.1.2 [locale.money.get.virtuals], p2:

When money_base::space or money_base::none appears as the last element in the format pattern, except at the end, optional white space (as recognized by ct.is) is consumed after any required space. no white space is consumed. Otherwise, where money_base::space appears in any of the initial elements of the format pattern, at least one white space character is required. Where money_base::none appears in any of the initial elements of the format pattern, white space is allowed but not required. In either case, any required followed by all optional white space (as recognized by ct.is()) is consumed. If (str.flags() & str.showbase) is false, ...

837. basic_ios::copyfmt() overly loosely specified

Section: 27.4.4.2 [basic.ios.members] Status: New Submitter: Martin Sebor Date: 2008-05-17

View other active issues in [basic.ios.members].

View all other issues in [basic.ios.members].

View all issues with New status.

Discussion:

The basic_ios::copyfmt() member function is specified in 27.4.4.2 [basic.ios.members] to have the following effects:

Effects: If (this == &rhs) does nothing. Otherwise assigns to the member objects of *this the corresponding member objects of rhs, except that

Since the rest of the text doesn't specify what the member objects of basic_ios are this seems a little too loose.

Proposed resolution:

I propose to tighten things up by adding a Postcondition clause to the function like so:

Postconditions:
copyfmt() postconditions
Element Value
rdbuf() unchanged
tie() rhs.tie()
rdstate() unchanged
exceptions() rhs.exceptions()
flags() rhs.flags()
width() rhs.width()
precision() rhs.precision()
fill() rhs.fill()
getloc() rhs.getloc()

The format of the table follows Table 117 (as of N2588): basic_ios::init() effects.

The intent of the new table is not to impose any new requirements or change existing ones, just to be more explicit about what I believe is already there.


838. can an end-of-stream iterator become a non-end-of-stream one?

Section: 24.5.1 [istream.iterator] Status: New Submitter: Martin Sebor Date: 2008-05-17

View other active issues in [istream.iterator].

View all other issues in [istream.iterator].

View all issues with New status.

Discussion:

From message c++std-lib-20003...

The description of istream_iterator in 24.5.1 [istream.iterator], p1 specifies that objects of the class become the end-of-stream (EOS) iterators under the following condition (see also issue 836 another problem with this paragraph):

If the end of stream is reached (operator void*() on the stream returns false), the iterator becomes equal to the end-of-stream iterator value.

One possible implementation approach that has been used in practice is for the iterator to set its in_stream pointer to 0 when it reaches the end of the stream, just like the default ctor does on initialization. The problem with this approach is that the Effects clause for operator++() says the iterator unconditionally extracts the next value from the stream by evaluating *in_stream >> value, without checking for (in_stream == 0).

Conformance to the requirement outlined in the Effects clause can easily be verified in programs by setting eofbit or failbit in exceptions() of the associated stream and attempting to iterate past the end of the stream: each past-the-end access should trigger an exception. This suggests that some other, more elaborate technique might be intended.

Another approach, one that allows operator++() to attempt to extract the value even for EOS iterators (just as long as in_stream is non-0) is for the iterator to maintain a flag indicating whether it has reached the end of the stream. This technique would satisfy the presumed requirement implied by the Effects clause mentioned above, but it isn't supported by the exposition-only members of the class (no such flag is shown). This approach is also found in existing practice.

The inconsistency between existing implementations raises the question of whether the intent of the specification is that a non-EOS iterator that has reached the EOS become a non-EOS one again after the stream's eofbit flag has been cleared? That is, are the assertions in the program below expected to pass?

   std::stringstream strm ("1 ");
   std::istream_iterator<int> eos;
   std::istream_iterator<int> it (strm);
   int i;
   i = *it++;
   assert (it == eos);
   strm.clear ();
   strm << "2 3 ";
   assert (it != eos);
   i = *++it;
   assert (3 == i);
     

Or is it intended that once an iterator becomes EOS it stays EOS until the end of its lifetime?

Proposed resolution:

The discussion of this issue on the reflector suggests that the intent of the standard is for an istream_iterator that has reached the EOS to remain in the EOS state until the end of its lifetime. Implementations that permit EOS iterators to return to a non-EOS state may only do so as an extension, and only as a result of calling istream_iterator member functions on EOS iterators, whose behavior is in that case undefined.

To this end we propose to change 24.5.1 [istream.iterator], p1, as follows:

The result of operator-> on an end-of-stream is not defined. For any other iterator value a const T* is returned. Invoking operator++() on an end-of-stream iterator is undefined. It is impossible to store things into istream iterators...

Add pre/postconditions to the member function descriptions of istream_iterator like so:

istream_iterator();
Effects: Constructs the end-of-stream iterator.
Postcondition: in_stream == 0.
istream_iterator(istream_type &s);
Effects: Initializes in_stream with &s. value may be initialized during construction or the first time it is referenced.
Postcondition: in_stream == &s.
istream_iterator(const istream_iterator &x);
Effects: Constructs a copy of x.
Postcondition: in_stream == x.in_stream.
istream_iterator& operator++();
Requires: in_stream != 0.
Effects: *in_stream >> value.
istream_iterator& operator++(int);
Requires: in_stream != 0.
Effects:
istream_iterator tmp (*this);
*in_stream >> value;
return tmp;
     

839. Maps and sets missing splice operation

Section: 23.3 [associative], 23.4 [unord] Status: Open Submitter: Alan Talbot Date: 2008-05-18

View all issues with Open status.

Discussion:

Splice is a very useful feature of list. This functionality is also very useful for any other node based container, and I frequently wish it were available for maps and sets. It seems like an omission that these containers lack this capability. Although the complexity for a splice is the same as for an insert, the actual time can be much less since the objects need not be reallocated and copied. When the element objects are heavy and the compare operations are fast (say a map<int, huge_thingy>) this can be a big win.
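A hypothetical usage example (the name splice and the element type huge_thingy are placeholders taken from the discussion, not proposed wording):

   std::map<int, huge_thingy> a, b;
   // ... populate b ...
   a.splice(std::move(b));   // move b's nodes into a without copying the huge_thingy objects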

Suggested resolution:

Add the following signatures to map, set, multimap, multiset, and the unordered associative containers:

 
void splice(list<T,Allocator>&& x);
void splice(list<T,Allocator>&& x, const_iterator i);
void splice(list<T,Allocator>&& x, const_iterator first, const_iterator last);

Hint versions of these are also useful to the extent hint is useful. (I'm looking for guidance about whether hints are in fact useful.)

 
void splice(const_iterator position, list<T,Allocator>&& x);
void splice(const_iterator position, list<T,Allocator>&& x, const_iterator i);
void splice(const_iterator position, list<T,Allocator>&& x, const_iterator first, const_iterator last);

[ Sophia Antipolis: ]

Don't try to splice "list" into the other containers, it should be container-type.

forward_list already has splice_after.

Would "splice" make sense for an unordered_map?

Jens, Robert: "splice" is not the right term, it implies maintaining ordering in lists.

Howard: adopt?

Jens: absorb?

Alan: subsume?

Robert: recycle?

Howard: transfer? (but no direction)

Jens: transfer_from. No.

Alisdair: Can we give a nothrow guarantee? If your compare() and hash() doesn't throw, yes.

Daniel: For unordered_map, we can't guarantee nothrow.

Proposed resolution:


841. cstdint.syn inconsistent with C99

Section: 18.3.1 [cstdint.syn] Status: New Submitter: Martin Sebor Date: 2008-05-17

View all other issues in [cstdint.syn].

View all issues with New status.

Discussion:

In specifying the names of macros and types defined in header <stdint.h>, C99 makes use of the symbol N to accommodate unusual platforms with word sizes that aren't powers of two. C99 permits N to take on any positive integer value (including, for example, 24).

In cstdint.syn Header <cstdint> synopsis, C++ on the other hand, fixes the value of N to 8, 16, 32, and 64, and specifies only types with these exact widths.

In addition, paragraph 1 of the same section makes use of a rather informal shorthand notation to specify sets of macros. When interpreted strictly, the notation specifies macros such as INT_8_MIN that are not intended to be specified.

Finally, the section is missing the usual table of symbols defined in that header, making it inconsistent with the rest of the specification.

Proposed resolution:

I propose to use the same approach in the C++ spec as C99 uses, that is, to specify the header synopsis in terms of "exposition only" types that make use of the symbol N to denote one or more of a theoretically unbounded set of widths.

Further, I propose to add a new table to section listing the symbols defined in the header using a more formal notation that avoids introducing inconsistencies.

To this effect, in cstdint.syn Header <cstdint> synopsis, replace both the synopsis and paragraph 1 with the following text:

  1. In the names defined in the <cstdint> header, the symbol N represents a positive decimal integer with no leading zeros (e.g., 8 or 24, but not 0, 04, or 048). With the exception of exact-width types, macros and types for values of N in the set of 8, 16, 32, and 64 are required. Exact-width types, and any macros and types for values of N other than 8, 16, 32, and 64 are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, the corresponding exact-width types and macros are required.

namespace std {

   // required types

   // Fastest minimum-width integer types
   typedef signed integer type   int_fast8_t;
   typedef signed integer type   int_fast16_t;
   typedef signed integer type   int_fast32_t;
   typedef signed integer type   int_fast64_t;

   typedef unsigned integer type uint_fast8_t;
   typedef unsigned integer type uint_fast16_t;
   typedef unsigned integer type uint_fast32_t;
   typedef unsigned integer type uint_fast64_t;

   // Minimum-width integer types
   typedef signed integer type   int_least8_t;
   typedef signed integer type   int_least16_t;
   typedef signed integer type   int_least32_t;
   typedef signed integer type   int_least64_t;

   typedef unsigned integer type uint_least8_t;
   typedef unsigned integer type uint_least16_t;
   typedef unsigned integer type uint_least32_t;
   typedef unsigned integer type uint_least64_t;

   // Greatest-width integer types
   typedef signed integer type   intmax_t;
   typedef unsigned integer type uintmax_t;

   // optionally defined types

   // Exact-width integer types
   typedef signed integer type   intN_t;
   typedef unsigned integer type uintN_t;

   // Fastest minimum-width integer types for values
   // of N other than 8, 16, 32, and 64
   typedef signed integer type   int_fastN_t;
   typedef unsigned integer type uint_fastN_t;

   // Minimum-width integer types for values
   // of N other than 8, 16, 32, and 64
   typedef signed integer type   int_leastN_t;
   typedef unsigned integer type uint_leastN_t;

   // Integer types capable of holding object pointers
   typedef signed integer type   intptr_t;
   typedef unsigned integer type uintptr_t;

}

[Note to editor: Remove all of the existing paragraph 1 from cstdint.syn.]

Table ??: Header <cstdint> synopsis
Type Name(s)
Macros: INTN_MIN INTN_MAX UINTN_MAX
INT_FASTN_MIN INT_FASTN_MAX UINT_FASTN_MAX
INT_LEASTN_MIN INT_LEASTN_MAX UINT_LEASTN_MAX
INTPTR_MIN INTPTR_MAX UINTPTR_MAX
INTMAX_MIN INTMAX_MAX UINTMAX_MAX
PTRDIFF_MIN PTRDIFF_MAX
SIG_ATOMIC_MIN SIG_ATOMIC_MAX SIZE_MAX
WCHAR_MIN WCHAR_MAX
WINT_MIN WINT_MAX
INTN_C() UINTN_C()
INTMAX_C() UINTMAX_C()
Types: intN_t uintN_t
int_fastN_t uint_fastN_t
int_leastN_t uint_leastN_t
intptr_t uintptr_t
intmax_t uintmax_t

842. ConstructibleAsElement and bit containers

Section: 23.1 [container.requirements], 23.2.7 [vector.bool], 23.3.5 [template.bitset] Status: Ready Submitter: Howard Hinnant Date: 2008-06-03

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with Ready status.

Discussion:

23.1 [container.requirements]/p3 says:

Objects stored in these components shall be constructed using construct_element (20.6.9). For each operation that inserts an element of type T into a container (insert, push_back, push_front, emplace, etc.) with arguments args... T shall be ConstructibleAsElement, as described in table 88. [Note: If the component is instantiated with a scoped allocator of type A (i.e., an allocator for which is_scoped_allocator<A>::value is true), then construct_element may pass an inner allocator argument to T's constructor. -- end note]

However vector<bool, A> (23.2.7 [vector.bool]) and bitset<N> (23.3.5 [template.bitset]) store bits, not bools, and bitset<N> does not even have an allocator. But these containers are governed by this clause. Clearly this is not implementable.

Proposed resolution:

Change 23.1 [container.requirements]/p3:

Objects stored in these components shall be constructed using construct_element (20.6.9), unless otherwise specified. For each operation that inserts an element of type T into a container (insert, push_back, push_front, emplace, etc.) with arguments args... T shall be ConstructibleAsElement, as described in table 88. [Note: If the component is instantiated with a scoped allocator of type A (i.e., an allocator for which is_scoped_allocator<A>::value is true), then construct_element may pass an inner allocator argument to T's constructor. -- end note]

Change 23.2.7 [vector.bool]/p2:

Unless described below, all operations have the same requirements and semantics as the primary vector template, except that operations dealing with the bool value type map to bit values in the container storage, and construct_element (23.1 [container.requirements]) is not used to construct these values.

Move 23.3.5 [template.bitset] to clause 20.


843. Reference Closure

Section: 20.5.17.1 [func.referenceclosure.cons] Status: New Submitter: Lawrence Crowl Date: 2008-06-02

View all issues with New status.

Discussion:

The std::reference_closure type has a deleted copy assignment operator under the theory that references cannot be assigned, and hence the assignment of its reference member must necessarily be ill-formed.

However, other types, notably std::reference_wrapper and std::function, provide for the "copying of references", and thus the current definition of std::reference_closure seems unnecessarily restrictive. In particular, it should be possible to write generic functions using both std::function and std::reference_closure, but this generality is much harder to achieve when one such type does not support assignment.
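A sketch of the kind of generic code affected:

   template <class Callable>
   void rebind(Callable& target, const Callable& source)
   {
       target = source;   // fine for std::function; ill-formed for reference_closure
   }                      // while its copy assignment operator is deleted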

The definition of reference_closure does not necessarily imply direct implementation via reference types. Indeed, the reference_closure is best implemented via a frame pointer, for which there is no standard type.

The semantics of assignment are effectively obtained by use of the default destructor and default copy assignment operator via

x.~reference_closure(); new (x) reference_closure(y);

So the copy assignment operator generates no significant real burden to the implementation.

Proposed resolution:

In 20.5.17 [func.referenceclosure] Class template reference_closure, replace the =delete in the copy assignment operator in the synopsis with =default.

template<class R , class... ArgTypes > 
  class reference_closure<R (ArgTypes...)> { 
  public:
     ...
     reference_closure& operator=(const reference_closure&) = delete default;
     ...

In 20.5.17.1 [func.referenceclosure.cons] Construct, copy, destroy, add the member function description

reference_closure& operator=(const reference_closure& f)

Postcondition: *this is a copy of f.

Returns: *this.


844. complex pow return type is ambiguous

Section: 26.3.9 [cmplx.over] Status: Ready Submitter: Howard Hinnant Date: 2008-06-03

View all issues with Ready status.

Discussion:

The current working draft is in an inconsistent state.

26.3.8 [complex.transcendentals] says that:

pow(complex<float>(), int()) returns a complex<float>.

26.3.9 [cmplx.over] says that:

pow(complex<float>(), int()) returns a complex<double>.
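The inconsistency in one line (the call is well-formed either way; only the result type differs):

   std::complex<float> z(1.0f, 2.0f);
   auto r = std::pow(z, 3);   // complex<float> per 26.3.8, complex<double> per 26.3.9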

[ Sophia Antipolis: ]

Since int promotes to double, and C99 doesn't have an int-based overload for pow, the C99 result is complex<double>, see also C99 7.22, see also library issue 550.

Special note: ask P.J. Plauger.

Looks fine.

Proposed resolution:

Strike this pow overload in 26.3.1 [complex.synopsis] and in 26.3.8 [complex.transcendentals]:

template<class T> complex<T> pow(const complex<T>& x, int y);

845. atomics cannot support aggregate initialization

Section: 29.3 [atomics.types] Status: New Submitter: Alisdair Meredith Date: 2008-06-03

View other active issues in [atomics.types].

View all other issues in [atomics.types].

View all issues with New status.

Discussion:

The atomic classes (and class templates) are required to support aggregate initialization (29.3.1 [atomics.types.integral]p2 / 29.3.2 [atomics.types.address]p1) yet also have user declared constructors, so cannot be aggregates.

This problem might be solved with the introduction of the proposed initialization syntax at Antipolis, but the wording above should be altered. Either strike the sentence as redundant with new syntax, or refer to 'brace initialization'.

Proposed resolution:


846. No definition for constructor

Section: 29.3 [atomics.types] Status: New Submitter: Alisdair Meredith Date: 2008-06-03

View other active issues in [atomics.types].

View all other issues in [atomics.types].

View all issues with New status.

Discussion:

The atomic classes and class templates (29.3.1 [atomics.types.integral] / 29.3.2 [atomics.types.address]) have a constexpr constructor taking a value of the appropriate type for that atomic. However, neither clause provides semantics or a definition for this constructor. I'm not sure if the initialization is implied by use of constexpr keyword (which restricts the form of a constructor) but even if that is the case, I think it is worth spelling out explicitly as the inference would be far too subtle in that case.

Proposed resolution:


847. string exception safety guarantees

Section: 21.3.1 [string.require] Status: New Submitter: Hervé Brönnimann Date: 2008-06-05

View all other issues in [string.require].

View all issues with New status.

Discussion:

In March, on comp.lang.c++.moderated, I asked what the string exception safety guarantees are, because I cannot see *any* in the working paper, and any implementation I know offers the strong exception safety guarantee (string unchanged if a member throws an exception). The closest the current draft comes to offering any guarantees is 21.3 [basic.string], para 3:

The class template basic_string conforms to the requirements for a Sequence Container (23.1.1), for a Reversible Container (23.1), and for an Allocator-aware container (91). The iterators supported by basic_string are random access iterators (24.1.5).

However, the chapter 23 only says, on the topic of exceptions: 23.1 [container.requirements], para 10:

Unless otherwise specified (see 23.2.2.3 and 23.2.6.4) all container types defined in this clause meet the following additional requirements:

I take it as saying that this paragraph has *no* implication on std::basic_string, as basic_string isn't defined in Clause 23 and this paragraph does not define a *requirement* of Sequence nor Reversible Container, just of the models defined in Clause 23. In addition, LWG Issue 718 proposes to remove 23.1 [container.requirements], para 3.

Finally, the fact that no operation on Traits should throw exceptions has no bearing, except to suggest (since the only other throws should be allocation, out_of_range, or length_error) that the strong exception guarantee can be achieved.

The reaction in that group by Niels Dekker, Martin Sebor, and Bo Persson, was all that this would be worth an LWG issue.

A related issue is that erase() does not throw. This should be stated somewhere (and again, I don't think that the 23.1 [container.requirements], para 1 applies here).

Proposed resolution:

Add a blanket statement in 21.3.1 [string.require]:

- if any member function or operator of basic_string<charT, traits, Allocator> throws, that function or operator has no effect.

- no erase() or pop_back() function throws.

As far as I can tell, this is achieved by any implementation. If I made a mistake and it is not possible to offer this guarantee, then either state all the functions for which this is possible (certainly at least operator+=, append, assign, and insert), or add paragraphs to Effects clauses wherever appropriate.
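A sketch of what the blanket statement would guarantee in client code:

   std::string s("abc");
   try {
       s.append(1000000, 'x');   // may throw length_error or bad_alloc
   } catch (...) {
       assert(s == "abc");       // strong guarantee: s is unchanged
   }
   s.erase(0, 1);                // erase() itself does not throw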


848. missing std::hash specializations for std::bitset/std::vector<bool>

Section: 20.5.16 [unord.hash] Status: Ready Submitter: Thorsten Ottosen Date: 2008-06-05

View all issues with Ready status.

Discussion:

In the current working draft, std::hash<T> is specialized for builtin types and a few other types. Bitsets seem like one that is missing from the list, not because it cannot be done by the user, but because it is hard or impossible to write an efficient implementation that works on 32-bit/64-bit chunks at a time. For example, std::bitset is too much encapsulated in this respect.
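For illustration, containers that the added specializations would enable:

   std::unordered_set<std::bitset<64> > keys;          // needs hash<bitset<N>>
   std::unordered_map<std::vector<bool>, int> counts;  // needs hash<vector<bool>>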

Proposed resolution:

Add the following to the synopsis in 20.5 [function.objects]/2:

template<class Allocator> struct hash<std::vector<bool,Allocator>>;
template<size_t N> struct hash<std::bitset<N>>;

Modify the last sentence of 20.5.16 [unord.hash]/1 to end with:

... and std::string, std::u16string, std::u32string, std::wstring, std::error_code, std::thread::id, std::bitset, and std::vector<bool>.

849. missing type traits to compute root class and derived class of types in a class hierarchy

Section: 20.4.7 [meta.trans.other] Status: New Submitter: Thorsten Ottosen Date: 2008-06-05

View other active issues in [meta.trans.other].

View all other issues in [meta.trans.other].

View all issues with New status.

Discussion:

The type traits library contains various traits to dealt with polymorphic types, e.g. std::has_virtual_destructor, std::is_polymorphic and std::is_base_of. However, there is no way to compute the unique public base class of a type if such one exists. Such a trait could be very useful if one needs to instantiate a specialization made for the root class whenever a derived class is passed as parameter. For example, imagine that you wanted to specialize std::hash for a class hierarchy---instead of specializing each class, you could specialize the std::hash<root_class> and provide a partial specialization that worked for all derived classes.

This ability---to specify operations in terms of their equivalent in the root class---can be achieved with, e.g., normal functions, but there is, AFAIK, no way to do it for class templates. Being able to access compile-time information about the type hierarchy can be very powerful, and I therefore also suggest a trait that computes the directly derived class whenever that is possible.

If the computation cannot be done, the traits should fall back on the identity transformation. I expect this gives the best overall usability.
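For illustration only, a sketch of how the proposed root_base_class trait might be used, assuming it existed; the Shape/Circle hierarchy and the hierarchy_hash functor are hypothetical:

#include <cstddef>
#include <functional>

struct Shape  { int id; virtual ~Shape() { } };   // unique public root
struct Circle : Shape { };

// One hasher for the whole hierarchy, expressed in terms of the root class.
template <class T>
struct hierarchy_hash
{
    std::size_t operator()(const T& t) const
    {
        typedef typename std::root_base_class<T>::type root;  // proposed trait
        return std::hash<int>()(static_cast<const root&>(t).id);
    }
};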

Proposed resolution:

Add the following to the synopsis in 20.4.2 [meta.type.synop] under "other transformations":

template< class T > struct direct_base_class;
template< class T > struct direct_derived_class;
template< class T > struct root_base_class;

Add three new entries to table 51 (20.4.7 [meta.trans.other]) with the following content

Template:  template< class T > struct direct_base_class;
Condition: T shall be a complete type.
Comments:  The member typedef type shall equal the accessible unambiguous direct base class of T. If no such type exists, the member typedef type shall equal T.

Template:  template< class T > struct direct_derived_class;
Condition: T shall be a complete type.
Comments:  The member typedef type shall equal the unambiguous type which has T as an accessible unambiguous direct base class. If no such type exists, the member typedef type shall equal T.

Template:  template< class T > struct root_base_class;
Condition: T shall be a complete type.
Comments:  The member typedef type shall equal the accessible unambiguous most indirect base class of T. If no such type exists, the member typedef type shall equal T.

850. Should shrink_to_fit apply to std::deque?

Section: 23.2.2.2 [deque.capacity] Status: Ready Submitter: Niels Dekker Date: 2008-06-05

View other active issues in [deque.capacity].

View all other issues in [deque.capacity].

View all issues with Ready status.

Discussion:

Issue 755 added a shrink_to_fit function to std::vector and std::string. It did not yet deal with std::deque, because of the fundamental difference between std::deque and the other two container types. The need for it on std::deque may seem less evident, because one might think that for this container the overhead is only a small map plus a number of blocks bounded by a small constant.

The container overhead can in fact be arbitrarily large (i.e. is not necessarily O(N) where N is the number of elements currently held by the deque). As Bill Plauger noted in a reflector message, unless the map of block pointers is shrunk, it must hold at least maxN/B pointers where maxN is the maximum of N over the lifetime of the deque since its creation. This is independent of how the map is implemented (vector-like circular buffer and all), and maxN bears no relation to N, the number of elements it currently holds.

Hervé Brönnimann reports a situation where a deque of requests grew very large due to a temporary backup (the front request hanging), and the map of the deque grew quite large before the deque returned to its normal size. To put some numbers on it: assume a deque of pointers that holds about 1K elements in its steady state but that held, at some point in its lifetime, maxN = 10M elements, with one block holding 128 elements. The spine must then hold at least maxN/128 pointers, in this case on the order of 100K. A shrink-to-fit would allow that memory to be reclaimed; otherwise it is never reclaimed during the lifetime of the deque.

An added bonus would be that it *allows* implementations to hang on to empty blocks at the end (but does not care whether they do or not). A shrink_to_fit would take care of both kinds of shrinking, and guarantee that at most O(B) space is used in addition to the storage needed to hold the N elements and the N/B block pointers.
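A usage sketch of the proposed member (shrink_to_fit on deque is not in the current working draft; the numbers are illustrative only):

#include <deque>

void drain_backlog(std::deque<int>& q)
{
    while (q.size() > 1000)      // backlog drains back to the steady state
        q.pop_front();
    q.shrink_to_fit();           // non-binding request to release the
                                 // oversized map of block pointers
}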

Proposed resolution:

To Class template deque 23.2.2 [deque] synopsis, add:

void shrink_to_fit();

To deque capacity 23.2.2.2 [deque.capacity], add:

void shrink_to_fit();
Remarks: shrink_to_fit is a non-binding request to reduce memory use. [Note: The request is non-binding to allow latitude for implementation-specific optimizations. -- end note]

851. simplified array construction

Section: 23.2.1 [array] Status: Review Submitter: Benjamin Kosnik Date: 2008-06-05

View other active issues in [array].

View all other issues in [array].

View all issues with Review status.

Discussion:

This is an issue that came up on the libstdc++ list, where a discrepancy between "C" arrays and C++0x's std::array was pointed out.

In "C," this array usage is possible:

int ar[] = {1, 4, 6};

But for C++,

std::array<int> a = { 1, 4, 6 }; // error

Instead, the second parameter of the array template must be explicit, like so:

std::array<int, 3> a = { 1, 4, 6 };

Doug Gregor proposes the following solution, which assumes generalized initializer lists.

template<typename T, typename... Args>
inline array<T, sizeof...(Args)> 
make_array(Args&&... args) 
{ return { std::forward<Args>(args)... };  }

Then, the way to build an array from a list of unknown size is:

auto a = make_array<T>(1, 4, 6);

Proposed resolution:

Add to the array synopsis in 23.2 [sequences]:

template<typename T, typename... Args>
  requires Convertible<Args, T>...
  array<T, sizeof...(Args)> 
  make_array(Args&&... args);

Append after 23.2.1.6 [array.tuple] Tuple interface to class template array the following new section.

23.2.1.7 Convenience interface to class template array [array.tuple]

template<typename T, typename... Args>
  requires Convertible<Args, T>...
  array<T, sizeof...(Args)> 
  make_array(Args&&... args);

Returns: {std::forward<Args>(args)...}


852. unordered containers begin(n) mistakenly const

Section: 23.4 [unord] Status: Ready Submitter: Robert Klarer Date: 2008-06-12

View other active issues in [unord].

View all other issues in [unord].

View all issues with Ready status.

Discussion:

In three of the four unordered containers, the local begin member is mistakenly declared const:

local_iterator begin(size_type n) const;

Proposed resolution:

Change the synopsis in 23.4.1 [unord.map], 23.4.2 [unord.multimap], and 23.4.4 [unord.multiset] as indicated (removing the erroneous const):

local_iterator begin(size_type n);

853. to_string needs updating with zero and one

Section: 23.3.5 [template.bitset] Status: New Submitter: Howard Hinnant Date: 2008-06-18

View all other issues in [template.bitset].

View all issues with New status.

Discussion:

Issue 396 adds defaulted arguments to the to_string member, but neglects to update the three newer to_string overloads.

Proposed resolution:

Change the synopsis in 23.3.5 [template.bitset], and the signatures in 23.3.5.2 [bitset.members] to:

template <class charT, class traits> 
  basic_string<charT, traits, allocator<charT> > to_string(charT zero = charT('0'), charT one = charT('1')) const; 
template <class charT> 
  basic_string<charT, char_traits<charT>, allocator<charT> > to_string(charT zero = charT('0'), charT one = charT('1')) const; 
basic_string<char, char_traits<char>, allocator<char> > to_string(char zero = '0', char one = '1') const; 
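For illustration, with the proposed defaults every overload can be called without arguments or with custom digit characters (values shown assume the proposed resolution is applied):

#include <bitset>
#include <string>

void example()
{
    std::bitset<8> b(42);
    std::string  s1 = b.to_string();              // "00101010"
    std::string  s2 = b.to_string('.', '#');      // "..#.#.#."
    std::wstring ws = b.to_string<wchar_t>();     // wide result, defaults L'0'/L'1'
}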

854. default_delete converting constructor underspecified

Section: 20.6.11.1.1 [unique.ptr.dltr.dflt] Status: New Submitter: Howard Hinnant Date: 2008-06-18

View all issues with New status.

Discussion:

There is no relationship required between U and T in the converting constructor of the default_delete template.

Requirements: U* is convertible to T* and has_virtual_destructor<T>; the latter should also become a concept.

Rules out cross-casting.

The requirements for unique_ptr conversions should be the same as those on the deleter.
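A sketch of the conversion the constraints are meant to permit (Base and Derived are hypothetical types used only for illustration):

#include <memory>
#include <utility>

struct Base    { virtual ~Base() { } };   // HasVirtualDestructor<Base> holds
struct Derived : Base { };

void example()
{
    std::unique_ptr<Derived> d(new Derived);
    std::unique_ptr<Base> b(std::move(d));  // needs default_delete<Derived> to
                                            // convert to default_delete<Base>;
                                            // deleting a Derived through a Base*
                                            // is safe only because ~Base is virtual
}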

Proposed resolution:

Change 20.6.11.1.1 [unique.ptr.dltr.dflt]:

namespace std { 
  template <class T> struct default_delete { 
    default_delete(); 
    template <class U>
      requires Convertible<U*, T*> && HasVirtualDestructor<T>
      default_delete(const default_delete<U>&); 
    void operator()(T*) const; 
  }; 
}

...

template <class U>
  requires Convertible<U*, T*> && HasVirtualDestructor<T>
  default_delete(const default_delete<U>& other);

855. capacity() and reserve() for deque?

Section: 23.2.2.2 [deque.capacity] Status: New Submitter: Hervé Brönnimann Date: 2008-06-11

View other active issues in [deque.capacity].

View all other issues in [deque.capacity].

View all issues with New status.

Discussion:

The main point is that capacity can be viewed as a mechanism to guarantee the validity of iterators when only push_back/pop_back operations are used. For vector, this goes with reallocation. For deque, this is a bit more subtle: the capacity() of a deque may shrink, whereas that of a vector does not. In a circular-buffer implementation of the map, such as Howard's, there is a very similar notion of capacity: as long as size() is less than B * (total size of the map - 2), it is guaranteed that no iterator is invalidated after any number of push_front/back and pop_front/back operations. But this does not hold for other implementations.

Still, I believe capacity() can be defined as size() plus the number of push_front/back minus pop_front/back operations that can be performed before iterators are invalidated. In a classical implementation, capacity() = size() + the minimum distance to either "physical" end of the deque (i.e., counting the empty space in the last block plus all the blocks until the end of the map of block pointers). In Howard's circular-buffer implementation, capacity() = B * (total size of the map - 2) still works with this definition, even though the guarantee could be made stronger.

A simple picture of a deque:

A-----|----|-----|---F+|++++|++B--|-----|-----Z

(A,Z mark the beginning/end, | the block boundaries, F=front, B=back, and - are uninitialized, + are initialized) In that picture: capacity = size() + min(dist(A,F),dist(B,Z)) = min (dist(A,B),dist(F,Z)).

reserve(n) can grow the map of pointers and possibly add a number of empty blocks to it, in order to guarantee that the next n - size() push_back/push_front operations will not invalidate iterators, and also will not allocate (i.e. cannot throw). The second guarantee is not essential and can be left as a QoI matter. I know the existing implementations of deque (sgi/stl, roguewave, stlport, and dinkumware) well enough to know that both functions can be implemented with no change to the existing class layout and code, and with only a few modifications if blocks are pre-allocated (instead of always allocating a new block, check whether the next entry in the map of block pointers is non-null).

Due to the difference with vector, wording is crucial. Here's a proposed wording to make things concrete; I tried to be reasonably careful but please double-check me:

Proposed resolution:

Add new signatures to synopsis in 23.2.2 [deque]:

size_type capacity() const;
bool reserve(size_type n);

Add new signatures to 23.2.2.2 [deque.capacity]:

size_type capacity() const;

1 Returns: An upper bound on n + max(n_f - m_f, n_b - m_b) such that, for any sequence of n_f push_front, m_f pop_front, n_b push_back, and m_b pop_back operations, interleaved in any order, starting with the current deque of size n, the deque does not invalidate any of its iterators except to the erased elements.

2 Remarks: Unlike a vector's capacity, the capacity of a deque can decrease after a sequence of insertions at both ends, even if none of the operations caused the deque to invalidate any of its iterators except to the erased elements.

bool reserve(size_type n);

3 Effects: A directive that informs a deque of a planned sequence of push_front, pop_front, push_back, and pop_back operations, so that it can manage iterator invalidation accordingly. After reserve(), capacity() is greater than or equal to the argument of reserve if this operation returns true, and equal to the previous value of capacity() otherwise. If an exception is thrown, there are no effects.

4 Returns: true if iterators are invalidated as a result of this operation, and false otherwise.

5 Complexity: It does not change the size of the sequence and takes at most linear time in n.

6 Throws: length_error if n > max_size().

7 Remarks: It is guaranteed that no invalidation takes place during a sequence of insert or erase operations at either end that happens after a call to reserve(), except to the erased elements, until the time when an insertion would make max(n_f - m_f, n_b - m_b) larger than capacity(), where n_f is the number of push_front, m_f of pop_front, n_b of push_back, and m_b of pop_back operations since the call to reserve().

8 An implementation is free to pre-allocate buffers so as to offer the additional guarantee that no exception will be thrown during such a sequence other than by the element constructors.

And 23.2.2.3 [deque.modifiers] para 1, can be enhanced:

1 Effects: An insertion in the middle of the deque invalidates all the iterators and references to elements of the deque. An insertion at either end of the deque invalidates all the iterators to the deque, unless provisions have been made with reserve, but has no effect on the validity of references to elements of the deque.
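A usage sketch of the proposed members (capacity() and reserve() on deque are not in the working draft; the numbers are illustrative only):

#include <deque>

void example()
{
    std::deque<int> d(10, 0);
    d.reserve(1000);                          // may itself invalidate iterators
    std::deque<int>::iterator it = d.begin(); // taken after the reserve
    for (int i = 0; i < 990; ++i)
        d.push_back(i);                       // under the proposed wording these
                                              // insertions at the end leave it valid
}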

856. Removal of aligned_union

Section: 20.4.7 [meta.trans.other] Status: New Submitter: Jens Maurer Date: 2008-06-12

View other active issues in [meta.trans.other].

View all other issues in [meta.trans.other].

View all issues with New status.

Discussion:

With the arrival of extended unions (N2544), there is no known use of aligned_union that couldn't be handled by the "extended unions" core-language facility.
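For illustration, a minimal sketch of what the extended-unions facility allows directly (StringOrDouble is a hypothetical type); previously such storage was a typical use case for aligned_union:

#include <string>

// Storage suitably sized and aligned for either member, managed manually.
union StringOrDouble
{
    std::string s;                   // non-trivial member, allowed by N2544
    double      d;

    StringOrDouble() : d(0.0) { }    // pick an active member
    ~StringOrDouble() { }            // the user destroys the active member
};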

Proposed resolution:

Remove the following signature from 20.4.2 [meta.type.synop]:

template <std::size_t Len, class... Types> struct aligned_union;

Remove the second row from table 51 in 20.4.7 [meta.trans.other], starting with:

template <std::size_t Len,
class... Types>
struct aligned_union;

857. condition_variable::timed_wait return bool error prone

Section: 30.4.1 [thread.condition.condvar] Status: New Submitter: Beman Dawes Date: 2008-06-13

View all issues with New status.

Discussion:

The meaning of the bool returned by condition_variable::timed_wait is so obscure that even the class' designer can't deduce it correctly. Several people have independently stumbled on this issue.

It might be simpler to change the return type to a scoped enum:

enum class timeout { not_reached, reached };

That has the same cost as returning a bool, but is not subject to mistakes. The example below would then read:

if (cv.wait_until(lk, time_limit) == timeout::reached )
  throw time_out();

[ Beman to supply exact wording. ]

Proposed resolution:


858. Wording for Minimal Support for Garbage Collection

Section: X [garbage.collection] Status: New Submitter: Pete Becker Date: 2008-06-21

View all issues with New status.

Discussion:

The first sentence of the Effects clause for undeclare_reachable seems to be missing some words. I can't parse

... for all non-null p referencing the argument is no longer declared reachable...

I take it the intent is that undeclare_reachable should be called only when there has been a corresponding call to declare_reachable. In particular, although the wording seems to allow it, I assume that code shouldn't call declare_reachable once then call undeclare_reachable twice.

I don't know what "shall be live" in the Requires clause means.

In the final Note for undeclare_reachable, what does "cannot be deallocated" mean? Is this different from "will not be able to collect"?

For the wording on nesting of declare_reachable and undeclare_reachable, the words for locking and unlocking recursive mutexes probably are a good model.
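A sketch of the pairing the recursive-mutex analogy suggests (illustration only; the precise semantics are exactly what this issue asks to have clarified):

#include <memory>

void example()
{
    int* p = new int(42);
    std::declare_reachable(p);
    std::declare_reachable(p);         // nested, like a recursive lock
    p = std::undeclare_reachable(p);   // balances the second call
    p = std::undeclare_reachable(p);   // balances the first call
    delete p;
}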

Proposed resolution:


859. Monotonic Clock is Conditionally Supported?

Section: X [datetime] Status: New Submitter: Pete Becker Date: 2008-06-23

View all issues with New status.

Discussion:

N2661 says that there is a class named monotonic_clock. It also says that this name may be a synonym for system_clock, and that it's conditionally supported. So the actual requirement is that it can be monotonic or not, and you can tell by looking at is_monotonic, or it might not exist at all (since it's conditionally supported). Okay, maybe too much flexibility, but so be it.

A problem comes up in the threading specification, where several variants of wait_for explicitly use monotonic_clock::now(). What is the meaning of an effects clause that says

wait_until(lock, chrono::monotonic_clock::now() + rel_time)

when monotonic_clock is not required to exist?

Proposed resolution:


860. Floating-Point State

Section: 26 [numerics] Status: New Submitter: Lawrence Crowl Date: 2008-06-23

View all issues with New status.

Discussion:

There are a number of functions that affect the floating-point state. These functions need to be thread-safe, but I'm unsure of the right approach in the standard, as we inherit them from C.
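For reference, a few of the inherited <cfenv> functions in question (assuming the implementation defines FE_UPWARD); each reads or writes floating-point state that the standard would need to make per-thread or otherwise thread-safe:

#include <cfenv>

void example()
{
    std::feclearexcept(FE_ALL_EXCEPT);    // clears the exception flags
    std::fesetround(FE_UPWARD);           // changes the rounding mode
    if (std::fetestexcept(FE_OVERFLOW))   // reads the exception flags
    {
        // handle overflow...
    }
}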

Proposed resolution:


861. Incomplete specification of EqualityComparable for std::forward_list

Section: 23.1 [container.requirements] Status: New Submitter: Daniel Krügler Date: 2008-06-24

View other active issues in [container.requirements].

View all other issues in [container.requirements].

View all issues with New status.

Discussion:

Table 89, Container requirements, defines operator== in terms of the container member function size() and the algorithm std::equal:

== is an equivalence relation. a.size() == b.size() && equal(a.begin(), a.end(), b.begin())

The new container forward_list does not provide a size member function by design, but does provide operator== and operator!= without specifying their semantics.

Other parts of the (sequence) container requirements also depend on size(), e.g. empty() or clear(), but this issue explicitly attempts to resolve only the missing EqualityComparable specification, because of the special design choices of forward_list.

I propose to apply one of the following resolutions:

  (A) Provide a definition which is optimal for this special container, without a preceding size test. This choice avoids two O(N) calls of std::distance() over the container ranges and instead uses a special equals implementation that takes two complete ranges instead of one range and a starting iterator (see the sketch below).
  (B) The simple fix, where the usual test is adapted so that size() is replaced by distance(), with the corresponding performance disadvantages.

Both choices are discussed below; the author's preferred choice is to apply (A).
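A minimal sketch of the two-range comparison option (A) has in mind (illustration only; equal_ranges is a hypothetical helper, not proposed wording):

template <class InputIterator1, class InputIterator2>
bool equal_ranges(InputIterator1 first1, InputIterator1 last1,
                  InputIterator2 first2, InputIterator2 last2)
{
    for (; first1 != last1 && first2 != last2; ++first1, ++first2)
        if (!(*first1 == *first2))
            return false;
    return first1 == last1 && first2 == last2;   // equal only if both exhausted
}

// operator==(a, b) for forward_list could then be specified as
//   equal_ranges(a.begin(), a.end(), b.begin(), b.end())
// making a single pass instead of two distance() calls followed by equal().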

Proposed resolution:

Common part:

Option (A):

Option (B):