<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 10/08/2018 12:38 PM, Markus Scherer
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAN49p6qA8AMJo7XGuQx1uWDEkYYZ3O6BGR7JNz38pa--QBWSJA@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<div dir="ltr">> <span
style="color:rgb(0,0,0);font-family:sans-serif;font-size:medium">ICU
supports customization of its internal code unit type, but </span><code
class="gmail-highlight"
style="font-family:Menlo,Consolas,"DejaVu Sans
Mono",Monaco,monospace;font-size:0.9em;break-inside:avoid;padding:0.1em;border-radius:0.3em;background:rgb(245,242,240);color:rgb(0,0,0)"><span
style="color:rgb(153,0,85)">char16_t</span></code><span
style="color:rgb(0,0,0);font-family:sans-serif;font-size:medium"> is
used by default, following ICU’s adoption of C++11.</span><br>
<div><br>
</div>
<div>Not quite... ICU supports customization of its code unit
type <u><i>for C APIs</i></u>. Internally, and in C++ APIs,
we switched to char16_t. And because that broke call sites, we
mitigated where we could with overloads and shim classes.</div>
</div>
</blockquote>
<br>
Ah, thank you for the correction. If we end up submitting a
revision of the paper, I'll incorporate it. I had checked the ICU
sources (<tt>include/unicode/umachine.h</tt>) and verified that the
<tt>UChar</tt> typedef is configurable, but I didn't realize that
the configuration is limited to C code.<br>
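<br>
For my own notes, I gather the mechanism is roughly the following (a
simplified sketch based on your description, not the actual header
text):<br>
<pre>
#ifdef __cplusplus
    typedef char16_t UChar;      /* fixed in C++; not configurable */
#elif defined(UCHAR_TYPE)
    typedef UCHAR_TYPE UChar;    /* user-configured, C only */
#else
    typedef uint16_t UChar;
#endif
</pre>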
<br>
<blockquote type="cite"
cite="mid:CAN49p6qA8AMJo7XGuQx1uWDEkYYZ3O6BGR7JNz38pa--QBWSJA@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>This was all quite painful.</div>
</div>
</blockquote>
<br>
I believe that. I discovered the <tt>U_ALIASING_BARRIER</tt> macro
used to work around the fact that, for example, reading a <tt>char16_t</tt>
object through a pointer obtained via <tt>reinterpret_cast&lt;const
wchar_t*&gt;</tt> results in undefined behavior (the cast itself is
well-formed; it is the access that violates the strict aliasing rule).
The need for such heroics is a bit more limited for <tt>char8_t</tt>,
since <tt>char</tt> and <tt>unsigned char</tt> are allowed to alias
with <tt>char8_t</tt> (though not the other way around).<br>
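<br>
To illustrate the asymmetry (a minimal sketch of the language rules,
not ICU code; the names are mine):<br>
<pre>
void aliasing_rules(const char16_t *p16, const char8_t *p8,
                    const char *pc) {
    // The cast compiles, but reading a char16_t object through a
    // wchar_t lvalue violates the strict aliasing rule, even on
    // platforms where both types are 16 bits wide.
    const wchar_t *pw = reinterpret_cast&lt;const wchar_t *&gt;(p16);
    // wchar_t w = *pw;                                // UB

    // OK: unsigned char (and char) may access an object of any
    // type, including char8_t.
    unsigned char b = *reinterpret_cast&lt;const unsigned char *&gt;(p8);

    // Not OK in the other direction: char8_t lvalues may not be
    // used to access char objects.
    const char8_t *pu8 = reinterpret_cast&lt;const char8_t *&gt;(pc);
    // char8_t u = *pu8;                               // UB

    (void)pw; (void)b; (void)pu8;
}
</pre>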
<br>
It would be interesting to get more perspective on how and why ICU
evolved as it did. What was the motivation for ICU to switch to <tt>char16_t</tt>?
Were the anticipated benefits realized despite the perhaps
unanticipated complexities? If Windows were to suddenly sprout
Win32 interfaces defined in terms of <tt>char16_t</tt>, would the
pain be substantially relieved? Are code bases that use ICU on
non-Windows platforms (slowly) migrating from <tt>uint16_t</tt> to
<tt>char16_t</tt>?<br>
<br>
<blockquote type="cite"
cite="mid:CAN49p6qA8AMJo7XGuQx1uWDEkYYZ3O6BGR7JNz38pa--QBWSJA@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>As for char8_t, I realize that you think the benefits
outweigh the costs.</div>
<div>I asked some C++ experts about the potential for
performance gains from better optimizations; one responded
with a skeptical note.</div>
</div>
</blockquote>
<br>
This is something I would like to get more data on. I've looked and
I've asked, but so far haven't found any research that attempts to
quantify the optimization opportunities lost because <tt>char</tt> is
permitted to alias any type. I've heard claims that the cost is
significant, but have not seen data to support them. The benefits of
type-based alias analysis (TBAA) in general are not disputed, and it
seems reasonable to conclude that there is therefore a lost
opportunity when TBAA cannot be applied fully to accesses involving <tt>char</tt>.
But whether that opportunity is large or small, I really don't know.
In theory, we could use the current support in gcc and Clang for <tt>char8_t</tt>
to explore this further.<br>
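<br>
For example, one could compare the code generated for a pair of
functions like the following (hypothetical names; requires C++20 or
the <tt>-fchar8_t</tt> flag):<br>
<pre>
#include &lt;cstddef&gt;

// A store through 'p' may legally modify '*len' because char may
// alias any object, so the compiler must reload *len on every
// iteration.
void zero_bytes(char *p, const std::size_t *len) {
    for (std::size_t i = 0; i &lt; *len; ++i)
        p[i] = 0;
}

// char8_t lacks the may-alias-anything property of char, so TBAA
// may hoist the load of *len out of the loop.
void zero_code_units(char8_t *p, const std::size_t *len) {
    for (std::size_t i = 0; i &lt; *len; ++i)
        p[i] = 0;
}
</pre>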
<br>
<blockquote type="cite"
cite="mid:CAN49p6qA8AMJo7XGuQx1uWDEkYYZ3O6BGR7JNz38pa--QBWSJA@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>If you do want a distinct type, why not just standardize on
uint8_t? Why does it need to be a new type that is distinct
from that, too?</div>
</div>
</blockquote>
<br>
Lyberta provided one example; we do need to be able to overload or
specialize on character types versus integer types. Since <tt>uint8_t</tt>
is an optional typedef, we can't rely on its existence within the
standard (we'd have to use <tt>unsigned char</tt> or <tt>uint_least8_t</tt>
instead). And even where it does exist, <tt>uint8_t</tt> is nearly
always a typedef of <tt>unsigned char</tt>, so it wouldn't give us a
distinct type to overload on anyway.<br>
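<br>
To make the overloading problem concrete (an illustrative sketch, not
proposed wording):<br>
<pre>
#include &lt;cstdint&gt;

void emit(unsigned char b);    // a raw byte
void emit(char8_t c);          // a UTF-8 code unit; a distinct type

// void emit(std::uint8_t b);  // on typical implementations, this
                               // merely redeclares emit(unsigned char)
                               // since uint8_t is a typedef of
                               // unsigned char
</pre>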
<br>
I think there is value in maintaining consistency with <tt>char16_t</tt>
and <tt>char32_t</tt>. <tt>char8_t</tt> provides the missing piece
needed to enable a clean, type-safe model that distinguishes external
from internal encodings, allows any of UTF-8, UTF-16, or UTF-32 as
the internal encoding, is easy to teach, and facilitates generic
libraries like text_view that work seamlessly with any of these
encodings.<br>
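<br>
As an illustration (a hypothetical trait, not text_view's actual
interface), a distinct <tt>char8_t</tt> means each UTF encoding has a
unique code unit type, so generic code can select the encoding from
the type alone:<br>
<pre>
template &lt;typename CodeUnit&gt; struct default_encoding;  // primary
template &lt;&gt; struct default_encoding&lt;char8_t&gt;  { /* UTF-8  */ };
template &lt;&gt; struct default_encoding&lt;char16_t&gt; { /* UTF-16 */ };
template &lt;&gt; struct default_encoding&lt;char32_t&gt; { /* UTF-32 */ };
</pre>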
<br>
Tom.<br>
<br>
<blockquote type="cite"
cite="mid:CAN49p6qA8AMJo7XGuQx1uWDEkYYZ3O6BGR7JNz38pa--QBWSJA@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>Best regards,</div>
<div>markus</div>
</div>
</blockquote>
</body>
</html>