Subject: Re: (j3.2006) (SC22WG5.4994) Existing support for uses of atomics in Fortran coarray codes
From: Tom Clune <Thomas.L.Clune@nasa.gov>
Date: Tue, 14 May 2013 17:01:41 -0400
To: "longb@cray.com" <longb@cray.com>, fortran standards email list for J3 <j3@mailman.j3-fortran.org>
CC: "N.M. Maclaren" <nmm1@cam.ac.uk>, Mark Batty <mbatty@cantab.net>, "Lionel, Steve" <steve.lionel@intel.com>, sc22wg5@open-std.org, Lorri Menard <lorri.menard@intel.com>, Daniel C Chen <cdchen@ca.ibm.com>
Message-ID: <05FAB38E-66EC-4DBF-B162-5BF137D9C3CF@nasa.gov>
In-Reply-To: <20130514201252.6BDAE356E8A@www.open-std.org>

Bill,

As I frequently work with pseudospectral models (alas, not with coarrays), your second example intrigues _and_ confuses me.  In the all-to-all cases that I have, the section of the remote buffer that the local process writes to is invariant, so I don't see what the atomic update is accomplishing.  Your example seems to be a generalization of, or replacement for, all-to-all for when buffer sizes and ordering are not computable in advance and/or vary frequently.

If the sendcount/recvcount are computable in advance, is there still a measurable performance advantage to using coarrays vs. MPI?

- Tom


On May 14, 2013, at 4:12 PM, Bill Long <longb@cray.com> wrote:

>
>
> On 5/14/13 1:57 PM, N.M. Maclaren wrote:
>> Mark Batty, Peter Sewell and I had a discussion about atomic semantics and
>> Fortran this morning, but there were a couple of things that were rather
>> important and I didn't have a feel for the answers.  Specifically, what
>> semantics are provided by existing coarray atomics, and how they are used
>> in real programs.  We are definitely going to have to decide what to say
>> about this at Delft, and the problem isn't simple :-(
>>
>> In particular:
>>
>>    Do implementations guarantee coherence of access to a single atomic
>> location and, if not, what do they guarantee?
>
> As long as all of the accesses are done using atomic operations, there
> should be no problem. If the network atomics and the local processor
> atomics are not coherent (this depends on the hardware characteristics),
> then atomics to local memory locations need to bounce off the NIC if
> remote atomics are possible.
>>
>>    What, if anything, do they guarantee about the consistency of accesses
>> to two different atomic locations?
>
> None without explicit SYNC operations. Similar to the current atomic
> subroutines in Fortran.
>
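The flag-plus-SYNC MEMORY ordering pattern that the existing Fortran 2008 atomic subroutines support can be sketched roughly as follows. This is a sketch only, not a definitive recipe: the program name, the 42 payload, and the two-image producer/consumer setup are all illustrative.

```fortran
! Producer/consumer ordering with the Fortran 2008 atomic subroutines.
! Image 1 writes data, fences with SYNC MEMORY, then raises an atomic
! flag; image 2 spins on the flag, fences, then reads the data.
program flag_ordering
  use iso_fortran_env, only: atomic_int_kind
  implicit none
  integer(atomic_int_kind) :: flag[*]
  integer :: payload[*], val

  flag = 0
  payload = 0
  sync all

  if (this_image() == 1) then
     payload[2] = 42            ! ordinary (non-atomic) data transfer
     sync memory                ! make the data visible before the flag
     call atomic_define(flag[2], 1)
  else if (this_image() == 2) then
     do                         ! spin until image 1 raises the flag
        call atomic_ref(val, flag)
        if (val == 1) exit
     end do
     sync memory                ! order the flag read before the data read
     print *, 'payload =', payload
  end if
end program flag_ordering
```

Without the two SYNC MEMORY statements, nothing orders the payload assignment relative to the flag update, which is exactly the consistency question at issue.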
>>
>> We know what POWER and x86 hardware guarantee, but have no idea of what
>> (more? less?) is guaranteed in coarray implementations.  Even with using
>> MPI as a basis, it could be anything from sequential consistency to
>> nothing, depending on the details of the implementation.  And the fancy
>> RDMA networks are another matter entirely!
>>
>> It would also be useful if there were some examples of how they are used
>> for ordering (i.e. in combination with SYNC MEMORY) and running totals
>> etc.  Specifically, any use where the consistency semantics matter to the
>> program.
>
> Two examples come to mind.
>
> 1) The famous (notorious?) Table Toy benchmark code, also known as the
> "Random Access" benchmark.  It involves a large distributed table of
> 64-bit integers spread across many images, with all of those images
> asynchronously replacing a randomly located table value with the XOR of
> the current table value and a local value.  The "standard" version of
> the code is several pages of MPI calls, well beyond comprehension by
> normal humans.  The coarray version is a simple loop of about 10-15
> lines.  The loop has no explicit synchronizations.
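The update loop described here can be sketched roughly as follows. This is a sketch under stated assumptions, not the benchmark itself: it uses the ATOMIC_XOR intrinsic (standardized later, in Fortran 2018), and the table size, update count, and pseudo-random stream are placeholders.

```fortran
! Sketch of the Table Toy / "Random Access" update loop (illustrative).
! Each image XORs a local value into randomly chosen entries of a table
! distributed across all images, with no synchronization inside the loop.
program table_toy
  use iso_fortran_env, only: atomic_int_kind, int64
  implicit none
  integer, parameter :: n_local = 2**20          ! table entries per image
  integer(atomic_int_kind), allocatable :: table(:)[:]
  integer(int64) :: ran, global_index
  integer :: i, img, slot

  allocate(table(n_local)[*])
  table = 0
  sync all

  ran = 1_int64 + this_image()                   ! per-image seed (placeholder)
  do i = 1, 100000
     ! next value in a pseudo-random stream (placeholder generator)
     ran = ieor(ishft(ran, 1), merge(1248_int64, 0_int64, ran < 0))
     global_index = modulo(ran, int(n_local, int64) * num_images())
     img  = int(global_index / n_local) + 1      ! owning image
     slot = int(modulo(global_index, int(n_local, int64))) + 1
     ! atomically XOR the local value into the (possibly remote) entry
     call atomic_xor(table(slot)[img], ran)
  end do
  sync all
end program table_toy
```

The entire communication pattern is the one-sided atomic call inside the loop, which is what keeps the coarray version to a dozen-odd lines.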
>
> 2) By far the most common usage I've seen of atomics in coarray codes is
> for buffer filling.  Suppose you have a "receiving" buffer that is a
> coarray of globally known size on each image, and a separate coarray
> integer equal to the subscript value of the next free element in the
> buffer array.  The goal is for other images to write data into buffers
> on remote images.  The process is simple: if I want to write N elements
> into the buffer on image T, I do a "fetch and add" atomic of N on the
> buffer subscript on image T.  The returned value is the old starting
> point in the buffer on that image.  If that value + N is still within the
> buffer, do the assignment buffer(old_val:old_val+N-1)[T] = mydata(1:N).
> Several images can be "attacking" image T asynchronously, and each gets a
> non-overlapping part of the buffer as the target of the assignment.
> Basically no synchronization is involved.  This code sequence is,
> effectively, the guts of the "all-to-all" memory rearrangement that
> seems to pop up in multiple codes.  In practice it is about twice as
> fast as the standard-distribution MPI_Alltoall routine for the same
> operation.  And it has the significant advantage that images can send the
> data to remote locations as soon as it is ready, rather than waiting for
> a global sync (as would be the case with the MPI call).  This allows some
> images to be sending data across the network while others are computing.
> And if the implementation does the sends as non-blocking operations
> (until the next image control statement), there is also overlap of local
> computation and communication.  Besides all-to-all, the scheme can also
> be used as a way to add items to a remote work queue without having to
> deal with the overhead of locks.  In my experience, this is the "killer
> app" for coarray atomics.
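The buffer-filling pattern just described can be sketched roughly as follows. Again a sketch, not a definitive implementation: it assumes the ATOMIC_FETCH_ADD intrinsic (later standardized in Fortran 2018), and the buffer size, subroutine name, and overflow handling are illustrative placeholders.

```fortran
! Sketch of fetch-and-add buffer filling (illustrative only).
! Each image reserves a non-overlapping slice of the remote buffer
! atomically, then writes its payload with an ordinary assignment.
subroutine push_to_image(t, mydata)
  use iso_fortran_env, only: atomic_int_kind
  implicit none
  integer, parameter :: bufsize = 4096
  integer, intent(in) :: t                       ! target image T
  integer, intent(in) :: mydata(:)               ! payload to send
  ! these would be module-level coarrays in a real code:
  integer, save :: buffer(bufsize)[*]
  integer(atomic_int_kind), save :: next_free[*] = 1
  integer(atomic_int_kind) :: old_val
  integer :: n

  n = size(mydata)
  ! reserve n slots on image t; old_val is the old starting point
  call atomic_fetch_add(next_free[t], n, old_val)
  if (old_val + n - 1 <= bufsize) then
     ! each contributing image got a disjoint slice, so no locks or
     ! global synchronization are needed for the assignment
     buffer(old_val:old_val+n-1)[t] = mydata(1:n)
  end if
end subroutine push_to_image
```

Because the fetch-and-add hands each caller a disjoint index range, many images can push into the same target's buffer concurrently, which is what makes this a lock-free substitute for MPI_Alltoall or a remote work queue.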
>
> Cheers,
> Bill
>
>>
>> Any feedback appreciated.  Thanks.
>>
>> Regards,
>> Nick Maclaren.
>
> --
> Bill Long                                           longb@cray.com
> Fortran Technical Support    &                 voice: 651-605-9024
> Bioinformatics Software Development            fax:   651-605-9142
> Cray Inc./Cray Plaza, Suite 210/380 Jackson St./St. Paul, MN 55101
>
> _______________________________________________
> J3 mailing list
> J3@mailman.j3-fortran.org
> http://mailman.j3-fortran.org/mailman/listinfo/j3

Thomas Clune, Ph.D.				<Thomas.L.Clune@nasa.gov>
Chief, Software Systems Support Office		Code 610.3
NASA GSFC					301-286-4635
MS 610.8 B33-C128				<http://ssso.gsfc.nasa.gov>
Greenbelt, MD 20771