From owner-sc22wg5@open-std.org  Wed Nov  5 23:04:32 2008
Return-Path: <owner-sc22wg5@open-std.org>
X-Original-To: sc22wg5-dom7
Delivered-To: sc22wg5-dom7@www2.open-std.org
Received: by www2.open-std.org (Postfix, from userid 521)
	id AB99ECA3428; Wed,  5 Nov 2008 23:04:32 +0100 (CET)
X-Original-To: sc22wg5@open-std.org
Delivered-To: sc22wg5@open-std.org
Received: from mail.jpl.nasa.gov (sentrion2.jpl.nasa.gov [128.149.139.106])
	by www2.open-std.org (Postfix) with ESMTP id 92402C178E7
	for <sc22wg5@open-std.org>; Wed,  5 Nov 2008 23:04:30 +0100 (CET)
Received: from mprox1.jpl.nasa.gov (mprox1.jpl.nasa.gov [137.78.160.140])
	by mail.jpl.nasa.gov (Switch-3.3.2mp/Switch-3.3.2mp) with ESMTP id mA5M4QKs018295
	for <sc22wg5@open-std.org>; Wed, 5 Nov 2008 22:04:27 GMT
Received: from [137.79.7.57] (math.jpl.nasa.gov [137.79.7.57])
	(authenticated bits=0)
	by mprox1.jpl.nasa.gov (Switch-3.2.6/Switch-3.2.6) with ESMTP id mA5M4Ohr015124
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT)
	for <sc22wg5@open-std.org>; Wed, 5 Nov 2008 14:04:25 -0800
Subject: A comment on John Wallin's comments on Nick MacLaren's comments
From: Van Snyder <Van.Snyder@jpl.nasa.gov>
Reply-To: Van.Snyder@jpl.nasa.gov
To: sc22wg5 <sc22wg5@open-std.org>
In-Reply-To: <f54d8ee594e5.4911b89b@gmu.edu>
References: <Prayer.1.3.1.0811051340280.23724@hermes-1.csi.cam.ac.uk>
	 <f54d8ee594e5.4911b89b@gmu.edu>
Content-Type: text/plain
Organization: Yes
Date: Wed, 05 Nov 2008 14:04:24 -0800
Message-Id: <1225922664.26045.130.camel@math.jpl.nasa.gov>
Mime-Version: 1.0
X-Mailer: Evolution 2.12.3 (2.12.3-8.el5_2.2) 
Content-Transfer-Encoding: 7bit
X-Source-IP: math.jpl.nasa.gov [137.79.7.57]
X-Source-Sender: Van.Snyder@jpl.nasa.gov
X-AUTH: Authorized
Sender: owner-sc22wg5@open-std.org
Precedence: bulk

This is a message I recently sent to the J3 list, re-sent to the WG5
list (with one small change).


On Wed, 2008-11-05 at 12:15 -0800, John Wallin wrote:
> Here is a simple example of an MPI coding horror.  I have to pass a
> Fortran derived type between nodes.
> 
> Here is the code I need to use to even make this exchange possible.
> Please note that a lot of code has been deleted so I don't overwhelm
> everyone's email.[...]
> 
> You might wonder what the 37 in sphblock_counts is.  It is the number
> of double precision numbers in a row within this particular data
> structure.
> 
> The important thing to note is that every time a grad student adds a
> single element to the data structure, you have to alter the block
> counts and sizes by hand.   This leads to huge problems debugging and
> maintaining the code if the base structures are modified.  (And this
> code is the best way I have found for doing it.)

I remarked on this problem before m185 and, in 08-204, proposed a trivial
addition to the OPEN statement to allow message passing using I/O
statements, which already know how to do DTIO and asynchronous transfers.
Subgroup didn't even consider it.

If coarrays are kicked off the train in Tokyo, we really should go back
and look at the directions proposed in 08-204 and 08-205.  Fortran I/O
applied to message passing should provide all the basic functionality of
MPI, and would be far clearer.

Within a single SMP, say a dual quad-core PC, one can already accomplish
what's proposed in 08-204 with a pipe, but I haven't yet met a system
where pipes work across NFS.  08-204 provides syntax for users to hook
into whatever vendors provide beyond NFS.  08-205 provides a way for
users to roll their own, perhaps atop MPI, while hiding the ugly details
behind I/O statements.

-- 
Van Snyder                    |  What fraction of Americans believe 
Van.Snyder@jpl.nasa.gov       |  Wrestling is real and NASA is fake?
Any alleged opinions are my own and have not been approved or
disapproved by JPL, CalTech, NASA, the President, or anybody else.

