From owner-sc22wg5@open-std.org  Sat Jan 24 05:29:07 2009
Return-Path: <owner-sc22wg5@open-std.org>
X-Original-To: sc22wg5-dom7
Delivered-To: sc22wg5-dom7@www2.open-std.org
Received: by www2.open-std.org (Postfix, from userid 521)
	id 06E29CA6002; Sat, 24 Jan 2009 05:29:07 +0100 (CET)
X-Original-To: sc22wg5@open-std.org
Delivered-To: sc22wg5@open-std.org
Received: from nspiron-2.llnl.gov (nspiron-2.llnl.gov [128.115.41.82])
	by www2.open-std.org (Postfix) with ESMTP id 11D2CC178D9
	for <sc22wg5@open-std.org>; Sat, 24 Jan 2009 05:29:05 +0100 (CET)
X-Attachments: None
Received: from vpna-user-128-15-244-77.llnl.gov (HELO [128.15.244.77]) ([128.15.244.77])
  by nspiron-2.llnl.gov with ESMTP; 23 Jan 2009 20:29:04 -0800
Message-ID: <497A9910.6090904@llnl.gov>
Date: Fri, 23 Jan 2009 20:29:04 -0800
From: Aleksandar Donev <donev1@llnl.gov>
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.8) Gecko/20071009 SeaMonkey/1.1.5
MIME-Version: 1.0
Cc: MPI-3 Fortran working group <mpi3-fortran@lists.mpi-forum.org>,
	WG5 <sc22wg5@open-std.org>
Subject: Re: (j3.2006) (SC22WG5.3921) [MPI3 Fortran] (SC22WG5.3917)	(SC22WG5.3909)
 [ukfortran] [MPI3	Fortran]	MPI	non-blocking transfers
References: <Prayer.1.3.1.0901211104060.5654@hermes-2.csi.cam.ac.uk>	<49776DF7.1040900@cray.com>	<20090121211748.130A5C178D9@www2.open-std.org>	<20090121224014.6CB63C178D9@www2.open-std.org>	<20090121234200.4F3BDCA3434@www2.open-std.org>	<20090122000407.D5A8ECA3434@www2.open-std.org>	<Prayer.1.3.1.0901221006510.28472@hermes-2.csi.cam.ac.uk>	<1232658808.15119.824.camel@math.jpl.nasa.gov>	<20090123095622.CBD80CA5FED@www2.open-std.org>	<20090123190515.A5B24CA5FE6@www2.open-std.org>	<497A1CE5.3000708@cray.com>	<Prayer.1.3.1.0901232009180.25233@hermes-2.csi.cam.ac.uk>	<20090123204209.615C8CA5FED@www2.open-std.org>	<1232747203.15119.979.camel@math.jpl.nasa.gov>	<497A51B9.3080304@cray.com>	<1232753543.15119.989.camel@math.jpl.nasa.gov> <20090123234036.9C7A4CA5FED@www2.open-std.org>
In-Reply-To: <20090123234036.9C7A4CA5FED@www2.open-std.org>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Sender: owner-sc22wg5@open-std.org
Precedence: bulk

Bill Long wrote:

> The real question is whether "keep rewriting your code until the 
> compiler accepts it" is a "solution" to the MPI group.  Realistically, I 
> think it is the where we'll end up. 
As Nick said, no one can fix broken code. If a code passes a non-contiguous 
array to an MPI routine that cannot handle that, and a copy happens, 
nothing can fix that except changing the code or raising a run-time 
error in an MPI wrapper (since the routine cannot handle it---and a 
compile-time error is much better than a run-time one). Most likely the 
code never worked correctly to begin with, so this does not sound to me 
like a good reason to change MPI.
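
A minimal Fortran sketch of the failure mode described above (the
variable names and the use of MPI_Isend are my illustration, not from
this thread):

```fortran
! Illustrative only: assumes MPI_Isend/MPI_Wait from the mpi module.
use mpi
real    :: buf(100)
integer :: req, ierr
! buf(1:100:2) is not contiguous, so copy-in/copy-out may occur:
! the compiler passes a temporary, which can be freed as soon as
! MPI_Isend returns -- before the non-blocking transfer completes.
call MPI_Isend(buf(1:100:2), 50, MPI_REAL, 1, 0, MPI_COMM_WORLD, req, ierr)
call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)   ! transfer may use freed memory
```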

Remember that ANY of the technically feasible solutions involves adding 
some attribute to the buffer array. That attribute is not present in 
existing codes, so the codes will need to be changed. Bill has mentioned 
some sort of implicit acquisition of said attribute, but that still 
smells fishy to me.
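
For concreteness, the kind of buffer attribute under discussion might
look like the following (I am using the existing ASYNCHRONOUS attribute
as one candidate spelling; the exact mechanism is precisely what is
being debated):

```fortran
! Sketch: the attribute must appear in the user's declaration --
! existing codes lack it and would have to be edited.
real, asynchronous :: buf(100)
integer :: req, ierr
call MPI_Isend(buf, 100, MPI_REAL, 1, 0, MPI_COMM_WORLD, req, ierr)
! Between Isend and Wait the attribute tells the compiler that buf
! may be read or modified outside normal program control, so it must
! not be copied, cached in registers, or moved.
call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)
```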

The only actual problem is that the existing constraints for 
asynchronous data transfer, for a good reason, require that the actual 
argument be "simply contiguous". This is more restrictive than merely 
"contiguous". There may be codes that rely on "oh, it will work because 
it is contiguous", but making the argument simply contiguous may require 
code changes (e.g., adding the CONTIGUOUS attribute here and there). 
None of this is trivial, especially for people who do not really 
understand the meaning of these attributes.
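
To illustrate the distinction (my example, not from the thread): an
object can be contiguous at run time without being "simply contiguous",
which is a property the compiler can verify at compile time; the
CONTIGUOUS attribute closes that gap:

```fortran
real, target              :: a(100)
real, pointer             :: p(:)
real, pointer, contiguous :: q(:)
p => a   ! p happens to be contiguous at run time, but the compiler
         ! cannot prove it, so p is NOT "simply contiguous"
q => a   ! with the CONTIGUOUS attribute, a reference to q IS
         ! "simply contiguous" and satisfies the constraint
```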

> I just want to make sure the MPI 
> group fully understands what they are getting.
Well, no one will if you subject them to this ongoing circular 
discussion, which even the discussants cannot really follow!

Aleks
