From owner-sc22wg5@open-std.org  Fri Jan 23 22:46:55 2009
Return-Path: <owner-sc22wg5@open-std.org>
X-Original-To: sc22wg5-dom7
Delivered-To: sc22wg5-dom7@www2.open-std.org
Received: by www2.open-std.org (Postfix, from userid 521)
	id 5CDAECA6002; Fri, 23 Jan 2009 22:46:55 +0100 (CET)
X-Original-To: sc22wg5@open-std.org
Delivered-To: sc22wg5@open-std.org
Received: from mail.jpl.nasa.gov (sentrion1.jpl.nasa.gov [128.149.139.105])
	by www2.open-std.org (Postfix) with ESMTP id 3D87ACA5FED
	for <sc22wg5@open-std.org>; Fri, 23 Jan 2009 22:46:53 +0100 (CET)
Received: from mprox3.jpl.nasa.gov (mprox3.jpl.nasa.gov [137.78.160.171])
	by mail.jpl.nasa.gov (Switch-3.3.2mp/Switch-3.3.2mp) with ESMTP id n0NLkkiv005838;
	Fri, 23 Jan 2009 21:46:47 GMT
Received: from [137.79.7.57] (math.jpl.nasa.gov [137.79.7.57])
	(authenticated bits=0)
	by mprox3.jpl.nasa.gov (Switch-3.2.6/Switch-3.2.6) with ESMTP id n0NLkhCk021621
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Fri, 23 Jan 2009 13:46:45 -0800
Subject: Re: (j3.2006) (SC22WG5.3917) [MPI3 Fortran]
	(SC22WG5.3909) [ukfortran] [MPI3 Fortran] MPI non-blocking transfers
From: Van Snyder <Van.Snyder@jpl.nasa.gov>
Reply-To: Van.Snyder@jpl.nasa.gov
To: MPI-3 Fortran working group <mpi3-fortran@lists.mpi-forum.org>,
	WG5 <sc22wg5@open-std.org>
In-Reply-To: <20090123204209.615C8CA5FED@www2.open-std.org>
References: <Prayer.1.3.1.0901211104060.5654@hermes-2.csi.cam.ac.uk>
	 <49776DF7.1040900@cray.com>	<20090121211748.130A5C178D9@www2.open-std.org>
	 <20090121224014.6CB63C178D9@www2.open-std.org>
	 <20090121234200.4F3BDCA3434@www2.open-std.org>
	 <20090122000407.D5A8ECA3434@www2.open-std.org>
	 <Prayer.1.3.1.0901221006510.28472@hermes-2.csi.cam.ac.uk>
	 <1232658808.15119.824.camel@math.jpl.nasa.gov>
	 <20090123095622.CBD80CA5FED@www2.open-std.org>
	 <20090123190515.A5B24CA5FE6@www2.open-std.org>	<497A1CE5.3000708@cray.com>
	 <Prayer.1.3.1.0901232009180.25233@hermes-2.csi.cam.ac.uk>
	 <20090123204209.615C8CA5FED@www2.open-std.org>
Content-Type: text/plain
Organization: Yes
Date: Fri, 23 Jan 2009 13:46:43 -0800
Message-Id: <1232747203.15119.979.camel@math.jpl.nasa.gov>
Mime-Version: 1.0
X-Mailer: Evolution 2.12.3 (2.12.3-8.el5_2.3) 
Content-Transfer-Encoding: 7bit
X-Source-IP: math.jpl.nasa.gov [137.79.7.57]
X-Source-Sender: Van.Snyder@jpl.nasa.gov
X-AUTH: Authorized
Sender: owner-sc22wg5@open-std.org
Precedence: bulk


On Fri, 2009-01-23 at 12:44 -0800, Bill Long wrote:

> The problem is solved for asynchronous I/O in Fortran by exactly the
> same means.  The solution relies on C1239 and C1240 [299:12-19] in
> 09-007.  If you violate either of these constraints, your program will
> be rejected by the compiler and you have to fix it....  If we plan to
> base a solution on asynchronous (or volatile) for MPI, then the
> solution is exactly the same as for asynchronous I/O in Fortran.

Since solutions exist for asynchronous Fortran I/O, which is
indistinguishable from MPI hiding in an external procedure, why are we
having this conversation?

Just write an extra layer to get at MPI_wait, with an assumed-shape
buffer dummy argument that has the ASYNCHRONOUS attribute and does not
have the CONTIGUOUS attribute.  Since the buffer dummy won't be
referenced, make the procedure external, or better yet, write it in C,
so a clever optimizer can't inline it, notice the buffer isn't used, and
then move accesses to the buffer across the remaining reference to
MPI_wait, which is the present problem.
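A minimal sketch of such a layer, assuming the Fortran 90 "mpi" module
is available; the wrapper name my_mpi_wait_real and the choice of a
default-real rank-one buffer are illustrative, not prescriptive:

```fortran
! Illustrative external wrapper; the name and the type/kind/rank of
! BUFFER are assumptions.  Compile it separately (or write it in C) so
! an optimizer cannot inline it and discover BUFFER is never referenced.
subroutine my_mpi_wait_real ( buffer, request, status, ierror )
  use mpi   ! assumes the Fortran 90 "mpi" module is available
  real, asynchronous :: buffer(:)   ! assumed-shape, no CONTIGUOUS
  integer :: request
  integer :: status(MPI_STATUS_SIZE)
  integer :: ierror
  ! BUFFER is deliberately never referenced; its only purpose is to
  ! force the caller's compiler to assume it might be modified here.
  call MPI_Wait ( request, status, ierror )
end subroutine my_mpi_wait_real
```

Because the dummy is ASYNCHRONOUS, assumed-shape, and not CONTIGUOUS,
constraints C1239 and C1240 apply to the actual argument, which is what
prevents copy-in/copy-out from breaking the association with the real
transfer buffer.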

Make this layer generic with MPI_wait.  Tell users to pass the buffer,
or something associated with it if one is in scope, as the
corresponding actual argument, and tell them why.  If no buffer with
which the "real" buffer might be associated is accessible in the scope
of the call to MPI_wait, the processor can't possibly affect the buffer
by moving code that accesses it across the call to MPI_wait.  This is
actually *more precise* than a Fortran WAIT statement, because
processors can't move accesses to ANY asynchronous variable across a
WAIT statement.
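Such a generic might look like the following sketch.  The generic name
and the two specifics shown are hypothetical; a real wrapper library
would need one specific procedure per type, kind, and rank of buffer:

```fortran
! Hypothetical generic interface for the wrapper layer.
interface my_mpi_wait
  subroutine my_mpi_wait_real ( buffer, request, status, ierror )
    use mpi, only: MPI_STATUS_SIZE
    real, asynchronous :: buffer(:)
    integer :: request, ierror
    integer :: status(MPI_STATUS_SIZE)
  end subroutine my_mpi_wait_real
  subroutine my_mpi_wait_double ( buffer, request, status, ierror )
    use mpi, only: MPI_STATUS_SIZE
    double precision, asynchronous :: buffer(:)
    integer :: request, ierror
    integer :: status(MPI_STATUS_SIZE)
  end subroutine my_mpi_wait_double
end interface my_mpi_wait
```

A user who has the buffer in scope then writes, e.g.,
"call my_mpi_wait ( buf, req, stat, ierr )", and the compiler must keep
accesses to buf on the correct side of the call.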

Since we already have a solution, I don't see the need for another one,
especially not a means to say "this procedure effectively contains a
WAIT statement, so don't move accesses to anything with the ASYNCHRONOUS
attribute across references to it."  If you want to accomplish that
effect, just put a WAIT statement, maybe for a nonexistent unit (see
09-007:233:12-14), before and after the call to MPI_wait.  Maybe add
Note 9.52a "An optimizer cannot move accesses to any variables with the
ASYNCHRONOUS attribute across a WAIT statement, even one for a
nonexistent unit."
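For example (the unit number is hypothetical; any unit that is never
connected would do):

```fortran
! WAIT for a nonexistent unit: permitted by 09-007:233:12-14 and a
! no-op at run time, but a compile-time fence against moving accesses
! to ASYNCHRONOUS variables.  Unit 977 is assumed never to be OPENed.
wait (977)
call MPI_Wait ( request, status, ierror )
wait (977)
```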

Besides, Resolution T7 says "don't invent new stuff."


