Subject: LCPC conference in Raleigh
From: Van Snyder <Van.Snyder@jpl.nasa.gov>
To: sc22wg5 <sc22wg5@open-std.org>
Date: Mon, 14 Sep 2015 19:30:56 -0700

I attended the 28th Languages and Compilers for Parallel Computing
conference (LCPC 2015), http://www.csc2.ncsu.edu/workshops/lcpc2015/,
in Raleigh last week.  JPL didn't fund the trip; a friend who chairs
the Computer Science department at NCSU paid my way.

I stood beside Damian's poster about OpenCoarrays, and gave a 2-minute
presentation about it.

There were several interesting talks.  More than once I pointed out
that what presenters described as rather heroic efforts are automatic
in coarray Fortran.

Several people asked "Why coarrays?"  I pointed to one line of code on
Damian's poster, something like X = Y[7], and asked them to compare it
to the page of MPI code on the next poster.
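
For anyone who hasn't seen the comparison, here is a minimal sketch of
what I mean (the names are illustrative, not taken from the poster).
The coarray version is a single assignment:

  program get_demo
    ! Needs at least 7 images, e.g. "cafrun -n 8 ./get_demo" with
    ! OpenCoarrays.
    real :: x, y[*]            ! y is a coarray; each image has one
    y = real(this_image())     ! give each image's y a known value
    sync all                   ! make every image's y well defined
    if (this_image() == 1) then
      x = y[7]                 ! one-sided "get" from image 7
      print *, x               ! prints 7.0
    end if
  end program get_demo

The rough MPI equivalent needs matched two-sided calls:

  program get_demo_mpi
    use mpi
    implicit none
    real :: x, y
    integer :: rank, ierr, status(MPI_STATUS_SIZE)
    integer, parameter :: tag = 0
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    y = real(rank + 1)         ! mimic y = this_image()
    if (rank == 6) then        ! image 7 corresponds to rank 6
      call MPI_Send(y, 1, MPI_REAL, 0, tag, MPI_COMM_WORLD, ierr)
    else if (rank == 0) then
      call MPI_Recv(x, 1, MPI_REAL, 6, tag, MPI_COMM_WORLD, status, ierr)
      print *, x               ! prints 7.0
    end if
    call MPI_Finalize(ierr)
  end program get_demo_mpi

Even this toy already needs ranks, tags, and a communicator; the page
of code on the poster did real work.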

That next poster, presented by Hadia Ahmed <hadia@cis.uab.edu>,
concerned work she had done with Anthony Skjellum and Peter Pirkelbauer
at the University of Alabama at Birmingham.  They used ROSE to analyze
legacy C+MPI code having blocking transfers, and transformed it to use
non-blocking transfers.  Their analysis might be useful in Fortran
compilers, both for deciding when coarray transfers can be made
non-blocking and for reporting the usage that prevents it.  This project is
called PETAL.  There should be something in the LCPC 2015 proceedings
when they appear online.  There's a poster about the parent project,
called iProgress, from a different (IPDPS) meeting last year, at
http://iprogress.cis.uab.edu/media/2014/05/ipdps_poster_final.pdf.
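
To make their transformation concrete, here is a hedged sketch, in
Fortran rather than their C, and not the actual output of their tool
(declarations of req and status omitted, and unrelated_work is a
hypothetical subroutine).  Before, the send blocks until the buffer is
safe to reuse:

    call MPI_Send(buf, n, MPI_REAL, dest, tag, MPI_COMM_WORLD, ierr)
    call unrelated_work()              ! hypothetical computation

After, the transfer is initiated early and completed late, so the
computation overlaps the communication:

    call MPI_Isend(buf, n, MPI_REAL, dest, tag, MPI_COMM_WORLD, req, ierr)
    call unrelated_work()              ! runs while the message is in flight
    call MPI_Wait(req, status, ierr)   ! buf must stay untouched until here

The hard part, and presumably the point of their analysis, is proving
that buf is not modified between the initiation and the wait; a Fortran
compiler would need to establish the same property to make a coarray
transfer non-blocking.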

I don't know ROSE (http://rosecompiler.org) well, but another attendee
mentioned that it can parse C, C++, UPC, Fortran 2003, Java, Python,
and PHP, and that it represents them all using a common abstract syntax
tree.  It can also un-parse (pretty print).  If ROSE were extended to
understand coarrays (I sent them a message urging them to do so), the
work Hadia did might be useful for transforming Fortran+MPI to coarray
Fortran.  And if ROSE can unparse to a language different from its
input, it could conceivably convert C+MPI code to coarray Fortran.

Many presentations described unstructured problems, such as arbitrary
mesh representations in Earth science or fluid dynamics calculations.
Paul Kelly <p.kelly@imperial.ac.uk> from Imperial College London
described work he had done in this area.  He developed what he calls an
inspector-executor framework, which looks hard at a problem in order to
develop efficient schedules for solving it.  This is effective if one's
code examines the same mesh many times.  His tool is embedded in
Python, and it generates and compiles C code at run time.  The
performance frequently exceeds that of hand-coded C.

Other speakers described graph problems (which are isomorphic to
sparse-matrix problems).  These sorts of problems don't lend themselves
well to parallelization using coarrays, array operations, FORALL
statements, or DO CONCURRENT constructs, because the parallelization
opportunities arise from the data, not from the program structure (see
the sketch below).  It would be useful to contemplate more parallelism
constructs in Fortran to address them.

Many speakers remarked that multigrain parallelism gives greater
speed-up.  Some speakers mentioned fork-join constructs.  Others
mentioned tasks and threads (I don't know what distinctions they drew
between these).  Somebody mentioned futures.  Padma Raghavan
<padma@psu.edu> from Penn State University mentioned maps (I don't know
what she had in mind, but she did invite me to contact her offline;
maybe she said map reduce).
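
Here is the sketch promised above, a hedged illustration (the names are
mine) of why data-dependent problems resist DO CONCURRENT:

    do concurrent (i = 1:n)     ! invalid if idx repeats a value
      y(idx(i)) = y(idx(i)) + a(i)*x(i)
    end do

DO CONCURRENT requires the programmer to guarantee that the iterations
don't interfere, but here interference depends on the run-time contents
of idx, which the source code doesn't reveal.  An inspector-executor
framework discovers the independence at run time, for example by
grouping non-conflicting iterations, which is why it pays off only when
the same mesh is examined many times.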

Dhruva Chakrabarti <dhruva.chakrabarti@hpe.com> mentioned that
effective use of persistent store (he mentioned at least three
technologies, but I only remember memristors and MRAM) poses challenges
for programming languages.  It might be fruitful to think about the
relationship between persistent store and Fortran.  One tiny step might
be to support what Multics called "associated memory" and POSIX calls a
"memory-mapped file" (which is already supported by Boost, Java, D,
Ruby, Python, Perl, .NET, PHP, R, J, and probably others).
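
As a hedged illustration of how far one can already get through C
interoperability (nothing below is standard Fortran library support;
the flag values are the common Linux ones, off_t is assumed to be 64
bits, the file name is made up, and error checking is omitted):

  program view_mapped_file
    use iso_c_binding
    implicit none
    interface                          ! minimal POSIX bindings
      function c_open(path, flags) bind(c, name="open")
        import c_char, c_int
        character(kind=c_char), intent(in) :: path(*)
        integer(c_int), value :: flags
        integer(c_int) :: c_open
      end function c_open
      function c_mmap(addr, length, prot, flags, fd, offset) &
               bind(c, name="mmap")
        import c_ptr, c_size_t, c_int, c_long
        type(c_ptr), value :: addr
        integer(c_size_t), value :: length
        integer(c_int), value :: prot, flags, fd
        integer(c_long), value :: offset    ! off_t, assumed 64 bits
        type(c_ptr) :: c_mmap
      end function c_mmap
    end interface
    integer(c_int), parameter :: O_RDONLY = 0, PROT_READ = 1, MAP_SHARED = 1
    integer(c_int) :: fd
    type(c_ptr) :: p
    real(c_float), pointer :: a(:)
    fd = c_open("data.bin" // c_null_char, O_RDONLY)
    p = c_mmap(c_null_ptr, 4096_c_size_t, PROT_READ, MAP_SHARED, fd, 0_c_long)
    call c_f_pointer(p, a, [1024])     ! the file's bytes now look like an array
    print *, a(1)                      ! this read is served from the file
  end program view_mapped_file

Mapping with write permission instead would make ordinary assignments
to a persist in the file, which gives some of the flavor of the
persistent store he was describing.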

One of the conference organizers, Chen Ding <cding@cs.rochester.edu> of
the University of Rochester (http://www.cs.rochester.edu/~cding/),
hoped I would come to next year's meeting in Rochester.  I see no
prospect of getting funding to attend future meetings.  It would be
useful if somebody represented Fortran at these meetings (and also at
PPoPP meetings).

Van


