Subject: Re: (j3.2006) (SC22WG5.5898) 3 levels of parallelism?
From: Van Snyder <Van.Snyder@jpl.nasa.gov>
Reply-To: Van.Snyder@jpl.nasa.gov
To: sc22wg5 <sc22wg5@open-std.org>
Date: Thu, 06 Jul 2017 12:00:31 -0700

On Thu, 2017-07-06 at 17:52 +0000, Bill Long wrote:
> Intentionally. The SPMD model, used by both Fortran and MPI, has
> proved to provide the best scaling and performance in real
> distributed-memory applications. 

MPMD is also useful.  Interactive graphics are not Fortran's strong
suit.  We use PVM (for historical reasons) to communicate between our
separately launched Fortran code instances, and between Fortran and IDL
for interactive graphics.

I don't see a way to use coarrays for MPMD, but using PVM (and MPI) are
kind of like programming in assembler.
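For reference, the MPI standard does define an MPMD form of mpiexec,
with colon-separated command blocks starting different executables in
the same job.  A minimal sketch (the executable names here are
hypothetical, standing in for a Fortran simulation and a graphics
front end):

```shell
# MPMD launch using the mpiexec colon syntax from the MPI standard:
# each colon-separated block starts a different executable, and all
# ranks share one MPI_COMM_WORLD.  Executable names are hypothetical.
mpiexec -n 8 ./fortran_sim : -n 1 ./idl_bridge
```

All ranks land in the same MPI_COMM_WORLD, so the Fortran side and the
graphics side can exchange messages directly.  Coarray images, by
contrast, all execute the same program, which is why coarrays do not
map naturally onto MPMD.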


