Conference heron::euro_swas_ai

Title:Europe-Swas-Artificial-Intelligence
Moderator:HERON::BUCHANAN
Created:Fri Jun 03 1988
Last Modified:Thu Aug 04 1994
Last Successful Update:Fri Jun 06 1997
Number of topics:442
Total number of notes:1429

305.0. "MCC trip report" by ULYSSE::ROACH (TANSTAAFL !) Tue Mar 26 1991 11:47

 

                  I N T E R O F F I C E   M E M O R A N D U M

                                        Date:     04-Mar-1991 03:39pm CET
                                        From:     BEANE
                                                  BEANE@BIGRED@MRGATE@DPD04@DPD
                                        Dept:      
                                        Tel No:    

TO: See Below

Subject: MCC trip report

I attended the MCC Workshop on Parallel Applications in the Oil
Industry (Feb 28). The day was split into three parts: presentations
on the state of the art; description of the MCC SPEED project; open
discussion.

Attendees came from Rice U., U. Houston, UT Austin, Schlumberger,
Arco, Western Atlas, Amoco, BP Exploration, Exxon, Intel, NCUBE,
MasPar, E-Systems (in no special order).

Except for the vendors, most of the participants are involved in
reservoir modeling applications.

Morning Presentations:

Kamy Sepehrnoori, Dept. Petroleum Engineering, UT Austin, "Vector
Parallel Applications using UTCHEM on CRAY YMP"
  Discussion of the conversion of UTCHEM (implicit pressure, explicit
  concentration reservoir model) to parallel vector. Conclusions:
  use parallel for outer loops, vector for inner loops; vector first,
  then parallel. Note: the CRAY compiler vectorizes only the
  innermost loop, so the UTCHEM loops had to be rewritten like this:

        OLD                              NEW
    do 10 i = 1, nX                  nBL = nX * nY * nZ
       do 10 j = 1, nY               do 10 i = 1, nBL
          do 10 k = 1, nZ               (code)
             (code)               10 continue
  10 continue
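
  A minimal stand-alone rendering of the NEW form (conc and rate are
  made-up arrays; the memo shows none of UTCHEM's actual data):

      program demo
c     Sketch of the OLD -> NEW loop collapse above. conc and rate
c     are hypothetical per-grid-block arrays.
      integer nx, ny, nz, nbl
      parameter (nx = 20, ny = 20, nz = 10)
      parameter (nbl = nx * ny * nz)
      real conc(nbl), rate(nbl)
      integer i
c     dummy data so the program runs stand-alone
      do 5 i = 1, nbl
         conc(i) = 0.0
         rate(i) = 1.0
    5 continue
c     NEW form: one flat loop over all nBL grid blocks, seen by the
c     compiler as a single long innermost loop
      do 10 i = 1, nbl
         conc(i) = conc(i) + rate(i)
   10 continue
      write (*,*) 'conc(1) =', conc(1)
      end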

Alan Weiser, Math Sci Dept, Rice U, "Parallel Algorithms for
Reservoir Simulation"
  A new project (since January). Porting UTCHEM to i860 hypercube. In
  this case, nested loops are good, long vectors poor (no vector
  instructions on i860). Domain decomposition is current area of
  interest. Have decided on equal-sized rectangular grid block
  subdomains, but don't know optimum sizes or shapes yet. The I/O
  system is converted, but is expected to be a performance bottleneck
  (strategy: I/O through designated node, no host node, distributed
  control). 2/3 loops converted. Looking for access to other
  distributed memory systems. {NCUBE sales rep jumped to offer time}
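
  One possible realization of such a decomposition, sketched for a 2D
  grid with made-up dimensions (the optimum sizes and shapes are
  exactly what the project has left open):

      program decomp
c     Sketch of an equal-sized rectangular decomposition of an
c     nX x nY grid over a px x py array of nodes. All sizes are
c     hypothetical and assume exact divisibility.
      integer nx, ny, px, py
      parameter (nx = 64, ny = 32, px = 4, py = 2)
      integer ip, jp, i0, i1, j0, j1
      do 20 jp = 0, py - 1
         do 10 ip = 0, px - 1
c           index range owned by node (ip, jp)
            i0 = ip * (nx / px) + 1
            i1 = (ip + 1) * (nx / px)
            j0 = jp * (ny / py) + 1
            j1 = (jp + 1) * (ny / py)
            write (*,*) 'node', ip, jp, ':', i0, i1, j0, j1
   10    continue
   20 continue
      end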

Gary Li, BP Exploration, "Multitasking of a Reservoir Simulator on a
CRAY XMP"
  Description of a parallelization project for an internal model.
  Major effort went into removing subroutine calls from inside loops
  (or declaring them free of side effects). Examined and rejected
  macrotasking due to high overhead and its manual, non-portable
  compiler directives. Used autotasking, but got only moderate
  parallelism. Decided that memory bank addressing limits on the XMP
  are the bottleneck: got much better results on the YMP (better
  than the cycle-speed ratio alone would explain).
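
  A made-up before/after of the loop rewrite (none of BP's code was
  shown):

      program inline
c     Hypothetical illustration of the rewrite described above. An
c     autotasking analyzer cannot parallelize a loop containing a
c     subroutine call (unless the call is declared side-effect
c     free), so the call is replaced by its inlined body. All names
c     are invented.
      integer n, i
      parameter (n = 100)
      real a(n), b(n)
      do 5 i = 1, n
         a(i) = 0.0
         b(i) = real(i)
    5 continue
c     BEFORE (defeats dependence analysis):
c        do 10 i = 1, n
c           call updat(a, b, i, n)
c  10   continue
c     AFTER (body of updat inlined; the loop is now analyzable):
      do 10 i = 1, n
         a(i) = a(i) + 0.5 * b(i)
   10 continue
      write (*,*) 'a(n) =', a(n)
      end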

  Several other attendees questioned Gary about not using
  microtasking, saying that macrotasking was "widely" known to be
  relatively useless and that autotasking doesn't give big wins. No
  answer was given.

  [Observation: the university folks all knew each other and had been
  communicating. None of the industry folks knew each other and
  obviously didn't have data the academicians considered common
  knowledge. If this MCC project succeeds, that should be its major
  benefit.]

Indranil Chakravarty, Schlumberger, "Current Research in Parallel
Processing on the CM-2"
  Have a 16K-processor Connection Machine w/ 2 GB memory. Commented
  that the CM is very much better than the CRAY only when the
  problem won't fit in CRAY memory. Projects:
  Forward modeling of logging tools (extend current 2D to 3D)
  3D seismic 1-pass depth migration
  Parallel numerical methods
  Parallel programming methods
    SINAPSE: language designed to bring programming tasks closer
    to the domain of the modeler (uses Mathematica). Target language
    is Fortran.
    Optimizing compiler for connection machine

Paul Stoffa, Institute of Geophysics, "Experience With a Frequency
Wavenumber Migration Algorithm on Parallel Computer Architectures"
  Application of split-step Fourier transforms in pre- and post-stack
  migration. Good results in post-stack working in frequency domain.
  Poorer results in pre-stack (wave domain better), but [obviously]
  takes about 100 times longer.
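
  For background (standard results, not details from the talk): each
  depth step of a split-step f-k migration applies a phase shift in
  the wavenumber domain using a laterally constant reference velocity
  $v_0$,

      P(k_x, z + \Delta z, \omega) = P(k_x, z, \omega)\,
          e^{i k_z \Delta z},
      \qquad
      k_z = \frac{\omega}{v_0}\sqrt{1 - \frac{v_0^2 k_x^2}{\omega^2}},

  then a space-domain correction $e^{i\omega(1/v(x) - 1/v_0)\Delta z}$
  for the lateral velocity variation; FFTs move the wavefield between
  the two domains at every depth step, which is what maps well onto
  vector and parallel hardware.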

David Young, David Kincaid, Ctr for Numerical Analysis, UT Austin,
"The ITPACK and NSPCG Software Packages for Solving Large Sparse
Linear Systems"
  Conversion of iterative solvers to parallel vector. Issues:
  adaptive selection of iteration parameters, accurate stopping
  procedures. Effort: remove all indirect addressing, add wavefront
  ordering. Work is with few-CPU, shared-memory systems: Alliant,
  Sequent, Cray. Conclusion: probably ought to completely rewrite
  the packages because of single-CPU dependencies that have been
  built in. Paper: CNA-233, "Recent vectorization & parallelization
  of ITPACKV 2D"
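
  For flavor, a minimal Jacobi sweep with a simple stopping test,
  using only direct addressing in the inner loop, the property the
  rework was after (illustrative only, not ITPACK code):

      program jacobi
c     Jacobi iteration on a 1D Laplace system with a simple
c     stopping test. The sweep has no indirect addressing and no
c     calls, so it vectorizes. Sizes and tolerance are made up.
      integer n, maxit, i, it
      parameter (n = 100, maxit = 10000)
      real u(0:n+1), unew(0:n+1), tol, diff
      parameter (tol = 1.0e-4)
      do 5 i = 0, n + 1
         u(i) = 0.0
    5 continue
      u(n+1) = 1.0
      do 30 it = 1, maxit
         diff = 0.0
c        vectorizable sweep: direct addressing only
         do 10 i = 1, n
            unew(i) = 0.5 * (u(i-1) + u(i+1))
            diff = max(diff, abs(unew(i) - u(i)))
   10    continue
         do 20 i = 1, n
            u(i) = unew(i)
   20    continue
         if (diff .lt. tol) goto 40
   30 continue
   40 write (*,*) 'iterations:', it, '  max change:', diff
      end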

Rao [last name missed], U Houston, [title missed: not on agenda]
  Evaluation of a Jacobi gradient decomposition method. Processed
  grid blocks in block and diagonal directions. Conclusion: the
  algorithm is up to 10 times faster than linear sequential solvers.

Afternoon Sessions:

Vineet Singh, MCC, "Scalable Parallel Engineering Design Project"
  SPEED proposes to develop new scalable parallel algorithms in two
  target areas: ECAD and petroleum exploration. Target languages are ESP
  C++, Parasoft Express C, Fortran. Expected developments include
  finite element partial differential equation and linear systems
  solvers, heuristic search libraries and shells. After description
  of ESP C++ (from MCC Extensible Software Platform project), session
  moved to fund-raising/associate recruiting. A comment was made that
  only 1 shareholder (DEC) was present; no oil companies are
  shareholders.

General Discussion
  The remainder of the afternoon was spent in round table discussions
  on a number of topics, with few items resolved. Most of the
  discussion was between NCUBE and Intel, some input from the
  university people, almost no input from the industry people.
  {Note: one of the 3 NCUBE people was clearly there to sell product.
  At every opportunity he listed the products they had to sell, even
  to pre-announcing the NCUBE/Oracle database engine due out next
  month at $1,000,000 and 3+ times faster than IBM 3090-180J, and
  NCUBE/Ampex 30 Tbyte archive storage system.}

  Topics:
  compilers
  programming environments for parallel systems
    current tools: Express C, MIMDizer, ASPAR
    need: parallel symbolic debugger (VAX DEBUG is desired model!!!)
          performance analyzer
  ESP C++
  Fed. Gvt. High Performance Computing Systems (supplement to budget)
    New algorithms have contributed more to performance than cpus
  combined vector & parallel
    will MPP replace vector? will RISC? unknown; conversion risk high
  natural expression of computational methods
    loss of information in programs for smart compilers to use (e.g.,
    conversion of 3D to vector)
    impact of future automatic parallelizing compilers (hoped for)
  software and structure methods
  scaling architectures
    how to determine appropriate # cpus per problem
    largest "reasonable" problem for daily use not currently
    solvable: 256,000 grid blocks with 12 components
  feedback to vendors {especially NCUBE}
    how to help w/ new algorithms
    ease of use most serious problem {trust us, fixed: NCUBE}
    universities need systems for students so they'll be trained
    when they enter industry
  parallel I/O
    interactive visualization
  special purpose vs. general purpose h/w
    sometimes makes sense (e.g., vectors), often doesn't
  parallelism
    more natural than vectors for compiler automation {NCUBE}
    when highly pipelined, perhaps {Intel}
  Standards
    Express
    SPMD (single program, multiple data) program structures
         tools to compile to MIMD & SIMD needed
    {degenerated into NCUBE sales spiel}

Dinner: Don Bynum, MCC
  MCC is last bastion of US technical excellence; join up, or go out
  of business due to foreign competition. See how wonderful it is to
  be a shareholder (don't need escort in MCC building) with "free"
  access to MCC technology. {he said it much nicer, though!}


Distribution:

TO:  Pat Roach@VBE
TO:  Susan Sugar@MWO
TO:  Steve Becker@AQO
TO:  Ed Hurry@DVO
TO:  SHIRLEY CRIDER@DVO
TO:  STEVE DONOVAN@DLO
TO:  DENNIS DICKERSON@DLO
TO:  Gale Kleinberger@HSO
TO:  Mike Sievers@HSO
TO:  Mike Willis@HSO
TO:  Sherry Williams@HSO
TO:  Katherine Jones@HSO
TO:  Dale Stout@HSO
TO:  Tommy Gaut@HSO
TO:  Tom Wilson@HST
TO:  jim rather@HSO
TO:  Kathy Makgill@MRO

