fyi -- where Amin will be in late july

Tom Anderson (tom@emigrant)
Wed, 13 May 1998 23:59:08 -0700 (PDT)

To: tea@cs.berkeley.edu
Date: Wed, 13 May 1998 14:55:03 -0400
From: "Tao Zhu" <tzhu@MailBox.Syr.Edu>


**********************************************************************
HPDC-7 FINAL PROGRAM
http://www.mcs.anl.gov/hpdc7/
**********************************************************************

SEVENTH IEEE INTERNATIONAL SYMPOSIUM ON
HIGH PERFORMANCE DISTRIBUTED COMPUTING (HPDC-7)

Drake Hotel, Chicago, IL USA
July 28-31, 1998

The IEEE International Symposium on High Performance Distributed
Computing (HPDC) provides a forum for presenting the latest
research findings that unify parallel and distributed computing.
In HPDC environments, parallel or distributed computing techniques
are applied to the solution of computationally intensive
applications across networks of computers.

SPONSORS:
- IEEE Computer Society
- Argonne National Laboratory
- HPDC Laboratory at Syracuse University

IN COOPERATION WITH:
- Rome Laboratory

***************************************************************************
PRE-SYMPOSIUM TUTORIALS
TUESDAY, JULY 28, 1998
***************************************************************************

Full-Day Tutorial (9:00 a.m. - 4:30 p.m.)

Tutorial 1: How to build a Beowulf:
Assembling, Programming, and Using a Clustered PC -
Do-it-yourself Supercomputer
Thomas Sterling, California Institute of Technology

Morning Half-Day Tutorials (8:30 a.m. - 12:00 noon)

Tutorial 2: Collaborative Visualization in Distributed Virtual Environments
Jason Leigh, NCSA and Electronic Visualization Laboratory
Andrew E. Johnson, University of Illinois at Chicago

Tutorial 3: High Performance Computing with Legion
Andrew Grimshaw, University of Virginia

Afternoon Half-Day Tutorials (1:30 - 5:00 p.m.)

Tutorial 4: Introduction to Performance Issues in Using MPI
for Communication and I/O
William Gropp, Rusty Lusk, Rajeev Thakur
Argonne National Laboratory

Tutorial 5: The Globus Grid Programming Toolkit
Gregor von Laszewski, Argonne National Laboratory
Steven Fitzgerald, Information Sciences Institute of the
University of Southern California

===========================================================================
7:00 - 9:00 p.m. Evening Reception and Registration
===========================================================================

***************************************************************************
WEDNESDAY, July 29
***************************************************************************

8:00 - 10:00 a.m. Registration

8:30 - 10:00 a.m. KEYNOTE SPEECH: Larry Smarr, National Center for
Supercomputing Applications
How Distributed Computing Changes Science

10:00-10:30 BREAK

10:30 a.m. - 12:00 noon CONCURRENT SESSIONS (1, 2)

SESSION 1: COMMUNICATIONS, Chair: Doug Schmidt, WUStL

1. Adaptive Utilization of Communication and Computational Resources
in High-Performance Distributed Systems: The EMOP Approach
Shridhar Diwan, Dennis Gannon, Indiana University

2. Efficient Layering for High Speed Communication: Fast Messages 2.x
M. Lauria, S. Pakin, Andrew Chien, U. of Illinois, Urbana-Champaign

3. The Software Architecture of a Distributed Quality of Session Control
Layer
Alaa Youssef, Hussein Abdel-Wahab, Kurt Maly, Old Dominion University

SESSION 2: APPLICATIONS, Chair: Andrew Grimshaw, University of Virginia

1. TeleMed: Wide-Area, Secure, Collaborative Object Computing with Java
and CORBA for Healthcare
David W. Forslund, James E. George, Eugene M. Gavrilov,
Los Alamos National Lab

2. Optimizing Protocol Parameters to Large Scale PC Cluster and
Evaluation of Its Effectiveness with Parallel Data Mining
Masato Oguchi, Takahiko Shintani, Takayuki Tamura, Masaru Kitsuregawa,
University of Tokyo

3. Computing Twin Primes and Brun's Constant: A Distributed Approach
Patrick Fry, Jeffrey Nesheiwat, and Boleslaw Szymanski
Rensselaer Polytechnic Institute

===========================================================================
12:00 noon - 1:30 p.m. LUNCH
===========================================================================

1:30 p.m. - 2:30 p.m. PANEL: BUILDING THE GRID
Moderator/Organizer: Ian Foster, Argonne National Laboratory
Panelists: To be announced.

This panel provides an opportunity for discussion of issues relating to
the construction of a national-scale "grid" providing efficient and uniform
access to high-end resources. The panelists are all participants in the
July 1998 "Grids 98" workshop at which these issues were discussed.

===========================================================================
2:30 p.m. - 3:00 p.m. Break
===========================================================================

3:00 - 5:00 p.m. CONCURRENT SESSIONS (3, 4)

SESSION 3: METACOMPUTING, Chair: Cliff Neuman, Univ. of Southern California

1. WebOS: Operating System Services For Wide Area Applications,
Amin Vahdat, Thomas Anderson, Michael Dahlin,
University of California, Berkeley, University of Washington

2. The DOGMA Approach to High-Utilization Supercomputing,
Glenn Judd, Mark Clement, Quinn Snell, Brigham Young University

3. On the Design of a Demand-Based Network-Computing System: The Purdue
University Network Computing Hub,
Nirav H. Kapadia and José A. B. Fortes, Purdue University

4. Application Experiences with the Globus Toolkit,
Sharon Brunett, Karl Czajkowski, Ian Foster, Andy Johnson,
Carl Kesselman, Jason Leigh, and Steven Tuecke,
Argonne National Lab, Caltech, U. of Illinois, Chicago,
U. of Southern California.

SESSION 4: SYSTEMS, Chair: Rob Armstrong, Sandia National Laboratories

1. A High Performance Distributed Shared Memory for Symmetrical
Multiprocessor Clusters
Sumit Roy, Vipin Chaudhary, Wayne State University

2. Two-Stage Transaction Processing in Client-Server DBMSs
Vinay Kanitkar, Alex Delis, Polytechnic University

3. Hectiling: An Integration of Fine and Coarse-Grained Load-Balancing
Strategies
Samuel H. Russ, Ioana Banicescu, Sheikh Ghafoor, Bharathi Janapareddi,
Jonathan Robinson, Rong Lu, Mississippi State University

4. Otter: Bridging the Gap between MATLAB and ScaLAPACK,
Michael J. Quinn, Alexey Malishevsky, Nagajagadeswar Seelam,
Oregon State University

===========================================================================
5:00 - 5:30 p.m. BREAK

===========================================================================

5:30 - 7:30 p.m. POSTERS, DEMOS AND RECEPTION

===========================================================================

***************************************************************************
THURSDAY, JULY 30th
***************************************************************************

9:00 - 10:00 a.m. KEYNOTE SPEECH: Rick Rashid, Microsoft

===========================================================================
10:00 - 10:30 a.m. BREAK
===========================================================================
10:30 a.m. - 12:00 noon Concurrent Sessions (5, 6)

SESSION 5: DISTRIBUTED COMPUTING, Chair: Fran Berman, U. of Calif., San Diego

1. On the Effectiveness of Distributed Checkpoint Algorithms for
Domino-Free Recovery,
Franco Zambonelli, University of Modena

2. Authorization for Metacomputing Applications,
Grig Gheorghiu, T. Ryutov, Clifford Neuman, University of Southern
California

3. Matchmaking: Distributed Resource Management for High Throughput
Computing,
Rajesh Raman, Miron Livny, and Marvin Solomon, University of Wisconsin

SESSION 6: I/O, Chair: Bill Gropp, Argonne National Laboratory

1. Distant I/O: One-Sided Access to Secondary Storage on Remote Processors,
J. Nieplocha, I. Foster, H. Dachsel,
Argonne National Laboratory and Pacific Northwest National Laboratory

2. Automatic Parallel I/O Performance Optimization using Genetic Algorithms,
Ying Chen, Marianne Winslett, Y. Cho, S. Kuo,
University of Illinois, Urbana-Champaign

3. Parallel I/O Performance of Fine Grained Data Distributions,
Yong Cho, Marianne Winslett, Ying Chen, Szu-wen Kuo,
University of Illinois, Urbana-Champaign

===========================================================================
12:00 noon - 1:30 p.m. LUNCH
===========================================================================
1:30 p.m. - 2:30 p.m. PANEL: HPDC EXPERIENCES
Moderator/Organizer: David Culler, U. of Calif., Berkeley
Panelists: To be announced.

This panel will bring together people with practical experience in building
large-scale high-performance distributed computing systems and applications.
Discussion will focus on the lessons learned.

===========================================================================
2:30 p.m. - 3:00 p.m. Break
===========================================================================

3:00 p.m. - 4:30 p.m. CONCURRENT SESSIONS (7, 8)

SESSION 7: ADAPTIVITY, Chair: Michael Quinn, Oregon State University

1. Autopilot: Adaptive Control of Distributed Applications,
Randy Ribler, Jeffrey Vetter, Huseyin Simitci, Daniel Reed,
University of Illinois, Urbana-Champaign

2. Prediction and Adaptation in Active Harmony,
Jeffrey Hollingsworth, Peter Keleher, University of Maryland

3. A Resource Query Interface for Network-Aware Applications
Bruce Lowekamp, Nancy Miller, Dean Sutherland, Thomas Gross,
Peter Steenkiste, Jaspal Subhlok,
Carnegie Mellon University

SESSION 8: INTERACTIVE SYSTEMS, Chair: Daniel McAuliffe, Rome Laboratory

1. Personal Tele-Immersion Devices,
Tom DeFanti, Dan Sandin, Greg Dawe, Maxine Brown, Maggie Rawlings,
Gary Lindahl, University of Illinois, Chicago

2. A Framework for Interacting with Distributed Programs and Data,
Steven Hackstadt, Christopher Harrop, Allen Malony,
University of Oregon

3. Efficient Coupling of Parallel Applications Using PAWS,
Peter H. Beckman, Patricia K. Fasel, William F. Humphrey,
Susan M. Mniszewski, Los Alamos National Laboratory

===========================================================================
6:30-9:30 p.m. DINNER CRUISE
===========================================================================

***************************************************************************
FRIDAY, JULY 31
***************************************************************************

===========================================================================
8:30 - 10:00 a.m. CONCURRENT SESSIONS (9, 10)

SESSION 9: DIGITAL LIBRARIES AND DATABASES, Chair: Reagan Moore, SDSC

1. High-Performance Digital Libraries: Building the Interspace for the Grid,
Bruce Schatz, University of Illinois, Urbana-Champaign

2. Adaptive Load Sharing for Clustered Digital Library Servers,
Huican Zhu, Tao Yang, Qi Zheng, David Watson, Oscar Ibarra, Terry Smith,
University of California, Santa Barbara

3. Cooperative Caching of Dynamic Content on a Distributed Web Server,
Vegard Holmedahl, Ben Smith, and Tao Yang,
University of California, Santa Barbara

SESSION 10: INFRASTRUCTURE, Chair: Charlie Catlett, NCSA

1. A High Performance Network Connection for Research and Education between
the vBNS and the Asia-Pacific Advanced Network (APAN)
Michael A. McRobbie, Karen H. Adams, Dennis B. Gannon, Donald F. McMullen,
Douglas D. Pearson, R. Allen Robel, Steven S. Wallace, James G. Williams,
Indiana University

2. The NetLogger Methodology for High Performance Distributed Systems
Performance Analysis,
Brian Tierney, William Johnston, Jason Lee, Gary Hoo,
Lawrence Berkeley National Laboratory

3. Monitoring Health & Status in a Metacomputing Environment: The Globus
Heartbeat Monitor
Paul Stelling, Craig A. Lee, Carl Kesselman, Ian Foster,
Gregor von Laszewski,
The Aerospace Corporation, Argonne National Laboratory,
University of Southern California

=============================================================================
10:00 - 10:30 a.m. BREAK
=============================================================================

10:30 a.m. - 12:00 noon CONCURRENT SESSIONS (11, 12)

SESSION 11: APPLICATIONS AND SYSTEMS, Chair: David Abramson, Monash
University

1. High-Speed, Wide Area, Data Intensive Computing: A Ten Year Retrospective,
Bill Johnston, Lawrence Berkeley National Lab

2. Mesh Partitioning for Distributed Systems,
Jian Chen, Valerie Taylor, Northwestern University

3. Towards a Hierarchical Scheduling System for Distributed WWW Server
Clusters
Daniel A. Andresen, Tim R. McCune, Kansas State University

SESSION 12: COMMUNICATIONS, Chair: Salim Hariri, Syracuse University

1. Adaptive Data Communication Algorithms for Distributed Heterogeneous
Systems
Prashanth B. Bhat, Viktor K. Prasanna, C.S. Raghavendra,
The Aerospace Corporation, University of Southern California

2. Sender Coordination in the Distributed Virtual Communication Machine,
Marcel-Catalin Rosu, Karsten Schwan, Georgia Institute of Technology

3. A Software Architecture for Global Address Space Communication
on Clusters: Put/Get on Fast Messages
Louis A. Giannini, Andrew A. Chien,
University of Illinois, Urbana-Champaign

***************************************************************************
KEYNOTE SPEAKERS
***************************************************************************
Wednesday Keynote Speech: Larry Smarr,
National Center for Supercomputing Applications

Dr. Smarr has been one of the pioneers in creating a national information
infrastructure to support academic research, governmental functions, and
industrial competitiveness. In 1985, Dr. Smarr became the Director of the
National Center for Supercomputing Applications (NCSA) at the University of
Illinois at Urbana-Champaign (UIUC). In 1997, he became Director of the
National Computational Science Alliance. Dr. Smarr has conducted
observational, theoretical, and computational astrophysics research
for fifteen years. In 1995 he was elected to the National Academy
of Engineering.

Thursday Keynote Speech: Rick Rashid,
Vice President, Research
Microsoft Corporation

Dr. Richard Rashid was named vice president of research for Microsoft in
July 1994. Today he heads the Microsoft Research Division.

Prior to his promotion, Dr. Rashid was the director of Microsoft Research,
where he focused on operating systems, networking, and multiprocessors. In
that role, he was responsible for the creation of key technologies leading
to the development of Microsoft's interactive TV system, now in test
deployment in Redmond, WA.

Before coming to Microsoft in September 1991, Dr. Rashid was a professor of
computer science at Carnegie Mellon University for 12 years. He directed
the design and implementation of several influential network operating
systems, including the Mach operating system, and published dozens of papers
in the areas of computer vision, operating systems, programming languages
for distributed processing, network protocols, and communications security.
Dr. Rashid is credited with co-development of one of the earliest networked
computer games, Alto Trek, during the mid-1970s.

He holds both doctor of science and master of science degrees in computer
science from the University of Rochester, and a bachelor of science in
mathematics with honors from Stanford University.

Dr. Rashid is a past member of the DARPA UNIX Steering Committee and CSNet
Executive Committee. He is also a former chairman of the ACM System Awards
Committee.

***************************************************************************
TUTORIAL DESCRIPTIONS
****************************************************************************

TUTORIAL 1: How to build a Beowulf: Assembling, Programming, and Using a
Clustered PC -- Do-it-yourself Supercomputer
Thomas Sterling,
California Institute of Technology
Full-day Tutorial - July 28, 1998

It has recently become possible to assemble a collection of commodity,
mass-market hardware components and freely available software packages in
a day and to be executing real-world applications by dinner time, achieving
sustained performance greater than 1 Gflops at a total cost of around
$50,000. Furthermore, these numbers are improving on an almost daily basis.
This full-day tutorial will cover all aspects of system assembly,
integration, software installation, programming, application development,
system management, and benchmarking.

Demonstrations with actual hardware and software components will be
conducted throughout the tutorial. Participants will be encouraged to
closely examine and manipulate elements of a Beowulf at various stages of
integration with strong Q&A interaction between presenters and attendees.
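
For a taste of the programming side: a Beowulf assembled this way is
typically programmed with a message-passing library such as MPI. The
minimal sketch below is illustrative only and is not drawn from the
tutorial materials; it is the classic first program to run on a freshly
built cluster, with every process reporting its rank and the node it
landed on.

    /* Minimal MPI "hello" for a newly assembled cluster: each process
       reports its rank and the node it is running on. Illustrative
       sketch, not part of the tutorial materials. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count */
        MPI_Get_processor_name(name, &namelen);  /* node host name      */

        printf("Hello from rank %d of %d on node %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with a command such as "mpirun -np 8
./hello" (binary name hypothetical), this gives a quick check that the
nodes, the network, and the MPI installation are all working together.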

Dr. Thomas Sterling has been engaged in research related to parallel
computer architecture, system software, and evaluation for more than
a decade. He was a key contributor to the design, implementation, and
testing of several experimental parallel architectures.

The focus of Dr. Sterling's research has been on the modeling and evaluation
of performance factors determining scalability of high performance computing
systems. Upon completion of his Ph.D. as a Hertz Fellow from MIT in 1984,
Dr. Sterling served as a research scientist at Harris Corporation's Advanced
Technology Department, and later with the systems group of the IDA
Supercomputing Research Center. In 1992, Dr. Sterling joined the USRA
Center for Excellence in Space Data and Information Sciences to support
the NASA HPCC earth and space sciences project at the Goddard Space Flight
Center. Dr. Sterling was Adjunct Associate Professor at the University of
Maryland College Park, where he lectured on computer architecture. He holds
six patents, is the co-author of two books, and has published dozens of
papers in the field of parallel computing. Dr. Thomas Sterling is currently Senior
Staff Scientist, High Performance Computing Systems Group, Jet Propulsion
Laboratory; and Visiting Associate, Center for Advanced Computing Research,
California Institute of Technology.

Thomas Sterling
Caltech/JPL
Mail Code 158-79
1200 E. California Blvd.
Pasadena, CA 91125
Phone: (626)-395-3901
Fax: (626)-584-5917
Email: tron@cacr.caltech.edu

============================================================================

TUTORIAL 2: Collaborative Visualization in Distributed Virtual Environments
Jason Leigh, NCSA and Electronic Visualization Laboratory
Andrew E. Johnson, University of Illinois at Chicago
Half-day Tutorial - Morning July 28, 1998

Tele-Immersion is the unification of collaborative virtual reality (VR)
and video conferencing in the context of significant computation and
data-mining. The goal of Tele-Immersion is to use the latest in
visualization, networking, and database technology to allow domain
scientists to collaboratively steer scientific computations, query
enormous raw and derived data-sets, and visualize their results in a
seamless virtual environment.

This course will provide:

- an introduction to the field of Tele-Immersion, including examples
of a number of successful tele-immersive applications in a wide range of
disciplines such as environmental hydrology, design and engineering, and
education;
- a discussion of the technological requirements of Tele-Immersion;
- tips for building effective tele-immersive environments;
- and tools for the rapid construction of tele-immersive applications.

This course is targeted at: domain scientists and engineers who are
interested in learning how to apply collaborative virtual reality technology
in their areas of research; and at HPDC application developers who are
interested in creating and deploying such technologies.

Jason Leigh is a senior scientist with a joint appointment at the National
Center for Supercomputing Applications and the Electronic Visualization
Laboratory (EVL) at the University of Illinois at Chicago, where his main
research focus is collaborative virtual reality (or Tele-Immersion).
Jason's background in human factors, interactive computer graphics,
networking, databases, and art allows him to approach the development of
techniques and tools for Tele-Immersion from multiple perspectives. Jason
is assisting General Motors and Hughes Research in applying collaborative
technologies to their VisualEyes VR vehicle design system. He is working
as a member of the NICE project to develop VR collaborative educational
environments for children. Finally, Jason is leading the research and
development of a software architecture (called CAVERNsoft) that integrates
networking and databases in a manner that is optimized for Tele-Immersion.

Jason Leigh
Electronic Visualization Lab (M/C 154)
University of Illinois at Chicago
851 S. Morgan St. Room 1120 SEO
Chicago, IL 60607-7053

Email: jleigh@eecs.uic.edu
Phone: (312) 996-3002
Fax: (312) 413-7585

Andrew E. Johnson, PhD, is a faculty member of the Electronic Visualization
Laboratory and an Assistant Professor in the Electrical Engineering and
Computer Science Department at the University of Illinois at Chicago. His
current research interests focus on tele-immersion and collaboration in
immersive virtual environments; he works on the CAVERNsoft project. As
a continuation of his work on the NICE project, a collaborative virtual
reality learning environment for young children, he is currently working on
an NSF-funded study of deep learning and visualization technologies,
investigating how collaborative virtual reality can be used to help teach
concepts that are counter-intuitive to a learner's current mental model.

============================================================================

TUTORIAL 3: High Performance Computing with Legion
Andrew Grimshaw, University of Virginia
Half-day Tutorial - Morning July 28, 1998

Developed at the University of Virginia, Legion is an integrated software
system for distributed parallel computation. While fully supporting
existing codes written in MPI and PVM, Legion provides features and
services that allow users to take advantage of much larger, more complex
resource pools. With Legion, for example, a user can easily run a
computation on a supercomputer at a national center while dynamically
visualizing the results on a local machine. As another example, Legion
makes it trivial to schedule and run a large parameter space study on
several workstation farms simultaneously. Legion permits computational
scientists to use cycles wherever they are, allowing bigger jobs to run
in shorter times through higher degrees of parallelization.

Key capabilities include the following:

- Legion eliminates the need to move and install binaries manually on
multiple platforms. After Legion schedules a set of tasks over multiple
remote machines, it automatically transfers the appropriate binaries to
each host. A single job can run on multiple heterogeneous architectures
simultaneously; Legion will ensure that the right binaries go to each, and
that it only schedules onto architectures for which it has binaries.

- Legion provides a virtual file system that spans all the machines
in a Legion system. Input and output files can be seen by all the parts
of a computation, even when the computation is split over multiple
machines that don't share a common file system. Different users can also
use the virtual file system to collaborate, sharing data files and even
accessing the same running computations.

- Legion's object-based architecture dramatically simplifies building
add-on tools for tasks such as visualization, application steering, load
monitoring, and job migration.

- Legion provides optional privacy and integrity of communications for
applications distributed over public networks. Multiple users in a Legion
system are protected from one another.

These features also make Legion attractive to administrators looking for
ways to increase and simplify the use of shared high-performance machines.
The Legion implementation emphasizes extensibility, and multiple policies
for resource use can be embedded in a single Legion system that spans
multiple resources or even administrative domains.

This tutorial will provide background on the Legion system and teach how to
run existing parallel codes within the Legion environment. The target
audience is supercomputing experts who help scientists and other users get
their codes parallelized and running on high performance systems.

Andrew S. Grimshaw is an Associate Professor of Computer Science and
Director of the Institute of Parallel Computation at the University
of Virginia. His research interests include high-performance parallel
computing, heterogeneous parallel computing, compilers for parallel systems,
operating systems, and high-performance parallel I/O. He is the chief
designer and architect of Mentat and Legion. Grimshaw received his M.S. and
Ph.D. from the University of Illinois at Urbana-Champaign in 1986 and 1988,
respectively.

Andrew Grimshaw
Department of Computer Science
University of Virginia
Charlottesville, VA 22903

(804) 982-2204
fax: (804) 982-2214
grimshaw@Virginia.edu

============================================================================
TUTORIAL 4: Introduction to Performance Issues in Using MPI
for Communication and I/O
William Gropp, Rusty Lusk, Rajeev Thakur
Argonne National Laboratory
Half-day Tutorial - Afternoon July 28, 1998

MPI is now widely accepted as a standard for message-passing parallel
computing libraries. MPI-2, released in June 1997, adds new capabilities
to MPI, including remote-memory access and parallel I/O. The richness of
MPI provides many ways to express an operation, such as exchanging messages
for a grid-based computation or writing out a distributed array to a
parallel file system. Choosing the best way requires an understanding both
of the MPI approach to high performance and of the capabilities of
particular implementations of MPI.

This tutorial will discuss performance-critical issues in message-passing
programs, explain how to examine the performance of an application using
MPI-oriented tools, and show how the features of MPI can be used to attain
peak application performance. It will be assumed that attendees have an
understanding of the basic elements of the MPI specification. Experience
with message-passing parallel applications will be helpful but not required.
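
As a flavor of the performance-critical choices the tutorial examines,
consider the common idiom of overlapping communication with computation.
The sketch below is an illustrative example, not drawn from the tutorial
materials: it replaces a blocking send/receive pair with nonblocking MPI
calls so that interior work can proceed while the boundary exchange of a
1-D grid decomposition is in flight.

    /* Illustrative sketch (not from the tutorial): overlap a boundary
       exchange with interior computation using nonblocking MPI calls. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1024    /* boundary message size, chosen arbitrarily */

    int main(int argc, char **argv)
    {
        int rank, size, left, right, i;
        double send_buf[N], recv_buf[N];
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Neighbors in a periodic ring, as in a 1-D grid decomposition. */
        left  = (rank - 1 + size) % size;
        right = (rank + 1) % size;

        for (i = 0; i < N; i++)
            send_buf[i] = (double) rank;

        /* Post the receive and the send without blocking... */
        MPI_Irecv(recv_buf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(send_buf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        /* ...so interior computation that does not depend on recv_buf
           can proceed here while the messages are in flight. */

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("rank %d received boundary data from rank %d\n", rank, left);
        MPI_Finalize();
        return 0;
    }

Whether this overlap actually improves performance depends on the MPI
implementation and the underlying network, which is exactly the kind of
question the MPI-oriented performance tools covered in the tutorial are
meant to answer.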

William Gropp is a senior computer scientist in the Mathematics and Computer
Science Division at Argonne National Laboratory. After receiving his Ph.D.
in Computer Science from Stanford University in 1982, he held the positions
of Assistant (1982-1988) and Associate (1988-1990) Professor in the Computer
Science Department of Yale University. In 1990, he joined the Numerical
Analysis group at Argonne. His research interests are in parallel
computing,
software for scientific computing, and numerical methods for partial
differential equations. He is a co-author of "Using MPI: Portable Parallel
Programming with the Message-Passing Interface" and is a chapter author in
the MPI-2 Forum. His current projects include the design and implementation
of MPICH, a portable implementation of the MPI Message-Passing Standard, the
design and implementation of PETSc, a parallel, numerical library for PDEs,
and research into programming models for parallel architectures.

Ewing ("Rusty") Lusk is a senior computer scientist in the Mathematics and
Computer Science Division at Argonne National Laboratory. After receiving
his Ph.D. in mathematics at the University of Maryland in 1970, he served
first in the Mathematics Department and later in the Computer Science
Department at Northern Illinois University before joining Argonne National
Laboratory in 1982. He has been involved in the MPI standardization effort
both as an organizer of the MPI-2 Forum and as a designer and implementor of
the MPICH portable implementation of the MPI Standard. His current projects
include design and implementation of the MPI-2 extensions to MPICH and
research into programming models for parallel architectures. Past interests
include automated theorem-proving, logic programming, and parallel
computing.
He is a co-author of several books in automated reasoning and parallel
computing, including "Using MPI: Portable Parallel Programming with the
Message-Passing Interface". He is the author of more than eighty research
articles in mathematics, automated deduction, and parallel computing.

Rajeev Thakur is an assistant computer scientist in the Mathematics
and Computer Science Division at Argonne National Laboratory. He received
a Ph.D. in Computer Engineering from Syracuse University in 1995. He is
actively engaged in parallel I/O research, particularly in implementing
portable parallel I/O interfaces and I/O characterization of parallel
applications. He participated in the MPI-2 Forum to define a standard,
portable interface for parallel I/O (MPI-IO). He is currently developing a
high-performance, portable MPI-IO implementation called ROMIO.

Rajeev Thakur
Argonne National Laboratory
Building 221, Room C-247
Argonne, IL, 60439
Email: thakur@mcs.anl.gov
Phone: (630) 252-1682
Fax: (630) 252-5986

============================================================================
TUTORIAL 5: The Globus Grid Programming Toolkit
Gregor von Laszewski, Argonne National Laboratory
Steven Fitzgerald, Information Sciences Institute of the
University of Southern California
Half-day Tutorial - Afternoon July 28, 1998

This tutorial is an introduction to the capabilities of the Globus grid
programming toolkit. Computational grids promise to enable a wide range
of emerging application concepts such as remote computing, distributed
supercomputing, tele-immersion, smart instruments, and data mining.
However, the development and use of such applications is in practice
difficult and time consuming, because of the need to deal with complex and
highly heterogeneous systems. The Globus grid programming toolkit is
designed to help application developers and tool builders overcome these
obstacles to the construction of "grid-enabled" scientific and engineering
applications.
It does this by providing a set of standard services for authentication,
resource location, resource allocation, configuration, communication, file
access, fault detection, and executable management. These services can be
incorporated into applications and/or programming tools in a "mix-and-match"
fashion to provide access to needed capabilities.

Our goal in this tutorial is both to introduce the capabilities of the
Globus toolkit and to help attendees apply Globus services to their own
applications. Hence, we will structure the tutorial as a combination of
Globus system description and application examples.

Dr. Gregor von Laszewski obtained his Ph.D. in Computer Science from
Syracuse University and is currently a researcher in the Mathematics
and Computer Science Division at Argonne National Laboratory. He has been
involved in the development of many Globus components and applications, and
leads an effort to apply Globus services to the real-time analysis of data
from scientific instruments.

Gregor von Laszewski
Argonne National Laboratory
Building 221, Room A
Argonne, IL, 60439
Email: gregor@mcs.anl.gov
Phone: (630) 252-0472
Fax: (630) 252-5986

Dr. Steven Fitzgerald received his D.Sc. in Computer Science from the
University of Massachusetts Lowell. He is a researcher at the Information
Sciences Institute of the University of Southern California and holds a
faculty position at California State University Northridge.
Steve's involvement in the Globus project has focused on Globus's
information services, and the creation and deployment of the GUSTO testbed.

***************************************************************
CONFERENCE LOCATION
*****************************************************************

HPDC-7 will be held at the Drake Hotel in Chicago at 140 East Walton Place.
The Drake is a city landmark, and the pride of all Chicagoans. It's located
in the heart of the Gold Coast, overlooking Lake Michigan, across from Oak
Street Beach and right on the Magnificent Mile.

*****************************************************************
HOTEL RESERVATIONS
******************************************************************
All hotel reservations must be made directly with the Hotel.
For online reservations, see the website at
http://www.hilton.com/hotels/CHIDHVI/index.html,
or you may call 1-800-HILTONS. The local phone number for the
Chicago Drake is 312 787-2200, and the guest fax number is
312 951-5803. Mention IEEE or HPDC7 for our special rate of
$155.00 plus the 14.9% sales/room tax for a single/double room.
This rate is good from the night of July 26 through Sunday morning,
August 3. The reservation cut-off date is Friday, July 3, 1998, at
5:00 p.m. Central time. Reservations received after this date
will be accepted by the Hotel on a space-available basis at the
conference rate.

*******************************************************************
TRANSPORTATION
******************************************************************
The Drake is 20 miles from O'Hare Airport (30-45 minute drive).
You may take the "Airport Express" bus to the Hotel. The price is
$15.50 one way. Tickets may be purchased at the airport. The
Drake Hotel is the second hotel stop on the bus route. A taxi
would cost approximately $30.00.

********************************************************************
DINNER CRUISE
******************************************************************
Plan to join us for a Dinner Cruise on Lake Michigan Thursday
evening, July 30. "Chicago's First Lady" will board beginning at
6:30 pm. The cruise starts at 7:00 pm and returns to dock at
10:00 pm. There will be a buffet dinner and cash bar. Extra
tickets for guests and student registrants may be purchased.
Please see the registration form.

******************************************************************
Email: Access will be provided at the conference site.
******************************************************************

Plan to come early or stay late: there's lots to see and do in Chicago.

Weather: Weather in late July is usually hot. You can
expect highs in the 90s and lows in the 60s.
*****************************************************************
HPDC-7 REGISTER TODAY!
http://www.mcs.anl.gov/hpdc7

July 28-31, 1998, The Drake Hotel, Chicago, Illinois

Register by mail through July 17 using the form below. Registration by
e-mail is also available through July 17; you must use a credit card
number. On-site registration in Chicago begins Tuesday, July 28. You are
strongly encouraged to pre-register.

Please MAIL this completed form to:

Syracuse University
EECS Dept / 2-120 CST
HPDC-7 Symposium
Syracuse, NY 13244
OR FAX it to 315-443-9436 or 315-443-1122,
or CALL 315-443-1260.
OR
You may register by E-mail using your credit card. Send to:
skafidas@summon3.syr.edu


Name: ______________________________________________________________
       (Last Name/Family Name)                    (First Name)

Affiliation: ________________________________________________________

Address: ____________________________________________________________

City/State/Zip/Postal Code/Country: _________________________________

Phone: ______________________________ Fax Number: __________________

E-mail Address: _____________________________________________________

IEEE/ACM SIGCOMM Member #: __________________________________________
(required for member rate)

[ ] If author, check here.
[ ] If poster paper author, check here.
[ ] If you have a disability and may require accommodation in
order to fully participate in this activity, please check here. You
will be contacted to discuss your needs.

[ ] Please check here if you wish vegetarian meals.


------------------------------------------------------------------
REGISTRATION FOR HPDC-7 SYMPOSIUM, July 28-31 (please check the
appropriate fee)

                            Advance Registration   Regular Registration
                            (Received by July 6)   (Received after July 6)

IEEE/ACM SIGCOMM Member         [ ] $335               [ ] $395
Non-Member                      [ ] $450               [ ] $495
Full-time student               [ ] $250               [ ] $275

Symposium registration fee includes a copy of the proceedings, coffee
breaks, sponsored lunches, and the Thursday evening dinner cruise.

Student registration does not include the evening cruise. Extra tickets
and extra copies of the proceedings may be purchased on-site.

Registration cancellations made after July 17 are not refundable.

-----------------------------------------------------------------
REGISTRATION FOR PRE-SYMPOSIUM TUTORIALS, July 28

You may select the all-day tutorial, or one tutorial from the morning
session and/or one from the afternoon session. Each tutorial registration
fee includes attendance at the tutorial session and materials. There are no
student fees for the tutorials. Cancellations of tutorial registrations
made after July 17 will be subject to the total fee. We reserve the right
to cancel the tutorials due to insufficient participation or other
unforeseeable problems, in which case fees will be refunded.

                            Advance Registration   Regular Registration
                            (Received by July 6)   (Received after July 6)

Tutorial No. 1 (all day)
IEEE/ACM SIGCOMM Member         [ ] $310               [ ] $360
Non-Member                      [ ] $385               [ ] $445

Tutorials 2, 3, 4, 5 (half-day), each
Member                          [ ] $230               [ ] $280
Non-Member                      [ ] $290               [ ] $360

Please register me for the following tutorials:

All day
__ 1  How to Build a Beowulf: Assembling, Programming, and Using a
      Clustered PC Do-it-yourself Supercomputer

Morning (select one)
__ 2  Collaborative Visualization in Distributed Virtual Environments
__ 3  High Performance Computing with Legion

Afternoon (select one)
__ 4  Introduction to Performance Issues in Using MPI for Communication
      and I/O
__ 5  The Globus Grid Programming Toolkit

___ Number of Tutorials x $______ = $_______

-----------------------------------------------------------------
Other fees:
Guest tickets for the Dinner Cruise aboard "Chicago's First Lady",
Thursday, July 30, 6:30 pm - 10:00 pm.
___ Number of tickets x $40.00 = $______

TOTAL ENCLOSED

Symp. Fee $_______ + Tutorial Fees $______ + Dinner Cruise $______
= Total $_______

TOTAL AMOUNT PAID $__________________ U.S.

METHOD OF PAYMENT

[ ] Check enclosed (payable to SYRACUSE UNIVERSITY)
[ ] Please charge [ ] Visa [ ] MasterCard
Card Number: ______________________________ Exp. Date: ______________

Cardholder Name (as it appears on the card): _________________________
(Please print)

Cardholder Signature: ________________________________________________

*****************************************************************************
SYMPOSIUM ORGANIZING COMMITTEE
*****************************************************************************

GENERAL CHAIR:
Salim Hariri, Syracuse University

PROGRAM COMMITTEE CHAIR:
Ian Foster, Argonne National Laboratory and University of Chicago

PROGRAM COMMITTEE:
David Abramson, Monash University, Australia
Khalid Al-Tawil, King Fahd University, Saudi Arabia
Rob Armstrong, Sandia National Laboratories
Fran Berman, University of California at San Diego
Ken Birman, Cornell University
Andrew Chien, University of Illinois at Urbana-Champaign
David Culler, University of California at Berkeley
Ian Foster, Argonne National Laboratory
Andrew Grimshaw, University of Virginia
Salim Hariri, Syracuse University
Carl Kesselman, USC Information Sciences Institute
T. V. Lakshman, Bell Laboratories
Reagan Moore, San Diego Supercomputer Center
Clifford Neuman, USC Information Sciences Institute
Mike Quinn, Oregon State University
C. S. Raghavendra, The Aerospace Corporation
Doug Schmidt, Washington University in St. Louis
Rolf Stadler, Columbia University
Rick Stevens, Argonne National Laboratory

LOCAL ARRANGEMENTS AND PUBLICITY CHAIR:
Maxine Brown, Electronic Visualization Laboratory, UIC, and
National Center for Supercomputing Applications, UIUC

EXHIBITS AND DEMONSTRATIONS CHAIR:
Jim Costigan, Electronic Visualization Laboratory, UIC

TUTORIAL CHAIR:
Michael Papka, Argonne National Laboratory

REGISTRATION CHAIR:
Cynthia Bromka-Skafidas, Syracuse University