Selected Presentations

Keynote Address: How Healthcare Could Leverage Exascale

Patricia Kovatch
Exascale and Healthcare
Associate Dean, Scientific Computing
Icahn School of Medicine at Mount Sinai

About the speaker:
Patricia Kovatch is the founding Associate Dean for Scientific Computing at the Icahn School of Medicine at Mount Sinai and the Co-Director of the Master of Science in Biomedical Informatics. She is funded by NIH to lead multiple projects, including the creation of a Big Omics Data Engine, a national data repository for the Children’s Health Exposure Analysis Resource, and a de-identified image data warehouse for the Clinical and Translational Science Award. She also leads the Community Research Education and Engagement for Data Science project for NIH. Prior to joining Mount Sinai, she was Director of an NSF supercomputer center.

[slides]


Scientific Computing, High-Performance Computing and Data Science in Higher Education

Marcelo Ponce, Erik Spence, Ramses van Zon and Daniel Gruner*
SciNet HPC Consortium, University of Toronto, 256 McCaul St, Toronto, Ontario, Canada, M5T1W5

*Presenting author

We present an overview of current academic curricula for Scientific Computing, High-Performance Computing and Data Science. After a survey of current academic and non-academic programs across the globe, we focus on the education program of the SciNet HPC Consortium, using its detailed enrolment and course statistics for the past four to five years. Not only do these data display a steady and rapid increase in the demand for research-computing instruction, they also show a clear shift from traditional (high-performance) computing to data-oriented methods. It is argued that this growing demand warrants specialized research-computing degrees. We describe proposed curricula, based on SciNet’s experience of student demand as well as on trends in advanced research computing.

[slides]


MPI-IO Performance Optimization – IOR Benchmark on IBM ESS GL4 Systems
Xinghong He
IBM STG WW Client Centers

The IOR (Interleaved or Random) benchmark is frequently used to measure the I/O performance of parallel file systems such as IBM Spectrum Scale (GPFS). IOR supports POSIX, MPI-IO, HDF5, and PnetCDF interfaces. The I/O bandwidth, typically expressed in MiB/s for write and read operations, depends largely on the configuration of the parallel file system being tested: the hardware and system parameter configurations of the file system, the underlying storage systems, and the interconnect among the different components of the whole system (the storage systems, the file servers, and the client systems that use the file system). For MPI-IO, which is widely used in High Performance Computing applications, the MPI implementation (for example, IBM PE, OpenMPI, MVAPICH) and its run-time parameters also have a large impact on performance.
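
As a rough illustration of the kind of measurement IOR performs, the sketch below (not IOR itself; the file name, block size, and use of MPI_BYTE are arbitrary choices for this example) times a collective write to a single shared file with MPI-IO and reports the aggregate bandwidth in MiB/s.

    /* Minimal sketch of a shared-file MPI-IO write-bandwidth measurement,
     * in the spirit of an IOR MPI-IO write phase.  File name and sizes
     * are illustration values, not IOR defaults. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const MPI_Offset block = 16 * 1024 * 1024;   /* 16 MiB written per rank */
        char *buf = malloc(block);
        memset(buf, rank, block);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "ior_like_testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        /* Each rank writes its own contiguous block of the single shared file. */
        MPI_File_write_at_all(fh, (MPI_Offset)rank * block, buf, (int)block,
                              MPI_BYTE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);                         /* close flushes the data */
        double t = MPI_Wtime() - t0;

        double tmax;
        MPI_Reduce(&t, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("aggregate write bandwidth: %.1f MiB/s\n",
                   (double)nprocs * block / (1024.0 * 1024.0) / tmax);

        free(buf);
        MPI_Finalize();
        return 0;
    }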

The MPI-IO support in IBM PE-2.2 is provided through ROMIO, a high-performance, portable implementation of MPI-IO. ROMIO uses data-movement strategies such as data sieving and collective buffering to optimize the I/O performance of applications with non-contiguous access patterns. The implementation exposes external parameters (hints) that control run-time behavior, such as whether to use collective buffering and what size of buffer to allocate.
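
As a brief sketch of the hints mechanism, the fragment below passes a few standard ROMIO hints through an MPI_Info object at file-open time. The particular values shown are illustrative, not tuning recommendations, and which hints take effect depends on the MPI implementation.

    /* Illustrative only: pass ROMIO hints via an MPI_Info object.
     * Hint names are standard ROMIO hints; the values are example settings.
     * Assumes MPI_Init has already been called. */
    #include <mpi.h>

    MPI_File open_with_hints(const char *path)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_cb_write", "enable");    /* collective buffering for writes */
        MPI_Info_set(info, "romio_ds_write", "disable");   /* data sieving for writes */
        MPI_Info_set(info, "cb_buffer_size", "16777216");  /* 16 MiB collective buffer */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, path,
                      MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);
        MPI_Info_free(&info);
        return fh;
    }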

In this presentation, we focus on MPI-IO performance tuning on an IBM Spectrum Scale file system consisting of ESS GL4 storage systems and InfiniBand FDR connections. We will show how to tune IBM PE and ROMIO parameters for optimal MPI-IO write and read performance on a cluster of POWER8 systems. The performance difference between the default and tuned parameters can be more than 100%, or even higher, depending on specific test conditions such as the number of nodes, the block size, and whether the MPI-IO tests use a single shared file or not.

[slides]


ANL Site Update: Summer 2016

Ben Allen, Ray Loy*, Gordon McPheeters, Jack O’Connell, William Scullin, and Tim Williams
Argonne Leadership Computing Facility

*Presenting author

This talk will present a summary of current activities at the Argonne Leadership Computing Facility (ALCF). We start with an overview of ALCF’s current production resources, centered on Blue Gene/Q, and the roadmap for its successors, which are part of the CORAL project. After a discussion of current BG/Q support topics, we detail two systems projects: implementing a filesystem burst buffer via GPFS AFM, and deploying HSI for improved functionality of our existing data archive system. Transitioning to the applications domain, we discuss the ongoing DOE multi-lab efforts toward achieving performance portability. Finally, we describe Argonne’s contribution to the preparation of the next generation of computational scientists through the Argonne Training Program on Extreme-Scale Computing (ATPESC).

[slides]


TotalView on IBM PowerLE and CORAL Sierra/Summit
Martin Bakal
Rogue Wave Software

Rogue Wave has been working to support IBM PowerLE with NVIDIA GPUs, in preparation for the CORAL Sierra/Summit systems. This work includes basic system support, the design and implementation of a proposed standard OpenMP 4 Debugging Interface (OMPD), and TotalView debugger scalability. The work also includes performance and functionality enhancements when debugging MPI + GPU clusters. OMPD has been designed in collaboration with LLNL, Aachen, and others, and it allows a debugger to inspect the internal runtime state of an OpenMP runtime library. Using OMPD, we’ll show how TotalView can inspect the state of OpenMP threads, parallel regions, task regions, ICVs, stack frames, and more. The next phase of OMPD will add support for debugging OpenMP 4 target device constructs. We’ll also report on our experiences debugging existing MPI + GPU programs, where multiple MPI processes share the GPUs on a node.
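
For context, the runtime state in question arises in hybrid codes like the hedged sketch below: an ordinary MPI + OpenMP program (not OMPD code) whose threads, parallel and task regions, and ICVs are the kind of state a debugger could surface through OMPD.

    /* Illustrative MPI + OpenMP target program: the parallel region, explicit
     * tasks and ICVs created here are examples of the runtime state a debugger
     * can examine through OMPD.  This is not OMPD code itself. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* ICVs a debugger could report for this process. */
        printf("rank %d: max threads (ICV) = %d, dynamic = %d\n",
               rank, omp_get_max_threads(), omp_get_dynamic());

        #pragma omp parallel               /* parallel region with worker threads */
        {
            #pragma omp single
            {
                int i;
                for (i = 0; i < 4; i++) {
                    #pragma omp task firstprivate(i)   /* explicit task regions */
                    printf("rank %d, thread %d: task %d\n",
                           rank, omp_get_thread_num(), i);
                }
            }
        }

        MPI_Finalize();
        return 0;
    }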

[slides]