Keynote Speakers

CCGrid 2013 will have three keynote speakers, one on each morning of the conference, among them the winner of the IEEE Award for Excellence in Scalable Computing.


Marc Snir

Speaker: Marc Snir, Director of the Mathematics and Computer Science Division, Argonne National Laboratory, and Michael Faiman and Saburo Muroga Professor, University of Illinois at Urbana-Champaign, USA

Title: Programming Models for High-Performance Computing

Abstract: The first version of the MPI standard was released in November 1993. At the time, many of the authors of this standard, myself included, viewed MPI as a temporary solution, to be used until it was replaced by a good programming language for distributed memory systems. Almost twenty years later, MPI is the main programming model for High-Performance Computing: practically all HPC applications use MPI, which is now in its third generation, and nobody expects MPI to disappear in the coming decade. On the other hand, attempts to replace MPI with languages such as High Performance Fortran have failed. Current attempts (UPC, Fortran, X10, Chapel, etc.) face major obstacles and seem unlikely to replace MPI.

The talk will discuss some plausible reasons for this situation. These include:

  1. Design issues with the various proposed languages: Namely, a few key things that MPI got right and most potential alternatives got wrong
  2. Lack of compelling motivation, so far, for switching programming environments
  3. The economic and social constraints of High-Performance Computing
  4. The large volume of "legacy" MPI code

We shall next discuss the implications of this situation for research on new programming models for High-Performance Computing.

Biography: Marc Snir is Director of the Mathematics and Computer Science Division at the Argonne National Laboratory and Michael Faiman and Saburo Muroga Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He currently pursues research in parallel computing. He was head of the Computer Science Department from 2001 to 2007. Until 2001 he was a senior manager at the IBM T. J. Watson Research Center where he led the Scalable Parallel Systems research group that was responsible for major contributions to the IBM SP scalable parallel system and to the IBM Blue Gene system.

Marc Snir received a Ph.D. in Mathematics from the Hebrew University of Jerusalem in 1979, worked at NYU on the NYU Ultracomputer project from 1980 to 1982, and was at the Hebrew University of Jerusalem from 1982 to 1986, before joining IBM. Marc Snir was a major contributor to the design of the Message Passing Interface. He has published numerous papers and given many presentations on computational complexity, parallel algorithms, parallel architectures, interconnection networks, parallel languages and libraries, and parallel programming environments.

Marc is an Argonne Distinguished Fellow, an AAAS Fellow, an ACM Fellow, and an IEEE Fellow. He has Erdős number 2 and is a mathematical descendant of Jacques Salomon Hadamard.


Simon Portegies Zwart

Speaker: Simon Portegies Zwart, Leiden Observatory, Leiden University, the Netherlands

Title: The Astronomical Multipurpose Software Environment and the Ecology of Star Clusters

Abstract: Star cluster ecology is the field of research where stellar evolution, gravitational dynamics, hydrodynamics and the background potential dynamics of the parent galaxy interact to produce a complex non-linear evolution of self-gravitating stellar systems. I will review the processes related to the ecology of stellar clusters, discuss the numerical hurdles and the physical principles. In addition, I will introduce the AMUSE framework with which we are performing simulations of the ecology of stellar clusters. AMUSE is a general purpose framework for interconnecting existing scientific software with a homogeneous and unified interface. Since the framework is based on the standard Message Passing Interface (MPI), any production-ready code that is written in a language that supports its native bindings can be incorporated; in addition, our framework is intrinsically parallel and it conveniently separates all the numerical solvers in memory. The strict separation also makes it possible to realize unit conversion between the different modules and to recover from fatal errors in a unified and structured way. The time spent in the framework is relatively small, and for production simulations we measured an overhead of at most 10%, which in our case is acceptable. Due to the unified structure of the interface, incorporating new modules which address the same physics is relatively straightforward. The time stepping between the codes can be simply consecutive or realized via a mixed variable symplectic method in which the Hamiltonian of the problem is solved in separate steps and combined via a Verlet-leapfrog integration scheme. In our experience with an implementation for multiphysics simulations in astrophysics, we encounter relatively few problems with the strict separation in methods, and the results of our test simulations are consistent with earlier results that use a monolithic framework.
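The Verlet-leapfrog scheme mentioned in the abstract underlies the mixed variable symplectic coupling: the Hamiltonian is split into parts that are advanced in alternating half-steps (kick-drift-kick). A minimal single-particle sketch of that integration pattern, not AMUSE's actual API, might look as follows (the function name and test problem are illustrative assumptions):

```python
def leapfrog(x, v, accel, dt, n_steps):
    """Kick-drift-kick (velocity Verlet) integration of one particle.

    The same splitting idea appears in mixed variable symplectic
    methods, where the 'kick' and 'drift' parts of the Hamiltonian
    are solved by separate modules and interleaved in half-steps.
    """
    a = accel(x)
    for _ in range(n_steps):
        v += 0.5 * dt * a   # half kick: advance velocity half a step
        x += dt * v         # full drift: advance position a full step
        a = accel(x)        # re-evaluate the force at the new position
        v += 0.5 * dt * a   # half kick: complete the velocity step
    return x, v

# Toy problem: harmonic oscillator, a(x) = -x, period 2*pi.
# Integrating for roughly one period returns the particle near
# its initial state (x = 1, v = 0) with well-conserved energy.
x, v = leapfrog(1.0, 0.0, lambda x: -x, dt=0.01, n_steps=628)
```

Being symplectic, the scheme does not drift in energy over long integrations, which is why it suits coupling gravitational dynamics modules over many time steps.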

Biography: Simon Portegies Zwart was born in Amsterdam and studied astronomy at the University of Amsterdam. After his PhD with Frank Verbunt at Utrecht University he traveled around the world, working as a postdoctoral fellow at the University of Amsterdam, Tokyo University (Japan), and MIT (USA), before returning to Amsterdam. He is now full professor of computational astrophysics at the Sterrewacht Leiden of Leiden University. His professional interests are high-performance computing and gravitational stellar dynamics, in particular the ecology of dense stellar systems. His personal interests include translating Egyptian hieroglyphs and brewing beer.


Daniel A. Reed

Speaker: Daniel A. Reed, Vice President for Research and Economic Development and University Computational Science and Bioinformatics Chair, University of Iowa, USA

Title: Clusters, Grids and Clouds: A Look from Both Sides

Abstract: In science and engineering, a tsunami of new experimental and computational data and a suite of increasingly ubiquitous sensors pose vexing problems in data analysis, transport, visualization and collaboration. Cloud computing and "big data", together with our experiences with clusters and grids, are extending our notions of computational science and engineering, bringing technical, political and economic challenges.  

What are the software structures and capabilities that best exploit these capabilities and economics while providing application compatibility and community continuity? What are the appropriate roles of public clouds relative to local computing systems, private clouds and grids? How can we best exploit elasticity for peak demand? How do we optimize performance and reliability?  How do we provide privacy and security? How do we balance traditional HPC investments against distributed systems and big data opportunities and avoid past research infrastructure pitfalls? How do we integrate the emerging Internet of Things and ubiquitous sensors for multidisciplinary fusion, while also managing security and privacy?

Biography: Daniel A. Reed is Vice President for Research and Economic Development, as well as University Chair in Computational Science and Bioinformatics and Professor of Computer Science, Electrical and Computer Engineering and Medicine, at the University of Iowa. Previously, he was Microsoft's Corporate Vice President for Technology Policy and Extreme Computing, where he helped shape Microsoft's long-term vision for technology innovations in cloud computing and the company's associated policy engagement with governments and institutions around the world.

Before joining Microsoft, he was the Chancellor's Eminent Professor at UNC Chapel Hill, as well as the Director of the Renaissance Computing Institute (RENCI) and the Chancellor's Senior Advisor for Strategy and Innovation for UNC Chapel Hill.  Prior to that, he was Gutgsell Professor and Head of the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC) and Director of the National Center for Supercomputing Applications (NCSA). He was also one of the principal investigators and chief architect for the NSF TeraGrid.  He received his PhD in computer science in 1983 from Purdue University.  Dr. Reed served as a member of the President's Council of Advisors on Science and Technology (PCAST) and the President's Information Technology Advisory Committee (PITAC).