WoDet 2013 Invited Talks

The Imperative of Disciplined Parallelism: A Hardware Architect’s Perspective
Sarita Adve, University of Illinois at Urbana-Champaign

Abstract: With the end of Dennard scaling, converting the bounty of Moore’s law into usable performance will require opening up the hardware-software interface. Exposing parallelism was the first step. The continued use of “wild shared memory” programming models, however, undermines the efficiencies that can be achieved through parallelism. This talk will discuss how disciplined (shared memory) programming models can address such inefficiencies and are imperative for a path towards energy-proportional computing. In discussing the meaning and the rewards of discipline, I will also examine myths such as “some data races are benign,” “discipline requires determinism,” “GPUs (or specialization) will solve all our problems,” and “legacy software will kill all innovation.” I will draw upon my experiences from the DeNovo hardware project, the Deterministic Parallel Java software project, more recent work on disciplined interfaces and hardware for heterogeneous systems, and work on memory models.
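
As a minimal illustration of the first myth (an illustrative example, not taken from the talk): the flag-based hand-off below is the kind of idiom often defended as a “benign” race. Under the data-race-free discipline that the C++ and Java memory models adopt, writing it with a plain bool flag is a data race and therefore undefined behavior in C++; the compiler or hardware may reorder the two writes, so the consumer can observe the flag set while the data is still stale. The disciplined version uses an atomic flag with release/acquire ordering, which makes the program data-race-free and guarantees that the consumer sees the data.

    // Disciplined (data-race-free) flag hand-off in C++.
    // The racy variant would declare `bool ready = false;` and use plain
    // loads and stores; that version has a data race and undefined behavior.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;                          // ordinary data, published via the flag
    std::atomic<bool> ready{false};        // synchronization is explicit, not "wild"

    void producer() {
      data = 42;                           // ordered before the release store below
      ready.store(true, std::memory_order_release);
    }

    void consumer() {
      while (!ready.load(std::memory_order_acquire)) { /* spin */ }
      assert(data == 42);                  // guaranteed: the acquire load synchronizes
    }                                      // with the release store

    int main() {
      std::thread t1(producer), t2(consumer);
      t1.join();
      t2.join();
      return 0;
    }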

Bio: Sarita Adve is Professor of Computer Science at the University of Illinois at Urbana-Champaign. Her research interests are in computer architecture and systems, parallel computing, and power- and reliability-aware systems. She co-developed the memory models for the C++ and Java programming languages based on her early work on data-race-free models. She received the Anita Borg Institute Women of Vision Award in the innovation category in 2012, was named an IEEE Fellow in 2012 and an ACM Fellow in 2010, received the ACM SIGARCH Maurice Wilkes Award in 2008, was named a University Scholar by the University of Illinois in 2004, and received an Alfred P. Sloan Research Fellowship in 1998. She serves on the boards of the Computing Research Association and ACM SIGARCH. She received her Ph.D. in Computer Science from the University of Wisconsin-Madison in 1993.

Internally Deterministic Parallel Algorithms
Guy Blelloch, Carnegie Mellon University

Abstract: An internally deterministic parallel program is one in which the full trace of all operations and partial results is deterministic. In addition to returning deterministic results, internal determinism has many potential advantages, including ease of reasoning about the code, ease of verifying correctness, ease of debugging, ease of defining invariants, ease of defining good coverage for testing, and ease of reasoning about performance.  On the other hand, requiring internal determinism might make programs more complicated and/or less efficient.
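
As an illustrative contrast (a sketch, not code from the talk): both functions below compute the same sum of a vector, so both are deterministic in their output, but only the second is internally deterministic. The atomic-counter version produces a different interleaving of updates (a different trace) from run to run, while the divide-and-conquer reduction always computes the same partial sums and combines them in the same tree order, so its full trace is fixed by the input alone.

    #include <atomic>
    #include <cstddef>
    #include <future>
    #include <numeric>
    #include <vector>

    // Externally deterministic only: the final value is fixed, the trace is not.
    long sum_with_atomic(const std::vector<long>& a) {
      std::atomic<long> total{0};
      std::size_t half = a.size() / 2;
      auto first_half = std::async(std::launch::async, [&] {
        for (std::size_t i = 0; i < half; ++i) total.fetch_add(a[i]);   // update order varies
      });
      for (std::size_t i = half; i < a.size(); ++i) total.fetch_add(a[i]);
      first_half.wait();
      return total.load();
    }

    // Internally deterministic: the partial sums and the shape of the combine
    // tree depend only on the input, never on scheduling.
    long sum_reduce(const std::vector<long>& a, std::size_t lo, std::size_t hi) {
      if (hi - lo < 1024)
        return std::accumulate(a.begin() + lo, a.begin() + hi, 0L);
      std::size_t mid = lo + (hi - lo) / 2;
      auto left = std::async(std::launch::async, sum_reduce, std::cref(a), lo, mid);
      long right = sum_reduce(a, mid, hi);
      return left.get() + right;            // combine in a fixed order
    }

    int main() {
      std::vector<long> a(16384, 1);
      return (sum_with_atomic(a) == sum_reduce(a, 0, a.size())) ? 0 : 1;
    }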

In this talk I will cover some of our work on developing and analyzing algorithms that are fully internally deterministic.  I will describe a variety of techniques for developing fast internally deterministic algorithms, including writing in a functional style, using commutative operations, and a technique called deterministic reservations.  We have used these techniques to design and implement parallel algorithms for a reasonably wide class of problems, including sorting, Delaunay triangulation, nearest neighbors, minimum spanning tree, breadth-first search, n-body simulation, dictionaries, triangle-ray intersection, set cover, graph separators, suffix arrays, suffix trees, and convex hull.  Most of these problems are part of a problem-based benchmark suite we have developed for comparing algorithmic approaches.  I will report on timings and show code examples.
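
As a concrete sketch of deterministic reservations (the round and phase structure follows the description above; the code, the problem chosen, and names such as writeMin are illustrative assumptions, not the authors’ library): each remaining iteration of a greedy loop first reserves the shared locations it needs by stamping them with its index via a priority write, and then commits only if it still holds all of its reservations, with ties always broken toward the smaller index. Losers retry in a later round, so the final result is exactly the result of running the loop sequentially. The example applies the pattern to greedy maximal matching; the two phases are written as ordinary loops for clarity, whereas a real implementation runs each phase in parallel over the current round.

    #include <atomic>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <limits>
    #include <utility>
    #include <vector>

    constexpr int64_t INF = std::numeric_limits<int64_t>::max();

    // Priority write: the cell ends up holding the smallest index written to it.
    void writeMin(std::atomic<int64_t>& cell, int64_t idx) {
      int64_t cur = cell.load(std::memory_order_relaxed);
      while (idx < cur &&
             !cell.compare_exchange_weak(cur, idx, std::memory_order_relaxed)) {
        // compare_exchange_weak refreshes `cur` on failure; retry while we still win.
      }
    }

    int main() {
      // Edges are processed "as if" in index order (the sequential greedy order).
      std::vector<std::pair<int, int>> edges = {
          {0, 1}, {1, 2}, {2, 3}, {3, 0}, {0, 2}, {1, 3}};
      int nVertices = 4;

      std::vector<std::atomic<int64_t>> reservation(nVertices);
      for (auto& r : reservation) r.store(INF);
      std::vector<bool> matched(nVertices, false);
      std::vector<bool> inMatching(edges.size(), false);

      // Iterations (edges) still to be decided, kept in index order.
      std::vector<int64_t> active(edges.size());
      for (std::size_t i = 0; i < edges.size(); ++i) active[i] = (int64_t)i;

      while (!active.empty()) {
        // Reserve phase: every active edge stamps its index on both endpoints.
        // (Written sequentially here; in practice this loop is parallel.)
        for (int64_t i : active) {
          auto [u, v] = edges[i];
          if (matched[u] || matched[v]) continue;   // retired in the commit phase below
          writeMin(reservation[u], i);
          writeMin(reservation[v], i);
        }

        // Commit phase: an edge joins the matching only if it won both reservations;
        // edges that lost a tie to a smaller index retry in the next round.
        std::vector<int64_t> retry;
        for (int64_t i : active) {
          auto [u, v] = edges[i];
          if (matched[u] || matched[v]) {
            // A smaller-indexed edge already took an endpoint: retire this edge and
            // release any reservation it still holds so that later edges can win.
            if (reservation[u].load() == i) reservation[u].store(INF);
            if (reservation[v].load() == i) reservation[v].store(INF);
          } else if (reservation[u].load() == i && reservation[v].load() == i) {
            matched[u] = matched[v] = true;         // same outcome as sequential greedy
            inMatching[i] = true;
          } else {
            retry.push_back(i);
          }
        }
        active.swap(retry);
      }

      for (std::size_t i = 0; i < edges.size(); ++i)
        if (inMatching[i])
          std::cout << "edge " << i << " (" << edges[i].first << ","
                    << edges[i].second << ") is in the matching\n";
      return 0;
    }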

Bio: Guy Blelloch is a Professor of Computer Science at Carnegie Mellon University.  He received his B.S. and B.A. from Swarthmore College in 1983, and his M.S. and Ph.D. from MIT in 1986 and 1988, respectively.  His research is in the areas of algorithms and programming with an emphasis on parallelism.  He is an ACM Fellow and conference chair for the ACM Symposium on Parallelism in Algorithms and Architectures.