Invited Speakers

"The end of coding as we know it", Prof.Boyana Norris

Faced with increasingly parallel, heterogeneous, and diverse architectures, developers must weigh a growing number of tradeoffs. What data structure is best for a given computation? What happens when that same data must then be accessed in a different manner by another algorithm? How can different modules be coupled most effectively? Which library, and which specific function within it, is best for a given subcomputation? Which parallel programming model should be used? At present, most of these choices are made based on the developers’ expertise and experience and are subject to many external constraints, such as upcoming deadlines, available human and computational resources, and many other human factors. As a result, the resulting solution rarely realizes the full potential of the algorithm or the underlying architectures. Autotuning approaches offer a means to procrastinate on making these choices and delegate them instead to one or more tools that attempt to generate and fine-tune implementations automatically for a given target platform. In this talk I will present our autotuning approach, situate it within the current ontology of autotuning approaches, and discuss future autotuning and code-generation opportunities.
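
For context, the basic autotuning loop alluded to above can be sketched in a few lines: enumerate candidate parameters or implementation variants, time each one on the target platform, and keep the fastest. The Python below is only an illustrative sketch of that idea; the blocked-sum kernel, candidate block sizes, and function names are invented for the example and are not part of the speaker's toolchain.

    # Minimal autotuning sketch (illustrative only): pick the fastest
    # block size for a blocked vector sum by timing each candidate on
    # the machine it will actually run on.
    import time

    def blocked_sum(data, block):
        # Candidate implementation parameterized by one tuning knob.
        total = 0.0
        for start in range(0, len(data), block):
            total += sum(data[start:start + block])
        return total

    def time_once(fn, data, block):
        t0 = time.perf_counter()
        fn(data, block)
        return time.perf_counter() - t0

    def autotune(data, candidates, repeats=3):
        best_block, best_time = None, float("inf")
        for block in candidates:
            elapsed = min(time_once(blocked_sum, data, block)
                          for _ in range(repeats))
            if elapsed < best_time:
                best_block, best_time = block, elapsed
        return best_block, best_time

    if __name__ == "__main__":
        data = [1.0] * 1_000_000
        block, secs = autotune(data, candidates=[256, 1024, 4096, 16384])
        print(f"best block size: {block} ({secs:.4f} s)")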

Short Bio:

Boyana Norris has been an Assistant Professor in the Department of Computer and Information Science at the University of Oregon since September 2013. Before joining the University of Oregon, she was a Computer Scientist at Argonne National Laboratory and a Senior Fellow of the Computation Institute at the University of Chicago from March 2006 to August 2013. As leader of the performance group in Argonne's Mathematics and Computer Science Division, she led several projects on compiler-based tools for automatic differentiation, performance modeling and prediction, power-aware high-performance computing, and software engineering approaches for component-based, multi-language HPC software development. Her research interests center on novel approaches to developing and maintaining complex high-performance applications for rapidly evolving parallel architectures, with emphasis on methods and tools for automating the development, deployment, testing, and performance tuning of parallel scientific applications.

"Auto-tuning for unreliable HPC", Keita Teranishi, Ph.D.

Along with unprecedented parallelism and reduced component sizes, future extreme-scale HPC systems will no longer provide “reliable computing machines” to end users without large penalties in performance and power use. It is therefore essential for application developers themselves to manage resilience issues at the application level. However, the current practice of checkpoint/restart entails potential drawbacks in scalability and capability, suggesting the need for alternative resilient programming models.
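
For readers unfamiliar with the practice being critiqued, application-level checkpoint/restart in its simplest form looks like the sketch below: the application periodically persists its state so that, after a failure, it resumes from the last checkpoint rather than from the beginning. This is a minimal illustration only; the file name, interval, and loop body are assumptions for the example and do not represent any particular system.

    # Minimal application-level checkpoint/restart sketch (illustrative):
    # periodically persist loop state so a failed run can resume from the
    # last saved iteration instead of starting over.
    import json
    import os

    CHECKPOINT = "state.json"   # assumed checkpoint file name
    INTERVAL = 100              # checkpoint every 100 iterations (assumed)

    def load_state():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)
        return {"iteration": 0, "value": 0.0}

    def save_state(state):
        # Write to a temporary file first so a crash mid-write cannot
        # corrupt the previous good checkpoint.
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "w") as f:
            json.dump(state, f)
        os.replace(tmp, CHECKPOINT)

    def run(total_iterations=1000):
        state = load_state()
        for i in range(state["iteration"], total_iterations):
            state["value"] += 1.0 / (i + 1)   # stand-in for real work
            state["iteration"] = i + 1
            if state["iteration"] % INTERVAL == 0:
                save_state(state)
        save_state(state)
        return state["value"]

    if __name__ == "__main__":
        print(run())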

In this talk, I will introduce these programming models and the ongoing work at Sandia to enable the development of resilient applications and algorithms with them. We will then discuss how the auto-tuning research community can contribute to enhancing the capability of these new programming models and tools to optimize the complex trade-offs between application performance and reliability.
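
One classical example of such a performance/reliability trade-off, included here only as an illustration and not as part of the Sandia work described in the talk, is choosing how often to checkpoint: checkpointing too frequently wastes time, while checkpointing too rarely loses too much work when a failure occurs. Young's first-order formula balances the two; the cost and failure-rate numbers below are assumed for the example.

    # Young's first-order approximation of the optimal checkpoint interval:
    # tau ~= sqrt(2 * C * M), where C is the time to write one checkpoint
    # and M is the mean time between failures. Values are illustrative.
    import math

    def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
        return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

    if __name__ == "__main__":
        C = 60.0           # seconds per checkpoint (assumed)
        M = 6 * 3600.0     # mean time between failures: 6 hours (assumed)
        tau = optimal_checkpoint_interval(C, M)
        print(f"checkpoint roughly every {tau / 60:.1f} minutes")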

Short Bio:

Keita Teranishi has been a Principal Member of Technical Staff in the Scalable Modeling and Analysis Systems department at Sandia National Laboratories (SNL) since 2013. Before joining SNL, he was involved in several scientific library development projects at Cray Inc., including Cray’s automatic sparse kernels (CASK). He holds B.S. and M.S. degrees in computer science from the University of Tennessee, Knoxville (1998 and 2000), and a Ph.D. in computer science from The Pennsylvania State University (2004). His research interests are high-performance numerical linear algebra and resilience issues in high-performance computing.