Fourth International Workshop on Visual Performance Analysis (VPA 17)

Denver, Colorado, USA

November 17, 2017

Held in conjunction with SC17: The International Conference on High Performance Computing, Networking, Storage and Analysis, in cooperation with TCHPC: The IEEE Computer Society Technical Consortium on High Performance Computing

Over the last decades, an incredible amount of resources has been devoted to building ever more powerful supercomputers. However, exploiting the full capabilities of these machines is becoming exponentially more difficult with each new generation of hardware. To help understand and optimize the behavior of massively parallel simulations, the performance analysis community has created a wide range of tools and APIs to collect performance data, such as flop counts, network traffic, or cache behavior, at the largest scales. However, this success has created a new challenge: the resulting data is far too large and too complex to be analyzed in a straightforward manner. Therefore, new automatic analysis and visualization approaches must be developed to allow application developers to intuitively understand the multiple, interdependent effects that their algorithmic choices have on the final performance.

This workshop will bring together researchers from the fields of performance analysis and visualization to discuss new approaches for applying visualization and visual analytics techniques to large-scale applications.

Workshop Topics

  • Scalable displays of performance data
  • Data models to enable scalable visualization
  • Graph representation of unstructured performance data
  • Presentation of high-dimensional data
  • Visual correlations between multiple data sources
  • Human-Computer Interfaces for exploring performance data
  • Multi-scale representations of performance data for visual exploration

Call for Papers

We solicit 8-page full papers as well as 4-page short papers that focus on techniques at the intersection of performance analysis and visualization and that either use visualization techniques to display large-scale performance data or develop new visualization or visual analytics methods that help create new insights.

Papers must be submitted in PDF format (readable by Adobe Acrobat Reader 5.0 and higher) and formatted for 8.5” x 11” (U.S. Letter). Submissions are limited to 8 pages in the ACM format, using the sample-sigconf template. The 8-page limit includes figures, tables, and references.

All papers must be submitted through EasyChair at:

https://easychair.org/conferences/?conf=vpa2017

Important Dates

  • Submission deadline (extended): August 21, 2017 (AoE)
  • Notification of acceptance: September 18, 2017 (AoE)
  • Camera-ready deadline: October 9, 2017 (AoE)
Technical Program

All times refer to Friday, Nov. 17, 2017

8:30am - 8:35am ---- Welcome and Introduction ----
8:35am - 9:35am
Keynote Talk: Visual Performance Analysis for Extremely Heterogeneous Systems
Lucy Nowell, PhD, Program Manager, Department of Energy Advanced Scientific Computing Research
Extreme heterogeneity is the result of using multiple types of processors, accelerators, memory, and storage in a single computing platform or environment that must support an expanding variety of application workflows to meet the needs of increasingly heterogeneous users. Extremely heterogeneous supercomputers are likely to be acquired by the ASCR-supported supercomputing facilities as we reach the end of Moore's Law while still facing rapidly increasing computational and data-intensive requirements. The exponential increase in system complexity will make it essential for system administrators and software developers to have new tools that help them understand the behavior of extremely heterogeneous supercomputing environments and the applications that run in them. The vast bandwidth of visual perception makes the combination of visualization and performance analysis essential.

Dr. Lucy Nowell is a Computer Scientist and Program Manager in the Advanced Scientific Computing Research (ASCR) program in the Department of Energy's Office of Science. Until recently, she managed a research portfolio emphasizing scientific data management, analysis, and visualization. A change of assignment has her now focused on reshaping the ASCR Computer Science program to address challenges in the realm of operating and runtime systems and programming models/environments that will result from exponential increases in the complexity of post-Moore-era supercomputers. Previously she served as a Program Director in NSF's Office of Cyberinfrastructure and as a Program Manager for the Department of Defense, managing projects related to information analysis and visualization. Her MS and PhD in Computer Science are from Virginia Tech. She has a BA and MA in Theatre from the University of Alabama – Tuscaloosa and an MFA from the University of New Orleans. Her own research focused on information visualization for digital libraries and science applications, drawing on her background in visual art and cognitive/perceptual psychology, as well as computer science.
9:35am - 10:00am
Paper Talk: Chad Wood, Matthew Larsen, Alfredo Gimenez, Cyrus Harrison, Todd Gamblin and Allen Malony. Projecting Performance Data Over Simulation Geometry Using SOSflow and Alpine
The performance of HPC simulation codes is often tied to their simulated domains; e.g., properties of the input decks, boundaries of the underlying meshes, and parallel decomposition of the simulation space. A variety of research efforts have demonstrated the utility of projecting performance data onto the simulation geometry to enable analysis of these kinds of performance problems. However, current methods to do so are largely ad-hoc and limited in terms of extensibility and scalability. Furthermore, few methods enable this projection online, resulting in large storage and processing requirements for offline analysis. We present a general, extensible, and scalable solution for in-situ (online) visualization of performance data projected onto the underlying geometry of simulation codes. Our solution employs the scalable observation system SOSflow with the in-situ visualization framework ALPINE to automatically extract simulation geometry and stream aggregated performance metrics to respective locations within the geometry at runtime. Our system decouples the resources and mechanisms to collect, aggregate, project, and visualize the resulting data, thus mitigating overhead and enabling online analysis at large scales. Furthermore, our method requires minimal user input and modification of existing code, enabling general and widespread adoption. [PDF]
10:00am - 10:30am ---- Coffee Break ----
10:30am - 11:10am
Panel Discussion: Challenges and the Future of HPC Performance Visualization
Panelists:
  • Holger Brunst, TU Dresden
  • Katherine Isaacs, University of Arizona
  • Matthew Legendre, Lawrence Livermore National Laboratory
  • David Richards, Lawrence Livermore National Laboratory
11:10am - 11:35am
Paper Talk: Nico Reissmann, Magnus Jahre and Ananya Muddukrishna. Towards Aggregated Grain Graphs
Grain graphs simplify OpenMP performance analysis by visualizing performance problems from a fork-join perspective that is familiar to programmers. However, it is tedious to navigate and diagnose problems in large grain graphs with thousands of task and parallel for-loop chunk instances. We present an aggregation method that matches recurring patterns in grain graphs and groups related nodes together, reducing graphs of any size to one root group. The aggregated grain graph is then navigated by progressively uncovering groups and analyzing only those groups that have problems. This enhances productivity by enabling programmers to understand program structure and problems in large grain graphs with less effort than before. [PDF]
11:35am - 12:00pm
Paper Talk: Matthias Diener, Sam White and Laxmikant Kale. Visualizing, measuring, and tuning Adaptive MPI parameters
Adaptive MPI (AMPI) is an advanced MPI runtime environment that offers several features over traditional MPI runtimes, which can lead to better utilization of the underlying hardware platform and therefore higher performance. These features are overdecomposition through virtualization, and load balancing via rank migration. However, choosing which of these features to use, and finding the optimal parameters for them, is a challenging task, since different applications and systems may require different options. Furthermore, there is a lack of information about the impact of each option. In this paper, we present a new visualization of AMPI in its companion Projections tool, which depicts the operation of an MPI application and details the impact of the different AMPI features on its resource usage. We show how these visualizations can help to improve the efficiency and execution time of an MPI application. Applying optimizations indicated by the performance analysis to two MPI-based applications results in performance improvements of up to 18% from overdecomposition and load balancing. [PDF]
---- Closing ----
Steering Committee

Peer-Timo Bremer, Lawrence Livermore National Laboratory
Bernd Mohr, Juelich Supercomputing Center
Valerio Pascucci, University of Utah
Martin Schulz, Lawrence Livermore National Laboratory

Workshop Chairs

Fabian Beck, University of Duisburg-Essen
Abhinav Bhatele, Lawrence Livermore National Laboratory
Judit Gimenez, Barcelona Supercomputing Center
Joshua A. Levine, University of Arizona

Program Committee

Harsh Bhatia, Lawrence Livermore National Laboratory
Holger Brunst, TU Dresden
Alexandru Calotoiu, Technical University Darmstadt
Todd Gamblin, Lawrence Livermore National Laboratory
Marc-Andre Hermanns, Juelich Supercomputing Center
Kevin Huck, University of Oregon
Katherine Isaacs, University of Arizona
Yarden Livnat, University of Utah
Naoya Maruyama, Lawrence Livermore National Laboratory
Bernd Mohr, Juelich Supercomputing Center
Ananya Muddukrishna, KTH Royal Institute of Technology
Matthias Mueller, RWTH Aachen University
Valerio Pascucci, University of Utah
Paul Rosen, University of South Florida
Carlos Scheidegger, University of Arizona
Chad Steed, Oak Ridge National Laboratory