Live Webcast: 15th Annual Charm++ Workshop

Charm++ Tutorial at HPCCamp 2017 - ECAR 2017

High Performance Computing Camp - Escuela de Computación de Alto Rendimiento

Techniques and methodology for parallel programming - Module 4: Programming with parallel objects

Rescheduled: now Sep 18-29, 2017, in Buenos Aires, Argentina

Detailed Program:

Day 1: Parallel Objects Programming Fundamentals. Introduction to basic concepts: overdecomposition, asynchrony, migratability, and adaptivity. The parallel objects model and its advantages over traditional methods. Introduction to the Charm++ programming language. The Charm++ programming and execution model. Installation of Charm++ and associated libraries. Basic Charm++ code samples. Use and properties of chare arrays.
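The chare-array material from Day 1 can be illustrated with a minimal sketch. This is a hedged outline, not a complete program: the module, class, and method names (hello, Hello, sayHi) are hypothetical, and the build steps with charmc are omitted.

```cpp
// hello.ci -- interface file declaring a main chare and a 1D chare array
mainmodule hello {
  mainchare Main {
    entry Main(CkArgMsg* m);
  };
  array [1D] Hello {
    entry Hello();
    entry void sayHi(int from);
  };
};

// hello.C -- each array element is an independent, migratable object;
// entry-method invocations arrive as asynchronous messages
class Hello : public CBase_Hello {
public:
  Hello() {}
  void sayHi(int from) {
    CkPrintf("Hello from element %d (invoked by %d)\n", thisIndex, from);
  }
};
```

The runtime, not the programmer, decides which processor each element lives on, which is what makes overdecomposition and migration possible.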

Day 2: Performance Analysis and Load Balancing. Introduction to Projections, a performance analysis tool. Visualizing executions and analysis of experimental results. Performance bottleneck detection. Introduction to load balancing. Object migration and PUP methods. Load balancing strategies in Charm++. Use of different load balancing strategies for particular problems.
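Object migration, covered on Day 2, hinges on PUP (Pack/UnPack) methods, which describe how to serialize a chare's state so the runtime can move it between processors. A hedged sketch with hypothetical member names:

```cpp
// The single pup() routine serves packing, unpacking, and sizing: the
// runtime drives it with a PUP::er in the appropriate mode.
class Block : public CBase_Block {
  int step;
  std::vector<double> data;
public:
  void pup(PUP::er& p) {
    p | step;
    p | data;  // Charm++ provides PUP support for common STL containers
  }
};
```

Because one routine handles all three directions, the serialization code cannot drift out of sync between pack and unpack.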

Day 3: Advanced Programming with Charm++. Advanced programming mechanisms in Charm++. Multidimensional array usage and chare groups. Introduction to checkpointing and its applications.

Day 4: High-Level Programming with Charm++. Introduction to Structured Dagger (SDAG), a tool for high-level programming in Charm++. Survey of other high-level languages in the Charm++ ecosystem. Presentation of real applications using Charm++.
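Structured Dagger, introduced on Day 4, expresses message-driven control flow declaratively in the interface file. A hedged fragment: Worker, recvHalo, and the counters iter/maxIter are hypothetical names, with the counters assumed to be declared as members of the C++ class.

```cpp
// .ci fragment: 'when' suspends control flow until a matching message
// arrives; the [iter] reference number pairs each incoming message with
// the iteration that expects it.
array [1D] Worker {
  entry Worker();
  entry void run() {
    for (iter = 0; iter < maxIter; iter++) {
      serial { sendHalo(); }
      when recvHalo[iter](int it, double val)
        serial { computeStep(val); }
    }
  };
  entry void recvHalo(int it, double val);
};
```

This replaces hand-written message counters and state machines with a readable description of the dependence structure.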

Beta Release of Charm++ 6.8.0

Hello everyone!

We're pleased to announce a beta release of Charm++ in advance of the upcoming version 6.8.0. We ask that users take this opportunity to test the latest code with their applications and report any issues encountered.

The code for this release can be obtained by:

  git clone
  git checkout v6.8.0-beta2

(Beta 1 was not announced due to bugs found in internal testing)

We have also posted corresponding updated Java binaries of Projections and CharmDebug.

Among over 700 commits made since the release of version 6.7.1, some of the larger and more exciting improvements in the system include:

  • Calls to entry methods taking a single fixed-size parameter can now automatically be aggregated and routed through the TRAM library by marking them with the [aggregate] attribute.
  • Calls to parameter-marshalled entry methods with large array arguments can ask for asynchronous zero-copy send behavior with an 'rdma' tag in the parameter's declaration.
  • The runtime system now integrates an OpenMP runtime library so that code using OpenMP parallelism will dispatch work to idle worker threads within the Charm++ process.
  • Applications can ask the runtime system to perform automatic high-level end-of-run performance analysis by linking with the '-tracemode perfReport' option.
  • Added a new dynamic remapping/load-balancing strategy, GreedyRefineLB, that offers high result quality and well-bounded execution time.
  • Charm++ programs can now define their own main() function, rather than using a generated implementation from a mainmodule/mainchare combination. This extends the existing Charm++/MPI interoperation feature.
  • GPU manager now creates one instance per OS process and scales the pre-allocated memory pool size according to the GPU memory size and number of GPU manager instances on a physical node.
  • Several GPU Manager API changes including:
    • Replaced references to global variables in the GPU manager API with calls to functions.
    • The user is no longer required to specify a bufferID in the dataInfo struct.
    • Replaced calls to kernelSelect with direct invocation of functions passed via the work request object (allows CUDA to be built with all programs).
  • Added support for malleable jobs that can dynamically shrink and expand the set of compute nodes hosting Charm++ processes.
  • Greatly expanded and improved reduction operations:
    • Added built-in reductions for all logical and bitwise operations on integer and boolean input.
    • Reductions over groups and chare arrays that apply commutative, associative operations (e.g. MIN, MAX, SUM, AND, OR, XOR) are now processed in a streaming fashion. This reduces the memory footprint of reductions. User-defined reductions can opt into this mode as well.
    • Added a new 'Tuple' reducer that allows combining multiple reductions of different input data and operations from a common set of source objects to a single target callback.
    • Added a new 'Summary Statistics' reducer that provides count, mean, and standard deviation using a numerically-stable streaming algorithm.
  • Added a '++quiet' option to suppress charmrun and charm++ non-error messages at startup.
  • Calls to chare array element entry methods with the [inline] tag now avoid copying their arguments when the called method takes its parameters by const&, offering a substantial reduction in overhead in those cases.
  • Synchronous entry methods that block until completion (marked with the [sync] attribute) can now return any type that defines a PUP method, rather than only message types.
  • Improved and expanded topology-aware spanning tree generation strategies, including support for runs on a torus with holes, such as Blue Waters and other Cray XE/XK systems.

Future portability/compatibility note:

Please be aware that all feature releases of the Charm++ system following the final 6.8 release will require full C++11 support from the compiler and standard library in use.

NAMD 1 of 10 selected for the Aurora Early Science Program

Aurora Early Science Program

NAMD is one of 10 computational science and engineering research projects selected for the ALCF Aurora Early Science Program. Aurora, expected to arrive in 2018, will be a massively parallel, manycore Intel-Cray supercomputer. For more information about this program, click here.

The project "Free energy landscapes of membrane transport proteins" will use NAMD and is led by Benoit Roux of The University of Chicago, in collaboration with the NIH Center for Macromolecular Modeling and Bioinformatics at the Beckman Institute, University of Illinois.

Xiang Ni defended her dissertation
Last summer, Xiang Ni successfully defended her thesis titled “Mitigation of Failures in High Performance Computing via Runtime Techniques”. Her thesis leverages runtime system and compiler techniques to mitigate a significant fraction of failures automatically with low overhead. The main goals of various system-level fault tolerance strategies designed in this thesis are: reducing the extra cost added to application execution while improving system reliability; automatically adjusting fault tolerance decisions without user intervention based on environmental changes; protecting applications not only from fail-stop failures but also from silent data corruptions.
Harshitha defended her dissertation
Harshitha Menon successfully defended her thesis, titled "Adaptive Load Balancing for HPC Applications", over the summer. Her thesis addresses the load imbalance problem in HPC applications. It presents various load balancing algorithms and the use of adaptive runtime techniques, along with machine learning, to automate load balancing decisions. Abstract.
PPL and Charm++ at SC16
Phil Miller defended his dissertation
Over the summer, Philip Miller successfully defended his dissertation titled "Reducing Synchronization in Distributed Parallel Programs". Abstract
Charm++ power and resilience work featured in the IEEE Computer Oct ’16 issue!
Over the years, PPL has carried out research on optimizing power, energy, and temperature without sacrificing performance. Automatic runtime adaptation through the Charm++ runtime system has been a key foundation of all the approaches explored. Resilience-related solutions are enabled by the same runtime approach. A broad summary of our research and its connection with adaptive runtimes was published recently in IEEE Computer.

Link to the web article: Power, Reliability, Performance: One System to Rule Them All [IEEE Computer October 2016]
Charm++ Release 6.7.1

Changes in this release are primarily bug fixes for 6.7.0. The major exception is AMPI, which has seen changes to its extension APIs and now complies with more of the MPI standard. A brief list of changes follows:

Charm++ Bug Fixes

  • Startup and exit sequences are more robust
  • Error and warning messages are generally more informative
  • CkMulticast’s set and concat reducers work correctly

AMPI Features

  • AMPI’s extensions have been renamed to use the prefix AMPI_ instead of MPI_ and to generally follow MPI’s naming conventions
  • AMPI_Migrate(MPI_Info) is now used for dynamic load balancing and all fault tolerance schemes (see the AMPI manual)
  • AMPI officially supports MPI-2.2, and also implements the non-blocking collectives and neighborhood collectives from MPI-3.1

Platforms and Portability

  • Cray regularpages build target has been fixed
  • Clang compiler target for BlueGene/Q systems added
  • Comm. thread tracing for SMP mode added
  • AMPI’s compiler wrappers are easier to use with autoconf and cmake
Jonathan Lifflander defends his dissertation
Jonathan Lifflander successfully defended his dissertation, entitled "Optimizing Work Stealing Algorithms with Scheduling Constraints". His thesis examines methodologies to improve the efficiency of fork-join programming models used with work-stealing schedulers by exploiting persistence in iterative scientific benchmarks. It demonstrates a highly scalable implementation of distributed-memory work stealing that uses a novel tracing framework to record task execution locations in the presence of random steals, while incurring very low overheads. The same tracing framework is used to optimize work stealing on NUMA architectures. Finally, by introducing data-effect annotations to fork-join models in conjunction with runtime tracing, his work enables fork-join schedulers to execute ahead of syncs to accrue cache-locality benefits.
Nikhil defends his dissertation
PPLer Nikhil Jain has successfully defended his dissertation, titled "Optimization of Communication Intensive Applications on HPC Networks". In an hour-long public presentation given to his thesis committee, which consisted of Illinois Professors Kale, Gropp, and Torrellas and OSU Prof. Panda, Nikhil described the importance of communication in HPC applications and presented his two-step approach for optimizing it on HPC networks. The first step is the use of machine learning to perform diagnostic studies that help identify important metrics. The second step is to use parallel discrete event simulation tools, developed based on the findings of the first step, to mimic communication flow on HPC networks. The thesis presents several example use cases of these tools, comparing HPC networks with different topologies and predicting the impact of changes in network parameters. In addition to this methodology, the thesis also contains work on topology-aware mapping, job placement, and communication algorithms. More details on Nikhil’s research and his thesis can be found at his personal home page.
Charm++ and AMPI BoF at SC15
PPL at SC15
Charm++ Tutorial at SBAC-PAD 2015
Celso, Laércio, and Esteban will present the Charm++ tutorial at the 27th annual SBAC-PAD on October 21st in Santa Catarina, Brazil. Link to info
Akhil Langer receives Kenichi Miura Award 2015
PPLer Akhil Langer has received the 2015 Kenichi Miura Award. This award honors a graduate student for outstanding accomplishments in High Performance Computing. Akhil works with Prof. Laxmikant Kale and Prof. Udatta Palekar on several aspects of high performance computing, including power and energy optimizations, stochastic optimization, load balancing, and adaptive mesh refinement. Akhil's thesis work provides a computational engine for many real-time and dynamic problems faced by the US Air Mobility Command. It is expected that this work will provide a springboard for more robust problem solving with HPC in many logistics and planning problems.
Charm++ tutorial in Brazil
Laércio Lima Pilla is leading a Charm++ tutorial as part of a regional gathering on HPC, April 22-24, 2015. Laércio is a former student of Prof. Navaux (Federal University of Rio Grande do Sul) and Prof. Mehaut (University of Grenoble). He is now an associate professor at the Federal University of Santa Catarina in Brazil.
The Coding Illini team, including PPLer Phil Miller, wins the 2014 PUCC
PPL @ SC'14
Lifflander et al. Win Best Student Paper at CLUSTER'14

Jonathan Lifflander, Esteban Meneses, Harshitha Menon, Phil Miller, Sriram Krishnamoorthy, and Laxmikant V. Kale have won the best student paper award at CLUSTER'14 in Madrid, Spain!

This was awarded for their fault-tolerance paper, which describes a new theoretical model of dependencies that reduces the amount of data required to perform deterministic replay. Using the algorithm presented, the paper demonstrates 2x better performance and scalability up to 128K cores of the BG/P 'Intrepid'. The paper is entitled: Scalable Replay with Partial-Order Dependencies for Message-Logging Fault Tolerance.

Harshitha to receive George Michael Memorial HPC Fellowship at SC'14
Harshitha Menon, PhD candidate advised by Prof. Laxmikant Kale, is a recipient of the 2014 ACM/IEEE-CS George Michael Memorial High Performance Computing Fellowship. This prestigious fellowship honors exceptional PhD students around the world whose research focus is on high performance computing, networking, storage, and large-scale data analysis. Fellowship winners are selected based on overall potential for research excellence and academic progress. This fellowship provides a $5000 honorarium and the award will be presented at the SC’14 Awards Ceremony.

“I am honored to receive this award.” said Harshitha. “It is a great opportunity to publicize my research work within the HPC community.”

Harshitha's research focuses on developing scalable load balancing algorithms and adaptive runtime techniques to improve the performance of large-scale dynamic applications. Her research covers performance optimizations of ChaNGa, a cosmology simulation application that is a collaborative research project between PPL and astrophysicists at the University of Washington.

Also this year, Harshitha received the 2014 Google Anita Borg Memorial Scholarship and in 2012 she was selected as a Siebel Scholar.

“This award will be another prestigious feather in Harshitha’s cap!” said Prof. Laxmikant Kalé, PPL director. “Just a few months ago she won the Google Anita Borg scholarship. She has been doing excellent work in parallel computing and I’m especially proud of her efforts in scaling ChaNGa, our computational cosmology application, up to 512K cores.”

This is the third year in a row that a PPL student has been recognized with the George Michael Memorial HPC Fellowship.

See announcement reprint at
Ehsan receives Andrew and Shana Laursen Fellowship at Illinois
Ehsan has been selected for the Andrew and Shana Laursen Fellowship for fall 2014. About the fellowship: the Andrew and Shana Laursen Fellowship was established in 2001 to provide meaningful assistance in the recruitment and support of top graduate students to the Department of Computer Science, and to improve the quality of education and research at the University of Illinois.
Xiang wins Best Poster Award at LLNL Student Poster Symposium
Title: Lossy Compression for Checkpointing: Fallible or Feasible?

Large checkpoints pose a challenge as HPC applications scale to hundreds of thousands of processors because of the space they consume and the time required to transfer them to stable storage. To address this problem, this poster proposes the use of lossy compression to reduce checkpoint size and studies the trade-off between the loss of precision and the compression ratio. As a proof of concept, for ChaNGa (a cosmology code developed on Charm++), we show that moderate lossy compression reduces checkpoint size by 3-5x while maintaining correctness.

This poster, by Xiang Ni, a PPLer interning at LLNL, was judged one of the best posters at Lawrence Livermore National Laboratory's annual student poster symposium, which hosted approximately 100 posters.
6 Papers Accepted at Supercomputing'2014
PPL has six papers accepted in the technical program of the prestigious Supercomputing 2014 (SC14) conference! This is a record for us, although PPL had 4 papers in some previous years (2013, 2011). The 6 papers are:

We are looking forward to a strong presence at SC14 in New Orleans.
Abhishek Gupta defends his PhD thesis on HPC in cloud
PPLer Abhishek Gupta has successfully defended his PhD thesis on effective High Performance Computing (HPC) in the cloud. Here is his thesis abstract: The advantages of the pay-as-you-go model, elasticity, and the flexibility and customization offered by virtualization make cloud computing an attractive option for meeting the needs of some HPC users. However, there is a mismatch between cloud environments and HPC requirements. The poor interconnect and I/O performance in the cloud, HPC-agnostic cloud schedulers, and the inherent heterogeneity and multi-tenancy of clouds are some bottlenecks for HPC in the cloud. This thesis goes beyond the research question "what is the performance of HPC in the cloud?" and explores "how can we perform effective and efficient HPC in the cloud?" To this end, we adopt the complementary approaches of making clouds HPC-aware and making the HPC runtime system cloud-aware. Through intelligent application-to-platform mapping, HPC-aware VM placement, interference-aware VM consolidation, cloud-aware HPC load balancing, and malleable jobs, we demonstrate significant benefits for both users and cloud providers in terms of cost (up to 60%), performance (up to 45%), and throughput (up to 32%).
Ehsan receives 3rd place award for ACM Student Research Competition Grand Finals 2014
Ehsan placed 3rd in the graduate category of the ACM Student Research Competition (SRC) Grand Finals 2014. The ACM SRC is held every year at several major conferences in different areas of computer science, and the winners compete in the Grand Finals round (more info here). Ehsan won the ACM SRC at SC'13, which qualified him to compete in the 2014 Grand Finals. His research is entitled "Structure-Adaptive Parallel Solution of Sparse Triangular Linear Systems." He will receive his award at the ACM Awards Banquet.
Harshitha Menon receives the Anita Borg Scholarship for 2014
Harshitha is one of the recipients of the Google Anita Borg Memorial Scholarship for 2014. More information about the scholarship can be found here
Nikhil receives IBM PhD Fellowship Award
IBM has announced PPLer Nikhil Jain as one of the recipients of the IBM PhD Fellowship Award for the academic year 2014-2015. More information on the fellowship can be obtained here. Nikhil's research page is here.
Lukasz receives ORNL distinguished software award
Developers of the Scalable Heterogeneous Computing (SHOC) Benchmark Suite, including Lukasz Wesolowski of PPL, have received an award from Oak Ridge National Laboratory for the most distinguished software released in the last five fiscal years in the Computer Science and Mathematics Division. SHOC, developed by a team led by Jeff Vetter of ORNL, is a collection of CUDA/OpenCL/MPI benchmarks that test the performance and stability of modern heterogeneous computing systems and clusters comprising GPUs and Intel Xeon Phi accelerators.
Akhil et al win Best Paper Award at HiPC 2013
Akhil et al.'s work on the parallelization of stochastic integer optimization has won the Best Paper Award at the 20th IEEE International Conference on High Performance Computing, HiPC 2013 (details here). This work was done in collaboration with Prof. Udatta Palekar from the Department of Business at UIUC and is supported by the MITRE Corp. and AMC. The talk on the paper can be found here.
Akhil et al win Two Best Poster Awards at HiPC 2013
Akhil et al. have won two SRS Best Poster Awards at HiPC 2013 for their work on Scalable and Asynchronous Algorithms for Structured Adaptive Mesh Refinement (an earlier version of the paper can be found here). The talk on the poster can be found here.