Charm++ Tutorial at SBAC-PAD 2015
Celso, Laércio and Esteban will present the Charm++ tutorial at the 27th annual SBAC-PAD on October 21st in Santa Catarina, Brazil.
Akhil Langer receives Kenichi Miura Award 2015
PPLer Akhil Langer has received the 2015 Kenichi Miura Award. This award honors a graduate student for outstanding accomplishments in High Performance Computing. Akhil works with Prof. Laxmikant Kale and Prof. Udatta Palekar on several aspects of high performance computing, including power and energy optimizations, stochastic optimization, load balancing, and adaptive mesh refinement. Akhil's thesis work provides a computational engine for many real-time and dynamic problems faced by the US Air Mobility Command. This work is expected to provide the springboard for more robust HPC-based problem solving in many logistics and planning problems.
Charm++ tutorial in Brazil
Laércio Lima Pilla is leading a Charm++ tutorial as part of a regional gathering on HPC, April 22-24, 2015. Laércio is a former student of Prof. Navaux (Federal University of Rio Grande do Sul) and Prof. Mehaut (University of Grenoble). He is now an associate professor at the University of Santa Catarina in Brazil.
Lifflander et al. Win Best Student Paper at CLUSTER'14

Jonathan Lifflander, Esteban Meneses, Harshitha Menon, Phil Miller, Sriram Krishnamoorthy, and Laxmikant V. Kale have won the best student paper award at CLUSTER'14 in Madrid, Spain!

This was awarded for their fault-tolerance paper, which describes a new theoretical model for dependencies that reduces the amount of data required to perform deterministic replay. Using the algorithm presented, they demonstrate 2x better performance and scalability up to 128k cores of the BG/P system "Intrepid". The paper is entitled "Scalable Replay with Partial-Order Dependencies for Message-Logging Fault Tolerance."

Harshitha to receive George Michael Memorial HPC Fellowship at SC'14
Harshitha Menon, PhD candidate advised by Prof. Laxmikant Kale, is a recipient of the 2014 ACM/IEEE-CS George Michael Memorial High Performance Computing Fellowship. This prestigious fellowship honors exceptional PhD students around the world whose research focus is on high performance computing, networking, storage, and large-scale data analysis. Fellowship winners are selected based on overall potential for research excellence and academic progress. This fellowship provides a $5000 honorarium and the award will be presented at the SC’14 Awards Ceremony.

“I am honored to receive this award,” said Harshitha. “It is a great opportunity to publicize my research work within the HPC community.”

Harshitha's research focuses on developing scalable load balancing algorithms and adaptive runtime techniques to improve the performance of large-scale dynamic applications. Her research covers performance optimizations of the cosmology simulation application ChaNGa, a collaborative research project between PPL and astrophysicists at the University of Washington.

Also this year, Harshitha received the 2014 Google Anita Borg Memorial Scholarship and in 2012 she was selected as a Siebel Scholar.

“This award will be another prestigious feather in Harshitha’s cap!” said Prof. Laxmikant Kalé, PPL director. “Just a few months ago she won the Google Anita Borg scholarship. She has been doing excellent work in parallel computing and I’m especially proud of her efforts in scaling ChaNGa, our computational cosmology application, up to 512K cores.”

This is the third year in a row that a PPL student has been recognized with the George Michael Memorial HPC Fellowship.

Ehsan receives Andrew and Shana Laursen Fellowship at Illinois
Ehsan has been selected for the Andrew and Shana Laursen Fellowship for fall 2014. About the fellowship: the Andrew and Shana Laursen Fellowship was established in 2001 to provide meaningful assistance in the recruitment and support of top graduate students in the Department of Computer Science, and to improve the quality of education and research at the University of Illinois.
Xiang wins Best Poster Award at LLNL Student Poster Symposium
Title: Lossy Compression for Checkpointing: Fallible or Feasible?

Large checkpoints pose a challenge as HPC applications scale to hundreds of thousands of processors, because of the space they consume and the time required to transfer them to stable storage. To address this problem, this poster proposes the use of lossy compression to reduce checkpoint size and studies the trade-off between loss of precision and compression ratio. As a proof of concept, for ChaNGa (a cosmology code developed on Charm++), we show that moderate lossy compression reduces checkpoint size by 3-5x while maintaining correctness.
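The trade-off the poster studies can be illustrated with a small sketch. This is a hypothetical illustration, not the actual ChaNGa/Charm++ mechanism: it truncates low-order mantissa bits of double-precision state (the lossy step) and then applies ordinary lossless compression, comparing sizes and the worst-case precision loss.

```python
# Hypothetical sketch of lossy checkpoint compression (not the actual
# ChaNGa implementation): zero low-order mantissa bits of each double,
# then apply lossless compression to the now more-compressible bytes.
import random
import struct
import zlib

def truncate_mantissa(value: float, drop_bits: int) -> float:
    """Zero out the lowest `drop_bits` bits of the 52-bit mantissa."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    mask = ~((1 << drop_bits) - 1) & 0xFFFFFFFFFFFFFFFF
    (out,) = struct.unpack("<d", struct.pack("<Q", bits & mask))
    return out

random.seed(0)
state = [random.gauss(0.0, 1.0) for _ in range(10000)]  # mock checkpoint data

raw = struct.pack(f"<{len(state)}d", *state)
lossless = zlib.compress(raw, 9)                         # baseline: lossless only

lossy_state = [truncate_mantissa(x, 32) for x in state]  # keep ~20 mantissa bits
lossy = zlib.compress(struct.pack(f"<{len(lossy_state)}d", *lossy_state), 9)

max_err = max(abs(a - b) for a, b in zip(state, lossy_state))
print(f"raw: {len(raw)} B, lossless: {len(lossless)} B, lossy: {len(lossy)} B")
print(f"max absolute error after truncation: {max_err:.2e}")
```

On random double-precision data, lossless compression alone gains almost nothing, while the truncated version compresses substantially, at the cost of a small, bounded precision loss, which is the trade-off the poster quantifies for real checkpoint data.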

This poster by Xiang Ni, a PPLer interning at LLNL, was judged as one of the best posters at Lawrence Livermore National Laboratory's annual Student Poster Symposium, which hosted approximately 100 posters.
6 Papers Accepted at Supercomputing'2014
PPL has six papers accepted in the technical program of the prestigious Supercomputing 2014 conference! This is a record for us; PPL's previous best was four papers, in 2011 and 2013. The 6 papers are:

We are looking forward to a strong presence at SC14 in New Orleans.
Abhishek Gupta defends his PhD thesis on HPC in cloud
PPLer Abhishek Gupta has successfully defended his PhD thesis on effective High Performance Computing (HPC) in the Cloud. Here is his thesis abstract: The advantages of the pay-as-you-go model, elasticity, and the flexibility and customization offered by virtualization make cloud computing an attractive option for meeting the needs of some HPC users. However, there is a mismatch between cloud environments and HPC requirements. The poor interconnect and I/O performance in the cloud, HPC-agnostic cloud schedulers, and the inherent heterogeneity and multi-tenancy in the cloud are some bottlenecks for HPC in the cloud. This thesis goes beyond the research question "what is the performance of HPC in cloud?" and explores "how can we perform effective and efficient HPC in cloud?" To this end, we adopt the complementary approach of making clouds HPC-aware, and the HPC runtime system cloud-aware. Through intelligent application-to-platform mapping, HPC-aware VM placement, interference-aware VM consolidation, cloud-aware HPC load balancing, and malleable jobs, we demonstrate significant benefits for both users and cloud providers in terms of cost (up to 60%), performance (up to 45%), and throughput (up to 32%).
Ehsan receives 3rd place award for ACM Student Research Competition Grand Finals 2014
Ehsan placed 3rd in the graduate category of the ACM Student Research Competition (SRC) Grand Finals 2014. ACM SRC is held every year at several major conferences in different computer science areas, and the winners compete in the Grand Finals round (more info here). Ehsan's win at the ACM SRC at SC'13 qualified him to compete in the 2014 Grand Finals. His research is entitled "Structure-Adaptive Parallel Solution of Sparse Triangular Linear Systems." He will receive his award at the ACM Awards Banquet.
Harshitha Menon receives the Anita Borg Scholarship for 2014
Harshitha is one of the recipients of the Google Anita Borg Memorial Scholarship for 2014. More information about the scholarship can be found here.
Nikhil receives IBM PhD Fellowship Award
IBM announced PPLer Nikhil Jain as one of the recipients of the IBM PhD Fellowship Award for the academic year 2014-2015. More information on the fellowship can be obtained here. Nikhil's research page is here.
Lukasz receives ORNL distinguished software award
Developers of the Scalable Heterogeneous Computing (SHOC) Benchmark Suite, including Lukasz Wesolowski of PPL, have received an award from Oak Ridge National Laboratory for the most distinguished software released in the last five fiscal years in the Computer Science and Mathematics Division. SHOC, developed by a team led by Jeff Vetter of ORNL, is a collection of CUDA/OpenCL/MPI benchmarks for testing the performance and stability of modern heterogeneous computing systems and clusters comprising GPUs and Intel Xeon Phi accelerators.
Akhil et al win Best Paper Award at HiPC 2013
Akhil et al.'s work on Parallelization of Stochastic Integer Optimization has won the best paper award at the 20th IEEE International Conference on High Performance Computing, HiPC 2013 (details here). This work was done in collaboration with Prof. Udatta Palekar from the Department of Business at UIUC and is supported by MITRE Corp. and AMC. The talk on the paper can be found here.
Akhil et al win Two Best Poster Awards at HiPC 2013
Akhil et al. have won two SRS best poster awards at HiPC 2013 for their work on Scalable and Asynchronous Algorithms for Structured Adaptive Mesh Refinement (an earlier version of the paper can be found here). The talk on the poster can be found here.
Gupta et al. win Best Paper Award at IEEE CloudCom '13
The team of researchers led by PPLer Abhishek Gupta has received the Best Paper Award for their work "The Who, What, Why, and How of High Performance Computing in the Cloud" presented at the 5th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2013) held in Bristol, UK. The paper was selected from the sixty papers accepted to the conference, which had an 18% acceptance rate.

This work was the outcome of a successful collaboration between the University of Illinois at Urbana-Champaign and HP Labs. The research is motivated by the recent emergence of cloud computing as an alternative to supercomputers for those high-performance computing (HPC) applications that do not require a fully dedicated machine. With cloud as an additional deployment option, HPC users face the challenges of dealing with highly heterogeneous resources, where the variability spans a wide range of processor configurations, interconnections, virtualization environments, and pricing rates and models.

This work takes a holistic viewpoint to answer the question: why and who should choose cloud for HPC, for what applications, and how should cloud be used for HPC? To this end, the paper presents a comprehensive performance evaluation and analysis of a set of benchmarks and complex HPC applications on a range of platforms, varying from supercomputers to clouds. Further, the paper demonstrates HPC performance improvements in the cloud using alternative lightweight virtualization mechanisms, namely thin VMs and OS-level containers, and hypervisor- and application-level CPU affinity. Next, it analyzes the economic aspects and business models for HPC in the cloud. The team believes this is an important area that has not been sufficiently addressed by past research. Overall, the results indicate that current public clouds are cost-effective only at small scale for the chosen HPC applications, when considered in isolation, but can complement supercomputers using business models such as cloud burst and application-aware mapping.

Here is a link to the paper. This work has also received some good media recognition.
Ehsan and Nikhil win ACM SRC at SC'13
PPLers Ehsan Totoni and Nikhil Jain won the Gold and Silver awards, respectively, in the ACM Student Research Competition at Supercomputing 2013. Ehsan presented his work on a Structure-Aware Parallel Algorithm for Solution of Sparse Triangular Linear Systems (details here). He will also get a chance to compete in the ACM SRC Grand Finals. Nikhil presented his work on Fast Prediction of Network Performance: k-packet Simulation (details here). More information on ACM SRC is available here.
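For background on the problem Ehsan's work addresses, a sparse triangular solve is the substitution step that follows a sparse factorization. The sketch below is only the standard sequential forward substitution over a simple CSR-like layout (the layout and function name are illustrative assumptions); the paper's contribution is a structure-aware parallel algorithm, which is not reproduced here.

```python
# Background sketch: sequential forward substitution for a sparse lower
# triangular system L x = b, with L stored row by row in a CSR-like
# layout where the last entry of each row is the diagonal.
def sparse_lower_solve(n, indptr, indices, data, b):
    """Solve L x = b; row i holds data[indptr[i]:indptr[i+1]] at
    columns indices[...], diagonal last."""
    x = [0.0] * n
    for i in range(n):
        s = b[i]
        start, end = indptr[i], indptr[i + 1]
        for k in range(start, end - 1):      # subtract off-diagonal terms
            s -= data[k] * x[indices[k]]
        x[i] = s / data[end - 1]             # divide by the diagonal
    return x

# Example: L = [[2,0,0],[1,3,0],[0,4,5]], b = [2, 5, 17]
indptr = [0, 1, 3, 5]
indices = [0, 0, 1, 1, 2]
data = [2.0, 1.0, 3.0, 4.0, 5.0]
x = sparse_lower_solve(3, indptr, indices, data, [2.0, 5.0, 17.0])
print(x)  # [1.0, 4/3, 7/3]
```

The loop-carried dependence of each `x[i]` on earlier entries is what makes this kernel hard to parallelize, and exploiting the sparsity structure of L to expose independence is the core idea behind structure-aware approaches.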
Stable release of Charm++ version 6.5.1
The Parallel Programming Laboratory is pleased to announce the stable release of Charm++ version 6.5.1. This release offers several bug fixes, especially on the Cray Gemini and IBM Blue Gene Q architectures, and a new port to the Cray Cascade (XC-30) systems. The list of bugs fixed by this release can be found in Redmine. Release notes and full version control change logs can be found in our Git repository. The source code for this release can be downloaded here. This stable release will be precompiled and offered for use on various major supercomputer installations, including systems at Argonne National Lab, NERSC, NCSA, NICS, Oak Ridge National Lab, SDSC, and TACC.
Simulations of HIV-1 capsid displayed on cover of Nature

NAMD, one of the major applications developed using Charm++, was recently used in a ground-breaking simulation to determine the chemical structure of the HIV capsid, as reported in Nature (Volume 497, Number 7451, pp. 535-658) this week. The simulation, involving about 64 million atoms, was carried out on Illinois' Blue Waters system. More information can be found in the HIV Capsid Article on the University of Illinois' web page.

That work is the latest achievement to leverage the 20+ year NAMD collaboration between Sanjay Kale's PPL and Klaus Schulten's TCBG. It benefited from recent intensive efforts by PPLers Chao Mei, Yanhua Sun, Gengbin Zheng, and Eric Bohm to optimize NAMD and Charm++ performance on the Blue Waters platform.

Meneses, Sarood and Kale receive Feng Chen Memorial Award
The Department of Computer Science at the University of Illinois at Urbana-Champaign granted Esteban Meneses, Osman Sarood, and Professor Laxmikant V. Kale the Feng Chen Memorial Award for their work on the energy profile of fault tolerance methods. Their paper, "Assessing Energy Efficiency of Fault Tolerance Protocols for HPC Systems," received the best paper award at the 24th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD 2012). Understanding the interplay between energy, power, and fault tolerance is an active research area in the Parallel Programming Laboratory. Currently, this team of researchers is working to create the first software-based mechanism to control the failure rate of a machine via frequency scaling and load balancing.
Stable release of Charm++ version 6.5.0
The Parallel Programming Laboratory is pleased to announce the stable release of Charm++ version 6.5.0. This release offers substantially increased performance on the Cray Gemini and IBM Blue Gene Q architectures, revamped developer and user documentation, and numerous performance and usability improvements across the runtime. In-depth release notes as well as full version control change logs can be found here. The source code for this release can be downloaded here. This stable release will be precompiled and offered for use on various major supercomputer installations, including systems at Argonne National Lab, NERSC, NCSA, NICS, Oak Ridge National Lab, SDSC, and TACC.
Meneses, Sarood and Kale receive Best Paper Award at SBAC-PAD 2012
The paper "Assessing Energy Efficiency of Fault Tolerance Protocols for HPC Systems" received the Julio Salek Aude Best Paper Award at the 24th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD 2012). The paper presents a comparison of three fault tolerance mechanisms from the energy consumption point of view. The traditional checkpoint/restart, a message-logging protocol and the parallel recovery mechanism were evaluated in this study. The results show how migratable objects and adaptivity (two cornerstones of Charm++) can deliver a high-performance, energy-efficient mechanism to tolerate failures at exascale.
Fernbach Award for Profs. Kale, Schulten
Profs. Kale and Schulten were named winners of the Sidney Fernbach Award, to be presented at Supercomputing 2012.
Local SIAM chapter hosting Charm++ Tutorial
On Tuesday, May 1, the UIUC chapter of SIAM will host a Charm++ tutorial given by PPLer Phil Miller. The tutorial will start at 4:00 p.m. in 4403 Siebel Center. Registration is required. For more information or to register, see here.

The tutorial will present Charm++, which is a portable parallel programming system designed with programmer productivity as a major goal. Attendees will become familiar with the asynchronous, object-based programming model of Charm++ and the capabilities its adaptive runtime system offers.

Developed by the Parallel Programming Laboratory over the last 20 years, Charm++ is a portable, mature environment that provides the foundation for several highly scalable and widely used applications in science and engineering, including NAMD, ChaNGa, and OpenAtom. Charm++ runs the same application code on multicore desktops with shared memory, clusters of all sizes, and IBM and Cray supercomputers (such as the upcoming NSF-sponsored Blue Waters), and efficiently supports GPU accelerators where available.

The following week, the Parallel Programming Lab will host its 10th Annual Workshop on Charm++ and its Applications May 7-9 at the Siebel Center, bringing together the Charm++ community and showcasing leading-edge developments in parallel computing.

The target audience for this tutorial is programmers and researchers with any sort of parallel programming experience and basic knowledge of C or C++.
First Beta of Charm++ version 6.4.0 released
The Parallel Programming Laboratory is pleased to announce the release of a first beta for Charm++ version 6.4.0. A list of advances in this release can be found in gitweb. Please test your applications for bugs and performance regressions, and post your results on the mailing list. A tarball of the source can be found here, and compiled binaries for our autobuild platforms can be found here.
Jetley and Kale receive best paper award at HiPC 2011
A paper submitted by Pritish Jetley and Laxmikant Kale received the best paper award at the 2011 IEEE International High Performance Computing Conference (HiPC'11).

The paper, titled Optimizations for Message Driven Applications on Multicore Architectures, explores the use of message-driven execution for shared-memory programming. A case study was conducted in the context of a kd-tree construction application that exhibits nested parallelism.

The authors conclude that while certain optimizations translate well from the distributed-memory world, there are performance challenges specific to shared-memory architectures that must be addressed. The paper presents speedups of over 2x in comparison to an existing, TBB-based approach.
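To make the case study concrete, the sketch below is a minimal sequential kd-tree construction, the kind of recursive workload with nested parallelism (independent left and right subtree builds) that the paper studies. It is illustrative background only, not the paper's message-driven or TBB implementation, and the point set is made up.

```python
# Minimal sequential kd-tree construction over 2-D points. Each level
# splits at the median along an alternating axis; the two recursive
# subtree builds are independent, which is the nested parallelism the
# paper's message-driven approach exploits.
def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 2                          # alternate x/y axis in 2-D
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                    # median along the split axis
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(tree["point"])  # the root is the median point along x
```

Because the two recursive calls touch disjoint data, they can in principle run concurrently; managing that recursion efficiently on shared-memory multicores is where the paper's message-driven optimizations come in.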

PPL Awarded 1st Place in HPC Challenge Class 2

At the HPC Challenge award session during SC'11, PPL members were presented with the first place award for their submission to the 2011 HPC Challenge Class 2 (programming environment) in the performance category. This represents PPL's first submission to the contest.

The class 2 competition tries to identify promising high-level parallel programming systems that offer high performance coupled with developer productivity. The class 2 award was partitioned into "performance" and "elegance" categories, both with separate winners. The PPL submission for performance used Charm++. The goal of the competition is to focus the HPC community's attention on developing a broad set of HPC hardware and HPC software capabilities that are necessary to productively use HPC systems.

PPL Submission Chosen As Finalist for HPC Challenge
PPL is pleased to announce that our submission to the 2011 HPC Challenge Class 2 (productivity), has been selected as a finalist. The class 2 competition tries to identify promising high-level parallel programming systems that offer high performance coupled with developer productivity. Finalists have been invited to give a presentation at the HPCC Birds of a Feather at SC'11 in Seattle. These talks will include code size statistics and achieved performance. This submission represents PPL's first submission to the contest.
Scaling to 100 Million Atoms Video
Recently, NCSA sat down with Eric Bohm to discuss PPL and research he is involved in that will be presented at SC'11 (Mei et al.). In the video linked below, Bohm talks about the challenges and solutions involved in scaling the NAMD molecular dynamics application to support both extremely large molecular systems and extremely large machines.

Scaling to 100 Million Atoms video.
PPLers Langer and Menon chosen as Siebel Scholars
PPL members Akhil Langer and Harshitha Menon have been picked for the 2012 class of Siebel Scholars. Langer and Menon, along with three other CS graduate students, are now among an elite group chosen on the basis of outstanding academic performance and demonstrated qualities of leadership.