Adaptive MPI Manual

1 Introduction

This manual describes Adaptive MPI (AMPI), an implementation of a significant subset 1 of the MPI-2.0 Standard on top of Charm++. Charm++ is a C++-based parallel programming library developed since 1992 by Prof. L. V. Kalé and his students at the University of Illinois. We first describe the philosophy behind this work (why we do what we do), then give a brief introduction to Charm++ and the rationale for AMPI (the tools of the trade). We then describe AMPI in detail. Finally, we summarize the changes required to get original MPI codes working with AMPI. The appendices contain the details of installing AMPI and of building and running AMPI programs.

1 . 1 Overview

Developing parallel Computational Science and Engineering (CSE) applications is a complex task. One has to implement the right physics, develop or choose and code appropriate numerical methods, decide and implement the proper input and output data formats, perform visualizations, and be concerned with the correctness and efficiency of the programs. It becomes even more complex for multi-physics coupled simulations, many of which are dynamic and adaptively refined, so that load imbalance becomes a major challenge. In addition to imbalance caused by dynamic program behavior, hardware factors such as latencies, variability, and failures must be tolerated by applications. Our philosophy is to lessen the burden on application developers by providing advanced programming paradigms and versatile runtime systems that handle many common programming and performance concerns automatically, letting application programmers focus on the actual application content. Many of these concerns can be addressed using the processor virtualization and over-decomposition philosophy of Charm++. The developer sees only virtual processors and lets the runtime system deal with the underlying physical processors. This is implemented in AMPI by mapping MPI ranks to Charm++ user-level threads, as illustrated in Figure 1. As an immediate and simple benefit, the programmer can use as many virtual processors ("MPI ranks") as the problem can easily be decomposed into. For example, if the problem domain naturally splits into $n*2^n$ parts but programming for a general number of MPI processes is burdensome, the developer can run with $n*2^n$ virtual processors on any number of physical processors using AMPI.

Figure 1: MPI processes are implemented as user-level threads in AMPI
Image virtualization
AMPI's execution model consists of multiple user-level threads per process and, typically, one process per physical processor. The Charm++ scheduler coordinates execution of these threads (also called Virtual Processors or VPs) and controls execution, as shown in Figure 2. These VPs can also migrate between processors, for load balancing or other reasons. The number of VPs per processor specifies the virtualization ratio (degree of over-decomposition). For example, in Figure 2 the virtualization ratio is four (there are four VPs per processor). Figure 3 shows how the problem domain is over-decomposed into AMPI's VPs, as opposed to other MPI implementations.
Figure 2: VPs are managed by Charm++ scheduler
Image ratio
Figure 3: Problem domain is over-decomposed to more VPs
Image prac
Another benefit of virtualization is the overlap of communication and computation, which is achieved automatically, without programming effort. Techniques such as software pipelining require significant programming effort to achieve this goal. With AMPI, one simply uses more virtual processors than physical processors. Whenever a VP blocks for communication, the Charm++ scheduler picks the next VP among those ready to execute; thus, while some VPs on a physical processor wait for a message to arrive, others continue their execution, and performance improves without any change to the source code. A further potential benefit is better cache utilization. With over-decomposition, each VP repeatedly accesses a smaller subdomain across different function calls before blocking on communication and yielding to another VP. If the degree of over-decomposition is sufficient, that smaller subdomain may fit into cache. This concept is illustrated in Figure 4, where each AMPI subdomain (such as 12) is smaller than the corresponding MPI subdomain (such as 3) and may fit into cache memory, again yielding a potential performance improvement with no source-code changes.
Figure 4: Smaller subdomains may fit into cache and result in better performance
Image cache
One important concern is load imbalance. New-generation parallel applications vary dynamically, meaning that processor load shifts during execution. In a dynamic simulation application such as rocket simulation, burning solid fuel, sub-scaling for certain parts of the mesh, crack propagation, and particle flows all contribute to load imbalance. A centralized load balancing strategy built into an application is impractical, since each individual module is developed largely independently by different developers. In addition, embedding a load balancing strategy in the code complicates it and significantly increases the programming effort. The runtime system is uniquely positioned to deal with load imbalance. Figure 5 shows the runtime system migrating a VP after detecting load imbalance. This domain may correspond to a weather forecast model where there is a storm cell in the top-left quadrant, which requires more computation to simulate. AMPI will then migrate VP 13 to balance the division of work across processors and improve performance. Note that incorporating this sort of load balancing inside the application code would take considerable effort and complicate the code.
Figure 5: AMPI migrates VPs across processors for load balancing
Image migrate
There are many load balancing strategies built into Charm++ that an AMPI application developer can select from; some fit a particular application better than others, depending on its characteristics. Moreover, one can write a new load balancer, best suited to an application, using the simple API provided by the Charm++ infrastructure. Our approach is based on actual measurement of load information at runtime, and on migrating computations from heavily loaded to lightly loaded processors. For this approach to be effective, the computation must be split into many more pieces than there are available processors, which lets us flexibly map and re-map these computational pieces to the available processors. This approach is usually called ``multi-domain decomposition''. Charm++, which we use as the runtime system layer for the work described here, simplifies our approach: it embeds an elaborate performance tracing mechanism, a suite of plug-in load balancing strategies, infrastructure for defining and migrating computational load, and is interoperable with other programming paradigms.

1 . 2 Terminology

A module refers to either a complete program or a library with an orchestrator subroutine 2 . An orchestrator subroutine specifies the main control flow of the module by calling various subroutines from the associated library and does not usually have much state associated with it.

A thread is a lightweight process that owns a stack and machine registers, including a program counter, but shares code and data with other threads within the same address space. If the underlying operating system recognizes a thread, it is known as a kernel thread; otherwise it is known as a user-level thread. A context-switch between threads refers to suspending one thread's execution and transferring control to another thread. Kernel threads typically have higher context-switching costs than user-level threads because of operating system overheads. The policy implemented by the underlying system for transferring control between threads is known as its thread scheduling policy. The scheduling policy for kernel threads is determined by the operating system and is often less flexible than that of user-level threads. A scheduling policy is said to be non-preemptive if a context-switch occurs only when the currently running thread willingly asks to be suspended; otherwise it is said to be preemptive. AMPI threads are non-preemptive user-level threads.

A chunk is a combination of a user-level thread and the data it manipulates. When a program is converted from MPI to AMPI , we convert an MPI process into a chunk. This conversion is referred to as chunkification.

An object is just a blob of memory on which certain computations can be performed. The memory is referred to as an object's state, and the set of computations that can be performed on the object is called the interface of the object.

2 Charm++

Charm++ is an object-oriented parallel programming library for C++. It differs from traditional message passing programming libraries (such as MPI) in that Charm++ is ``message-driven''. Message-driven parallel programs do not block the processor waiting for a message to be received. Instead, each message carries with it a computation that the processor performs on arrival of that message. The underlying runtime system of Charm++ is called Converse, which implements a ``scheduler'' that chooses which message to schedule next (message-scheduling in Charm++ involves locating the object for which the message is intended, and executing the computation specified in the incoming message on that object). A parallel object in Charm++ is a C++ object on which computations can be invoked from remote processors.

Charm++ programs exhibit latency tolerance since the scheduler always picks up the next available message rather than waiting for a particular message to arrive. They also tend to be modular, because of their object-based nature. Most importantly, Charm++ programs can be dynamically load balanced , because the messages are directed at objects and not at processors; thus allowing the runtime system to migrate the objects from heavily loaded processors to lightly loaded processors. It is this feature of Charm++ that we utilize for AMPI .

Since many CSE applications are originally written using MPI, one would have to do a complete rewrite if they were to be converted to Charm++ to take advantage of dynamic load balancing and other Charm++ features. This is indeed impractical. However, Converse - the runtime system of Charm++ - supports interoperability between different parallel programming paradigms such as parallel objects and threads. Using this feature, we developed AMPI , an implementation of a significant subset of the MPI-2.0 standard over Charm++ . AMPI is described in the next section.


AMPI utilizes the dynamic load balancing and other capabilities of Charm++ by associating a ``user-level'' thread with each Charm++ migratable object. User's code runs inside this thread, so that it can issue blocking receive calls similar to MPI, and still present the underlying scheduler an opportunity to schedule other computations on the same processor. The runtime system keeps track of the computational loads of each thread as well as the communication graph between AMPI threads, and can migrate these threads in order to balance the overall load while simultaneously minimizing communication overhead.

3 . 1 AMPI Status

Currently all MPI-1.1 Standard functions are supported in AMPI, along with a collection of our extensions, explained in detail in this manual. One-sided communication calls in MPI-2 and MPI-3 are implemented, but they do not yet take advantage of RMA features. ROMIO 3 has also been integrated to support parallel I/O features. Link with -lampiromio to take advantage of this library.

The following MPI-1.1 basic datatypes are supported in AMPI. (Some are not available in the Fortran binding. Refer to the MPI-1.1 Standard for details.)

MPI_DATATYPE_NULL  MPI_BYTE            MPI_UNSIGNED_LONG  MPI_LONG_DOUBLE_INT
MPI_DOUBLE         MPI_PACKED          MPI_LONG_DOUBLE    MPI_2FLOAT
MPI_INT            MPI_SHORT           MPI_FLOAT_INT      MPI_2DOUBLE
MPI_FLOAT          MPI_LONG            MPI_DOUBLE_INT     MPI_LB
MPI_COMPLEX        MPI_UNSIGNED_CHAR   MPI_LONG_INT       MPI_UB
MPI_LOGICAL        MPI_UNSIGNED_SHORT  MPI_2INT
MPI_CHAR           MPI_UNSIGNED        MPI_SHORT_INT

The following MPI-1.1 reduction operations are supported in AMPI.

MPI_MAX     MPI_MIN     MPI_SUM     MPI_PROD    MPI_LAND    MPI_BAND
MPI_LOR     MPI_BOR     MPI_LXOR    MPI_BXOR    MPI_MAXLOC  MPI_MINLOC
The following are AMPI extension calls, which will be explained in detail in this manual.

 MPI_Migrate     MPI_MemCheckpoint  MPI_Checkpoint  MPI_Restart     MPI_Register

MPI_My_pe       MPI_My_node        MPI_Num_pes     MPI_Num_nodes   MPI_Get_userdata

MPI_Ialltoall   MPI_Iallgather     MPI_Iallreduce  MPI_Ireduce     MPI_IGet

3 . 2 Name for Main Program

To convert an existing program to use AMPI, the main function or program may need to be renamed. The changes should be made as follows:

3 . 2 . 1 Fortran

You must declare the main program as a subroutine called ``MPI_MAIN''. Do not declare the main subroutine as a program because it will never be called by the AMPI runtime.

program pgm -> subroutine MPI_Main
 ...              ...

end program -> end subroutine

3 . 2 . 2 C or C++

The main function can be left as is, provided mpi.h is included before the main function. This header file contains a preprocessor macro that renames main, and the renamed version is called by the AMPI runtime in each thread.

3 . 3 Global Variable Privatization

For the aforementioned benefits to be effective, one needs to map multiple user-level threads onto each processor. Traditional MPI programs assume that the entire processor is allocated to them, and that only one thread of control exists within the process's address space, so they may freely use global and static variables. However, global and static variables are problematic for multi-threaded environments such as AMPI or OpenMP: there is only a single instance of each such variable, shared among all threads in the address space, so the program may produce wrong results. Figure 6 shows an example of a multi-threaded application with two threads in a single process, where $var$ is a global or static variable. Thread 1 assigns a value to it, then blocks for communication; thread 2 is scheduled next and accesses $var$, observing thread 1's value instead of its own. The semantics of this program require a separate instance of $var$ for each thread. This is why some transformation of the original MPI program is needed for it to run correctly with AMPI. Note that this is the only change necessary to run an MPI program with AMPI: the program must be thread-safe, with no global variables whose values differ across MPI ranks.

Figure 6: Global or static variables are an issue for AMPI
Image global

The basic transformation needed to port an MPI program to AMPI is privatization of global variables. 4 With the MPI process model, each MPI node can keep a copy of its own ``permanent variables'' - variables that are accessible from more than one subroutine without being passed as arguments. Module variables, ``saved'' subroutine local variables, and common blocks in Fortran 90 belong to this category. If such a program is executed without privatization on AMPI, all the AMPI threads that reside on one processor will access the same copy of such variables, which is clearly not the desired semantics. To ensure correct execution of the original source program, it is necessary to make such variables ``private'' to individual threads. We provide two choices: automatic global swapping and manual code modification.

3 . 3 . 1 Automatic Globals Swapping

Thanks to the ELF Object Format, we have successfully automated the procedure of switching the set of user global variables when switching thread contexts. Executable and Linkable Format (ELF) is a common standard file format for Object Files in Unix-like operating systems. ELF maintains a Global Offset Table (GOT) for globals so it is possible to switch GOT contents at thread context-switch by the runtime system.

The only thing the user needs to do is pass the flag -swapglobals at compile and link time (e.g. ``ampicc -o prog prog.c -swapglobals''). It requires no change to the source code and works with any language (C, C++, Fortran, etc.). However, it does not handle static variables, and its context-switching overhead grows with the number of global variables. Currently, this feature only works on x86 and x86_64 (e.g. amd64) platforms that fully support ELF. Thus, it may not work on PPC or Itanium, or on some microkernels such as Catamount. When this feature does not work for you, you can try the other ways of handling global or static variables, which are detailed in the following sections.

3 . 3 . 2 Manual Change

We have employed a strategy of argument passing to do this privatization transformation. That is, the global variables are bunched together in a single user-defined type, which is allocated by each thread dynamically. Then a pointer to this type is passed from subroutine to subroutine as an argument. Since the subroutine arguments are passed on the stack, which is not shared across all threads, each subroutine when executing within a thread operates on a private copy of the global variables.

This scheme is demonstrated in the following examples. The original Fortran 90 code contains a module shareddata . This module is used in the main program and a subroutine subA .


MODULE shareddata
  INTEGER :: myrank
  DOUBLE PRECISION :: xyz(100)
END MODULE

SUBROUTINE MPI_Main
  USE shareddata
  include 'mpif.h'
  INTEGER :: i, ierr
  CALL MPI_Init(ierr)
  CALL MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
  DO i = 1, 100
    xyz(i) = i + myrank
  END DO
  CALL subA
  CALL MPI_Finalize(ierr)
END SUBROUTINE

SUBROUTINE subA
  USE shareddata
  INTEGER :: i
  DO i = 1, 100
    xyz(i) = xyz(i) + 1.0
  END DO
END SUBROUTINE


//C Example
#include <mpi.h>

int myrank;
double xyz[100];

void subA();

int main(int argc, char** argv){
  int i;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  for(i=0;i<100;i++)
    xyz[i] = i + myrank;
  subA();
  MPI_Finalize();
  return 0;
}

void subA(){
  int i;
  for(i=0;i<100;i++)
    xyz[i] = xyz[i] + 1.0;
}

AMPI executes this main subroutine inside a user-level thread.

Now we transform this program using the argument passing strategy. We first group the shared data into a user-defined type.


MODULE shareddata
  TYPE chunk
    INTEGER :: myrank
    DOUBLE PRECISION :: xyz(100)
  END TYPE
END MODULE


//C Example

struct shareddata{
  int myrank;
  double xyz[100];
};

Now we modify the main subroutine to dynamically allocate this data and change the references to them. Subroutine subA is then modified to take this data as argument.


SUBROUTINE MPI_Main
  USE shareddata
  INTEGER :: i, ierr
  TYPE(chunk), POINTER :: c
  CALL MPI_Init(ierr)
  ALLOCATE(c)
  CALL MPI_Comm_rank(MPI_COMM_WORLD, c%myrank, ierr)
  DO i = 1, 100
    c%xyz(i) = i + c%myrank
  END DO
  CALL subA(c)
  CALL MPI_Finalize(ierr)
END SUBROUTINE

SUBROUTINE subA(c)
  USE shareddata
  TYPE(chunk) :: c
  INTEGER :: i
  DO i = 1, 100
    c%xyz(i) = c%xyz(i) + 1.0
  END DO
END SUBROUTINE


//C Example

void MPI_Main(int argc, char** argv){
  int i, ierr;
  struct shareddata *c;
  ierr = MPI_Init(&argc, &argv);
  c = (struct shareddata*)malloc(sizeof(struct shareddata));
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &c->myrank);
  for(i=0;i<100;i++)
    c->xyz[i] = i + c->myrank;
  subA(c);
  ierr = MPI_Finalize();
  free(c);
}

void subA(struct shareddata *c){
  int i;
  for(i=0;i<100;i++)
    c->xyz[i] = c->xyz[i] + 1.0;
}

With these changes, the above program can be made thread-safe. Note that it is not really necessary to dynamically allocate chunk . One could have declared it as a local variable in subroutine MPI_Main . (Or for a small example such as this, one could have just removed the shareddata module, and instead declared both variables xyz and myrank as local variables). This is indeed a good idea if shared data are small in size. For large shared data, it would be better to do heap allocation because in AMPI , the stack sizes are fixed at the beginning (and can be specified from the command line) and stacks do not grow dynamically.

3 . 3 . 3 Source-to-source Transformation

Another approach is to do the changes described in the previous scheme automatically. It means that we can use a tool to transform the source code to move global or static variables in an object and pass them around. This approach is portable across systems and compilers and may also improve locality and hence cache utilization. It also does not have the context-switch overhead of swapping globals. We have multiple tools for automating these transformations for different languages. Currently, there is a tool called Photran 5 for refactoring Fortran codes that can do this transformation. It is Eclipse-based and works by constructing Abstract Syntax Trees (ASTs) of the program. We also have a tool built on top of the ROSE compiler 6 that works for C/C++ and Fortran programs that is available upon request.

3 . 3 . 4 TLS-Globals

Thread Local Storage (TLS) was originally employed for kernel threads to localize variables and provide thread safety. It can be used by annotating global/static variables in C/C++ with __thread in the source code; each extant thread then has its own instance of those variables. This keyword is not an official extension of the C language, though compiler writers are encouraged to implement this feature. The ELF file format supports Thread Local Storage.

It handles both global and static variables and has essentially no context-switching overhead: a context-switch merely changes the TLS segment register to point to the thread's local copy. However, it is not supported by all compilers. Currently, Charm++ supports it on x86/x86_64 platforms. A modified gfortran is also available to use this feature. To use TLS-Globals, one has to add __thread before all global variables. For the example above, the following changes handle the global variables:

 __thread int myrank;

__thread double xyz[100];

The runtime system also should know that TLS-Globals is used at compile time:

 ampiCC -o example example.C -tlsglobals

Table 1 shows portability of different schemes.

Table 1: Portability of current implementations of three privatization schemes. ``Yes'' means we have implemented this technique. ``Maybe'' indicates there are no theoretical problems, but no implementation exists. ``No'' indicates the technique is impossible on this platform.
Privatization Scheme   X86   IA64   Opteron   Mac OS X   IBM SP   SUN     BG/P   Cray/XT   Windows
Transformation         Yes   Yes    Yes       Yes        Yes      Yes     Yes    Yes       Yes
GOT-Globals            Yes   Yes    Yes       No         No       Maybe   No     No        No
TLS-Globals            Yes   Maybe  Yes       No         Maybe    Maybe   No     Yes       Maybe

3 . 4 Extensions for Migrations

To allow MPI chunks to migrate, we have added a few calls to AMPI. These include the ability to register thread-specific data with the runtime system, to pack all of a thread's data, and to express willingness to migrate.

3 . 4 . 1 Registering Chunk data

When the AMPI runtime system decides that load imbalance exists within the application, it will invoke one of its internal load balancing strategies, which determines the new mapping of AMPI chunks so as to balance the load. Then the AMPI runtime packs up the chunk's state and moves it to its new home processor. AMPI packs up any internal data in use by the chunk, including the thread's stack in use. This means that the local variables declared in subroutines in a chunk, which are created on stack, are automatically packed up by the AMPI runtime system. However, it has no way of knowing what other data are in use by the chunk. Thus upon starting execution, a chunk needs to notify the system about the data that it is going to use (apart from local variables). Even with the data registration, AMPI cannot determine what size the data is, or whether the registered data contains pointers to other places in memory. For this purpose, a packing subroutine also needs to be provided to the AMPI runtime system along with registered data. (See next section for writing packing subroutines.) The call provided by AMPI for doing this is MPI_Register . This function takes two arguments: A data item to be transported along with the chunk, and the pack subroutine, and returns an integer denoting the registration identifier. In C/C++ programs, it may be necessary to use this return value after migration completes and control returns to the chunk, using function MPI_Get_userdata . Therefore, the return value should be stored in a local variable.

3 . 4 . 2 Migration

The AMPI runtime system could detect load imbalance by itself and invoke the load balancing strategy. However, since the application code is going to pack/unpack the chunk's data, writing the pack subroutine would be complicated if migrations occurred at a stage unknown to the application. For example, if the system decided to migrate a chunk during its initialization stage (say, while reading input files), the application code would have to keep track of how much data it has read, what files are open, etc. Typically, since initialization occurs only once at the beginning, load imbalance at that stage does not matter much. Therefore, we let the application initiate the load balance check.

AMPI provides the subroutine MPI_Migrate for this purpose. Each chunk calls MPI_Migrate periodically. Typical CSE applications are iterative and perform multiple time-steps, so one should call MPI_Migrate in each chunk at the end of some fixed number of timesteps. The frequency of MPI_Migrate calls should be determined by a tradeoff between conflicting factors: the load balancing overhead, and the performance degradation caused by load imbalance. In applications where the programmer suspects that load imbalance has just occurred, as in the case of adaptive mesh refinement, it is more effective to perform a couple of timesteps before telling the system to re-map chunks; this gives the AMPI runtime system time to collect the new load and communication statistics on which it bases its migration decisions. Note that MPI_Migrate does NOT tell the system to migrate the chunk; it merely tells the system to check the load balance after all the chunks have called MPI_Migrate. Whether a chunk migrates or not is decided solely by the system's load balancing strategy.

3 . 4 . 3 Packing/Unpacking Thread Data

Once the AMPI runtime system decides which chunks to send to which processors, it calls the specified pack subroutine for that chunk, with the chunk-specific data that was registered with the system using MPI_Register . This section explains how a subroutine should be written for performing pack/unpack.

There are three steps for transporting the chunk's data to another processor. First, the system calls a subroutine to get the size of the buffer required to pack the chunk's data. This is called the ``sizing'' step. In the next step, which is called immediately afterward on the source processor, the system allocates the required buffer and calls the subroutine to pack the chunk's data into that buffer. This is called the ``packing'' step. This packed data is then sent as a message to the destination processor, where first a chunk is created (along with the thread) and a subroutine is called to unpack the chunk's data from the buffer. This is called the ``unpacking'' step.

Though the above description mentions three subroutines called by the AMPI runtime system, it is possible to actually write a single subroutine that will perform all the three tasks. This is achieved using something we call a ``pupper''. A pupper is an external subroutine that is passed to the chunk's pack-unpack-sizing subroutine, and this subroutine, when called in different phases performs different tasks. An example will make this clear:

Suppose the chunk data is defined as a user-defined type in Fortran 90:


MODULE chunkmod
  INTEGER, parameter :: nx=4, ny=4, tchunks=16
  TYPE, PUBLIC :: chunk
      REAL(KIND=8) t(22,22)
      INTEGER xidx, yidx
      REAL(KIND=8), dimension(400):: bxm, bxp, bym, byp
  END TYPE chunk
END MODULE


//C Example

struct chunk{
  double t[22][22];
  int xidx, yidx;
  double bxm[400], bxp[400], bym[400], byp[400];
};

Then the pack-unpack subroutine chunkpup for this chunk module is written as:


SUBROUTINE chunkpup(p, c)
  USE pupmod
  USE chunkmod
  INTEGER :: p
  TYPE(chunk) :: c

  call pup(p, c%t)
  call pup(p, c%xidx)
  call pup(p, c%yidx)
  call pup(p, c%bxm)
  call pup(p, c%bxp)
  call pup(p, c%bym)
  call pup(p, c%byp)

end subroutine

//C Example

void chunkpup(pup_er p, struct chunk *c){
  pup_doubles(p, &c->t[0][0], 22*22);
  pup_int(p, &c->xidx);
  pup_int(p, &c->yidx);
  pup_doubles(p, c->bxm, 400);
  pup_doubles(p, c->bxp, 400);
  pup_doubles(p, c->bym, 400);
  pup_doubles(p, c->byp, 400);
}

There are several things to note in this example. First, the same subroutine pup (declared in module pupmod) is called to size, pack, or unpack any type of data. This is possible because of procedure overloading in Fortran 90. Second is the integer argument p: it is this argument that specifies whether this invocation of subroutine chunkpup is sizing, packing or unpacking. Third, the integer parameters declared in module chunkmod need not be packed or unpacked, since they are guaranteed to be constants and thus available on every processor.

A few other functions are provided in module pupmod to give more control over the packing/unpacking process. Suppose one modifies the chunk type to include allocatable data or pointers that are allocated dynamically at runtime. In this case, when the chunk is packed, these allocated data structures should be deallocated after being copied to buffers, and when the chunk is unpacked, they should be allocated before being copied from the buffers. One therefore needs to know whether a particular invocation of chunkpup is for packing or for unpacking. For this purpose, the pupmod module provides the functions fpup_isdeleting and fpup_isunpacking: they return the logical value .TRUE. if the invocation is for packing (respectively, unpacking), and .FALSE. otherwise. The following example demonstrates this:

Suppose the type dchunk is declared as:


MODULE dchunkmod
  TYPE, PUBLIC :: dchunk
      INTEGER :: asize
      REAL(KIND=8), pointer :: xarr(:), yarr(:)
  END TYPE dchunk
END MODULE


//C Example

struct dchunk{
  int asize;
  double *xarr, *yarr;
};

Then the pack-unpack subroutine is written as:


SUBROUTINE dchunkpup(p, c)
  USE pupmod
  USE dchunkmod
  INTEGER :: p
  TYPE(dchunk) :: c

  call pup(p, c%asize)

  IF (fpup_isunpacking(p)) THEN       !! if invocation is for unpacking
    ALLOCATE(c%xarr(c%asize))
    ALLOCATE(c%yarr(c%asize))
  END IF

  call pup(p, c%xarr)
  call pup(p, c%yarr)

  IF (fpup_isdeleting(p)) THEN        !! if invocation is for packing
    DEALLOCATE(c%xarr)
    DEALLOCATE(c%yarr)
  END IF

end subroutine


//C Example

void dchunkpup(pup_er p, struct dchunk *c){
  pup_int(p, &c->asize);
  if(pup_isUnpacking(p)){
    c->xarr = (double *)malloc(sizeof(double)*c->asize);
    c->yarr = (double *)malloc(sizeof(double)*c->asize);
  }
  pup_doubles(p, c->xarr, c->asize);
  pup_doubles(p, c->yarr, c->asize);
  if(pup_isDeleting(p)){
    free(c->xarr);
    free(c->yarr);
  }
}

One more function fpup_issizing is also available in module pupmod that returns .TRUE. when the invocation is a sizing one. In practice one almost never needs to use it.

3 . 5 Extensions for Checkpointing

The pack-unpack subroutines written for migrations make sure that the current state of the program is correctly packed (serialized) so that it can be restarted on a different processor. Using the same subroutines, it is also possible to save the state of the program to disk, so that if the program were to crash abruptly, or if the allocated time for the program expires before completing execution, the program can be restarted from the previously checkpointed state. Thus, the pack-unpack subroutines act as the key facility for checkpointing in addition to their usual role for migration. Just as in load balancing, no application specific code is required: the AMPI runtime takes care of all the details involved in migrating data.

A subroutine for checkpoint purpose has been added to AMPI: void MPI_Checkpoint(char *dirname); This subroutine takes a directory name as its argument. It is a collective function, meaning every virtual processor in the program needs to call this subroutine and specify the same directory name. (Typically, in an iterative AMPI program, the iteration number, converted to a character string, can serve as a checkpoint directory name.) This directory is created, and the entire state of the program is checkpointed to this directory. One can restart the program from the checkpointed state by specifying "+restart dirname" on the command-line. This capability is powered by the Charm++ runtime system. For more information about checkpoint/restart mechanisms in Charm++ please refer to Charm++ manual.

3 . 6 Extensions for Memory Efficiency

MPI functions usually require the user to preallocate data buffers before the functions are called. For non-blocking communication primitives, the user may prefer lazy memory allocation, deferring buffer allocation until the data actually arrives; this creates opportunities to write more memory-efficient programs. We provide a set of AMPI functions as an extension to the standard MPI-2 one-sided calls: a split-phase MPI_Get called MPI_IGet. MPI_IGet has semantics similar to MPI_Get, except that no user buffer is provided to hold the incoming data. MPI_IGet_Wait blocks until the requested data arrives; the runtime system allocates space, performs the appropriate unpacking based on the data type, and returns. MPI_IGet_Free lets the runtime system free the resources used by the get request, including the data buffer. Finally, MPI_IGet_Data is a utility function that returns a pointer to the actual data.

int MPI_IGet(MPI_Aint orgdisp, int orgcnt, MPI_Datatype orgtype, int rank,
             MPI_Aint targdisp, int targcnt, MPI_Datatype targtype, MPI_Win win,
             MPI_Request *request);

int MPI_IGet_Wait(MPI_Request *request, MPI_Status *status, MPI_Win win);

int MPI_IGet_Free(MPI_Request *request, MPI_Status *status, MPI_Win win);

char* MPI_IGet_Data(MPI_Status status);
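The split-phase usage pattern might look like the following sketch, assuming a window win has already been created over the target rank's array of doubles (the function name, counts, and displacements are illustrative assumptions, not part of the API):

```c
#include <mpi.h>   /* AMPI's mpi.h also declares the IGet extensions */

/* Sketch: fetch 10 doubles from rank `target` without preallocating a
 * receive buffer.  Assumes `win` was already created over the target's
 * array with MPI_Win_create. */
void fetch_remote(int target, MPI_Win win) {
    MPI_Request req;
    MPI_Status  stat;

    MPI_IGet(0, 10, MPI_DOUBLE, target,      /* no user buffer supplied */
             0, 10, MPI_DOUBLE, win, &req);

    /* ... overlap unrelated computation here ... */

    MPI_IGet_Wait(&req, &stat, win);         /* block until data arrives */
    double *data = (double *)MPI_IGet_Data(stat);  /* runtime-owned buffer */

    /* ... consume data ... */
    (void)data;

    MPI_IGet_Free(&req, &stat, win);         /* release buffer and request */
}
```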

3 . 7 Extensions for Interoperability

Interoperability between different modules is essential for coding coupled simulations. In this extension to AMPI , each MPI application module runs within its own group of user-level threads distributed over the physical parallel machine. In order to let AMPI know which chunks are to be created, and in what order, a top-level registration routine needs to be written. A real-world example will make this clear. Suppose we have an MPI code for fluids and another MPI code for solids, each with its own main program. We first transform each individual code to run correctly under AMPI as a standalone code. This involves the usual ``chunkification'' transformation so that multiple chunks from the application can run on the same processor without overwriting each other's data. It also involves making the main program into a subroutine and naming it MPI_Main .

Thus we now have two MPI_Main s, one for the fluids code and one for the solids code. We make these codes co-exist within the same executable by first renaming these MPI_Main s as Fluids_Main and Solids_Main 7 and then writing a subroutine called MPI_Setup .


SUBROUTINE MPI_Setup
  USE ampi
  CALL MPI_Register_main(Solids_Main)
  CALL MPI_Register_main(Fluids_Main)
END SUBROUTINE


//C Example

void MPI_Setup(){
    MPI_Register_main(Solids_Main);
    MPI_Register_main(Fluids_Main);
}
This subroutine is called from the internal initialization routines of AMPI and tells AMPI how many distinct chunk types (modules) exist, and which orchestrator subroutines they execute.

The number of chunks to create for each chunk type is specified on the command line when an AMPI program is run. Appendix B explains how AMPI programs are run and how to specify the number of chunks (the +vp option). In the above case, suppose one wants to create 128 chunks of Solids and 64 chunks of Fluids on 32 physical processors; one would specify those with multiple +vp options on the command line as:

 > charmrun gen1.x +p 32 +vp 128 +vp 64

This ensures that multiple chunk types representing different complete applications can co-exist within the same executable. They can also continue to communicate among their own chunk types using the same AMPI function calls to send and receive, with MPI_COMM_WORLD as the communicator argument. But this would be of little use if the individual applications could not communicate with each other, which is essential for building efficient coupled codes. For this purpose, we have extended the AMPI functionality to allow multiple `` COMM_WORLD s'', one for each application. These world communicators form a ``communicator universe'': an array of communicators aptly called MPI_COMM_UNIVERSE . This array is indexed [1 . . . MPI_MAX_COMM ]. In the current implementation, MPI_MAX_COMM is 8; that is, a maximum of 8 applications can co-exist within the same executable.

The order of these COMM_WORLD s within MPI_COMM_UNIVERSE is determined by the order in which individual applications are registered in MPI_Setup .

Thus, in the above example, the communicator for the Solids module would be MPI_COMM_UNIVERSE(1) and the communicator for the Fluids module would be MPI_COMM_UNIVERSE(2) .

Now any chunk within one application can communicate with any chunk in the other application using the familiar AMPI send and receive calls, by specifying the appropriate communicator and the chunk number within that communicator. For example, if Solids chunk number 36 wants to send data to chunk number 47 within the Fluids module, it calls:


INTEGER , PARAMETER :: Fluids_Comm = 2

CALL MPI_Send(InitialTime, 1, MPI_DOUBLE_PRECISION, 47, tag, 
              MPI_COMM_UNIVERSE(Fluids_Comm), ierr)

//C Example

int Fluids_Comm = 2;

ierr = MPI_Send(&InitialTime, 1, MPI_DOUBLE, 47, tag,
                MPI_Comm_Universe(Fluids_Comm));

The Fluids chunk has to issue a corresponding receive call to receive this data:


INTEGER , PARAMETER :: Solids_Comm = 1

CALL MPI_Recv(InitialTime, 1, MPI_DOUBLE_PRECISION, 36, tag, 
              MPI_COMM_UNIVERSE(Solids_Comm), stat, ierr)

//C Example

int Solids_Comm = 1;

ierr = MPI_Recv(&InitialTime, 1, MPI_DOUBLE, 36, tag,
                MPI_Comm_Universe(Solids_Comm), &stat);

3 . 8 Extensions for Sequential Re-run of a Parallel Node

In some scenarios, a sequential re-run of a parallel node is desired. One example is instruction-level accurate architecture simulation, in which case the user may wish to repeat the execution of one node of a parallel run in the sequential simulator. AMPI provides support for such needs by logging the changes in the MPI environment on a certain processor. To activate the feature, build the AMPI module with the variable ``AMPIMSGLOG'' defined, as in the following command in the charm directory. (Linking with zlib via ``-lz'' might be required for generating a compressed log file.)

 > ./build AMPI net-linux -DAMPIMSGLOG

The feature is used in two phases: writing (logging) the environment, and repeating the run. The first (logging) phase is invoked by a parallel run of the AMPI program with some additional command-line options.

 > ./charmrun ./pgm +p4 +vp4 +msgLogWrite +msgLogRank 2 +msgLogFilename "msg2.log"

In the above example, a parallel run with 4 processors and 4 VPs is executed, and the changes in the MPI environment of processor 2 (also VP 2, numbering from 0) are logged into the disk file "msg2.log".

Unlike the first run, the re-run is a sequential program, so it is not invoked by charmrun (and the charmrun options +p4 and +vp4 are omitted); additional command-line options are required as well.

 > ./pgm +msgLogRead +msgLogRank 2 +msgLogFilename "msg2.log"

3 . 9 Communication Optimizations for AMPI

AMPI is now powered by the Charm++ communication optimization support. Currently the user needs to specify the communication pattern via a command-line option; in the future this may be done automatically by the system.

Currently there are four strategies available: USE_DIRECT, USE_MESH, USE_HYPERCUBE, and USE_GRID. USE_DIRECT sends each message directly to its destination. USE_MESH imposes a virtual 2D mesh topology on the processors: each processor sends messages only to the neighbors in its row and column of the mesh, which forward the messages to their correct destinations. USE_HYPERCUBE and USE_GRID impose hypercube and 3D grid topologies, respectively. USE_HYPERCUBE does best for very small messages and small numbers of processors, USE_GRID performs better for slightly larger message sizes, and beyond that USE_MESH starts performing best. The programmer is encouraged to try out all the strategies.

For more details please refer to the CommLib paper 8 .

Specifying the strategy is as simple as adding the command-line option +strategy. For example:

 > ./charmrun +p64 alltoall +vp64 1000 100 +strategy USE_MESH

tells the system to use the MESH strategy for CommLib. By default, USE_DIRECT is used.

3 . 10 User Defined Initial Mapping

You can define the initial mapping of virtual processors (vp) to physical processors (p) as a runtime option. You can choose from predefined initial mappings or define your own mappings. The following predefined mappings are available:

Round Robin

This mapping scheme maps virtual processors to physical processors in round-robin fashion, i.e. if there are 8 virtual processors and 2 physical processors, then virtual processors indexed 0, 2, 4, 6 will be mapped to physical processor 0 and virtual processors indexed 1, 3, 5, 7 will be mapped to physical processor 1.

 > ./charmrun ./hello +p2 +vp8 +mapping RR_MAP

Block Mapping

This mapping scheme maps virtual processors to physical processors in blocks, i.e. if there are 8 virtual processors and 2 physical processors, then virtual processors indexed 0, 1, 2, 3 will be mapped to physical processor 0 and virtual processors indexed 4, 5, 6, 7 will be mapped to physical processor 1.

 > ./charmrun ./hello +p2 +vp8 +mapping BLOCK_MAP

Proportional Mapping

This scheme takes the processing capability of the physical processors into account when mapping virtual processors to them, i.e. if there are 2 processors with different processing power, the number of virtual processors mapped to each processor will be in proportion to its processing power.

 > ./charmrun ./hello +p2 +vp8 +mapping PROP_MAP

If you want to define your own mapping scheme, please contact us for assistance.

3 . 11 Compiling AMPI Programs

Charm++ provides a cross-platform compile-and-link script called charmc to compile C, C++ , Fortran, Charm++ and AMPI programs. This script resides in the bin subdirectory of the Charm++ installation directory. The main purpose of this script is to deal with the differences in compiler names and command-line options across the various machines on which Charm++ runs. It is recommended that charmc be used to compile and link AMPI programs. One major advantage of using charmc is that one does not have to specify which libraries must be linked to ensure that C++ and Fortran 90 codes are linked together correctly; the libraries required for linking such modules together are known to charmc for the various machines.

In spite of the platform-neutral syntax of charmc , one may have to specify some platform-specific options when compiling and building AMPI codes. Fortunately, if charmc does not recognize a particular option on its command line, it passes it through to all the individual compilers and linkers it invokes to compile the program.

A. Installing AMPI

AMPI is included in the source distribution of Charm++ . To get the latest sources from PPL, visit:

and follow the download link. Then build Charm++ and AMPI from source.

The build script for Charm++ is called build . The syntax for this script is:

 > build <target> <version> <opts>

For building AMPI (which also includes building Charm++ and other libraries needed by AMPI ), specify <target> to be AMPI . <opts> are command-line options passed to the charmc compile script; common compile-time options such as -g, -O, -Ipath, -Lpath, and -llib are accepted.

To build a debugging version of AMPI , use the option `` -g ''. To build a production version of AMPI , use the option `` --with-production ''.

<version> depends on the machine, operating system, and the underlying communication library one wants to use for running AMPI programs. See the charm/README file for details on picking the proper version. The following is an example of how to build AMPI for a Linux/Ethernet environment, with debugging information produced:

 > build AMPI net-linux -g

B. Building and Running AMPI Programs

B . 1 Building

Charm++ provides a compiler script called charmc in your charm/bin/ directory. You can use it to build your AMPI program the same way as other compilers like cc. In particular, to build an AMPI program, the command-line option -language ampi should be applied. All the command-line flags that you would use with other compilers can be used with charmc the same way. For example:

 > charmc -language ampi -c pgm.c -O3
> charmc -language ampi -o pgm pgm.o -lm -O3 

Shortcuts to the AMPI compiler are provided. If you have added charm/bin to your $PATH environment variable, simply type mpicc, mpiCC, mpif77, or mpif90, as provided by other MPI implementations.

 > mpicc -c pgm.c -g

B . 2 Running

The Charm++ distribution contains a script called charmrun that makes running AMPI programs portable and easy across all parallel machines supported by Charm++ . charmrun is copied to the directory where an AMPI program is built using charmc . It takes a command-line parameter specifying the number of processors, and the name of the program, followed by AMPI options (such as the number of chunks to create, and the stack size of every chunk) and the program arguments. A typical invocation of an AMPI program pgm with charmrun is:

 > charmrun pgm +p16 +vp64 +tcharm_stacksize 3276800

Here, the AMPI program pgm is run on 16 physical processors with 64 total chunks (which will be mapped 4 per processor initially), where each user-level thread associated with a chunk has a stack size of 3,276,800 bytes.


... subset 1
Currently, the PMPI profiling interface is missing from AMPI and the MPI-3.0 standard is under development.
... subroutine 2
Like many software engineering terms, this term is overused, and it unfortunately clashes with the Fortran 90 ``module'' that denotes a program unit. We specifically refer to the latter as ``Fortran 90 module'' to avoid confusion.
... ROMIO 3
... variables. 4
Typical Fortran MPI programs contain three types of global variables.

  1. Global variables that are ``read-only''. These are either parameters that are set at compile time, or variables that are read as input or set at the beginning of the program and do not change during execution. It is not necessary to privatize such variables.

  2. Global variables that are used as temporary buffers. These are variables used temporarily to store values that must be accessible across subroutines. Their characteristic is that no blocking call such as MPI_Recv occurs between the time the variable is set and the time it is used. It is not necessary to privatize such variables either.

  3. True global variables. These are used across subroutines that contain blocking receives, so there is the possibility of a context switch between the definition and use of the variable. These variables need to be privatized.

...Photran 5
... compiler 6
...Solids_Main 7
Currently, we assume that the interface code, which does mapping and interpolation among the boundary values of Fluids and Solids domain, is integrated with one of Fluids and Solids.
... paper 8
L. V. Kalé, Sameer Kumar, and Krishnan Varadarajan, 2002.