Feature #1300

Integrated OpenMP should co-exist with MPI interop

Added by Phil Miller over 2 years ago. Updated over 2 years ago.

Status: In Progress
Category: MPI Interoperability
I expect that if we were to compile, link, and run a mixed Charm++/MPI application with OpenMP pragmas in both parts, it would either fail to parallelize the OpenMP code on one side or the other, oversubscribe threads to processors, or hang outright. The integrated OpenMP runtime should check the interop code's flag to see whether the process is running under Charm++ or MPI, and behave accordingly. This may also require back-and-forth initialization code in the boundary methods that cross between the two models.


#1 Updated by Seonmyeong Bak over 2 years ago

Is there an example I could use to test my patch for this issue?

#2 Updated by Phil Miller over 2 years ago

It should be straightforward to modify the current MPI interop example already in the repo to add OpenMP pragmas on each side. Could you do that?

#3 Updated by Seonmyeong Bak over 2 years ago

OK. I'll test it and think of how to make the integration compatible with this environment.

#4 Updated by Seonmyeong Bak over 2 years ago

When MPI interoperates with Charm++ and the OpenMP integration, MPI code currently cannot use OpenMP properly.

This is because the MPI and Charm++ code is linked into a single binary with the integrated OpenMP runtime, so OpenMP constructs on the MPI side call into the integrated runtime, which does not allow the MPI code to be parallelized. The integrated OpenMP runtime makes its scheduling decisions from the number of idle threads within the node and a history vector of past ratios of stolen to locally executed tasks. All of these variables are zero when Charm++ has not been started, which prevents MPI code from being parallelized by OpenMP pragmas.

#5 Updated by Phil Miller over 2 years ago

OK, that matches my expectations. Please make it work.

#6 Updated by Seonmyeong Bak over 2 years ago

  • Status changed from New to In Progress
