Integrated OpenMP should co-exist with MPI interop
I expect that if we compiled, linked, and ran a mixed Charm++/MPI application with OpenMP pragmas in both parts, it would either fail to parallelize the OpenMP code on one side or the other, oversubscribe threads to processors, or hang outright. The integrated OpenMP runtime should check the interop code's flag to see whether the process is currently running under Charm++ or MPI, and behave accordingly. This may also require back-and-forth initialization code in the boundary methods that cross between the two models.
#4 Updated by Seonmyeong Bak over 2 years ago
When MPI interoperates with Charm++ and the OpenMP integration, MPI code currently cannot use OpenMP properly.
This is because the MPI and Charm++ parts are linked into a single binary with the integrated OpenMP runtime, so OpenMP regions on the MPI side call into the integrated runtime, which does not parallelize the MPI code. The integrated OpenMP decides how to split work using the number of idle threads within a node and a history vector of the previous ratios of stolen to locally executed tasks.
All of these variables are zero when Charm++ has not started, which prevents MPI code from being parallelized by its OpenMP pragmas.