Optimization of MPI layer
Revisit MPI layer and possibly rewrite to improve performance.
#4 Updated by Nikhil Jain over 6 years ago
Quoting Ralf's email on this:
As for MPI, we've found something that apparently makes it as fast as gemini_gni-crayxe, and the same seems to be true of ibverbs. We're testing this on Intrepid and Vesta as well. Pavan doesn't care about tcp (i.e. MPI over sockets) at all, where the difference in performance is biggest (almost 4x).
#12 Updated by Sam White about 1 year ago
Small optimization to use MPI-3's MPI_Mprobe and MPI_Mrecv where possible: https://charm.cs.illinois.edu/gerrit/#/c/charm/+/2785/
Edit: we ended up reverting this because support for MPI_Mprobe is spotty and not really detectable at configure time in a portable manner.
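For reference, the matched-probe pattern that the reverted change relied on looks roughly like this (a sketch of a receive-any path, not the actual Charm++ patch; the function name and use of MPI_CHAR are illustrative assumptions). The point of MPI_Mprobe over plain MPI_Probe is that it returns a message handle, so the message found by the probe is guaranteed to be the one consumed by MPI_Mrecv, and the matching work is not repeated:

```c
/* Sketch only: requires an MPI implementation with MPI-3 matched-probe
 * support, which (per the comment above) cannot be assumed portably. */
#include <mpi.h>
#include <stdlib.h>

void recv_any(MPI_Comm comm)  /* hypothetical helper name */
{
    MPI_Message msg;
    MPI_Status  status;
    int         count;

    /* Probe and dequeue one incoming message of any source/tag.
     * Unlike MPI_Probe, no other thread can receive it out from under us. */
    MPI_Mprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &msg, &status);
    MPI_Get_count(&status, MPI_CHAR, &count);

    char *buf = malloc(count);

    /* Receive exactly the matched message via its handle; no re-matching
     * of source/tag is needed. */
    MPI_Mrecv(buf, count, MPI_CHAR, &msg, &status);

    /* ... hand buf off to the runtime ... */
    free(buf);
}
```

With the classic MPI_Probe + MPI_Recv pair, the library matches the message twice and, in a multithreaded layer, a second thread can steal the probed message between the two calls; MPI_Mprobe/MPI_Mrecv avoid both issues, which is where the small speedup came from.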