CMA support for passing data between processes on the same node
PXSHM exists for this purpose on the net layers. However, it is not generally used in SMP mode to exchange data when multiple comm threads share the same node.
Shared memory safety and portability of our PXSHM implementation are concerns, so if PXSHM were the implementation choice, we would need to fail over smoothly to not using it on nodes that do not support it. Note that this is a runtime property: it can depend on kernel module load choices on compute nodes, which can differ from the head node on which Charm++ was compiled.
Some experimentation should be undertaken to determine where (if anywhere) this provides any benefit.
#1 Updated by Sam White almost 2 years ago
The main things to investigate here appear to be xpmem, pxshm, knem, limic, and cma. LiMIC appears to be tied to MPI's pt2pt semantics. OpenMPI's "vader" shared-memory BTL is able to use any of those implementations that are available (http://blogs.cisco.com/performance/the-vader-shared-memory-transport-in-open-mpi-now-featuring-3-flavors-of-zero-copy) and I've seen results from several papers (look at Nathan Hjelm's publications) that xpmem is the best performing of those, with knem being second most performant and perhaps more portable than xpmem. OpenMPI can get to <0.3us 1-byte message latency within a node using xpmem on Cray XE6 and XC40. I've also seen Intel MPI achieve ~0.5us shared-memory latency for 1-byte messages on KNLs and Haswells. From looking at Charm SMP pingpong we're usually 5-10x worse than those numbers.
The main benefit will be for large messages, so a good first target would be the zero copy send API.
#4 Updated by Sam White over 1 year ago
Nitin is working on adding support for using Cross Memory Attach (CMA) for this. We already have an implementation working for the zero copy send API, which shows good performance. CMA is available by default on Linux kernels v3.2+, so it is portable across most of the systems we care about.