Feature #1497

Shared memory method to pass data between processes that share the same node

Added by Eric Bohm 6 months ago. Updated 2 months ago.

PXSHM exists for this purpose on the net layers. However, it is not generally used in SMP mode to exchange data when multiple comm threads share the same node.

The shared-memory safety and portability of our pxshm implementation are a concern, so if PXSHM were the implementation choice, we would need to fail over smoothly to not using it on nodes that do not support it. Note that this is a runtime property: it can depend on which kernel modules are loaded on the compute nodes, which can differ from the head node on which Charm++ was compiled.

Some experimentation should be undertaken to determine where (if anywhere) this provides any benefit.


Feature #1478: Investigate use of pxshm in CmiAlloc (New)

Feature #1655: Enable use of pxshm/xpmem on mpi and verbs builds (New)


#1 Updated by Sam White 6 months ago

The main things to investigate here appear to be xpmem, pxshm, knem, limic, and cma. LiMIC appears to be tied to MPI's pt2pt semantics. OpenMPI's "vader" shared-memory BTL is able to use any of those implementations that are available, and I've seen results from several papers (look at Nathan Hjelm's publications) showing that xpmem is the best performing of those, with knem second and perhaps more portable than xpmem. OpenMPI can get to <0.3us 1-byte message latency within a node using xpmem on Cray XE6 and XC40. I've also seen Intel MPI achieve ~0.5us shared-memory latency for 1-byte messages on KNLs and Haswells. Judging from Charm++ SMP pingpong, we're usually 5-10x worse than those numbers.

The main benefit will be for large messages, so a good first target would be the zero copy send API.

#2 Updated by Eric Bohm 2 months ago

  • Assignee set to Juan Galvez
