Feature #1497

Shared memory method to pass data between processes that share the same node

Added by Eric Bohm 7 months ago. Updated 8 days ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
Start date: 03/31/2017
Due date: -
% Done: 100%

Description

PXSHM exists for this purpose on the net layers. However it is not generally used in SMP mode to exchange data when multiple comm threads share the same node.

Shared-memory safety and portability of our pxshm implementation are a concern, so if PXSHM were the implementation choice, we would need to fail over smoothly to not using it on nodes that do not support it. Note that this is a runtime property: it can depend on kernel module load choices on the compute nodes, which can differ from the head node on which Charm was compiled.

Some experimentation should be undertaken to determine where (if anywhere) this provides any benefit.


Subtasks

Feature #1478: Investigate use of pxshm in CmiAlloc (Closed)

Feature #1655: Enable use of shm transport for regular messages in LRTS (New)

Feature #1721: pxshm in OFI (Rejected, assignee Nitin Bhat)

Feature #1722: pxshm for mpi layer (Rejected, assignee Sam White)

History

#1 Updated by Sam White 7 months ago

The main things to investigate here appear to be xpmem, pxshm, knem, limic, and cma. LiMIC appears to be tied to MPI's pt2pt semantics. OpenMPI's "vader" shared-memory BTL can use any of those implementations that are available (http://blogs.cisco.com/performance/the-vader-shared-memory-transport-in-open-mpi-now-featuring-3-flavors-of-zero-copy), and I've seen results from several papers (see Nathan Hjelm's publications) showing that xpmem is the best performing of those, with knem second most performant and perhaps more portable than xpmem. OpenMPI can get to <0.3us 1-byte message latency within a node using xpmem on Cray XE6 and XC40, and I've seen Intel MPI achieve ~0.5us shared-memory latency for 1-byte messages on KNLs and Haswells. From looking at Charm SMP pingpong, we're usually 5-10x worse than those numbers.

The main benefit will be for large messages, so a good first target would be the zero copy send API.

#2 Updated by Eric Bohm 4 months ago

  • Assignee set to Juan Galvez

#3 Updated by Juan Galvez about 1 month ago

  • Target version changed from 6.9.0 to Unscheduled

#4 Updated by Sam White 23 days ago

Nitin is working on adding support for using Cross Memory Attach (CMA) for this. We already have an implementation working for the zero copy send API, which shows good performance. CMA is available by default on Linux kernels v3.2+, so it is portable across most of the systems we care about.
