Feature #23

Optimization of MPI layer

Added by Nikhil Jain over 6 years ago. Updated 10 months ago.

Status: New
Priority: Normal
Assignee: Nitin Bhat
Category: -
Target version: Unscheduled
Start date: 02/06/2013
Due date: -
% Done: 0%


Description

Revisit MPI layer and possibly rewrite to improve performance.

History

#1 Updated by Phil Miller over 6 years ago

  • Status changed from New to In Progress

I've heard through the grapevine (not to be confused with Harshitha's LB strategy) that Ralf has achieved near-parity with some native layer in his work at Argonne for Pavan. Marking it 'in-progress' accordingly.

#2 Updated by Phil Miller over 6 years ago

  • Category set to Machine Layers

#3 Updated by Phil Miller over 6 years ago

  • Target version set to Unscheduled

#4 Updated by Nikhil Jain over 6 years ago

Quoting Ralf's email on this:

As for MPI, we've found something that apparently makes it as fast as gemini_gni-crayxe, and the same seems to be true of ibverbs. We're testing this on Intrepid and Vesta as well. Pavan doesn't care about tcp (i.e. MPI over sockets) at all, which is where the performance difference is biggest (almost 4x).

#5 Updated by Ramprasad Venkataraman over 6 years ago

  • Project changed from Charm++ to 5
  • Category deleted (Machine Layers)
  • Target version deleted (Unscheduled)

#6 Updated by Phil Miller over 6 years ago

  • Project changed from 5 to Charm++

#7 Updated by Phil Miller over 6 years ago

  • Assignee changed from Nikhil Jain to Laxmikant "Sanjay" Kale

#8 Updated by Phil Miller over 6 years ago

  • Target version set to 6.6.0

#9 Updated by Nikhil Jain over 5 years ago

  • Target version changed from 6.6.0 to Unscheduled

#10 Updated by Sam White over 2 years ago

  • Status changed from In Progress to New

It looks like none of the work mentioned above was ever merged...

#11 Updated by Sam White over 1 year ago

  • Assignee deleted (Laxmikant "Sanjay" Kale)

#12 Updated by Sam White about 1 year ago

Small optimization to use MPI-3's MPI_Mprobe and MPI_Mrecv where possible: https://charm.cs.illinois.edu/gerrit/#/c/charm/+/2785/

Edit: we ended up reverting this because support for MPI_Mprobe is spotty and not really detectable at configure time in a portable manner.
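
For context, the pattern that change was after is roughly the following (a minimal sketch assuming a generic polling loop; the helper name poll_one_message is made up here and this is not the code from the linked Gerrit change): MPI_Improbe/MPI_Mrecv replace an MPI_Iprobe/MPI_Recv pair so that the probed message is atomically dequeued and then received into an exactly sized buffer, which also avoids races between the probe and the receive when multiple threads poll.

    #include <mpi.h>
    #include <stdlib.h>

    /* Hypothetical helper: poll for one incoming message; returns a
     * malloc'd buffer holding it (caller frees), or NULL if nothing has
     * arrived yet. */
    static char *poll_one_message(MPI_Comm comm, int *out_bytes)
    {
        int flag = 0;
        MPI_Message msg;
        MPI_Status status;

        /* Non-blocking matched probe: on success the message is removed
         * from the matching queue and bound to 'msg', so no other probe
         * or receive can match it afterwards. */
        MPI_Improbe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &msg, &status);
        if (!flag)
            return NULL;

        int nbytes = 0;
        MPI_Get_count(&status, MPI_BYTE, &nbytes);

        char *buf = (char *)malloc(nbytes);
        /* Matched receive: consumes exactly the message probed above,
         * into a buffer sized from the probed count. */
        MPI_Mrecv(buf, nbytes, MPI_BYTE, &msg, MPI_STATUS_IGNORE);

        *out_bytes = nbytes;
        return buf;
    }

The appeal of the matched-probe calls is that the MPI_Message handle ties the receive to the specific probed message, which removes the probe-then-receive race and lets the runtime allocate exactly the right buffer size.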

#13 Updated by Eric Bohm 10 months ago

  • Assignee set to Nitin Bhat
