Bug #1050

Feature #944: AMPI compliance with MPI-2.2 standard

AMPI collectives support for MPI_IN_PLACE

Added by Sam White about 3 years ago. Updated over 2 years ago.

Status:
Merged
Priority:
Normal
Category:
AMPI
Target version:
Start date:
04/29/2016
Due date:
% Done:

100%


Description

AMPI currently implements MPI_IN_PLACE only for MPI_Reduce, MPI_Ireduce, MPI_Allreduce, MPI_Iallreduce, MPI_Reduce_scatter_block, and MPI_Reduce_scatter.

We are missing MPI_IN_PLACE support for the following functions:
MPI_Scan
MPI_Exscan
MPI_Gather
MPI_Gatherv
MPI_Allgather
MPI_Allgatherv
MPI_Alltoall
MPI_Alltoallv
MPI_Alltoallw
MPI_Scatter
MPI_Scatterv

And all of the non-blocking variants of these.
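For reference, MPI_IN_PLACE changes which buffer holds a rank's own contribution: for MPI_Gather, the root passes it as the send buffer and the library must assume the root's data is already stored at offset `root*count` inside the receive buffer. A minimal sketch of that semantics in plain Python (no real MPI; `gather` and `IN_PLACE` are illustrative stand-ins, with all ranks simulated in one process):

```python
IN_PLACE = object()  # stand-in for the MPI_IN_PLACE sentinel

def gather(sendbufs, recvbuf, count, root):
    """Gather `count` elements from each simulated rank into recvbuf on
    the root; sendbufs[r] is rank r's send buffer, and the root may pass
    IN_PLACE instead of a buffer."""
    for rank, sbuf in enumerate(sendbufs):
        if rank == root and sbuf is IN_PLACE:
            continue  # root's block already sits at recvbuf[root*count:]
        recvbuf[rank*count:(rank+1)*count] = sbuf[:count]
    return recvbuf

# usage: 3 ranks, count=2, root=1; the root's slot is pre-filled
recv = [0] * 6
recv[2:4] = [30, 40]  # rank 1's contribution, already "in place"
gather([[10, 20], IN_PLACE, [50, 60]], recv, 2, root=1)
# recv == [10, 20, 30, 40, 50, 60]
```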

History

#1 Updated by Sam White about 3 years ago

  • Parent task set to #944
  • Tracker changed from Feature to Bug

#2 Updated by Sam White about 3 years ago

ampif.C needs to test whether buffers are NULL and, if so, set them to MPI_IN_PLACE. Currently this is only done for MPI_Reduce and MPI_Allreduce. Support was added to the other reduction routines in AMPIF here: https://charm.cs.illinois.edu/gerrit/#/c/1178/
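The mapping described above can be sketched as follows (plain Python, with `None` standing in for a NULL buffer arriving through the Fortran bindings; `from_fortran` is a hypothetical helper name, not the actual ampif.C code):

```python
MPI_IN_PLACE = object()  # stand-in for the C sentinel constant

def from_fortran(buf):
    """Translate a buffer argument coming from the Fortran bindings:
    a NULL buffer means the Fortran caller passed MPI_IN_PLACE, so
    substitute the C sentinel before forwarding to the C routine."""
    return MPI_IN_PLACE if buf is None else buf
```

Each Fortran-facing collective wrapper would apply this translation to its buffer arguments before calling into the C implementation.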

#3 Updated by Sam White about 3 years ago

  • Subject changed from AMPI is missing MPI_IN_PLACE support for some collectives to AMPI is missing MPI_IN_PLACE support for non-reduction collectives

#4 Updated by Sam White about 3 years ago

  • Assignee changed from Sam White to Edward Hutter

#5 Updated by Sam White about 3 years ago

  • Status changed from New to Implemented

#6 Updated by Sam White about 3 years ago

  • Subject changed from AMPI is missing MPI_IN_PLACE support for non-reduction collectives to AMPI collectives support for MPI_IN_PLACE
  • Status changed from Implemented to In Progress
  • % Done changed from 0 to 90

MPI_IN_PLACE support for everything but MPI_Alltoall was added here: https://charm.cs.illinois.edu/gerrit/#/c/1303/ All that remains is a different algorithm for large message sizes with non-power-of-two communicator sizes.
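For context, an in-place MPI_Alltoall amounts to a distributed block transpose: block b of rank r's buffer ends up swapped with block r of rank b's buffer, so it can be done with pairwise exchanges and no full-size scratch buffer. A sketch of that data movement in plain Python, simulating all ranks in one process (illustrative only; this is not what the Gerrit patch implements):

```python
def alltoall_in_place(bufs):
    """bufs[r][b] starts as the block rank r sends to rank b and ends as
    the block rank r received from rank b: a pairwise block transpose."""
    p = len(bufs)
    for r in range(p):
        for b in range(r + 1, p):  # exchange each (r, b) pair exactly once
            bufs[r][b], bufs[b][r] = bufs[b][r], bufs[r][b]
    return bufs

# usage: 3 simulated ranks; (r, b) marks the payload rank r sends to rank b
bufs = [[(r, b) for b in range(3)] for r in range(3)]
alltoall_in_place(bufs)
# bufs[r][b] is now (b, r): the block rank b sent to rank r
```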

#7 Updated by Sam White over 2 years ago

  • Assignee changed from Edward Hutter to Matthias Diener

As noted above, the only thing left to do on this is MPI_Alltoall.

#8 Updated by Sam White over 2 years ago

What's the status of this? It would be good to have a fix merged before the 6.8.0 release.

#9 Updated by Matthias Diener over 2 years ago

I just pushed a patch that implements MPI_IN_PLACE for MPI_Alltoall:

https://charm.cs.illinois.edu/gerrit/#/c/2240/

Tested with a small custom MPI_Alltoall test program.

#10 Updated by Sam White over 2 years ago

  • Status changed from In Progress to Implemented

The fix for large Alltoalls using MPI_IN_PLACE, the last remaining part of this issue: https://charm.cs.illinois.edu/gerrit/#/c/2240/

#11 Updated by Sam White over 2 years ago

  • Closed date set to 2017-02-14 21:27:35.514374
  • % Done changed from 90 to 100
  • Status changed from Implemented to Merged