MPI RMA routines take both origin_type and target_type arguments. Because the operation is one-sided, both are specified at the origin, where the target's datatype handles are not valid. This means we can't simply send an int over as the target_type's index into the target rank's DDT type table: the same index can name different datatypes on different ranks. Instead, we send over a serialized CkDDT_DataType object that AMPI can use to deserialize the payload on the remote side.
An example of this issue for MPI_Accumulate is here: http://mpi.deino.net/mpi_functions/MPI_Win_fence.html
#3 Updated by Sam White over 2 years ago
When I refactored AMPI's MPI_Op data structures to support non-commutative operations, I made the table used to look up Ops a member of ampiParent. Now applyOp() has to access that table in ampiParent, whereas previously MPI_Ops (like MPI_MAX) were just function pointers. This is not a problem for two-sided communication, because we don't access the MPI_Op from the runtime when receiving a message normally. But for one-sided messages, applyOp() is called from the runtime's thread, so we need to refactor the MPI_Op data structures to be accessible from the runtime.
I opened a separate issue for this: https://charm.cs.illinois.edu/redmine/issues/1356
#5 Updated by Sam White over 2 years ago
- Target version changed from 6.8.1 to 6.8.0
I think we may be able to get basic support for this into the 6.8.0 release (~ next week), and then follow up with performance improvements afterwards. We'll also need to add tests for these cases to the recently added RMA tests in megampi.