
MPI_IN_PLACE in MPI_Sendrecv/MPI_Isendrecv, deprecate MPI_Sendrecv_replace/MPI_Isendrecv_replace #860

Open
softwaretraff opened this issue Dec 9, 2024 · 4 comments
Labels
mpi-6 (For inclusion in the MPI 5.1 or 6.0 standard), wg-p2p (Point-to-Point Working Group)

Comments

@softwaretraff

Problem

After the introduction of the MPI_IN_PLACE buffer argument for collectives, dedicated function interfaces for point-to-point send-receive-replace functionality are, strictly speaking, no longer needed. Instead, MPI_IN_PLACE could be allowed as the sendbuf argument, indicating that data are sent from and received into the recvbuf, with the recvcount and recvtype arguments determining the structure of the data.

Proposal

Deprecate and eventually remove the MPI_Sendrecv_replace() and MPI_Isendrecv_replace() interfaces, and remove all text pertaining to them. Add extra explanation to MPI_Sendrecv() and MPI_Isendrecv() describing the use and semantics of MPI_IN_PLACE.

Changes to the Text

MPI 4.1, p.45, add after text explaining MPI_Sendrecv():

MPI_IN_PLACE may be given as the sendbuf argument. This indicates that data will be sent out of the receive buffer and new data will be received into the receive buffer as well. In that case, the sendcount and sendtype arguments are not significant; the recvcount and recvtype arguments determine the type of both the data sent and the data received. Note, however, that both sendtag and recvtag remain significant.

Advice to implementors: Internal buffering of the data being received may be necessary in order to avoid race conditions.

MPI 4.1, p.77, add after explanation of MPI_Isendrecv():

Same as above.

Impact on Implementations

Implementations need only add a sendbuf check for MPI_IN_PLACE; the existing implementation of MPI_Sendrecv_replace can be reused.

Impact on Users

Users either change their code or write a small wrapper implementing the old functionality with MPI_Sendrecv and MPI_IN_PLACE.

References and Pull Requests

@jeffhammond
Member

Who benefits from this? Implementers have to keep all the same code. Users of MPI_Sendrecv_replace are annoyed, to the extent that anyone cares about deprecated functions. Teachers of MPI shouldn't be wasting time on this anyways.

I get that this makes MPI more sensible and lean, but it seems like work for the MPI Forum that has essentially no benefits on the ecosystem, as compared to doing nothing.

I am not aware of any use of MPI_Sendrecv_replace, so the new capability to allow MPI_IN_PLACE will likely be unused as well.

@softwaretraff
Author

No use of MPI_Sendrecv_replace? Then just remove it... (see also my comment on MPI_Dims_create: remove this!)

I think there is some value in making the standard leaner and removing as much fat as possible. In the short run, this is more work than benefit, true; in the longer run, I think not.

Jesper

@jprotze

jprotze commented Dec 9, 2024

@jeffhammond
Member

I am confident that QE will be better if they replace that call with an OOP implementation that doesn't require MPI to call malloc and free O(N^2) times in a loop.

@wesbland wesbland added wg-p2p Point-to-Point Working Group mpi-6 For inclusion in the MPI 5.1 or 6.0 standard labels Jan 8, 2025
Projects
Status: To Do
Development

No branches or pull requests

4 participants