Problem
Specific function interfaces for point-to-point send-receive-replace functionality are, strictly speaking, no longer needed after the introduction of the MPI_IN_PLACE buffer argument for collectives. Instead, MPI_IN_PLACE could be an allowed sendbuf argument, indicating that data are sent from and received into the recvbuf, with the recvcount and recvtype arguments determining the structure of the data.
Proposal
Deprecate and eventually remove the MPI_Sendrecv_replace() and MPI_Isendrecv_replace() interfaces, removing all text pertaining to them. Add extra explanation to MPI_Sendrecv() and MPI_Isendrecv() describing the use and semantics of MPI_IN_PLACE.
Changes to the Text
MPI 4.1, p.45, add after text explaining MPI_Sendrecv():
MPI_IN_PLACE may be given as the sendbuf argument. This indicates that data will be sent out of the receive buffer and new data will be received into the receive buffer as well. In that case the sendcount and sendtype arguments are not significant; the recvcount and recvtype arguments determine the type of both the data sent and the data received. Note, however, that both sendtag and recvtag remain significant.
Advice to implementors: Internal buffering of the data being received may be necessary in order to avoid race conditions.
MPI 4.1, p.77, add after the explanation of MPI_Isendrecv():
The same text as above.
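To illustrate the text proposed above (this is not part of the proposed standard text, and the call is only valid if the proposal is adopted), here is a minimal sketch of a ring shift in C, where each rank sends its buffer to the right neighbor and overwrites it with the buffer received from the left neighbor. Passing 0 and MPI_DATATYPE_NULL for the insignificant sendcount and sendtype arguments is an assumption of this sketch:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int buf[4] = { rank, rank, rank, rank };
    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;

    /* Proposed semantics: buf is sent to 'right' and overwritten with
       the data received from 'left'. sendcount and sendtype are not
       significant; recvcount and recvtype (4, MPI_INT) describe both
       the outgoing and the incoming data. */
    MPI_Sendrecv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, right, 0,
                 buf, 4, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
```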
Impact on Implementations
Only a check of the sendbuf argument for MPI_IN_PLACE is needed; the existing implementation of MPI_Sendrecv_replace can be reused for this path, as sketched below.
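A minimal sketch of such an in-place path, assuming the common pack-based staging technique; the helper name sendrecv_in_place is hypothetical, and an implementation would dispatch to it (or to its existing MPI_Sendrecv_replace machinery) when sendbuf == MPI_IN_PLACE:

```c
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical in-place branch of MPI_Sendrecv: the incoming message is
   staged in a packed temporary buffer so it cannot overwrite buf before
   the send has read it (cf. the advice to implementors above). */
static int sendrecv_in_place(void *buf, int count, MPI_Datatype type,
                             int dest, int sendtag, int source, int recvtag,
                             MPI_Comm comm, MPI_Status *status)
{
    int packsize, position = 0, err;
    MPI_Request req;

    err = MPI_Pack_size(count, type, comm, &packsize);
    if (err != MPI_SUCCESS) return err;

    void *tmp = malloc(packsize > 0 ? (size_t)packsize : 1);
    if (tmp == NULL) return MPI_ERR_NO_MEM;

    /* A message sent with any datatype may be received as MPI_PACKED. */
    err = MPI_Irecv(tmp, packsize, MPI_PACKED, source, recvtag, comm, &req);
    if (err == MPI_SUCCESS)
        err = MPI_Send(buf, count, type, dest, sendtag, comm);
    if (err == MPI_SUCCESS)
        err = MPI_Wait(&req, status);
    if (err == MPI_SUCCESS)
        err = MPI_Unpack(tmp, packsize, &position, buf, count, type, comm);

    free(tmp);
    return err;
}
```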
Impact on Users
Code change: either pass MPI_IN_PLACE to MPI_Sendrecv directly, or write a small wrapper implementing the old functionality with MPI_Sendrecv and MPI_IN_PLACE, as sketched below.
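A minimal sketch of such a wrapper, assuming the proposal is adopted; the wrapper name is hypothetical, and passing 0 and MPI_DATATYPE_NULL for the insignificant send arguments is an assumption:

```c
#include <mpi.h>

/* Hypothetical drop-in replacement for a deprecated
   MPI_Sendrecv_replace() call, built on the proposed MPI_IN_PLACE
   support in MPI_Sendrecv. */
static int my_sendrecv_replace(void *buf, int count, MPI_Datatype datatype,
                               int dest, int sendtag,
                               int source, int recvtag,
                               MPI_Comm comm, MPI_Status *status)
{
    return MPI_Sendrecv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, dest, sendtag,
                        buf, count, datatype, source, recvtag,
                        comm, status);
}
```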
References and Pull Requests
Who benefits from this? Implementers have to keep all the same code. Users of MPI_Sendrecv_replace are annoyed, to the extent that anyone cares about deprecated functions. Teachers of MPI shouldn't be wasting time on this anyway.
I get that this makes MPI more sensible and lean, but it seems like work for the MPI Forum that has essentially no benefit for the ecosystem, compared to doing nothing.
I am not aware of any use of MPI_Sendrecv_replace, so the new capability to allow MPI_IN_PLACE will likely be unused as well.
No use of MPI_Sendrecv_replace? Then just remove it... (see also my comment on MPI_Dims_create: remove this!)
I think there is some value in making the standard leaner and removing as much fat as possible. In the short run this is more work than benefit, true; in the longer run, I think not.
I am confident that QE will be better if they replace that call with an OOP implementation that doesn't require MPI to call malloc and free O(N^2) times in a loop.