
fortran MPI_Waitall array_of_requests etc to dimension(*) #9566

Merged (1 commit, Oct 26, 2021)

Conversation

@markalle (Contributor) commented on Oct 19, 2021

Looking at the third language binding for each MPI function, e.g.
the Fortran "USE mpi" or "INCLUDE 'mpif.h'" binding, I looked
at the various lines that had "dimension" and "array_of_...".
There were several, like MPI_Waitall, where I changed

-  integer, dimension(count), intent(inout) :: array_of_requests
+  integer, dimension(*), intent(inout) :: array_of_requests

and possibly one array_of_statuses for a spawn call.

Signed-off-by: Mark Allen <markalle@us.ibm.com>

#9484
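
For context, here is a minimal, self-contained sketch of what an explicit interface for MPI_Waitall looks like with the assumed-size declaration, written the way the MPI standard specifies the "USE mpi"/mpif.h binding. The module name and the MPI_STATUS_SIZE value below are placeholders for illustration, not Open MPI's actual generated source.

! Sketch only; not copied from Open MPI's mpi module.
module mpi_waitall_iface_sketch
   implicit none
   integer, parameter :: MPI_STATUS_SIZE = 6   ! placeholder value for the sketch

   interface
      subroutine MPI_Waitall(count, array_of_requests, array_of_statuses, ierror)
         import :: MPI_STATUS_SIZE
         integer, intent(in) :: count
         ! was: integer, dimension(count), intent(inout) :: array_of_requests
         integer, dimension(*), intent(inout) :: array_of_requests
         integer, dimension(MPI_STATUS_SIZE, *), intent(out) :: array_of_statuses
         integer, intent(out) :: ierror
      end subroutine MPI_Waitall
   end interface
end module mpi_waitall_iface_sketch

An assumed-size dummy, dimension(*), accepts any actual array argument with at least count elements, which matches the standard's ARRAY_OF_REQUESTS(*) binding; the explicit-shape dimension(count) form ties the declared size to the count argument, which is not what the standard specifies.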

@jsquyres (Member) left a comment


@markalle Looks like you added MPI_WAITSOME, but there's still a conflict that needs to be fixed.

@gpaulsen (Member) commented:

@jsquyres ^^^

@jsquyres (Member) commented:

This should be cherry-picked to v5.0.x.

I think cherry-picking this back to v4.0.x and v4.1.x would have no functional effect, but given that the bindings have literally been this way for years and no one has noticed, I think we should take the zero-risk approach and not back-port to v4.0.x and v4.1.x. If the issue comes up there, the fix is in v5.0.x.
