What version of Open MPI are you using? (e.g., v3.0.5, v4.0.2, git branch name and hash, etc.)
Open MPI main branch
Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)
Built from current main branch (3/22/22)
If you are building/installing from a git clone, please copy-n-paste the output from git submodule status.
git submodule status
1b86a35db2816ee9c0f3a41988005a2ba7d29adb 3rd-party/openpmix (v1.1.3-3481-g1b86a35d)
91f791e209ccbdfb4b8647900d292ef51d52f37d 3rd-party/prrte (psrvr-v2.0.0rc1-4319-g91f791e2)
Please describe the system on which you are running
Operating system/version:
RHEL 8.4
Computer hardware:
Single Power8 node
Network type:
Localhost
Details of the problem
I ran the set of self-checking tests from ompi-tests-public/collective-big-count with the collective components specified as --mca coll_adapt_priority 100 --mca coll adapt,basic,sm,self,inter,libnbc.
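For reference, a launch line consistent with those settings would look something like the following; the process count and the specific test binary are assumptions, since the exact command is not shown here (rank 2 appears in the output below, so at least 3 ranks were used):

```
mpirun -np 3 \
    --mca coll_adapt_priority 100 \
    --mca coll adapt,basic,sm,self,inter,libnbc \
    ./test_scatter_uniform_count
```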
The following testcases had failures. The remaining testcases were successful:
test_allgather_uniform_count
test_allreduce_uniform_count
test_alltoall_uniform_count
test_gather_uniform_count
test_scatter_uniform_count
The tests were compiled by running make in the directory containing the source files.
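The build step would be along these lines (assuming the test suite lives under the open-mpi GitHub organization):

```
git clone https://github.com/open-mpi/ompi-tests-public.git
cd ompi-tests-public/collective-big-count
make
```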
The following environment variables were set for all tests:
Results from MPI_Iscatter(int x 6442450941 = 25769803764 or 24.0 GB):
Rank 2: ERROR: DI in 2147483647 of 2147483647 slots ( 100.0 % wrong)
Rank 1: PASSED
Rank 0: PASSED
Results from MPI_Iscatter(double _Complex x 6442450941 = 103079215056 or 96.0 GB):
Rank 2: ERROR: DI in 2147483647 of 2147483647 slots ( 100.0 % wrong)
Rank 1: PASSED
Rank 0: PASSED
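Note that the reported wrong-slot count, 2147483647, is exactly INT_MAX, and the total element count 6442450941 is 3 x 2147483647, which hints at a 32-bit truncation somewhere in the count handling. For context, a self-checking big-count MPI_Iscatter has roughly the shape below. This is a minimal sketch assuming the MPI-4 large-count binding (MPI_Iscatter_c) available on main; the initialization/verification pattern is an assumption, not the actual ompi-tests-public source.

```c
/* Sketch of a self-checking big-count MPI_Iscatter (not the actual test
 * source).  Assumes the MPI-4 large-count binding MPI_Iscatter_c, which
 * is available on Open MPI's main branch. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_ELEMENTS 6442450941LL   /* 3 * INT_MAX: forces the big-count path */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Count count = NUM_ELEMENTS;
    int *recv_buf = malloc((size_t)count * sizeof(int));   /* ~24 GiB per rank */
    int *send_buf = NULL;
    if (0 == rank) {
        /* Root holds one slice per rank; slice r is filled with the value r
         * so each receiver knows exactly what to expect. */
        send_buf = malloc((size_t)size * (size_t)count * sizeof(int));
        for (MPI_Count i = 0; i < (MPI_Count)size * count; ++i) {
            send_buf[i] = (int)(i / count);
        }
    }

    MPI_Iscatter_c(send_buf, count, MPI_INT,
                   recv_buf, count, MPI_INT, 0, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    /* Self-check: count the slots that do not hold this rank's id. */
    MPI_Count wrong = 0;
    for (MPI_Count i = 0; i < count; ++i) {
        if (recv_buf[i] != rank) {
            wrong++;
        }
    }
    if (0 == wrong) {
        printf("Rank %d: PASSED\n", rank);
    } else {
        printf("Rank %d: ERROR in %lld of %lld slots\n",
               rank, (long long)wrong, (long long)count);
    }

    free(recv_buf);
    free(send_buf);
    MPI_Finalize();
    return 0;
}
```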
I got some, maybe most, of them, but there are other issues that need a little more thought. There are also a few corner cases where one of the processes gets killed by the OOM killer, and that's something you cannot trap in gdb. I'll push a PR soon for both #10186 and #10187.
Thank you for taking the time to submit an issue!
Background information
What version of Open MPI are you using? (e.g., v3.0.5, v4.0.2, git branch name and hash, etc.)
Open MPI main branch
Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)
Built from current main branch (3/22/22)
If you are building/installing from a git clone, please copy-n-paste the output from git submodule status.
Please describe the system on which you are running
Details of the problem
I ran the set of self-checking tests from ompi-tests-public/collective-big-count with the collective components specified as --mca coll_adapt_priority 100 --mca coll adapt,basic,sm,self,inter,libnbc.
The following testcases had failures. The remaining testcases were successful:
The tests were compiled by running make in the directory containing the source files.
The following environment variables were set for all tests:
The following command failed in an MPI_Allgather call.
This command failed with an assert and the following traceback:
The following command failed in an MPI_Allreduce call.
The assert and traceback look similar:
The following command failed with a self-check that detected invalid results, followed by a SIGSEGV.
The error message and traceback are:
The following command failed with an assert and traceback similar to test_allreduce_uniform_count, except that the failing MPI call is MPI_Alltoall:
The following command failed with an error message indicating a failed self-check, followed by a double free or storage corruption:
The following command failed with an assert and traceback similar to test_allreduce_uniform_count, except that the failing MPI call is MPI_Reduce:
The following command failed with a self-check message indicating the testcase generated invalid data:
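For context, the blocking calls exercised above have roughly the shape below. This is a minimal sketch assuming the MPI-4 large-count binding (MPI_Allreduce_c); the count and the verification pattern are assumptions, not the actual test source.

```c
/* Sketch of a self-checking big-count MPI_Allreduce (not the actual test
 * source).  Assumes the MPI-4 large-count binding MPI_Allreduce_c. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Count count = 6442450941LL;                 /* > INT_MAX */
    int *in  = malloc((size_t)count * sizeof(int));
    int *out = malloc((size_t)count * sizeof(int));
    for (MPI_Count i = 0; i < count; ++i) {
        in[i] = 1;                                  /* sum == communicator size */
    }

    MPI_Allreduce_c(in, out, count, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Self-check: every slot of the result must equal the communicator size. */
    MPI_Count wrong = 0;
    for (MPI_Count i = 0; i < count; ++i) {
        if (out[i] != size) {
            wrong++;
        }
    }
    printf("Rank %d: %s (%lld of %lld slots wrong)\n", rank,
           (0 == wrong) ? "PASSED" : "ERROR",
           (long long)wrong, (long long)count);

    free(in);
    free(out);
    MPI_Finalize();
    return 0;
}
```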