Running PETSc.jl under mpirun produces the following MPI error at program exit:
```
mpirun has exited due to process rank 0 with PID 82427 on
node n0000.scs00 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
```
I found that this occurs because the MPI initialized by PETSc.init() is never finalized. A simple fix is to add MPI.Finalize() after line 34 in petsc_com.jl. Could this be changed in the master branch?
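Until that lands, the finalization can be registered in the driver script itself. Below is a minimal sketch, not the proposed patch to petsc_com.jl; it assumes MPI.jl's standard API (MPI.Init, MPI.Initialized, MPI.Finalized, MPI.Finalize) plus Julia's built-in atexit, and mirrors the PETSc.init() call from the report:

```julia
using MPI
using PETSc

# Initialize MPI ourselves if PETSc.init() has not already done so.
MPI.Initialized() || MPI.Init()

# Ensure MPI.Finalize() runs on every rank before the process exits,
# satisfying mpirun's rule that every process which calls init must
# also call finalize. (This is a workaround sketch, not PETSc.jl code.)
atexit() do
    MPI.Finalized() || MPI.Finalize()
end

PETSc.init()

# ... assemble and solve with PETSc here ...
```

Registering the hook with atexit means MPI.Finalize() also runs if the script exits early or throws, which avoids the "abnormal termination" report from mpirun shown above.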