There are a number of things that need to be done to complete the OMPI integration effort. I'm going to list them here for tracking purposes and in the hope that others might pick some of them up. If you do, please edit this comment and put your name at the beginning of the item you are working on so we avoid duplicate effort. Obviously, there will be some "ompi" items in this list. This is a "living" list, so expect more things to be added as they are identified.
- [@rhc54] Revise command line setup/parsing. It needs to be expanded a bit to allow for multiple command line definitions, and to handle different MCA params for OMPI vs. PRRTE.
- Singleton support. IIRC, I enabled PMIx_Init to support singletons - i.e., when the client is not launched by a daemon and thus has no contact information for a PMIx server. However, I didn't do anything about the case of singleton comm_spawn, where the client needs to start a PMIx server and then connect back to it (first sketch after this list).
- Resolve reported comm_spawn issues. There are multiple reports of comm_spawn problems on the OMPI mailing lists and in issues, including missing support for various MPI_Info arguments such as "add_hostfile" that will likely require some updates to PRRTE (test-case sketch after this list).
- Decide what to do about legacy ORTE MCA params. These probably need to be detected and converted to their PRRTE equivalents (translation sketch after this list).
- Update PRRTE frameworks to use MCA params solely for setting default behavior, overridden on a per-job basis by user specifications (precedence sketch after this list).
- [@jsquyres] Come up with a way for "ompi_info" to include PRRTE information (one possibility is sketched after this list).
- Resolve multi-mpirun connect/accept issues - do we auto-detect the presence of another DVM and launch within it, or do we launch a 2nd DVM and "connect" between them, or...? (The detection half is sketched after this list.)
- Devise support for the user obtaining an MPI "port", printing it out, and then feeding it to another mpirun on the cmd line for connect/accept (accept/connect sketch after this list).
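Some rough sketches for several of the items above follow; all are illustrations under stated assumptions, not settled designs or actual OMPI/PRRTE code.

Singleton support: a minimal sketch of the client side, assuming only what the item states - PMIx_Init already succeeds when no server contact info is present. The comment marks the piece that is still missing for singleton comm_spawn.

```c
#include <stdio.h>
#include <pmix.h>

int main(int argc, char **argv)
{
    pmix_proc_t myproc;
    pmix_status_t rc;

    /* Works today even for a singleton: PMIx_Init falls back to
     * self-contained operation when no server contact info is found. */
    rc = PMIx_Init(&myproc, NULL, 0);
    if (PMIX_SUCCESS != rc) {
        fprintf(stderr, "PMIx_Init failed: %s\n", PMIx_Error_string(rc));
        return 1;
    }
    printf("Running as %s.%u\n", myproc.nspace, myproc.rank);

    /* The missing piece: a singleton that later calls MPI_Comm_spawn
     * (i.e., PMIx_Spawn) has no server to service the request.  The
     * client would first have to start a PMIx server and connect back
     * to it before the spawn can proceed. */

    PMIx_Finalize(NULL, 0);
    return 0;
}
```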
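comm_spawn issues: a minimal test case exercising the "add_hostfile" info key mentioned above, useful for reproducing the reports. The "./child" executable and "hosts.txt" file are placeholders.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    MPI_Info info;
    int errcodes[2];

    MPI_Init(&argc, &argv);

    /* "add_hostfile" is one of the MPI_Info keys reported as unsupported;
     * the file name here is just a placeholder for a test case. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "add_hostfile", "hosts.txt");

    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, info, 0,
                   MPI_COMM_SELF, &intercomm, errcodes);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```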
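Legacy ORTE MCA params: a rough sketch of the kind of translation the detection step would feed. The simple prefix rewrite and the example param name are assumptions; a real conversion would need an explicit mapping table, since not every param survived the rename unchanged.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns a newly-allocated "prte_*" name for a legacy "orte_*" param,
 * or NULL if the name is not a legacy param (caller keeps the original). */
static char *translate_orte_param(const char *name)
{
    if (0 != strncmp(name, "orte_", 5)) {
        return NULL;
    }
    size_t len = strlen(name) + 1;          /* "prte_" is the same length */
    char *out = malloc(len);
    if (NULL != out) {
        snprintf(out, len, "prte_%s", name + 5);
    }
    return out;
}

int main(void)
{
    char *converted = translate_orte_param("orte_default_hostfile");
    printf("%s\n", converted ? converted : "(not a legacy param)");
    free(converted);
    return 0;
}
```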
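MCA params as defaults only: a hypothetical illustration of the intended precedence, not PRRTE framework code - the MCA param supplies the default, and a per-job directive (e.g., delivered as job info at spawn time) overrides it for that job without touching the global setting.

```c
#include <stdbool.h>
#include <stdio.h>

struct job_spec {
    bool has_timeout;     /* did the user give a per-job value? */
    int  timeout;         /* the per-job override, if any */
};

static int framework_timeout(const struct job_spec *job, int mca_default)
{
    if (job->has_timeout) {
        return job->timeout;      /* per-job specification wins */
    }
    return mca_default;           /* otherwise fall back to the MCA param */
}

int main(void)
{
    struct job_spec a = { .has_timeout = false };
    struct job_spec b = { .has_timeout = true, .timeout = 30 };
    printf("job a: %d, job b: %d\n",
           framework_timeout(&a, 60), framework_timeout(&b, 60));
    return 0;
}
```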
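ompi_info and PRRTE information: one possible approach (an assumption, not a settled design) is to have ompi_info shell out to the prte_info tool and fold its output into the report, so a single command covers both sides of the installation.

```c
#include <stdio.h>

int main(void)
{
    char line[4096];
    FILE *fp = popen("prte_info", "r");
    if (NULL == fp) {
        perror("prte_info");
        return 1;
    }
    while (NULL != fgets(line, sizeof(line), fp)) {
        printf("prrte: %s", line);   /* tag the lines so their origin is clear */
    }
    pclose(fp);
    return 0;
}
```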
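Multi-mpirun connect/accept: a sketch of the "auto-detect" half of the question, assuming the PMIx tool path - a tool can ask to prefer an already-running system-level server (e.g., a persistent DVM) before starting its own. Whether that is the right policy for mpirun is exactly the open question above.

```c
#include <stdio.h>
#include <stdbool.h>
#include <pmix_tool.h>

int main(int argc, char **argv)
{
    pmix_proc_t myproc;
    pmix_info_t info;
    bool flag = true;
    pmix_status_t rc;

    /* Ask PMIx to look for a system-level server (e.g., a running DVM) first. */
    PMIX_INFO_LOAD(&info, PMIX_CONNECT_SYSTEM_FIRST, &flag, PMIX_BOOL);
    rc = PMIx_tool_init(&myproc, &info, 1);
    if (PMIX_SUCCESS != rc) {
        fprintf(stderr, "no server found - would start our own DVM (%s)\n",
                PMIx_Error_string(rc));
        return 1;
    }
    printf("connected as %s.%u - an existing DVM is available\n",
           myproc.nspace, myproc.rank);
    PMIx_tool_finalize();
    return 0;
}
```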
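MPI "port" hand-off: a sketch of the user-level flow the last item describes, using standard MPI calls - one job opens a port and prints it, the user passes that string to a second job (here via argv for simplicity), and the two rendezvous with MPI_Comm_accept / MPI_Comm_connect. How the string actually gets onto the second mpirun's cmd line is the part still to be designed.

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);

    if (argc < 2) {
        /* server side: obtain the port string and print it for the user */
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port: %s\n", port);
        fflush(stdout);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &intercomm);
        MPI_Close_port(port);
    } else {
        /* client side: the port string was handed to us on the cmd line */
        strncpy(port, argv[1], MPI_MAX_PORT_NAME - 1);
        port[MPI_MAX_PORT_NAME - 1] = '\0';
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &intercomm);
    }

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```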