
threadpoolexecutor for distributing MPI #230

Conversation

sahiljhawar
Member

sahiljhawar commented Sep 10, 2023

This PR more closely follows the idea given in #215, repeatedly calling light_curve_analysis within the script. This parallelisation is free from all the issues faced in #229. Multiple methods are implemented to leverage the parallelisation based on user preference.
List of commands with explanation:

  • multi_config_analysis --config config.yaml --parallel --process 20: This runs all the configurations in parallel, dividing the 20 processes equally amongst the configs. Everything runs in parallel.
  • multi_config_analysis --config config.yaml --process 20: This runs the configurations serially, but each configuration uses 20 processes, one after another.
  • multi_config_analysis --config config.yaml --parallel: If --process-per-config is given in the YAML (if given, it must be given individually for every config; see the sketch after the notes below), that many processes are assigned to each configuration. Everything runs in parallel.

Things to note:

  • --process is strictly required if --process-per-config is not given
  • --process-per-config takes precedence over --process
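
For illustration, a hypothetical config.yaml along these lines; the layout, keys, and model names below are assumptions for the sketch, not taken from this PR:

analysis_1:
  model: Bu2019lm
  process-per-config: 12
analysis_2:
  model: TrPi2018
  process-per-config: 8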

injection.log now works as expected; logs from concurrent runs do not leak into each other.

@tylerbarna
Collaborator

If one runs multi_config_analysis --config config.yaml --parallel --process 20 for a config file that has, say, 30 different fits, will it start with 20 fits and then queue up the remaining 10 to start once the others have finished?

@sahiljhawar
Member Author

sahiljhawar commented Sep 11, 2023

@tylerbarna That's a good question; I hadn't considered this case. When --parallel is set, 20 // 30 here is 0, so the configs run with mpiexec -np 0 ... Even though it's 0, execution still starts with 1 process. From reading around online (and asking ChatGPT), it seems the behaviour depends on the MPI implementation; in my case -np 0 does not fail and starts with 1 process.
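
To make the hazard concrete, here is a minimal sketch of the allocation logic being described; the function name and command line are illustrative, not the actual PR code:

import shlex
import subprocess

def launch_all(configs, total_procs):
    # Integer division: 20 processes across 30 configs gives 0 per config
    procs_per_config = total_procs // len(configs)
    for cfg in configs:
        # With -np 0 the outcome is implementation-defined: some MPI builds
        # fall back to a single process, others may refuse to launch
        cmd = f"mpiexec -np {procs_per_config} light_curve_analysis --config {cfg}"
        subprocess.Popen(shlex.split(cmd))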

@tylerbarna
Collaborator

tylerbarna commented Sep 12, 2023

> @tylerbarna That's a good question; I hadn't considered this case. When --parallel is set, 20 // 30 here is 0, so the configs run with mpiexec -np 0 ... Even though it's 0, execution still starts with 1 process. From reading around online (and asking ChatGPT), it seems the behaviour depends on the MPI implementation; in my case -np 0 does not fail and starts with 1 process.

Looking through some Stack Overflow pages, concurrent.futures might already handle this (see here), but I've seen some conflicting info. I've also seen some things along the following lines that explicitly use queue.Queue():

import concurrent.futures
import queue

# Create a thread pool executor with 4 threads
executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

# Create a queue (concurrent.futures has no Queue class; it lives in the
# standard-library queue module)
task_queue = queue.Queue()

# Add tasks to the queue
task_queue.put(task1)
task_queue.put(task2)
task_queue.put(task3)

# Drain the queue, submitting each task to the executor
while not task_queue.empty():
    executor.submit(task_function, task_queue.get())

# Wait for all submitted tasks to finish
executor.shutdown(wait=True)

but I haven't experimented much with it.

There's also multiprocessing.Queue.

@sahiljhawar
Member Author

@tylerbarna I have indeed used concurrent.futures :') But what is the conflicting info you have seen?

@tylerbarna
Collaborator

> @tylerbarna I have indeed used concurrent.futures :') But what is the conflicting info you have seen?

Conflicting info regarding whether submitting more jobs than the maximum number of workers queues the excess automatically.
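
For what it's worth, the auto-queueing is easy to check directly: ThreadPoolExecutor keeps an internal work queue, so submitting more tasks than max_workers just leaves the excess pending until a worker frees up. A minimal self-contained demonstration (not from the PR):

import time
from concurrent.futures import ThreadPoolExecutor

def task(i):
    time.sleep(1)
    return i

with ThreadPoolExecutor(max_workers=20) as executor:
    # 30 submissions against 20 workers: the first 20 start immediately,
    # the remaining 10 wait in the executor's internal queue
    futures = [executor.submit(task, i) for i in range(30)]
    results = [f.result() for f in futures]

print(len(results))  # 30: all tasks ran, none were dropped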

@sahiljhawar
Member Author

@tylerbarna Okay. We can try to implement along these lines. Maybe default to 1 process per configuration if the number of configs exceeds the number of processes given. Or make it a strict policy that the number of processes must be an integer multiple of the number of configurations if the analysis needs to run in parallel. We can also try a queue-based execution, as you asked here: #230 (comment)

@mcoughlin @tsunhopang Any comments regarding these?
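
A sketch of the defaulting policy suggested above (hypothetical helper, not the PR code):

def procs_per_config(n_procs, n_configs):
    # Fall back to 1 process per configuration when configs outnumber
    # processes; otherwise split evenly (any remainder goes unused)
    return max(1, n_procs // n_configs)

assert procs_per_config(20, 30) == 1  # more configs than processes
assert procs_per_config(20, 4) == 5   # even split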

@tylerbarna
Collaborator

> @tylerbarna Okay. We can try to implement along these lines. Maybe default to 1 process per configuration if the number of configs exceeds the number of processes given. Or make it a strict policy that the number of processes must be an integer multiple of the number of configurations if the analysis needs to run in parallel.

Yeah, the former option seems best. There can be situations where we want to fit more jobs than the number of cores we can allocate to a single job on a cluster, but still being able to parallelize using the cores we have is good.

@mcoughlin
Member

@sahiljhawar I have no particularly strong preference, whatever works for folks.

sahiljhawar marked this pull request as draft October 16, 2023 16:42
@sahiljhawar
Member Author

sahiljhawar commented Oct 16, 2023

  • Parallelised*
  • ThreadPoolExecutor or ProcessPoolExecutor, to avoid unforeseen issues with the GIL when all configurations are running in parallel.
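
Since each worker here mostly blocks on an mpiexec subprocess (which releases the GIL while waiting), ThreadPoolExecutor is usually sufficient; ProcessPoolExecutor matters if workers do CPU-bound Python work. A minimal sketch of keeping both options behind one flag; the structure is assumed, not the PR code:

import subprocess
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def run_command(cmd):
    # Blocking on a subprocess releases the GIL, so threads suffice here
    return subprocess.run(cmd, shell=True, check=False).returncode

def run_all(commands, max_workers, use_processes=False):
    # The two executors share an interface, so swapping is a one-liner
    Executor = ProcessPoolExecutor if use_processes else ThreadPoolExecutor
    with Executor(max_workers=max_workers) as pool:
        return list(pool.map(run_command, commands))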

sahiljhawar force-pushed the multi_model_threadpool_exec branch from b4ac52a to ca98d7d on January 25, 2024 16:26
sahiljhawar marked this pull request as ready for review January 25, 2024 16:27
@sahiljhawar
Member Author

@tsunhopang

@tsunhopang
Collaborator

Could you add a small test for this executable? E.g. run on AT2017gfo with two models.

@sahiljhawar
Member Author

sahiljhawar commented Jan 26, 2024

@tsunhopang Yeah, I am trying to do that, but there seem to be some issues with the config file path.

@sahiljhawar
Member Author

@tsunhopang This is working now. Can you review?

sahiljhawar requested a review from mcoughlin October 17, 2024 19:13
Member

mcoughlin left a comment


Looks good to me

@sahiljhawar
Member Author

The test and code actually work, but sometimes the run is cancelled or exits due to the limited resources of the GitHub runners.

sahiljhawar merged commit 3b0b3b0 into nuclear-multimessenger-astronomy:main Oct 21, 2024
14 of 17 checks passed