
Fix batch_run data collection on final step #2588

Open
wants to merge 2 commits into main

Conversation


satyamlal commented Jan 2, 2025

Summary

This PR fixes a bug in Mesa's batch_run function: data is not collected on the last step of a simulation. The issue affects users who need accurate data for the final state of a run.

Bug / Issue

The issue is tracked under #2514. Within the batch_run function, the DataCollector was not collecting data on the final step because of incorrect step-range logic. The missing data affected any simulation that ran for more than one step.
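
For illustration, here is a minimal sketch of the old step-range behaviour, using hypothetical values for model.steps and data_collection_period:

    # Hypothetical values: the model ran for 10 steps, collecting every 3 steps.
    model_steps = 10
    data_collection_period = 3

    steps = list(range(0, model_steps, data_collection_period))
    print(steps)  # [0, 3, 6, 9]

    # The old logic then padded the list with model_steps - 1 (step 9), so the
    # model state after the final step (step 10) was never collected.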

Implementation

To resolve this issue, the logic for generating the list of steps for data collection was modified. Specifically:

In mesa/batchrunner.py, the step-list generation was changed from:

    steps = list(range(0, model.steps, data_collection_period))
    if not steps or steps[-1] != model.steps - 1:
        steps.append(model.steps - 1)

to:

    # steps for data collection
    steps = list(range(0, model.steps, data_collection_period))
    
    # final step for data collection
    if steps and steps[-1] != model.steps:
        steps.append(model.steps)
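
With the same hypothetical values as in the sketch above, the corrected logic appends the final step:

    # Same hypothetical values: 10 model steps, collecting every 3 steps.
    steps = list(range(0, 10, 3))  # [0, 3, 6, 9]
    if steps and steps[-1] != 10:
        steps.append(10)
    print(steps)  # [0, 3, 6, 9, 10] -- the final step is now included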

Testing

  1. I created a simple model with only a few steps to make it easier to track the data.
  2. Ran it using the original code and saw that the last step’s data was missing.
  3. Ran it again with the fixed version and confirmed that the data for the last step was included this time.
  4. Compared the outputs to ensure the fix didn’t break anything else. Everything looks good, and the results are consistent now! (A sketch of this kind of check follows the list.)
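
A minimal sketch of this kind of check, assuming Mesa 3's mesa.batch_run API and its automatically maintained Model.steps counter (the TinyModel class and its dummy parameter are hypothetical names, not part of Mesa):

    import mesa

    class TinyModel(mesa.Model):
        """Hypothetical minimal model whose only reporter records the step count."""

        def __init__(self, dummy=0, seed=None):
            super().__init__(seed=seed)
            self.datacollector = mesa.DataCollector(
                model_reporters={"StepSeen": lambda m: m.steps}
            )
            self.datacollector.collect(self)  # also record the initial state

        def step(self):
            self.datacollector.collect(self)

    results = mesa.batch_run(
        TinyModel,
        parameters={"dummy": [0]},
        iterations=1,
        max_steps=5,
        data_collection_period=2,
        display_progress=False,
    )

    # With the fix, the final step (5) should appear among the collected rows.
    print(sorted({row["Step"] for row in results}))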

Additional Notes

While working on this issue, I consistently encountered a warning related to the following line:

from tqdm.auto import tqdm

Despite my efforts, I wasn’t able to find a fix for this warning. If anyone has insights or suggestions on resolving it, I would greatly appreciate the guidance.


github-actions bot commented Jan 2, 2025

Performance benchmarks:

| Model | Size | Init time [95% CI] | Run time [95% CI] |
| --- | --- | --- | --- |
| BoltzmannWealth | small | 🔴 +16.5% [+15.1%, +17.8%] | 🔵 +2.2% [+1.6%, +2.7%] |
| BoltzmannWealth | large | 🔵 +2.2% [+0.9%, +4.3%] | 🔴 +4.7% [+3.3%, +6.1%] |
| Schelling | small | 🔵 -0.1% [-0.5%, +0.3%] | 🔵 +0.4% [+0.3%, +0.6%] |
| Schelling | large | 🔵 +0.3% [-0.1%, +0.9%] | 🔵 +0.5% [-1.3%, +2.3%] |
| WolfSheep | small | 🔵 -1.1% [-1.8%, -0.3%] | 🔵 +1.7% [-2.9%, +6.5%] |
| WolfSheep | large | 🔵 -1.1% [-2.1%, -0.2%] | 🔵 -2.0% [-4.7%, +0.7%] |
| BoidFlockers | small | 🔵 +0.9% [+0.2%, +1.7%] | 🔵 +0.0% [-0.7%, +0.9%] |
| BoidFlockers | large | 🔵 +0.9% [+0.3%, +1.6%] | 🔵 -0.1% [-0.7%, +0.5%] |

satyamlal (Author)

I noticed that some checks are failing in the CI pipeline. I would appreciate it if you could review my PR and guide me on resolving these issues. I’m eager to learn and improve based on your feedback.

quaquel (Member) commented Jan 4, 2025

As can be seen by clicking on details for any of the failed tests, the batch run related tests are failing. This makes sense because this PR changes the behavior. Figure out where in the tests it fails and update it for the correct behavior. Probably there is a check (i.e., some assert statement) somewhere on the number of steps for which data is collected.
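
For illustration, a check of roughly this shape would need its expected count updated, since range(0, model.steps, period) never contains model.steps itself and the new logic therefore always appends one extra entry (all names here are hypothetical, not Mesa's actual test code):

    # Hypothetical, self-contained illustration of how an expected row count
    # shifts by one under the new step-list logic.
    max_steps, period = 10, 3
    old_steps = list(range(0, max_steps, period))   # [0, 3, 6, 9]
    new_steps = old_steps + [max_steps]             # [0, 3, 6, 9, 10]
    assert len(new_steps) == len(old_steps) + 1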

satyamlal (Author)

Thanks for the guidance, @quaquel. I am working on fixing the issue.

EwoutH (Member) commented Jan 11, 2025

Let us know if you need help or are stuck on something!
