
Check metrics exist before running tree. Possibly calculate metrics from tree #921

Closed
handwerkerd opened this issue Dec 22, 2022 · 1 comment · Fixed by #969
Labels

  • effort: medium (Theoretically <40h total work)
  • enhancement (issues describing possible enhancements to the project)
  • impact: medium (Improves code/documentation functionality for some users)
  • priority: low (issues that are not urgent)

Comments

@handwerkerd
Member

Summary

In Decision Tree Modularization (#756), the functions in selection_nodes.py were written so that it is possible to dry-run a tree and collect all of the metrics the tree would need. This check is not currently performed. Once the needed metrics are gathered, it would then be possible to have tedana load a decision tree and calculate only the metrics that the tree requests.

Additional Detail

Every selection-node function has an only_used_metrics parameter. When that is true, the function outputs the metrics it would use (e.g., kappa, rho) without actually running anything. This dry run would be added to the initialization of the component_selector object.
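The dry-run idea above could be sketched roughly as follows. This is a hypothetical, minimal stand-in, not tedana's actual API: the node function, the tree structure, and collect_tree_metrics are all illustrative; only the only_used_metrics parameter comes from the issue.

```python
# Hypothetical sketch of the proposed dry run: a selection-node
# function called with only_used_metrics=True returns the set of
# metric names it would read, instead of executing its logic.

def dec_left_op_right(selector, left="kappa", right="rho",
                      only_used_metrics=False):
    if only_used_metrics:
        return {left, right}
    ...  # the node's normal selection logic would run here
    return selector

def collect_tree_metrics(tree_nodes):
    """Dry-run every node in a tree and union the metrics they need."""
    needed = set()
    for node in tree_nodes:
        func = node["function"]
        kwargs = node.get("kwargs", {})
        needed |= func(None, only_used_metrics=True, **kwargs)
    return needed

tree = [
    {"function": dec_left_op_right,
     "kwargs": {"left": "kappa", "right": "rho"}},
    {"function": dec_left_op_right,
     "kwargs": {"left": "variance explained", "right": "countsigFS0"}},
]
print(sorted(collect_tree_metrics(tree)))
# → ['countsigFS0', 'kappa', 'rho', 'variance explained']
```

Running this collection once, during component_selector initialization, would give both the pre-flight existence check and the list of metrics tedana would need to calculate.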

Next Steps

  • Figure out where this fits in relation to other priorities.
@handwerkerd added the enhancement, priority: low, effort: medium, and impact: medium labels on Dec 22, 2022
@tsalo
Member

tsalo commented Aug 7, 2023

What about necessary_metrics at the top of the decision tree? Can that be used instead?
