
Implement caching in AiidaNodeViewWidget #686

Merged
merged 3 commits into aiidalab:master on Feb 15, 2025

Conversation

@edan-bainglass (Member) commented Feb 14, 2025

Moving the node view caching mechanism implemented by @superstar54 in the QE app (aiidalab/aiidalab-qe#921) into AiidaNodeViewWidget.

@edan-bainglass (Member, Author) commented Feb 14, 2025

@AndresOrtegaGuerrero this should resolve your issue. Essentially, when @superstar54 implemented caching of node views in the QE app, we stopped using AWB's AiidaNodeViewWidget, which supports non-widget views. This PR moves the caching mechanism into the widget itself, so the QE app can go back to using it, re-enabling non-widget views.
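For reference, the gist of the mechanism is to keep a dict of rendered views keyed by node UUID and only build a view the first time a node is selected. The snippet below is only an illustrative sketch (class and attribute names are made up, and the real widget code in this PR may differ):

```python
import ipywidgets as ipw
import traitlets as tl

from aiidalab_widgets_base import viewer  # AWB's viewer factory (assumed entry point)


class CachingNodeView(ipw.VBox):
    """Sketch only: display a view for the selected AiiDA node, caching views by UUID."""

    node = tl.Instance(object, allow_none=True)

    def __init__(self, **kwargs):
        self._view_cache = {}  # node UUID -> rendered view
        super().__init__(children=[], **kwargs)

    @tl.observe("node")
    def _on_node_change(self, change):
        node = change["new"]
        if node is None:
            self.children = []
            return
        view = self._view_cache.get(node.uuid)
        if view is None:
            view = viewer(node)  # build the view only on first selection
            self._view_cache[node.uuid] = view
        self.children = [view]
```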


codecov bot commented Feb 14, 2025

Codecov Report

Attention: Patch coverage is 76.47059% with 4 lines in your changes missing coverage. Please review.

Project coverage is 83.48%. Comparing base (9601736) to head (16799f2).
Report is 1 commit behind head on master.

Files with missing lines          Patch %   Lines
aiidalab_widgets_base/viewers.py  76.47%    4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #686      +/-   ##
==========================================
+ Coverage   83.24%   83.48%   +0.23%     
==========================================
  Files          18       18              
  Lines        3588     3602      +14     
==========================================
+ Hits         2987     3007      +20     
+ Misses        601      595       -6     
Flag          Coverage Δ
python-3.11   83.48% <76.47%> (+0.23%) ⬆️
python-3.9    83.51% <76.47%> (+0.23%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@danielhollas (Contributor)

> Moving caching mechanism implemented by @superstar54 in the QE app into the AiiDANodeViewWidget

Can you point to the original PR in the QE app? It's not super clear what is meant by caching in this context.

@superstar54 (Member) left a comment


Nice!

@superstar54 (Member) commented Feb 14, 2025

Hi @danielhollas, it's here: aiidalab/aiidalab-qe#921

Sorry that I didn't notice your message when approving this PR.

@superstar54 (Member)

Hi @edan-bainglass, as suggested by @yakutovicha, could you add a test for the caching?

@edan-bainglass (Member, Author)

Yep. I'll add a test for this and the loader from #685
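Something along these lines, perhaps (just a sketch; it assumes the widget exposes a `node` traitlet, renders the view into `children`, and that the test suite has an AiiDA profile fixture available):

```python
from aiida import orm

from aiidalab_widgets_base.viewers import AiidaNodeViewWidget


def test_node_view_caching(aiida_profile):  # `aiida_profile` from aiida-core's pytest plugin
    widget = AiidaNodeViewWidget()

    node = orm.Int(1).store()
    widget.node = node
    first_view = widget.children[0]

    # Deselect and reselect the node; with caching, the same view object should be reused.
    widget.node = None
    widget.node = node
    second_view = widget.children[0]

    assert first_view is second_view
```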

@edan-bainglass changed the title from Implement caching in AiiDANodeViewWidget to Implement caching in AiidaNodeViewWidget on Feb 15, 2025
@edan-bainglass merged commit f351939 into aiidalab:master on Feb 15, 2025
11 checks passed
edan-bainglass added a commit to aiidalab/aiidalab-qe that referenced this pull request Feb 15, 2025
This PR reverts to using the AWB node view widget, which now supports caching (aiidalab/aiidalab-widgets-base#686).
@danielhollas self-requested a review on February 16, 2025 00:39
@danielhollas (Contributor) left a comment


I am a bit wary of the changes in this PR since the cache introduced here is unbounded. For example, for a large workchain, a user clicking around the node tree could blow up the memory of the Python process (I don't know if that is a real concern or not, but I assume some viewers may consume a nontrivial amount of memory).

At the very least we need to provide a way to clear the cache, probably in the reset method. Or better yet, use Python's stdlib lru_cache decorator and wrap the logic in a cached function call somehow.
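For illustration, one way to bound it without functools.lru_cache (which is awkward to apply to a method keyed on node objects) would be a small LRU store keyed by node UUID — purely a sketch, all names invented:

```python
from collections import OrderedDict


class BoundedViewCache:
    """LRU cache of rendered views keyed by node UUID; evicts the least recently used."""

    def __init__(self, maxsize=10):
        self._maxsize = maxsize
        self._views = OrderedDict()

    def get(self, uuid):
        view = self._views.get(uuid)
        if view is not None:
            self._views.move_to_end(uuid)  # mark as most recently used
        return view

    def put(self, uuid, view):
        self._views[uuid] = view
        self._views.move_to_end(uuid)
        if len(self._views) > self._maxsize:
            self._views.popitem(last=False)  # drop the oldest entry

    def clear(self):
        """A widget reset method could call this to release all cached views."""
        self._views.clear()
```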
