
[r2] fix seeds in se_a and se_atten (#3880) #3947

Merged — 1 commit merged into deepmodeling:r2 on Jul 3, 2024

Conversation

njzjz (Member) commented Jul 3, 2024

Summary by CodeRabbit

  • Bug Fixes

    • Resolved inconsistencies in seed values by incrementing self.seed conditionally in descriptor modules.
  • Tests

    • Updated test arrays refe, reff, and refv with new reference values.
    • Adjusted expected values in test_model_ener method for better accuracy.

These changes ensure more reliable descriptor computations and improved test accuracy.
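The conditional seed increment described above can be illustrated with a minimal sketch. This is not the actual deepmd-kit implementation; the attribute names (`seed`, `seed_shift`, `uniform_seed`) follow the PR's description, and `_next_seed` is a hypothetical helper added for clarity:

```python
# Minimal sketch (assumption: not the real deepmd-kit code) of the
# conditional seed-increment pattern this PR applies in se_a and se_atten.
class Descriptor:
    def __init__(self, seed=None, uniform_seed=False, seed_shift=1):
        self.seed = seed
        self.uniform_seed = uniform_seed
        self.seed_shift = seed_shift

    def _next_seed(self):
        """Return the current seed, then advance it unless seeding is uniform.

        The increment only happens when uniform_seed is False and a seed
        was actually set, matching the condition the PR adds.
        """
        current = self.seed
        if not self.uniform_seed and self.seed is not None:
            self.seed += self.seed_shift
        return current
```

With `uniform_seed=False`, each call hands out a distinct seed; with `uniform_seed=True` or `seed=None`, the seed is left untouched.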


(cherry picked from commit 0c472d1)

Fixes deepmodeling#3799.

Auto-generated release notes by coderabbit.ai:

- **New Features**
  - Seed values may now be specified as either an integer or a list of integers.
  - Seed parameters are used consistently across initialization methods and classes for more controlled randomization.

- **Improvements**
  - Seed initialization logic now includes additional computations and dynamic adjustments.
  - Parameter documentation in multiple classes was expanded with clearer usage guidelines.
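The release notes mention accepting either an integer or a list of integers as a seed. One plausible way to normalize such a value is sketched below; `normalize_seed` is a hypothetical helper written for illustration, not deepmd-kit's actual API:

```python
# Sketch (assumption: not deepmd-kit's real API) of deriving a
# per-component seed from a value that may be an int or a list of ints.
from typing import List, Optional, Union


def normalize_seed(
    seed: Optional[Union[int, List[int]]], index: int
) -> Optional[int]:
    """Derive the seed for the component at the given index."""
    if seed is None:
        return None  # unseeded: leave randomness uncontrolled
    if isinstance(seed, int):
        return seed + index  # offset so each component differs
    return seed[index % len(seed)]  # cycle through the provided list
```

A caller would invoke it once per layer or filter, passing the layer index, so that a single integer still yields distinct per-layer seeds.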

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
(cherry picked from commit 0c472d1)
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
coderabbitai bot (Contributor) commented Jul 3, 2024

Walkthrough


This update addresses seed handling in the deepmd descriptor and attention modules. Seed-incrementation logic was added to multiple functions, applied when uniform seeding is disabled and a seed is set. Several tests were also updated with new expected values resulting from the changed seed handling.

Changes

| File | Change Summary |
| --- | --- |
| deepmd/descriptor/se_a.py | Added logic to increment `self.seed` by `self.seed_shift` under certain conditions; renamed a return variable and modified the return logic |
| deepmd/descriptor/se_atten.py | Added conditional seed incrementation within the `_attention_layers` and `_filter_lower` functions |
| source/tests/test_model_se_a_ebd_v2.py | Updated the `refe`, `reff`, and `refv` arrays in the `test_model` function |
| source/tests/test_pairwise_dprc.py | Changed expected values in `test_model_ener` to 4.82969 and -0.104339 |

Assessment against linked issues

| Objective (Issue Number) | Addressed | Explanation |
| --- | --- | --- |
| Ensure seeds of descriptor/fitting in dpmodel are passed to network (#3799) | | |



Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed between the base of the PR (a85d58f) and the head (e4ae74b).

Files selected for processing (4)
  • deepmd/descriptor/se_a.py (3 hunks)
  • deepmd/descriptor/se_atten.py (8 hunks)
  • source/tests/test_model_se_a_ebd_v2.py (1 hunks)
  • source/tests/test_pairwise_dprc.py (1 hunks)
Additional comments not posted (15)
source/tests/test_model_se_a_ebd_v2.py (3)

142-142: Verify the updated reference value for refe.

Ensure that the new value [6.100037044296185e-01] is correct and consistent with the expected results.


144-161: Verify the updated reference values for reff.

Ensure that the new values are correct and consistent with the expected results.


164-172: Verify the updated reference values for refv.

Ensure that the new values are correct and consistent with the expected results.

source/tests/test_pairwise_dprc.py (2)

522-522: Verify the updated expected value for e[0].

Ensure that the new value 4.82969 is correct and consistent with the expected results.


523-523: Verify the updated expected value for f[0, 0].

Ensure that the new value -0.104339 is correct and consistent with the expected results.

deepmd/descriptor/se_a.py (3)

1034-1035: LGTM! Conditional increment of the seed.

The seed is incremented correctly based on the conditions.


Line range hint 1047-1067: LGTM! Correct parameters passed to filter_lower_R42GR.

The function call includes the necessary parameters including the newly added self.seed, self.seed_shift, and self.uniform_seed.


1065-1066: LGTM! Conditional increment of the seed.

The seed is incremented correctly based on the conditions.

deepmd/descriptor/se_atten.py (7)

962-963: Conditional seed increment.

The increment of self.seed ensures that each layer has a different seed when self.uniform_seed is False and self.seed is not None.
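Why a distinct seed per layer matters can be shown with a small stand-in: if every layer is initialized from the same seed, all layers get identical weights. Here `init_layer` is a hypothetical stand-in using Python's `random` module in place of TensorFlow's seeded initializers:

```python
# Illustration (assumption: random.Random stands in for TF's seeded
# weight initializers) of why each layer needs a distinct seed.
import random


def init_layer(seed, size=3):
    """Stand-in for a seeded weight initialization of one layer."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(size)]


same = [init_layer(42) for _ in range(2)]          # one shared seed
distinct = [init_layer(42 + i) for i in range(2)]  # incremented seeds

assert same[0] == same[1]          # identical layers: undesirable
assert distinct[0] != distinct[1]  # distinct layers: what the fix ensures
```

Incrementing `self.seed` by `self.seed_shift` after each use plays the role of the `42 + i` offset above.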


977-978: Conditional seed increment.

The increment of self.seed ensures that each layer has a different seed when self.uniform_seed is False and self.seed is not None.


992-993: Conditional seed increment.

The increment of self.seed ensures that each layer has a different seed when self.uniform_seed is False and self.seed is not None.


1026-1027: Conditional seed increment.

The increment of self.seed ensures that each layer has a different seed when self.uniform_seed is False and self.seed is not None.


1091-1092: Conditional seed increment.

The increment of self.seed ensures that each filter has a different seed when self.uniform_seed is False and self.seed is not None.


1132-1133: Conditional seed increment.

The increment of self.seed ensures that each filter has a different seed when self.uniform_seed is False and self.seed is not None.


1191-1192: Conditional seed increment.

The increment of self.seed ensures that each filter has a different seed when self.uniform_seed is False and self.seed is not None.



@github-actions github-actions bot added the Python label Jul 3, 2024
codecov bot commented Jul 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 81.51%. Comparing base (a85d58f) to head (e4ae74b).
Report is 3 commits behind head on r2.

Additional details and impacted files
@@           Coverage Diff           @@
##               r2    #3947   +/-   ##
=======================================
  Coverage   81.50%   81.51%           
=======================================
  Files         342      342           
  Lines       33865    33882   +17     
  Branches     2872     2876    +4     
=======================================
+ Hits        27601    27618   +17     
  Misses       5381     5381           
  Partials      883      883           


@wanghan-iapcm wanghan-iapcm merged commit 84ca63c into deepmodeling:r2 Jul 3, 2024
47 checks passed