
Add detailed explanations for failed assertions #7

Merged
merged 5 commits into main from add-detailed-explanations on Nov 4, 2024

Conversation

@kdunee kdunee (Owner) commented Nov 4, 2024

Fixes #2

Add detailed explanations for failed assertions using LLM.

  • Modify intentguard/intentguard.py to include a _generate_explanation method that generates detailed explanations for failed assertions using the LLM.
  • Update the assert_code method to call _generate_explanation when an assertion fails and include the explanation in the AssertionError.
  • Add a new prompt template explanation_prompt in intentguard/prompts.py for generating detailed explanations for failed assertions.
  • Update the existing test case test_assert_code_false in tests/test_intentguard.py to check for the presence of the explanation in the AssertionError.

For more details, open the Copilot Workspace session.
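
A rough sketch of the failure path described above (a minimal sketch only: the `_evaluate` helper name and the exact error-message wording are assumptions; only `assert_code` and `_generate_explanation` are named in this PR):

```py
# Hypothetical sketch only; the real implementation lives in intentguard/intentguard.py.
class IntentGuardSketch:
    def assert_code(self, expectation: str, objects: dict) -> None:
        """Raise an AssertionError that includes an LLM-generated explanation."""
        if not self._evaluate(expectation, objects):  # evaluation helper name assumed
            explanation = self._generate_explanation(expectation, objects)
            raise AssertionError(
                f"Expected '{expectation}' to be true, but it was not.\n"
                f"Explanation: {explanation}"
            )

    def _evaluate(self, expectation: str, objects: dict) -> bool:
        raise NotImplementedError  # stands in for the existing LLM-based check

    def _generate_explanation(self, expectation: str, objects: dict) -> str:
        raise NotImplementedError  # sketched further down in the CodeRabbit walkthrough
```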

Summary by CodeRabbit

  • New Features

    • Enhanced error reporting for assertion failures, providing detailed explanations when assertions do not hold true.
    • Introduced a new prompt for analyzing code failures, improving the explanation of unmet conditions.
  • Bug Fixes

    • Improved test coverage for assertion failures by validating the presence of explanatory messages in error outputs.

@kdunee kdunee self-assigned this Nov 4, 2024
coderabbitai bot commented Nov 4, 2024


Walkthrough

The pull request introduces modifications to the IntentGuard class in intentguard/intentguard.py, enhancing the error reporting mechanism for assertion failures. A new private method, _generate_explanation, is added to create detailed explanations for failed assertions. This method generates a prompt that includes formatted object source codes and the failed expectation, which is then processed by a language model to provide a human-readable response. The assert_code method is updated to include this explanation in the error message raised during assertion failures.

Additionally, a new explanation_prompt variable is introduced in intentguard/prompts.py, which outlines a process for analyzing Python code to explain why a specific condition failed. This prompt includes detailed instructions for parsing input and evaluating conditions. Changes to the test suite in tests/test_intentguard.py enhance the validation of exception messages by checking for the presence of the word "Explanation" in the raised AssertionError. Overall, these changes improve the feedback provided to users when assertions fail.
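
A minimal sketch of that flow, under stated assumptions (the prompt layout, the `complete` callable, and the placeholder `explanation_prompt` text below are illustrative; only the method name and the `explanation_prompt` variable come from the PR):

```py
import inspect
from typing import Callable

# Placeholder for the explanation_prompt template added in intentguard/prompts.py;
# the actual wording in the PR is longer and more structured.
explanation_prompt = "Explain in detail why the failed condition does not hold for these objects."


def generate_explanation(
    expectation: str,
    objects: dict,
    complete: Callable[[str], str],
) -> str:
    """Sketch of _generate_explanation: format each object's source code and the
    failed expectation into a prompt, then ask the LLM (via `complete`)."""
    formatted_objects = "\n\n".join(
        f"{{{name}}}:\n{inspect.getsource(obj)}" for name, obj in objects.items()
    )
    prompt = (
        f"{explanation_prompt}\n\n"
        f"Objects:\n{formatted_objects}\n\n"
        f'Failed condition: "{expectation}"'
    )
    return complete(prompt)  # the repository routes this through its own completion helper
```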

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| Implement a mechanism that provides detailed explanations for failed assertions. (Issue #2) | | |
| Failed assertions should include a detailed explanation of the reason for the failure. (Issue #2) | | |
| The explanation should be clear and concise, and it should be helpful to developers in debugging their code. (Issue #2) | | |
| The explanation should be displayed to the user when an assertion fails. (Issue #2) | | |


* Correct the order of methods in the failed condition message
* Update the output message to reflect the corrected order of methods
@kdunee kdunee marked this pull request as ready for review November 4, 2024 12:43
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (6)
tests/test_intentguard.py (2)

17-21: Consider enhancing the assertion validation.

While the basic test implementation correctly verifies the presence of an explanation, consider strengthening the test to ensure the quality and relevance of the explanation.

Here's a suggested enhancement:

```diff
 def test_assert_code_false(self):
     with self.assertRaises(AssertionError) as cm:
         self.guard.assert_code(
             "{class} should not have any methods", {"class": IntentGuard}
         )
     self.assertIn("Explanation:", str(cm.exception))
+    error_message = str(cm.exception)
+    # Verify explanation mentions the actual conflict
+    self.assertIn("IntentGuard", error_message)
+    self.assertIn("methods", error_message)
+    # Verify explanation structure
+    self.assertRegex(error_message, r"Explanation:.*\w+.*")
```

Additionally, consider adding a new test case that specifically validates the explanation's quality with different assertion scenarios.


17-21: Add comprehensive test coverage for the explanation feature.

The current test suite only verifies the explanation feature in one basic scenario. Consider adding test cases for:

  1. Different types of assertions (e.g., complex conditions, multiple objects)
  2. Edge cases (e.g., empty objects, large objects)
  3. Various object types (e.g., custom classes, built-in types)

Would you like me to help generate additional test cases to improve coverage of the explanation feature?
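
For example, one such additional case might look like this (a sketch only: the class name, import path, and condition text are invented for illustration, and a no-argument `IntentGuard()` constructor is assumed):

```py
import unittest

from intentguard import IntentGuard  # import path assumed


class TestExplanationCoverage(unittest.TestCase):
    """Hypothetical extra cases along the lines suggested above."""

    def setUp(self):
        self.guard = IntentGuard()  # assumes a no-argument constructor

    def test_explanation_for_condition_over_multiple_objects(self):
        with self.assertRaises(AssertionError) as cm:
            self.guard.assert_code(
                "{a} should have no methods and {b} should have no attributes",
                {"a": IntentGuard, "b": IntentGuard},
            )
        self.assertIn("Explanation:", str(cm.exception))
```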

intentguard/prompts.py (4)

53-60: Consider enhancing input specifications.

While the prompt's purpose and basic inputs are well-defined, consider adding:

  • Format specifications for the condition string (e.g., supported operators, syntax)
  • Handling of edge cases (e.g., malformed inputs, empty code blocks)
```diff
 You will receive:
 - A list of named objects, where each object's value is the code of a class or method.
-- A text of a condition that uses these object names.
+- A text of a condition that uses these object names. The condition should be expressed using
+  supported operators (has, contains, implements, etc.) and follow the format:
+  "{object_name} operator [target]"
 - An indication that the condition was not met.
+
+ Note: Empty code blocks or malformed inputs should result in clear error messages.
```

62-69: Enhance explanation generation guidelines.

The steps are well-structured, but step 5 could be more specific about the types of explanations to generate.

```diff
 4. Determine why the condition does not hold true based on the code analysis.
-5. Provide a detailed explanation of the reasons for the failure.
+5. Provide a detailed explanation of the failure, including:
+   - Specific missing elements or mismatches
+   - Relevant code structure details
+   - Suggestions for how the condition could be satisfied
```
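
For orientation, `explanation_prompt` is presumably defined as a plain string in `intentguard/prompts.py`; a schematic version, reusing the input description quoted in the diffs above (the closing instruction is invented), could look like:

```py
# Schematic only; the actual explanation_prompt in intentguard/prompts.py is
# longer and includes step-by-step instructions and a worked example.
explanation_prompt = """You will receive:
- A list of named objects, where each object's value is the code of a class or method.
- A text of a condition that uses these object names.
- An indication that the condition was not met.

Analyze the code of each object and explain, in detail, why the condition
does not hold true."""
```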

74-97: Add more comprehensive examples.

The current example is good but basic. Consider adding examples that demonstrate:

  • Multiple failure reasons
  • More complex code structures (inheritance, nested classes)
  • Different types of conditions

Add an additional example like:

### Input

**Objects:**

{parent}:
```py
class Parent:
    def base_method(self):
        pass
```

{child}:
```py
class Child(Unknown):
    def child_method(self):
        pass
```

**Failed Condition:**
"{child} inherits {parent} and implements method base_method"

### Output

The condition was not met for multiple reasons:

1. Child class attempts to inherit from 'Unknown' instead of Parent
2. Child class does not override the base_method
3. The inheritance chain is broken due to the undefined base class

---

`53-58`: **Consider adding assertion context to the prompt.**

Since this prompt is specifically for explaining assertion failures, consider mentioning this context in the input specification. This would help the LLM generate more relevant explanations.

```diff
 You will receive:
 - A list of named objects, where each object's value is the code of a class or method.
 - A text of a condition that uses these object names.
 - An indication that the condition was not met.
+- The assertion context (e.g., test name, assertion location) for more targeted explanations.
```

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between d8639b6 and e5db205.

📒 Files selected for processing (3)
  • intentguard/intentguard.py (2 hunks)
  • intentguard/prompts.py (1 hunks)
  • tests/test_intentguard.py (1 hunks)
🔇 Additional comments (2)
intentguard/intentguard.py (2)

69-71: LGTM! Clear and well-structured error reporting.

The changes effectively integrate the explanation into the assertion error message while maintaining a clean format that separates the expectation from its explanation.


175-176: Verify system prompt compatibility and temperature consistency.

The method reuses the system prompt and uses a very low temperature setting. Let's verify:

  1. That the system prompt is appropriate for generating explanations
  2. That the temperature setting is consistent across all LLM calls

Also applies to: 179-183

✅ Verification successful

Let me gather more context about the explanation generation functionality to make a proper assessment.


System prompt and temperature settings are appropriate for both use cases

The verification shows that:

  1. The system prompt is generic enough to handle both assertion checking and explanation generation, as it focuses on "analyzing Python code by evaluating conditions against named objects"
  2. Temperature settings are consistent (1e-3) across LLM calls, which is appropriate for deterministic code analysis tasks
  3. The explanation_prompt properly extends the system prompt by specifically requesting explanation for failed conditions
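
For reference, a deterministic call with the same temperature setting might be issued along these lines (a sketch: the OpenAI-style client, model name, and function signature are assumptions, and the repository's actual `_send_completion_request` helper is not reproduced here):

```py
def send_completion_request(client, system_prompt: str, user_prompt: str) -> str:
    """Sketch of a low-temperature chat completion call (temperature=1e-3).

    `client` is assumed to expose an OpenAI-style chat-completions API.
    """
    response = client.chat.completions.create(
        model="gpt-4o",  # model name assumed for illustration
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=1e-3,  # matches the setting noted in the verification above
    )
    return response.choices[0].message.content
```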
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Check system prompt content and temperature settings

# Test 1: Check if system prompt mentions explanation generation
echo "Checking system prompt content..."
rg -A 10 "system_prompt.*=" intentguard/prompts.py

# Test 2: Check for consistent temperature settings across LLM calls
echo "Checking temperature settings..."
rg "temperature.*=" intentguard/
```

Length of output: 799


Script:

```bash
#!/bin/bash
# Check the full method implementation to understand the explanation context
ast-grep --pattern 'def _generate_explanation($$$)'

# Check if there are any other prompts related to explanation
rg -B2 -A5 "explanation.*prompt" intentguard/

# Check all LLM calls to compare their configurations
ast-grep --pattern 'messages = [
  $$$
]'
```

Length of output: 2075

* Add `_generate_explanation` method to generate detailed explanations for failed assertions using the LLM.
* Modify `assert_code` method to call `_generate_explanation` when an assertion fails and include the explanation in the `AssertionError`.
* Update `_send_completion_request` method to handle the new explanation generation prompt.
* Import `explanation_prompt` from `prompts.py`.
Repository owner deleted a comment from coderabbitai bot Nov 4, 2024
@kdunee kdunee modified the milestone: v1.1.0 Nov 4, 2024
@kdunee kdunee merged commit ccb77f8 into main Nov 4, 2024
5 checks passed
@kdunee kdunee deleted the add-detailed-explanations branch November 4, 2024 13:02
Development

Successfully merging this pull request may close these issues.

Add Detailed Failed Assertion Explanations