
Questions About Running In-Context Learning Editing on ZsRE #477

Closed
baselmousi opened this issue Jan 30, 2025 · 4 comments
Labels
question Further information is requested

Comments

@baselmousi

Hi

Thanks for the amazing efforts on this toolkit.

I was trying to run the IKE editing method on the ZsRE dataset through run_knowedit_llama2.py. I traced the code to see how the metrics are computed, and I have some questions.

  1. To compute the edit success, the edit_evaluation function calls compute_icl_edit_quality in evaluate.py. Depending on the boolean value of pre_edit, the function passes either `New Fact: {prompt} {target_new}\nPrompt: {prompt}` or just the prompt (see the sketch after this list). Why does the prompt change based on pre_edit, and what difference does it make?

  2. To compute the edit success, the run_knowedit_llama2.py script does not pass the ground-truth values of the edit prompts to the .edit method, so when the metric is computed the ground truth is `<|endoftext|>`. Why aren't the ground-truth values passed, and what is the edit success computed against?

  3. When computing locality or portability, you prepend the edited fact to the locality/portability prompts. When editing an instruction-tuned model, should one also account for the chat template?
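
For clarity, here is a minimal sketch (my own paraphrase, not the actual EasyEdit code) of the two prompt formats I am referring to in question 1:

```python
def build_icl_eval_prompt(prompt: str, target_new: str, pre_edit: bool) -> str:
    """Sketch of how the prompt given to compute_icl_edit_quality appears to differ."""
    if pre_edit:
        # pre-edit evaluation: the model only sees the original prompt
        return prompt
    # post-edit evaluation: the new fact is prepended as an in-context demonstration
    return f"New Fact: {prompt} {target_new}\nPrompt: {prompt}"

# example
print(build_icl_eval_prompt("The capital of France is", "Lyon", pre_edit=False))
# New Fact: The capital of France is Lyon
# Prompt: The capital of France is
```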

Thanks a lot for your efforts; I am looking forward to your reply.

zxlzr added the question (Further information is requested) label on Jan 31, 2025
@zxlzr
Contributor

zxlzr commented Feb 1, 2025

Sorry, but due to limited computational resources and upcoming paper deadlines, we will address your issue as soon as we can.

@littlefive5
Collaborator

littlefive5 commented Feb 2, 2025

Hello, sorry for the slow reply.

  1. pre_edit measures the model's performance before editing, so in that case we do not prepend the `New Fact:` context; the model only sees the original prompt.
  2. Since the edit itself does not require the ground truth, we simply set it to `<|endoftext|>`; only target_new is required, and the edit success is computed against target_new. You can pass your own ground_truth values if you need them (see the first sketch below).
  3. Yes, we recommend using the chat template, although in previous work we just used the original prompt. In our new paper, CKnowEdit, we applied the chat template, and you can select different settings via editor.generate_edit (see the second sketch below).
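
For example, a rough sketch of passing your own ground truth (the hparams path and example data are placeholders in the usual BaseEditor style; depending on your EasyEdit version, IKE may also need a train_ds for demonstration retrieval):

```python
from easyeditor import BaseEditor, IKEHyperParams

# placeholder config path; point this at your own IKE hparams file
hparams = IKEHyperParams.from_hparams('./hparams/IKE/llama-7b.yaml')
editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.edit(
    prompts=['Who was the designer of Lahti Town Hall?'],
    ground_truth=['Eliel Saarinen'],   # omit this and it defaults to '<|endoftext|>'
    target_new=['Alfred Lahti'],
    # train_ds=...,  # IKE may additionally need a demonstration dataset, depending on version
)
```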
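
And a small sketch of wrapping the edited fact plus a locality prompt with a chat template via the Hugging Face tokenizer (apply_chat_template is the standard transformers API; the model name and prompt strings are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

new_fact = "New Fact: The capital of France is Lyon."  # placeholder edited fact
loc_prompt = "What is the capital of Germany?"         # placeholder locality prompt

messages = [{"role": "user", "content": f"{new_fact}\nPrompt: {loc_prompt}"}]
chat_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(chat_prompt)
```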

@zxlzr
Contributor

zxlzr commented Feb 3, 2025

Hi, do you have any further questions?

@baselmousi
Author

Thank you!
