Hi Agentless developer!
Thanks for your fantastic contribution in #40! Your work played a key role in achieving the impressive results in swe-bench/experiments#133, especially the integration of the Claude-3.5-Sonnet model.
However, I noticed that the scripts in `README_swebench.md` haven't been updated to reflect the changes introduced in your PR. These scripts are crucial for reproducing the benchmark results, and they currently don't align with the new arguments and configurations your PR introduced.
For example, the new argument `--str_replace_format` added to `repair.py` was explicitly designed for the Anthropic API:
```python
# str_replace_format only supported with anthropic backend
assert not (
    args.str_replace_format and args.backend != "anthropic"
), "str_replace_format only supported with anthropic backend"
```
This argument wasn't part of the older README scripts, and while it's just one example, it suggests that the benchmark reproduction scripts as a whole have fallen out of date since your recent improvements.
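To make the gap concrete, here is the kind of invocation I would expect an updated README to contain. This is only a sketch: the module path, the output folder, and any remaining flags are my assumptions carried over from the current scripts; the one thing the assert above dictates is that `--str_replace_format` must be paired with `--backend anthropic`.

```bash
# Sketch only: the module path and the other arguments are assumed from the
# existing README_swebench.md scripts, not verified against the new code.
python agentless/repair/repair.py \
    --backend anthropic \
    --str_replace_format \
    --output_folder results/repair
```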
An updated set of scripts in `README_swebench.md` would not only make your contribution even more impactful but also ensure that developers (like me) can consistently reproduce the exciting benchmark results you've submitted. With the current scripts, metrics like %Resolved on my side don't seem to align perfectly with your submitted `all_preds.jsonl`.
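For reference, this is roughly how I'm comparing my local run against your submission; the file paths are placeholders, and the `instance_id`/`model_patch` fields are assumed from the standard SWE-bench prediction format:

```python
# Rough sanity check: compare locally reproduced predictions against the
# submitted all_preds.jsonl. Paths are placeholders; field names assume the
# standard SWE-bench prediction format.
import json

def load_preds(path):
    """Load a JSONL predictions file keyed by instance_id."""
    preds = {}
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            preds[rec["instance_id"]] = rec
    return preds

mine = load_preds("results/all_preds.jsonl")      # my local reproduction
theirs = load_preds("submitted/all_preds.jsonl")  # the submitted predictions

shared = mine.keys() & theirs.keys()
diff = [iid for iid in shared
        if mine[iid].get("model_patch") != theirs[iid].get("model_patch")]
print(f"{len(diff)} of {len(shared)} shared instances differ in model_patch")
```

Exact patch equality is a blunt check given sampling nondeterminism, but large divergences usually point at mismatched arguments rather than randomness.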
Thank you again for your hard work and dedication—it’s deeply appreciated. Looking forward to your updates!