Error Message after "thinking" #10
Comments
It will likely be some trial and error finding model and prompt combinations that work well together. For the Vicuna 13B model, I got a little further with the prompt edits made in the prompt composition discussion, but it still has some issues further down the line. Try this: thomasfifer@4ccf2de. Also, if you turn on debug mode in config.py, you will get more information about the input/output strings before running into the error.
Thank you for your help.
It's all on GitHub; you should just be able to use the normal Git commands to check it out.
Thank you. I will try this.
I cloned your repo and tried with the standard prompt (Entrepreneur-GPT, standard role, etc.). Same error as last time: "Warning: model not found. Using cl100k_base encoding." The general output is: "Using memory of type: LocalCache"
With the updated prompt.txt, I'm really surprised you're still not getting any JSON output (you should at least have a '{' character in there somewhere). That said, I have been using the legacy vicuna-13b instead of the newer 1.1 version that you're using, so there might be a different prompt that works better on 1.1. I did push up another commit to the vicuna-13b branch on my fork that helps parse the output a bit better to find the start of the JSON. You could try the legacy version with my latest commit to at least get a few steps in and see what the AI is capable of. There's still some work to be done on modifying the prompt, or trying other models, to get reliable output that doesn't break after a few commands.
Since different models will need different prompts, we should make the prompt configurable and add them all under data/prompts, similar to how llama.cpp does it.
Had the same issue as well, so hopefully it can be looked into.
Same issue; testing different models gives the same result.
Same issue here.
This is what GPT explains about this error: the Python error message indicates that there is an issue with finding a substring within a JSON string. Specifically, the code is attempting to find the index of the first occurrence of the character '{' within the JSON string, but this character is not present in the string. As a result, a ValueError exception is raised with the message "substring not found". To fix this error, you will need to investigate why the expected JSON string is not being generated or passed correctly to the relevant code. One possible solution is to add some error handling to the fix_and_parse_json function for cases where the expected substring is not found within the JSON string. Another approach could be to modify the code that generates or passes the JSON string to ensure that it always contains the expected substring.
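A minimal sketch of the error-handling approach described above (extract_json is a hypothetical helper, not the actual fix_and_parse_json from this repo): instead of json_str.index("{"), which raises ValueError when no brace exists, use str.find, which returns -1, and fail with a message that includes the raw model output for debugging.

```python
def extract_json(raw_reply: str) -> str:
    """Return the substring from the first '{' to the last '}' in raw_reply.

    Raises ValueError with the raw output attached when no JSON object is
    present, instead of the bare "substring not found" from str.index.
    """
    start = raw_reply.find("{")   # -1 when absent, unlike .index which raises
    end = raw_reply.rfind("}")
    if start == -1 or end == -1 or end < start:
        raise ValueError(
            "No JSON object found in model output; raw output was:\n" + raw_reply
        )
    return raw_reply[start : end + 1]
```

This would turn the opaque traceback below into an error that shows what the model actually produced, which is usually prose with no JSON at all.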
Same issue |
Same issues with both the legacy 13B and 7B models and the newest one.
I don't get an error after thinking; it just sits there forever. I do get the "Warning: model not found. Using cl100k_base encoding" warning, though.
Duplicates
Steps to reproduce 🕹
You start the Program by executing the main.py File.
Then you press "y" and Enter.
After a couple of minutes you will get an Error Message.
I am using the Vicuna 13b Model.
Current behavior 😯
I press enter and this is the Output after letting it "think".
AutoGPT INFO Error: Traceback (most recent call last):
  File "C:\Users\alexr\Documents\Auto-Llama-cpp\scripts\main.py", line 79, in print_assistant_thoughts
    assistant_reply_json = fix_and_parse_json(assistant_reply)
  File "C:\Users\alexr\Documents\Auto-Llama-cpp\scripts\json_parser.py", line 52, in fix_and_parse_json
    brace_index = json_str.index("{")
ValueError: substring not found
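The failing call in the traceback can be reproduced in isolation: str.index raises exactly this ValueError whenever the model's reply contains no '{' at all, while str.find would return -1 instead. (The reply string here is an illustrative stand-in for whatever the model actually produced.)

```python
reply = "I am thinking about what to do next."  # a model reply with no JSON

print(reply.find("{"))   # find signals absence with -1 instead of raising

try:
    reply.index("{")     # this is the call that fails in json_parser.py
except ValueError as e:
    print(e)             # prints: substring not found
```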
Expected behavior 🤔
I think it should continue with the Process.
Your prompt 📝
# Paste your prompt here