Replies: 2 comments
---
Absolutely valid question, especially with the speed everything is moving at now. I don't have time to work on this at the moment. In all the experiments I conducted, the 7B and 13B models didn't really capture the idea of what the prompts were asking. I did not like the code quality/design choices of AutoGPT when I forked it (no idea how it looks now) and have thought about rewriting it with LangChain several times. Again, no time to do that at the moment. I will probably archive this repo and start something from scratch when I have more time.
---
This patch, which I found in another thread, worked for me.
---
I have been running into various issues, which mostly seem to be around incomplete JSON responses, e.g.:

```
02:59:24,652 AutoGPT INFO Error:
: Traceback (most recent call last):
  File "C:\Users\Auto-Llama-cpp\scripts\main.py", line 79, in print_assistant_thoughts
    assistant_reply_json = fix_and_parse_json(assistant_reply)
  File "C:\Users\Auto-Llama-cpp\scripts\json_parser.py", line 52, in fix_and_parse_json
    # Let's do something manually:
ValueError: substring not found
02:59:25,455 AutoGPT INFO NEXT ACTION: : COMMAND = Error: ARGUMENTS = substring not found
02:59:25,710 AutoGPT INFO SYSTEM: : Command Error: threw the following error: substring not found
```
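For what it's worth, the `substring not found` suggests the manual fix-up in `json_parser.py` assumes the reply contains a complete JSON object and raises when it doesn't. A more defensive approach is to scan for the first balanced `{...}` span and return `None` on truncated output instead of raising. This is only a minimal sketch of that idea, not code from this repo; the function name `extract_json` is my own, and the brace counting deliberately ignores the edge case of braces inside JSON strings:

```python
import json


def extract_json(reply: str):
    """Pull the first balanced JSON object out of a model reply, if any.

    Tolerates leading/trailing chatter around the object and returns None
    (rather than raising) when the object is truncated or missing, which
    is the failure mode with incomplete local-LLM responses.
    NOTE: naive brace counting; a '{' or '}' inside a JSON string would
    confuse it, so this is a sketch, not a robust parser.
    """
    start = reply.find("{")
    if start == -1:
        return None  # no JSON object at all
    depth = 0
    for i, ch in enumerate(reply[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # Found a balanced span; parse just that slice.
                try:
                    return json.loads(reply[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None  # braces never balanced: incomplete response
```

The caller can then treat `None` as "ask the model again" instead of crashing the command loop.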
I found similar issues reported against Auto-GPT, but noticed that `json_parser.py` is no longer part of Auto-GPT (and there is some work taking place there to support a local LLM).
I was wondering whether the plan is to continue with this fork, to create a more recent fork, or whether this has been abandoned?
Please note that no offence is meant by the query; the efforts to date are very much appreciated.