Home
Welcome to the InferGPT wiki!
- User utterance is sent to the Director Supervisor.
- Predict next step: the Director Supervisor reviews the current ‘world state’ by looking at the canvas, previous actions and conversation history.
- Select Supervisor: the Director Supervisor predicts the next step, is given supervisor availability by the Agent, and picks the most suitable supervisor to handle that step.
- Summarise conversation history: based on the predicted next step and the supervisor the Director has picked, the complete chat history is "contextualised" - only the most relevant messages are used for the final evaluation step.
- Evaluate & deliver: a final evaluation step is run to determine which agent function is executed to try and further the user's goal.
Returning of task: the Director is passed a summary of the task assigned to a supervisor and is told whether or not it was completed successfully. If not, a detailed review describes why it failed and what is needed to succeed next time.

- If the task is completed successfully: this pattern loops until all tasks listed in the canvas are completed, and the response is handed back to the user.
- If the task is not completed successfully: the Director passes the failed task and feedback to the quartermaster, which analyses the failure and creates new agents or tools to fulfill the task goals. These are passed back to the Director once complete and the loop is repeated (see the sketch below).
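Taken together, this flow is a loop. The sketch below is a minimal, illustrative Python version of it; every name here (Canvas, TaskResult, director_loop, the supervisor and quartermaster callables) is hypothetical and not taken from the InferGPT codebase - in the real system the step prediction and supervisor selection would be LLM-driven rather than the trivial placeholders shown.

```python
# Minimal sketch of the Director's loop, under the assumptions stated above.
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    success: bool
    summary: str
    review: str = ""  # detailed feedback, only filled in on failure

@dataclass
class Canvas:
    tasks: list = field(default_factory=list)  # tasks still to be completed
    done: list = field(default_factory=list)

def summarise_history(history, step):
    # "Contextualise" the chat: keep only the messages relevant to the next step.
    relevant = [m for m in history if step.lower() in m.lower()]
    return relevant or history[-3:]

def director_loop(canvas, history, select_supervisor, quartermaster):
    """Predict next step -> select supervisor -> summarise -> evaluate & deliver,
    looping until every task on the canvas is complete."""
    while canvas.tasks:
        step = canvas.tasks[0]                    # predict next step (simplified)
        supervisor = select_supervisor(step)      # pick the most suitable supervisor
        context = summarise_history(history, step)
        result = supervisor(step, context)        # evaluate & deliver

        if result.success:
            canvas.done.append(canvas.tasks.pop(0))
        else:
            # The failed task and its review go to the quartermaster, which
            # creates new agents or tools; the loop then retries the same task.
            quartermaster(step, result.review)
    return canvas.done
```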
The Planner Agent queries data science algorithms based on the graph to make predictions for the given plan. If confidence in the next steps/response is low, it returns questions to gather further information and loops until confidence is high (a minimal sketch follows).
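Here is a hedged sketch of that confidence loop; the predict and ask_user callables, the graph.add_fact method and the threshold are illustrative assumptions, not InferGPT APIs.

```python
# Sketch of the planner's low-confidence question loop; names and the
# threshold are illustrative, not from the InferGPT codebase.
CONFIDENCE_THRESHOLD = 0.8

def plan_with_clarification(plan, graph, predict, ask_user, max_rounds=3):
    """Predict next steps from the graph; while confidence is low, ask the
    returned questions, fold the answers back into the graph, and retry."""
    prediction = None
    for _ in range(max_rounds):
        prediction, confidence, questions = predict(plan, graph)
        if confidence >= CONFIDENCE_THRESHOLD:
            break
        for question in questions:
            graph.add_fact(question, ask_user(question))  # gather missing info
    return prediction
```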
The memory agent captures every action and the conversation history. It is used to retrain long-term memory based on how close its initial guesses were; new information is stored if it is useful for predictions (sketched below).
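A tiny sketch of that idea, with a hypothetical in-memory list standing in for the long-term store:

```python
# Sketch of the memory agent: log every action alongside the prediction that
# preceded it, so long-term memory can later be retrained on how close the
# initial guesses were. The store shown here is a plain in-memory list.
from dataclasses import dataclass, field

@dataclass
class MemoryAgent:
    log: list = field(default_factory=list)

    def record(self, action, predicted_outcome, actual_outcome):
        entry = {
            "action": action,
            "predicted": predicted_outcome,
            "actual": actual_outcome,
            "surprise": predicted_outcome != actual_outcome,
        }
        self.log.append(entry)
        return entry

    def retraining_examples(self):
        # Only surprising outcomes carry new information worth keeping long term.
        return [e for e in self.log if e["surprise"]]
```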
Language models are great at predicting the next token - that is what they are designed to do. The issue, compared to humans, is that when one person makes a request of another, we very rarely just spew out a response. Instead, we usually ask a question back. For example, if I ask you for a film recommendation and you know me well, you would think: "Chris loves Marvel, and I know there's been a recent film released" - so you would ask: "Have you seen the latest Ant-Man?"
Alternatively, if you didn't know me well, you would ask things such as: "What genre of films do you like?"
We believe knowledge graphs are the solution to this: understand the user's current profile and ask questions based on the missing context needed to solve their problem. The graph can then also store conversations, context and new information as time goes on - always remaining contextually up to date.
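As an illustration, a small query against the graph can surface the context that is still missing before the agent answers. The schema, field names and credentials below are assumptions for the sketch, not the InferGPT schema; it uses the official neo4j Python driver.

```python
# Sketch: ask the knowledge graph which profile facts are still unknown, so the
# agent knows what clarifying questions to ask. Schema and field names are
# hypothetical.
from neo4j import GraphDatabase

REQUIRED_FACTS = ["favourite_genre", "last_film_watched"]  # illustrative profile fields

def missing_context(driver, user_id):
    """Return the profile facts we still need to ask the user about."""
    query = (
        "MATCH (u:User {id: $user_id}) "
        "RETURN [k IN $required WHERE u[k] IS NULL] AS missing"
    )
    with driver.session() as session:
        record = session.run(query, user_id=user_id, required=REQUIRED_FACTS).single()
        return record["missing"] if record else REQUIRED_FACTS

if __name__ == "__main__":
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    for fact in missing_context(driver, "chris"):
        print(f"Clarifying question needed: what is your {fact.replace('_', ' ')}?")
    driver.close()
```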
Graphs are great at this sort of task. They infer fast and they carry deep context in their edges. Most excitingly, they also:
- Act as super-vector stores via Neo4j's Cypher language, providing better performance than plain cosine-similarity methods (see the sketch below).
- Make great recommendation models - graphs could even start to predict what you want to do next!
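For the vector-store point above, here is a hedged sketch of what that can look like with a Neo4j 5.x vector index; the index name "message_embeddings", the Message/Topic schema and the ABOUT relationship are assumptions, not InferGPT's data model.

```python
# Sketch of graph-backed similarity search, assuming a Neo4j 5.x vector index
# called "message_embeddings" over (:Message {text, embedding}) nodes. The
# schema and index name are illustrative.
from neo4j import GraphDatabase

def similar_messages(driver, query_embedding, k=5):
    """Return the k most similar stored messages, plus the topic each one is
    linked to - the edges are what carry the extra context."""
    cypher = (
        "CALL db.index.vector.queryNodes('message_embeddings', $k, $embedding) "
        "YIELD node, score "
        "OPTIONAL MATCH (node)-[:ABOUT]->(topic:Topic) "
        "RETURN node.text AS text, topic.name AS topic, score "
        "ORDER BY score DESC"
    )
    with driver.session() as session:
        return [record.data() for record in session.run(cypher, k=k, embedding=query_embedding)]
```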
For a deeper dive, I highly recommend the Neo4j Going Meta YouTube series.