arch: deprecating recall action and search_memory #2900

Merged: 5 commits, Jul 12, 2024
13 changes: 0 additions & 13 deletions agenthub/README.md
@@ -33,7 +33,6 @@ Here is a list of available Actions, which can be returned by `agent.step()`:
- [`FileReadAction`](../opendevin/events/action/files.py) - Reads the content of a file
- [`FileWriteAction`](../opendevin/events/action/files.py) - Writes new content to a file
- [`BrowseURLAction`](../opendevin/events/action/browse.py) - Gets the content of a URL
- [`AgentRecallAction`](../opendevin/events/action/agent.py) - Searches memory (e.g. a vector database)
- [`AddTaskAction`](../opendevin/events/action/tasks.py) - Adds a subtask to the plan
- [`ModifyTaskAction`](../opendevin/events/action/tasks.py) - Changes the state of a subtask.
- [`AgentFinishAction`](../opendevin/events/action/agent.py) - Stops the control loop, allowing the user/delegator agent to enter a new task
@@ -54,7 +53,6 @@ Here is a list of available Observations:
- [`BrowserOutputObservation`](../opendevin/events/observation/browse.py)
- [`FileReadObservation`](../opendevin/events/observation/files.py)
- [`FileWriteObservation`](../opendevin/events/observation/files.py)
- [`AgentRecallObservation`](../opendevin/events/observation/recall.py)
- [`ErrorObservation`](../opendevin/events/observation/error.py)
- [`SuccessObservation`](../opendevin/events/observation/success.py)

@@ -72,14 +70,3 @@ def step(self, state: "State") -> "Action"

`step` moves the agent forward one step towards its goal. This probably means
sending a prompt to the LLM, then parsing the response into an `Action`.

### `search_memory`

```
def search_memory(self, query: str) -> list[str]:
```

`search_memory` should return a list of events that match the query. This will be used
for the `recall` action.

You can optionally just return `[]` for this method, meaning the agent has no long-term memory.
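
With `search_memory` dropped from the interface, a concrete agent now only has to implement `step`. A minimal sketch of what that looks like after this change, assuming the import paths and the `LLM`/`State` types used elsewhere in this PR; the `EchoAgent` class and its prompt are illustrative, not part of the codebase:

```python
from opendevin.controller.agent import Agent
from opendevin.controller.state.state import State
from opendevin.events.action import Action, MessageAction
from opendevin.llm.llm import LLM


class EchoAgent(Agent):
    """Toy agent: after this PR, `step` is the only method a subclass must provide."""

    def __init__(self, llm: LLM):
        super().__init__(llm)

    def step(self, state: State) -> Action:
        # Ask the LLM what to do next, given a trivial prompt about the current state.
        messages = [
            {'role': 'user', 'content': f'Iteration {state.iteration}: what should I do next?'}
        ]
        response = self.llm.completion(messages=messages)
        content = response['choices'][0]['message']['content']
        # A real agent would parse this into a concrete Action (run, read, write, ...);
        # here it is simply returned as a message.
        return MessageAction(content=content)
```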
3 changes: 0 additions & 3 deletions agenthub/browsing_agent/browsing_agent.py
@@ -213,6 +213,3 @@ def step(self, state: State) -> Action:
stop=[')```', ')\n```'],
)
return self.response_parser.parse(response)

def search_memory(self, query: str) -> list[str]:
raise NotImplementedError('Implement this abstract method')
3 changes: 0 additions & 3 deletions agenthub/codeact_agent/codeact_agent.py
@@ -208,9 +208,6 @@ def step(self, state: State) -> Action:
)
return self.action_parser.parse(response)

def search_memory(self, query: str) -> list[str]:
raise NotImplementedError('Implement this abstract method')

def _get_messages(self, state: State) -> list[dict[str, str]]:
messages = [
{'role': 'system', 'content': self.system_message},
3 changes: 0 additions & 3 deletions agenthub/codeact_swe_agent/codeact_swe_agent.py
@@ -162,9 +162,6 @@ def step(self, state: State) -> Action:

return self.response_parser.parse(response)

def search_memory(self, query: str) -> list[str]:
raise NotImplementedError('Implement this abstract method')

def _get_messages(self, state: State) -> list[dict[str, str]]:
messages = [
{'role': 'system', 'content': self.system_message},
3 changes: 0 additions & 3 deletions agenthub/delegator_agent/agent.py
@@ -82,6 +82,3 @@ def step(self, state: State) -> Action:
)
else:
raise Exception('Invalid delegate state')

def search_memory(self, query: str) -> list[str]:
return []
11 changes: 0 additions & 11 deletions agenthub/dummy_agent/agent.py
@@ -7,7 +7,6 @@
Action,
AddTaskAction,
AgentFinishAction,
AgentRecallAction,
AgentRejectAction,
BrowseInteractiveAction,
BrowseURLAction,
@@ -18,7 +17,6 @@
ModifyTaskAction,
)
from opendevin.events.observation import (
AgentRecallObservation,
CmdOutputObservation,
FileReadObservation,
FileWriteObservation,
@@ -91,12 +89,6 @@ def __init__(self, llm: LLM):
)
],
},
{
'action': AgentRecallAction(query='who am I?'),
'observations': [
AgentRecallObservation('', memories=['I am a computer.']),
],
},
{
'action': BrowseURLAction(url='https://google.com'),
'observations': [
@@ -152,6 +144,3 @@ def step(self, state: State) -> Action:
hist_obs == expected_obs
), f'Expected observation {expected_obs}, got {hist_obs}'
return self.steps[state.iteration]['action']

def search_memory(self, query: str) -> list[str]:
return ['I am a computer.']
3 changes: 0 additions & 3 deletions agenthub/micro/agent.py
@@ -79,6 +79,3 @@ def step(self, state: State) -> Action:
action_resp = resp['choices'][0]['message']['content']
action = parse_response(action_resp)
return action

def search_memory(self, query: str) -> list[str]:
return []
23 changes: 0 additions & 23 deletions agenthub/monologue_agent/agent.py
@@ -8,7 +8,6 @@
from opendevin.core.schema import ActionType
from opendevin.events.action import (
Action,
AgentRecallAction,
BrowseURLAction,
CmdRunAction,
FileReadAction,
@@ -17,7 +16,6 @@
NullAction,
)
from opendevin.events.observation import (
AgentRecallObservation,
BrowserOutputObservation,
CmdOutputObservation,
FileReadObservation,
@@ -103,8 +101,6 @@ def _add_initial_thoughts(self, task):
)
elif previous_action == ActionType.READ:
observation = FileReadObservation(content=thought, path='')
elif previous_action == ActionType.RECALL:
observation = AgentRecallObservation(content=thought, memories=[])
elif previous_action == ActionType.BROWSE:
observation = BrowserOutputObservation(
content=thought, url='', screenshot=''
@@ -128,10 +124,6 @@ def _add_initial_thoughts(self, task):
path = thought.split('READ ')[1]
action = FileReadAction(path=path)
previous_action = ActionType.READ
elif thought.startswith('RECALL'):
query = thought.split('RECALL ')[1]
action = AgentRecallAction(query=query)
previous_action = ActionType.RECALL
elif thought.startswith('BROWSE'):
url = thought.split('BROWSE ')[1]
action = BrowseURLAction(url=url)
@@ -192,21 +184,6 @@ def step(self, state: State) -> Action:
self.latest_action = action
return action

def search_memory(self, query: str) -> list[str]:
"""
Uses VectorIndexRetriever to find related memories within the long term memory.
Uses search to produce top 10 results.

Parameters:
- The query that we want to find related memories for

Returns:
- A list of top 10 text results that matched the query
"""
if self.memory is None:
return []
return self.memory.search(query)

def reset(self) -> None:
super().reset()

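The docstring deleted above describes the behaviour being retired: a top-10 similarity search over long-term memory via `VectorIndexRetriever`. For reference only, a rough sketch of that kind of lookup with llama-index; the import paths, the `search_memories` helper, and the implicit embedding configuration are assumptions for illustration, not OpenDevin's actual long-term memory implementation:

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.retrievers import VectorIndexRetriever


def search_memories(events: list[str], query: str, top_k: int = 10) -> list[str]:
    """Index past events and return the top_k snippets most similar to the query.

    Assumes an embedding model is configured for llama-index (OpenAI by default).
    """
    index = VectorStoreIndex.from_documents([Document(text=event) for event in events])
    retriever = VectorIndexRetriever(index=index, similarity_top_k=top_k)
    return [result.node.get_content() for result in retriever.retrieve(query)]
```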
14 changes: 2 additions & 12 deletions agenthub/monologue_agent/utils/prompts.py
@@ -35,14 +35,12 @@
* `owner` - the owner of the repo to push to
* `repo` - the name of the repo to push to
* `branch` - the name of the branch to push
* `recall` - recalls a past memory. Arguments:
* `query` - the query to search for
* `message` - make a plan, set a goal, record your thoughts, or ask for more input from the user. Arguments:
* `content` - the message to record
* `wait_for_response` - set to `true` to wait for the user to respond before proceeding
* `finish` - if you're absolutely certain that you've completed your task and have tested your work, use the finish action to stop working.

You MUST take time to think in between read, write, run, browse, push, and recall actions--do this with the `message` action.
You MUST take time to think in between read, write, run, browse, and push actions--do this with the `message` action.
You should never act twice in a row without thinking. But if your last several
actions are all `message` actions, you should consider taking a different action.

@@ -92,15 +90,7 @@
'It seems like I have some kind of short term memory.',
'Each of my thoughts seems to be stored in a JSON array.',
'It seems whatever I say next will be added as an object to the list.',
'But no one has perfect short-term memory. My list of thoughts will be summarized and condensed over time, losing information in the process.',
'Fortunately I have long term memory!',
'I can just perform a recall action, followed by the thing I want to remember. And then related thoughts just spill out!',
"Sometimes they're random thoughts that don't really have to do with what I wanted to remember. But usually they're exactly what I need!",
"Let's try it out!",
'RECALL what it is I want to do',
"Here's what I want to do: $TASK",
'How am I going to get there though?',
"Neat! And it looks like it's easy for me to use the command line too! I just have to perform a run action and include the command I want to run in the command argument. The command output just jumps into my head!",
"It looks like it's easy for me to use the command line too! I just have to perform a run action and include the command I want to run in the command argument. The command output just jumps into my head!",
'RUN echo "hello world"',
'hello world',
'Cool! I bet I can write files too using the write action.',
3 changes: 0 additions & 3 deletions agenthub/planner_agent/agent.py
@@ -49,6 +49,3 @@ def step(self, state: State) -> Action:
messages = [{'content': prompt, 'role': 'user'}]
resp = self.llm.completion(messages=messages)
return self.response_parser.parse(resp)

def search_memory(self, query: str) -> list[str]:
return []
3 changes: 1 addition & 2 deletions agenthub/planner_agent/prompt.py
@@ -89,7 +89,7 @@
* `state` - set to 'in_progress' to start the task, 'completed' to finish it, 'verified' to assert that it was successful, 'abandoned' to give up on it permanently, or `open` to stop working on it for now.
* `finish` - if ALL of your tasks and subtasks have been verified or abandoned, and you're absolutely certain that you've completed your task and have tested your work, use the finish action to stop working.

You MUST take time to think in between read, write, run, browse, and recall actions--do this with the `message` action.
You MUST take time to think in between read, write, run, and browse actions--do this with the `message` action.
You should never act twice in a row without thinking. But if your last several
actions are all `message` actions, you should consider taking a different action.

@@ -109,7 +109,6 @@ def get_hint(latest_action_id: str) -> str:
ActionType.WRITE: 'You just changed a file. You should think about how it affects your plan.',
ActionType.BROWSE: 'You should think about the page you just visited, and what you learned from it.',
ActionType.MESSAGE: "Look at your last thought in the history above. What does it suggest? Don't think anymore--take action.",
ActionType.RECALL: 'You should think about the information you just recalled, and how it should affect your plan.',
ActionType.ADD_TASK: 'You should think about the next action to take.',
ActionType.MODIFY_TASK: 'You should think about the next action to take.',
ActionType.SUMMARIZE: '',
@@ -55,7 +55,6 @@ _Exemple de CodeActAgent avec `gpt-4-turbo-2024-04-09` effectuant une tâche de
| ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `__init__` | Initialise un agent avec `llm` et une liste de messages `list[Mapping[str, str]]` |
| `step` | Effectue une étape en utilisant l'agent CodeAct. Cela inclut la collecte d'informations sur les étapes précédentes et invite le modèle à exécuter une commande. |
| `search_memory` | Pas encore implémenté |

### En cours de réalisation & prochaine étape

@@ -77,7 +76,6 @@ La mémoire à court terme est stockée en tant qu'objet Monologue et le modèle
`CmdRunAction`,
`FileWriteAction`,
`FileReadAction`,
`AgentRecallAction`,
`BrowseURLAction`,
`GithubPushAction`,
`AgentThinkAction`
@@ -88,7 +86,6 @@ La mémoire à court terme est stockée en tant qu'objet Monologue et le modèle
`NullObservation`,
`CmdOutputObservation`,
`FileReadObservation`,
`AgentRecallObservation`,
`BrowserOutputObservation`

### Méthodes
@@ -99,7 +96,6 @@ La mémoire à court terme est stockée en tant qu'objet Monologue et le modèle
| `_add_event` | Ajoute des événements au monologue de l'agent et condense avec un résumé automatiquement si le monologue est trop long |
| `_initialize` | Utilise la liste `INITIAL_THOUGHTS` pour donner à l'agent un contexte pour ses capacités et comment naviguer dans le `/workspace` |
| `step` | Modifie l'état actuel en ajoutant les actions et observations les plus récentes, puis invite le modèle à réfléchir à la prochaine action à entreprendre. |
| `search_memory` | Utilise `VectorIndexRetriever` pour trouver des souvenirs liés à la mémoire à long terme. |

## Agent Planificateur

@@ -116,7 +112,6 @@ L'agent reçoit ses paires action-observation précédentes, la tâche actuelle,
`GithubPushAction`,
`FileReadAction`,
`FileWriteAction`,
`AgentRecallAction`,
`AgentThinkAction`,
`AgentFinishAction`,
`AgentSummarizeAction`,
@@ -129,7 +124,6 @@ L'agent reçoit ses paires action-observation précédentes, la tâche actuelle,
`NullObservation`,
`CmdOutputObservation`,
`FileReadObservation`,
`AgentRecallObservation`,
`BrowserOutputObservation`

### Méthodes
@@ -138,4 +132,3 @@ L'agent reçoit ses paires action-observation précédentes, la tâche actuelle,
| ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `__init__` | Initialise un agent avec `llm` |
| `step` | Vérifie si l'étape actuelle est terminée, retourne `AgentFinishAction` si oui. Sinon, crée une incitation de planification et l'envoie au modèle pour inférence, en ajoutant le résultat comme prochaine action. |
| `search_memory` | Pas encore implémenté |
@@ -55,7 +55,6 @@ _CodeActAgent使用`gpt-4-turbo-2024-04-09`执行数据科学任务(线性回
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `__init__` | 使用`llm`和一系列信息`list[Mapping[str, str]]`初始化Agent |
| `step` | 使用CodeAct Agent执行一步操作,包括收集前一步的信息并提示模型执行命令。 |
| `search_memory`| 尚未实现 |

### 进行中的工作 & 下一步

@@ -77,7 +76,6 @@ Monologue Agent利用长短期记忆来完成任务。
`CmdRunAction`,
`FileWriteAction`,
`FileReadAction`,
`AgentRecallAction`,
`BrowseURLAction`,
`GithubPushAction`,
`AgentThinkAction`
@@ -88,7 +86,6 @@ Monologue Agent利用长短期记忆来完成任务。
`NullObservation`,
`CmdOutputObservation`,
`FileReadObservation`,
`AgentRecallObservation`,
`BrowserOutputObservation`

### 方法
@@ -99,7 +96,6 @@ Monologue Agent利用长短期记忆来完成任务。
| `_add_event` | 将事件附加到Agent的独白中,如独白过长自动与摘要一起压缩 |
| `_initialize` | 使用`INITIAL_THOUGHTS`列表为agent提供其能力的上下文以及如何导航`/workspace` |
| `step` | 通过添加最近的动作和观测修改当前状态,然后提示模型考虑其接下来的动作。 |
| `search_memory`| 使用`VectorIndexRetriever`在长期记忆中查找相关记忆。 |

## Planner Agent

@@ -116,7 +112,6 @@ Planner agent利用特殊的提示策略为解决问题创建长期计划。
`GithubPushAction`,
`FileReadAction`,
`FileWriteAction`,
`AgentRecallAction`,
`AgentThinkAction`,
`AgentFinishAction`,
`AgentSummarizeAction`,
@@ -129,7 +124,6 @@ Planner agent利用特殊的提示策略为解决问题创建长期计划。
`NullObservation`,
`CmdOutputObservation`,
`FileReadObservation`,
`AgentRecallObservation`,
`BrowserOutputObservation`

### 方法
@@ -138,4 +132,3 @@ Planner agent利用特殊的提示策略为解决问题创建长期计划。
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `__init__` | 使用`llm`初始化Agent |
| `step` | 检查当前步骤是否完成,如果是则返回`AgentFinishAction`。否则,创建计划提示并发送给模型进行推理,将结果作为下一步动作。 |
| `search_memory`| 尚未实现 |