
Sourcery refactored main branch #2

Open — wants to merge 1 commit into main from sourcery/main
Conversation

sourcery-ai[bot] commented Nov 2, 2023

Branch main refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin.

Review changes via command line

To manually merge these changes, make sure you're on the main branch, then run:

git fetch origin sourcery/main
git merge --ff-only FETCH_HEAD
git reset HEAD^

Help us improve this pull request!

@sourcery-ai sourcery-ai bot requested a review from Tuff-Madman November 2, 2023 08:24
Before: PATH_SEPARATOR = WIN32 and "\\" or "/"
After:  PATH_SEPARATOR = "\\" if WIN32 else "/"

Line 27 refactored with the following changes:
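For context, a minimal standalone sketch (not the PR's actual module; `WIN32` is a stand-in flag here) of why the conditional expression is safer than the legacy `and`/`or` idiom: the old form silently returns the wrong branch whenever the first value is falsy.

```python
WIN32 = True  # stand-in flag for illustration

# Here both forms agree, because "\\" is truthy:
legacy_sep = WIN32 and "\\" or "/"
modern_sep = "\\" if WIN32 else "/"

# With a falsy first value, the legacy idiom picks the wrong branch:
legacy_empty = WIN32 and "" or "/"   # "" is falsy, so "/" wins despite WIN32 being True
modern_empty = "" if WIN32 else "/"  # correctly yields "" when WIN32 is True
```

In this file the replacement is purely stylistic (both separators are truthy), but the conditional expression removes the latent pitfall.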

Comment on lines -264 to -266
# Warn if use_docker was unspecified (or None), and cannot be provided (the default).
# In this case the current behavior is to fall back to run natively, but this behavior
# is subject to change.

Function execute_code refactored with the following changes:

This removes the following comments (why?):

# In this case the current behavior is to fall back to run natively, but this behavior
# Warn if use_docker was unspecified (or None), and cannot be provided (the default).
# is subject to change.

Comment on lines -448 to +445
Before:
    if pos == -1:
        return response
    return response[:pos]
After:
    return response if pos == -1 else response[:pos]

Function _remove_check refactored with the following changes:

Comment on lines -491 to +486
Before: "success": any(s for s in success_list),
After:  "success": any(success_list),

Function eval_function_completions refactored with the following changes:
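The change is behavior-preserving: `any()` already consumes any iterable, so wrapping the list in an identity generator adds nothing. A quick standalone check:

```python
success_list = [False, True, False]  # toy data in the shape the function uses

via_generator = any(s for s in success_list)
direct = any(success_list)
# Both short-circuit at the first truthy element and return the same bool.
```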

Comment on lines -53 to +54
if idx < 0:
    return None

Function last_boxed_only_string refactored with the following changes:

Before:
    if isinstance(message, str):
        return {"content": message}
    else:
        return message
After:
    return {"content": message} if isinstance(message, str) else message

Function ConversableAgent._message_to_dict refactored with the following changes:

Comment on lines -331 to +328
Before:
    # When the agent composes and sends the message, the role of the message is "assistant"
    # unless it's "function".
    valid = self._append_oai_message(message, "assistant", recipient)
    if valid:
After:
    if valid := self._append_oai_message(message, "assistant", recipient):

Function ConversableAgent.send refactored with the following changes:
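The walrus operator (Python 3.8+) merges the assignment and the test into one statement. A standalone sketch, with a hypothetical validator standing in for `_append_oai_message`:

```python
def append_oai_message(message):
    """Hypothetical stand-in: accept only dicts that carry a 'content' key."""
    return isinstance(message, dict) and "content" in message

# Old shape: assign, then test.
valid_old = append_oai_message({"content": "hi"})
handled_old = bool(valid_old)

# New shape: bind and test in one expression.
handled_new = False
if valid := append_oai_message({"content": "hi"}):
    handled_new = True
```

Note that `valid` remains in scope after the `if`, just as with the two-statement form.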

This removes the following comments (why?):

# unless it's "function".
# When the agent composes and sends the message, the role of the message is "assistant"

Comment on lines -380 to +374
Before:
    # When the agent composes and sends the message, the role of the message is "assistant"
    # unless it's "function".
    valid = self._append_oai_message(message, "assistant", recipient)
    if valid:
After:
    if valid := self._append_oai_message(message, "assistant", recipient):

Function ConversableAgent.a_send refactored with the following changes:

This removes the following comments (why?):

# unless it's "function".
# When the agent composes and sends the message, the role of the message is "assistant"

Comment on lines -708 to +723
Before:
    else:
        if self._consecutive_auto_reply_counter[sender] >= self._max_consecutive_auto_reply_dict[sender]:
            if self.human_input_mode == "NEVER":
                reply = "exit"
            else:
                # self.human_input_mode == "TERMINATE":
                terminate = self._is_termination_msg(message)
                reply = self.get_human_input(
                    f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
                    if terminate
                    else f"Please give feedback to {sender.name}. Press enter to skip and use auto-reply, or type 'exit' to stop the conversation: "
                )
            no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
            # if the human input is empty, and the message is a termination message, then we will terminate the conversation
            reply = reply if reply or not terminate else "exit"
        elif self._is_termination_msg(message):
            if self.human_input_mode == "NEVER":
                reply = "exit"
            else:
                # self.human_input_mode == "TERMINATE":
                reply = self.get_human_input(
                    f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
                )
            no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
            # if the human input is empty, and the message is a termination message, then we will terminate the conversation
            reply = reply or "exit"
After:
    elif self._consecutive_auto_reply_counter[sender] >= self._max_consecutive_auto_reply_dict[sender]:
        if self.human_input_mode == "NEVER":
            reply = "exit"
        else:
            # self.human_input_mode == "TERMINATE":
            terminate = self._is_termination_msg(message)
            reply = self.get_human_input(
                f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
                if terminate
                else f"Please give feedback to {sender.name}. Press enter to skip and use auto-reply, or type 'exit' to stop the conversation: "
            )
        no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
        # if the human input is empty, and the message is a termination message, then we will terminate the conversation
        reply = reply if reply or not terminate else "exit"
    elif self._is_termination_msg(message):
        if self.human_input_mode == "NEVER":
            reply = "exit"
        else:
            # self.human_input_mode == "TERMINATE":
            reply = self.get_human_input(
                f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
            )
        no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
        # if the human input is empty, and the message is a termination message, then we will terminate the conversation
        reply = reply or "exit"

Function ConversableAgent.check_termination_and_human_reply refactored with the following changes:
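The restructuring replaces a nested `else: if ...:` ladder with a flat `elif` chain; which branch runs is unchanged, only one indentation level disappears. A toy sketch of the equivalence:

```python
def classify_nested(n):
    # Old shape: an `if` nested inside an `else`.
    if n == 0:
        return "zero"
    else:
        if n > 0:
            return "positive"
        else:
            return "negative"

def classify_flat(n):
    # New shape: the nested `if` hoisted into an `elif`.
    if n == 0:
        return "zero"
    elif n > 0:
        return "positive"
    else:
        return "negative"
```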

Comment on lines -779 to +793
Before:
    else:
        if self._consecutive_auto_reply_counter[sender] >= self._max_consecutive_auto_reply_dict[sender]:
            if self.human_input_mode == "NEVER":
                reply = "exit"
            else:
                # self.human_input_mode == "TERMINATE":
                terminate = self._is_termination_msg(message)
                reply = await self.a_get_human_input(
                    f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
                    if terminate
                    else f"Please give feedback to {sender.name}. Press enter to skip and use auto-reply, or type 'exit' to stop the conversation: "
                )
            no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
            # if the human input is empty, and the message is a termination message, then we will terminate the conversation
            reply = reply if reply or not terminate else "exit"
        elif self._is_termination_msg(message):
            if self.human_input_mode == "NEVER":
                reply = "exit"
            else:
                # self.human_input_mode == "TERMINATE":
                reply = await self.a_get_human_input(
                    f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
                )
            no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
            # if the human input is empty, and the message is a termination message, then we will terminate the conversation
            reply = reply or "exit"
After:
    elif self._consecutive_auto_reply_counter[sender] >= self._max_consecutive_auto_reply_dict[sender]:
        if self.human_input_mode == "NEVER":
            reply = "exit"
        else:
            # self.human_input_mode == "TERMINATE":
            terminate = self._is_termination_msg(message)
            reply = await self.a_get_human_input(
                f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
                if terminate
                else f"Please give feedback to {sender.name}. Press enter to skip and use auto-reply, or type 'exit' to stop the conversation: "
            )
        no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
        # if the human input is empty, and the message is a termination message, then we will terminate the conversation
        reply = reply if reply or not terminate else "exit"
    elif self._is_termination_msg(message):
        if self.human_input_mode == "NEVER":
            reply = "exit"
        else:
            # self.human_input_mode == "TERMINATE":
            reply = await self.a_get_human_input(
                f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
            )
        no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
        # if the human input is empty, and the message is a termination message, then we will terminate the conversation
        reply = reply or "exit"

Function ConversableAgent.a_check_termination_and_human_reply refactored with the following changes:

Comment on lines -962 to +951
Before:
    reply = input(prompt)
    return reply
After:
    return input(prompt)

Function ConversableAgent.get_human_input refactored with the following changes:

Comment on lines -976 to +964
Before:
    reply = input(prompt)
    return reply
After:
    return input(prompt)

Function ConversableAgent.a_get_human_input refactored with the following changes:

Comment on lines -48 to +51
Before:
    else:
        offset = self.agent_names.index(agent.name) + 1
        for i in range(len(self.agents)):
            if self.agents[(offset + i) % len(self.agents)] in agents:
                return self.agents[(offset + i) % len(self.agents)]
After:
    offset = self.agent_names.index(agent.name) + 1
    for i in range(len(self.agents)):
        if self.agents[(offset + i) % len(self.agents)] in agents:
            return self.agents[(offset + i) % len(self.agents)]

Function GroupChat.next_agent refactored with the following changes:

Before:
    contain_code = False
    for c in cb:
        if c[0] == "python" or c[0] == "wolfram":
            contain_code = True
            break
After:
    contain_code = any(c[0] in ["python", "wolfram"] for c in cb)

Function _is_termination_msg_mathchat refactored with the following changes:

  • Use any() instead of for loop (use-any)
  • Replace multiple comparisons of same variable with in operator (merge-comparisons)
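Both listed changes in one standalone sketch, over a toy list of `(language, source)` tuples shaped like the code blocks this function inspects:

```python
cb = [("latex", r"\frac{1}{2}"), ("python", "print(1)")]

# Old: manual flag-and-break loop with two equality tests.
contain_code_old = False
for c in cb:
    if c[0] == "python" or c[0] == "wolfram":
        contain_code_old = True
        break

# New: any() short-circuits like the break did, and `in` merges the
# two comparisons against the same variable.
contain_code_new = any(c[0] in ["python", "wolfram"] for c in cb)
```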

Comment on lines -110 to +106
Before:
    if "=" in last_line:
        last_line = "print(" + last_line.split(" = ")[0] + ")"
        lines.append(last_line)
    else:
        lines[-1] = "print(" + last_line + ")"
After:
    if "=" in last_line:
        last_line = "print(" + last_line.split(" = ")[0] + ")"
        lines.append(last_line)
    else:
        lines[-1] = f"print({last_line})"

Function _add_print_to_last_line refactored with the following changes:

Comment on lines -259 to +268
Before:
    if self.verbosity >= 2:
        # Use the messaging mechanism so that the analyzer's messages are included in the printed chat.
        self.analyzer.reset()  # Clear the analyzer's list of messages.
        self.send(
            recipient=self.analyzer, message=text_to_analyze, request_reply=False
        )  # Put the message in the analyzer's list.
        self.send(recipient=self.analyzer, message=analysis_instructions, request_reply=True)  # Request the reply.
        return self.last_message(self.analyzer)["content"]
    else:
        # Use the analyzer's method directly, to leave analyzer message out of the printed chat.
        return self.analyzer.analyze_text(text_to_analyze, analysis_instructions)
After:
    if self.verbosity < 2:
        # Use the analyzer's method directly, to leave analyzer message out of the printed chat.
        return self.analyzer.analyze_text(text_to_analyze, analysis_instructions)
    # Use the messaging mechanism so that the analyzer's messages are included in the printed chat.
    self.analyzer.reset()  # Clear the analyzer's list of messages.
    self.send(
        recipient=self.analyzer, message=text_to_analyze, request_reply=False
    )  # Put the message in the analyzer's list.
    self.send(recipient=self.analyzer, message=analysis_instructions, request_reply=True)  # Request the reply.
    return self.last_message(self.analyzer)["content"]

Function TeachableAgent.analyze refactored with the following changes:
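This is the guard-clause pattern: invert the short condition and return early, so the longer messaging path no longer sits inside an `else` block. A toy sketch showing both shapes return the same result:

```python
def analyze_nested(verbosity):
    # Old shape: long branch first, short branch under else.
    if verbosity >= 2:
        return "via messaging"
    else:
        return "direct"

def analyze_guard(verbosity):
    # New shape: short branch returns early; long branch dedented.
    if verbosity < 2:
        return "direct"
    return "via messaging"
```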

Comment on lines -306 to +305
Before: print(colored(" Location = {}".format(self.path_to_dict), "light_green"))
After:  print(colored(f" Location = {self.path_to_dict}", "light_green"))

Function MemoStore.__init__ refactored with the following changes:
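This and the next several hunks are the same mechanical change: `str.format()` with positional `{}` slots becomes an f-string with the expressions inlined. The rendered text is byte-for-byte identical; a quick check with an illustrative (hypothetical) path value:

```python
path_to_dict = "./tmp/memos.pkl"  # illustrative value, not the class's real default

old_style = " Location = {}".format(path_to_dict)
new_style = f" Location = {path_to_dict}"
```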

Comment on lines -320 to +319
Before: " ID: {}\n INPUT TEXT: {}\n OUTPUT TEXT: {}".format(uid, input_text, output_text),
After:  f" ID: {uid}\n INPUT TEXT: {input_text}\n OUTPUT TEXT: {output_text}",

Function MemoStore.list_memos refactored with the following changes:

Comment on lines -328 to +327
Before: print(colored(" Location = {}".format(self.path_to_dict), "light_green"))
After:  print(colored(f" Location = {self.path_to_dict}", "light_green"))

Function MemoStore.close refactored with the following changes:

Comment on lines -347 to +346
Before:
    "\nINPUT-OUTPUT PAIR ADDED TO VECTOR DATABASE:\n  ID\n    {}\n  INPUT\n    {}\n  OUTPUT\n    {}".format(
        self.last_memo_id, input_text, output_text
    ),
After:
    f"\nINPUT-OUTPUT PAIR ADDED TO VECTOR DATABASE:\n  ID\n    {self.last_memo_id}\n  INPUT\n    {input_text}\n  OUTPUT\n    {output_text}",

Function MemoStore.add_input_output_pair refactored with the following changes:

Comment on lines -365 to +362
Before:
    "\nINPUT-OUTPUT PAIR RETRIEVED FROM VECTOR DATABASE:\n  INPUT1\n    {}\n  OUTPUT\n    {}\n  DISTANCE\n    {}".format(
        input_text, output_text, distance
    ),
After:
    f"\nINPUT-OUTPUT PAIR RETRIEVED FROM VECTOR DATABASE:\n  INPUT1\n    {input_text}\n  OUTPUT\n    {output_text}\n  DISTANCE\n    {distance}",

Function MemoStore.get_nearest_memo refactored with the following changes:

Comment on lines -375 to +370
Before:
    if n_results > len(self.uid_text_dict):
        n_results = len(self.uid_text_dict)
After:
    n_results = min(n_results, len(self.uid_text_dict))

Function MemoStore.get_related_memos refactored with the following changes:
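`min()` expresses the clamp in a single expression; the compare-and-assign and the `min()` form always leave the same value behind. A toy check:

```python
uid_text_dict = {"m1": "a", "m2": "b"}  # toy store holding two memos

# Old: conditional reassignment.
n_results = 5
if n_results > len(uid_text_dict):
    n_results = len(uid_text_dict)

# New: one-expression clamp to the available size.
n_results_new = min(5, len(uid_text_dict))
```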

Comment on lines -401 to +434
Before:
    examples = []
    examples.append({"text": "When I say papers I mean research papers, which are typically pdfs.", "label": "yes"})
    examples.append({"text": "Please verify that each paper you listed actually uses langchain.", "label": "no"})
    examples.append({"text": "Tell gpt the output should still be latex code.", "label": "no"})
    examples.append({"text": "Hint: convert pdfs to text and then answer questions based on them.", "label": "yes"})
    examples.append(
        {"text": "To create a good PPT, include enough content to make it interesting.", "label": "yes"}
    )
    examples.append(
        {"text": "No, for this case the columns should be aspects and the rows should be frameworks.", "label": "no"}
    )
    examples.append({"text": "When writing code, remember to include any libraries that are used.", "label": "yes"})
    examples.append({"text": "Please summarize the papers by Eric Horvitz on bounded rationality.", "label": "no"})
    examples.append({"text": "Compare the h-index of Daniel Weld and Oren Etzioni.", "label": "no"})
    examples.append(
        {"text": "Double check to be sure that the columns in a table correspond to what was asked for.", "label": "yes"}
    )
After:
    examples = [
        {"text": "When I say papers I mean research papers, which are typically pdfs.", "label": "yes"},
        {"text": "Please verify that each paper you listed actually uses langchain.", "label": "no"},
        {"text": "Tell gpt the output should still be latex code.", "label": "no"},
        {"text": "Hint: convert pdfs to text and then answer questions based on them.", "label": "yes"},
        {"text": "To create a good PPT, include enough content to make it interesting.", "label": "yes"},
        {"text": "No, for this case the columns should be aspects and the rows should be frameworks.", "label": "no"},
        {"text": "When writing code, remember to include any libraries that are used.", "label": "yes"},
        {"text": "Please summarize the papers by Eric Horvitz on bounded rationality.", "label": "no"},
        {"text": "Compare the h-index of Daniel Weld and Oren Etzioni.", "label": "no"},
        {"text": "Double check to be sure that the columns in a table correspond to what was asked for.", "label": "yes"},
    ]

Function MemoStore.prepopulate refactored with the following changes:

Before:
    output_text = oai.ChatCompletion.extract_text_or_function_call(response)[0]
    return output_text
After:
    return oai.ChatCompletion.extract_text_or_function_call(response)[0]

Function TextAnalyzerAgent.analyze_text refactored with the following changes:

Before:
    temperature_or_top_p = params.pop("temperature_or_top_p", None)
    if temperature_or_top_p:
After:
    if temperature_or_top_p := params.pop("temperature_or_top_p", None):

Function Completion._get_params_for_create refactored with the following changes:
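Same walrus pattern as in `send`, with one caveat worth noting: both the old and the new form test truthiness, so a popped value of `0` (a legal temperature) falls into the false branch exactly like `None` does. The refactor preserves that behavior; if zero must be honored, an explicit `is not None` check is needed instead. Standalone sketch:

```python
params = {"temperature_or_top_p": 0}

# Truthiness test (both the old and new forms): 0 is falsy, branch skipped.
took_branch = False
if temperature_or_top_p := params.pop("temperature_or_top_p", None):
    took_branch = True

# Explicit None check keeps 0 as a real value.
params = {"temperature_or_top_p": 0}
value = params.pop("temperature_or_top_p", None)
took_branch_explicit = value is not None
```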

Comment on lines -27 to +33
Before:
    teachable_agent = TeachableAgent(
        name="teachableagent",
        llm_config={"config_list": config_list, "request_timeout": 120, "use_cache": use_cache},
After:
    return TeachableAgent(
        name="teachableagent",
        llm_config={
            "config_list": config_list,
            "request_timeout": 120,
            "use_cache": use_cache,
        },

Function create_teachable_agent refactored with the following changes:

Comment on lines -38 to -44
Before:
    feeds_summary = "\n".join(
        [
            f"News summary: {f['title']}. {f['summary']} overall_sentiment_score: {f['overall_sentiment_score']}"
            for f in feeds
        ]
    )
    return feeds_summary
After:
    return "\n".join(
        [
            f"News summary: {f['title']}. {f['summary']} overall_sentiment_score: {f['overall_sentiment_score']}"
            for f in feeds
        ]
    )

Function get_market_news refactored with the following changes:

Comment on lines -42 to +48
Before:
    teachable_agent = TeachableAgent(
        name="teachableagent",
        llm_config={"config_list": config_list, "request_timeout": 120, "use_cache": use_cache},
After:
    return TeachableAgent(
        name="teachableagent",
        llm_config={
            "config_list": config_list,
            "request_timeout": 120,
            "use_cache": use_cache,
        },

Function create_teachable_agent refactored with the following changes:

Before: for pair in dists.keys():
After:  for pair in dists:

Function solve_tsp refactored with the following changes:
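Iterating a dict yields its keys directly, so `.keys()` is redundant in a `for` loop. Quick check with a toy distance table:

```python
dists = {("A", "B"): 3, ("B", "C"): 5}  # toy edge-distance map

via_keys = [pair for pair in dists.keys()]
direct = [pair for pair in dists]
# Same keys, same (insertion) order.
```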

This removes the following comments (why?):

# Calculate the cost of the current route

Before: max([len(x["solution"].split()) for x in tune_data]),
After:  max(len(x["solution"].split()) for x in tune_data),

Function test_math refactored with the following changes:
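Dropping the brackets turns the list comprehension into a generator expression, so `max()` consumes items lazily instead of allocating an intermediate list; the result is unchanged. Toy data in the same shape:

```python
tune_data = [{"solution": "x = 1"}, {"solution": "answer is forty two"}]

with_list = max([len(x["solution"].split()) for x in tune_data])
with_gen = max(len(x["solution"].split()) for x in tune_data)
```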

sweep-ai bot commented Nov 2, 2023

Apply Sweep Rules to your PR?

  • Apply: Leftover TODOs in the code should be handled.
  • Apply: All new business logic should have corresponding unit tests in the tests/ directory.
  • Apply: Any clearly inefficient or repeated code should be optimized or refactored.
