DOC: Explain how to continue the conversation with OllamaFunctions #14360
Comments
🤖 Hello @tonylampada, I'm Dosu, an AI bot here to assist you with your LangChain issues and queries. While we're waiting for a human maintainer, I'll do my best to help you out. Nice to meet you!

It seems like you're trying to continue a conversation with `OllamaFunctions` after it has called a function. In the LangChain codebase, `OllamaFunctions` asks the underlying model to answer in JSON and then parses that JSON to decide which tool to call and with which arguments. Here's the relevant part of the code from the OllamaFunctions class:

```python
response_message = self.llm.predict_messages(
    [system_message] + messages, stop=stop, callbacks=run_manager, **kwargs
)
chat_generation_content = response_message.content
if not isinstance(chat_generation_content, str):
    raise ValueError("OllamaFunctions does not support non-string output.")
try:
    parsed_chat_result = json.loads(chat_generation_content)
except json.JSONDecodeError:
    raise ValueError(
        f'"{self.llm.model}" did not respond with valid JSON. Please try again.'
    )
called_tool_name = parsed_chat_result["tool"]
called_tool_arguments = parsed_chat_result["tool_input"]
```

To continue the conversation, keep appending the new messages to the list of messages you pass to the model. For example:

```python
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

# Initialize the ChatOllama class
ollama = ChatOllama(model="llama2")

# Start the conversation
response = ollama._generate([
    HumanMessage(content="Tell me about the history of AI")
])

# Continue the conversation
response = ollama._generate([
    HumanMessage(content="Tell me about the history of AI"),  # previous message
    HumanMessage(content="What is machine learning?")         # new message
])
```

In this example, the conversation starts with the question "Tell me about the history of AI". To continue the conversation, you add the new message "What is machine learning?" to the list of messages passed in on the next call.

I hope this helps! If you have any other questions, feel free to ask.
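A minimal sketch of the same idea through the public `invoke()` API, which also carries the model's own reply forward in the history (the model name is assumed):

```python
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

ollama = ChatOllama(model="llama2")

history = [HumanMessage(content="Tell me about the history of AI")]
first_reply = ollama.invoke(history)  # returns an AIMessage

# Keep the AI's reply in the history, then add the follow-up question
history += [first_reply, HumanMessage(content="What is machine learning?")]
second_reply = ollama.invoke(history)  # the model now sees the full exchange
```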
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot you get points for effort but this answer still doesn't help fix the problem (getting the llm to use the result of a function call)
🤖 Based on the error message you provided, it seems like the function call isn't reaching the model in the shape `OllamaFunctions` expects. The error message you're seeing suggests that the function call or its arguments are not being parsed correctly. To resolve this issue, you need to ensure that the function call and its arguments are correctly formatted and that the function is included in the `functions` list you pass to `_generate`. For example:

```python
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather.",
        "parameters": {
            "type": "object",
            "properties": {
                "temperature": {
                    "type": "string",
                    "description": "The current temperature.",
                },
            },
            "required": ["temperature"],
        },
    },
    DEFAULT_RESPONSE_FUNCTION,
]

ollama_functions._generate(
    messages,
    functions=functions,
    function_call={
        "name": "get_current_weather",
        "arguments": json.dumps({"temperature": "25 celsius"}),
    },
)
```

In this example, a new function, `get_current_weather`, is defined and included in the `functions` list, and the `function_call` argument specifies which function to call and with which arguments. You can find more details about this in the OllamaFunctions class source code.

Additionally, there was a similar issue in the LangChain repository titled OpenAI function call is not a FunctionMessage type. The solution suggested there was to convert an `AIMessage` that carries a `function_call` into a `FunctionMessage`:

```python
def convert_aimessage_to_functionmessage(aimessage: AIMessage) -> FunctionMessage:
    function_call = aimessage.additional_kwargs.get("function_call")
    if function_call:
        # Ensure that the "name" field is provided in the function_call
        name = function_call.get("name")
        if name:
            return FunctionMessage(content=aimessage.content, name=name)
        else:
            raise ValueError("The 'name' field is required to create a FunctionMessage.")
    else:
        raise ValueError("The AIMessage does not contain a 'function_call' field.")
```

Please replace the "name" in the example with the name of your actual function.

I hope this helps! If you have any further questions, feel free to ask.
Encountering the same error as you @tonylampada, have you made any progress on this? The function result ends up being sent as a message of the form

```json
{
  "name": "the_tool",
  "content": "xxx"
}
```

but ollama only supports role & content parameters.
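One way around that mismatch, in the spirit of the workarounds below, is to restate the function result as a plain role/content message before sending it back to the model. A minimal sketch (the helper name is made up for illustration):

```python
from langchain_core.messages import FunctionMessage, HumanMessage

def function_result_as_human_message(fm: FunctionMessage) -> HumanMessage:
    # Ollama only understands role + content, so fold the tool name into the text
    return HumanMessage(content=f"Result of `{fm.name}`: {fm.content}")
```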
My workaround:

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.utils.function_calling import format_tool_to_openai_function
from langchain_experimental.llms.ollama_functions import OllamaFunctions, DEFAULT_RESPONSE_FUNCTION

model = OllamaFunctions().bind(
    functions=[
        format_tool_to_openai_function(DuckDuckGoSearchRun()),
        DEFAULT_RESPONSE_FUNCTION,
    ],
)

model.invoke([
    HumanMessage(content="what is the weather in Boston?"),
    AIMessage(
        content="",
        additional_kwargs={
            "function_call": {
                "name": "duckduckgo_search",
                "arguments": "{\"query\": \"weather in boston\"}"
            }
        },
    ),
    HumanMessage(content="Results of duckduckgo_search with query `weather in boston`: ... function result here ..."),
])
```
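For context, `DEFAULT_RESPONSE_FUNCTION` (from `langchain_experimental.llms.ollama_functions`) is a catch-all "just answer conversationally" function, which is what lets the model reply in plain text instead of always forcing a tool call. Roughly, and subject to change between versions, it looks like this:

```python
# Paraphrased from langchain_experimental.llms.ollama_functions; check your installed version
DEFAULT_RESPONSE_FUNCTION = {
    "name": "__conversational_response",
    "description": (
        "Respond conversationally if no other tools should be called for a given query."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "response": {
                "type": "string",
                "description": "Conversational response to the user.",
            },
        },
        "required": ["response"],
    },
}
```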
🤖 I see where the confusion is coming from. The issue you're encountering is due to the fact that the underlying Ollama API only accepts plain role/content messages, so there is no dedicated message type for a function result. To continue the conversation after a function call, you need to convert the function result into a `HumanMessage`, append it to the messages list, and invoke the model again:

```python
# Convert the function result into a HumanMessage
fm = HumanMessage(content=json.dumps({'temperature': '25 celsius'}))

# Append the function result to the messages list
messages.append(fm)

# Continue the conversation
aim = model.invoke(messages)
```

In this example, the function result is converted into a `HumanMessage` and appended to the conversation before the model is invoked again.

Please note that this is a workaround and the behavior might change in future versions of the library. The LangChain team is aware of this limitation and is working on a more robust solution. For now, you can use this workaround to continue the conversation after a function call.

I hope this helps! If you have any other questions, feel free to ask.
Following that workaround, the conversation hangs after sending the result to the model.
I have the same issue of hanging. Up until the 2nd request, it seems to work well (calls the function, etc.). Here is an example Jupyter notebook. This seems to sorta work, but it just answers with the function call again (I want it to parse it, like "the weather in Portland is ..."):

```python
def ask_and_run(question):
    print(f"USER: {question}")
    response = model.invoke([HumanMessage(content=question)])
    if response.content != '':
        print(f"AI: {response.content}")
    func = response.additional_kwargs['function_call']
    args = json.loads(func['arguments'])
    if func['name'] == 'get_current_weather':
        data = get_current_weather(args['location'], args['unit'])
        print(f"FUNCTION {func['name']}({args}): {json.dumps(data)}")
        return model.invoke([
            HumanMessage(content=question),
            AIMessage(content='', additional_kwargs=response.additional_kwargs),
            HumanMessage(content="FUNCTION `get_current_weather` returned: " + json.dumps(data))
        ])
```
This seems to sort of work OK (except for a disclaimer about it being a simulated response). Is this the right way to do it (use separate model instances)?

Ollama Langchain Function Calling.ipynb.txt

This pretty much works perfectly:

```python
chatter = Ollama(model=MODEL_NAME, base_url=OLLAMA_URL)
model = OllamaFunctions(model=MODEL_NAME, base_url=OLLAMA_URL)
# ... setup for functions on model

def ask_and_run(question):
    print(f"USER: {question}\n")
    response = model.invoke([HumanMessage(content=question)])
    if response.content != '':
        print(f"AI: {response.content}")
    func = response.additional_kwargs['function_call']
    args = json.loads(func['arguments'])
    if func['name'] == 'get_current_weather':
        data = get_current_weather(args['location'], args['unit'])
        print(f"FUNCTION {func['name']}({args}): {json.dumps(data)}\n")
        fresponse = chatter.invoke([
            HumanMessage(content=question),
            response,
            HumanMessage(content=f"Describe this weather report in plain English, as if you are a weatherman on the evening news, with no introduction text: {json.dumps(data)}")
        ])
        return f"AI: {fresponse}\n"
```
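A hypothetical call, assuming the `get_current_weather` helper and the function setup from earlier in the thread are in place:

```python
# Prints the USER/FUNCTION trace and returns the plain-English summary from `chatter`
print(ask_and_run("What is the weather in Portland, in celsius?"))
```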
Although it means there are 2 sessions now, right?
Yep. I copy the conversation from one into the other, and use the same model, to try to minimize this, but it does seem slow. I am pretty new to all this, so I am sure I am doing it in a silly way. It seems like it'd be better if I could "switch modes" on the same session. Maybe it's just a matter of changing the prompt-wrapping for each mode? I played a bit with calebfahlgren/natural-functions, which uses a different format for functions and prompts, but it did seem to context-switch OK (it would call functions, get the response from the user, and put it into plain English, using the conversation history). Its base model is pretty limited, though. Another thing that occurs to me is that it would maybe be better if I had a really tiny model that could just extract the function calls reliably (if there are any) and then feed the result to the other ("conversational") LLM. It would speed things up if I have to do it in two steps anyway.
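A rough sketch of that "one shared history, two model objects" idea (hypothetical code; the model name and the function/tool plumbing are assumed from earlier comments):

```python
from langchain_community.llms import Ollama
from langchain_core.messages import AIMessage, HumanMessage
from langchain_experimental.llms.ollama_functions import OllamaFunctions

router = OllamaFunctions(model="mistral")   # decides whether a function should be called
chatter = Ollama(model="mistral")           # phrases the final answer in plain text
# ... bind functions to `router` as shown earlier in the thread ...

history = []

def ask(question: str) -> str:
    history.append(HumanMessage(content=question))
    reply = router.invoke(history)
    history.append(reply)
    if "function_call" in reply.additional_kwargs:
        # Run the tool yourself, then hand the whole transcript to the plain model
        history.append(HumanMessage(content="(tool result goes here)"))
        answer = chatter.invoke(history)
        history.append(AIMessage(content=answer))
        return answer
    return reply.content
```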
Issue with current documentation:
https://python.langchain.com/docs/integrations/chat/ollama_functions
The Ollama functions page shows how to start a conversation where the LLM calls a function.
But it stops there.
How do we keep the conversation going? (I.e., I should give the LLM the answer to the function call, and then it should give a text reply with its explanation of the weather in Paris, right?)
Here's what I tried
Idea or request for content:
No response