Specific prompt adds false input variables #14343

Closed

aymeric-roucher opened this issue Dec 6, 2023 · 2 comments
Labels
Ɑ: agent Related to agents module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules

Comments

@aymeric-roucher
Contributor

aymeric-roucher commented Dec 6, 2023

System Info

python3.10.13
langchain==0.0.346
langchain-core==0.0.10

Who can help?

@agola11, @hwchase17

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction


from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.agents.format_scratchpad import format_to_openai_function_messages

text_with_call = """You are a helpful assistant. Here is a function call that you should not imitate: <functioncall> {"name":"generate_anagram", "arguments": {"word": "listen"}} 
    """

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            text_with_call,
        ),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm
    | OpenAIFunctionsAgentOutputParser()
)
output = agent.invoke(
    {
        "input": "Hello",
        "intermediate_steps": [],
    }
)

Expected behavior

I would expect not to get an error.

I get error: KeyError: 'Input to ChatPromptTemplate is missing variable \'"name"\'. Expected: [\'"name"\', \'agent_scratchpad\', \'input\'] Received: [\'input\', \'agent_scratchpad\']'

I think this error is caused by the f-string prompt template recognising the curly brackets inside the prompt as additional variables that need an input.
I tried using a Jinja2 template for my prompt instead, but I cannot set up this option in ChatPromptTemplate.
I understand this could be for security reasons, as mentioned in this issue.
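The variable extraction can be reproduced outside LangChain with plain Python's string.Formatter, which follows the same parsing rules as f-string-style templates (a minimal standalone sketch, not LangChain code):

```python
from string import Formatter

# The system prompt from the reproduction above, with its raw JSON braces
text_with_call = ('You are a helpful assistant. Here is a function call that you '
                  'should not imitate: <functioncall> {"name":"generate_anagram", '
                  '"arguments": {"word": "listen"}}')

# Collect every field name the f-string parser would treat as an input variable
fields = [name for _, name, _, _ in Formatter().parse(text_with_call)
          if name is not None]
print(fields)  # ['"name"'] -- the JSON key is mistaken for a template variable
```

This is exactly the spurious '"name"' variable that shows up in the KeyError above.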

So the possible solutions I see:

  • Use PromptTemplate like:
prompt = PromptTemplate.from_template(
    text_with_call, template_format="jinja2"
)

But I would like to make use of the agent_scratchpad, so the ChatPromptTemplate is needed in my case.

  • Change the ChatPromptTemplate class to support Jinja2 templates, which I understand could not be done
  • Re-implement a custom ChatPromptTemplate
  • Find another way to accept this prompt without falsely flagging prompt elements as input variables.

Do you have any ideas? Thanks for your help 😃


dosubot bot commented Dec 6, 2023

🤖

Hello @A-Roucher,

Nice to meet you! I'm Dosu, a bot with the ability to help you solve bugs, answer questions, and guide you through contributing to our project. I'm here to assist you while we wait for a human maintainer. Let's work together on this!

Based on the information you've provided and the similar issues I found in the LangChain repository, it seems like the problem is with the curly brackets {} in your prompt. The ChatPromptTemplate.from_messages method is interpreting the content within these brackets as input variables, which is causing the KeyError you're seeing.

There are a few ways you can address this issue:

  1. Escape the curly brackets: You can escape the curly brackets in your prompt by doubling them, using {{ instead of { and }} instead of }. This prevents the ChatPromptTemplate.from_messages method from interpreting the content within these brackets as input variables. Here's how you can modify your text_with_call:
text_with_call = """You are a helpful assistant. Here is a function call that you should not imitate: <functioncall> {{"name":"generate_anagram", "arguments": {{"word": "listen"}}}}
"""
  2. Use partial_variables parameter: If you need to use variables in your prompt but don't want them to be recognized as input variables, you can use the partial_variables parameter when creating a PromptTemplate. This parameter allows you to specify variables that should be partially filled in the template, effectively excluding them from being recognized as input variables. Here's an example:
from langchain_core.prompts import PromptTemplate

# Define the variables that should not be recognized as input variables
partial_variables = {
    "agent_scratchpad": "your_value_here"
}

# Create the PromptTemplate
prompt = PromptTemplate.from_template(
    template="Your template here",
    template_format="f-string",
    partial_variables=partial_variables
)

In this example, agent_scratchpad will not be recognized as an input variable, and will instead be filled with the value you specified in partial_variables.
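What partial_variables does can be mimicked in plain Python with two formatting passes: fill the known variable now, and keep the remaining placeholder intact by substituting it with itself (a rough analogue for illustration, not LangChain's implementation):

```python
template = "Scratchpad: {agent_scratchpad}\nUser: {input}"

# First pass: fill agent_scratchpad, and preserve {input} as a placeholder
# by substituting it with the literal string "{input}"
partially_filled = template.format(agent_scratchpad="(empty)", input="{input}")

# Second pass: the remaining variable is filled at call time
final = partially_filled.format(input="Hello")
print(final)  # Scratchpad: (empty)\nUser: Hello
```

After the first pass only {input} remains as a required variable, which is the effect partial_variables achieves inside PromptTemplate.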

  3. Create HumanMessage object directly: If you don't need to use the from_template method, you can create the HumanMessage object directly. This should bypass the issue with the curly brackets. Here's how you can do it:
from langchain.schema import HumanMessage

messages = [HumanMessage(content=text_with_call)]

I hope this helps! If you have any other questions or if the issue persists, please let me know.


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

@dosubot dosubot bot added Ɑ: agent Related to agents module Ɑ: models Related to LLMs or chat model modules 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Dec 6, 2023
@aymeric-roucher
Copy link
Contributor Author

Thanks a lot, dear bot, the first solution (escaping the curly brackets by doubling them) does the job!
